[Note from Karyn: Usually when someone is kind enough to write a guest post, I labor over a worthy introduction. But true to her detail-oriented self—see post, below—Elizabeth wrote her own introduction. So I'll just say that we apologize for the delay between posts, but there was this thing known as ALA. We'll be back on track in the upcoming weeks with a writeup of the RealCommittee celebrations from Sophie, and then more on that pesky series issue. For now, please enjoy the amazing statistical guest post below. I love empirical data!]
Back in her April “Reading, Reading, Reading!” post, Karyn said, “Remember that any book with three or more stars [from the six major review journals] is automatically a contenda,” leading me to ask in the comments, “Is there an empirical rationale for considering 3-star books auto-contenders? Has the Printz (including honor books) statistically gone to books with multiple stars, or is this just a handy way of forming our reading list?” Anecdotally, it didn’t seem true. Last year, for example, Chime by Franny Billingsley earned six stars but no major awards, while Where Things Come Back by John Corey Whaley earned only one star yet took home both the Printz and the Morris.
In the comment section of that post and others, we all offered our theories about why stars and Printz awards might not match up, but I wanted to see exactly how much they didn’t match up. And so the lovely Jen Baker, who is equally fascinated by quantifying children’s literature, compiled a spreadsheet of the starred reviews that every Printz Winner and Honor book from all twelve years of the award earned (or didn’t) from the six journals. I enlisted the help of my economist husband to crunch the numbers and create the charts.
If you’re a numbers nerd, stick around for some fun statistics. If you’re not, feel free to skip straight to the conclusion at the bottom.