[Note from Karyn: Usually when someone is kind enough to write a guest post, I labor over a worthy introduction. But true to her detail-oriented self—see post, below—Elizabeth wrote her own introduction. So I'll just say that we apologize for the delay between posts, but there was this thing known as ALA. We'll be back on track in the upcoming weeks with a writeup of the RealCommittee celebrations from Sophie, and then more on that pesky series issue. For now, please enjoy the amazing statistical guest post below. I love empirical data!]
Back in her April “Reading, Reading, Reading!” post, Karyn said, “Remember that any book with three or more stars [from the six major review journals] is automatically a contenda,” leading me to ask in the comments, “Is there an empirical rationale for considering 3-star books auto-contenders? Has the Printz (including honor books) statistically gone to books with multiple stars, or is this just a handy way of forming our reading list?” Anecdotally it didn’t seem true. Last year, for example, Chime by Franny Billingsley earned six stars but no major awards, and Where Things Come Back by John Corey Whaley earned only one star, but took home both the Printz and the Morris.
In the comment section of that post and others, we all offered our views of why stars and Printz awards might not match up, but I wanted to see exactly how much they didn’t match up. And so the lovely Jen Baker, who is equally fascinated by quantifying children’s literature, compiled a spreadsheet with the starred reviews that all thirteen years of Printz Winners and Honors earned (or didn’t) from the six journals. I enlisted the help of my economist husband to crunch the numbers and create the charts.
If you’re a numbers nerd, stick around for some fun statistics. If you’re not, skip straight to the conclusion at the bottom.
The Data: How many starred reviews each Printz Winner and Honor received, from 2000 to 2012, from each of the six major review journals: Booklist, The Bulletin of the Center for Children’s Books (Bulletin), Horn Book (Horn), Kirkus, Publishers Weekly (PW), and School Library Journal (SLJ).
How Many Stars Do Printz Books Typically Win?
Our first chart shows the percentage of Printz Winners and Honors that received one, two, three, four, five, or six stars. Clearly, a contenda doesn’t have to have three stars. Yes, six Winners did get three stars, and one (Postcards from No Man’s Land) got five. But six Winners also got only one or two stars.
Similarly, the Honor books are about as likely to get more than three stars as they are to get fewer than three stars. Two Honor books received zero stars (The Earth, My Butt, and Other Big Round Things and Repossessed), but so far no book has won the gold without garnering at least one star.
In nine out of thirteen years, at least one of the Honor books has received more stars than the Winner. Perhaps the most extreme case is Why We Broke Up, which earned six stars but received an Honor, while Where Things Come Back won with one star. (See also Octavian Nothing’s six stars compared with Jellicoe Road’s two.)
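If you’d like to play with this kind of tally yourself, here’s a minimal Python sketch. The list of star counts below is invented for illustration; it is not the real spreadsheet data (which lives on Elizabeth’s blog):

```python
from collections import Counter

# Hypothetical stand-in for the real spreadsheet: each entry is the
# number of starred reviews (0 through 6) one award book received.
star_counts = [1, 2, 3, 3, 5, 6, 0, 2, 3, 4]  # illustrative only

tally = Counter(star_counts)
total = len(star_counts)
for stars in range(7):
    share = tally.get(stars, 0) / total
    print(f"{stars} stars: {share:.0%} of books")
```

Feed in the real star counts for Winners and Honors separately and you get exactly the percentages our first chart plots.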
Are Some Journals’ Stars More Informative Than Others?
First we want to clarify that this question only refers to predicting the Printz awards, not to the inherent value or “correctness” of each journal’s star choices. We assume that the journals have their own audiences, and different objectives in making their recommendations, and that their stars are probably not there to predict Printz awards.
That said, how well can we use the journals to forecast Printzes? The next plot shows which journals gave stars to the Winners and Honors:
The graph suggests that stars from Booklist signal a future Printz award the most, with Bulletin and Horn the least, and Kirkus, PW, and SLJ in between. Kirkus stars are poorly correlated with Winners, but do fairly well with Honors. SLJ does as well as Booklist in predicting Honors.
We say “suggest” because there are some caveats here. The first is: beware of averages of small numbers. The little blue brackets next to each bar are the standard errors. They’re a good guess of the band of uncertainty in these numbers. The Winner statistics are a lot less certain than the Honor statistics, because there are so few Winners.
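For the curious, one standard way to compute the standard error of a percentage estimated from n books is the square root of p(1 − p)/n. We can’t promise that’s exactly the formula behind the little blue brackets, but it shows why 13 Winners give a much wider uncertainty band than 49 Honor books:

```python
import math

def se_proportion(p, n):
    """Standard error of a sample proportion p estimated from n observations."""
    return math.sqrt(p * (1 - p) / n)

# A 50% estimate based on the 13 Winners carries a wide band...
print(round(se_proportion(0.5, 13), 3))  # 0.139
# ...while the same estimate from the 49 Honor books is much tighter.
print(round(se_proportion(0.5, 49), 3))  # 0.071
```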
The second caveat is that we only know the number of Winners and Honors that got a starred review from, say, Booklist. We don’t know the converse: the percentage of total Booklist starred reviews that won a Printz. (Ideally, we’d like to have data on how many starred reviews each journal gave each year, including books that didn’t win awards.) As an extreme example, if Booklist gives every book a star, then all winners would have a star but the star would not be a useful forecast of winners.
To infer from our graph that a Booklist star is a better predictor of a Printz than a Bulletin star, you have to assume that Booklist and Bulletin review about the same number of books each year, and each journal gives out roughly the same number of stars. Now, that may even be approximately true, but we don’t have the data, and it’s an important qualification.
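To see why the missing denominator matters, here’s a toy calculation with entirely invented numbers. Even if every single Winner carried a journal’s star, the star itself would be a terrible forecast if that journal handed out stars freely:

```python
# All numbers here are hypothetical, purely to illustrate the caveat.
winners_with_star = 13       # suppose all 13 Winners got this journal's star
stars_per_year = 100         # hypothetical: the journal stars 100 books/year
years = 13
total_stars = stars_per_year * years

# Looking backward from the Winners, the star looks perfect...
p_star_given_winner = winners_with_star / 13           # = 1.0
# ...but looking forward from the stars, it predicts almost nothing.
p_winner_given_star = winners_with_star / total_stars  # = 0.01
print(p_star_given_winner, p_winner_given_star)
```

Our data only lets us compute the first number; the second is the one you’d want for forecasting.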
How Much Agreement or Disagreement Is There Between the Journals?
The top left plot answers the question, “Suppose Booklist gives a star review to a Printz-award book. How often do the other journals also star-review that book?” The answer is remarkably low. Bulletin and Horn only star such a book about 15% and 25% of the time, respectively. Kirkus, PW, and SLJ are more likely to agree, but still only about half the time.
The reverse question gives a different answer (top right). If Bulletin gives a starred review, Booklist is likely to agree about 50% of the time, as is everybody else. Similarly, a Horn Book star (middle left) gives a 60% chance of a Booklist star, and about a 50% chance for everyone else.
SLJ is another interesting case (bottom right). Booklist starred the same Printz title that SLJ starred 60% of the time, but the others (and especially Bulletin and Horn) did not. Yet SLJ agreed with Bulletin’s and Horn’s picks.
The simple interpretation is that Booklist and SLJ are less choosy. However, that’s too superficial. They may be starring books for specific audiences that Bulletin and Horn do not share.
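The asymmetry above (Booklist agreeing with Bulletin far less often than Bulletin agrees with Booklist) falls right out of the arithmetic once you have a table of who starred what. A quick sketch, using an invented four-book table rather than the real data:

```python
# Toy star table (invented): book title -> set of journals that starred it.
stars = {
    "Book A": {"Booklist", "SLJ"},
    "Book B": {"Booklist"},
    "Book C": {"Booklist", "Bulletin", "SLJ"},
    "Book D": {"Booklist", "Bulletin"},
}

def agreement(a, b, table):
    """Of the books journal a starred, what fraction did journal b also star?"""
    a_books = [journals for journals in table.values() if a in journals]
    if not a_books:
        return 0.0
    return sum(b in journals for journals in a_books) / len(a_books)

print(agreement("Booklist", "Bulletin", stars))  # 0.5  (2 of 4)
print(agreement("Bulletin", "Booklist", stars))  # 1.0  (2 of 2)
```

The same pair of journals produces two very different numbers depending on which direction you ask the question, which is exactly why the top-left and top-right panels disagree.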
What Genres Win the Printz?
The next chart is unrelated to stars, but breaks down the Printz awards by genre. The categories, shortened a bit in the graphs, are Realistic or Contemporary Fiction (9.5 winners, 25 honors), Fantasy (2, 11), Memoir or Biography (1, 5), Historical Fiction (0.5, 7), and Poetry/Non-Fiction (0, 1). Note that classifying books is a slippery business; for instance, sometimes realistic fiction has a hint of fantasy (Kit’s Wilderness), or a fantasy is set in a historical period (Monstrumologist), or realistic fiction is told through poems (Keesha’s House), etc. If you’d like to see what genre we assigned to each book, that data is posted on Elizabeth’s blog.
Clearly, the vast majority of Printz awards go to realistic or contemporary fiction, with the other categories picking up the pieces in order.
Another caveat, before you give up your Printz hopes because you write poetry: we don’t have the denominators of these fractions. We don’t know how many books overall are published in each category. For example, if there are 3,000 realistic fiction books per year, and only a handful of poetry, then an individual poetry book may actually have a better chance of winning a Printz.
We can, however, tell that categories other than realistic or contemporary fiction have a better chance of winning an honor than the gold: the blue bar is higher than the maroon for realistic/contemporary fiction, but the opposite pattern holds for the other categories. This comparison is valid even though we don’t see all books published. Thus, it seems the Printz committees honor these genres, but more often save the “win” for realistic or contemporary fiction.
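One way to express that within-genre comparison, using the counts from our own chart above (so these numbers are the post’s real tallies, not invented), is to ask what share of each genre’s total awards were wins rather than honors:

```python
# Genre counts from the post: (winners, honors)
genres = {
    "Realistic/Contemporary": (9.5, 25),
    "Fantasy": (2, 11),
    "Memoir/Biography": (1, 5),
    "Historical Fiction": (0.5, 7),
    "Poetry/Non-Fiction": (0, 1),
}

for name, (wins, honors) in genres.items():
    win_share = wins / (wins + honors)
    print(f"{name}: {win_share:.0%} of its awards were wins")
```

Realistic/contemporary fiction converts its awards into wins at the highest rate, which is the same pattern the bars show, and this ratio doesn’t depend on knowing how many books were published in each genre.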
The too-long-didn’t-read (or I-hate-math) bottom line:
1. Printz Winners and Honors are about as likely to have fewer than three stars as to have more than three stars from the six major review journals.
2. Booklist’s stars seem more likely to predict a Printz Winner, and Booklist and School Library Journal are the most likely to predict an Honor book. The main warning about this conclusion is that Booklist and SLJ may give more stars overall.
3. Looking only at Printz Winners and Honors, the journals agree with each other’s stars about 50% of the time. Booklist and SLJ give out more stars to books their colleagues overlook.
4. The category of realistic or contemporary fiction brings home the most awards, with fantasy a distant second. Books in genres other than realistic or contemporary fiction are more likely to receive an honor than the gold.