Before we return to our regularly scheduled abstract theorizing about literature (with Sarah and me weighing in on that standalone thing, as we keep promising to do), we’ve got an addendum to the numbers-loaded guest post from two weeks back.
In the comments on that post, which was full of fascinating data, a question came up about the correlation between stars and wins/honors. And so our valiant number crunchers tackled the question, as follows. (Have I mentioned how happy I am that we found some readers who can actually deal with data? You don’t want to know how many hours Sarah and I spent on last year’s Mock poll data, and I suspect we still made some data errors. Numbers are so very much not my strong point.)
And so, with no further ado, Predicting the Printz, Part 2: Another guest post by Elizabeth Fama (YA author) and John Cochrane (Professor of Economics), with heroic data collection by Jen Baker (Librarian).
In our previous guest post we looked at the relationship between Printz awards and starred reviews of the six largest review journals. We noticed that Printz Winners and Honors were most likely to have Booklist and SLJ (School Library Journal) starred reviews. One might be tempted to conclude that Booklist and SLJ stars are the best predictors of Printz awards. But we issued a warning that Booklist and SLJ may give more stars overall. If, as an extreme example, a journal gives every book a starred review, those stars are no longer useful predictors of a Printz.
In the comment section of that post, Jen Baker did a calculation that gave rough estimates for “percentage of reviews starred” for each journal. Thanks to this inspiration from Jen, and her gracious sharing of the data, John and I decided to answer the question: “What is the chance of getting a Printz, given that a book has a starred review from Journal X?”
This probability is equal to the chance of getting a Printz and having a starred review from X—which we tabulated before—divided by the chance of getting a starred review from X. Dividing by the number of starred reviews* corrects for the possibility that a journal gives out a lot of stars. To find the number of starred reviews, we went one step deeper in Jen’s data: we added up the starred reviews of books from both 2011 and 2012 (through April) for each journal, focusing only on titles we considered to be young-adult and eligible for a Printz award. If one journal gives lots of starred reviews to picture books, we didn’t want that to count against it as a predictor of Printz awards.
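The calculation can be sketched in a few lines of Python. The counts here are made up for an imaginary “Journal X” purely to show the formula; the real figures appear in the table below.

```python
# Hypothetical counts for an imaginary "Journal X" -- a sketch of the
# formula, not the real data.
stars_and_printz = 5    # books with both a Journal X star and a Printz
stars_total = 100       # YA-eligible starred reviews from Journal X

# P(Printz | star from X) = (number with star and Printz) / (number with star)
p_printz_given_star = stars_and_printz / stars_total
print(f"{p_printz_given_star:.1%}")   # prints 5.0%
```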
The first row of our new table below gives the number of starred reviews in 2011 and 2012 (through April). Kirkus, PW, and Booklist give the most starred YA reviews.
The second and third rows of the table are from our previous post: the number of Printz Winners and Honors that received starred reviews from each journal. Booklist, PW, and SLJ initially looked like better predictors than Horn and Kirkus, but those raw counts didn’t account for the fact that Booklist, PW, and SLJ give out many more starred reviews than Horn, and Kirkus gives out the most of all.
In the last two rows, we divide Winners and Honors by stars.** These are the numbers we’re looking for. For example, from the table, if a young-adult book gets a star from Booklist, it has a 0.80 percent chance of winning a Printz, and a 3.4 percent chance of getting a Printz Honor. To help digest the table, we also made the usual bar graph.
- Kirkus gave out 153 starred reviews. Of these books, 2 won, and 21 received honors. (They may have been unlucky to have only two Winners; one more Winner would have raised the probability of winning given a Kirkus star by 50%.)
- Only 3 Winners and 12 Honors received Horn stars. As you recall, Horn didn’t seem to fare well in our earlier comparison. But Horn only gave out fifteen starred YA reviews during our sample period, far fewer than the others, so a Horn YA star generates a remarkable 2% probability of winning and 8% probability of honor—far higher than at any other journal.
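The two bullet points above can be reproduced in a few lines, using only the counts given there (153 Kirkus stars with 2 Winners and 21 Honors; 15 Horn stars with 3 Winners and 12 Honors). Stars are multiplied by ten to scale 1.3 years of star data up to roughly thirteen years of awards, as the second Math Note explains.

```python
# Reproducing the Kirkus and Horn figures from the bullets above.
# Stars are multiplied by 10 to scale 1.3 years of star data up to
# roughly 13 years of awards.
journals = {
    #          stars, winners, honors
    "Kirkus": (153, 2, 21),
    "Horn":   (15, 3, 12),
}
for name, (stars, winners, honors) in journals.items():
    scaled_stars = stars * 10
    p_win = winners / scaled_stars
    p_honor = honors / scaled_stars
    print(f"{name}: P(win | star) = {p_win:.1%}, P(honor | star) = {p_honor:.1%}")
# prints:
# Kirkus: P(win | star) = 0.1%, P(honor | star) = 1.4%
# Horn: P(win | star) = 2.0%, P(honor | star) = 8.0%
```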
Our usual caveats apply: First, the numbers are low, especially for Kirkus, whose stars only included two winners. Second, this analysis is not meant to imply that one journal’s stars are “better” than another’s. We’re only asking how useful their stars are for predicting Printz awards. We assume the objective of the journals is not to predict Printz awards, but that their stars are tailored to their separate audiences.
Math Notes (please feel free to skip):
*Very observant readers will see that the number we used as the divisor isn’t the probability of getting a starred review from X, but is just the straight number of starred reviews given by X. It works for the following reason: The probability B of getting a starred review = number of starred reviews/total number of books. The probability A of getting a starred review and a Printz is the number of (starred review and Printz) / total number of books. Our number is A / B, so the total number of books cancels from top and bottom and you don’t need it.
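For the skeptical, the cancellation in this note is easy to check numerically. The counts below are made up for illustration; the point is only that the ratio A / B comes out the same no matter what total number of books you assume.

```python
# Checking that the total number of books cancels: compute
# P(star and Printz) / P(star) for several (made-up) totals N and
# compare with the simple ratio of raw counts.
stars_and_printz = 3     # hypothetical count
stars_total = 150        # hypothetical count
simple_ratio = stars_and_printz / stars_total

for n_books in (500, 2000, 10000):
    a = stars_and_printz / n_books   # probability A: star and Printz
    b = stars_total / n_books        # probability B: star
    assert abs(a / b - simple_ratio) < 1e-12
print("A / B equals the raw ratio for every N:", simple_ratio)  # prints 0.02
```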
**We multiply stars by ten: there are thirteen years of awards but only 1.3 years of stars data, so multiplying by ten approximates the number of stars each journal gives in thirteen years, and therefore gives a better estimate of the actual probability of winning a Printz given that a qualifying book has a starred review.