Neverending Search

Thinking about credibility and about Turnitin’s SEER: The Source Educational Evaluation Rubric

I’ve not been a huge fan of listy/form-type evaluation tools. So much of the process of assessing credibility has to do with context.

Black and white decisions and rules of thumb are far more fuzzy in a read/write, citizen journalist, open scholarship, media-rich web.

Truth is, I often find value in casually published, unvetted content and I am sometimes disappointed in work that is vetted (and often delayed).

Nevertheless, I want my students to know how, why and when scholarly sources matter and how, why, and when socially networked content is a reliable and defendable option.

So, when students ask me, Is this a good source?, the answer I most often share is: It depends. Have you thought about it? Can you defend it in the context of its creation and your own information/research need?

How do we inspire our kiddos to think critically about their possible sources in this world of credibility confusion?

How do we get them to research energetically and thoughtfully in an environment focused more on right answers than authentic inquiry?

I’ve been inspired by David Warlick’s goals-based approach and by Howard Rheingold’s take on crap detection.

I have mixed feelings about EasyBib’s new extensions (two/too “easy buttons”?), largely because I want my young scholars to internalize criteria and make their own critical decisions.

I am currently exploring Turnitin’s recently released interactive Source Educational Evaluation Rubric (SEER).

Here’s the story behind the development of the tool:

The white paper, What’s Wrong with Wikipedia? Evaluating the Sources Used by Students, that preceded the development of the tool reviews findings from Pew’s How Teens Do Research in the Digital World and Turnitin’s own study of student research practices.

Though they do not share details about methodology, Turnitin analyzed more than 37 million high school and college student papers submitted between July 2011 and June 2012. They bucketed student source choices into six categories in an effort to determine how students make source decisions.

The categories that emerged were: Homework & Academic; Social Networking & Content Sharing; Paper Mills & Cheat Sites; Encyclopedias; News & Portals; and Shopping.

Researchers pointed to the following insights relating to student information seeking behavior:

• Students appear to value immediacy over quality in online research. The ease with which “the answer” may be found online pushes sites such as Wikipedia, homework help sites, answer sites, and other social and content-sharing sites to the top in terms of source matches.
• Students often use cheat sites and paper mills as sources. Less a research competency issue than a moral and ethical one, the significant number of sources matching cheat sites and paper mills suggests a bias towards immediate outcomes and results rather than towards concerted effort to meet assignment goals.
• There is an over-reliance on the “wisdom of the crowd.” Students appear to demonstrate a strong appetite for crowd-sourced content in their research. Though it is not immediately evident why students seek these sources out, the strong reliance on these types of sites indicates difficulty assessing the authority and legitimacy of the content these sources present.
• Student “research” is synonymous with “search.” The frequent and uninhibited use of sites with limited educational value (as defined by the quality and authority of content) in student work underscores a preference for “searched,” rather than “researched,” content.
• Existing student source choices warrant a need for better search skills. In addition to a preference for immediacy, the popularity of crowd-sourced content online indicates that a majority of students are engaging in cursory or shallow searches for content. At play may be an absence of awareness of how search engines work and how to effectively conduct searches to find appropriate content. What also appears to be absent is the use of criteria (whether internally or externally defined) to judge that content.

Based on the findings of their study, Turnitin worked with academic experts to develop the website evaluation rubric. The tool was field-tested by secondary and college educators.

SEER, the resulting interactive rubric, is designed to

analyze and grade the academic quality of Internet sources used by students in their writing.  Instructors and students who use SEER can quickly evaluate a website and arrive at a single score based on five criteria scaled to credibility: 

  • Authority: Is the site well regarded, cited, and written by experts in the field?
  • Educational Value: Does the site content help advance educational goals?
  • Intent: Is the site a well-respected source of content intended to inform users?
  • Originality: Is the site a source of original content and viewpoints?
  • Quality: Is the site highly vetted with good coverage of the topical area? 

Turnitin hopes the tool will inspire teacher/student discussions about credibility and emphasize the importance of quality sources in academic writing.

In the Webinar What’s Wrong with Wikipedia? Best Practices for Evaluating Student Sources, English professor and writing teacher Renee Bangerter and Turnitin’s Senior Education Manager, Jason Chu, shared that only 3% of students really care to get research right, likely because of the overemphasis on teaching to the test and student desperation to get the job done quickly without much critical thought. The general approach to search is to look for answers rather than engage in research. Students have no sense of bias and frequently read only the first two pages of a source. Bangerter and Chu feel we have to change the game.

I like the idea of using SEER with my high school students to initiate discussion about quality sources for academic writing. In terms of using it, I am glad it is adaptable: criteria may be tweaked and re-weighted. I like that it values Originality, recognizing the value of primary sources and viewpoints. (I wonder if the word Originality really captures that criterion for students.)
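Turnitin doesn’t publish SEER’s scoring formula, but a re-weightable rubric that rolls five criteria into a single score can be sketched as a simple weighted average. The criterion names below come from the rubric itself; the weights and the 1–5 rating scale are purely illustrative assumptions:

```python
# Hypothetical sketch of a re-weightable credibility rubric.
# Criterion names come from SEER; the weights and the 1-5 scale
# are illustrative assumptions, not Turnitin's actual formula.

CRITERIA_WEIGHTS = {
    "Authority": 0.25,
    "Educational Value": 0.20,
    "Intent": 0.20,
    "Originality": 0.15,
    "Quality": 0.20,
}

def rubric_score(ratings: dict[str, int]) -> float:
    """Weighted average of per-criterion ratings (each rated 1-5)."""
    if set(ratings) != set(CRITERIA_WEIGHTS):
        raise ValueError("Rate every criterion exactly once.")
    total = sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())
    return round(total, 2)  # a single score on the same 1-5 scale

# Example: a site strong on authority but weak on originality
score = rubric_score({
    "Authority": 5,
    "Educational Value": 4,
    "Intent": 4,
    "Originality": 2,
    "Quality": 4,
})
```

Re-weighting for a particular assignment is then just a matter of editing the weight table — which is exactly the kind of teacher/class tweaking the interactive rubric invites.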

I am a bit confused by the Educational Value criterion and by the Homework & Academic bucket, which I would have liked to see broken down a bit to distinguish among journals, ebooks, government and institutional portals, etc.

As for Educational Value, I am not sure that most of the content students seek to use for research is directly intended to meet instructional goals as described in the criterion. I’d have trouble addressing this as a student. I get the idea, but the wording doesn’t work for me. Maybe academic/scientific/social value? Not sure, but instructional value doesn’t feel right to me.

I’d like to see criteria relating to timeliness and relative value to the researcher’s question or thesis.  I’d like to add purposes other than to inform to the Intent criteria.

The white paper includes three examples of using SEER in practice, using The New York Times, eNotes (a community-based site offering literature study guides and lesson plans), and (a paper mill heavily used by Turnitin users).

I wish these examples could have gotten a little more granular, perhaps by using a specific news article or editorial from the NYT.

The white paper notes that The New York Times has an industry-leading reputation for accurate, timely, and unbiased reporting and that opinion-driven pieces are clearly demarcated as such. I am not sure I completely agree with this broad-stroke assessment. Is it possible that The New York Times may occasionally lean slightly left? More importantly, shouldn’t individual articles with individual authors be evaluated on their own merit? While it is true that solid decisions may be made based on publisher, I’d like students to make decisions based on authorship as well. The rubric criteria appear to focus more on publication site than on author.

I will absolutely give the new SEER rubric a try.

But my personal secret weapon to inspire students to consider credibility is the annotated works cited.

Annotated works cited sections require the demonstration of critical research and evaluation skills.

For elementary students, it can be as simple as:

  • Who wrote this?
  • Why did they write it?
  • How does it help me answer my question or make my argument?

For secondary students, it’s a bit more complicated.  Students demonstrate their critical thinking and research energy by addressing several of the following, depending upon the project and/or requirements of their teacher:

  • Author’s credentials
  • Primary, secondary, tertiary source?
  • Credibility of the publisher or site
  • Scope and purpose of the work: Is it an overview, persuasive, editorial?
  • Timeliness
  • Comparison of the work with others dealing with the same topic or others in your Works Cited list
  • Intended audience
  • Brief summary of contents
  • Evaluation of research: Is the work logical, clear, well-researched? Were the sources the author(s) used credible? Impressive?
  • Evaluation of author bias or lens
  • Relative value of the work to the research question or thesis
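For teachers who track these annotations digitally, the secondary-student checklist above could be modeled as a simple record. This is a hypothetical illustration, not part of any tool mentioned in the post; the field names simply mirror the bullets:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedSource:
    """One entry in an annotated works cited list (hypothetical sketch;
    field names mirror the secondary-student checklist above)."""
    citation: str                  # the formal works-cited entry
    author_credentials: str = ""
    source_type: str = ""          # primary, secondary, or tertiary?
    publisher_credibility: str = ""
    scope_and_purpose: str = ""    # overview, persuasive, editorial?
    timeliness: str = ""
    comparison: str = ""           # vs. other works on the same topic
    intended_audience: str = ""
    summary: str = ""              # brief summary of contents
    research_evaluation: str = ""  # logic, clarity, quality of sources
    bias_or_lens: str = ""
    relative_value: str = ""       # value to the question or thesis

    def completed_fields(self) -> list[str]:
        """Which checklist items the student actually addressed."""
        return [name for name, value in vars(self).items()
                if name != "citation" and value]

# A student addresses only the items their project requires:
entry = AnnotatedSource(
    citation="Doe, J. Example Work. 2013.",
    author_credentials="Professor of history at a research university",
    summary="Overview of the topic with primary-source excerpts",
)
```

Because students address only “several of the following, depending upon the project,” most fields default to empty, and `completed_fields` shows at a glance which criteria a given annotation engaged with.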

See also:

IMSA’s 21st Century Information Literacy Project Evaluating Guide

Kathy Schrock’s extensive list of Critical Evaluation of Information resources

Four tools for determining web cred

and this playlist of resources on credibility assessment.

About Joyce Valenza

Joyce is an Assistant Professor of Teaching at the Rutgers University School of Communication and Information, a technology writer, speaker, blogger, and learner. Follow her on Twitter: @joycevalenza


  1. james Mattiace says:

    Great ideas. Do you happen to know if there is a version of SEER that is Mac friendly? I used it with my students for their first assignment, but many are reporting that it won’t work on their Macs. I tried with mine and had similar results. I get the rubric with no check boxes, or can open it as a PDF but it won’t calculate.

    I also have issues with the Educational Value line. For a student that wouldn’t be on their radar. This is also a great piece

    Thanks for the post.



  1. […] May 25, 2013, H.A.G.: Love the post on Joyce Valenza’s Blog on criteria for evaluation. This is in my summer file of to-dos for the fall. […]
