A team from the University of California, Santa Cruz, has developed software that flags up questionable lines in Wikipedia entries:
By mining Wikipedia's open edit histories, the software measures how much of any given contributor's work survives subsequent edits by other people. In general, the less tinkering your work on Wikipedia engenders, the more trustworthy you are deemed to be. It's a kind of consensus audit.
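The survival idea can be sketched in a few lines of code. This is a toy illustration of the general principle, not the UCSC team's actual algorithm: it assumes word-level diffing, and the function names (`surviving_fraction`, `reputation`) are mine, chosen for clarity.

```python
from difflib import SequenceMatcher

def surviving_fraction(contribution: str, later_revision: str) -> float:
    """Fraction of a contributor's words still present, in order,
    in a later revision (via longest-matching-block comparison)."""
    a = contribution.split()
    b = later_revision.split()
    if not a:
        return 1.0
    matcher = SequenceMatcher(None, a, b)
    kept = sum(block.size for block in matcher.get_matching_blocks())
    return kept / len(a)

def reputation(contributions: list[str], current_revision: str) -> float:
    """Average survival of an author's contributions: the closer to 1.0,
    the less their work has been tinkered with by later editors."""
    if not contributions:
        return 0.0
    return sum(surviving_fraction(c, current_revision)
               for c in contributions) / len(contributions)
```

A contribution that later editors leave untouched scores 1.0; one they rewrite or delete scores near 0.0, which is the sense in which low scores flag "questionable" text.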
Obviously this is indicative rather than authoritative. One Wikipedia entry I edited repeatedly, because someone kept reverting it to false information, was eventually locked at my suggestion by the editors. In other words, although I could show good, evidenced reasons for my concerns, good enough for action to be taken, that edit history would still be flagged as suspect by such a tool.
But it is an example of how innovative tools can deepen the analysis of entries, letting us learn things about them that are not accessible in other types of reference media. Detractors of the project might reflect that, as it becomes increasingly sophisticated, it will also become increasingly reliable.