News of University of California Santa Cruz computer scientist Luca de Alfaro's Wikipedia trust-coloring system
revived - and improved - an idea I've been playing with: automated reputation-management for politicians. The idea is to make the concept of honor meaningful again, by creating new social rewards and penalties for behavior that affects the rest of us. (It could, of course, also be applied to journalists, corporate leaders or other public figures.)
Very interesting idea here, but it hinges on the feasibility of step two: "software would check to see if each name appeared in the context of a correction of an untruth/exaggeration/'misstatement'". This would be a complex natural language processing problem, especially given that Factcheck.org articles are careful to parse out exactly what part of a statement is true or untrue, creating degrees of truth. For example, this article about Giuliani's claim to have "cut or eliminated 23 taxes" gives him credit for eliminating 15 taxes and for supporting the other 8.
The article also has several updates from the campaign staff (who encouragingly are at least paying attention to Factcheck), where they quibble over whether he can take credit for supporting tax cuts, etc.
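To see why that step-two check is hard, consider the crudest possible version: keyword matching near a name. Here's a minimal sketch (the cue words and context window are my assumptions, not anything Factcheck.org publishes); it flags a mention, but has no way to represent the "15 true, 8 partial" degrees of truth in an article like the Giuliani one:

```python
import re

# Hypothetical correction-cue words; real Factcheck.org prose is far more
# nuanced, which is exactly why this naive filter misses degrees of truth.
CORRECTION_CUES = ("falsely", "misleading", "exaggerat", "misstat", "untrue")

def appears_in_correction_context(name, text, window=120):
    """Crudely check whether `name` occurs near a correction cue word."""
    text_lower = text.lower()
    for match in re.finditer(re.escape(name.lower()), text_lower):
        start = max(0, match.start() - window)
        snippet = text_lower[start:match.end() + window]
        if any(cue in snippet for cue in CORRECTION_CUES):
            return True
    return False
```

A filter like this returns a bare yes/no, so an article crediting a candidate for 15 of 23 claims would count the same as one debunking all 23.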
Maybe an easier way to do this would be to mash up Factcheck.org (assuming they don't already have public access to their 'database') and create (assuming human labor here) "untrustworthy incident reports" every time a misstatement is corrected by Factcheck.org. This would create a database that could easily be queried or exported in XML formats for other applications. It would also provide a transparent process for users to see how a candidate got the score they did, and possibly to challenge the compilers if they misinterpreted Factcheck.org's findings.
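As a sketch of what one such "untrustworthy incident report" might look like as exportable XML, here's a minimal record builder. The field names, example values, and URL are all hypothetical; Factcheck.org defines no such schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical record layout for one "untrustworthy incident report";
# the field names are illustrative, not an actual Factcheck.org schema.
def make_incident_report(person, date, summary, source_url, severity):
    report = ET.Element("incident")
    ET.SubElement(report, "person").text = person
    ET.SubElement(report, "date").text = date
    ET.SubElement(report, "summary").text = summary
    ET.SubElement(report, "source").text = source_url
    ET.SubElement(report, "severity").text = str(severity)
    return report

report = make_incident_report(
    person="Example Candidate",               # hypothetical person
    date="2007-11-01",
    summary="Claimed credit for 23 tax cuts; 8 were supported, not enacted.",
    source_url="http://example.org/article",  # placeholder URL
    severity=2,
)
xml_str = ET.tostring(report, encoding="unicode")
```

Keeping the source URL in each record is what makes the process transparent: anyone can follow it back to the original Factcheck.org article and challenge the compiler's interpretation.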
Great comments, Greg, thanks. Could be that the daily volume on factcheck.org is low enough that your mashup idea is quite feasible for a third party to do. And/or I wonder if either of the following might work as shortcuts:
- Collaborate with factcheck.org so that when they prepare each new item, they assign an "incident-seriousness" score to the person mentioned. That way we get the benefits of human judgment while adding very little new overhead.
- Or have software do a relatively crude check to confirm that the item is a correction of some sort, and score people based on the number of times they appear in corrections. I'd guess this could be about as accurate as a Google search, and that the cumulative score would grow more accurate with the number of incidents. Also, having the scale be relative rather than absolute would provide some protection against mistakes, since having a less-than-perfect record would not prevent someone from winning "green" status. They'd just need to have a better-than-average score.
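That relative-scale idea could be sketched as follows; the candidate names and incident counts are made up, and the green/red cutoff at the mean is just one possible choice:

```python
# Sketch of relative rather than absolute scoring: "green" status goes to
# anyone with a better-than-average correction count, so a less-than-perfect
# record doesn't disqualify a candidate. The counts below are made up.
def relative_status(incident_counts):
    """Label each person green/red relative to the mean incident count."""
    mean = sum(incident_counts.values()) / len(incident_counts)
    return {person: ("green" if count < mean else "red")
            for person, count in incident_counts.items()}

counts = {"Candidate A": 2, "Candidate B": 9, "Candidate C": 4}
statuses = relative_status(counts)  # A and C beat the mean of 5.0
```

A nice property of the relative scale is the one noted above: a single false positive from the crude correction detector shifts everyone's raw counts a little but is unlikely to flip who ends up better than average.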