Abstract
Investments in artificial intelligence (AI) have spurred the development of online fact-checking tools designed to assess the accuracy and truthfulness of common questions and claims, positioning them as potentially more accurate alternatives to public search engines and chatbots. This study analyzed the efficacy of online AI tools in producing accurate readings in response to claims debunked by a consensus of independent fact-checking organizations, recorded key distinctions among the tools, and offered recommendations for future analysis of AI fact-checking efficacy. The four AI tools selected for this study were ClaimBuster, Full Fact, The Factual's IsThisCredible?, and Google's Fact Check Explorer. Ten claims were entered into each of the four tools, producing forty individual fact-check reports, which were recorded to reflect each tool's efficacy in producing an accurate reading. Notes were also recorded to describe nuances and key differences for each tool. The selected AI tools achieved a 100% efficacy rating in producing an overall accurate result debunking the inputted claims, and 89% of the fact-check reports were unanimous in determining a claim to be false, misleading, or unsupported. The Factual's IsThisCredible? featured a "Moderate-Right" or "Right" politically leaning source as its "alternate viewpoint" in 90% of its reports. This study supports the notion that AI can play an effective role in aiding truth-seeking in political communication, with the caveat that its determinations and accuracy depend on referencing a consensus view of independent human fact-checkers.
Presenters
Russell Hartley, Student, M.S. Communication, B.A. Health Communication, Grand Valley State University, Michigan, United States
KEYWORDS
Artificial Intelligence, Fact-Checking, Political Communication, Journalism