Google Is Using an AI Model for Fact Checking in Stories

  • BERT-based language models can understand more complex, natural-language queries
  • Google has more than 10,000 search quality raters
  • “If our systems fail to prevent policy-violating content from appearing, our enforcement team will take action”

Google has announced that it is using BERT, one of its language AI models, in Full Coverage news stories to better match stories with fact checks and to better understand which results are most relevant to the queries posed on Search.

More advanced AI-based systems, such as BERT's language capabilities, can understand more complex, natural-language queries. However, when it comes to high-quality, trustworthy information, even with these advanced capabilities, Google does not understand content the way humans do.

Preventing Policy-Violating Content

Instead, search engines largely understand the quality of content through what are commonly called “signals”. “For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic,” said Danny Sullivan, Public Liaison for Google Search. “If our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies.”

The company has also made it easy to spot fact checks in Search, News and Google Images by displaying fact check labels. These labels come from publishers that use ClaimReview schema to mark up the fact checks they have published.
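To illustrate, here is a minimal sketch of what ClaimReview structured data (defined at schema.org) can look like. The publisher, URLs and claim text below are invented placeholders, not taken from any real fact check; the field names follow the schema.org ClaimReview vocabulary.

```python
import json

# Illustrative ClaimReview markup for a fact check page.
# All names, URLs and the claim itself are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/sample",  # page hosting the fact check
    "claimReviewed": "A sample claim being checked",  # the claim under review
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example Claim Source"},
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # human-readable verdict shown in the label
    },
}

# Publishers typically embed this object as JSON-LD in a <script> tag on the page.
json_ld = json.dumps(claim_review, indent=2)
print(json_ld)
```

When Google's systems crawl a page carrying markup like this, the verdict and fact checker name can be surfaced as a fact check label in Search, News and Images results.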

Google has more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results.