This AI tool is Smart Enough to Spot AI-generated Articles and Tweets

With advances in technology and easy access to paid and free tools for spinning or rewriting content, people can now use Artificial Intelligence (AI) to create fake news, fabricated reviews, and fictitious social media accounts, thereby spreading misinformation. However, an AI can now also detect fake written material!

The AI tool, named the Giant Language Model Test Room (GLTR), was developed through a collaboration between researchers and developers at Harvard University and the MIT-IBM Watson AI Lab, with the goal of educating people and raising awareness about generated text. By detecting statistical patterns, GLTR can help determine whether a text was written by a human or produced by an AI text-generation tool.

AI text-generating programs work by predicting which word is statistically most likely to come next. As a result, the words and grammar of spun content may be perfectly correct, yet the combination of words often fails to make sense.

What the GLTR system does is highlight each word according to how likely it was to follow the words before it. Green indicates the most predictable words, yellow and red mark progressively less predictable ones, and purple marks the least predictable. This means that when a text created by an AI language generation model is tested in the GLTR system, it tends to look mostly green. Text written by humans looks different because of the unique ways humans choose words and express meaning.
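The color-coding idea above can be sketched in a few lines of Python. This is a minimal illustration, not GLTR's actual implementation: the rank thresholds (top 10, top 100, top 1,000) mirror the buckets GLTR is reported to use, and the list of (word, rank) pairs is assumed to come from a separate language model that ranks each word's predictability.

```python
def color_for_rank(rank: int) -> str:
    """Map a word's predictability rank to a GLTR-style color bucket."""
    if rank <= 10:
        return "green"    # among the model's 10 most likely next words
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"       # very unlikely under the model


def highlight(ranked_tokens):
    """ranked_tokens: list of (word, rank) pairs from a language model."""
    return [(word, color_for_rank(rank)) for word, rank in ranked_tokens]


# A run of low ranks (mostly green) is the telltale sign of generated text.
sample = [("the", 1), ("cat", 8), ("sat", 3), ("obliquely", 4200)]
print(highlight(sample))
```

Reading the output, three of the four sample words fall in the "green" bucket, which is the pattern GLTR flags as characteristic of machine-generated text.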

The results of the research conducted by Harvard University and the MIT-IBM Watson AI Lab show that the GLTR tool helps non-experts detect fake content with up to 72% accuracy, even without prior training.

The GLTR system, however, can only spot fake text in individual cases and does not “automatically detect large-scale abuse” because of its limited scale. Moreover, the researchers acknowledge that future text generators may slip past GLTR's detection by modifying their sampling parameters to make the output look more human-like.

Despite this, efforts like these emphasize the need for people and AI to work together to address socio-technological issues, and they highlight both the benefits and the limitations of artificial intelligence. Go ahead and try the tool for yourself!

Takeaway points:

  • AI and natural language generation models are being used to spread fake news and misinformation.
  • An AI tool, called the Giant Language Model Test Room (GLTR), was developed to detect written content generated using predictable statistical patterns.
  • The system was made by researchers and developers at Harvard University and the MIT-IBM Watson AI Lab.

 

Jaclyn-Mae Floro, BCompSc

Contact W3IP Law on 1300 776 614 or 0451 951 528 for more information about any of our services or get in touch at law@w3iplaw.com.au.

Disclaimer. The material in this post represents general information only and should not be taken to be legal advice.
