ChatGPT-Generated Fake News Spreads in China

A “press release” claiming that a local government in China had canceled traffic restrictions based on the last digit of license plate numbers went viral on February 16. After the original publisher was tracked down, the article was found to have been written by ChatGPT, a chatbot developed by OpenAI, sparking widespread concern about AI-generated fake news.

On the afternoon of February 16, ChatGPT came up in a WeChat group discussion, and one user joked that he would try to use it to write a press release announcing the cancellation of traffic restrictions in Hangzhou, Zhejiang Province. The user then shared ChatGPT’s writing process and sent the finished article to the group. Other members, unaware of the context, forwarded the article, spreading the false message. The user has since publicly apologized, and local police have opened an investigation.

Netizens commented on the incident: “The person who posted it is not at fault. He clearly said that ChatGPT wrote the article. The police should investigate those who forwarded it.” “Artificial intelligence is frightening. I am afraid that one day humans won’t be able to control it.” “There may be more AI-generated fake news in the future, testing everyone’s ability to tell truth from falsehood.”

In recent years, the rise of social platforms has allowed anyone to create content and publish it online. This freedom has repeatedly drawn criticism for the distortion and unreliability it introduces into public communication. Now that AI can produce content alongside human users, the information pool is even murkier. The creators and spreaders of rumors are sometimes acting in good faith: some mistakenly believe a rumor is true, while others pass it along privately without checking whether it is true or false. When enough people do either, a rumor can spread widely.


Previously, NewsGuard, a news credibility rating organization in the United States, found in a study that ChatGPT could become the most powerful tool yet for spreading online rumors. Researchers prompted ChatGPT with questions full of conspiracy theories and misleading narratives and found that it could adapt the information within seconds, producing large amounts of convincing content without citing sources. The organization said there is currently no effective way to solve this problem.

Mira Murati, chief technology officer of OpenAI, the developer of ChatGPT, said in an interview with Time Magazine that ChatGPT may “make up facts”, a limitation it shares with other artificial intelligence tools built on language models. Murati also stressed in the interview that AI tools like ChatGPT can be misused or exploited by bad actors, raising questions about how to regulate them on a global scale.

ChatGPT draws on a massive corpus of information, including large amounts of text supplied by web users themselves. As a result, when users enter personal data or trade secrets, ChatGPT may incorporate that information into its corpus, creating a risk of leakage.
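This leakage risk is one reason some organizations now scrub prompts before letting them leave the company network. The sketch below is a minimal, hypothetical Python illustration of that idea; the patterns, the `redact` helper, and the example prompt are assumptions made for this article, not part of any product it mentions.

```python
import re

# Hypothetical redaction patterns; a real deployment would use far more
# robust detection of personal data (ID numbers, addresses, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[- ]?\d{4}[- ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal data with placeholder tags
    before the text is sent to an external chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Draft a reply to Zhang Wei (zhang.wei@example.com, 138-1234-5678)."
    # Only the sanitized prompt would be forwarded to the chatbot.
    print(redact(prompt))
```

Running the sketch prints the prompt with the email address and phone number replaced by placeholder tags, so only the sanitized text would ever reach an outside service.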