
January 13, 2023

CNET’s AI-Generated Content: Ethical Implications and Potential Benefits

CNET, one of the most widely read technology news websites, has recently and quietly been using “automation technology,” or AI, to produce a new wave of financial explainer articles, seemingly starting around November of last year. According to the source, the articles are published under the unassuming byline of “CNET Money Staff” and cover topics like “Should You Break an Early CD for a Better Rate?” and “What Is Zelle and How Does It Work?”. The byline does not paint the full picture, so the average reader visiting the site would have no idea that what they’re reading is AI-generated. Only when one clicks on “CNET Money Staff” is the actual “authorship” revealed. “This article was generated using automation technology,” reads a dropdown description, “and thoroughly edited and fact-checked by an editor on our editorial staff.”

Since the program began, CNET has published around 73 AI-generated articles. That’s not a whole lot for a site of its size, and absent any official announcement, leadership appears to be keeping the experiment as low-key as possible. CNET did not respond to questions about the AI-generated articles. Because the content seems carefully optimized for search traffic, Google’s stance on AI-generated content will likely make or break the future of the program.

Though high-ranking Google official John Mueller said last year that AI-generated content with the primary aim of manipulating search rankings is against the company’s policies, a Google spokesperson clarified to the source that AI-generated content isn’t entirely verboten.

“Our goal for Search is to show helpful, relevant content that’s created for people, rather than to attain search engine rankings,” public liaison for Search Danny Sullivan said in a statement provided to the source. “Our ranking team focuses on the usefulness of content, rather than how the content is produced. This allows us to create solutions that aim to reduce all types of unhelpful content in Search, whether it’s produced by humans or through automated processes.”

Even the prestigious news agency The Associated Press has been using AI since 2015 to automatically write thousands of earnings reports. The AP has proudly proclaimed itself “one of the first news organizations to leverage artificial intelligence.” It is worth noting, however, that the AP’s auto-generated material appears to essentially fill in blanks in predetermined formats, whereas the more sophisticated verbiage of CNET’s publications suggests it is using something more akin to OpenAI’s GPT-3.

In conclusion, the use of AI-generated content by media outlets like CNET has the potential to revolutionize the way we consume and understand news. However, it is important to consider the ethical concerns this new approach raises, especially the potential for the spread of misinformation and propaganda. Media outlets have a responsibility to ensure that their use of AI-generated content is ethical and responsible, and to be transparent about it. Policymakers and regulators also need to consider the implications of AI-generated content and take steps to ensure it is used responsibly. Ultimately, the program’s future will depend on how search engine companies like Google treat AI-generated content.

Update:

Google’s Search Liaison has since responded to users’ queries, explaining that content generated by any method without the user in mind will be considered spam.