The AP lays the groundwork for an AI-assisted newsroom

The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, issued a fairly restrictive, common-sense set of measures for the burgeoning tech while cautioning its staff not to use AI to create publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP’s blessing as a license to use generative AI excessively or underhandedly.

The organization’s AI manifesto underscores a belief that AI-generated content is the output of a flawed tool, not a replacement for trained writers, editors and reporters exercising their best judgment. “We do not see AI as a replacement of journalists in any way,” the AP’s Vice President for Standards and Inclusion, Amanda Barrett, wrote today in an article outlining the organization’s approach to AI. “It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share.”

The article directs its journalists to treat AI-generated content as “unvetted source material,” to which editorial staff “must apply their editorial judgment and AP’s sourcing standards when considering any information for publication.” It says employees may “experiment with ChatGPT with caution” but not create publishable content with it. That includes images, too. “In accordance with our standards, we do not alter any elements of our photos, video or audio,” it states. “Therefore, we do not allow the use of generative AI to add or subtract any elements.” However, it carved out an exception for stories in which AI-generated illustrations or art are themselves the subject, and even then, the material must be clearly labeled as such.

Barrett warns about AI’s potential for spreading misinformation. To prevent the accidental publication of anything AI-created that appears authentic, she says AP journalists “should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.” To protect privacy, the guidelines also prohibit writers from entering “confidential or sensitive information into AI tools.”

Although that’s a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. CNET was caught earlier this year publishing error-ridden, AI-generated financial explainer articles (labeled as computer-made only if you clicked on the article’s byline). Gizmodo found itself in a similar spotlight this summer when it ran a Star Wars article full of inaccuracies. It’s not hard to imagine other outlets, desperate for an edge in the highly competitive media landscape, viewing the AP’s (tightly restricted) AI use as a green light to make robot journalism a central fixture of their newsrooms, publishing poorly edited or inaccurate content, or failing to label AI-generated work as such.


