An Apple AI blunder messed up headline summaries so badly some want the feature pulled

The Star Online - Tech·2024-12-25 11:01

In a push notification to users last week, Apple’s AI news-summarising tech badly mangled a BBC News report, alleging that Luigi Mangione, the suspect in the killing of UnitedHealthcare’s CEO, had also shot himself. That simply wasn’t true: Mangione is in custody and has since been charged with murder as an “act of terrorism.” It seems Apple isn’t immune to the kind of false-information problems that crop up in many other generative AI systems, where the AI simply invents facts and passes them off as real.

Vincent Berthier, the technology and journalism desk chief for the journalism advocacy group Reporters Without Borders, called for Apple to “act responsibly by removing this feature,” CNN reported. Berthier summarised the issue in a neat soundbite: “A.I.s are probability machines, and facts can’t be decided by a roll of the dice.” More seriously, Reporters Without Borders said it is “very concerned” about the risk AI tech poses to news reporting, arguing that the tech is “too immature” to be relied on to convey accurate information to the general public. The BBC contacted Apple, saying in a statement that it was “essential” to the august news body that its “audiences can trust any information or journalism published in our name and that includes notifications.” Apple reportedly did not respond to requests for comment.

The news-summarising tech aligns with many of the time- and effort-saving promises of current-gen AI. As CNN notes, Apple has promoted the feature’s ability to distill specific content into “a digestible paragraph, bulleted key points, a table, or a list,” and it lets users group news notifications into a single push notification.

The tech giant is going all-in on AI, launching a splashy effort under the “Apple Intelligence” brand over the summer and releasing a clutch of new iPhone models built around AI in September. Mindful of the risks AI carries, including the fact that some AI systems train on users’ data in ways that may leak sensitive information, Apple has made privacy a key part of its AI push.

It has even arranged that when OpenAI’s ChatGPT is integrated into the iPhone operating system in the near future to answer more advanced user queries, the AI market leader won’t gain access to sensitive user data. It’s curious, then, that Apple’s news-summarising tech has such a large loophole, allowing it to essentially fabricate “news” in a way that a casual user of Apple’s devices – perhaps one who has grown comfortable trusting the tech giant’s “good behavior” stance – might accept as real information.

As AI advances into the new year, we’re certain to see more of this sort of trouble. AI-generated misinformation is an increasingly prominent concern – a report issued during the World Economic Forum meeting in Davos identified it as the world’s biggest short-term threat. Even OpenAI CEO Sam Altman, a figure you’d expect to be bullish about the promise of AI, has said he is worried about the threat of AI-generated misinformation, especially in an election year. Interestingly, as AI systems get smarter, the tech itself may prove useful in spotting and blocking the spread of misinformation – another WEF report, in June, highlighted that AI is good at “analyzing patterns, language and context to aid content moderation.”

All of this is another reminder – should you still need it – that if your company relies on AI in any way to generate emails, field customer queries, or publish content, then you need to check and double-check its output before it goes “live.” For now, there still needs to be a human in the AI publishing loop. – Inc./Tribune News Service
