Apple will label AI-generated images in their metadata


Apple’s new artificial intelligence features, called Apple Intelligence, are designed to help you create new emoji, edit photos, and generate images from a simple text prompt or an uploaded photo. We now know that Apple Intelligence will also add metadata to each image, helping people recognize that it was created with AI.

In a recent appearance on blogger John Gruber’s podcast, Apple executives explained how the company’s teams wanted to ensure transparency even in simple photo edits, such as removing an object from the background.

“We make sure to mark the metadata of the generated image to indicate that it has been altered,” said Craig Federighi, Apple’s senior vice president of software engineering. He added that Apple does not intend to create technology that generates realistic images of people or places.

With its commitment to adding this information to images touched up by its AI, Apple joins a growing list of companies attempting to help people identify when images have been created or altered by AI. TikTok, OpenAI, Microsoft and Adobe have all begun adding a type of digital watermark to help identify content that has been created or manipulated by AI.
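One common convention for this kind of labeling is the IPTC DigitalSourceType field, which tools can embed in an image’s XMP metadata with values such as trainedAlgorithmicMedia for fully AI-generated content. Apple has not published the exact metadata it writes, so as a rough, generic sketch of the idea, a naive checker might simply scan a file’s raw bytes for those well-known tokens:

```python
from pathlib import Path

# IPTC DigitalSourceType tokens commonly used to mark AI involvement.
# (The exact fields Apple writes are not public; this is a generic
# sketch of the IPTC/XMP convention other vendors have adopted.)
AI_SOURCE_TOKENS = (
    b"trainedAlgorithmicMedia",               # fully AI-generated
    b"compositeWithTrainedAlgorithmicMedia",  # AI-edited composite
)

def looks_ai_labeled(path: str) -> bool:
    """Naively scan a file's raw bytes for IPTC DigitalSourceType
    tokens, which XMP-aware tools embed as plain text."""
    data = Path(path).read_bytes()
    return any(token in data for token in AI_SOURCE_TOKENS)
```

A real verifier would parse the XMP packet properly (and, for cryptographically signed provenance like C2PA, validate the embedded manifest), but a byte scan illustrates how such labels travel inside the file itself.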


Media and information experts warn that despite these efforts, the problem is likely to get worse, especially ahead of the contentious 2024 US presidential election. A new term, “slop,” has become increasingly popular to describe the flood of low-quality content, lies and misinformation created by AI.

Artificial intelligence tools for creating text, video and audio have become much easier to use, allowing people to do all kinds of things without needing any technical knowledge. (Check out CNET’s hands-on reviews of AI image-generating tools like Google’s ImageFX, Adobe Firefly and OpenAI’s Dall-E 3, plus more AI tips, explanations and news on our AI Atlas resource page.)

Read more: How close is this picture to the truth? What to know in the age of AI

At the same time, AI content has become much more prevalent. Some of the biggest companies in tech have started adding AI technology to the apps we use daily, though the results have certainly been mixed. One of the most high-profile missteps came from Google, whose AI Overview summaries attached to search results began inserting incorrect and potentially dangerous information, such as suggesting that users add glue to pizza to keep the cheese from sliding off.

Apple appears to be taking a more conservative approach to AI for now. The company said it intends to introduce its AI tools into public “beta” testing later this year. It has also partnered with leading startup OpenAI to add additional capabilities to its iPhones, iPads and Mac computers.

