When it launched, Apple’s Visual Intelligence feature allowed you to point your compatible phone’s camera at things around you and either perform a Google Image Search or ask questions via ChatGPT. At WWDC 2025, the company showed off updates to broaden the usefulness of Visual Intelligence, largely by embedding it into the screenshots system. To quote the company’s press release, “Visual intelligence already helps users learn about objects and places around them using their iPhone camera, and it now enables users to do more, faster, with the content on their iPhone screen.”
This reminded me of the “onscreen awareness” that Apple described as one of Siri’s capabilities when it announced Apple Intelligence last year. In that press release, the company said, “With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time.” Though it’s not quite the same, the updated screenshot-based Visual Intelligence more or less allows your iPhone to serve up contextual actions from your onscreen content, just not via Siri.
In a way, it makes sense. Most people are already accustomed to taking a screenshot when they want to share or save important information they saw on a website or Instagram post. Integrating Apple Intelligence actions here would theoretically put the tools where you expect them, rather than make users talk to Siri (or wait for the update to roll out).
Basically, in iOS 26 (on devices that support Apple Intelligence), pressing the power and volume down buttons to take a screenshot pulls up a new page. Instead of a thumbnail of your saved image appearing in the bottom left, you’ll see the picture take up almost all of the display, with options around it for editing, sharing or saving the file, as well as getting Apple Intelligence-based answers and actions at the bottom. In the bottom left and right corners sit options for asking ChatGPT and doing a Google Image Search, respectively.
Depending on what’s in your screenshot, Apple Intelligence can suggest various actions below your image: asking where to buy a similar-looking item, adding an event to your calendar or identifying types of plants, animals or food, for instance. If there’s a lot going on in your screenshot, you can draw on an item to highlight it (similar to how you select an object to erase in Photos) and get information specific to that part of the image.