AI at Apple: An apology for Siri, better Visual Intelligence, more ChatGPT

At Apple’s WWDC 2025 keynote on Whit Monday, many observers wondered whether the company would also say something about the delayed major new Siri features. And indeed, software chief Craig Federighi briefly touched on the topic after speaking at length, and positively, about Apple Intelligence, which was introduced last year. The executive said Apple had already stated that “we continue with our work to make Siri even more personal”. This work “took more time to reach our high quality bar”. Apple now looks forward to sharing “more about this with you in the coming year”. Before the keynote, there had been speculation that the company might show the improved Siri in autumn, after restructuring the department responsible for it.

Federighi emphasized that Siri had already become “more natural and helpful” with Apple Intelligence. In practice, though, the new capabilities have stayed within narrow limits. Siri is supposed to be able to “hold” longer conversations, i.e. refer back to earlier statements, but this only works some of the time. The three most important features Apple had announced are still missing: personal context for the voice assistant, direct interaction with apps, and awareness of what is shown on the screen.

The latter is what Apple now wants to deliver at least partially with iOS 26, albeit differently than expected: in the form of an improved Visual Intelligence feature. The feature has been around for a while and lets the camera’s view be analyzed with the help of external services such as Google; for example, you can turn a concert poster into a calendar entry or get information about landmarks. What is new is that Visual Intelligence can also handle screenshots. To use it, you press the usual screenshot shortcut on the iPhone and can then choose to have the screenshot analyzed. In the keynote this was demonstrated by, among other things, searching the web for a jacket; it is also possible to mark individual image regions. Relying on screenshots is clever because the user keeps full control over what is transmitted to the AI. Apple draws not only on its own technology and Google but also on OpenAI’s services, and parts of the AI features run on-device with Apple’s own models.

Finally, Apple is extending its image generators with iOS 26. In the future the company will not only rely on its own models but is also adding ChatGPT to its Image Playground app for image creation. How extensive this will be remained unclear at first; Apple initially seems to enable only “certain styles”, as is already familiar from its own, limited models. Image Playground will also be available as an API for developers, who can save costs this way. The Genmoji feature, Apple’s second image generator for creating ideograms, will also be able to do (somewhat) more in the future: two existing emojis can be combined, and there are more facial expressions and hairstyles for Genmoji that resemble real people. iOS 26 arrives in autumn; a beta phase for developers is already underway, and a public beta follows in July. Incidentally, Apple did not say anything about integrating Google Gemini into Apple Intelligence, although Alphabet boss Sundar Pichai had already dropped the first hints.
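For developers, the integration could build on the ImagePlayground framework Apple already ships: an app presents the system’s generation sheet and receives a file URL for the finished image, while Apple’s sheet handles prompts, models and moderation. The following Swift sketch assumes that existing API surface (the `imagePlaygroundSheet` modifier); how iOS 26 will expose the new ChatGPT styles through it has not been announced and is not shown here.

```swift
import SwiftUI
import ImagePlayground  // Apple's framework behind the Image Playground app

// Minimal sketch: let an app request an image via the system's Image Playground
// sheet. The `imagePlaygroundSheet` modifier reflects the framework as shipped
// so far; the ChatGPT styles mentioned for iOS 26 would presumably be picked by
// the user inside Apple's own UI rather than by the app.
struct PlaygroundDemoView: View {
    @State private var showPlayground = false
    @State private var resultURL: URL?   // file URL of the generated image

    var body: some View {
        VStack(spacing: 16) {
            if let url = resultURL {
                // The sheet hands back a local file URL, so AsyncImage can load it.
                AsyncImage(url: url) { image in
                    image.resizable().scaledToFit()
                } placeholder: {
                    ProgressView()
                }
            }
            Button("Generate image") { showPlayground = true }
        }
        .padding()
        // Apple's sheet performs the actual generation; the app never talks to a model.
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concept: "a friendly robot reading the news",   // hypothetical example prompt
            onCompletion: { url in resultURL = url }
        )
    }
}
```

Because the generation runs entirely inside Apple’s sheet, the app needs neither its own image-generation back end nor a paid third-party API, which is presumably the cost saving meant for developers.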

