An experiment at Duke University shows how easily people can be deceived by manipulated visual cues in augmented reality. The project addresses a looming security problem: if AR information becomes part of everyday life, even a small change could be enough to influence perception and behavior.
Miniature city as a testing ground for deception
At the MobiHoc 2025 conference in Houston, Yanming Xiu and Maria Gorlatova presented an interactive miniature city that is viewed through the passthrough mode of the Meta Quest 3. In mixed-reality mode, the headset captures the real environment with its cameras and blends it with digital overlays rendered in real time on its display.
In the demo, the researchers altered street signs and building lettering: a hospital suddenly bore the sign “Hotel”, and a stop sign appeared at an intersection where none belonged. Participants steered a remote-controlled toy car through this distorted environment and promptly followed the false cues. In one test run, two out of three participants drove off the route without noticing the error.
The researchers call this “Visual Information Manipulation”, or “VIM” for short: the deliberate modification or addition of visual AR content. This form of deception can have harmless effects, such as a moment of confusion in a game, but also dangerous ones, for example when manipulated AR signposts steer drivers into risky situations or false information is displayed in a medical context.
Consumer AR glasses such as Meta’s Orion and Snap’s Spectacles are not yet in general circulation. However, Snap has already announced a compact consumer version for 2026, and Meta is presumably preparing its prototype for the mass market as well. Even current AR devices, such as those used in industry, pull content from various data sources. If these sources are compromised, attackers can inject false overlays. Applications in which visual cues must be followed under time pressure are particularly sensitive.
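One conceivable mitigation on the receiving side is to verify the integrity of overlay content before it is rendered. The following Python sketch is purely illustrative and not part of the Duke system: it assumes a hypothetical overlay feed whose entries carry an HMAC tag from a trusted content provider, and discards anything that does not verify.

```python
import hmac
import hashlib
import json

# Hypothetical shared secret between the AR device and a trusted content
# provider. A real deployment would use asymmetric signatures and proper
# key management; HMAC keeps this sketch self-contained.
TRUSTED_KEY = b"example-shared-secret"

def sign_overlay(overlay: dict, key: bytes = TRUSTED_KEY) -> str:
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding of the overlay."""
    payload = json.dumps(overlay, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def filter_trusted_overlays(feed: list[dict], key: bytes = TRUSTED_KEY) -> list[dict]:
    """Return only overlays whose tag verifies; drop injected content."""
    trusted = []
    for item in feed:
        overlay, tag = item["overlay"], item["signature"]
        if hmac.compare_digest(sign_overlay(overlay, key), tag):
            trusted.append(overlay)
    return trusted

if __name__ == "__main__":
    genuine = {"type": "label", "text": "Hospital", "anchor": [12.4, 3.1]}
    forged = {"type": "label", "text": "Hotel", "anchor": [12.4, 3.1]}
    feed = [
        {"overlay": genuine, "signature": sign_overlay(genuine)},
        {"overlay": forged, "signature": "deadbeef"},  # attacker-injected, unsigned
    ]
    print(filter_trusted_overlays(feed))  # only the genuine overlay survives
```

Such a check only helps against compromised data sources, not against a compromised device itself, which is why detection on the rendered content remains relevant.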
Ways to protect against visual tampering
To detect manipulation, AI-supported detectors already exist that evaluate the real and virtual fields of view simultaneously. Xiu and Gorlatova have presented a system called “VIM-Sense”, which combines image and text recognition to automatically flag contradictions between real and virtual content. In a test run on an AR data set, it detected almost 89 percent of the manipulations.
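The published details of VIM-Sense are not reproduced here; the Python sketch below only illustrates the underlying idea of a semantic consistency check, assuming that text labels have already been extracted from the camera feed and from the overlay layer by separate recognition models.

```python
from dataclasses import dataclass

@dataclass
class Label:
    text: str   # recognized text, e.g. "Hospital" or "STOP"
    x: float    # position in a shared scene coordinate frame
    y: float

# Signs that should only be trusted if the real scene confirms them.
SAFETY_CRITICAL = {"STOP", "YIELD", "ONE WAY"}

def overlaps(a: Label, b: Label, radius: float = 1.0) -> bool:
    """Treat two labels as referring to the same spot if they are close together."""
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= radius ** 2

def find_contradictions(real: list[Label], virtual: list[Label]) -> list[str]:
    """Flag virtual labels that contradict, or are unsupported by, the real scene."""
    warnings = []
    for v in virtual:
        nearby = [r for r in real if overlaps(r, v)]
        if any(r.text.lower() != v.text.lower() for r in nearby):
            warnings.append(f"overlay '{v.text}' conflicts with real signage at ({v.x}, {v.y})")
        elif not nearby and v.text.upper() in SAFETY_CRITICAL:
            warnings.append(f"safety-critical overlay '{v.text}' has no real-world counterpart")
    return warnings

if __name__ == "__main__":
    real_scene = [Label("Hospital", 12.4, 3.1)]
    overlays = [Label("Hotel", 12.4, 3.1), Label("STOP", 40.0, 7.5)]
    for w in find_contradictions(real_scene, overlays):
        print("VIM warning:", w)
```

Run on the demo scenario, this toy check would flag both the hospital relabeled as “Hotel” and the stop sign that has no real-world counterpart; the actual system relies on learned vision and language models rather than hand-written rules.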
In addition to technical monitoring, design and control questions are also coming into focus. Transparent AR objects or visible provenance markers could keep the origin of an overlay recognizable. A “reality button” on the device that immediately hides all overlays could also serve as an emergency safeguard against deception.
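How such a reality button might behave can be sketched in a few lines. The class below is a hypothetical illustration, not the API of any existing headset: pressing the button suppresses every overlay at once, regardless of its source, until it is released.

```python
class OverlayManager:
    """Hypothetical overlay registry with a panic switch that hides everything."""

    def __init__(self):
        self._overlays = []          # overlays queued for rendering
        self._reality_mode = False   # True while the reality button is engaged

    def add(self, overlay: dict) -> None:
        self._overlays.append(overlay)

    def press_reality_button(self) -> None:
        """Engage reality mode: nothing virtual is drawn until released."""
        self._reality_mode = True

    def release_reality_button(self) -> None:
        self._reality_mode = False

    def overlays_to_render(self) -> list[dict]:
        """Return the overlays the compositor should draw this frame."""
        return [] if self._reality_mode else list(self._overlays)

if __name__ == "__main__":
    mgr = OverlayManager()
    mgr.add({"type": "label", "text": "Hotel"})
    print(mgr.overlays_to_render())   # overlay is drawn as usual
    mgr.press_reality_button()
    print(mgr.overlays_to_render())   # [] -> the user sees only the passthrough feed
```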
Augmented reality has not yet arrived in most people’s everyday lives. However, the short miniature-city demo already shows that such attacks are not confined to paper. The Duke University team is planning further trials on other devices, such as Apple’s Vision Pro. That headset is more powerful than a Quest 3, and its clearer, higher-resolution passthrough feed should make the illusion even more convincing. A comprehensive study is also planned. The researchers’ goal is not only technical detection, but also raising awareness that AR environments can be manipulated.