The tech world is abuzz with smart glasses, touted as the next big leap in consumer technology. Recent demonstrations by Meta and Snap have wowed audiences with impressive augmented reality (AR) visuals. But what truly sets these devices apart isn’t the AR itself – it’s the integration of artificial intelligence (AI), which promises to reshape how we interact with the world around us.
Last week, Meta introduced its latest AR glasses, dubbed Orion, at its developer conference. Snap, meanwhile, has launched new versions of its Spectacles, and rumours are swirling that Apple has something in the works too. It’s clear that smart glasses are no longer just a futuristic concept – they’re here, and they’re about to go mainstream. But while the holographic displays and gesture controls of these glasses are certainly impressive, the real game-changer lies in their AI capabilities.
Meta’s Orion glasses are still in the prototype stage, costing an eye-watering $10,000 to produce, but they’ve already sparked excitement among tech enthusiasts. The visual aspects, from interactive AR features to surprisingly normal-looking designs, grabbed attention. However, what’s far more intriguing is how these glasses allow users to interact seamlessly with AI. Imagine being able to ask a question or request information without needing to pull out your smartphone or stare at a screen.
With today’s devices, whether you’re querying ChatGPT or asking Google for information, you’re tethered to a screen. Even voice assistants typically push you back to a phone or laptop for anything beyond a quick answer. With smart glasses, that changes. Meta’s Ray-Ban Meta smart glasses already let users interact with AI hands-free, offering a liberating, screen-free experience. The ability to look at something, ask a question, and receive instant information is where the future of smart glasses truly lies.
Take Snap’s latest Spectacles, for example. The most impressive function wasn’t the AR features but the AI’s ability to identify real-world objects. While wearing the glasses, a user could simply look at a distant ship, ask the AI what it was, and immediately receive not only an identification but a description. Similarly, Meta’s Orion demo showcased AI recognising ingredients and offering a recipe on the spot – a perfect illustration of how these devices can enhance daily life through AI rather than AR gimmicks.
The true innovation here isn’t virtual objects or games – it’s the practical, everyday applications of AI that can help users understand and interact with their surroundings in real-time. It’s about the ability to get more out of the world, to process and use information effortlessly, without the distraction of screens or clunky interfaces.
In many ways, this concept has been brewing for years. Back in 2013, Google Glass hinted at the potential for smart eyewear to bring useful, context-driven information directly to our line of sight. While Google Glass had its issues, its ability to offer up relevant information via Google Now was revolutionary for the time. The problem then was that the technology wasn’t quite ready. But today, with the power of generative AI, smart glasses are finally coming into their own.
These glasses aren’t just about seeing digital objects floating in front of you. They represent a new way to navigate the world, with AI acting as a constant companion, ready to assist with real-world tasks in a natural, intuitive manner. As AI becomes more multimodal – understanding speech, text, video, and images – smart glasses will be able to process and respond to a wider range of inputs, allowing us to engage with our environment in unprecedented ways.
Years ago, AR headset companies like Magic Leap tried to sell the idea of leaving virtual objects, such as digital bouquets, in physical spaces for others to find. But the real breakthrough in smart glasses has come from the AI that powers them. It’s the ability to see, hear, and respond to the world in real-time that will make smart glasses indispensable – much like smartphones in the late 2000s.
While augmented reality will no doubt add some fun and useful features to smart glasses, the true revolution lies in their AI capabilities. These glasses will allow us to engage with the world in new ways, offering context and understanding at every turn, all without needing to stare at a screen. The era of smart glasses isn’t just about what we see – it’s about how we think and interact with the world, hands-free and smarter than ever.