Meta is pushing the boundaries of what smart glasses can do. The company has introduced several new AI-powered features for its Ray-Ban Display smart glasses, with neural handwriting being the standout addition. Users can now type messages by moving their fingers in the air or making subtle hand gestures, eliminating the need to pull out their phone or use any physical keyboard.
The feature works through a neural wristband that Meta introduced to complement the smart glasses. The wristband tracks hand gestures and translates them into actions like typing messages or controlling the interface. The technology works across popular messaging apps including WhatsApp, Messenger, and Instagram, as well as native Android and iOS messaging applications.
This update reflects Meta’s broader strategy of making smart glasses more practical for everyday use. The company has been steadily adding features since launching the Ray-Ban Display glasses just over six months ago, signaling its commitment to making wearable computing mainstream. The neural handwriting capability addresses one of the biggest friction points with smart glasses: the awkwardness of text input when your hands are busy or when pulling out a phone isn’t convenient.
Beyond gesture-based typing, Meta has added several other features to make the glasses more useful:
- Display recording that captures what users see through the in-lens display alongside the real-world view and audio
- Expanded walking directions support across the US and major international cities like London, Paris, and Rome
- Live captions for voice conversations on WhatsApp, Messenger, and Instagram Direct
- New widgets for Weather, Stocks, Calendar, and Reminders
- Faster Spotify access and Instagram Reels support
The display recording feature aims to make content creation easier by allowing users to capture and share their experiences directly from the glasses. This could appeal to content creators and social media users who want to document their perspective without holding up a phone.
Meta has also opened up the platform to developers through a preview program. Developers can now build lightweight web apps for the glasses using standard web technologies like HTML, CSS, and JavaScript. The company is also launching a Wearables Device Access Toolkit that lets developers extend existing mobile apps onto the smart glasses display, adding interface elements like text, images, buttons, lists, and video playback.
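Meta hasn’t published the toolkit’s API details here, but the “standard web technologies” framing suggests apps roughly like the sketch below. Everything in it is illustrative: the widget name and helper function are hypothetical, and nothing references Meta’s actual Wearables Device Access Toolkit API.

```javascript
// Illustrative sketch only: a "lightweight web app" view for a
// glasses-sized display, built with plain JavaScript string templating.
// The function name and widget are hypothetical; this does NOT use
// Meta's actual toolkit API, which the article does not document.

// Render a simple reminders widget as an HTML string, sized for a small
// in-lens display: large text, and only a few lines of content.
function renderRemindersWidget(reminders) {
  const items = reminders
    .slice(0, 3) // a small in-lens display can only show a few lines
    .map((text) => `<li>${text}</li>`)
    .join("");
  return [
    '<div class="widget" style="font-size:2em">',
    "  <h2>Reminders</h2>",
    `  <ul>${items}</ul>`,
    "</div>",
  ].join("\n");
}

// Example: a fourth reminder is dropped to fit the display.
const html = renderRemindersWidget([
  "Call Sam",
  "Buy milk",
  "Gym at 6",
  "Renew passport",
]);
console.log(html);
```

The design point the sketch tries to capture is that apps built this way are ordinary HTML/CSS/JS, just constrained to the interface elements the article lists: text, images, buttons, lists, and video playback.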
The rapid pace of updates (four major software releases since launch) shows Meta’s determination to prove that smart glasses can be more than a novelty. The company faces competition from Apple’s rumored smart glasses project and needs to establish a strong foothold in the wearable computing market. By focusing on practical features like air typing and seamless app integration, Meta is trying to solve real problems that could drive mainstream adoption of smart glasses.
