The Only AI Smart Glasses Built for All‑Day Reality, Not Surveillance Theater
Even G2: The Smart Glasses That Refuse to Be Creepy
Smart glasses have spent the last decade trapped in a paradox: the closer they get to feeling like real computing, the less acceptable they become in real life. Cameras make bystanders uneasy. Bulky frames betray your inner cyborg. Two-hour batteries strand you before lunch. Most products are either short-lived gadgets or brand statements disguised as head-worn surveillance.
Even Realities’ new G2 Display Smart Glasses land somewhere else entirely. They are opinionated, constrained, and — in crucial ways — underpowered. And that’s precisely why they may be the first AI glasses that developers, speakers, and privacy-conscious professionals can plausibly wear from breakfast to midnight.
This isn’t a specs arms race against Meta. It’s a design argument about what ambient AI should feel like in public.
Source: This analysis is based on reporting and first-hand impressions published by Jason Hiner for ZDNET (Nov. 13, 2025), plus independent technical interpretation.
A Heads-Up Display for Adults
Even Realities keeps the G2 proposition deceptively simple:
- A bright monochrome green display in both lenses
- No front-facing cameras
- No speakers or open-ear audio
- Lightweight 36g frame, prescription-ready from -12 to +12 diopters
- Battery life measured in 1–2 days, not hours
At $599, the G2 is priced like premium eyewear, not a halo gadget. That framing matters. Even isn’t trying to sell you an AR lifestyle; it’s selling you glasses that happen to have a HUD.
Feature-wise, the G2 reads like a pragmatic toolkit:
- Live translation (text-only, on lens)
- Discreet phone notifications
- Step-by-step navigation
- Quick notes
- An embedded AI chatbot
- "Conversate": an AI layer that listens to live dialogue and surfaces contextual info or summaries
All of it runs within the physical and social constraints of a minimalist optical display. No one across the table feels recorded. You don’t stand out in a conference hallway. You’re not streaming your surroundings to an adtech backend.
For a technical audience, the significance is architectural: Even Realities is explicitly optimizing for social comfort, power efficiency, and clarity of use case over sensory maximalism. It’s a rare bet in a category obsessed with multimodal capture.
Teleprompter-as-a-Service, on Your Face
The killer app here is almost disarmingly narrow: a world-class teleprompter.
The G1, released in 2024, went viral among YouTubers, founders, and public officials for one reason — it let them read scripts invisibly while maintaining eye contact. The software ingests a text file, then uses on-device AI to track your speaking pace and auto-scroll in sync.
The G2’s 75% larger, ~30% brighter display pushes that experience from clever gadget to serious tool. Onstage use cases — conference keynotes, investor pitches, live broadcasts — benefit from:
- Higher legibility at natural focal distances
- More forgiving head movement
- Reduced cognitive load versus glancing at phones or confidence monitors
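The article describes the core mechanism only at a high level: on-device AI tracks your speaking pace and auto-scrolls the script in sync. As a rough illustration of how that alignment might work, here is a minimal sketch in Python. The class name, matching window, and scroll model are all assumptions for illustration, not Even Realities' actual implementation.

```python
# Hypothetical sketch: pace-matched teleprompter scrolling.
# Assumes a stream of recognized words (e.g. from on-device ASR).
# Names and parameters are illustrative, not Even Realities' API.

def normalize(word: str) -> str:
    """Lowercase and strip punctuation for fuzzy matching."""
    return "".join(c for c in word.lower() if c.isalnum())

class Teleprompter:
    def __init__(self, script: str, lookahead: int = 5):
        self.words = [normalize(w) for w in script.split()]
        self.pos = 0                # index of the next unread script word
        self.lookahead = lookahead  # how far ahead to search for a match

    def hear(self, spoken_word: str) -> None:
        """Advance the cursor when a spoken word matches upcoming script.

        Searching only a short lookahead window keeps the scroll robust
        to skipped or misrecognized words without jumping ahead wildly.
        """
        target = normalize(spoken_word)
        window = self.words[self.pos : self.pos + self.lookahead]
        if target in window:
            self.pos += window.index(target) + 1  # skip past the match

    def scroll_fraction(self) -> float:
        """Fraction of the script consumed; drives the display's scroll offset."""
        return self.pos / max(len(self.words), 1)
```

The key design point this sketch captures is that the speaker, not a fixed timer, drives the scroll: pausing mid-sentence freezes the display, and speeding up pulls the script along.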
From a systems perspective, the teleprompter showcases a more mature model for wearables: opinionated, high-competence features instead of a cluttered app zoo. Rather than chase "do everything" parity with a smartphone, the G2 nails a high-value vertical workflow.
If you build developer tools or productivity platforms, this is the interesting lesson: tightly scoped, always-available micro-experiences may be where AR finally proves indispensable.
The Software Catch: When Focus Meets Friction
The G1 was rightly criticized for rough software: confusing menus, unfinished navigation, and a retro UI that read more hobbyist than flagship.
The G2 iterates: larger display, refined interface, improved navigation, new AI-powered "Conversate" mode. But outside the teleprompter, some features still feel beta-grade.
For engineers, this underscores the core challenge of wearable UX:
- Input bandwidth is constrained. Heads-up interfaces punish hierarchy-heavy navigation and nested menus.
- Mode errors are costly. Subtle state changes are hard to signal on a small FOV; users need predictable mental models.
- Failures are public. Bugs on a phone are private; bugs on your face are reputational.
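One common defense against the mode errors described above is to make every state transition explicit and reject anything else, so the user's mental model can never drift from the device's. A minimal sketch, with entirely hypothetical states and events:

```python
# Minimal sketch of a mode-explicit HUD controller, illustrating the
# "predictable mental model" point. States, events, and the transition
# table are hypothetical, not the G2's actual software.

ALLOWED = {
    ("idle", "open_notes"): "notes",
    ("idle", "open_nav"): "navigation",
    ("notes", "back"): "idle",
    ("navigation", "back"): "idle",
}

class HudController:
    def __init__(self):
        self.state = "idle"

    def handle(self, event: str) -> str:
        """Apply an event to the current mode.

        Transitions not in the whitelist are ignored rather than guessed,
        so a stray input can never silently switch modes on the wearer.
        """
        self.state = ALLOWED.get((self.state, event), self.state)
        return self.state
```

On a small field of view with no room for confirmation dialogs, "ignore the unexpected" is usually safer than "do something plausible."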
Even Realities’ roadmap will live or die by how quickly it can iterate this stack without compromising its privacy posture. A camera-free, mostly on-device or phone-tethered AI model has different data and latency constraints than Meta’s more invasive, cloud-leveraged approach.
R1 Smart Ring: Ambitious Input, Immature Experience
If the G2 is disciplined, the $249 R1 Smart Ring is where Even lets itself experiment — sometimes clumsily.
The intent is strong:
- A discreet, thumb-driven gesture surface (tap, double-tap, swipe) for navigating the glasses
- Additional health and activity tracking, Oura-style
- Integrated stats surfaced directly within the G2 interface
On paper, this is the right direction. Wrist and temple controls are visible and awkward; a ring interface could make HUD navigation nearly invisible and more granular.
In practice, early impressions point to:
- Finicky gesture recognition
- Accidental activations and unintended commands
- Buggy health metrics
Crucially, the G2 remains fully usable without the ring via capacitive controls on the glasses themselves. For now, that’s the smarter path for mission-critical contexts (live talks, high-stakes meetings), where an errant swipe mid-keynote is unacceptable.
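The finicky recognition and accidental activations reported above are, at bottom, a thresholding problem: raw touch events must be collapsed into discrete gestures without misfiring. A simplified sketch of how such a classifier might debounce tap, double-tap, and swipe inputs; the event shape and thresholds are assumptions, not the R1's actual firmware:

```python
# Illustrative sketch of gesture debouncing for a ring-style input
# surface. Event fields and thresholds are assumed for illustration.

from dataclasses import dataclass

@dataclass
class Touch:
    t: float    # timestamp in seconds
    dx: float   # horizontal travel during the touch, in mm

def classify(touches: list[Touch],
             double_tap_window: float = 0.3,
             swipe_travel: float = 4.0) -> list[str]:
    """Collapse raw touches into tap / double-tap / swipe gestures."""
    gestures = []
    i = 0
    while i < len(touches):
        cur = touches[i]
        # Significant travel means a swipe, regardless of timing.
        if abs(cur.dx) >= swipe_travel:
            gestures.append("swipe_right" if cur.dx > 0 else "swipe_left")
            i += 1
            continue
        # A tap followed quickly by another tap becomes a double-tap.
        nxt = touches[i + 1] if i + 1 < len(touches) else None
        if (nxt is not None and abs(nxt.dx) < swipe_travel
                and nxt.t - cur.t <= double_tap_window):
            gestures.append("double_tap")
            i += 2
        else:
            gestures.append("tap")
            i += 1
    return gestures
```

Even in this toy version, the trade-off is visible: tighten the thresholds and you suppress accidental activations but reject real gestures; loosen them and an incidental brush mid-keynote becomes a command. Tuning that balance is exactly the kind of work that takes the ring "another software generation."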
From a product architecture standpoint, the R1 is a testbed for modular, multi-device interaction: distribute sensing and control across subtle form factors instead of overloading the glasses. It’s the right thesis; the implementation needs another software generation.
David vs. Meta: A Different Bet on Ambient AI
Meta holds ~70% of the smart glasses market, with camera-centric Ray-Ban models, color displays, neural input bands, and full-stack AI integration. It is building an always-on sensor array tied to a trillion-dollar ad and social graph.
Even Realities, with ~200 employees split between China (tech) and Switzerland (industrial design), is deliberately building the opposite:
- No outward cameras; optics pointed at the wearer’s experience, not the world’s data
- Industrial design that passes as fashion-first eyewear
- Hardware constraints that enforce respectable use cases
This philosophical fork matters for anyone developing AI, AR, or edge experiences:
- Trust is a competitive moat. Camera-free defaults make it easier to wear the product in regulated environments: hospitals, secure facilities, courtrooms, sensitive enterprise campuses.
- Narrow competence beats noisy capability. When a single feature (teleprompter) just works, it creates real professional dependency, unlike generic "assistant" gimmicks.
- Distributed, privacy-preserving AI is viable. Even’s model suggests a path where useful AI augmentation doesn’t require constant full-fidelity capture.
If Meta is building "AI that sees everything," Even is exploring "AI that quietly helps you perform." Both futures can coexist, but only one can walk into certain rooms.
Who Should Actually Care — and Why
For most consumers, the G2 is a niche luxury. For specific technical and professional cohorts, it’s something more consequential.
Developers & builders
- A compelling reference design for single-purpose, high-trust wearables.
- A platform opportunity: focused apps around public speaking, training, sales, field ops, and workflow guidance could thrive on a minimalist HUD.
Security & compliance leaders
- A rare wearable that you can plausibly allow in sensitive spaces.
- The absence of outward cameras reduces both insider risk and privacy liability.
Speakers, trainers, executives, on-air talent
- A portable, invisible teleprompter that doesn’t break eye contact or require stage infrastructure.
Privacy-conscious power users
- A way to keep AI and notifications ambient without strapping cameras to your face.
Even G2 isn’t the future of computing. It’s something more grounded: an argument for useful, bearable augmentation in a category that has confused capability with acceptability.
As AI wearables race toward ever more sensors and spectacle, Even Realities’ refusal to record everything might be its most radical feature.