# g2-bridge
Voice-controlled coding agent for Even G2 smart glasses: speak to OpenCode from your glasses.

A fully local, self-hosted platform: the entire pipeline, from speech-to-text to the coding agent, runs on my homeserver with no cloud dependencies.
## What It Does
Speak naturally to your glasses: "Claude, refactor the auth module to use JWT tokens" → OpenCode executes it → Results display on your glasses as a scrolling teleprompter. All without touching a keyboard.
## Architecture

### End-to-End Flow
1. **Voice Input** → G2 glasses mic → Even Hub SDK → glasses app WebView → WAV conversion → `POST /transcribe`
2. **Speech-to-Text** → Whisper (faster-whisper on a local GPU) → text transcript returned
3. **AI Processing** → coding agent submits to the OpenCode CLI (stream-json mode) → tool use executed (Bash/Read/Write/Edit/Glob/Grep)
4. **Response** → result stored → glasses app polls `/agent/status` → display service formats for the teleprompter → BLE protocol → G2 glasses display
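The AI-processing step reads the agent's stream-json output line by line. A minimal sketch of how the server could parse those events (the event shapes and field names here are illustrative assumptions, not the actual OpenCode schema):

```python
import json

def parse_stream_events(raw_lines):
    """Parse newline-delimited JSON events from the agent's stdout.

    Each line is expected to be a standalone JSON object; malformed or
    empty lines are skipped rather than crashing the pipeline.
    """
    events = []
    for line in raw_lines:
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate partial/garbled lines from the CLI
    return events

def extract_text(events):
    """Collect assistant text from parsed events (field names hypothetical)."""
    return "".join(e.get("text", "") for e in events if e.get("type") == "text")

# Fabricated sample events for illustration only:
sample = [
    '{"type": "text", "text": "Refactored "}',
    '{"type": "tool_use", "name": "Edit"}',
    '{"type": "text", "text": "auth module."}',
]
```

Tolerating malformed lines matters here because the CLI's stdout can be cut mid-line when a "stop" command aborts the run.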
## Key Features
- Voice Commands: "stop" (abort), "undo" (revert), "use sonnet" (model switch)
- Dual Mode: Traditional OpenCode CLI + REST API with SSE streaming
- Session Continuity: the `--resume` flag maintains context across interactions
- Silence Detection: auto-submits after 2 seconds of no audio
- Gesture Control: Tap to activate, scroll to navigate, double-tap for actions
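The voice commands and silence detection above can be sketched as two small server-side pieces. The `route_command` mapping and the `SilenceDetector` class are hypothetical names for illustration; only the keywords ("stop", "undo", "use sonnet") and the 2-second threshold come from the feature list:

```python
import time

# Voice keywords mapped to agent actions (from the feature list).
COMMANDS = {"stop": "abort", "undo": "revert"}

def route_command(transcript: str):
    """Map a transcript to a control action, or None for a normal prompt."""
    text = transcript.strip().lower()
    if text in COMMANDS:
        return COMMANDS[text]
    if text.startswith("use "):
        # "use sonnet" -> switch the agent's model
        return ("switch_model", text.removeprefix("use "))
    return None

class SilenceDetector:
    """Auto-submit once no audio has arrived for `threshold` seconds."""

    def __init__(self, threshold: float = 2.0, clock=time.monotonic):
        self.threshold = threshold
        self.clock = clock  # injectable for testing
        self.last_audio = clock()

    def on_audio(self):
        """Call whenever an audio chunk arrives from the glasses."""
        self.last_audio = self.clock()

    def should_submit(self) -> bool:
        return self.clock() - self.last_audio >= self.threshold
```

Injecting the clock keeps the 2-second cutoff testable without real-time waits.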
## Tech Stack
| Layer | Technology |
|---|---|
| Glasses SDK | Even Hub SDK v0.0.7 |
| Client | TypeScript, Vite, WebView |
| Android | Kotlin, BLE 5.0, OkHttp |
| Server | Python, FastAPI, asyncio |
| STT | faster-whisper (OpenAI Whisper) on local GPU |
| AI Agent | OpenCode CLI |
| Protocol | Custom 2760 BLE, LC3 audio |
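The display service's teleprompter formatting can be sketched as plain width/line paging before the pages go out over BLE. The 40-character width and 4-line page size are assumptions about the G2 display, not measured values:

```python
import textwrap

def paginate(text: str, width: int = 40, lines_per_page: int = 4):
    """Wrap agent output to the glasses' line width and split it into
    scrollable teleprompter pages (one string per page)."""
    lines = textwrap.wrap(text, width=width) or [""]
    return [
        "\n".join(lines[i:i + lines_per_page])
        for i in range(0, len(lines), lines_per_page)
    ]
```

Scroll gestures then simply step an index through the returned page list.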