Show HN: Simple wrapper for Chrome's built-in local LLM (Gemini Nano)
Chrome now includes a native on-device LLM (Gemini Nano) starting in version 138. I've been building with it since it was in origin trials; it's powerful, but the official Prompt API is still a bit awkward (see the sketch after this list):
- Enforces sessions even for basic usage
- Requires user-triggered downloads
- Lacks type safety and structured error handling
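For context, the raw flow looks roughly like this. It's a sketch based on the shape documented in Chrome's Prompt API docs; the `LanguageModel` global and its option names may drift between releases, so treat the details as assumptions:

```ts
// Sketch of the raw Prompt API flow (shape per Chrome's docs; may
// change between releases). `LanguageModel` is a Chrome-only global,
// so TypeScript needs a declaration.
declare const LanguageModel: any;

// 1. Check whether the model is available or still needs a download.
const availability = await LanguageModel.availability();

if (availability !== "unavailable") {
  // 2. A session is mandatory even for a one-off prompt, and a
  //    first-time model download must start from a user gesture.
  const session = await LanguageModel.create({
    monitor(m: any) {
      m.addEventListener("downloadprogress", (e: any) => {
        console.log(`Downloaded ${Math.round(e.loaded * 100)}%`);
      });
    },
  });

  // 3. Prompt within the session; failures surface as thrown exceptions.
  console.log(await session.prompt("Hello!"));
  session.destroy();
}
```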
So I open-sourced a small TypeScript wrapper I originally built for other projects to smooth over the rough edges:
GitHub: https://github.com/kstonekuan/simple-chromium-ai
npm: https://www.npmjs.com/package/simple-chromium-ai
- Stateless prompt() method inspired by Anthropic's SDK
- Built-in error handling and Result-based .Safe.* variants with neverthrow
- Token usage checks
- Simple initialization with a helper to trigger the model download (downloads must come from a user action; see the sketch below)
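A rough sketch of the intended usage. The identifiers here (`ChromiumAI.initialize`, `ChromiumAI.prompt`, `ChromiumAI.Safe.prompt`) are illustrative shorthand for the features above, not a guaranteed API surface; check the README for the exact signatures:

```ts
// Illustrative usage sketch -- names are indicative, not guaranteed;
// see the README for the exact API.
import ChromiumAI from "simple-chromium-ai";

const button = document.querySelector("button")!;

// Downloads must start from a user gesture, so initialize on click.
button.addEventListener("click", async () => {
  const ai = await ChromiumAI.initialize("You are a concise assistant");

  // Stateless prompt: no session bookkeeping on the caller's side.
  console.log(await ChromiumAI.prompt(ai, "Summarize this tab in one line."));

  // Result-based variant via neverthrow: errors as values, not throws.
  const result = await ChromiumAI.Safe.prompt(ai, "And one more line.");
  if (result.isOk()) {
    console.log(result.value);
  } else {
    console.error(result.error);
  }
});
```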
It’s intentionally minimal for hacking and prototyping. If you need fine-grained control (e.g. streaming, memory control), use the native API directly:
https://developer.chrome.com/docs/ai/prompt-api
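For example, streaming with the native API looks roughly like this (again a sketch based on the docs linked above; assumes the model is already downloaded):

```ts
// Native streaming sketch (API surface per Chrome's docs; may change).
declare const LanguageModel: any;

const session = await LanguageModel.create();
const stream = session.promptStreaming("Write a long poem about tabs vs. spaces.");

// The stream yields text chunks as the model generates them.
for await (const chunk of stream) {
  document.body.append(chunk);
}
session.destroy();
```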
Would love to hear what people build with it or any feedback!
Great work on simple-chromium-ai! The TypeScript wrapper looks like a solid way to make Chrome's Prompt API more developer-friendly, especially the stateless prompt() method and built-in error handling. I'm curious about the trade-offs of keeping it minimal: were there specific Prompt API quirks that motivated the neverthrow Result-based approach? And for anyone who has tried it, what are you building with Gemini Nano via this wrapper, and how does it compare to using the native API directly?
Could you host a static page of this on GitHub?
It only works in extensions for now, but I can post a demo in the repo soon.
You can also join Chrome's Early Preview Program (EPP) to use it on web pages.