I build things that solve real problems.
Not because someone asked me to. Because I wanted to know if it was possible.
I'm Andrew. I work at the intersection of AI, engineering, and the kind of curiosity that keeps you up until 3am because you need to know if a 31-billion-parameter model can run on a five-year-old iMac. (It can. Slowly.)
My approach to building is straightforward: ship first, polish second, write about it third. I believe the best way to understand something is to build it, break it, and build it again. Theory is fine. Working code is better.
Right now I'm deep in the weeds with local AI — running Google's Gemma 4 models on consumer hardware, building agentic workflows that actually work (and documenting the ones that don't), and exploring what's possible when you refuse to use an API key.
What I care about
Local-first AI
Cloud APIs are great until they're not: rate limits, privacy concerns, vendor lock-in, and a monthly bill that keeps growing. I want to know what you can do with the hardware you already own.
Honest benchmarks
Everyone publishes benchmarks on H100s. Nobody tells you what the model feels like on your machine. I benchmark on real hardware with real tasks, and I publish the failures alongside the wins.
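"Real hardware, real tasks" doesn't require fancy tooling. As one illustration: Ollama's `/api/generate` response reports `eval_count` (tokens generated) and `eval_duration` (nanoseconds spent generating them), which is all you need for an honest tokens-per-second number. A minimal sketch (the sample numbers below are illustrative, not a real measurement):

```python
# Ollama's /api/generate response includes eval_count (tokens generated)
# and eval_duration (nanoseconds spent generating them).
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Throughput as the model actually delivered it on your machine."""
    return eval_count / (eval_duration_ns / 1_000_000_000)

# Illustrative numbers: 142 tokens over 19.7s of eval time.
print(f"{tokens_per_second(142, 19_700_000_000):.1f} tok/s")
```

The same fields come back on every generate call, so the number reflects your hardware and your prompt, not someone else's H100.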
Building in public
The polished launch post is nice. The messy build log is more useful. I share the process — the dead ends, the breakthroughs, the "why does this work now" moments.
Tools over talk
I'd rather build a tool that solves one problem well than write a thread about how AI will change everything. The tool is the argument.
Current stack
AI / ML
- Ollama
- Gemma 4
- LiteRT
Frontend
- Next.js
- React
- TypeScript
Data
- SQLite
- Drizzle ORM
- D3.js
Infrastructure
- Vercel
- Cloudflare
- iMac 2017