Archive
Deep dives, tutorials, and field notes from the lab.
Hello, Ollama: Local Inference is Your Architectural Insurance
Running LLMs on a Raspberry Pi isn't just a hobby; it's a fallback strategy for system resilience.
Terminal Velocity: The CLI as your AI Abstraction Layer
Why GUIs are a bottleneck for AI engineering and how to build a high-speed command-line workflow using WSL2 and Zsh.
API Sovereignty: Building for the 2 AM Failure
Why generic API wrappers are a liability and how to build a resilient, multi-provider fallback chain.
The Mac Mini M4: The Unofficial Local LLM King
Why unified memory architecture is the only way to run 70B-parameter models without a data-center budget.
The Linux Edge: Why AI Breathes Better in the Kernel
Why I stopped fighting Windows registry errors and moved my entire AI engineering lab to WSL2 and Ubuntu Server.
The Raspberry Pi Master: Your 24/7 Lab Assistant
Why my Pi isn't just a hobbyist project but the always-on utility and IoT bridge for the Gekro Lab.
The Architecture of a Lab: Splitting Brain from Body
Why monolithic web frameworks fail for AI engineering and how I built a distributed, agent-first architecture for Gekro.
Zero to Agent: My First Autonomous Loop
The moment I stopped writing scripts and started building systems that sense, think, and act without my permission.