vai-virtual-ai-inference.hashnode.dev
VAI: Zero-Overhead Model Switching for AI Inference
published: true
Description: "Why we treat model weights like ROM, not malloc()"

The Problem

Every time you switch models in a typical inference setup:
1. Unload weights from GPU memory
2. Load new weights from disk
3. Rebuild execution state
4. Warm ...

Jan 25 · 4 min read
wiowiz.hashnode.dev
VHE: GPU-Accelerated Gate-Level Simulation at Zero License Cost

The Problem

Our NPU design hit 1.4 million gates. Verilator started a convolution test.
Runtime: 139 billion cycles
VCD trace: 56 GB
Status: Killed after 3 days

Commercial emulators cost a lot. We're a startup in India. That wasn't happening.

What We...

Jan 16 · 3 min read