MobileLLM-R1-950M meets Apple Silicon
Sep 15, 2025 · 6 min read

From Stub to Coherent MLX Model: a story-shaped release note

Getting Meta's efficient 950M-parameter model running natively on Apple Silicon for fast, local inference.

I've made a few small contributions to the llama.cpp community, including adding ...