Running llama.cpp (compiled from source) on AMD Strix Halo 395
Jan 31 · 1 min read · Just a quick doc/note/tutorial for referencing myself later. Here's how to get llama.cpp running with Vulkan support on AMD AI Max 395 (Strix Halo) based devices. I tried it on a Beelink GTR 9 Pro, but it should work for the Framework Desktop too. Instal...
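The build itself can be sketched roughly as follows. This is a minimal outline, assuming the Vulkan SDK (or your distro's Vulkan headers and loader, e.g. `libvulkan-dev` and `glslc`) is already installed; llama.cpp's CMake build exposes Vulkan support via the `GGML_VULKAN` option:

```shell
# Clone llama.cpp and build it with the Vulkan backend enabled.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with Vulkan support; GGML_VULKAN=ON selects the Vulkan backend.
cmake -B build -DGGML_VULKAN=ON

# Compile in Release mode, using all available cores.
cmake --build build --config Release -j
```

After the build, the binaries (e.g. `llama-cli`, `llama-server`) land in `build/bin/`, and the Strix Halo's integrated GPU should be picked up automatically by the Vulkan backend at runtime.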
