Run an LLM Locally on an Intel Mac with an eGPU
Originally published on https://www.ankitbabber.com
I have an Intel-based Mac. I also have an eGPU with an AMD Radeon RX 6900 XT (...all right!). But I couldn't harness that power and run an LLM locally with Ollama! If you have an Intel-based Mac, ...