I try to run an experiment once a week with open-source LLMs. This week's experiment was using Llama3 via Ollama and AgentRun to build an open-source, 100% local Code Interpreter. The idea is: give an LLM a query that is better answered via code execution...
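The full walkthrough is truncated above, but the loop it describes (send the query to a local model, pull any code out of the reply, run that code in a sandbox) can be sketched roughly as below. This is a sketch under assumptions: the `ollama.chat` call follows the `ollama` pip package's README, `AgentRun.execute_code_in_container` follows the `agentrun` package's README, and the fence-extraction helper plus the container name are hypothetical.

```python
import re


def extract_code_block(reply: str):
    """Pull the first fenced Python block out of an LLM reply (hypothetical helper)."""
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else None


def run_query_locally(query: str) -> str:
    """Ask a local Llama3 for code, then execute it in AgentRun's Docker sandbox."""
    # Assumed APIs: ollama.chat (from the `ollama` package) and
    # AgentRun.execute_code_in_container (from the `agentrun` package).
    import ollama
    from agentrun import AgentRun

    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": f"Answer with Python code:\n{query}"}],
    )["message"]["content"]

    code = extract_code_block(reply)
    if code is None:
        return reply  # the model answered in plain text, nothing to execute
    runner = AgentRun(container_name="agentrun-api-python_runner-1")  # assumed name
    return runner.execute_code_in_container(code)


# Example (requires Ollama and the AgentRun API container running locally):
#   run_query_locally("What is the 10th Fibonacci number?")
```

Everything here stays on your machine: Ollama serves the model locally and AgentRun executes the generated code inside a throwaway Docker container rather than in your own Python process.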
jonathanadly.com · 5 min read