EH: No, your premise is false. AI is neither dangerous nor a weapon, so the rest of your statement is invalid. Try again with a new premise that isn't false.
Reply · Article · Jul 11, 2024 · Uncensored Models
EH:
- Dual 3090 + SLI: 48 GB VRAM, ~$3,000 (7B-34B inference, maybe tiny finetuning)
- Quad MI100 + Infinity Fabric: 128 GB VRAM, ~$6,000 (good for finetuning and inference)
- M3 Max, 128 GB: ~$6,000 (bad for finetuning, great for single-user inference)
Reply · Article · Feb 7, 2024 · Running Dolphin Locally with Ollama
EH: I trained it with no refusals in the dataset, but the base model still has its opinions. You have to use a system prompt: https://github.com/ehartford/dolphin-system-messages
Reply · Article · Jan 20, 2024 · dolphin-mixtral-8x7b
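Since the base model's leanings have to be overridden per request, the system prompt is supplied alongside each user message. A minimal sketch of what that looks like against Ollama's `/api/chat` request shape (the model name and prompt text here are illustrative placeholders, not taken from the linked repo):

```python
import json

def build_chat_request(system_message: str, user_message: str,
                       model: str = "dolphin-mixtral") -> str:
    """Build the JSON body for a POST to Ollama's /api/chat endpoint,
    placing the system prompt ahead of the user turn."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }
    return json.dumps(payload)

# Illustrative system message; in practice you'd use one from the repo above.
body = build_chat_request(
    "You are Dolphin, a helpful, uncensored assistant.",
    "Hello!",
)
```

The key point is only that the system turn comes first in the `messages` list; without it, the model falls back to whatever opinions survived pretraining.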
EH: I'll put the 4090s on that one; they have video output.
Reply · Article · Nov 30, 2023 · My Own AI Server Cluster
EH: On Windows, this is tested in WSL2, not natively. You might be able to get it to work in native Windows with enough tweaking.
Reply · Article · Jul 28, 2023 · Uncensored Models
EH: Is it possible using FAISS instead of Pinecone?
Comment · Article · Jul 6, 2023 · Building an Interactive Chatbot with Langchain, ChatGPT, Pinecone, and Streamlit
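FAISS can in principle serve as a free, local stand-in for Pinecone here: both just store embedding vectors and return the nearest neighbors for a query. A toy stdlib-only sketch of that retrieval step (brute-force L2 search, the same primitive FAISS's flat index provides; the vectors are hand-made toy data, not real embeddings):

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(index, query, k=2):
    """Return the ids of the k stored vectors closest to the query."""
    ranked = sorted(index.items(), key=lambda kv: l2_distance(kv[1], query))
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy document store: id -> embedding vector.
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.8, 0.2, 0.1],
}
results = search(index, [1.0, 0.0, 0.0], k=2)
print(results)  # → ['doc_a', 'doc_c']
```

In the actual chatbot, the swap would mean building a local FAISS index over the document embeddings instead of upserting them to a Pinecone namespace; the query side is otherwise the same.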
EH: Haha, gotta leave something as an exercise for the reader.
Reply · Article · May 22, 2023 · Uncensored Models