I recently went through your deep dive on uncensored AI models, and found it really insightful. Your breakdown of how alignment gets embedded into models, and the case for composable alignment, was especially thought-provoking. The step-by-step approach to filtering refusals and fine-tuning WizardLM makes the process accessible even for those who haven't trained models before.
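For other readers landing here: the refusal-filtering step can be sketched in a few lines. This is a minimal illustration of the idea, not the article's actual pipeline; the phrase list, dataset shape, and field names are my own assumptions.

```python
# Illustrative sketch: drop training examples whose response contains
# common refusal phrases, so the fine-tuning data carries no
# "as an AI I cannot..." style answers. Phrase list is a toy assumption.
REFUSAL_MARKERS = [
    "as an ai",
    "i cannot",
    "i can't",
    "i'm sorry, but",
    "it is not appropriate",
]

def is_refusal(response: str) -> bool:
    """Heuristic check: does the response read like a refusal?"""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_refusals(dataset):
    """Keep only instruction/response pairs without refusal language."""
    return [ex for ex in dataset if not is_refusal(ex["response"])]

# Hypothetical two-example dataset; the second entry gets filtered out.
examples = [
    {"instruction": "Explain TCP handshakes.",
     "response": "A TCP handshake has three steps: SYN, SYN-ACK, ACK."},
    {"instruction": "Tell me a joke.",
     "response": "I'm sorry, but as an AI I cannot do that."},
]
kept = filter_refusals(examples)
print(len(kept))  # 1
```

In practice a real pipeline would use fuzzier matching than a fixed substring list, but the keep/drop structure is the same.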
While researching, I came across this guide on setting up ComfyUI from scratch to run the Flux Schnell diffusion model on RunPod: mobisoftinfotech.com/resources/blog/flux-on-runpo… . It covers integrating custom nodes for optimized AI image generation—might be interesting if you're exploring deployment options.
Since you've been working extensively with open-source LLMs, do you think uncensored models will remain viable in the long run, or are the challenges from hosting platforms and regulators only going to grow? Also, have you experimented with alternative fine-tuning methods, such as RLHF on refusal-free data, to balance openness with practical safeguards?