Hacker News

Once you've tested to your heart's content, you'll deploy your model in production. So it looks like this is really just a dev use case, not a production use case.


In production, I'd be more concerned about the possibility of it going off on its own, auto-updating, and causing regressions. FLOSS LLMs are interesting to me because I can precisely control the entire stack.

If Ollama doesn't have a CLI flag that disables auto-updating and networking altogether, I'm not letting it anywhere near my production environments. Period.
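Even without such a flag, you can enforce this at the container level. A minimal sketch using standard Docker options (the image tag and volume name are placeholders, not recommendations from the thread):

```shell
# Create an internal bridge network: containers on it can reach each
# other, but have no route to the outside world, so nothing can
# phone home or self-update over the network.
docker network create --internal llm-net

# Pin an exact image version (placeholder tag) so the binary never
# changes underneath you; mount a volume for pre-downloaded models.
docker run -d --name ollama \
  --network llm-net \
  -v ollama-models:/root/.ollama \
  ollama/ollama:<pinned-version>

# Application containers join the same internal network and reach
# the server at http://ollama:11434; the host and internet cannot.
```

This is a deployment sketch, not an endorsement that Ollama needs it; any container runtime with network policies (Podman, Kubernetes NetworkPolicy) can achieve the same isolation.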


If you’re serious about production deployments, vLLM is the best open source product out there. (I’m not affiliated with it.)



