
I’m curious about how enterprises will manage model upgrades.

On one hand, as you mention, upgrades could break or degrade prompts in ways that are hard to fix. On the other, these models will need a constant stream of updates for bug and security fixes just like any other piece of software, plus there's the temptation of better performance.

The decisions around how and whether to upgrade LLMs will be much more complicated than upgrading Postgres versions.
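One way this tends to get handled in practice (just a sketch, not something the parent describes): pin a dated model snapshot instead of a floating alias, and only move the pin after the candidate version clears a small prompt regression suite. The snapshot names, prompts, and expected answers below are illustrative assumptions, not recommendations:

    # Sketch: pin a dated model snapshot and gate upgrades behind a prompt regression test.
    # Model names, prompts, and expected substrings here are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    PINNED_MODEL = "gpt-4-0613"             # dated snapshot, not the floating "gpt-4" alias
    CANDIDATE_MODEL = "gpt-4-1106-preview"  # version being evaluated for an upgrade

    REGRESSION_CASES = [
        # (prompt, substring the answer must contain for the case to pass)
        ("Extract the invoice total from: 'Total due: $1,234.56'", "1,234.56"),
        ("Reply with exactly the word ACK", "ACK"),
    ]

    def passes_regressions(model: str) -> bool:
        """Run every regression case against `model`; fail on the first miss."""
        for prompt, expected in REGRESSION_CASES:
            answer = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=0,  # keep outputs as stable as possible for comparison
            ).choices[0].message.content
            if expected not in answer:
                return False
        return True

    # Only move the pin if the candidate clears the same suite the current pin clears.
    if passes_regressions(CANDIDATE_MODEL):
        PINNED_MODEL = CANDIDATE_MODEL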



Paying users who need this kind of stability are more likely to get access to those models via Azure rather than from OpenAI directly, which comes with the appropriate enterprise support plans and guarantees.


Why would the models themselves need security fixes? The software running the models, sure, but you should be able to upgrade that without changing anything observable about the actual model.


LLMs (at least the ones with read/write memory) can exactly simulate the execution of a universal Turing machine [1]. AFAIK running such models therefore entails the same fundamental security risks as ordinary software.

[1] https://arxiv.org/pdf/2301.04589.pdf


Not necessarily. The insecurity from LLMs comes from the fact that they're a black box: what if it turns out a particular version can be easily tricked into giving out terrorism ideas? You could try to add safeguards on top, but by the time you discover it has been used for something like that, they've already been bypassed. You might just have to retrain it somehow to make it safe.
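For context, a "safeguard on top" in this sense usually just means a separate filter pass over the model's input or output before it reaches the user. A minimal sketch, where generate and moderate are assumed stand-ins for whatever black-box model and classifier are actually in use:

    # Sketch of a bolt-on safeguard: screen the model's output with a separate
    # moderation pass before returning it. `generate` and `moderate` are assumed
    # stand-ins, not any specific vendor's API.
    def safeguarded_reply(prompt: str, generate, moderate) -> str:
        reply = generate(prompt)        # the underlying, unmodifiable model
        verdict = moderate(reply)       # independent classifier over the output
        if verdict.get("flagged"):
            return "Sorry, I can't help with that."
        return reply

The weakness the comment points at is exactly this layering: the filter only catches what it can classify, so a bypass of the wrapper leaves the underlying model's behaviour untouched.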



