
Azure is hosting and operating the service themselves rather than OpenAI, with all the security requirements that come with that. I assume this comes with different data and access restrictions as well, and the ability to run in secured instances (with nothing sent to OpenAI the company).

Most companies already use the cloud for their data, processing, etc., and aren't running anything major locally, let alone ML models, so this is putting trust in the cloud they already use.




Ah, that's fair. But my impression is that the bulk of privacy/confidentiality concerns (e.g. law/health/..) would require "end-to-end" data safety. Not sure if I'm making sense. I guess Microsoft is somehow more trustworthy than OpenAI themselves...

EDIT: what you say about existing cloud customers being able to extend their trust to this new thing makes sense, thanks.


Right. If I were a European company worried about, say, industrial espionage, this wouldn't be nearly enough to reassure me.


Yes, this was my understanding.



