> with a notice to employees citing security concerns, according to an internal blog post.
> The decision to block ChatGPT, which is made by Microsoft-backed startup OpenAI, was made and communicated by the information-technology department, according to a person familiar with the matter. Management was caught by surprise by the move and later reversed the decision, the person said.
"Security concerns"... decision made by Microsoft internal IT, and then overruled by management.
Unsurprising, really; ChatGPT is not what I would consider a secure system in any way.
Non-story; quote from an earlier, non-paywalled article:
> "We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees," a spokesperson said. They added: "As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections."
Keep in mind Microsoft themselves develop and sell endpoint control systems, including lists of "bad" sites to optionally block. Microsoft also dogfoods their own stuff. It isn't difficult to imagine how this occurred.
"Customers would like to block it". That list frequently includes Microsoft services that a customer doesn't pay for, and therefore wants to eliminate.
A lot of financial companies have an allowlist approach to their firewalls, so if it's not on the list it needs to be blocked.
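The default-deny allowlist model described above can be sketched roughly like this (the domain names and function are placeholders for illustration, not any real policy):

```python
# Hypothetical sketch of a default-deny (allowlist) egress policy:
# anything not explicitly listed, or under a listed parent domain, is blocked.
ALLOWED_DOMAINS = {"bing.com", "office.com"}  # example entries only

def is_allowed(host: str) -> bool:
    """Allow a host only if it, or a parent domain, is on the allowlist."""
    labels = host.lower().rstrip(".").split(".")
    # build every suffix of the hostname: chat.bing.com -> bing.com -> com
    candidates = {".".join(labels[i:]) for i in range(len(labels))}
    return bool(candidates & ALLOWED_DOMAINS)

print(is_allowed("chat.bing.com"))    # True: parent domain is allowlisted
print(is_allowed("chat.openai.com"))  # False: not listed, blocked by default
```

Under a policy like this, chat.openai.com stays blocked until someone explicitly adds it, which is why "it's not on the list" is all the justification IT needs.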