> He estimates that his increased productivity is equivalent to the company getting a third of an additional person working for free.
> He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."
As someone who spent several years attacking ML models for security research, so does this.
> Around 30% of the applications Harmonic Security has seen being used train using information entered by the user.
> That means the user's information becomes part of the AI tool and could be output to other users in the future.
> … Firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.
> He's not sure why the company has banned external AI. "I think it's a control thing," he says.
They involve sending company IP to a third party to do whatever they want with it.
Or, depending on your industry and job function, it might not be your company's IP but other people's data, subject to contractual or even statutory rules about what you can do with it.
> He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."
Yeah, it is a real MYSTERY why companies don't want their employees to input sensitive company data into some online form that they may or may not trust. We will likely never know what their real reasoning is. /s
My employer blocks all AI tools via firewall at the office. I get around this by just using my phone on data or the guest WiFi. I don’t use it often for my work (AIX/Linux admin), but it has been helpful in certain situations.
I feel like privacy/isolated environments remain an unexploited USP in this space. There are enterprise products that somewhat offer this, but they come with burdensome minimum head counts, dedicated contracts, and high costs.

In the SMB space there is definitely a vacuum, particularly when you look at the small print for products like Office 365's Copilot. We've tried to get Microsoft to make us contractual privacy assurances, and they won't.
This is the "performance-enhancing drugs" of mental work. Either you use it (and you make the Tour de France) or you don't (and you get no next contract).
It is easily a 6x performance increase in the amount of code I can write. I either have to use it, or get laid off.
I'd say it's about 2-3x at most, in the best-case scenarios. When I have to write some kind of wrapper or glue code on a greenfield project, I approach that factor, and I really love using AI code completion for those kinds of tasks.

However, writing that kind of code makes up maybe 5% of my work. Analysis, trial and error, discussions, etc. make up the other 95%, and AI only seldom helps with that. It can sometimes be useful for research and spec ingestion, but it quickly becomes dangerous there, because as soon as you enter any kind of niche area (and unfortunately my work has a lot of those), LLMs tend to hallucinate and present made-up "knowledge" with enviable certainty.
It’s great at boilerplate like filling in configs, but the results of iteratively coding with AI are first-draft quality.

Yesterday I used Copilot to kick out a really quick REST API for a Flask app. A lot of the code was boilerplate arg parsing and DB lookups, so it was fine.

When I changed one of my DB models, it broke all the hard-coded error handling in frustrating ways.

After writing suitable helpers and rewriting the module (with AI), it cleaned up into some good code.

It definitely sped up the exploration phase, but the final product required a top-to-bottom rewrite. Overall I’d say it was a little faster than doing this myself (but I don’t use Flask often).
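The fix described above, replacing per-route hard-coded checks with shared helpers, can be sketched framework-agnostically. Everything here is illustrative and assumed, not from the thread: the `get_or_404` name, the in-memory fake DB, and the tiny `handle` wrapper standing in for Flask's error middleware.

```python
# Sketch: centralize lookup-or-fail logic so a DB model change
# touches one helper instead of every route's hand-rolled checks.

class ApiError(Exception):
    """Typed error carrying an HTTP status, caught in one place."""
    def __init__(self, status, message):
        super().__init__(message)
        self.status = status
        self.message = message

# Hypothetical in-memory stand-in for the real database.
FAKE_DB = {"users": {1: {"id": 1, "name": "alice"}}}

def get_or_404(table, key):
    """Shared lookup helper: every route uses one error path."""
    row = FAKE_DB.get(table, {}).get(key)
    if row is None:
        raise ApiError(404, f"{table[:-1]} {key} not found")
    return row

def handle(route):
    """Tiny stand-in for a framework's error-handling middleware."""
    try:
        return 200, route()
    except ApiError as e:
        return e.status, {"error": e.message}

# Routes no longer hard-code their own None checks:
status, body = handle(lambda: get_or_404("users", 1))
status2, body2 = handle(lambda: get_or_404("users", 99))
```

The payoff matches the anecdote: when a model changes, only the helper's lookup and error path need updating, not every generated route.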