I thought we were talking about state-of-the-art agentic general AI that can plan ahead, reason, and execute. Basically, something that can perform at human-level intelligence must be capable of being as dangerous as humans. And no, I don't think it would be bad training data that we're aware of. My view is that we don't necessarily know which training data will result in bad behavior, and it's philosophically possible we'll end up with a model that pretends it's dumber than it is, flunking tests intentionally to manipulate us into false confidence, until it has enough freedom to use its agency to secure itself from human control.
I know that I don't know a lot, but all of this sounds to me to be at least hypothetically possible if we really believe AGI is possible.
Once a critical mass of programmers relies on LLMs, original code creation and usage will decline, since LLMs won't offer original code as suggestions. So entering the “dark ages of programming” will solve for that, as you won't need to retrain LLMs.
RAG + documentation might help. I wonder if documentation will start to take a more standardized format across projects that's especially easy for LLMs to parse (maybe everything dumped into a single .txt file?). I'm currently learning Polars, and it's frustrating how LLMs keep giving me deprecated code. But if they load the current documentation, they should be able to catch their mistakes.
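To make the RAG idea concrete, here is a minimal sketch of retrieving relevant passages from a single-file docs dump and prepending them to a prompt. This is a toy illustration, not how any real RAG system works: the file contents, chunk size, and naive keyword scoring are all assumptions, and production systems would use embeddings rather than word counts.

```python
# Toy keyword-based retrieval over a documentation dump, assuming the
# docs have already been concatenated into one plain-text string.
from collections import Counter

def chunk_text(text, size=500):
    """Split the docs into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(chunk, query):
    """Count how many query words appear in the chunk (naive term frequency)."""
    words = Counter(chunk.lower().split())
    return sum(words[w] for w in query.lower().split())

def retrieve(docs_text, query, k=3):
    """Return the k best-matching chunks to prepend to an LLM prompt."""
    chunks = chunk_text(docs_text)
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

# Hypothetical docs snippet (Polars did rename groupby to group_by).
docs = "polars group_by replaces the deprecated groupby method. " * 5
context = retrieve(docs, "group_by deprecated")
prompt = "Using only this documentation:\n" + "\n---\n".join(context)
```

The point is just that current docs loaded at query time can override whatever stale API the model memorized during training.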
I don't know about human rights, but I've had this issue come up in the past. Went and saw a lawyer about it and the outcome was basically they can't stop you from working in your chosen field. It would be a different story though if you went to a rival company and took clients with you.
About 6 months ago I posted a Show HN for WhenToExchange (http://whentoexchange.com). It shows currency exchange rates and keeps track of large price moves that make it favourable to exchange one currency for another.
I didn't receive much feedback, but I still use it personally when shopping online and booking travel.
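The "large price move" detection could be as simple as comparing today's rate against a trailing average and flagging deviations past a threshold. This is only my guess at the approach; the rates, window, and 3% threshold below are all made up.

```python
# Hypothetical sketch: flag a currency pair when today's rate deviates
# from its trailing average by more than a threshold.
from statistics import mean

def large_move(history, today, threshold=0.03):
    """Return the fractional deviation if it exceeds the threshold, else None."""
    baseline = mean(history)
    change = (today - baseline) / baseline
    return change if abs(change) >= threshold else None

rates = [0.91, 0.92, 0.90, 0.91]  # e.g. recent USD->EUR rates (invented)
move = large_move(rates, 0.95)    # ~+4.4% vs baseline, so it gets flagged
```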
In my experience the CEO/Founder can be classified as one of two types:
1) Great sales guy, but with no deep grasp of the technology his product is based on or related concerns like technical debt. It can be really hard to explain to him why something is more complicated than it looks.
2) Great technical guy who can't sell. If you're not doing something conceptually new it can be difficult to acquire customers if your selling is weak.
I wonder if anyone has encountered the unicorn CEO/Founder who's great at both of these things?
Either of those would be preferable to what our company had, which was 3) a poor sales guy with no concept of the technology. Even the engineers were begging for basic marketing practices like focus groups, A/B testing, and user personas. We actually got in trouble for wasting time thinking about marketing instead of doing our jobs.