
From the key findings:

> First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.

So the #1 problem every startup faces.

> Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.

This is interesting, and reinforces the trend towards hoarding data assets.

> Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.

Makes sense, tough to pick an architecture or model to stick with when better options release weekly.

> Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.

Sounds like lack of capital.

> Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.

So only a minority of cases? All in all this report seems to be saying "AI is promising but startups are still hard".




>> Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.

> This is interesting, and reinforces the trend towards hoarding data assets.

Companies completely misunderstand where the data is supposed to come from and attempt to hoard user data instead. The issue with LLMs is that the problems they are currently best suited for require data generated by the company itself: manuals, decision trees, guides, tutorials, expert knowledge in general. Companies aren't producing that material because it's expensive. And if that data did exist, maybe they wouldn't need an LLM in the first place.

Tons of LLM implementations are poor attempts to cover up issues with internal processes, lack of tooling and lack of documentation (without which the LLM can't function).

I'd say 80% of them have failed, so far.


>> Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.

> Sounds like lack of capital.

I actually think this is also an engineering problem, or at least a 'human capital' issue. The skillset for developing an AI model and the skillset for deploying a massive data-based product are very different, but people who are good at the former often get press-ganged into doing the latter. This is partly a capital problem (more money means maybe they can hire a second person to manage operations), but I think it's also a general lack of awareness that MLOps is really its own discipline. Especially when you're moving fast, tech debt in these systems builds up shockingly quickly. More money lets you hide the problems better, but IMO the real fix will only come with time, as people develop better and better best practices for this type of project.

edit: There's a section in the full report called 'Too Few Data Engineers' that makes this point better than I did. Everybody wants to make fancy AI models; nobody wants to be responsible for the 10K lines of uncommented Python and SQL you're using to build your test/train sets.
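To make that concrete: the first property those ad-hoc pipelines usually lose is reproducibility of the split itself. A minimal sketch of what a documented, seeded version looks like (function name and defaults are mine, not from the report):

```python
import random

def split_rows(rows, test_fraction=0.2, seed=42):
    """Deterministically split rows into train/test sets.

    Shuffles a copy with a fixed seed so the same inputs always
    produce the same split -- exactly the property uncommented
    one-off scripts tend to lose.
    """
    shuffled = list(rows)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = split_rows(range(100))
print(len(train), len(test))  # 80 20
```

Fifteen lines with a docstring instead of 10K without one; the hard part is getting anyone to own it.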


>Everybody wants to make fancy AI models, nobody wants to be responsible for the 10K lines of uncommented Python and SQL you're using to build your test/train sets

I'm unfortunately the guy that gets dumped on, and it's the most hated part of my job. I've tried talking to the people who authored these atrocities, but they refuse to acknowledge that it's bad code, have huge egos about it, and see even slam-dunk tooling like a linter as an impediment to their work.



