
I think they necessarily need to specialize, because certain information is only available in the context of the domain. Bigger context windows will hit a limit, and at that point you'll need a model actually trained and guided on the specifics of the domain for it to be useful.

At the moment, these models are useful for so many things only because public documentation exists for so many tools. But what about massive, closed-source, boutique enterprise systems? You can feed the model docs as context, but it would be better if it were trained on the docs, support tickets, and internal forums, then properly aligned.
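The "feed it docs as context" approach the comment mentions is usually some form of retrieval plus prompt assembly. Here is a minimal sketch; the document chunks, the crude keyword scoring, and names like `build_prompt` are all illustrative assumptions of mine, not any particular product's API:

```python
# Sketch of "feed it docs as context": rank internal doc chunks by
# relevance to the question, then prepend the best ones to the prompt.

def score(chunk: str, query: str) -> int:
    """Crude relevance score: how many query words appear in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in query.lower().split() if w in chunk_words)

def build_prompt(docs: list[str], query: str, top_k: int = 2) -> str:
    """Pick the top_k most relevant chunks and wrap them around the question."""
    ranked = sorted(docs, key=lambda c: score(c, query), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical internal documentation snippets:
DOCS = [
    "The billing service retries failed invoices three times before alerting.",
    "Deploys to the ERP sandbox require a change ticket and manager sign-off.",
    "The cafeteria menu rotates weekly.",
]

prompt = build_prompt(DOCS, "How many times does the billing service retry a failed invoice")
```

In practice the keyword scorer would be replaced by embedding similarity, but the shape is the same: the model never sees anything outside whatever chunks fit in the window, which is exactly the limit the comment is pointing at.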




This will create an excellent search engine but a terrible reasoning machine.

There are a lot of ways to search through docs and support tickets already. An LLM's ability to draw inferences from and summarize all of that information comes from being trained on a very large amount of data with billions of parameters. The data can be highly specialized; there just needs to be several thousand gigabytes of it for the model to do things that are rare and useful.
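A quick back-of-envelope check on what "several thousand gigabytes" buys you. The constants are my assumptions, not the commenter's: roughly 4 bytes of English text per token, and the commonly cited Chinchilla heuristic of about 20 training tokens per model parameter:

```python
# Back-of-envelope: how big a model can "several thousand gigs" of text
# compute-optimally train? Both constants below are rough assumptions.

corpus_bytes = 2_000 * 10**9      # ~2,000 GB of domain text
bytes_per_token = 4               # rough average for English UTF-8 text
tokens_per_param = 20             # Chinchilla-style heuristic

tokens = corpus_bytes // bytes_per_token      # ~500 billion tokens
params = tokens // tokens_per_param           # ~25 billion parameters
```

Under those assumptions a few thousand gigabytes of specialized text is enough to train a model in the tens of billions of parameters, which is consistent with the comment's claim that the data can be narrow as long as there is a lot of it.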




