
Feels like the pace of progress dropped abruptly after OpenAI released ChatGPT and everyone closed off their research in hopes of $$$$.



I've seen multiple companies the past couple of years drop some really interesting projects to spend several months trying to make LLMs do things they weren't made for. Now, most are simply settling for chat agents running on dedicated capacity.

The real "moat" OpenAI dug was overselling its potential in order to convince so many to halt real AI research, to only end up with a chat bot.


Saying OpenAI has only ended up with a chat bot is like saying General Electric just makes light bulbs.


Poor phrasing on my part. OpenAI ended up with the mantle of the Amazon of AI; everybody else ended up with a chat bot. The rest of their services are standard NLP/ML behind an API, built up with all the money thrown at them and used to bolster their core offerings: a chat bot and an automated mood board for artists.


Does OpenAI have something more than a chat bot right now?


Really? They are a full platform for most popular applied AI, similar to AWS Bedrock and its other AI services, or Google Vertex. They cover vision, language translation, text generation and summarization, text to speech, speech to text, audio generation, image generation, function calls, vector stores for RAG, an AI agent framework, embeddings, and, recently with o1, reasoning and advanced math. This is on top of the general knowledge base.

You might be a wee bit dismissive of how much a developer can do with OpenAI (or its competitors).


I think the point was that, despite all this, the only thing you can reliably make is a fancy chat bot. A human has to be in the seat making the real decisions and simply referring to OpenAI.

I mean, there's TTS and some translation stuff in there, but it's hard to call that "AI" even though it uses neural networks and the like to solve the problem.


> A human has to be in the seat making the real decisions and simply referring to OpenAI.

The OpenAI APIs allow developers to create full programs that run without any human involvement.
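
A minimal sketch of what I mean, assuming the official openai Python SDK with an API key in the environment (the model name, prompt, and input file are just placeholders):

    # A script that calls the API end to end with no human in the loop.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def summarize(report_text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Summarize the report in three bullet points."},
                {"role": "user", "content": report_text},
            ],
        )
        return resp.choices[0].message.content

    # Run it from cron, a queue worker, whatever -- no chat UI anywhere.
    print(summarize(open("quarterly_report.txt").read()))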


Since when do you need a human in the mix? For example, there are financial risk analysis applications that use prompt templates and function calling, and have no chat bot interface for the end user. This is one of many examples. I think the leap people miss is that you have to talk to the AI in some way; natural language is how LLMs fundamentally work, so you have to express the problem space in that mode to get it to solve problems for you as a developer. For some coders, I guess that is uncomfortable.
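
A rough sketch of that pattern, assuming the openai Python SDK; the tool name, schema, and model are made up for illustration, but the function-calling shape is the standard chat completions API:

    # Prompt template + function calling, no end-user chat interface.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "flag_exposure",  # hypothetical downstream action
            "description": "Record a risk flag for a counterparty",
            "parameters": {
                "type": "object",
                "properties": {
                    "counterparty": {"type": "string"},
                    "risk_level": {"type": "string", "enum": ["low", "medium", "high"]},
                },
                "required": ["counterparty", "risk_level"],
            },
        },
    }]

    def assess(position_report: str) -> None:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{
                "role": "user",
                "content": f"Flag risky exposures in this position report:\n{position_report}",
            }],
            tools=tools,
        )
        for call in resp.choices[0].message.tool_calls or []:
            args = json.loads(call.function.arguments)
            print("would flag:", args)  # a real system feeds this into a downstream workflow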


They have a digital painter bot too!


Um... yes? What are you even saying? That's one use of the API. It's the one the public is most familiar with, but it's just one of many, many uses.


Do they need more than a chat bot?

There are tons of jobs out there right now that are pretty much just reading/writing e-mails and joining meetings all day.

Are those workers just chat bots?


Are you sure making those jobs more efficient is the right goal? David Graeber might have disagreed, or at least argued that the most efficient move is to remove those jobs altogether.

https://en.wikipedia.org/wiki/Bullshit_Jobs

I'm not sure "doing bullshit busywork more efficiently" leads to better ends; it might just lead to more bullshit busywork.


A customer service agent isn't a bullshit job. They form a user interface between a complex system and a user who isn't an expert in the domain. The customer service agent understands the business domain and how to apply that expertise to what the customer wants and needs. Consider the complexity of what a travel agent or airline agent does: they need to understand the business domain of flight availability and pricing, plus technical details of the underlying systems, and be able to communicate comfortably in both directions with a customer who knows little or none of the above. This role serves a useful purpose and doesn't really qualify as a bullshit job. But in principle, all of this could be done by a well-crafted system built on OpenAI's APIs (which others in these threads have dismissed as "just chatbots").

Interfacing with people and understanding business domain knowledge is in fact something we can do with LLMs. There are countless business domains and job areas that fit the shape I described above, enough to keep engineers busy for a real long time. There are other problem shapes we can attack with LLMs as well, such as deep analysis in areas where they can recommend process improvements (six sigma kinds of things). Process improvement, some might say, gets closer to the kinds of things Graeber might call bullshit jobs, though...


In theory, I agree that LLMs could perform those jobs.

I may just be less of a techno-optimist. If history is any guide, automating front-line human interfaces will lead to worse customer service in the name of lowering labor costs to increase profits. That seems to make things worse for everyone except shareholders. In those cases, we're not making the customer's experience more efficient; we're making the generation of profit more efficient at the cost of customer experience.


Well, their chat bot helped me write a tabbed RDS manager with saved credentials and hosts in .NET last night, in about 4 hours. I've never touched .NET in my life. It's probably going to save me 30 minutes per day. Pretty good for a chat bot.


30 minutes saved on an 8-hour day (30 of 480 minutes) is a 6.25% increase in productivity. All good, but not what was promised by the hype.


That's one thing. I've made dozens of others. So call it 150% if that's how we're doing it.



