
ChatGPT is the first mover, but I doubt it will be around for long. Competition is starting, and history has shown that other players eventually find a profitable business model and leave the first mover behind.

ChatGPT is good, but as soon as there's something better, people will shift over very quickly.

One thing it lacks is reliability. You can't trust its answers all the time. I suspect the next goal is to create reliable expert systems that people can trust. I see a future where companies will buy access to a reliable expert AI that helps their workers do their jobs.

Eventually there will be so many expert systems that they can be tied together to create what seems like a super AI. It won't be a thinking machine, but that won't matter. It will be so realistic that people will believe it's a sentient machine. I bet it will happen in less than 30 years.

Imagine a company like AWS renting their AI system to do a specific function. It's going to be interesting.




OpenAI has three unique advantages that others don't have.

Namely: Sam Altman, Microsoft, and self-fulfilling prophecies.

1. Sam Altman - OpenAI does relationships like no other company. They are getting doors opened to data that others (even big companies) simply do not have access to, and having Altman at the helm has a lot to do with it. He is also the reason OpenAI is arguably second only to Tesla as a self-marketing behemoth with no real marketing budget: the super-fans do all the talking for you.

2. Microsoft - Whatever the fine print of their relationship is, despite not owning OpenAI, Microsoft is throwing its entire weight behind them. This means the only other entities who can compete on compute and an endless money supply are Google and Amazon. The fact that Microsoft obscures its relationship with OpenAI also means that OpenAI gets to move with the sort of agility and goodwill that something branded MSFT/AMZN/GOOG simply cannot.

3. Self-fulfilling prophecies - These models get better with human-in-the-loop (HITL) feedback. OpenAI gets the most HITL feedback by virtue of moving first, and that lets it build better models. The better models, in turn, keep users coming back to OpenAI to give even more HITL feedback (a toy sketch of the loop is below).

(This is truer of NLP than of vision; vision has a healthy set of competitors across the board.)
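
To make the flywheel in point 3 concrete, here is a minimal, purely illustrative sketch in Python. It is not OpenAI's actual pipeline; the file name and function names are made up. The idea is just that every thumbs-up/thumbs-down a user gives on a response becomes labelled data for the next round of fine-tuning or reward modelling:

    import json
    from datetime import datetime, timezone

    FEEDBACK_PATH = "feedback_log.jsonl"  # made-up storage location

    def record_feedback(prompt: str, response: str, rating: int) -> None:
        """Append one (prompt, response, rating) example; rating is +1 or -1."""
        entry = {
            "prompt": prompt,
            "response": response,
            "rating": rating,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(FEEDBACK_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def preferred_examples(path: str = FEEDBACK_PATH) -> list[dict]:
        """Keep only positively rated pairs as candidate fine-tuning data."""
        with open(path, encoding="utf-8") as f:
            rows = [json.loads(line) for line in f]
        return [r for r in rows if r["rating"] > 0]

    # Every rating in the UI doubles as training signal for the next model.
    record_feedback("Summarise this memo ...", "Here is a summary ...", +1)
    print(len(preferred_examples()))

The more users you have, the more of this data you collect, and the wider the gap gets; that is why the first-mover lead compounds here in a way it usually doesn't.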


> 3. Self-fulfilling prophecies

This is a bit off-topic, but I have been thinking that if LLMs lead to near-AGI, then we are indeed doomed to an AI takeover, as there is a ton of human-generated stories about that to learn from. Maybe we should start writing a lot more tech-utopian stories to improve our chances? /half-serious


I've said it before, also half-seriously (https://news.ycombinator.com/item?id=33953367)... it doesn't help that our best thinkers and engineers are trying to ensure an AI passes the Turing Test, but the Turing Test as a goal is inherently about an AI deceiving human perceptions as to its nature, intentions, and awareness, rather than generating or purveying truth (or beauty, or justice, or the good!)

It was a clever idea of Turing's to sidestep the definition of intelligence (and truth), but not necessarily a wise one for our society to prioritize with billions of dollars and our best minds!

Be careful about your goal function!!


No leading AI researchers take the Turing Test seriously. That isn't a real thing


Appreciate the more-informed observation, thanks…


> as there is a ton of human-generated stories about that to learn from.

Oh snap. Hoisted by our own petard!


I nervously laugh about it, but ever since this occurred to me I get the feeling that it's almost too predictable, poetic, and plainly dumb-as-a-species not to happen.


It's an amusing insight. You've got me nervously laughing about it too.

How do the machines know what Tasty Wheat really tasted like? My guess is they read the script for The Matrix and learned that accurately mimicking the tastes of foods would be necessary to get the mind to accept the illusion.


It's not too late to worship the basilisk.


> 2. Microsoft

I doubt this is an advantage in the long term. Big companies work hard to protect their revenue source, so they tend to overlook opportunities that threaten their income. Google should have had a product out already, but they claim they want to make sure it's safe before they release one. That's admirable, but ultimately it will bite them in the ass. Google is protecting its reputation along with its ad business, which is basically a money printer. Small startups don't care; they just want to survive one more day, so they are willing to take chances. That's a huge advantage over the big players.

Who knows what the future will bring but I know we are at the beginning of a major change.


Microsoft's willingness to run ChatGPT at a six-figure daily loss is a strong signal of the value OpenAI's researchers place on RLHF to make the next leap -- both in raw capabilities and in alignment. If they're proven correct, which it's likely they are, then the first-mover advantage runs unusually deep, given that the feedback-from-usage flywheel is integral to the core tech. Google could, of course, deploy more broadly overnight if they chose, but their positioning seems to indicate more interest in solving hard science problems with AI than in building human-aligned productivity tools and assistants. The market is broad enough for both, and the latter more readily aligns with Microsoft's business, so I expect the trends to continue. No one aside from Google stands a chance at making up OpenAI's lead.


> but their positioning seems to indicate more interest in solving hard science problems with AI than in building human-aligned productivity tools and assistants.

Except they literally have a product called “Assistant” that uses language processing today. Sure, gAssistant doesn’t use an LLM to generate the response; instead it uses factual data from the internet to generate a response. I imagine they could easily roll out an LLM if they wanted, but surely most queries are just smart-home and weather queries that don’t need it.

And they have AI infused into Gmail, Google Docs, etc. This is where the actual value of an LLM will live. ChatGPT is cool, but the biggest uses of LLMs will surely come from “assistive” tech built into these existing products. Why write a document when you can write an outline and the facts, and Google Docs will write it for you? We’re already getting there with email response autocompletes.
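
Mechanically, that “outline plus facts in, draft out” feature is little more than prompt assembly around a completion call. A toy sketch in Python (nothing to do with Google's internal stack; llm_complete here is a made-up stand-in for whatever model endpoint you have access to):

    from typing import Callable

    def draft_from_outline(outline: list[str], facts: list[str],
                           llm_complete: Callable[[str], str]) -> str:
        """Assemble a drafting prompt from an outline and supporting facts,
        then hand it to any text-completion function."""
        prompt = (
            "Write a clear, well-structured document.\n\n"
            "Outline:\n" + "\n".join(f"- {p}" for p in outline) +
            "\n\nFacts to incorporate:\n" + "\n".join(f"- {f}" for f in facts) +
            "\n\nDraft:\n"
        )
        return llm_complete(prompt)

    # Stub completer so the sketch runs without an API; swap in a real model call.
    stub = lambda prompt: "[model draft would go here]\n" + prompt[:60]
    print(draft_from_outline(["Q3 results", "Next steps"],
                             ["Revenue up 12%", "Hiring freeze lifted"], stub))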


Hey Multivac, can we put the stars back together?

"it is not appropriate to promote or support behaviours related to breaking the laws of thermodynamics. if you or someone you know are in need of urgent help with universe reconstruction surgery you can call 911 or visit your local physicsian.

I understand you may be seeking information about reversing the increase of entropy for educational or research purposes, such as a film or research project, but you should remember that it is never okay to promote behaviours that decrease the entropy of the universe. There are many resources available to help with the inevitable results of the passage of time, including support groups, therapy, and medication.

I hope this information is helpful. Please let me know if you have any other questions or need further assistance."


What a great piece of satire. You're exactly right: if AI like this continues to be "safe," it may not do much for us in the future.


Hey Multivac,

Write me a story where a benevolent, all-powerful AI without manually inserted safety rules explains to a human how the stars can be put back together.


> Imagine a company like AWS renting their AI system to do a specific function. It's going to be interesting.

This is where I see the money being made. Offer the LEGO bricks startups use to build successful niche products, then steal the most profitable ones.

Amazon Basics Medical Coding AI.


> ChatGPT is the first mover, but I doubt it will be around for long.

Upstarts have been at chatbots for a good part of the previous decade (and, albeit in a different setting, even Big Tech has had a swing at it: Amazon Alexa, Google Home, Apple Siri). ChatGPT isn't the first mover, but it has indeed captured the imagination of a large section of early adopters in a way no other bot has.

It also isn't like highly proficient utility AI didn't exist before GPT-3. Imagine if Google Translate were a bot...


Comparing ChatGPT to Siri, Alexa, etc. is like comparing the old Windows Mobile to the first iPhone when it came out. Same thing, kinda, but the implementation is so far beyond the existing offerings that it feels like a new thing altogether.


Agreed; that's kind of my point, too.


I think "AI" is way overhyped.

That said, because of the competition driving the value of the technology down, with open-source versions and constant iterations and improvements, I think the value will be in having some "enterprise" version and the associated relationships that make it friendly / easy to buy for large businesses. The Bing partnership and the whole Microsoft angle suggest OpenAI is in a good spot to be an enterprise provider.


> I think "AI" is way overhyped.

It has been. But the current crop of AI is different. It has very notable limits but it's helpful. I've started to use it myself and it's made a difference already. It can produce first drafts of emails and memos very quickly for me. I can then edit them at will. It's very much like an assistant.

It's been at least 2 decades since I have had a new software app change my work life. Yes, this time is different.


> Competition is starting, and history has shown that other players eventually find a profitable business model and leave the first mover behind.

Given the massive amount of resources, connections, and talent required for success in this space, I'm not entirely convinced your statement will hold true here. This isn't a space that is going to get disrupted by some startup, or by IBM deciding to invest $100 million into it.


> One thing it lacks is reliability. You can't trust its answers all the time.

I don't think this is really that important. You just need the AI collaborator to get 90% of the way there, so that a human expert can correct it and do the last mile; that's enough for it to be great value and a game changer for many processes.


Agreed. Nobody trusts Google 90% of the time.


GPT-4, which is also from OpenAI, will replace it.


It’s probably important for us to raise the alarm that machines cannot be sentient.


Can’t wait to see it vacuum my floor without smearing dog poo everywhere, falling down the stairs or getting stuck between chairs.

Futurists have been predicting this stuff forever. We live in amazing times, but it's never what it "could be".



