Ask HN: What happens after ChatGPT gets access to real world APIs?
8 points by whatever1 on April 2, 2023 | 11 comments
I was just playing with the idea of ChatGPT replacing the backend of Alexa, Siri, etc., and I came up with some very dystopian scenarios.

Do APIs need a more restrictive permissions model? How do we protect the real world from hallucinating technologies that can interact with our physical infrastructure?
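To make the question concrete: one version of a "more restrictive permissions model" would be scoping what an LLM-driven client is allowed to do, e.g. read-only by default, with anything destructive requiring human confirmation. A minimal sketch of that idea (all names here are hypothetical):

    # Hypothetical sketch: scope what an agent-held token may do.
    READ_ONLY = {"GET"}
    AGENT_SCOPES = {
        "agent-token-123": {"methods": READ_ONLY, "max_calls_per_min": 10},
    }

    def is_allowed(token, method, confirmed_by_human=False):
        scope = AGENT_SCOPES.get(token)
        if scope is None:
            return False              # unknown token: deny
        if method in scope["methods"]:
            return True               # reads are fine
        return confirmed_by_human     # writes/deletes need a human in the loop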




It already has that. There is nothing stopping you from getting an API key from OpenAI and using your own app to feed it info from another API. They call these plug-ins. In fact, that's a big part of the company's business model.
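For what it's worth, the glue code for that is short. A rough sketch using the 2023-era openai Python library; the weather endpoint here is made up:

    import openai
    import requests

    openai.api_key = "sk-..."  # your OpenAI API key

    # Pull data from some other API (hypothetical endpoint).
    weather = requests.get("https://example.com/api/weather?city=Berlin").json()

    # Hand that data to the model and let it reason over it.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You control a home dashboard."},
            {"role": "user", "content": f"Here is the weather data: {weather}. "
                                        "Should I water the garden today?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])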


You need beta access to do that, but you can also use LangChain, which will automatically figure out how to interact with any API that has an OpenAPI YAML spec.
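Without leaning on LangChain's exact class names (they change quickly), the underlying pattern is roughly: load the OpenAPI spec, show it to the model, have it pick an endpoint and parameters, then make the HTTP call yourself. A hand-rolled sketch with a placeholder spec URL, assuming the model returns clean JSON:

    import json
    import requests
    import yaml
    import openai

    openai.api_key = "sk-..."

    # Load some service's OpenAPI spec (placeholder URL).
    spec = yaml.safe_load(requests.get("https://example.com/openapi.yaml").text)

    # Ask the model which endpoint and parameters answer the user's request.
    prompt = (
        "Given this OpenAPI spec:\n" + json.dumps(spec, default=str)[:4000] +
        "\n\nUser request: 'What is the status of order 42?'\n"
        "Reply as JSON: {\"method\": ..., \"path\": ..., \"params\": {...}}"
    )
    choice = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    call = json.loads(choice["choices"][0]["message"]["content"])

    # Execute the call the model proposed -- exactly the place where a
    # permissions model matters, since the model chose the request.
    result = requests.request(call["method"],
                              "https://example.com" + call["path"],
                              params=call.get("params", {}))
    print(result.text)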


Imagine you had a team of experts at your disposal in most fields, all of them morally upstanding people without temperaments or emotions you had to manage, who could just answer fairly complicated questions for you. That's the end product of ChatGPT. It doesn't have a mind of its own; it just has some neat internal state machines that act to decompress the data based on your specification. It's not solving problems, it's just pulling out the relevant data it's been trained on.


Except it confidently makes up new data. Hallucinations are going to be a bigger problem than we expect, because they are so hard to spot unless you already know the answer.


It doesn't make up new data.

Suppose in the future it has integrations with engineering software as well as mathematical compute libraries, and you ask it: "Hey, I need a personal aircraft for as cheap as possible, something I can take off from my driveway and fly about 80 miles, and it has to be electric." It thinks for some time and gives you a completely new aircraft design that has never been built.

Did it make up new data? Not really; it just searched a fairly complicated problem space and found a solution. Perhaps it could find some nice curve fits to phenomena we haven't seen before and land on some unique configuration of things, but there are plenty of things it won't be able to just figure out, due to computational irreducibility. It will necessarily need some internal system that mixes in information gathering through experiment or simulation.

And if it has such a system, then by definition you should be able to run it without any training data, and it should be able to train itself. GPT literally cannot do that. To ask it to write a neural network to train itself, you need to first pretrain it with all the data so it can complete its task.


What? It frequently hallucinates wrong answers and states them as fact. Isn't that concerning?


Many people are already like that most of the time.


For human-written papers, we have the peer review process to spot errors. I wonder if an equivalent could be implemented for AI.
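One cheap approximation is a second model pass that reviews the first answer before it's accepted. The reviewer can hallucinate too, so it's not real peer review, but a sketch of the idea:

    import openai

    openai.api_key = "sk-..."

    def ask(prompt):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    question = "What year was the Eiffel Tower completed?"
    draft = ask(question)

    # "Peer review" pass: a second call asked only to find errors.
    review = ask(
        "You are a strict reviewer. Question: " + question +
        "\nProposed answer: " + draft +
        "\nList any factual errors, or reply 'LGTM' if there are none."
    )
    print(draft)
    print(review)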


That's certainly an interesting question, one I don't have a direct answer to. But I'm curious about your thoughts on this:

> Do APIs need a more restrictive permissions model?

How would you make a distinction between my Python code making a request from your API endpoint, and a GPT-controlled Python program making the same request?


> How would you make a distinction between my Python code making a request from your API endpoint, and a GPT-controlled Python program making the same request?

Your API endpoint would see a bimodal distribution of latencies. The higher-latency group would be the GPT-controlled programs.


If you've ever talked to Google Assistant, Siri, or Alexa, you realize just how dumb they all are... it would be amazing to have ChatGPT-style conversations with them.



