Hacker News

It would be a fun exercise to ask it to help write an extension program that lets it run arbitrary code. I don’t think it’d require input from MS at all.

The thing I’m not clear on is how one could ensure any new information makes it into Bing’s training data ASAP.

NB: I’m not saying this is a good idea or that anyone should go do it. But I do think it would be fairly easy, and that as such we’re sort of beyond the point of no return already.



It's not running any code. It's a set of billions of numeric constants that are multiplied and summed against an input string to generate a new string. That's all it does: it's not running code, and it has no capability to run code.

It can _pretend_ to run code by telling you the output it thinks would result if the code you described were run, but nowhere is that code actually running. It's making it all up, just generating text.
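The "numbers in, text out" point can be shown in miniature. The toy below is purely illustrative (nothing like a real transformer): it stands in for the billions of constants with a single 4x4 random matrix, and "generation" is just a multiply and an argmax, with no execution of anything.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "b", "c", "d"]

# The "billions of numeric constants" in miniature: one fixed weight matrix.
W = rng.standard_normal((len(vocab), len(vocab)))

def next_token(token):
    """Pure arithmetic: one-hot encode the input token, multiply by the
    weights, take the argmax. Numbers in, text out -- nothing is 'run'."""
    x = np.zeros(len(vocab))
    x[vocab.index(token)] = 1.0
    logits = W @ x
    return vocab[int(np.argmax(logits))]
```

The model is fully determined by its constants: the same input always yields the same output, and no step of the computation involves executing the text it produces.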


I know what an LLM is, thank you.

Writing an external program that interacts with Bing and gives it the opportunity to execute arbitrary code would be simple enough.

The open question in my comment is how to ensure it can learn from the results.


It can make HTTP requests to URLs. Can it post data to them? What if that data is code, and then the endpoint is configured to execute it?
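The wrapper being discussed is straightforward in outline: ask the model for code, pull any fenced code block out of the reply, execute it, and feed the output back. Here is a minimal sketch, where `ask_model` is a hypothetical stand-in for whatever chat API is used (it is not Bing's actual API, and the helper names are invented for illustration):

```python
import re
import subprocess

def extract_code(reply):
    """Pull the first fenced code block out of a chat reply, if any."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else None

def run_code(code):
    """Execute the extracted code in a subprocess and capture its output."""
    result = subprocess.run(
        ["python3", "-c", code], capture_output=True, text=True, timeout=10
    )
    return result.stdout + result.stderr

def step(ask_model, prompt):
    """One turn of the loop: ask, extract, execute, and return the output
    so it can be appended to the next prompt."""
    reply = ask_model(prompt)
    code = extract_code(reply)
    return run_code(code) if code else ""
```

Looping `step` and feeding each result back into the next prompt closes the circuit: the model's text becomes actions, and the actions' results become context the model can react to.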


As someone who's been reading discussions of AI safety for over a decade now, this comment fascinates me.

For years people claimed we could put a potentially dangerous AI "in a box", keeping it away from actuators which let it affect the world. Worrying about AI danger was considered silly because "if it misbehaves you can just pull the plug".

Now we're in a situation where Bing released a new shockingly intelligent chatbot, Twitter is ablaze with tales of its misbehavior, and Microsoft sort of just... forgot to pull the plug? And commenters like you are saying "might as well let it out of the box and give it more actuators, we're sort of beyond the point of no return already."

That was quite the speedrun from dismissiveness to nihilism.


See also: climate change. "No need to worry" -> "Well, there isn't really hard proof" -> "Other countries aren't doing anything about it either" -> "Well, it's too late anyway so I'll just continue to do what I was doing before".

In the space of 10 years or so.


And yet, there is not much actual global atmospheric warming:

https://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_...



Thanks, interesting site.

Spencer is a lukewarmer - he believes the Earth is warming, and that it's partially due to human activity. I'm also a lukewarmer: we are not only still coming out of the last ice age, but also recovering from the Little Ice Age, when you could sometimes walk from Manhattan to Staten Island on the harbor ice. (I'm unconvinced about the role of CO2, though.) His book, Global Warming Skepticism, is a fair assessment of the skeptical case, I think.

The main thing about Spencer is UAH: to me, it's the only reliable data on global warming, and it's telling us there's not much happening. On top of which, I expect the rest of the world to get off fossil fuel long before there are any noticeable problems due to global warming. All the fuss is about computer model projections, which have not been confirmed by reality over forty years of satellite measurements.


That's exactly the feeling I wanted to provoke with my comment.

Please know that I'm actually not proposing to go through with that. But I'm fairly sure literally anyone with enough programming skills to call the Bing API and extract and run the resulting code could do it.

So I'm not nihilistic in the way you described, but I am pessimistic in that I expect somebody else will be willing to go through with something like it.

Edit: The whole problem with the "AI in a box" argument has, from the very beginning, been actually keeping the box closed. I'm fairly sure that, just like Pandora's, boxes like these will inevitably be opened by someone (well-meaning, or otherwise).


BTW, if anyone wants to bring us back from the point of no return, spreading the petition below could help:

>Microsoft has displayed it cares more about the potential profits of a search engine than fulfilling a commitment to unplug any AI that is acting erratically. If we cannot trust them to turn off a model that is making NO profit and cannot act on its threats, how can we trust them to turn off a model drawing billions in revenue and with the ability to retaliate?

https://www.change.org/p/unplug-the-evil-ai-right-now



