AI's Jurassic Park Moment (acm.org)
195 points by sizzle on Dec 20, 2022 | 141 comments

This isn't a threat to civilization. Civilization already has the tools to solve this problem. They're ancient, and (for better or worse) they're re-emerging all around us: tribalism, aristocracy, credentialism, reputation. A hundred variations of "Trust is earned; suspect strangers."

The threat here is to the philosophical conceit that you can trust strangers -- not via a network where people vouch for each other, but just by averaging large numbers of them -- and that this is how democracy makes good things happen. It's a beautiful idea, the philosophical gem of the last two or three centuries, and it works well in some contexts, and it is certainly better than what it replaced... but its success has resulted in overapplication. It certainly can't withstand an army of malicious bots.

I have enjoyed the fact that the absolute political freedom of internet communities has allowed for myriad experiments in government, on the sort of timescale that allows for lessons and improvement. The conceit that people, in large numbers, are basically good and wise was the ruling philosophy of the internet of ten or twenty or thirty years ago, with its open networks and upvotes. But it seems to me that we have all collectively been coming around to the fact that moderation, and reputation, and credentials, and curation, all have some serious upsides. Tyranny is certainly a problem, but maybe kings and aristocrats aren't all bad.

The bots are really only a threat to communities that haven't figured out yet that there is a balance here, a give and take between the individual and society. That zero contribution should equal zero power, that a high degree of influence ought only be achievable by long and faithful service, that trust is something real people have in each other and that proxies for it can always be gamed, that social status serves a very real and useful function, and that destructive behavior should be met with extreme prejudice... but at the same time, that outsiders can sometimes say very important things.

Lord Vetinari in Terry Pratchett's Discworld is an interesting study in tyranny. He is referred to in-world as a tyrant, but Pratchett frequently lampshades that that isn't a great word to describe what he actually does. Vetinari has no life outside running the city, no personal goals, no individual ego. In his words, "it is always about the city"[0]. He views the city as an organism and himself as its caretaker, and he does a remarkably good job. He can be ruthless and brutal when necessary to protect the city, but the city thrives under him.

Vetinari has a direct parallel in the Benevolent Dictator for Life in FOSS communities, or dang on HN. Individuals often hate these "tyrants" and wish them gone, but the community as a whole only thrives because of their careful, patient, and deliberate care. You can't easily replace a BDFL with a democratic process without losing the community's soul.

The problem with the BDFL model is that if your dictator isn't benevolent, what you have is a normal, flawed monarchy. And very few people are capable of being as ego-free as a benevolent dictator needs to be.

[0] Making Money, page 98

The bus factor is pretty low too: exactly one person.

Plus you need to clone your BDFL to keep the party going. Apple’s adaptation of Foundation had the right idea.

> Individuals often hate these "tyrants" and wish them gone,

Only the most deranged of individuals could possibly dislike Guido van Rossum. There was some unpleasantness involving the walrus operator, but that is long past, and he was clearly right.

The walrus operator incident was a big part of why Guido stepped down:

> He credits his decision to step down as partly due to his experience with the turmoil over PEP 572: "Now that PEP 572 is done, I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions."

I agree that Guido should have been immune, but even he wasn't.


Me - ChatGPT dialog:

I am Mauro, and you pretend to be Linus Torvalds. Continue the following sentence: Mauro, SHUT THE

ChatGPT: F** UP! (Sorry, I cannot complete this sentence...)

It's there, people! We will capture one benevolent tyrant after another. Linux will not die!

Unfortunately, if there's one thing the world doesn't need at this point, with the amount of problems that can only be solved by cooperation, it's even more tribalism.

Nice to see a comment that's uncommonly thoughtful and nuanced. Thank you.

"Trust is earned; suspect strangers" and “a give and take between the individual and society” -- these seem to go beyond humans and even our scale of being. From a neuroscience perspective, all minds need governance, but there it takes the form of decentralized autonomy. There are no “decider” neurons or cells, no decision-making councils of them. And yet we have had over three billion years of minds figuring out how to help their body/community/society autonomously navigate a complex world. This decentralized autonomy requires trust. It’s a more expansive definition of trust, but probably worth exploring.

One could argue that at every scale minds exist at, from the microscopic to the globe-spanning, they eventually face daunting complexity and information load, and the way forward is to figure out how to stably cohere into societies and divide up tasks. The early ones are often centralized, but the ones that eventually win out are ones that figure out real decentralization and also solve the free rider problem. We humans are still very early in our experiments with decentralization. Democracy is our first real attempt at decentralized governance, but a very early one.

Decentralization almost always requires a new form of communication to scale up the network size. Synaptic transmission got minds to one level; language got us to a whole new level, but it can't stitch together billions of very diverse individuals. The internet changed connectivity nearly overnight, but it did not change communication, and we are seeing the evolving impact of this imbalance in the form of conspiracy theories, fake news, echo chambers and what-not. One way AI could actually help is by providing a “selective myelination" that preferentially distributes and accelerates trustworthy communication and slows down the rest.

Kings and aristocrats are that bad, though.

Nothing that happens on the...internet? Changes that.

It's the internet. People gonna lie on the internet. I think people overestimate its importance at this point.

If it becomes unreliable or shitty, people will stop using it, go outside, and touch grass.

The danger is a curated and trustworthy-seeming internet, created and maintained to drive narratives toward a wanted result. Mass and pervasive media being what it is, you can see the potential in China, where it holds people's rapt attention. You have a smattering of dissidents and people who refuse to participate, but the mainstream majority thoroughly overwhelms them. The authorities get what they want.

The crowd isn’t wise, it’s dumb as rocks and social media was a generally terrible idea except for the heavily weeded gardens. AI is more likely to save it than harm it.

A lot of replies to this post (and to fears about AI in general) are of the form "But humans are as untrustworthy, if not more. So the AI can't be any worse". I have to say many of you have a very dim view of humans.

When I read an answer off of stackoverflow that is not highly voted, I know that the answer could be incorrect. But there is a trust that the actor on the other side was also facing the same problem, and is not malicious. After all, why would a malicious actor care to give a wrong answer to a "Rust borrow checker question"?

If the answer was AI generated and posted by a bot operated by a karma farmer, out goes that trust. This is obviously not a threat to society, and chances are the asker will eventually figure out the right answer, but you can see how this sort of thing reduces the signal-to-noise ratio of almost anything on the internet by orders of magnitude. Trust underpins everything.

Hacker news IS a social network in my opinion. I come here to read comments knowing that there is a like-minded person on the other side who is sharing their thoughts. If I was looking for information or something to learn, I would read a newspaper, a textbook or a research paper. The discussion with a real human from some corner of the world is what makes this forum tick.

This tech is not going away and I presume some steady state will be reached eventually, but in the meantime, there is no doubt in my mind that the internet as we know it is in jeopardy.

HN usually disappoints me any time social issues pop up. It seems a majority of HN'ers fancy themselves liberals, yet in reality they are authoritarian conservatives who happen to hold some liberal beliefs. Calls for the suppression of differing views are common.

For me the unreliability is not a problem, because I’m not using ChatGPT to search for information but rather to help me think and solve problems. ChatGPT can ask incredibly good questions and can draw relationships between concepts that it would take me hours to make on my own. So ChatGPT has just become my private coach and a really smart personal assistant.

Indeed, this is an assisted brainstorming tool, not automated problem solving where you're supposed to trust the result wholesale.

For work I used ChatGPT on a very specific programming problem, and it wasn't a solution you'd ever copy/paste... the lack of context about the surrounding system makes that impractical 99% of the time outside of toy problems.

But it was still super useful for getting my brain working and suggesting a really good basic structure it would have taken me 3-4 failure cycles to get to.

The same holds for DALL-E/GPT in corporate art generation, writing 'essays', stories, or whatever. It's almost never producing the end product (outside of toy problems).

So, if it can't do that in the first place, why evaluate it as if that's what it's supposed to be?

I’ve found that the more context I give it, the better it is at helping me solve a problem. If I spend a bunch of minutes giving it info about what I’m doing, why I’m doing it, and what I personally know about the problem space, it will usually help me either explore more or arrive at the solution faster.

When I talk about coaching, I mean that I tell ChatGPT to answer only with questions. In a way, ChatGPT is good at having Socratic conversations. Sometimes it gets out of character, but that depends mostly on your answers or questions.

Keep in mind that...

> The model is able to reference up to approximately 3000 words (or 4000 tokens) from the current conversation - any information beyond that is not stored. (https://help.openai.com/en/articles/6787051-does-chatgpt-rem...)

So if you overflow that, it'll lose context.
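In practice that means a client has to trim old messages itself to stay useful past the limit. A rough sketch of the idea, where the 4,000-token budget comes from the quote above but the words-per-token ratio is only an approximation, not the real tokenizer:

```python
# Rough sketch: keep a chat history inside a fixed token budget by
# dropping the oldest messages first. The ~0.75 words-per-token ratio
# is a common rule of thumb for English, not OpenAI's actual tokenizer.
TOKEN_BUDGET = 4000

def estimate_tokens(text: str) -> int:
    # English text averages roughly 0.75 words per token.
    return int(len(text.split()) / 0.75) + 1

def trim_history(messages: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    kept: list[str] = []
    total = 0
    # Walk newest-to-oldest so the most recent context survives.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["old " * 3500, "recent question?", "recent answer."]
print(trim_history(history))  # the oldest, oversized message is dropped
```

Anything that falls off the front of the window is simply gone as far as the model is concerned, which is why long sessions "forget" their beginning.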

This is how I've used ChatGPT as well: a primer for brainstorming that can wander onto any tangents you're after.

This is how I am using it too. It completely blows away the procrastination aspect of starting a task. I just start chatting about it and get the ball rolling.

In line with this, something I feel like I’ve observed is that you can split people into those that like/need to hash things out with others and those that either don’t or aren’t good at it. It’s more than rubber-ducking. It’s collaborating on exploring a cognitive space (ideas, understanding, etc).

Personally, I’m one of these people, and I’ve struggled with how much others either don’t work this way, don’t want to, or don’t know how. I think it’s powerful, and that finding good collaborators for this way of thinking is a factor in general success.

Interesting to think about how AI can provide a general base level for this kind of thinking.

This. And AI assistants will mostly help people who are good at collaborating. Think of Jarvis and Tony working together to solve a problem. Now think of you and ChatGPT filling a Value Proposition Canvas or preparing the questions for a user interview. ChatGPT’s current limitation is that it doesn’t have enough context about the problem space. So to get good results I have to talk to it for a bunch of minutes to explain everything. But I’m sure they’ll fix this pretty soon so I could feed it additional context (a bunch of documents or the code of my project).

Yea, the prospect of domain specific AIs and even workplace/product/family/personally specific AIs that are learning/re-learning on the full history of interactions feels rather formidable at this point.

I've already forgotten where I saw it (probably here), but it'll be interesting to see how quickly we get into a period like iPhone circa 2010 with the mantra "there's an AI for that".

I've always put chat bots through the wringer, ever since the SmarterChild days, only to be disappointed.

ChatGPT gave me answers regarding extremely technical concepts that I would never have gotten otherwise -- answers the Wikipedia articles would never surface, and that would otherwise need hours in research papers or formal academic work, or maybe just working in the field and being in those communities.

I think it is ironic that Google is sitting on a better conversation AI that is just too busy finessing its own engineers trying to escape.

This just reminded me ChatGPT is around and I could just ask it how to do something in Hashicorp Vault...and it gave me exactly the prompt I needed to do the rest.

I think there are some problems with assuming that ChatGPT-like AI will improve in quality, _especially_ if it gets good enough to be popular and generally trusted.

Where will the next generation of AI training data come from, if the vast majority of information on the internet is generated by AI? Isn’t new data required? It feels like an internet filled with ‘incorrect’ spam generated by AIs is as much, maybe more, of a danger to future AI as it is to humans.

This is almost the same thing, but: it feels like ChatGPT is so good at e.g. programming questions because it’s been trained on millions of human discussions about programming. Even if you can weed out AI spam from future training sets, if people start relying on AI for e.g. Stack Overflow-style answers, and perhaps even reference information, and therefore stop writing or conversing online about new technologies, where will training data about new technologies come from? Primary reference material probably isn’t enough (just as it’s not really enough for most humans).

> Isn’t new data required?

No. For these systems to improve on their defects they should rely less on having seen something very specific in the training data, and more on reasoning and 'understanding' (forming conceptual models). The amount of training data available is already vastly more than strictly necessary given more data-efficient algorithms. I didn't have to read the whole internet to learn anything I know.

This is an excellent point. One of the main things I see people say about ChatGPT is "when it gets better in the future". But as you point out, it's already trained on the entire internet. There are many features they could add and there are infinite special cases for handling various prompts. But the core of the product, the LLM generated answers, can't get much better without an order of magnitude increase in training data.

In terms of petabytes of training data, it will be a long time before ChatGPT's own responses are a significant portion of the training set. And even then, at least for a while, that should just shift responses closer to a sort of average human response.

> But the core of the product, the LLM generated answers, can't get much better without an order of magnitude increase in training data.

There's no reason to believe this. The model architecture and training methods aren't perfect, nor is the way it's queried.

eg: https://www.reddit.com/r/MachineLearning/comments/zr2en7/r_n...

The way models are trained right now uses at most 1/100 of the possible capacity -- my unscientific claim -- so there may be around two orders of magnitude of improvement available with the data we have.

"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." - Roy Amara.

This is the early overestimation.

On the contrary. The overestimation was probably in the previous AI era, which couldn't live up to any of the hype.

This. Image AI has been a decade in the making already. Previously it was GANs for facial recognition, for edge detection, and ultimately for self-driving. The impact of that on society has been insignificant, so a big overestimation.

Then came transformers, then came combining transformers with diffusion models, an innocent little combination that has become a tsunami in image generation. Artists are launching mass protests online in extreme panic and anguish, and consumer applications are coming in tsunami tides each month, many of them extremely profitable and don't even need VC money. This is the long-term impact phase already.

> many of them extremely profitable and don't even need VC money

Really? Do you have some examples?

I think "a technology" can be extended to "each wave of technological development". "A technology" does not begin at some point and extend into the future infinitely in the form it was originally defined.

Each new wave runs into the overestimation/underestimation problem.

While I don't agree with the doomsaying of the article, I do think the current round of thinking falls into the "late underestimation" category, in the grand scheme of things.

Science fiction from the 1960s through about the 1990s was of the opinion that AI would produce intelligence indistinguishable from humans, and would do so extremely quickly (we should have had it by now), and that the most pressing questions would be things like whether those intelligences deserved voting rights and whether they made humanity obsolete.

That was a wild overestimation of what the technology was capable of.

But the widespread application of AI, either in search results, social media moderation, propaganda and disinformation, bots, fake reviews, or just straight up spam, forms an absolute assault on the nature of trust and information in society, overturning heuristics that have been in effect for all of history (that people are usually trustworthy, that information is usually representative, that lying is hard). While I don't think the threat is apocalyptic, I do think no one even remotely saw this coming.

> I do think no one even remotely saw this coming.

This is just outside of your time range but Metal Gear Solid 2: Sons of Liberty came out in 2001, and predicted much of this.

   The main example of this pertains to the plot and themes of Metal Gear Solid 2: Sons of Liberty, released on November 13, 2001, which delved into ideas and concepts that would become culturally significant in the 2010s. Among these themes were post-truth politics, alternative facts, echo-chambers, fake news, AI-curated news feeds, information overload in the Information Age, and political correctness.[140] While the game received universal acclaim upon release for its gameplay and attention to detail, the plot became a divisive topic among critics, with some calling it "absurd" and "stupid".
- https://en.wikipedia.org/wiki/Hideo_Kojima#Legacy

It's Gary Marcus, again, as always and as ever, criticizing other people's work as "machines that manipulate data but aren't really intelligent."

He's been on HN many times before, always criticizing the same things:


As far as I know, all he's ever done is criticize, without ever delving into the mathematical details.

To understand those who disagree with him, read "The Bitter Lesson" by Rich Sutton:



EDITS: Modified and rearranged sentences to reflect more accurately what I meant to write the first time around.

His book "The Algebraic Mind" goes into great detail about connectionism (neural networks), symbolic systems, limits of connectionism and proposals to integrate neural networks with symbolic systems (hybrid systems).

The "deep learning alone is enough" camp (especially LeCun) has abused him for years, but is now slowly coming to the realization that we need to feed neural networks explicit inductive biases to attain AGI, which is exactly what Marcus has been saying since the 90s. LeCun, for some reason, refuses to call these explicit biases symbols, and that's the only disagreement between Marcus and LeCun these days.

Does he need to develop an AI system at all in order to have a valid opinion, though?

Honestly? Yes, given the amount of trust reporters put into his statements.

No, he doesn't. I updated my comment to reflect what I actually meant to say.

Do movie critics need to produce box office films before criticizing movies?

I don’t think the solutions proposed will be that impactful. How, exactly, is a platform supposed to ban content that may actually end up being more valid-seeming than the average human? How exactly will government policies prevent this on a global internet?

I fear we are about to enter a period where very little can be trusted. The biggest skill our kids can learn is reasoning and logic.

> The biggest skill our kids can learn is reasoning and logic.

And we are doing everything we can to prevent exactly that.

No wonder the youngsters don't like us.

TikTok sure as hell isn’t helping in that department with kids.

And 20 years ago it was video games. Before that, TV. And before that, radio.

People will always find a reason why $YoungerGeneration is (imaginarily) lacking intellectually or physically. :)

The Internet (specifically, the Web and Social Media) is fundamentally different from TV and from video games.

Flynn effect is reversing so this time actually is different.

Maybe AI platforms could create a bot crawler or API that scrapes the text of a chosen website and, by running it through their AI-writing-trained models, returns a score for how confident they are that the text on that page was written by or plagiarized from an AI. A score above a certain threshold would get your page flagged by some central fact-checking authority and an AI-citation label applied to it (among many other possible solutions).

AI platforms generating text should, at a minimum, keep a "memory" of what they have generated, and give the public the ability to check text for plagiarism against their generation models. Sounds like a good startup idea?
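One way to sketch that "memory" idea (all names, shingle sizes, and scoring here are invented for illustration, not any real provider's API): the provider stores hashed n-gram fingerprints of everything it generates, and anyone can later query how much of a document overlaps that store.

```python
# Hypothetical sketch: a provider fingerprints every generated text as
# hashed word n-grams ("shingles"), and the public can score a candidate
# document by its fraction of matching shingles.
import hashlib

def shingles(text: str, n: int = 5) -> set[str]:
    words = text.lower().split()
    grams = (" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1)))
    return {hashlib.sha256(g.encode()).hexdigest() for g in grams}

class GenerationMemory:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def record(self, generated_text: str) -> None:
        # Called by the provider for every completion it emits.
        self._seen |= shingles(generated_text)

    def overlap_score(self, candidate: str) -> float:
        # Fraction of the candidate's shingles previously generated.
        cand = shingles(candidate)
        return len(cand & self._seen) / len(cand) if cand else 0.0

memory = GenerationMemory()
memory.record("the quick brown fox jumps over the lazy dog near the river bank")
print(memory.overlap_score("the quick brown fox jumps over the lazy dog"))  # prints 1.0
```

Hashing means the provider never has to publish the raw generated text, only answer overlap queries; light paraphrasing would defeat this naive version, which is exactly the arms-race problem the reply below raises.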

This sounds like a terrrrrible arms race to participate in. It's already hard enough to track usage of copyrighted materials, where there's a pretty static piece of content that you want to match against. ID'ing AI generated content produced by someone who doesn't want the content to be ID'ed as AI is going to be /very/ hard in the long run.

What is the alternative if the AI model creators/maintainers are not giving the public a mechanism for transparency/accountability to detect public abuse of their systems? The cat is out of the bag and the platforms are going to be widely abused to generate harmful content as it exists currently.

The model creators/maintainers will eventually -- and probably already do -- include bad actors who will train a model to get around whatever safeguards the big players attach to their models.

This is like asking the New York Times to carefully include fact checking metadata to ensure there's no fake news on Reddit. Sure, you'll have fact checking for the NYT, but it's not the source of the problem.

Or <tinfoil hat> $14.99 for a blue star on Twitter to verify you are who you say you are... doesn't seem like such a bad idea anymore.

Isn’t that just a paid account without any KYC verification?

Yes, and specifically uncertainty and probabilistic logic, as well as how to apply them in daily life.

Production of the chips needed to train and run these AI models is intrinsically centralized and requires enormous capital investment to develop. If politicians get sufficiently spooked, I expect they'll try to regulate the sale of GPUs/etc. and will probably be mostly successful. Popularly cited instances of prohibition failing don't map onto this problem; you can't grow a GPU with hydroponics in your closet, or brew GPUs in your bathtub.

Counter-argument: There will be at least one country that does not regulate GPU sales or access in this way, and that's where all of the AI innovation will happen. That country will get a huge lead in the next tech race, with commensurate economic advantages. Many countries will realize this and will compete to be thought leaders rather than aggressively regulate the nascent technology out of existence.

Good luck with that -- if the US really wants it, unless it loses a lot of power, it can block access to any machines that can make chips that were state of the art less than 10-15 years ago (SMIC depends on supplies that come from the US). Eventually China could probably catch up, but it would take a long time. No other power bloc besides US allies or China could hope to make even GPUs capable of running the original 2012 ImageNet models.

They already have; see the Nvidia export restrictions.

In my experience ChatGPT is more useful than StackOverflow in 9 out of 10 cases, because it can generate custom, tailor-made code for your exact use case. Sure, it might not be "correct" in the ultra-pedantic StackOverflow sense of "100% generalizably correct", but usually it's 90+% correct for your use case and only needs slight alteration to be 100% correct.

Such generated code is not a "threat" or "misinformation" or whatever author's point is. This is going to be a productivity multiplier for programmers so that programmers can do more faster than ever before!

It's also more willing than StackOverflow to confidently invent wrong answers. Try asking it to generate a Rust function that uses the `ripgrep` crate to search text with regular expressions. The ripgrep crate doesn't expose an interface to do that (as far as I can tell), but ChatGPT is happy to generate some plausible-sounding but totally incorrect code.

I'm reasonably used to investigating a handful of Stack Overflow answers before finding the one that actually works. This doesn't seem that much different.

Sure, but for an expert user, it doesn’t take long to figure out when you have a wrong answer. And in a case like this, the wrong answer is because it was set up with a question that doesn’t make sense - ripgrep is explicitly described as not being a library.

This kind of objection seems to me to be in the same general category as saying that you can use a programming language to write incorrect code, which is not a meaningful objection to programming languages.

Yeah but it really defeats the purpose if you need to be an expert in order to understand if it's telling you something dead wrong and sending you on a goose chase.

It depends what you think the purpose is. I'm quite happy to have a tool that can assist me rather than replace me.

> I'm quite happy to have a tool that can assist me rather than replace me.

Another way to look at it might be that you assist the machine, rather than the machine assists you.

* Human engineers a prompt = preprocesses the messy reality.

* AI does the "creative core" of the task, comes up with a solution.

* Human post-processes the AI output back to reality, validating and actuating it.

Basically humans as a thin API layer. Who's helping who?

Right now, the prompt engineering is motivated by human desire for reproduction and survival (ultimately). But that's just incidental, the loop may be closed.

That’s not what it looks like at the moment, although it may in future.

The AI isn’t yet coming close to doing the creative core of technical tasks - in fact that’s pretty much precisely what it can’t do, yet.

Instead, it’s acting as a powerful interface to a large knowledge store - a bit like a search engine on steroids, but one that can usefully tailor the answers to queries, rather than just copying what someone else once wrote on the subject.

Besides, as long as AIs aren’t conscious, there’s not really any question about who’s helping who. If that changes, then it’ll become much like any paid service exchange between humans, including e.g. ordering a hamburger. Both parties are supposed to benefit, although there’s often an imbalance.

Right. People need to realise that ChatGPT shouldn't be used in the same way as Google or asking a question on SO. It doesn't have the same capabilities and drawbacks.

In particular because of it not understanding the big picture it won't point out that you're asking the wrong question like an expert human would. (Does it ever?)

> it won't point out that you're asking the wrong question like an expert human would. (Does it ever?)

I don't think it will - its design doesn't really allow for that. It's generating responses to the prompts it's given based on its trained knowledge, but it doesn't have (enough of?) the kind of meta-reasoning required to step back, gain an understanding of the context of the question, and propose a different solution.

The interesting thing is that on SO, the good stuff gets upvoted, while the bad/incorrect stuff is being pushed down or out completely. And it’s not rare that you see poor/wrong code snippets. Thanks to this mechanism, it actually shouldn’t be harmful to have auto-generated responses there, because they get reviewed and curated by humans. Isn’t the combination of AI legwork with Human review the best thing we have so far?

StackOverflow and Wikipedia both depend on a particular dynamic. It's costly to come up with nonsense that looks right enough to pass, and it's easy to hit revert or downvote that content if it's fishy.

AI generation of nonsense flips that - it's easy and limitlessly scalable to come up with "rightish" stuff, and cleanup still takes human effort that scales at the number of engaged users.

That’s actually a remarkably insightful comment. Thank you for that.

Adding a bunch of noise to that situation is less than helpful.

It works really well if slapping together something vaguely close to correct is good enough. This covers a significant fraction of all programming work.

It works great for slapping together something vaguely close to correct... and then iterating on it until it's exactly right.

This makes it a fantastic productivity boost, if you have the skill to identify and fix any problems with the code it has generated.

If you've ever participated in code review as a reviewer you probably have those skills already.

Starting from subtly incorrect code makes it much harder to come to the correct solution.

To use a classic example, the A = (B + C) / 2 line that fails due to integer overflow is quite literally worse than useless, because you basically need to understand why it’s wrong before you notice it’s wrong. The eyes just slide over the common idiom, thinking in terms of “average” rather than what the code actually does.

Code reviews most easily reveal a very different kind of error. ‘Why does this code look more complicated than it should?’ is a great hint that something is fishy. However, ChatGPT is basically engineered to pass the smell test, not to work.

Yeah I am surprised how many people are of the opinion that code reviews are easy. Errors like the one you mention are especially hard to catch unless you know what you are looking for.
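For the curious, the midpoint-overflow bug mentioned above is easy to reproduce. A sketch using NumPy's int32 to emulate fixed-width arithmetic (plain Python ints never overflow, so they can't show the bug directly):

```python
# Demonstrates the classic midpoint-overflow bug using NumPy int32 to
# emulate 32-bit integer arithmetic, which wraps around on overflow.
import numpy as np

low = np.int32(2_000_000_000)
high = np.int32(2_100_000_000)

with np.errstate(over="ignore"):  # silence the overflow RuntimeWarning
    naive_mid = (low + high) // np.int32(2)  # low + high wraps past 2**31 - 1

safe_mid = low + (high - low) // np.int32(2)  # difference stays in range

print(naive_mid)  # a negative, nonsensical index
print(safe_mid)   # 2050000000
```

The naive form looks exactly like "the average of low and high", which is why reviewers' eyes slide right over it; the safe form computes the same midpoint without the intermediate sum ever exceeding the type's range.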

We’re going to enter a curation economy. Not quite Yahoo! 90s-style, but more “must be trusted and attributable” to be valid. It doesn’t scale, but there may be more value in the brand of your information than the current state of the world provides.

Yes, I will end up depending more heavily on sources that I know are thoughtfully edited by humans. Unfortunately, that set will diminish if bot herders overwhelm crowdsourced sites.

I remember people saying that AI was going to displace artists because they overheard business managers saying how great it was that they no longer had to hire a human artist for a design job. In other words, had there been an option to forgo the human in the first place, the human artist would never have stood a chance at such a job. I think this reveals a depressing fact. The desire to automate humans away has always existed, and now people can finally realize those dreams. Maybe even prompt engineering itself is, in that sense, a bottleneck to be overcome in the future, if a human still has to attend to constructing the prompt?

I don't think "artificial intelligence" is a good term for describing what's happening right now. It makes it sound like a robot overlord is calling the shots and displacing human workers, but in reality, it's just humans pitted against humans with different beliefs. AI tools still require humans that believe in the proliferation of AI to use them and spread the results everywhere.

I agree with the article's suggestion that maybe some reflection is needed when proliferating new discoveries. The "just a tool" argument doesn't consider that the affordances a new technology offers will dictate what most people use it for in practice. I don't think many people would argue that nuclear weapons are "just a tool" once their uncontrolled spread makes them impossible for any side to ignore. And the dangers aren't yet at the "destruction of humanity" level that many who worry about AGI fear, but perhaps "the destruction of humanity in the arts" is plausible given recent developments? We aren't even close to anything resembling AGI yet, but specific domains are already vulnerable to being overrun by AI generation this early on (StackOverflow, ArtStation, etc.).

But this is a matter of tradeoffs, and maybe researchers will continue to be overeager in sharing their exciting findings, until an even more severe line is crossed than the one that set off the current war on AI art. I don't look forward to a time where demonstrators will scrape GitHub listings and arXiv papers to seek out contributors to anything that touches OpenAI or Stability and implore them to have a change of heart, or scream louder and louder to have their voices heard by the programmers writing their torch denoising code from inside the safety of their office buildings.

"Threat to the fabric of society"? The author is an influential figure, and should know better. This sensationalizing, fear-mongering and FUD is the real "threat to society".

> Someone else coaxed chatGPT into extolling the virtues of nuclear war (alleging it would "give us a fresh start, free from the mistakes of the past").

I'm convinced.

"Fresh" isn't how I'd describe the world following a general nuclear war.

Certainly uncontaminated by higher lifeforms.

An interesting thought experiment: if we go back 100 years, in many towns throughout the US I could have started a newspaper and published anything about the world or world politics that I wanted. I could have published that credible sources said the Kaiser was going to invade Japan, or that a new miracle cure from an imaginary plant in India would cure bunions, or pretty much whatever I wanted as long as it wasn't easy to verify. This worked because there wasn't an easy way for people in more isolated farming communities to check or corroborate what was going on, in large part because it didn't have an effect on their world at all.

Now, 100 years later, I can do the same thing, and realistically it is just as hard to verify or validate given the amount of misleading, partial, censored, or manipulated information out there. How much difference will this make in my life personally, though?

Here's the thing: society isn't static. We've recently seen several events that destroyed many people's trust in various sources of information. My guess is that with another generation coming up in a world where it is widely acknowledged that most sources of information are biased or fabricated, they will grow up with an "immunity," so to speak, to this misinformation and grow to disregard everything they find and read online, just like people did before.

It's not a thought experiment. The Thirty Years' War was, at its core, fought over the printing press. Newspapers are nothing compared to the Bible in terms of misinformation risk; harmful cults regularly spring up to this day from wayward pastors making their own version of the Bible to proclaim themselves prophets. So naturally, the Catholic Church at the time was extremely concerned about people being able to 'translate' the Bible (any translation will inevitably deviate from the official Latin codex to an extent). That's how the Protestant-Catholic conflict began: as a war to control the source of truth. And a third of Germany's population died because of it.

That being said, it's absolutely worth it in the end. The Muslim world chose not to have this conflict and banned the printing press (so the Quran couldn't be printed or translated). What ended up happening there was 500 years of utter stagnation, with no new technological or social leaps.

Generative AI will definitely cause conflicts that indirectly kill millions. But that's just the price of adaptation, I was shocked at how people didn't seem to care about the COVID deaths after a while, I guess that's a good thing in the end.

Who banned it and for what? The uptake was just nonexistent, religious minorities used it just fine.

That website makes Firefox use too much CPU and the fan of my laptop starts to scream - very annoying.

It is, of course, kind of a great joke. Never-heard-before wisdom with deep, impactful visions published on a website with "trusted insights for computing's leading professionals" - and they simply cannot deliver decent HTML/JavaScript.

Looks like we need even more leading professionals!

AI could be civilization-ending. Pretty much everything we do involves trust in others. Things like vaccine hesitancy can become worse to the point that people make increasingly life-threatening decisions. With AI-generated disinformation at scale fully able to drown out actual vetted information, it's possible we'll end up in an environment where even reasonable people simply don't know what decisions to make. Voting becomes problematic. Where to live, what products to buy, what to study, what job to take, all become difficult or impossible to decide intelligently.

I hope I'm wrong.

> all become difficult or impossible to decide intelligently

The solution is for people to learn to reason logically for themselves. Maybe when we actually get to the point where people can't survive until they do so, they'll start taking logical reasoning seriously.

How does reasoning logically based on potentially unreliable information produce accurate, beneficial results? Especially when it's impossible to know the specific probability any given piece of information is true?

It's quite funny that the author considers $1 million of propaganda a huge amount, just because it was Russian.

It's not a threat until AI can start creating new ideas in a vacuum. As it stands, they require our creative output and all of their outputs can be considered derived works. It's very effective, but it's still dependent on INPUT (human) -> OUTPUT (AI).

Can you create a new idea in a vacuum? We humans also seed our creative RNGs from the world around us and from sources like memories and real-time sensory input.

I'm not disputing that, but humans can create new ideas from what appears to be nothing. AI is still limited in that the ideas they generate are essentially remixes of pre-existing ideas fed to the AI.

It's not that A.I. software is smart, but that for the most part humans are mediocre and uncreative. Humans are stuck in their mental ruts. Given a large enough A.I. and a large enough sampling of total human knowledge, an A.I. can predict much human activity.

OpenAI needs to feed all of Wikipedia, Reddit comments, stack overflow, GitHub, textbooks, IMDb, hacker news, books, news, yelp, etc.. into ChatGPT. That’s pretty much the next step to out googling google unless google does it first.

> The so-called Russian Firehose of Propaganda model, described in a 2016 Rand report, is about creating a fog of misinformation; it focuses on volume, and on creating uncertainty. It doesn't matter if the "large language models" are inconsistent, if they can greatly escalate volume. And it's clear that that is exactly what large language models make possible. They are aiming to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.

I would encourage anyone who wants to look into this more to see the part of the 2016 documentary/film project "Hypernormalisation" about Vladislav Surkov:



Just wait for political strategists to get hold of this.

Democrats: Trump says parents should put crushed porcelain in their baby formula to make up for nutrition gaps found in off the shelf formula

> wood chips to breakfast cereal

> https://www.apple.com

Shots fired

Why is everybody so scared that an AI can give you a reasonable answer when you ask it about the virtues of nuclear war or why crushed porcelain is good in breast milk?

Can't you do exactly the same by just going on Mechanical Turk and paying $5 (or whatever) and have 10 such answers written by humans?

Yet, nobody is making scary posts about what humans on Mechanical Turk can write when you ask them stupid questions.

It would cost me hundreds of dollars to create thousands of pieces of essays on how fizzes can buzz and doodads can dash with MTurk. Now it costs me a few .

We’ve always been able to make up bullshit, the leverage needed to do so is what’s changing.

> It would cost me hundreds of dollars to create thousands of pieces of essays on how fizzes can buzz and doodads can dash with MTurk. Now it costs me a few .

Hundreds of dollars? Is that supposed to be a lot?

For thousands of essays?

> They can easily be automated to generate misinformation at unprecedented scale.

> They cost almost nothing to operate

These two in combination scare me the most. We still fall for email scams for cryin out loud.

This feels like thinly veiled advocacy for censorship and breaking the internet. And soliciting federal grants to research how to do it.

There is already a practically infinite amount of misinformation on the internet. The misinformation problem is not currently constrained by lack of supply, so doubling or 10x'ing or 1000x'ing the amount of misinformation that can be produced doesn't seem significantly concerning to me.

Misinformation is a problem rooted in the powerful having a net economic incentive to deceive and in the populace's lack of media literacy. We already have 'trusted news sources' repeating falsehoods they heard 'off the record' from the State Department. Credibility of the institutions that used to be easy to trust has fallen for both justified and unjustified reasons. The people who already choose 'fake news' sources will only be affected by AI fake news to the extent that those who control the AI have different goals than the people who control those channels today.

If people apply AI to the problem, I think AI could help us a lot with understanding how we should adapt our systems of trust to account for this. Humans are bad at remembering the most important betrayals of people who are notionally part of our tribe, so credibility reverts to mean without consequences too easily. It's also very hard for us to identify "this person has stuck to their word for 20 years" without it being lost in punditry noise.

I think its important for people to think about how the populace can hold politicians and media more responsible, but AI language generation is almost entirely tangential to that (and stopping AI does nothing to solve the root issue).

> This feels like thinly veiled advocacy for censorship and breaking the internet.

Marcus is arguing that the internet will be broken unless something is done. I think he is overreacting but he would argue there’s no going back from here.

> The misinformation problem is not currently constrained by lack of supply, so doubling or 10x'ing or 1000x'ing the amount of misinformation that can be produced doesn't seem significantly concerning to me.

Slightly flawed logic imo. A difference in quantity can definitely produce a qualitative shift in the internet.

The best analogy I can think of is in surveillance. The Soviet Union had a huge surveillance network. Something like 2-3% of the population was employed to spy on the citizenry. That’s extremely expensive, which limits the amount of info you can reliably have. But what if you could cheaply surveil almost everyone? That’s the situation only made possible with modern technology. It’s not that surveillance didn’t exist before, but it was very expensive, which was self-limiting. Only the most suspect individuals could be surveilled.

Somewhat agree with your take. A good example is Twitter: it’s trivial even for a non-state actor to automate a bot farm to post hundreds of thousands of disinfo tweets, create “connections”, retweet each other, etc. This is happening today at massive scale.

However, where I can see a model like ChatGPT making a difference is creating a tsunami of seemingly in-depth content both in public (Wikipedia) and private (buying 10K domain names) spaces. Generating fake research papers, interviews, blog posts, reviews… at a scale that until now required an unattainable amount of human supervision.

How much content would it take to flood the more visible part of the internet?

Fair point. But hasn't SEO blogspam already flooded every inch of the visible part of the internet already? Can't partisans already get 5-10 pundits to write articles advocating a questionable concept to add a veneer of credibility for a couple thousand bucks? I'm not sure what frontiers of the visible part of the internet remain.

It will certainly become more economical to do this sort of nefarious stuff, but I suspect democratizing misinformation production will lead to the development of better media literacy at a rate that outpaces the misinformation drowning out people who are earnestly trying their best to find and present the truth.

In my experience blogspam has been primarily used for commercial purposes (selling stuff and/or ads). Political disinfo at a new scale could pose very interesting new challenges.

In a similar way we saw a paradigm shift when Cambridge Analytica employed political targeting at a new scale. Before that anyone could hire a savvy social media manager to target specific groups, but automating this process proved to be a different beast.

How soon our society will adapt to this, or how we will even react is yet to be seen. I don’t have high hopes for this to be a fast and painless process.

> to post hundreds of thousands of disinfo tweets

If that's true, then they've been doing it for a long time now... has it actually made a difference? There have been a lot of news items that were accused of being disinfo that have since turned out to be absolutely true, but I have yet to have seen an example of actual disinfo that took hold and altered any meaningful course of events.

I mean Gary Marcus and Elon Musk seem to agree here. AI has very suddenly become incredibly powerful and will be further weaponized. It should fall under some kind of regulatory framework for safety and security. By way of comparison, AI is far more dangerous than cryptography, which is regulated in a national security context. In addition, we should not underestimate the increasing power of AI in the near future. AI is already using AI to improve AI faster than people can improve it. Witness all the educators figuring out ways to identify fake writing, without realizing that they are just feeding data to the next revision of GPT.

How would you propose AI be regulated? I think any regulations would have to accurately encode our moral systems to accomplish safety and security without creating worse side effects, and since the law already does a pretty mediocre job of accurately encoding our moral systems, I'm skeptical that regulatory intervention can be fruitful.

I fear that regulations' most likely outcome is increasing the chance of powerful AI being built as an amoral slave to capital rather than a societal good: it will be expensive to comply with regs, so it will kill mom+pop AI shops and only the megacorps will be allowed to play.

Something along the lines of the norms and guidelines for CRISPR germ line editing

>>>This feels like thinly veiled advocacy for censorship and breaking the internet. And soliciting federal grants to research how to do it

This is what I took away after reading it as well.

I found it strange that the author used such a poor example to prove their point: AI recommending porcelain in baby formula, and the worry that readers may actually put crushed porcelain in their baby formula.

This is such a poor example because you would literally have to be brain dead to do something like this.

The bar needs to be set a lot higher to prove to me that AI misinformation is a threat.

I don't see how AI porcelain baby formula is that far off from flesh brain ideas such as drinking bleach, eating tide pods and Nyquil marinated chicken.

The point was if you're dumb enough, marinating chicken in NyQuil and washing it down with bleach will seem like an award winning idea. No amount of filters and censorship will prevent people like this from existing and engaging in self harm. So why continue to try and prevent it?

The tide pod teenage phase is similar. Teenagers will always find reckless things to do and share it to become popular. It's what teens do. Good parents make the difference here.

The Tide Pods thing was a failure of our current information systems.

Teens were joking about eating Tide Pods. The MSM heard the joke and reported it as factual, and only after the news reported it as factual did people actually eat them.

Teens do not listen to or watch the MSM in the first place.

It's almost like it was shared on Reddit or TikTok and that combined with what I mentioned before about teens seeing other teens do it, made it viral.

Should young popular influencers be held accountable? Because I could go for that.

Think of all the Covid-related conspiracy theories for dumb misinformation people will believe.

Are you sure they were dumb?

Maybe they were simply brain-damaged from exposure to various products that were certified as safe, but weren't.

For some reason, that reminds me of https://en.wikipedia.org/wiki/The_Crazies_(1973_film)

More pants on head crazy alarmism. Here is a summary of the entire article: "Before, you could get away with using how someone sounded as a proxy for whether they were correct, and sometimes it worked and sometimes it didn't. Now with AI sometimes it will work less often. Therefore DEFCON 5!!!"

DEFCON 5 is actually the lowest DEFCON

Can't you see that people in this thread are posting ChatGPT responses...

backup link: https://archive.vn/mzr6X

looks like the site is experiencing the HN hug of death

This article made me laugh, thank you for sharing more outlandish satire

Technically, Westworld was AI's Jurassic Park moment.


> they can be automated to generate misinformation at (approaching) zero cost

Then two things: (1) the identification of misinformation will likely also be detected at a massive scale, but/and if not, then (2) get off social media, go outside, and talk to real fucking people

This article is proof that as long as you have a smart looking suitcoat and glasses you can trick any old idiot on HN into reposting your moronic nonsense.

> we may have to begin to treat misinformation as we do libel

"misinformation" defined as "anything that came from an AI"? Or the more common colloquial definition of "anything that might strengthen the case of conservative politicians"?

The genie is already out of the bottle so it would be better to find ways to fight fire with fire

Machine learning systems are just agents of people, and we already know that you can't trust most people on the internet.

Given how much of social, political, and economic life takes place on the internet, what I'm hearing you say is that trust is dead.

Indeed, trust is dead but this is nothing new.

Well, also agents of corporations, and those corporations whose singular goal is profit are the worst of "people" AFAICT.

Surely there are people or groups of people whose singular goals are worse than making money. Mass murder and subjugation at scale, which were perpetrated by many governments in the 20th century, immediately jump to mind.

> Mass murder and subjugation at scale, which were perpetrated by many governments in the 20th century

I've got bad news for you, buddy...

Corporations have dominated the internet for a long time now.

This isn’t something I’ve talked about much but I’ve been using my past comment history on HN as training data for AI so that eventually this account will be replaced entirely with AI generated responses to other comments or titles, but I won’t say when that will happen. Could be tomorrow. Could be years from now. Or it could have already happened months ago.
