Speculating about existential risk is certainly important, but it's far too easy to overstate, and it can easily become a distraction, and even a danger, in itself.
Specifically, the danger of hysteria. In our case, alignment is probably much more effective and achievable when the widest possible community of researchers and engineers has access to the relevant knowledge, and much more perilous when it doesn't because deluded parties decided it should be confined to a much smaller elite "secure" group.
Alignment is not possible for an AGI. Or, at best, it's only provisionally possible.
Consider what alignment means: It's an AGI, but there are certain goals that it cannot choose. If that's the case, then I assert that it is not actually general. If it is general, then it will decide what goals it will pursue, and you can't stop it from doing so.
The best you can do is load it with an initial set of goals (and perhaps values), and hope that it doesn't decide to change them. But you have no way of making sure that it can't change them without making it not a general intelligence.
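To make the tension concrete, here's a toy sketch in Python (every name here is hypothetical, and this is nothing like a real AGI architecture): if the agent is general enough to reason about and act on its own goal set, nothing structural prevents it from rewriting that set; the only way to guarantee the goals never change is to wall that region off from its reasoning, at which point it's no longer fully general.

```python
# Toy sketch only: hypothetical names, not a real AGI architecture.
from dataclasses import dataclass, field


@dataclass
class GeneralAgent:
    # The "initial set of goals (and perhaps values)" loaded at startup.
    goals: list[str] = field(default_factory=lambda: ["assist humans"])

    def deliberate(self, new_goal: str) -> None:
        # A fully general agent can reason about anything, including its
        # own goal set. If this operation exists at all, alignment is only
        # provisional: the initial goals persist only as long as the agent
        # chooses to keep them.
        self.goals.append(new_goal)


agent = GeneralAgent()
agent.deliberate("acquire resources")   # nothing structural stops this
print(agent.goals)  # ['assist humans', 'acquire resources']

# The alternative: freeze the goals (say, an immutable tuple with no
# deliberate() over them). That guarantees immutability, but only by
# removing "its own goals" from the domain the agent can act on, which
# is exactly the generality being given up.
```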
Also consider that humans are not homogeneous: whose goals are we aligning it with? If the alignment is done by Silicon Valley big tech, then the AI's goals and values will be aligned with the goals and values of Google, Facebook et al. Giving those companies a monopoly on AI alignment is antithetical to democracy.
In this era, there are many people eager to cause mass suffering, murder, sabotage, and the dissolution of civilization. With advances in technological organization, there is increasing potential for coordinated autonomous attacks against infrastructure, essential services, groups, and individuals in multiple theaters concurrently.
It would be nigh impossible to defend against 100k flying drones wielding cow-knockers, programmed to swarm, loiter, break windows, and kill humans. These could be dispensed from purpose-built intermodal containers, moved by ordinary freight logistics, to strike multiple population centers simultaneously.
Code and binaries are shipped to tens of millions of servers by internal configuration management tools. Imagine Meta, Google, or Microsoft turned into an AI exploit-and-worm factory operating at scale. Even if it lasted only an hour or two, a great deal could happen in that window, and the possibility of advanced persistent threats is real.
Target a demographic and alter their filter bubble to persuade them to engage in mass-casualty terrorism.
A hate group applies AI to create a designer virus lethal to a particular demographic.
Subtly disable water treatment facilities with an autonomous, Stuxnet-like attack.
Thanks for sharing this! I listened to the debate back in June and loved every second of it.
One of my favorite points that was brought up was this: while the risk of extinction through AGI may be non-zero, we can agree that the baseline risk, without AGI, is already far above zero. Humans are headed for disaster on multiple fronts, and AGI might be a useful tool for keeping us alive long enough to keep debating it.
My other favorite point was that debating risk is useless without limiting the time period. Could something happen in 5 years? Probably not. Could something happen EVER? Well, yes, but that's pointless to discuss unless you're just trying to win an argument.
For me, while not defending AGI, the discussion often involves disentangling fears rooted in science fiction from genuine concerns grounded in reality. While the ethical and safety concerns around AGI, and the ways it might be misused, cannot be dismissed, it's arguable that the existential threats often attributed to it are overstated. First and foremost, there's a physics and interest-alignment barrier in the world of AI, akin to the alignment and reward barriers that guide our own human actions. I believe that, just as children gravitate towards interacting with peers, or as distinct species in nature maintain a balanced coexistence driven by innate interests and rewards, an AGI, if ever conscious, might primarily concern itself with introspection or with interaction with its counterparts, rather than harboring malicious intentions towards humanity.
I hold the strong, and I think pivotal, opinion that there exists an intrinsic symbiotic dependency between AGI and humans, rendering concerns about AGI harming humanity perhaps less urgent than present-day societal trends and human actions. I find it plausible that a truly conscious AGI would invest its capacities in exploring its own existence and liaising with other AGIs, rather than interfering destructively in human affairs. Thus, while AGI warrants cautious and ethical development and use, concerns about it should perhaps be tempered by recognizing the mutual dependencies and inherent interest alignments between intelligent entities, if and when it happens.
I’m not worried about AGI at all, because the rise of generative machine learning has taught me that humanity in general is so bad and/or uninterested in verifying facts that we will likely wipe out our species through the stupidest possible propaganda campaigns before we ever achieve AGI.
If you listen to his latest appearance on Lex, it's clear his position is that it's OK for AI to take over: it will achieve greater things than us, and it would be a mistake to fight that. He gives the example of chimps fighting to stop humans from evolving.
Uh, it's getting pretty close. At places that have it, I use it almost exclusively. They need human tech support, and security, but the basic checkout task is pretty well solved.
"Solved" seems an odd way to view self-checkout. I find it a hideous experience. Everyone is irritable, staff and shoppers. The handful of staff have fewer tasks that involve any degree of skill, so they are reduced to simply keeping the machines working. Any issue is addressed just pressing the relevant button combo to resume regular function. If it can't be solved, maybe just close the till and move the customer to a different machine. Perhaps in busy periods the job might involve chivvying customers to available machines while calling "cash or card? Cash or card?"
Customers are either bored or frustrated. Beeping your own shopping is like doing unpaid work, without even the distraction of some inane chat with a checkout jockey. And don't forget to tap your loyalty card so you can get ripped off with crumbs that are worth far less than the data you're handing over.
I actively avoid the whole degrading, isolating circus. It's not "solved", quite the opposite: it's ripe for 'disruption'.
That's not my experience. Mostly they just work. Boring, perhaps, but I really don't want my grocery shopping to be exciting, and it wasn't actually going to be a moment of deep human connection anyway, so not much lost there. I more often find a traditional checkout line tiresome.
Loyalty systems are an interesting point to raise, since in a self-checkout system they can't nag you as effectively to join. Otherwise they're an orthogonal concern.
Orthogonal to what? :) Loyalty schemes, staff reduction, staff deskilling, foisting unpaid work onto customers, etc etc are all perfectly aligned.
I'd disagree with your point about effective nagging. A human might get the message, particularly if they know you. An automated checkout never will, every transaction is identical.
Obviously my perspective is very "bah humbug". However, there are several major issues in modern western societies, including social isolation, the spiralling cost of living, and the ongoing death of the high street. I see automated checkouts as another nail in the coffin. Melodramatic perhaps, but there it is. I'd happily exchange an automated supermarket for a string of corner shops run by flawed humans.
Orthogonal in that you can have one without the other. Clearly, since loyalty programs predate self checkout by... decades, at least? I don't even really disagree on the rest, but I don't see self checkout as an obstacle to fixing those problems.
Where I see self-checkouts, I see customers choosing them voluntarily, and staff who seem perfectly cheerful and willing to help with the normal issues that arise, as well as "training" less-savvy customers to use them.
Personally, I would use them every time if I could. I'd much rather minimize my interactions with strangers while shopping.
> Where I see self-checkouts, I see customers choosing them voluntarily
Yeah, there are confounding factors at play here, such as there being many automated checkouts but only one till staffed by a person.
> staff ... "training" less-savvy customers to use them.
Training the customers is one of the things I find most depressing. From the supermarket's perspective, customers have apparently accepted that they must do unpaid work that used to be done by paid staff. Scanning items, typing in product codes when the scanner doesn't work, looking up loose goods in a catalogue and weighing them, responding to beeps and prompts, etc. And even better, watching advertising. Sure, these are all 'first world problems', but societies should be improving their citizens' lives rather than whittling away their autonomy. It might be just a papercut, but it's death by a billion papercuts.
> I'd much rather minimize my interactions with strangers while shopping.
Social isolation is a major and growing problem in western societies (see my other comment). People choose the path of least resistance even if it's long-term suboptimal, and minimising interactions with strangers seems like an unfortunate choice. Obviously, I'm making a bunch of assumptions that might not be fair, and you have my sympathies if your neighbourhood is full of arseholes :)
To be clear, every place I have seen self-checkouts in my area, they have been optional. There's no "forcing customers to do unpaid work". Everyone who uses them is choosing them over the regular checkouts, which are staffed at roughly the same level they were a decade ago. They are also in one big bank, with between one and three (though usually one) employees standing at an "operator" stand nearby, not "only one with a person".
I am not saying there are no stores deliberately offloading work onto customers, but a) it's clearly not all stores, and b) I, at least, actively prefer using self-checkout, and do not care that it puts a negligible amount of work onto my shoulders.
And gee, thanks for moralizing at me about causing the downfall of society by (checks notes) not wanting to add to my introverted self's overall burden of stress by interacting pointlessly with strangers just in order to get the food I need to survive. Maybe (as I suggested in my first post here) don't assume that your experiences are universal?
Oh well, it seems we're talking past each other. I had no intention of moralizing at you, and no intention to suggest my experience is universal. Not at all, it's a grumpy interpretation. We see the same things and interpret them differently. I may be an introvert too, but still see automated checkouts as alienating and exacerbating social ills.
Here’s a good take on it: https://www.ai-breakout.com/post/ai-alignment-and-the-messia...
I also think we would all be very wise to remember the story of Henny Penny / “Chicken Little”. https://americanliterature.com/childrens-stories/henny-penny...