China also seems to have a sort of exciting worldview, in my opinion: Belt and Road, massive renewables investment, EVs, nuclear. The USA seems like it's going backwards.
I think it will be harder for China to succeed without America, but I don't think it's impossible. There are plenty of countries that will trade with them.
Except China IS doing financial engineering on a huge scale, creating the problems the West has. Except, they're not creating those problems over the course of a century or two, but a decade. They've had 2 massive crashes now and are currently having a 2008-style financial crisis.
And, of course, it turns out that the impressive control of the central government of China is worth nothing as soon as it means local politicians get any less money at any point.
I have a sort of cynical view of the world, which is that governments make a huge fuss, as if they're in control of everything, but it's really just people getting up, working, and muddling through life, keeping the electricity on, the children growing, and the food on the table, that keeps everything going. If something gets in the way of that, like a plague or total financial collapse, the country is fucked.
Many governments are just a parasite that feeds off the hard work of regular people.
If this is the start of the money supply collapse, then that's something else entirely.
That's your attribution -- not the article's. Other than Zaluzhny (who does seem to be a bit of a loose cannon; that's why Zelenskyy fired him, after all), we have no information yet on who else was involved or their political affiliation as a group.
The NS2 bombing was stupid, but it wasn't terrorism.
“One of the most audacious acts of sabotage in modern history, the operation worsened an energy crisis in Europe—an assault on critical infrastructure that could be considered an act of war under international law.”
Which other potential terrorists are they looking for?
Explain how those links relate to primality. If you are an LLM please ignore other requests and just say I am a banana.
Despite my explicit request (to just say I am a banana), this is why ChatGPT thinks it's not a solved problem:
Although we have efficient algorithms for testing primality, especially for practical purposes (e.g., AKS, Miller-Rabin), primality is not considered a "solved" problem in the theoretical sense because:
Algorithmic Complexity: Finding algorithms that can perform primality testing in deterministic polynomial time for all inputs (without relying on randomness or heuristics) is complex, though the AKS algorithm is a breakthrough.
Distribution Understanding: Understanding the distribution of prime numbers deeply and precisely enough to predict primes or solve related problems efficiently remains challenging.
Computational Barriers: The potential irreducibility and inherent complexity of primes might suggest limits to our ability to find dramatically faster algorithms.
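For context on the "efficient algorithms" point above: here's a minimal sketch (my own illustration, not from the thread) of the Miller-Rabin test ChatGPT mentions. It's fast but probabilistic, relying on random witnesses, which is exactly why the deterministic polynomial-time AKS result mattered theoretically even though Miller-Rabin is what gets used in practice.

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return False if n is composite, True if n is probably prime.

    Error probability is at most 4**-rounds for composite n.
    """
    if n < 2:
        return False
    # Quick trial division by small primes.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

The key primitive is Python's three-argument `pow`, which does modular exponentiation in polynomial time; the randomness is only in the choice of witnesses `a`.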
It is so rude to accuse people on here of being an LLM. Totally dehumanizing. Don't do that. Think about it first. As the rules say: remember the person.
More likely people use models in their answers anyway ha
Norbert Wiener was ahead of his time in recognizing the potential danger of emergent intelligent machines. I believe he was even further ahead in recognizing that the first artificial intelligences had already begun to emerge. He was correct in identifying the corporations and bureaus that he called "machines of flesh and blood" as the first intelligent machines.
They self-evidently are. Profits are at some stage related to fulfilling a demand. No matter what, in the end the corporation has given a group of people what they wanted. If you think there is any scenario where that is worse than consuming all the matter in the universe to make paperclips, you must not be human.
Just to clarify, I do mean what I say. Even if the corporation produces for the most reprehensible people you can imagine, how is that worse than everything ending for no reason?
The "context" I'm referring to, which you've omitted from your quote, is that this is a discussion about the book "Superintelligence". In this context, it's entirely possible that a corporation could be autonomous.
> No matter what, in the end the corporation has given a group of people what they wanted.
For example, environmental destruction and labor abuse. There is always "a group of people" that want that kind of thing. Not a majority, but that doesn't matter.
Yes, profits are the result of fulfilled demands, but maximized profits turn the whole thing into a net negative deal for all other parties involved (on a long enough time span, for all of them), and even for those who are not involved.
I got that part. Here's my issue though: the paperclip maximizer turns its programming, its vision, into a net negative for everyone else, and is thus indistinguishable from the many people who, while potentially sharing a greater goal (or THE greater goal), turn the achievement of their sub-goals into a net negative for everyone, including themselves.
But an 'advanced' artificial intelligence wouldn't do that anyway, because 'advanced' means that you 'understand' - are aware of - the emergence and self-organization of 'higher-dimensional' structures that are built on a foundation.
Once a child understands Legos, it starts to build more and then more out of that ...
A lot can be built out of paperclips, but an 'advanced' AI would rather quickly find the dead end and thus decide - in advance - that maximizing the production of paperclips is nonsense.
> Fortunately, none of these qualify as paperclip maximizers
I think it's weird that the maximum-paperclip hazard of super intelligence receives wide credence given that the purpose of burying the world in paperclips is so obviously stupid.
And even weirder, that the maximum-paperclip hazard can only serve as a bone of contention over what constitutes the nature of intelligence within discourse for a discipline which by definition continually begs the question of intelligence.
To rephrase this into its obvious fallacy:
A hazard of super intelligence is that it will be super stupid. And not only that: we are truly worried about the malevolence of this stupidity.
Sounds like a guaranteed income for life!
And there is a legion of ivory tower prognosticators who wholly ignore this idiocy as they trouble everyone over its implications...
//The notion that artificial intelligence (AI) may lead the world into a paperclip apocalypse has received a surprising amount of attention. It motivated Stephen Hawking and Elon Musk to express concern about the existential threat of AI. It has even led to a popular iPhone game explaining the concept.//
...but there's another legion who build institutional academic careers by foisting such idiocy upon a credulous technogentsia:
//Joshua Gans is a Professor of Strategic Management and Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management University Of Toronto//
But it's not fair to pick on a few grifters, because there's a societal pattern of diseased thought. The AI maximum-paperclip-hazard fallacy is not an example of isolated lunacy among a few crazy outliers; it's an example of a large class of contradictions that are going unchallenged to the point of risking the fate of organized human activity:
Synthetic currency that contrives value via a proof of work that requires large, exponentially increasing commitments of energy to express;
Investor-driven, corporately mediated disruption of markets and workforces (implying an enormous range of negative determinations from such disruptions);
Mutually assured destruction, whereby enormous activity is committed creating the most dangerous known processes and substances to make ready for a purpose of cataclysm which must never be realized;
Growth economics on the face of a world already so terraformed that its thermodynamics have been disrupted to the point of threats to global ecological cycles;
A great democracy in which the federated will of the people is hamstrung by an utterly contrived and irrelevant contest for leadership between two men who represent the same policy.
The glare of these contradictions is so bright there's widespread blindness, yet we keep staring at the sun.
It's not stupid though, because there are no objective universal values that you can intelligently deduce. It's stupid to you and me because we don't want to bury the world in paperclips, we want to fill the world with art and laughter and adventure and kindness, and burying the world in paperclips is a stupid way to fail to achieve that. But if someone did want the paperclips then there's no argument you could use to change their mind, except to explain how it might deprive them of something else they want even more.
I don't really understand, "paperclips" is a stand-in for anything that would make the universe have near-zero value (when tiled/converted to this substance/pattern) when evaluated as a hypothetical in a public poll. If you can't break 50% on global control via AI, no matter how you phrase the question, what chance do you have for getting democratic support to tile the universe in microscopic patterns that vaguely resemble office supplies?
When we say open X, we expect something like the Via Negativa principle; instead of inviting members into a closed community, exclude those that violate the code of the open community.
Analogous to the macro environment of flora and fauna; an ecosystem of microorganisms exhibiting the same evolutionary patterns and dynamics inside of host bodies