The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.
These days when I want to find obscure knowledge, I purchase books written on the topic by scholars; they concentrate the information very well with good sourcing.
To define China's system of censorship as successful seems premature. We have had similar systems in the past, and while they maintained some form of stability for the status quo, they ultimately hindered those societies' progress.
"It is true that if you have a tyranny of ideas, so that you know exactly what has to be true, you act very decisively, and it looks good – for a while. But soon the ship is heading in the wrong direction, and no one can modify the direction any more." -Richard Feynman
It's striking how what we create ends up shaping us. For example, what we think of as "reality" today is the fiction we created in sitcoms and television shows, which shaped our view of the world as we grew up. We dress like "cool" people, we dance like "cool" people, we imitate one another... so in the end we become the fiction.
But to be honest I don't know which way to read the article. Besides the main "emergent / unplanned" idea I'm not sure what his point is. edit: I guess he meant to say that the "digital revolution" promised more control over our lives, and instead it is becoming something completely unplanned.
I had a feeling related to this when, a couple of months ago, after I had given a friend a ride, she told me that she was really amazed at how I was driving around without using an always-on GPS map. I realized that for people my age (their 20s and 30s), driving a car now involves being dependent on GPS maps.
It can be applied to many things.
Part of this is that analog vs digital isn’t the right term. Vacuum tubes are completely irrelevant to the point he is trying to make, and confuse the reader.
The point he’s trying to make (which unfortunately I’m unable to explain better than he can right now) is about the emergent properties of systems made up of independent entities in a rigid form, and how this is a type of computing that differs from what we usually think of as computing. The example he gives, DNA vs. the brain, illustrates this. DNA is coded similarly to computers, and in many ways cells are like computers/programs (that happen to self-propagate according to their software). Brains, on the other hand, are this other type of “system” computer made up of a combination of independent actors (neuron cells), a rigid but adaptable structure (the physical structure of the brain), inputs, and information sent between the individual actors. This is very similar to a million computer-regulated human systems where we react to information we receive through the platform, relay our reaction back to the system, and that reaction affects how the system interacts with other individual actors.
A way of viewing it is an inversion of control between emergent behavior and programmed behavior. When the number of interactions between discrete units of programmed behavior exceeds a certain threshold, the programmed logic responds to the emergent interactions instead of dictating them. The system as a whole starts obeying its own logic instead of one programmed into it.
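A toy sketch of that inversion, echoing the article's real-time traffic map (all numbers and the routing rule are invented for illustration): each driver reacts only to a shared congestion report, and the balanced split emerges from the feedback loop rather than from any central program.

```python
def simulate(drivers=1000, ticks=100, switch_rate=0.2):
    """Each tick, some drivers on the more congested route defect to
    the lighter one, reacting only to the shared congestion report.
    No controller computes the final split; balance emerges from the
    feedback loop itself."""
    load_a = 900  # start badly unbalanced: 900 on route A, 100 on B
    for _ in range(ticks):
        load_b = drivers - load_a
        diff = load_a - load_b
        # A fraction of the imbalance switches toward the lighter route.
        movers = int(switch_rate * abs(diff) / 2)
        load_a += -movers if diff > 0 else movers
    return load_a / drivers

split = simulate()  # fraction of drivers on route A after 100 ticks
```

No line of this program says "split the traffic evenly"; the even split is a property of the interactions, which is the inversion being described.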
I suppose you need to see such systems more like ecologies to be studied with techniques not far off from those used to study natural ecologies.
The buyers are limited by: (a) what's cheap so they can afford, (b) what's available, (c) what's advertised -- among other things.
There's no "perfect market" where people perfectly "vote with their wallets".
Unfortunately few CS people or other engineers have touched these areas.
Getting across a multitude of disciplines is highly underrated.
As a computer scientist/computational neuroscientist, I don't really buy the analogue vs. digital distinction. Basic information theory tells us that any noisy continuous system is equivalent to a discrete system of a certain resolution/bit-depth. As the author continues to write:
> [...] analog computers embrace noise; a real-world neural network needing a certain level of noise to work.
A few bits are sufficient to describe the output of a real-world neuron. Action potential timing has jitter in the sub-millisecond range, which means that relatively coarse discrete time steps are sufficient for a simulation. Yes, our brains are extremely noisy systems, yet this is exactly the reason why they (in theory at least) can be simulated on a discrete/digital computer.
In practice there is little to be gained from analogue computation, except for a (potentially) reduced energy consumption compared to a digital implementation of the system in question. But on a theoretical level, nothing changes.
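A quick numeric sketch of the equivalence claimed above (illustrative only; the signal, noise level, and quantisation step are made up): once a signal carries Gaussian noise, quantising it with a step well below the noise floor adds essentially nothing to the total error.

```python
import math
import random

def rms(xs):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

rng = random.Random(42)
n = 10_000
noise_sigma = 0.1

# A "continuous" signal corrupted by Gaussian noise of std 0.1.
signal = [math.sin(2 * math.pi * i / 100) for i in range(n)]
noisy = [s + rng.gauss(0, noise_sigma) for s in signal]

# Quantise with a step well below the noise floor
# (roughly 7 bits over the signal's range).
step = 0.025
quantised = [round(x / step) * step for x in noisy]

quant_err = rms([q - x for q, x in zip(quantised, noisy)])  # ~step/sqrt(12)
noise_err = rms([x - s for x, s in zip(noisy, signal)])     # ~noise_sigma
```

The error introduced by discretisation is a small fraction of the noise that was already there, which is the information-theoretic point: the noise, not the digitisation, sets the effective resolution.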
Although your overall point is correct, I disagree with this statement. It is in fact only theory that distinguishes between continuous systems and very high resolution discrete ones, because the time-evolution of an ideal computer is nowhere differentiable while the time-evolution of a physical system is everywhere differentiable.
Indeed, it is not only "discrete" and "continuous" that are indistinguishable below a certain threshold. For any data, there are an infinite number of continuous theories that are all indistinguishable from each other, and also are indistinguishable from an infinite number of discrete theories, which are yet also indistinguishable from each other - all that it takes for this to happen is for us to agree that the data doesn't specify anything below a certain scale. Then, every theory that agrees on the large scale will match the data, leaving room for anything you can imagine at the bottom. So discrete vs. continuous isn't the core point.
I meant to say that the theory required to describe a computation on a practical analogue computer (that is, a noisy one) is no different from the theory required to describe the same computation on a digital computer, because they are essentially both discrete systems.
However, as you point out (at least as I understand your first paragraph), when we analyse/build systems on an abstract level we assume (often as a simplification) that they are ideal continuous systems.
Not only that, but the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) also tells us that a finite region of space with finite energy has a fixed bound on its information content. So if the brain exists in a finite region of space with finite energy, then it can be described without loss of accuracy by a discrete system.
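For reference, the bound in its standard form caps the information I (in bits) that can fit in a sphere of radius R containing total energy E:

```latex
I \;\le\; \frac{2 \pi R E}{\hbar c \ln 2}
```

Plugging in rough values for a human head gives an astronomically large but finite number of bits, which is all the argument above needs.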
Which doesn't usually end well.
Technically you are right - but practically you are wrong. Like how 'technically' any logical or even emotional reasoning can be modeled with if-else structures (maybe with some randomness added). But why aren't we yet able to actually create human-like reasoning? Because that approach is not 'practically' useful. That's why the most powerful ML solutions at the moment aren't realized with Prolog, but with neural networks.
> A few bits are sufficient to describe the output of a real-world neuron.
Possibly. But no computer is so far able to even remotely simulate or emulate what is actually going on within a real-world neuron. And that is required to fundamentally understand the output.
> except for a (potentially) reduced energy consumption
And that is quite a big deal. In a massively parallel system, any per-unit energy saving is multiplied across every unit, so the effect of reduced energy consumption becomes enormously relevant!
> But on a theoretical level, nothing changes.
But on a practical level everything will change.
It's easy to say emotion has no value, until you see it in action bringing some sense of control to say a family that has gone through trauma or a country through war.
It doesn't look like digital computation (not digital encoding) can produce such outcomes.
We are constantly seeing, be it the NSA/Zuck/Wall St/China etc etc having access to ridiculous amounts of digital computational power, but being totally surprised on a daily basis by the realization they aren't in control.
Hm, I don't really see why this should be the case. Emotions are pretty well studied in both animals and humans and are, to put it very handwavingly, merely a global change of brain state/equilibria, modulated for example by brain regions such as the amygdala and/or the release of neuromodulators. From my understanding, there is nothing about emotions that cannot be computed by a digital computer, and there is little about emotions that is related to noise.
I'll let philosophers think about the experience part of your statement.
But all that’s needed is a handful of rules to provide for a system that emotes. You’d probably dismiss it as an inauthentic toy, but emotions actually aren’t the core aspect of agency.
Anyway, the rules just need to assemble a goal, a threshold for equilibrium, and reactions for deviation from that equilibrium.
Bonus points if you account for radiant measurements of equilibrium. What I mean by that is anticipation of adjacent conditions that signal a probable loss of equilibrium, such that the system doesn’t just react to an unbalanced circumstance, but also things that could lead to an undesired imbalance.
A. If the cup is disturbed so that the milk spills, then a negative experience ensues.
B. If a balloon, inflated with ordinary compressed air, sinks onto the grass and pops, a negative experience ensues.
C. Ambulate through an environment obstructed by complex obstacles, and negotiate each obstacle without falling onto the ground. Falling onto the ground will result in a negative experience.
Each of these three goals represents a targeted state of equilibrium: don’t spill the milk, keep the balloon safe, don’t fall down go boom.
Now, layer an array of reactions on top of the branched set of possible outcomes. You can also build up variations on top of each branch.
Positive branches are indicated in moments of success at achieving the goal. Negative branches are indicated upon equilibrium being defeated.
So the computer or robot can externalize its inner state with a happy face or a sad face, but we’re missing some of the emotional range. When would anger display? When the machine can assign blame and consider revenge, of course.
So if an entity (preferably a rival robot, since we wouldn’t want the robot to exact revenge on a person) knocks over the milk, pops the balloon, or tackles the robot, then the obvious motive is to make sure that never happens again; the root cause is the rival entity. Stand back up, destroy the entity, acquire more milk and another balloon, and try to achieve equilibrium, and thus happiness, again.
Prior to reacquiring its happy state, the machine can externalize an angry face if it can assign blame to a detected responsible entity; in all other cases it would simply be sad until it can stand back up, inflate another balloon, and pour itself another glass of milk to protect. If it cannot set things back in order as desired, then it is simply permanently sad (no balloon, no milk, unable to stand or walk), forever.
See how that works? It’s actually not much more complicated than that.
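For what it’s worth, the happy/sad/angry scheme above fits in a few lines. A minimal sketch with invented names (`EquilibriumAgent`, `observe_defeat` are mine, not from any real robotics system):

```python
class EquilibriumAgent:
    """Toy sketch of the scheme described above: one goal, a notion of
    equilibrium, and canned reactions to deviation from it."""

    def __init__(self, goal):
        self.goal = goal              # e.g. "milk stays in the cup"
        self.in_equilibrium = True    # goal currently satisfied
        self.blamed = None            # entity held responsible, if any

    def observe_defeat(self, caused_by=None):
        # Equilibrium defeated: record the deviation and, if a
        # responsible entity was detected, assign blame.
        self.in_equilibrium = False
        self.blamed = caused_by

    def restore(self):
        # Milk re-poured / balloon re-inflated / standing again.
        self.in_equilibrium = True
        self.blamed = None

    @property
    def face(self):
        if self.in_equilibrium:
            return "happy"
        # Anger requires a detected responsible entity to blame;
        # otherwise the deviation just reads as sadness.
        return "angry" if self.blamed else "sad"
```

An agent guarding the milk shows "happy", shows "sad" when the milk spills on its own, "angry" when a rival knocked it over, and "happy" again after restore().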
An agent would need to model the behavior of peers, yes.
But communicate? No. Solve goals together? No.
To coalesce civilization or society? Maybe, maybe not. Socialization among peers is not a prerequisite for agency. Not by a mile.
And certainly not amid a state of nature, where communication and collaboration are not necessities at all.
Emotion might become an aspect of investment in hypothetical experiments performed by an agent: hope that equilibrium might be achieved with less work through communication and collaboration.
But put it this way. A caveman grunts at a wild boar standing on top of a hill. The caveman wishes to discern whether the silhouette atop the hill is potential food, by provoking movement, or an inert object offering the illusory shape of a backlit animal. The boar notices and experiences fear. The boar freezes, hoping the grunt was not directed toward it intentionally.
Is neither an agent? Does the conflict of interests preclude emotion?
The boar models the adversary, and experiences emotion to preserve the equilibrium of staying alive.
The caveman experiences hunger as a loss of equilibrium, which provokes a mixture of anxiety and unhappiness that may cascade into malaise or depression as weakness progresses with starvation. The aggression of the hunt is not anger, although anger may arrive incidentally.
Is the grunt communication? Perhaps as much as any tactic might be. Deceptive communication (bird calls, imitating a female in heat to draw male prey) might still be communication, after all.
But to model nature, there must have been a period where some agents seemingly existed without peers. Those agents likely experienced emotion before cognizant sentience and a rich awareness of the potential for sentience within peers, which most likely precedes a capacity to communicate.
But make no mistake. It is a puppet. It’s a multicore processing circuit with stack pointers, instruction pointers and little else going for it.
It’s your laptop strapped to some motors. It’s not sentient, and has no agency. It’s a guided missile at best. A step above cruise control.
It lacks authority to define where it goes or form a need for continuing to stand. Thus it lacks true agency.
We can ascribe happy/sad to stand/fall, as crude, fundamental binary “emotions” but robots like BigDog are less complicated than amoeboid life found in pond scum.
Consider whether traffic lights are happy or sad, based on whether traffic obeys their signalling. Now consider traffic cameras. Now consider whether an automated ticket for running a red light on camera is an expression of emotion.
This has been an adequate description of me on my way to work on Monday morning at too many times in my life.
I think we might just be disagreeing about how complicated the puppet is.
But you do have options, and the free will to exercise them. You could rob banks, sell drugs, stay in bed, go on a hunger strike, bootleg intellectual property for fun and profit.
You have choices, up to and including suicide.
BigDog can't even commit suicide intentionally.
> It doesn't look like digital computation (not digital encoding) can produce such outcomes.
We have built such systems and they work with enough training, but they must be trained as agents in an environment, not as a model trained on a static dataset - think AlphaGo. I think AlphaGo has learned emotion related to the world of Go (in this case emotion relates to good move, bad move, safety and danger), and its human opponents had a lot to say about how it felt to play against it.
Emotion is not something beyond AI agents. It's just how they plan their actions. They might not be human emotions but they are emotions related to their own activity and goals.
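A minimal sketch of that idea (the function name and thresholds are invented for illustration; AlphaGo exposes no such API): treat the swing in the agent's value estimate as its "emotion" about the current position.

```python
def affect(prev_value, new_value, threshold=0.05):
    """Toy mapping from an agent's value-estimate swing (e.g. estimated
    win probability, in [0, 1]) to an 'emotion' label, in the spirit of
    emotion as a signal tied to the agent's own goals."""
    delta = new_value - prev_value
    if delta > threshold:
        return "relief"   # estimated win probability jumped: a good move
    if delta < -threshold:
        return "alarm"    # the position just turned dangerous
    return "calm"
```

These labels aren't human feelings, but they play the same functional role: a fast summary of how the world is going relative to the agent's goals.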
Emotions being valuable routes to unpredictable/unknowable states good or bad. How do you see this connecting to what the article talks about?
In my mind, he is saying that as the transition to such systems happens, we lose control over them.
I work with digitisation and automation in a Danish municipality, and I’m actually one of the author’s “hidden” hands-on architects. Because I’m a techie in a non-tech world, and because a large part of Enterprise Architecture is the business end, I see the impact on the non-tech-savvy world daily. Digitisation has absolutely changed the way organisations work, and not always in the way we intended. There is a reason AI/ML is so hyped now, and it’s because we’ve trained our organisations to utilise Business Intelligence in every aspect of their daily lives. It’s more than that though: if you give people a system, they’ll use it, and not always in a manner that makes sense. From managers trying to make informed decisions to employees simply trying to do their best.
An excellent example of the dangers popped up last summer. We were reviewing a process scheduled for Robotic Process Automation, to see how suitable it would be and to sum up our benefit realisation prospects. Only it turned out that the process wasn’t really sound at all. In short we had a team of employees who spent a lot of time distributing tasks in Outlook. They even had a colouring system in place, one colour per employee, to make it easy to spot your individual tasks once distributed. Except they had more employees than there are standard colours, and because they didn’t know how to make more colours some people had to share. They wanted it automated, and we could have done that.
Only, the entire thing was just silly. It was even more silly than I’ve just outlined. Because after they had colour distributed the tasks, each employee would archive their individual tasks in our ESDH system (electronic journaling). During this process, everyone would fill out a standard ESDH form, where you also need to select a responsible employee. So basically they were distributing tasks twice.
No one had questioned this for almost a decade, and these are intelligent workers, mind you; to them it was just how the systems worked. We swooped in, looked at it for about 15 minutes, and then forwarded them to a LEAN consultant who saved their department around 600 yearly hours of needless bureaucracy.
And that’s just one story.
There are two elements to this. One is the idea that there's an executive class which is born (or at least educated) to rule, and the other is the not-quite-identical observation that many people lack genuine independent agency, either by nature or because they lack the political/managerial leverage needed to make productive changes.
The personal part of the "digital revolution" was supposed to be a way for people to explore independence, agency, and creativity. It actually turned into yet another scheme by which those who believe they're born to rule can use algorithmic machinery to farm and control the economic and political activity of everyone else (input and output), without the stickiness and inertia of traditional long-term employment. Which was itself another form of farming and control, but with hard-fought humane benefits.
This is a political problem, not a technological problem. It can only be solved with technology where the political situation allows it.
For now it seems to be true that most humans are rule-takers and mimics, not creative innovators or strategic thinkers. I have no idea if that's a genetic limitation or an educational one. It would have been interesting to see what would have happened if personal computing had gone in the direction it was originally supposed to, and education had followed.
Things aren't perfect, but there are opportunities and to spare for those looking.
It also removes the ability to take a short mental break and go from focused work to seeing the bigger picture. Outlook is easy to use, and it's usually different in colour and UI from most other work-related software. It's not fully relaxing, sure, but those 600 hours a year of calendar management that you cut out are now potentially 600 more hours of stress for the team members, who are being pushed to accomplish more whether they were at their stress limit or not.
Also, if you have a migraine so bad that you couldn't work, you'd call in sick. In fact, a good manager would send you home sick if they spotted you. We have paid sick leave in Scandinavia. If you have recurrent migraines, we even have national programs that will compensate your workplace for much of your sick leave.
1) Algorithmic tech products replacing basic services are growing beyond even their creators' control
2) Analog computing will replace these algorithmic products and fix all the problems
I disagree. Not with the "growing beyond control" bit, I think that's clear, but with the analog computing bit.
First, Dyson doesn't explain what he means by analog computing, other than a basic "operates on real numbers and continuous functions." He also doesn't give any clues about what hardware or software for these systems will look like, or specifically how they will outcompete what we have now (Google, FB, etc).
Second, I think he's wrong. The only things that determine the broad direction of society are who has the power and money. This has been true as far back as you go, whether kings or merchant guilds or the citizenry, when enough of them decide to work together for a revolution. And there's one fact Dyson ignores:
No matter how much or little control the owners of these algorithmic tech products have over the direction and consequences of their use, they still get the money.
As long as Google, Facebook, Amazon, and the others are making the money, the unintended consequences of their algorithms dictating the shape of our lives will continue to be irrelevant.
I think he means that neural networks are not binary but use analog communication between the neurons. We simulate that digitally, but maybe a system that is analog by nature is better for artificial neural networks.
No idea if this is actually the case, or if this is what he meant ;).
Uhhhh, it didn’t end well because (in the book) humanity was doomed to evolutionarily tear itself apart, and the Overlords knew this. Seems pretty disingenuous to use that fictional scenario not ending well in this context.
Yes and no. Humanity ends as something else takes its place. Quoting from the book (haha of course I just pulled this up on my Kindle):
But there is one analogy which is–well, suggestive and helpful. It occurs over and over again in your literature. Imagine that every man's mind is an island, surrounded by ocean. Each seems isolated, yet in reality all are linked by the bedrock from which they spring. If the ocean were to vanish, that would be the end of the islands. They would all be part of one continent, but their individuality would have gone.
Telepathy, as you have called it, is something like this.
Swap "telepathy" with "telecommunications" and the analogy is–well, suggestive and helpful.
The book is one of my favorites. Lo and Behold is also worth a watch.
As an aside, to anybody reading this that enjoys science fiction: read Childhood's End. It's my favourite book.
The internet presents us with a similar existential threat to human identity as in the book. Like the masks hiding a higher intelligence at the end of the Difference Engine, the apotheosis of the internet threatens to rob us of our individuality but offers the chance of the development of a higher form of consciousness, like the Overmind in Clarke's book.
But this article went in a different direction.
They push knowledge, they help draw a personal evolution path, they solve problems. Today we have something similar with Free Software.
Those ideas are not much accepted by "the big and the powerful" because they mean a real free market in which only knowledge and work have value, not marketing, and certainly not secret commercial agreements.
So yes, the digital revolution has turned from a free, academic, intelligent project and dream into a new way to subjugate whole populations. This started to happen years ago, and it is now so evident that even people totally ignorant of IT are starting to feel it.
A few actors have succeeded in displacing public knowledge into private companies, and in displacing the free market with something like a Soviet-style planned economy, drawn up not by a dictatorial government but by a few equally dictatorial companies/funds. The next step is to abolish public (paper/coin) money entirely, substituting it with digital payments only; another step is to declare "politics" a bad thing to be abolished, substituting it with a corporate council, as the "Continuum" TV series foresaw, and as George Orwell foresaw even earlier with "1984".
Beware one thing: in the past, dictators needed manpower, so "force" was not all in a few hands. In a not-so-far future, manpower will not be needed anymore, so the chance to revolt and gain freedom gets lower and lower.
Imagine a future with autonomous robotic armies, cars, and "smart" devices everywhere. Think about how we could revolt, and against whom. Also think about how we could communicate, tied to proprietary devices and platforms, without pens and paper anymore, without the knowledge to write text with pen & paper, without the knowledge of "past" things like the postal system and newspapers; we would know of them the way today's people know of the ancient mimeograph and polygraph...
What we may very ironically be witnessing with the recent international populist uprising that seems in part aligned with certain trans-national oligarchical interests is the emergence of the beginnings of an actual global government. Unfortunately it seems to be a global government run by gangsters.
The fact that US Democrats (and a few Republicans) are hung up on "Russia" shows that they don't get it. Russia is as irrelevant as the USA or China to these people. I mean sure some of them are Russian and may have ties to the Russian government, but Russia and Putin are just tools to them as are all other national and governmental powers. They're trans-national and represent an emerging power that has no national loyalty.
We're heading for a world run by gangsters and corporations. William Gibson remains the most prophetic of all sci-fi writers.
The idea certainly isn't new, having been written about in the '50s in Stanisław Lem's "Dialogi". The observation about digital systems is certainly fresh though.
The perception that programmers are in control is dangerously inaccurate but I'm not sure how we can go about educating the public without destroying the trust we have left.
What is metazoan computing?
Given that our discrete computers encode many analog models, what exactly is he suggesting will be so now that it constitutes a new era?
Reads a little like the writings of someone who so wants to say something bigger than everyone else that they end up not saying anything to anyone.
Analog models are neat though, and pondering “could we do this analog” is a good question to have in the design quiver.
Exactly, similar to some of the (not all though) "insights" you have under the influence of psychedelics.
But psychedelics have been clinically shown to help people have genuine insights.
Oh, you must be new to Edge.org, which for as long as I've heard of it has been home to some of the most egregiously pretentious intellectual wankery about computers and technology and shit -- the kind of stuff that The Guy I Almost Was makes fun of. When some of your most level-headed essays come from Eric S. Raymond, it's time to acknowledge that you haven't just misplaced the plot.