> “SANDHOGS,” THEY CALLED THE LABORERS who built the tunnels leading into New York’s Penn Station at the beginning of the last century. Work distorted their humanity, sometimes literally. Resurfacing at the end of each day from their burrows beneath the Hudson and East Rivers, caked in the mud of battle against glacial rock and riprap, many sandhogs succumbed to the bends. Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar of old long since razed, its passenger halls squashed underground—might sympathize. Vincent Scully once compared the experience to scuttling into the city like a rat. Zoomorphized, we are joined to the earlier generations.
This goes on for about seven paragraphs before I have any idea what the article is about. I understand “setting the scene,” but I can’t tell whether or not to care about an article if it meanders through this flowing exposition before indicating its central thesis.
It seems like a popular style in thinkpieces and some areas of journalism. The author writes a semi-relevant title, a provocative subtitle, and five to ten paragraphs of “introduction” that throw you right into the thick of a story whose purpose doesn’t seem clear unless you already know what the article is about. Rather than capturing my attention with engaging exposition, I find it takes me out of it. But it must work if it’s so ubiquitous; presumably their analytics have confirmed this style is engaging.
"But, I explained to my work colleagues as the Princeton local pulled out from platform eight and late-arriving passengers swished up through the carriages in search of empty seats, both the original Penn Station and its unlovely modern spawn were seen at their creation as great feats of engineering."
I had to highlight between the commas to get through that one.
The content need not be true, but at least everyone will be happy with their preferred writing styles....
If I could read fiction that is written exactly for me I would love it. And as for "non-fiction", I reference check any particularly interesting claim anyway, so I'd be happy to try and use the AI for that too. The way I see it, reading is much more about exercising the brain in thinking about new things than about learning new facts.
And yet a world of Pop Tarts is sooooooo boring... And no one makes heart stoppingly good fish stew using Pop Tarts.
This fella may not have written the best piece of the week, and we may not remember this piece tomorrow - but I think the fact that he's attempting to create something gives him a chance of actually getting there. Looking at a dashboard completely kills that, in my opinion.
Screw the stats! Make what you think is good!
IMO the author makes some very valid points about fuzzy products and endpoints in the current AI/data/ML/magic craze. These are under-articulated elsewhere because, well, hey, there's a lot of money flowing! Who wants to be a killjoy and not "get it" (just like in 1999 ;)?
Two more specific points:
1. The descriptions of the CEO are eerily familiar to me. This guy is almost an archetype. Reminds me of a person I've worked with in that role who was also associated with a similar-ish company. It really paints the con-game side of all this.
2. A deeper point (and worth the read for me) was the author's thinking about how all this didn't fit existing needs and workflows and then has a chilling thought:
"It’s possible that the market for a user-hostile data system that inaccurately predicts the future and turns its human operators into automatons exists after all, and is ..."
You can make an argument that this kind of thing has already happened in modern customer service and, with greater negative impact, in healthcare. I.e. where the tail of easy metrics and saleable endpoints ends up wagging the dog of quality.
There's a meme going round about how the best way to refute an argument is to 'steelman' it: present the best arguments of the opposing side before refuting them. He doesn't do that here, which is one of the reasons I found it frustrating.
I agree that the way the venture-raising market works today deserves some fair criticism.
This is a perfect summary of the VC situation today. Too much money chasing no-one knows what exactly, but they're sure they'll know it when they see it.
"... judge the merit of a new idea in AI according to the perceived intelligence of its developers."
about information technology VCs and AI is just totally wrong: I don't believe VCs do that. Why? Generally, from 50,000 feet up, it's too far from the norms of the accounting, banking, and investing communities respected by the limited partners of the VCs. Uh, the limited partners (LPs) are where the VCs get nearly all the money they invest, and the limited partners are conservative people, managers of pension funds, university endowments, etc. Not only do the VCs not do that, the LPs won't let the VCs do that!
Instead, about the shortest believable view I can see is, VCs look for traction that is significant and growing rapidly in a market large enough to permit a company worth $1+ billion in a few years.
The VCs' view of traction is a weakening of the usual measures the accounting, banking, and investing communities use and respect: audited revenue and earnings.
So, sure, the best form of traction will be earnings, then next best, revenue, next best lots of interested customers, e.g., advertisers willing to pay for eyeballs, then last best, just lots of eyeballs. In these norms, intelligence, brilliance, AI, technology, etc. are mostly publicity points, window dressing, the wrapping paper on a birthday gift, and with a dime won't cover a 10 cent cup of coffee.
In a sense, the VCs have a good point, more from insight into humans and the real world than anything in a pitch deck: (1) With technology, it's too easy to push totally meaningless, useless BS. (2) Carefully studying core, deep, difficult technology is just too darned difficult to be practical for the VCs.
Or the investors believe in a Markov assumption: The future of the business and the technology from the past are conditionally independent given the current traction, its rate of growth, and the size of the market. To be clear, this Markov assumption does not say that the technology and the future of the company are independent.
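Spelled out explicitly (my formalization, not the commenter's notation), the conditional-independence claim is:

```latex
% F = future of the business, H = history/technology,
% T = current traction (with its growth rate and market size).
% The claimed "Markov" assumption:
P(F \mid T, H) = P(F \mid T),
\qquad \text{i.e.} \qquad F \perp H \mid T .
% Note this does NOT say F \perp H unconditionally: technology can
% still matter, but only through the traction it produces.
```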
The stories in the OP about the company Predata, to abbreviate "predictions from data", are good: The company was floundering around with guesses about what would work, e.g., for predicting terrorist attacks, that were like something from smoking funny stuff.
But here is one big place the VCs and technology are going wrong: We do have some terrific examples of how to do well. The examples are from the past 70+ years of the unique world class, all-time, unchallenged grand champion of using advanced, even original, technology for important practical results -- the US DoD.
A grand example is GPS. GPS was built by the USAF, but it was a refinement of an earlier system by the US Navy, for navigation for the missile-firing submarines, and started at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). At one time I worked in the group that did the original work and heard the stories. A key point: The original proposal was by some physicists, almost just on the back of an envelope. Soon the project was approved and pushed forward with a lot of effort. Then, presto, bingo, it all worked just as predicted on the back of the envelope. E.g., a test receiver on the roof determined its position to within one foot, plenty accurate enough for the US Navy.
So, net, for project selection and funding, here is the shocking, surprising, point that the VCs miss: Really, given the back of the envelope work, the rest was relatively routine and low risk.
And the past 70+ years of the US DoD is awash in comparable examples.
In blunt terms, the US DoD has a fantastically high batting average on far out projects evaluated just on paper. Given good evaluations of the work just on paper, the rest is relatively routine and low risk.
Well, that project funding technique does not fully solve the problem of the VCs: The VCs also need to know that the resulting product will have big success in the market. But for that there is an okay approach: The dream product would be one pill, taken once, cheap, safe, effective, that cures any cancer. In that case, the technology is so good for such an important practical problem in such a large market that there's no question about making the $1+ billion. So, from this hypothetical lesson, net, the technology needs to be the first good solution, or a much better one, a "must have", for a really pressing problem in a big market. So, right, this filter would reject Facebook, Snap, and more. So, right, need to start with a really big problem where, with new technology, say, as in the US DoD examples, can get a "must have" solution; Facebook and Snap are not such problems. Just what are such problems? That's part of the challenge. But with current VCs, come up with such a problem and a solution on paper, with brilliant founders, with AI, etc., and you will still need more than a dime to cover a 10 cent cup of coffee. Again, to get VCs up on their hind legs, bring them good data on traction, significant and growing rapidly in a large market; if the secret sauce technology helps, fine; brilliant founders, fine; even if there is no technology, fine; in all cases, what really matters is the traction.
And do you have a reference for the "fantastically high batting average" of US DoD research? Are you familiar with the SBIR program, for example?
I would judge that neither DoD/DARPA nor VCs have a great batting average. But both have some spectacular wins.
To be more clear, I believe that such other issues, often mentioned, some on the Web sites of VCs, are nearly all just smoke to hide what I listed as the main issues. In particular, of course, I was pushing back against the statement I quoted from the OP -- their statement was much worse than mine!
But here on HN, I warn entrepreneurs who have already sent 100+ e-mail pitch decks to VCs: I gave my best guess on how VCs really select deals.
Batting average reference? I'm not considering the SBIR program at all. E.g., GPS, coding theory, e.g., as part of radar, lots more in high end radar, e.g., phased arrays, Keyhole (a Hubble before Hubble, but aimed at the earth), the SR-71, the F-117 stealth, the SOSUS nets and adaptive beam forming sonar, some of the ABMs, a huge range of parts of the SSBNs, high-bypass turbofan engines, the nuclear power reactors on the submarines and aircraft carriers of the US Navy, and much more were not SBIR projects. I am drawing from early in my career in applied math and computing for problems of US national security within 100 miles of the Washington Monument and comparing with what I've seen in VC work.
The Navy's work on rail guns looks darned promising.
For DARPA, yes, they flop a lot, on their batting average much more than the rest of DoD, but DARPA also has some spectacular wins. E.g., of course, TCP/IP. And they fooled me on their autonomous vehicle "challenge": While I believe that autonomous vehicles are a long way from being ready for real roads with real traffic, I can believe that so far the DoD has already gotten some good progress for some cases of logistics. E.g., one of the issues in Gulf War I was truck drivers. An issue there was that a lot of the drivers for the US were women, and the Saudis didn't like women driving vehicles. So, there was a trick, a deal: The US and the Saudis agreed that when the women were in uniform and driving US military vehicles, they were "soldiers" and not women. Otherwise they were still women and could not drive!!!
Uh, the robots of Boston Dynamics are impressive, maybe still less good on legs than a cockroach, but already or well on the way to being useful for the US Army.
These people were eating VC hype money to build Hagbard's FUCKUP from the Illuminatus! Trilogy. 
Not sure who I feel most sorry for: the smart employees wasting years of their prime chasing some unattainable pipe dream, the VCs who got suckered into pouring their money into some vaporware precog technology, the author trying to disguise a shit river with meandering prose, or my upcoming pay cut when the AI winter sets in.
 First Universal Cybernetic-Kinetic-Ultramicro-Programmer (FUCKUP). FUCKUP predicts trends by collecting and processing information about current developments in politics, economics, the weather, astrology, astronomy, the I-Ching, and technology.
Some of PreData's recent "insights":
"China Trade War Fears Still Running High"
"Mall Blaze Sparks Outrage Across Russia"
In short, nothing that couldn't be revealed from the briefest skim of headlines from tomorrow morning's WSJ.com. One can stay better informed leaving a Bloomberg TicToc (which is partially machine generated) tab open all day.
My takeaway is that the world of the Jim Shinns is rapidly approaching extinction. Deals done poolside at country club dinner dances. Name game shmoozing. And serendipitous encounters on private islands. What was considered the predominant pathway to immortality in Fitzgerald's day.
Viable alternatives exist now. And any business model solely differentiated by prestige will be subsumed by free or near-free competition.
> Three months later, Predata secured a second round of venture capital funding.
People like Jim Shinn will always find a way. At least that's the argument the author seems to be making.
That's a really embarrassing mistake.
But I found this Sunday AM read enjoyable, articulate, and largely on-point (overlooking a few minor scientific errors).
The core themes here are about the hubris of a rich CEO/founder, the zaniness of the current AI "market," and their resultant effect on a particular NYC startup.
This is a season of "Silicon Valley" (HBO) done east-coast, hedge fund, Ivy League style.
I'd be shocked if anyone in the industry hasn't worked for or with a Jim. Spot-on.
Technology without vision is dehumanizing - it happened with Penn Station, where narrow quantitative and engineering goals displaced the broader human ones and led to the widely-hated station that's there now, which was excavated by people who were called hogs, and which makes passengers feel like rats. The loss is especially acute there, since everybody knows what the old station was like ( https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&... ). It was an edifice comparable to the great gares and bahnhöfe of Europe (or to Grand Central which for some reason we decided to keep), a monument to national power, industrial wealth, and the technologies of the time, but also a space that evoked something a little more noble in the human spirit somehow.
The writer is also drawing a parallel with the dehumanizing effect of the particular startup he worked for. The analysts are the hogs, he's the rat, his own perceived loss of creativity (probably a bit exaggerated... aahhh youth) is the dehumanization part, and the absentee CEO is the lack of vision. (If a CEO has one function, it's to provide vision. And in second place, not far behind, is to establish company culture.)
Arguably, placing technical/quantitative goals above more humanistic ones is what an organization like Nazi Germany was all about. But obviously it's way more complicated than that, and I don't intend to address it further.
I would point you toward Dmitri Orlov's concept of a Technosphere. Analogous to the "biosphere" it models human technology as a quasi-intelligent entity that is global in scope.
Excerpt (not much exposition but you'll get the point): https://cluborlov.blogspot.com/2016/02/the-technospheratu-hy...
Everybody here is among those who most need to hear this message. Some will doubtless resist the criticism of ML/datasci with the fervor of someone whose long-held religious belief is challenged for the first time. But you needed that. Feel free to prove the critiques wrong, by the way... that's kind of the whole point. Prove them wrong with broad projects that actually benefit humanity instead of being a mess of unintended consequences and unimpressive bullshit.
He is right about his claim of having no right to be called a “director of research", as it seems to me his skills center on cribbing thoughts pulled from other people's thinkpieces. It's clear that he doesn't have a deep background in either neuroscience or engineering and that he was brought to the company from a background in business journalism.
In his condemnation of the state of AI research, there is no mention of AlphaGo, or a description of the teachable pattern recognition techniques that have swept the deep learning scene over the last 6 years.
I'm sorry to be so harsh, but there is a certain tone to this piece, "let's hate all those startup a*holes", "Mark Zuckerberg can't write like F Scott Fitzgerald because his knowledge of liberal arts is too limited, unlike mine" that seems like a snooty class signaler among a certain hipster set.
There is a compelling story in here, but to me the general attitude is just condescending to everyone around him.
Even without extrapolating from the pattern recognition tools we have today, whole classes and ranges of jobs can be fully or partially eliminated.
Here is what he says about the state of AI:
> Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.
> These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on.
"the brain has spiritual magic"
Compare that to quotes from real live top human Chinese Go players defeated by AlphaGo last year:
> “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”
> “AlphaGo has completely subverted the control and judgment of us Go players,” Mr. Gu, the final player to be vanquished by Master, wrote on his Weibo account. “I can’t help but ask, one day many years later, when you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”
For me the interesting theme was the exploration of the character and dubious success of the mysterious Jim, who is using his connections to ride a wave of poorly understood and possibly malevolent technology to a grand house in the country and membership in the upper classes--almost like flotsam on the rising tide of economic progress. There's a lot of Gatsby in there and some hints of Graham Greene's quiet American as well. Just as you can argue that AI the technology is part and parcel of industrialization, its social effects recall recurring conflicts in American society and culture that authors like Fitzgerald have been exploring for 150 years.
As for other critiques of the style from the Hackerati I would just say yes, it seems like the work of a young writer. Good writing is hard to achieve and it's typically preceded by a lot of bad writing. To paraphrase Senator Palpatine, we will watch his career with great interest.
Fixed: typo/it really is hard to write well
And where is the AI in a humaniform that does all of the above?
There are tentative steps towards some of those activities, but we’re still in the early years with imbuing our machine intelligence models with the equivalent of our kinesthetic sense, object recognition and classification, and natural language interaction. And it is far from clear that we can get there with purely current statistical heuristic-oriented technology. We can only try, but the amount of effort required just for folding clothes to date reminds me of the elaborate Ptolemaic models, or as if we’re trying to build Excel by poking ones and zeroes into memory.
More tinkering required, be back later.
AI will not always have to present itself in a one-to-one relationship with humans to be able to compete with or outcompete us. Just as an example, a lot of things have been digitized, which has rendered many things that used to exist in physical form into digital form; music is a good example of that.
So sure, there are areas machines aren't as good at yet because they haven't practiced enough, but it's literally just a matter of training and improving, not some fundamental problem that can't be solved.
I heard this echoed many times before when pawing through the library stacks in my uni days, looking through the littered corpses of past AI trends. I believe that we will eventually get to strong AGI. But after either reading about or seeing in person the hype machine sprout up and wither around symbolic programming, semantic programming, neural nets, fourth generation languages, expert systems, perceptrons, the Connection Machine, etc., I'm gun-shy around any proclamation that achieving strong AGI is "just a matter of... <insert-single-solution-space>". The results so far seem to indicate pure cognitive processing is very amenable to the toolbox we have built up to date in AI research, hence the breakthroughs in game playing.
Manipulating and interacting with the material world and humans however, and the results are a little patchier; I suspect we have lots more work and research ahead of us than we currently realize. When we do get some initial results like the laundry-folding machines, they're single-purpose and uneconomic for mainstream middle-class adoption (not to speak of working-class), and often with lots of attached caveats like Tesla AutoPilot. Instead of all these discussions of whether or not we will get strong AGI, I prefer to see everyone assume it will happen, and when we don't get the incremental result we were anticipating, say, hmm, that's interesting, I wonder why...
I want to see the hype tamped down to the point we can steadily chip away at the overall problem space, and accelerate AI research results and organically reach strong, economically-available AGI sooner than continue experiencing the disappointing two-steps-forward-one-step-back our industry seems to so far historically take in this field. The hype says we're a sprint away from unlocking all sorts of benefits promised by strong AGI, when we are better served accepting the organic incremental benefits as they occur during our acknowledged marathon, and using those incremental benefits as stepping stones to greater understanding.
Again, humans evolved from dumb atoms; why is that easier to believe?
Inferior to a human, sure. But a start.
Yes, but I don't think you are, either.
The fact is that chess and Go abilities aside, we are probably not even close to insect-level intelligence, and we don't have a clear path of getting there soon -- let-alone anything human-level. Current state-of-the-art so-called AI is basically powerful statistical regression algorithms, that are heuristic improvements over core algorithms invented in the 1960s, and there have been few theoretical breakthroughs since then (far fewer than in most other fields), so much so that many consider machine learning to be in the pre-science or pre-theory stage, being mostly about collecting data and trying out heuristics. It's silly to deny recent successes -- largely due to better hardware (although hardware progress is slowing down quickly) -- but we are behind, not ahead, of where we thought, even as recently as the 1990s, we'd be by now.
At this point we have no idea what role statistical regression plays in intelligence or whether we're even in the right direction. That statistics has become synonymous with intelligence (it used to be synonymous with lies) is certainly a cultural phenomenon that is not directly related to our actual knowledge of the field.
That computers perform some mental tasks (certainly more and more of them) better than humans has been a fact of computing since the 50s, and often a cause for wild claims. The invention of neural networks in the 40s and their implementation in digital computers in the 50s led some very respectable people (like Norbert Wiener) to declare that the problem of the brain will be solved in 5 years. The pragmatic Alan Turing thought that was ridiculous and predicted it would take 50. It's been almost 70 years and we haven't yet reached insect-level intelligence or anywhere near a complete understanding of the insect brain, so at this point, any claims that we are on the cusp of something, or starting to believe that our statistical regression algorithms reflect the beginning of intelligence is... misplaced.
On the other hand, it seems like we have not learned the lessons of misplaced confidence in AI, despite our relatively slow progress, and things are worse now in that we actually have some algorithms that are useful in certain restricted domains that we insist on calling AI, thus causing people to use them in domains where they are not only useless but downright harmful. In the meantime, some people draw attention to the dangers of real AI -- which may be anywhere between decades and centuries away (I believe we'll get there some day but we have no idea how or how soon) -- while distracting from the very real and already present dangers of "AI".
I think the problem is that we do not have the slightest clue what (even insect-level) intelligence is (or consciousness, which is often mixed up in the discussion).
Seems like they managed a honeybee (but I am not sure that it ran in real time or how they validated that) but were hoping for a rat brain.
I'm quite sceptical - I don't believe that there is a good understanding of how a single neuron functions, or agreement on the taxonomy of neurons or an understanding or agreement on their interactions and arrangement apart from in a part of the vision system where there do seem to be some good models.
That said, isn’t the point of the Blue Brain project to answer your skepticism? When we have all those things, the only thing remaining is to see what can be left out of the sim without compromising the behaviour?
(Which is to say “I ought to volunteer”).
Playing a game with fixed rules and a finite set of potential states is something a computer can do.
Designing the computer that does that is intelligent.
There is no connection between the two and one does not lead to the other.
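The first claim can be made concrete: a game with fixed rules and a finite set of states can be solved by blind exhaustive search, with no understanding involved. A minimal sketch (the game, a toy 1-2 Nim variant, is my example, not the commenter's):

```python
from functools import lru_cache

# Exhaustively "solving" a finite game by brute force: the player to
# move takes 1 or 2 stones; whoever cannot move (zero stones left)
# loses. No understanding required, just memoized search.
@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    if stones == 0:
        return False  # no move available: current player loses
    # A position is winning iff some move leads to a losing
    # position for the opponent.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2) if take <= stones)

# Known result for this game: multiples of 3 are losses
# for the player to move.
print([n for n in range(10) if not player_to_move_wins(n)])  # [0, 3, 6, 9]
```

The search never "knows" it is playing a game; it just maps states to states, which is exactly the distinction the comment is drawing.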
The fact that some of the people developing 'artificial intelligence' have such a limited understanding of what intelligence is no doubt contributes to the mocking tone of some critics.
I had a brief argument with Robert Epstein (the author of that article) because I find the argument that humans don't actually store information to be quite misleading and missing the point.
The most obvious mistake is this:
"We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."
This is both true and false. It's true we don't do that like a computer, but it's also wrong to claim that computers fundamentally do that.
That happens several layers of abstraction up. A computer fundamentally doesn't store an image or a word either; it manipulates atoms and turns circuits on and off, and several layers of abstraction up that gets translated into meaning, first by machines, then by humans.
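As a minimal illustration of that point (my example, not the commenter's): the same stored bytes only become a "word" or a list of numbers through the interpretation layered on top of them.

```python
# The same raw bytes carry no intrinsic meaning; meaning comes from
# the interpretation applied several layers of abstraction up.
raw = bytes([72, 105, 33])  # just numbers held in circuits

as_text = raw.decode("ascii")  # one interpretation: a greeting
as_numbers = list(raw)         # another: three small integers
as_hex = raw.hex()             # another: a hex string

print(as_text)     # Hi!
print(as_numbers)  # [72, 105, 33]
print(as_hex)      # 486921
```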
Anyone who has a hard time believing machines can become sentient should first ask themselves why they have a harder time believing that than accepting that dumb, inanimate matter has somehow become the pattern-recognizing feedback loops that are us.
This argument deserves much more recognition than it gets, because (1) it's at this point still empirically true, we have not observed non-organic sentient life, and (2), more importantly, because not Searle but everybody else employs 'magic'.
Searle's point is simple. Computation is subjective. Electricity flowing through a machine doing complex things is just a physical process like anything else. You (the sentient observer) classify that physical process as meaningful, but a computer is no more 'computing' things than a falling pen computes gravity.
So sentience really is related to physical agency and sensory experience in the world, which creates consciousness in organic brains. That doesn't imply complexity or intelligence or understanding. Syntax and semantics are different things. Your pocket calculator processes the syntax of mathematics, but it does not understand the semantics of mathematics. A compiler processes symbols according to rules, but it does not understand the meaning of the computation; it has no cognition. It might be very good at what it does, but it has no capacity to understand. That's the essence of the Chinese room, and it's still a convincing argument.
An even stronger point might be made, namely that sentience actually limits intelligence. That it requires a degree of slowness and introspection that is unsuited for fast decision-making. For a fictional treatment of this, Blindsight by Peter Watts is an excellent read.
Of course, the person in the room doesn't understand Chinese, just like an individual neuron in a Chinese speaker's brain doesn't understand Chinese.
It's the entire house that is the system; the person in the room is just one part of it.
And yes, he does imply that there is something magical about the way humans are pattern-recognizing feedback loops versus machines, since humans came from inanimate matter ourselves. If he doesn't, his argument simply does not hold up, as there is nothing magical in the way human consciousness is a byproduct of simpler systems all combining to become a sentient one.
If you can buy that humans have evolved from dumb atoms, then you have to look not at how humans and "machines" are different but at how they are the same.
The sameness is that we are pattern-recognizing feedback loops and that our sentience comes out of something non-sentient.
So either something magical is in play, or, from what we know, there is nothing that hinders machines from becoming sentient.
A human consists of millions of sub-systems like a calculator, and yet we are somehow sentient.
Furthermore, there is no known upper limit to how complex silicon-based systems can be, so the right answer really is, if anything, "we don't know", not "because the person in the room doesn't understand Chinese, it proves that systems can't". The person in the room is not the system; the system is the entire house, including everything happening outside of the room.
In other words, unless Searle is claiming magic at some level, nothing, absolutely nothing, indicates that machines can't become sentient.
With regard to your last point, that's the wrong way to look at it.
A better way to understand why it's possible is to start from omniscience and realize that omniscience means you are aware of everything and thus have no perspective, whereas all systems that can handle information can potentially become sentient the more complex they become.
How about the brain creates information from constant interaction with the world, based on the kinds of bodies we have and our needs/wants? This information doesn't exist as information until the brain creates it. Information is the product of minds. It doesn't exist in the world on its own to be processed. As such, the brain is something other than a computing device. Computers exist because we figured out how to arrange physical systems to process information that's meaningful to us. But to nature, it's just a physical system (and not even that, since physics is a model of nature we create).
That's Jaron Lanier's paraphrased argument against thinking of the brain as a computer. To say that information exists in the world to be processed is to make a metaphysical commitment that information exists ready made for us.
> and there being something magical about human brains that cannot be simulated
It doesn't have to be magical. There are different philosophical views on the world and the mind which lead to different conclusions. If one takes the hard problem of consciousness seriously, then consciousness cannot be computed. Not because of magic or the supernatural, but simply because consciousness is not computable, since computation is itself an abstraction (Turing machines don't exist on their own any more than any other mathematical systems do) — unless, that is, your metaphysics falls along the lines of Tegmark, Plato or Wheeler (it from bit).
Instead you can think of the brain as an information creator. We give meaning to the world. We build models. The world itself just is; it's not information, math, physics or symbols.
What is true is that nobody has done it yet. The process is a mystery in the sense that it's not understood, which means that we don't know if it's computable or not.
What is certain is that there are uncomputable problems. But are any of the problems that humans solve in order to speak, act, and socialise uncomputable? Some people think that because these problems are solved within the physical universe, they must be computable — but that assumes the physical universe can be simulated (in principle) by a universal Turing machine. Since we can express problems (using the machinery of the universe) that a universal Turing machine can't solve, there is a gap that permits the possibility of some process which is not simulatable by a universal Turing machine.
In my belief, free will/autonomy/initiative/creativity are expressions of that process, but belief is not an argument.
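The claim above that uncomputable problems exist can be made concrete with Turing's classic diagonal argument against a halting oracle. A minimal Python sketch (the names `make_paradox` and `pessimist` are illustrative, not from this thread):

```python
def make_paradox(halts):
    """Given any claimed halting oracle halts(f) -> bool,
    build a program the oracle must get wrong."""
    def paradox():
        # Do the opposite of whatever the oracle predicts about us.
        if halts(paradox):
            while True:   # oracle said "halts", so loop forever
                pass
        # oracle said "never halts", so halt immediately
    return paradox

# Any concrete oracle is defeated. This one claims nothing ever halts:
pessimist = lambda f: False
p = make_paradox(pessimist)
p()                   # halts immediately, contradicting the prediction
print(pessimist(p))   # False, yet p() just halted
```

Whatever the oracle answers about `paradox`, the program does the opposite, so no total `halts` function can be correct on every input — which is exactly the kind of gap between physical processes and universal Turing machines the comment is pointing at.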
If you believe the brain exists in the physical universe, that means you can build a physical system that also solves the same problems.
Sure, but in the case of computation, is it possible to make a computing system that has subjective experiences? Maybe consciousness isn't something that can be expressed in computational terms, because computation is itself based on abstraction.
- "It" wouldn't be a "computer"
- you / someone would have to be able to understand it, which might be impossible (for a human)
- you would have to be able to construct it, which might be very very technically hard
but yes (ish)
This time round there will be no million fold increase in compute power to bail everyone out!
You need to present the best arguments from the side you want to critique and then present a case for why you think they are wrong. Calling people names and avoiding difficult challenges to your thesis is not the way to do it.
This isn't a Ph.D. exam, this isn't a thesis - it's an outsider calling BS. I'm not impressed by the counter-arguments advanced so far. Let's be honest, AlphaGo and AlphaGo Zero are surprises in that they have shown that Go isn't as astonishingly difficult for approximate search as everyone thought it was - but until we see the real-world applications it's all of intellectual interest... which is the point of the article.
There are a lot of folks who I respect making claims similar to the company featured in the piece. I'm really disappointed by that, because everything we know about learnability is ignored with the cry "we've got deep networks now". We don't know why DNNs generalise as well as they do when theory says they shouldn't, but that's no excuse to abandon our sanity and bet large amounts of other people's money on them doing things that they can't.
This money, btw, should be spent on hospitals and roads, not on providing near 7 figures for these people.
I guess I'm not in their target market then because it reads like a hit piece - so much so that I was sure that all the names were changed!
The author is out of his damn mind for not changing names, but NMP.