The Artificial Intelligentsia (thebaffler.com)
148 points by jamiehall 3 months ago | 88 comments



Off-topic, but I have a really difficult time reading articles like this. I don't know if this reflects a problem with the style or my ability to focus, but I find it really annoying:

> “SANDHOGS,” THEY CALLED THE LABORERS who built the tunnels leading into New York’s Penn Station at the beginning of the last century. Work distorted their humanity, sometimes literally. Resurfacing at the end of each day from their burrows beneath the Hudson and East Rivers, caked in the mud of battle against glacial rock and riprap, many sandhogs succumbed to the bends. Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar of old long since razed, its passenger halls squashed underground—might sympathize. Vincent Scully once compared the experience to scuttling into the city like a rat. Zoomorphized, we are joined to the earlier generations.

This goes on for about seven paragraphs before I have any idea what the article is about. I understand "setting the scene," but I can't tell whether or not to care about an article if it meanders about with this flowing exposition before indicating what its central thesis is.

It seems like a popular style in thinkpieces and some areas of journalism. The author writes a semi-relevant title, a provocative subtitle, and five to ten paragraphs of "introduction" that throw you right into the thick of a story whose purpose doesn't seem clear unless you already know what the article is about. Rather than capturing my attention with engaging exposition, I find it takes me out of it. But it must work if it's so ubiquitous; presumably their analytics have confirmed this style is engaging.


It's not the style: it's just not good writing, but it's trying so hard to be. It's the kind of thing that would show up in a college writing workshop and hopefully get workshopped into something more intelligible. As they say, "Show, don't tell." The passage describes a lot, but not in a way that helps you actually visualize any of it, so it's really hard to follow.


Right, any individual sentence is fine and the idea is probably usable, yet it's not clear how each statement relates to those that came before it.


The next sentence afterwards is a monstrosity:

"But, I explained to my work colleagues as the Princeton local pulled out from platform eight and late-arriving passengers swished up through the carriages in search of empty seats, both the original Penn Station and its unlovely modern spawn were seen at their creation as great feats of engineering."

I had to highlight between the commas to get through that one.


It seems growing up with German is a great preparation for such sentences :)


These complaints about Penn station are also a well-worn cliché.


Just imagine ... that some day in the future journalism will be AI-based and will generate entire articles tailored to your viewing habits, based on extensive psych profiling and A/B testing to maximize clicks and screen time!

The content need not be true, but at least everyone will be happy with their preferred writing styles....

http://karpathy.github.io/2015/05/21/rnn-effectiveness/


What if I don't want the content I'm most interested in? It'd probably give me the computer-scientist version of a tabloid, but I prefer making myself read things that I don't fully grasp.


There will be a slider for that.


That actually sounds quite appealing to me. In recent years I've noticed that the writing style of a book plays a much bigger part in my ability to derive value from it than its content does.

If I could read fiction that is written exactly for me I would love it. And as for "non-fiction", I reference check any particularly interesting claim anyway, so I'd be happy to try and use the AI for that too. The way I see it, reading is much more about exercising the brain in thinking about new things than about learning new facts.


The piece is just overwrought at the sentence level, as in the example below. I think it's partially inspired by trying to sound like an old-style important newspaper columnist, and partially by David Foster Wallace. DFW's long sentences are very readable, though, because they are conversational, so you can understand them perfectly if you read them as though hearing them aloud.


The existence of writing like this is why analytics (and attention) are not good ways of deciding whether style and subject are "working". Clearly many people hate it, just as many people hate garlic; garlic fails the attention/analytics test. Pop Tarts pass!

And yet a world of Pop Tarts is sooooooo boring... And no one makes heart-stoppingly good fish stew using Pop Tarts.

This fella may not have written the best piece of the week, and we may not remember this piece tomorrow, but I think the fact that he's attempting to create something gives him a chance of actually getting there. Looking at a dashboard completely kills that, in my opinion.

Screw the stats! Make what you think is good!


It’s a remnant from the time when we paid by the bundle for longform content and trusted the issuing brand not to waste our time.


Some people enjoy writing and some people enjoy reading. It need not be "to the point" all of the time.


I should clarify: I don't mind "unfocused" writing like this. I can definitely appreciate a creative take on exposition. But I think the introduction of an article is not the most appropriate place to do it. An upfront paragraph - even a few sentences - explaining what is happening would basically resolve this for me.


Would you expect the same of a novel? Why not similarly temper your expectations given the source; some pieces are simply more literary. (NB: I'm only speaking generally because I haven't read the article.) And hey, at least we have the comments of HN to scan for the tl;dr :)


It still applies to creative writing. IIRC, I heard it called a promise in a creative writing course. E.g. if you open with an action scene and the rest of the story has no action, that's a broken promise to the reader. It's useful to give the reader an indication of what they're starting in any piece.


Most novels come with a back cover blurb that tells you what the book is about. And I don't know many people who read novels without first reading the cover.


I actually like The Baffler, but I 100% agree. The New Yorker and the LRB are similar, with The New Yorker being far, far worse. It's distracting and takes away from the story for me as well, and I love long-form journalism.


I appreciate the irony of assuming that the writing style of an article about how data-obsessive engineering cultures strip away the capacity for creative thought and engagement must have been determined by profit-maximizing analytics, especially when the article in question was written for a nonprofit leftist magazine.


These convoluted forms of passive voice do make it harder to parse. It’s almost like the polar opposite of Hemingway’s journalistic style.


A thought: Don't let some of the (valid) criticism alone dissuade you from reading this.

IMO the author makes some very valid points about fuzzy products and endpoints in the current AI/data/ML/magic craze. These are under-articulated elsewhere, because, well hey there's a lot of money flowing! Who wants to be a killjoy and not "get it" (just like in 1999 ;)?

Two more specific points: 1. The descriptions of the CEO are eerily familiar to me. This guy is almost an archetype. Reminds me of a person I've worked with in that role who was also associated with a similar-ish company. It really paints the con-game side of all this.

2. A deeper point (and worth the read for me) was the author's thinking about how all this didn't fit existing needs and workflows and then has a chilling thought: "It’s possible that the market for a user-hostile data system that inaccurately predicts the future and turns its human operators into automatons exists after all, and is large." You can make an argument that this kind of thing has already happened in modern customer service and, with greater negative impact, in healthcare. I.e. where the tail of easy metrics and saleable endpoints ends up wagging the dog of quality.


The problem, besides the condescending tone towards everyone around him, is that he doesn't present an understanding of the actual state of the field of AI and deep learning, and what's worse, he cites bad science essays that will misinform more people about how a brain works.

There's a meme going round about how the best way to refute an argument is to 'steelman' it: present the best arguments of the opposing side before refuting them. He doesn't do that here, which is one of the reasons I found it frustrating.

I agree that the way the venture-raising market works today rightfully deserves some fair criticism.


> Faced with the impossibility of determining whether a technology is intelligent or not—since we don’t know what intelligence is—Silicon Valley’s funders are left instead to judge the merit of a new idea in AI according to the perceived intelligence of its developers. What did they study? Where did they go to school? These are the questions that matter.

This is a perfect summary of the VC situation today. Too much money chasing no-one knows what exactly, but they're sure they'll know it when they see it.


From all I've been able to see, that statement

"... judge the merit of a new idea in AI according to the perceived intelligence of its developers."

about information technology VCs and AI is just totally wrong: I don't believe VCs do that. Why? Generally, from 50,000 feet up, it's too far from the norms of the accounting, banking, and investing communities respected by the limited partners of the VCs. Uh, the limited partners (LPs) are where the VCs get nearly all the money they invest, and the limited partners are conservative people, managers of pension funds, university endowments, etc. Not only do the VCs not do that, the LPs won't let the VCs do that!

Instead, about the shortest believable view I can see is, VCs look for traction that is significant and growing rapidly in a market large enough to permit a company worth $1+ billion in a few years.

The VCs' view of traction is a weakening of the usual measures the accounting, banking, and investing communities use and respect: audited revenue and earnings.

So, sure, the best form of traction will be earnings, then next best, revenue, next best, lots of interested customers, e.g., advertisers willing to pay for eyeballs, then last best, just lots of eyeballs. In these norms, intelligence, brilliance, AI, technology, etc. are mostly publicity points, window dressing, the wrapping paper on a birthday gift, and, with a dime, won't cover a 10-cent cup of coffee.

In a sense, the VCs have a good point, more from insight into humans and the real world than anything in a pitch deck: (1) With technology, it's too easy to push totally meaningless, useless BS. (2) Carefully studying core, deep, difficult technology is just too darned difficult to be practical for the VCs.

Or the investors believe in a Markov assumption: The future of the business and the technology from the past are conditionally independent given the current traction, its rate of growth, and the size of the market. To be clear, this Markov assumption does not say that the technology and the future of the company are independent.
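
Stated as a formula (a minimal paraphrase of that assumption, not the commenter's own notation): write F for the future of the business, T for the technology and the rest of its history, and S for the current state, i.e., traction, its rate of growth, and the size of the market. The Markov assumption is

    P(F | T, S) = P(F | S)

That is, once you condition on the current state S, the history T adds no further predictive information. It does not claim the unconditional independence P(F | T) = P(F), which matches the caveat above.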

The stories in the OP about the company Predata, to abbreviate "predictions from data", are good: The company was floundering around with guesses about what would work, e.g., for predicting terrorist attacks, that were like something from smoking funny stuff.

But here is one big place the VCs and technology are going wrong: We do have some terrific examples of how to do well. The examples are from the past 70+ years of the unique world class, all-time, unchallenged grand champion of using advanced, even original, technology for important practical results -- the US DoD.

A grand example is GPS. GPS was by the USAF, but it was a refinement of an earlier system by the US Navy, for navigation for the missile-firing submarines, and started at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). At one time I worked in the group that did the original work and heard the stories. A key point: The original proposal was by some physicists and almost just on the back of an envelope. Soon the project was approved and pushed forward with a lot of effort. Then, presto, bingo, it all worked just as predicted on the back of the envelope. E.g., a test receiver on the roof navigated its position within one foot, plenty accurate enough for the US Navy.

So, net, for project selection and funding, here is the shocking, surprising, point that the VCs miss: Really, given the back of the envelope work, the rest was relatively routine and low risk.

And the past 70+ years of the US DoD is awash in comparable examples.

In blunt terms, the US DoD has a fantastically high batting average on far out projects evaluated just on paper. Given good evaluations of the work just on paper, the rest is relatively routine and low risk.

Well, that project funding technique does not fully solve the problem of the VCs: The VCs also need to know that the resulting product will have big success in the market. But for that there is an okay approach: The dream product would be one pill taken once, cheap, safe, effective, to cure any cancer. In that case, the technology is so good for such an important practical problem in such a large market that there's no question about making the $1+ billion. So, from this hypothetical lesson, net, need the technology to be the first good or a much better solution, a "must have", for a really pressing problem in a big market. So, right, this filter would reject Facebook, Snap, and more. So, right, need to start with a really big problem where, with new technology, say, as in the US DoD examples, can get a "must have" solution for a really big problem, and Facebook and Snap are not such problems. Just what are such problems? That's part of the challenge. But with current VCs, come up with such a problem and a solution on paper, with brilliant founders, with AI, etc., and that plus a dime still won't cover a 10-cent cup of coffee. Again, to get VCs up on their hind legs, bring them good data on traction, significant and growing rapidly in a large market; if the secret sauce technology helps, fine; brilliant founders, fine; even if there is no technology, fine; in all cases, what really matters is the traction.


I think that you are over-generalizing. VCs use a number of disparate investment theses, including gut feel and betting-the-team in a "hot" (trendy?) space. Another dynamic is funding a team that previously produced a big win for the VC firm (as appears to be the case here).

And do you have a reference for the "fantastically high batting average" of US DoD research? Are you familiar with the SBIR program, for example?

I would judge that neither DoD/DARPA nor VCs have a great batting average. But both have some spectacular wins.


> VCs use a number of disparate investment theses

To be more clear, I believe that such other issues, often mentioned, some on the Web sites of VCs, are nearly all just smoke to hide what I listed as the main issues. In particular, of course, I was pushing back against the statement I quoted from the OP -- their statement was much worse than mine!

But here on HN, I warn entrepreneurs who have already sent 100+ e-mail pitch decks to VCs: I gave my best guess on how VCs really select deals.

Batting average reference? I'm not considering the SBIR program at all. E.g., GPS, coding theory, e.g., as part of radar, lots more in high-end radar, e.g., phased arrays, Keyhole (a Hubble, before Hubble, but aimed at the earth), the SR-71, the F-117 stealth, the SOSUS nets and adaptive beam-forming sonar, some of the ABMs, a huge range of parts of the SSBNs, high-bypass turbofan engines, the nuclear power reactors on the submarines and aircraft carriers of the US Navy, and much more were not SBIR projects. I am drawing from early in my career in applied math and computing for problems of US national security within 100 miles of the Washington Monument and comparing with what I've seen in VC work.

The Navy's work on rail guns looks darned promising.

For DARPA, yes, they flop a lot, on their batting average, much more than the rest of DoD, but DARPA also has some spectacular wins. E.g., of course, TCP/IP. And they fooled me on their autonomous vehicle "challenge": While I believe that autonomous vehicles are a long way from being ready for real roads with real traffic, I can believe that the DoD has already gotten some good progress for some cases of logistics. E.g., one of the issues in Gulf War I was truck drivers. An issue there was that a lot of the drivers for the US were women, and the Saudis didn't like women driving vehicles. So, there was a trick, a deal: The US and the Saudis agreed that when the women were in uniform and driving US military vehicles, they were "soldiers" and not women. Otherwise they were still women and could not drive!!!

Uh, the robots of Boston Dynamics are impressive, maybe still less good on legs than a cockroach, but already or well on the way to being useful for the US Army.


At this rate, it looks like we need a "Fucked AI", in the style of "fuckedcompany.com". [1]

These people were eating VC hype money to build Hagbard's FUCKUP from the Illuminatus! Trilogy. [2]

Not sure who I feel more sorry for: the smart employees wasting years of their prime chasing some unattainable pipe dream, the VCs who got suckered into pouring their money into some vaporware precog technology, the author trying to disguise a shit river with meandering prose, or my upcoming pay cut when the AI winter sets in.

[1] https://en.wikipedia.org/wiki/Fucked_Company

[2] First Universal Cybernetic-Kinetic-Ultramicro-Programmer (FUCKUP). FUCKUP predicts trends by collecting and processing information about current developments in politics, economics, the weather, astrology, astronomy, the I-Ching, and technology.


Excellent Sunday morning long read!

Some of Predata's recent "insights":

"China Trade War Fears Still Running High"

"Mall Blaze Sparks Outrage Across Russia"

In short, nothing that couldn't be revealed from the briefest skim of headlines from tomorrow morning's WSJ.com. One can stay better informed leaving a Bloomberg TicToc (which is partially machine generated) tab open all day.

My takeaway is that the world of the Jim Shinns is rapidly approaching extinction. Deals done poolside at country club dinner dances. Name game shmoozing. And serendipitous encounters on private islands. What was considered the predominant pathway to immortality in Fitzgerald's day.

Viable alternatives exist now. And any business model solely differentiated by prestige will be subsumed by free or near-free competition.


I enjoyed the article as well (see my comments above). But I would debate your takeaway. The money quote is in the last sentence:

> Three months later, Predata secured a second round of venture capital funding.

People like Jim Shinn will always find a way. At least that's the argument the author seems to be making.


The Jim Shinns of the world are often LPs in VCs like Edison Partners. And/or they have led a major exit for the VC. Either of these situations gives them ample leverage for a "vanity" Series A/B. Even without the pool or country club.


> Machine learning, the logic- and rule-based branch of AI supporting Predata....

That's a really embarrassing mistake.


Flawed? You bet. Overwrought? A bit.

But I found this Sunday AM read enjoyable, articulate, and largely on-point (overlooking a few minor scientific errors).

The core themes here are about the hubris of a rich CEO/founder, the zaniness of the current AI "market," and their resultant effect on a particular NYC startup.

This is a season of "Silicon Valley" (HBO) done east-coast, hedge fund, Ivy League style.


Outside of the firms owned/operated by the real clever boys, I wouldn't be surprised if this describes the vast majority of "AI" efforts unfolding at dozens/hundreds/thousands of companies. Everybody is getting on the bandwagon, and they either don't have any clue or find out that their customers don't even want what they are selling at the end of the day.

I'd be shocked if anyone in the industry hasn't worked for or with a Jim. Spot-on.


This startup's existence and failure are yet another symptom of how grossly we overestimate what AI can do. If the task isn't simple, repetitive, or clearly defined (unlike most of the real world), it's probably not going to succeed. Are there any AI startups that are an anti-pattern here?


Reading about what Predata was trying to do reminds me of the field of Psychohistory in Asimov's Foundation series.

https://en.m.wikipedia.org/wiki/Psychohistory_(fictional)


The point being made is: Technology without vision is dehumanizing. This is widely known and is, for example, the reason good schools make undergrad engineering students take at least a few humanities classes before they leave.

Technology without vision is dehumanizing - it happened with Penn Station, where narrow quantitative and engineering goals displaced the broader human ones and led to the widely-hated station that's there now, which was excavated by people who were called hogs, and which makes passengers feel like rats. The loss is especially acute there, since everybody knows what the old station was like ( https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&... ). It was an edifice comparable to the great gares and bahnhöfe of Europe (or to Grand Central which for some reason we decided to keep), a monument to national power, industrial wealth, and the technologies of the time, but also a space that evoked something a little more noble in the human spirit somehow.

The writer is also drawing a parallel with the dehumanizing effect of the particular startup he worked for. The analysts are the hogs, he's the rat, his own perceived loss of creativity (probably a bit exaggerated... aahhh youth) is the dehumanization part, and the absentee CEO is the lack of vision. (If a CEO has one function, it's to provide vision. And in second place, not far behind, is to establish company culture.)

Arguably, placing technical/quantitative goals above more humanistic ones is what an organization like Nazi Germany was all about. But obviously it's way more complicated than that, and I don't intend to address it further.

I would point you toward Dmitri Orlov's concept of a Technosphere. Analogous to the "biosphere" it models human technology as a quasi-intelligent entity that is global in scope.

Book: https://www.amazon.com/Shrinking-Technosphere-Technologies-A...

Excerpt (not much exposition but you'll get the point): https://cluborlov.blogspot.com/2016/02/the-technospheratu-hy...

The people here are the ones who most need to hear this message. Some will doubtless resist the criticism of ML/datasci with the fervor of someone whose long-held religious belief is challenged for the first time. But you need to hear it. Feel free to prove the critiques wrong, by the way... that's kind of the whole point. Prove them wrong with broad projects that actually benefit humanity instead of a mess of unintended consequences and unimpressive bullshit.


I found the author to be slightly irritating on several occasions, dropping veiled references to Valleywag-style anti-Silicon Valley memes, and then I got to the part where he regurgitates that idiotic article about the brain not processing information, and there being something magical about human brains that cannot be simulated [0].

He is right about his claim of having no right to be called a “director of research", as it seems to me his skills center on cribbing thoughts pulled from other people's thinkpieces. It's clear that he doesn't have a deep background in either neuroscience or engineering and that he was brought to the company from a background in business journalism.

In his condemnation of the state of AI research, there is no mention of AlphaGo, or a description of the teachable pattern recognition techniques that have swept the deep learning scene over the last 6 years.

I'm sorry to be so harsh, but there is a certain tone to this piece, "let's hate all those startup a*holes", "Mark Zuckerberg can't write like F Scott Fitzgerald because his knowledge of liberal arts is too limited, unlike mine" that seems like a snooty class signaler among a certain hipster set.

There is a compelling story in here, but to me the general attitude is just condescending to everyone around him.

[0] https://aeon.co/essays/your-brain-does-not-process-informati...


I dunno. The article seemed like a pretty good riff on the Great Gatsby and the dark side of Silicon Valley culture. The writing is promising.


He doesn't present an honest understanding of his own field, or the field of neuroscience, or the ongoing developments in the technology surrounding his own business, or its implications.

Even without extrapolating from the pattern recognition tools we have today, whole classes and ranges of jobs can be fully or partially eliminated.

Here is what he says about the state of AI:

> Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.

> These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on.

"the brain has spiritual magic"

Compare that to quotes from real live top human Chinese Go players defeated by AlphaGo last year:

> “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

> “AlphaGo has completely subverted the control and judgment of us Go players,” Mr. Gu, the final player to be vanquished by Master, wrote on his Weibo account. “I can’t help but ask, one day many years later, when you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”

http://archive.is/qCwn8


I believe there is an alternate reading you may be missing. Viewed in a different light the example of Go you cite is not that different from John Henry vs. the steam engine. The history of industrialization is the replacement of humans by machines for specific tasks. People tend to oversell current AI technology as somehow deviating from that long path. It's a fair point to make.

For me the interesting theme was the exploration of the character and dubious success of the mysterious Jim, who is using his connections to ride a wave of poorly understood and possibly malevolent technology to a grand house in the country and membership in the upper classes--almost like flotsam on the rising tide of economic progress. There's a lot of Gatsby in there and some hints of Graham Greene's quiet American as well. Just as you can argue that AI the technology is part and parcel of industrialization, its social effects recall recurring conflicts in American society and culture that authors like Fitzgerald have been exploring for 150 years.

As for other critiques of the style from the Hackerati I would just say yes, it seems like the work of a young writer. Good writing is hard to achieve and it's typically preceded by a lot of bad writing. To paraphrase Senator Palpatine, we will watch his career with great interest.

Fixed: typo/it really is hard to write well


Where is the AI that can fold laundry (clothes, linen, towels)? Do laundry (sort, pre-treat, load, unload, clean lint filter)? Do dishes (clear table, scrape food into compost or trash as appropriate, separate to recycling as appropriate, load, unload, put up)? Keep a lawn (mow, edge, trim hedges, move trimmings to compost, trim trees)? Put up Legos after a 5 year old? Pick up around the house and tell you where it placed or last saw an object when queried?

And where is the AI in a humaniform that does all of the above?

There are tentative steps towards some of those activities, but we’re still in the early years with imbuing our machine intelligence models with the equivalent of our kinesthetic sense, object recognition and classification, and natural language interaction. And it is far from clear that we can get there with purely current statistical heuristic-oriented technology. We can only try, but the amount of effort required just for folding clothes to date reminds me of the elaborate Ptolemaic models, or as if we’re trying to build Excel by poking ones and zeroes into memory.

More tinkering required, be back later.


This is the wrong way to think about it, to be frank. Animals can do things humans can't; that doesn't mean humans can't come up with ways to make what those animals do obsolete. Humans don't need wings to fly, and we don't need to be able to run faster than a leopard to be able to move faster than a leopard.

AI will not always present itself in a one-to-one relationship with humans in order to compete with or outcompete us. Just as an example, a lot of things have been digitized, which has turned many things that used to exist in physical form into digital form; music is a good example of that.

So sure, there are areas that machines aren't as good at yet because they haven't practiced them enough, but it's literally just a matter of training and improving, not some fundamental problem that can't be solved.


> ...but it's literally just a matter of training and improving not some fundamental problem that can't be solved.

I heard this echoed many times before when pawing through the library stacks in my uni days, looking through the littered corpses of past AI trends. I believe that we will eventually get to strong AGI. But after either reading about or seeing in person the hype machine sprout up and wither around symbolic programming, semantic programming, neural nets, fourth generation, expert systems, perceptrons, the Connection Machine, etc., I'm gun-shy around any proclamation that achieving strong AGI is "just a matter of...<insert-single-solution-space>". The results so far seem to indicate pure cognitive processing is very amenable to the toolbox we have built up to date in AI research, hence the breakthroughs in game playing.

When it comes to manipulating and interacting with the material world and humans, however, the results are a little patchier; I suspect we have lots more work and research ahead of us than we currently realize. When we do get some initial results, like the laundry-folding machines, they're single-purpose and uneconomic for mainstream middle-class adoption (not to speak of working-class), and often come with lots of attached caveats, like Tesla AutoPilot. Instead of all these discussions of whether or not we will get strong AGI, I prefer to see everyone assume it will happen, and when we don't get the incremental result we were anticipating, say, hmm, that's interesting, I wonder why...

I want to see the hype tamped down to the point where we can steadily chip away at the overall problem space, accelerate AI research results, and organically reach strong, economically available AGI sooner, rather than continue experiencing the disappointing two-steps-forward-one-step-back our industry has historically taken in this field. The hype says we're a sprint away from unlocking all sorts of benefits promised by strong AGI, when we are better served accepting the organic incremental benefits as they occur during our acknowledged marathon, and using those incremental benefits as stepping stones to greater understanding.


It took billions of years to evolve us. We have only been working on this seriously for less than 100 years and the progress is reminiscent of the Cambrian explosion.

Again, humans evolved from dumb atoms; why is that easier to believe?


Laundry folding robot with actual smarts to it: https://m.youtube.com/watch?v=DzK38ylMTXk

Inferior to a human, sure. But a start.



We don't create factories around people. Reinvent fashion, kitchens and house plans to fit the machine. That's very doable. Restrict the solution space to find the answer. (Let Marketing handle the user acceptance issues)


> He doesn't present an honest understanding of his own field, or the field of neuroscience, or the ongoing developments in the technology surrounding his own business, or its implications.

Yes, but I don't think you are, either.

The fact is that, chess and Go abilities aside, we are probably not even close to insect-level intelligence, and we don't have a clear path of getting there soon -- let alone anything human-level. Current state-of-the-art so-called AI is basically powerful statistical regression algorithms that are heuristic improvements over core algorithms invented in the 1960s, and there have been few theoretical breakthroughs since then (far fewer than in most other fields), so much so that many consider machine learning to be in the pre-science or pre-theory stage, being mostly about collecting data and trying out heuristics. It's silly to deny recent successes -- largely due to better hardware (although hardware progress is slowing down quickly) -- but we are behind, not ahead of, where we thought, even as recently as the 1990s, we'd be by now.

At this point we have no idea what role statistical regression plays in intelligence or whether we're even in the right direction. That statistics has become synonymous with intelligence (it used to be synonymous with lies) is certainly a cultural phenomenon that is not directly related to our actual knowledge of the field.

That computers perform some mental tasks (certainly more and more of them) better than humans has been a fact of computing since the 50s, and often a cause for wild claims. The invention of neural networks in the 40s and their implementation in digital computers in the 50s led some very respectable people (like Norbert Wiener) to declare that the problem of the brain will be solved in 5 years. The pragmatic Alan Turing thought that was ridiculous and predicted it would take 50. It's been almost 70 years and we haven't yet reached insect-level intelligence or anywhere near a complete understanding of the insect brain, so at this point, any claims that we are on the cusp of something, or starting to believe that our statistical regression algorithms reflect the beginning of intelligence is... misplaced.

On the other hand, it seems like we have not learned the lessons of misplaced confidence in AI, despite our relatively slow progress, and things are worse now because we actually have some algorithms that are useful in certain restricted domains that we insist on calling AI, thus causing people to use them in domains where they are not only useless but downright harmful. In the meantime, some people draw attention to the dangers of real AI -- which may be anywhere between decades and centuries away (I believe we'll get there some day, but we have no idea how or how soon) -- while distracting from the very real and already present dangers of "AI".


> we are probably not even close to insect-level intelligence

I think the problem is that we do not have the slightest clue what (even insect-level) intelligence is (or consciousness, which is often mixed up in the discussion).


That's right. Some have tried describing intelligence as a general problem-solving skill, but this is clearly false. Humans are terrible at finding even approximate solutions to NP-hard problems, which are certainly general and very common. It seems like intelligence is an ability to solve many problems that humans and animals face, but no one has characterized it more precisely, AFAIK.


> hardware progress is slowing down quickly

I would be interested to know more about this. I haven't heard yet that the progress of GPU cores, for example, is declining quickly...


The problem is Amdahl's law. You can only parallelize so much. While the brain is certainly extremely parallelized, neural nets do not employ the same algorithms as the brain, and so, unless we find algorithms that are more amenable to parallelization, Amdahl's law is going to get us.
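
For concreteness, a minimal sketch of Amdahl's law in Python (the 0.95 parallel fraction below is only an illustrative assumption, not a measured figure for any real neural net or brain):

    # Amdahl's law: overall speedup is capped by the serial (non-parallelizable) fraction.
    def amdahl_speedup(parallel_fraction, n_processors):
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_processors)

    # Even with 95% of the work parallelizable, 1024 processors give under 20x,
    # and adding more processors only approaches the ceiling of 1 / (1 - 0.95) = 20x.
    print(amdahl_speedup(0.95, 1024))   # ~19.6
    print(amdahl_speedup(0.95, 10**9))  # ~20.0

However many processors you add, the speedup never exceeds 1/(1 - p), where p is the parallel fraction; that's the sense in which the serial part eventually gets you.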


Most modern neural network implementations are parallelized, and that is why we can run them extremely well on the GPU. For example, Volta GPUs deliver a 5X increase in deep learning training performance compared to the prior-generation NVIDIA Pascal architecture. This is why I was asking for clarification about the hardware claims.


Of course they are, but they're not as parallelizable as the brain, which is why they're subject to Amdahl's law.


The Blue Brain project managed something “as big and complex as half of a mouse brain” in 2007, so I think your claims of not-even-insect-level are outdated.


Not so sure that claim is right: http://www.artificialbrains.com/blue-brain-project

Seems like they managed a honeybee (but I am not sure that it ran in real time or how they validated that) but were hoping for a rat brain.

I'm quite sceptical - I don't believe that there is a good understanding of how a single neuron functions, or agreement on the taxonomy of neurons, or an understanding of or agreement on their interactions and arrangement, apart from one part of the vision system where there do seem to be some good models.


Thanks for the link. I noticed it was “As of August 2012”, which despite being old still falsifies my prior belief. (Gell-Mann Amnesia, I incorrectly relied on the press for science). Unfortunately I can’t seem to find anything more up-to-date and just get more confused journalism.

That said, isn’t the point of the Blue Brain project to answer your skepticism? When we have all those things, the only thing remaining is to see what can be left out of the sim without compromising the behaviour?


The OpenWorm project doesn't even manage a worm though.


So? OpenWorm is (barely) crowd-funded and trying to simulate the entire body not just the brain. Seems like a decent effort given their resources, but on a scale of Radioactive Boy Scout to Manhattan Project they’re a Farnsworth Fusor.

(Which is to say “I ought to volunteer”).


AlphaGo is impressive, but it is an impressive parlor trick, not impressive as 'artificial intelligence'.

Playing a game with fixed rules and a finite set of potential states is something a computer can do.

Designing the computer that does that is intelligent.

There is no connection between the two and one does not lead to the other.

The fact that some of the people developing 'artificial intelligence' have such a limited understanding of what intelligence is no doubt contributes to the mocking tone of some critics.


I'm looking forward to the day when we have a computer that passes the Turing test but comments like yours still show up.


Scott Aaronson: “So this is about the state of the art in terms of man-machine repartee. It seems one actually needs to revise the Turing Test to say that if we want to verify intelligence in a computer, then we need some minimal level of intelligence in the human interrogator.”

https://www.scottaaronson.com/democritus/lec4.html


That’s just the anthropic principle restated.


Plot twist: parent is an AI!


The article you reference makes the same mistake as Searle's Chinese room argument. It somehow assumes there is something magical about the human mind.

I had a brief argument with Robert Epstein (the author of that article) because I find the argument that humans don't actually store information to be quite misleading and missing the point.

The most obvious mistake is this:

"We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."

This is both true and false. It's true we don't do that the way a computer does, but it's wrong to claim that computers fundamentally do that either.

That happens several layers of abstraction up, so a computer fundamentally doesn't actually store an image or a word either; it manipulates atoms and turns circuits on and off, and several layers of abstraction up that gets translated into meaning, first by machines, then by humans.

Anyone who has a hard time believing machines can become sentient should first ask themselves why they have a harder time believing that than accepting that dumb matter somehow became the pattern-recognizing feedback loops that are us.


Searle's argument isn't that there is something 'magical' about the human mind, it is that biological systems are fundamentally different from mechanical or digital systems and that sentience (not intelligence!) is a unique feature of biological systems.

This argument deserves much more recognition than it gets, because (1) at this point it's still empirically true (we have not observed non-organic sentient life) and (2), more importantly, because it's not Searle but everybody else who employs 'magic'.

Searle's point is simple. Computation is subjective. Electricity flowing through a machine doing complex things is just a physical process like anything else. You (the sentient observer) classify that physical process as meaningful, but a computer is no more 'computing' things than a falling pen computes gravity.

So sentience really is related to physical agency and sensory experience in the world, which creates consciousness in organic brains. That doesn't imply complexity or intelligence or understanding. Syntax and semantics are different things. Your pocket calculator processes the syntax of mathematics, but it does not understand the semantics of mathematics. A compiler processes symbols according to rules, but it does not understand the meaning of the computation; it has no cognition. It might be very good at what it does, but it has no capacity to understand. That's the essence of the Chinese room, and it's still a convincing argument.

An even stronger point might be made, namely that sentience actually limits intelligence. That it requires a degree of slowness and introspection that is unsuited for fast decision-making. For a fictional treatment of this, Blindsight by Peter Watts is an excellent read.


It really doesn't, in my opinion.

Of course the person in the room doesn't understand Chinese, just like an individual neuron in a Chinese person doesn't understand Chinese.

It's the entire house that does; the person in the room is just one part of the whole system.

And yes, he does imply that there is something magical about the way humans are pattern-recognizing feedback loops versus machines, since we humans ourselves came from dumb matter. If he doesn't, his argument simply does not hold up, as there is nothing magical either in the way human consciousness is a byproduct of simpler systems all combining to become a sentient one.

If you can buy that humans evolved from dumb atoms, then you have to look not at how humans and "machines" are different but at how they are the same.

The sameness is that we are pattern-recognizing feedback loops and that our sentience comes out of something non-sentient.

So either something magical is in play, or, from what we know, there is nothing that prevents machines from becoming sentient either.

A human consists of millions of sub-systems, each like a calculator, and yet we are somehow sentient.

Furthermore, there is no known upper limit to how complex silicon-based systems can be, so the right answer, if anything, really is "we don't know", not "because the person in the room doesn't understand Chinese, it proves that systems can't". The person in the room is not the system; the system is the entire house, including everything happening outside the room.

In other words, unless Searle is claiming magic at some level, nothing, absolutely nothing, indicates that machines can't become sentient.

With regard to your last point, that's the wrong way to look at it.

A better way to understand why it's possible is to start from omniscience and realize that omniscience means you are aware of everything and thus have no perspective, whereas all systems that can handle information can potentially become sentient the more complex they become.


This is a very interesting interpretation of Searle's argument (which also has some overlap I believe with some of Douglas Hofstadter's ideas), and I will begin to read Blindsight shortly, as it seems a very intriguing novel.


> idiotic article about the brain not processing information

How about this: the brain creates information from constant interaction with the world, based on the kinds of bodies we have and our needs/wants. This information doesn't exist as information until the brain creates it. Information is the product of minds. It doesn't exist in the world on its own to be processed. As such, the brain is something other than a computing device. Computers exist because we figured out how to arrange physical systems to process information that's meaningful to us. But to nature, a computer is just a physical system (and not even that, since physics is a model of nature we create).

That's Jaron Lanier's paraphrased argument against thinking of the brain as a computer. To say that information exists in the world to be processed is to make a metaphysical commitment that information exists ready made for us.

> and there being something magical about human brains that cannot be simulated

It doesn't have to be magical. There are different philosophical views on the world and the mind which lead to different conclusions. If one takes the hard problem of consciousness seriously, then consciousness cannot be computed. Not because of magic or the supernatural, but just because consciousness is not computable, since computation is itself an abstraction (Turing machines don't exist on their own any more than do any other mathematical systems). Unless your metaphysics falls along the lines of Tegmark, Plato or Wheeler (it from bit).

Instead you can think of the brain as an information creator. We give meaning to the world. We build models. The world itself just is; it's not information, math, physics, or symbols.


Computers interact with the world too. I'm looking at a screen that produces patterns of light based on the internal state of my computer. How is this different from a brain interacting with the world? The brain is a finitely sized hunk of matter and matter seems to follow laws. We currently have no reason to assume that those laws can't be simulated by sufficiently sized computer, so anything observable the brain does a computer can do too.


Yes, computers are physical systems. But what does their interaction mean without humans around to interpret their output?


"Not computable" is itself a strong claim that's never been proven.

What is true is that nobody has done it yet. The process is a mystery in the sense that it's not understood, which means that we don't know if it's computable or not.


The argument at the moment seems to be "define a problem that a computer can't do that a human brain can"... "I can't because expressing that problem is beyond the machinery I have developed for cognition, and it may always be".

What is certain is that there are uncomputable problems, but are any of the problems that humans solve in order to speak, act, and socialise uncomputable? Some people think that because they are solved within the physical universe they must be computable, but that assumes the physical universe can be simulated (in principle) by a universal Turing machine. Since we can express problems (using the machinery of the universe) that a universal Turing machine can't solve, there is a gap that permits the possibility of some process which is not simulatable by a universal Turing machine.

In my belief, free will/autonomy/initiative/creativity are expressions of that process, but belief is not an argument.


> Some people think that because they are solved within the physical universe then they must be computable but that implies that the physical universe can be simulated

If you believe the brain exists in the physical universe, that means you can build a physical system that also solves the same problems.


> If you believe the brain exists in the physical universe, that means you can build a physical system that also solves the same problems.

Sure, but in the case of computation, is it possible to make a computing system that has subjective experiences? Maybe consciousness isn't something that can be expressed in computational terms, because computation is itself based on abstraction.


There is no way to define subjective experience such that you can tell whether something has them or not. The question is meaningless.


I think that is extremely austere, and misses a large part of the value of science and philosophy. In some ways the questions of cosmology matter far less to the vast majority of humans, and have, in practical terms, less qualification and quantification for those people than subjective experience communicated by drama, poetry, and art. Stating that this is meaningless ignores the suffering and joy of humanity, and makes an unwarranted and low-utility judgement about the set of beliefs that are legitimate in finding out how things are.


Yes : but...

- "It" wouldn't be a "computer"

- you / someone would have to be able to understand it, which might be impossible (for a human)

- you would have to be able to construct it, which might be very very technically hard

but yes (ish)


The tone is pretty standard Baffler style. It's meant precisely as a provocation; that's their whole thing. It's also an openly left-wing publication, FWIW. Personally I find it way more refreshing and honest than, say, the NYT op-ed pages, in terms of being honest about why they take the subjects they do and why they present them in the way they do.


I enjoyed it and I think that it is helpful in the sense of calling "naked man" at the emperor. It doesn't do any harm at all for the AI community or startup community to look at itself and think hard about what it's doing and saying.

This time round there will be no million fold increase in compute power to bail everyone out!


If you want to rise to an 'emperor has no clothes' caliber piece, it would help to demonstrate a comprehensive understanding of the fields you're criticizing, and not arrogantly cite bad science essays, and not ignore the actual state-of-the-art techniques in that domain.

You need to present the best arguments from the side you want to critique and then present a case for why you think they are wrong. Calling people names and avoiding difficult challenges to your thesis is not the way to do it.


Well, I'm not so sure; you're setting a very high bar, which makes it difficult for people with a different background to make points (badly in your view, but pretty well in mine) that the community needs to hear.

This isn't a Ph.D. exam, and it isn't a thesis - it's an outsider calling BS. I'm not impressed by the counter-arguments advanced so far. Let's be honest, AlphaGo and AlphaGo Zero are surprises in that they have shown that Go isn't as astonishingly difficult for approximate search as everyone thought it was, but until we see the real-world applications it's all of intellectual interest... which is the point of the article.

There are a lot of folks whom I respect making claims similar to the company featured in the piece. I'm really disappointed by that, because everything we know about learnability is ignored with the cry "we've got deep networks now". We don't know why DNNs generalise as well as they do when theory says they shouldn't, but that's no excuse to just abandon our sanity and go out and bet large amounts of other people's money on them doing things that they can't.

This money, btw, should be spent on hospitals and roads, not on providing near 7 figures for these people.


If you don't even understand the basics of what's possible in a field and what's not, you can't make a convincing refutation of that field by just calling certain people names and citing bad science essays that you also don't understand.


It's an ad for their company posed as an opinion piece.


> It's an ad for their company

I guess I'm not in their target market then because it reads like a hit piece - so much so that I was sure that all the names were changed!


I Googled because it sounded too harsh to be real, but they are all real people!

The author is out of his damn mind for not changing names, but NMP.


He said he didn't have to sign an NDA. I suppose he felt free to say what he wished!



