joe_the_user's Hacker News comments

The Chinese curse might now be updated to "may you wind up dependent on 'really interesting novel applications'..."

Machine learning-derived applications are impressive and give a good show until one winds up in a situation where they are expected to work reliably. Sure, it's nice that the insurance company's phone-based, voice-recognition-driven, registration/etc system can understand 99% of the choices people give them - except total failure in that 1% is actually going to leave a large population unserved and angry. Of course the company has keypad backup - except they don't 'cause that would cost the money they claimed voice recognition could save, etc.

Machine learning apps are great for situations where 1) you don't expect 100% reliability and the degree of non-reliability doesn't even have to be quantified, and 2) either you accept that they'll degrade over time and have an army of tuners and massive data collection to keep that from happening, OR you are dealing with an environment you completely control.

These are more or less the conditions for regular automation - only more so.

Yeah, if anything, the "AI" part of the search has been part of the decline. Google aggressively gives me what it thinks I want rather than what I ask for. It seems like it's very clever in giving me something like what an entirely average person would likely want if they mistakenly typed the text that I knowingly and intentionally typed ("Kolmogorov? do you mean Kardashian?" etc).

The search does seem able to understand simple sentences but there's much less utility in that than one might imagine. Just consider that even an intelligent human who somehow had all of the web in their brain couldn't unambiguously answer X simple sentence from Y person whose location, background and desires were unknown to them. Serious search, like computer programming, actually benefits from a tool that does what you say, not what it thinks you mean. Which altogether means they're a bit behind what AltaVista could give in the Nineties but are easier to use, maybe.

Part of the situation is that the web itself has become more spam- and SEO-ridden, and Google needs its AI just to keep up with the arms race here. So "two cheers" or something, for AI.

The "did you mean" part reminds me of this:


Typical Google :)

How safe does a self-driving car have to be before it is allowed on the road?

I would say that's going to be a societal decision rather than an engineering decision.

An interesting point is that the introduction of the private automobile was itself an imposition on the social space of its time, and a wildly unsafe one - automobile accidents are still one of the leading causes of death in this country.

If it's determined that self-driving cars will be allowed, ordinary drivers will be forced to adjust to their presence and will have to learn their quirks.

Certainly, cell phones are realistically being used by a significant fraction of drivers today, and if the accident rate has at worst gone up only slightly as a result (not enough to outweigh other safety measures), it's because non-cell users have adapted to the presence of cell users, however annoying that might be.

>How safe does a self-driving car have to be before it is allowed on the road?

I'd argue that self-driving cars shouldn't merely have to be safer than average drivers; they should be safer than average drivers with computer driving assistance. That's already becoming possible: we can take the self-driving programs and use them with human drivers, to get a best-of-both-worlds scenario. Of course, we'd have to mandate that all new cars come with computer driving assistance.

The counter-argument is that we accept the risk of cars now, so why not let self-driving cars have the same level of risk? My answer is that the risk of cars is so great that we either shouldn't be accepting it, or are only accepting it because there isn't an easy alternative. A lot of Americans die on the roads.

You make an incredibly good point - whilst any safety improvement at all is obviously and unquestionably a good thing, we can take this opportunity to step back and analyse the situation we've found ourselves in, and decide that the status quo is far from good enough and we should in fact be aiming to do much, much better.

A convenient advantage of regulating for self-driving vehicles is that we can put minimum safety standards into law quite easily, and enforce them meaningfully - because the prize for reaching and adhering to those standards is so great for the players involved, and because we have a relatively clean regulatory slate because of the clear step-change in the technology.

As well as the most important factor, safety, it could also be possible to step back and consider the other disadvantageous side-effects of the current status quo with cars, particularly in cities - noise, traffic congestion, wide roads with thin pavements/sidewalks for pedestrians, indeed a general culture of cars having priority over pedestrians in various situations where the number of pedestrians is much greater than the number of people in the cars.

These are all things which, depending on the city and the culture, can be really significant problems and which I'm sure would never be tolerated had they not crept up on us over decades. Regulating both for and with the capabilities of self-driving vehicles might give us opportunity to vastly improve the status quo in these areas, too, in a relatively short period of time.

It could be quite important to take advantage of the opportunity to realise such optimistic targets now, starting green field, as it can only get more difficult and become much slower to implement after the first set of regulations has been put into effect.

It depends on the distribution of drivers that cause incidents. If the average (mean) driver is involved in an average number of incidents, then fine, base the safety estimation on them. If the average driver is involved in a below average number of incidents, the safety standard should be tuned to the drivers that are involved in incidents.
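To make the mean-vs-typical-driver distinction concrete, here's a toy simulation (all numbers invented for illustration) in which a small risky minority drags the mean incident rate far above the rate of the typical, median driver:

```python
# Invented numbers: 90% of drivers average 0.02 incidents/year, while a
# risky 10% (impaired, reckless, etc.) average 0.50 incidents/year.
SAFE_RATE, RISKY_RATE = 0.02, 0.50
drivers = [SAFE_RATE] * 9000 + [RISKY_RATE] * 1000

mean_rate = sum(drivers) / len(drivers)
median_rate = sorted(drivers)[len(drivers) // 2]
risky_share = (RISKY_RATE * 1000) / sum(drivers)

print(f"mean incident rate:   {mean_rate:.3f}")    # 0.068
print(f"median incident rate: {median_rate:.3f}")  # 0.020
print(f"share of incidents from risky 10%: {risky_share:.0%}")  # 74%
```

A self-driving car that merely matched the 0.068 mean would be more than three times as dangerous as the median driver here; which benchmark you pick matters a lot.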

When we calculate averages, are we including the number of drunk driving accidents, for example? 10,000 Americans are killed every year due to that. If we calculate equivalent accident rates that include impaired humans, I'm not sure that's good enough. Computer + human should be able to address this problem, right? Reduce speed, pull over and stop, if someone can't drive between the lines or they are driving the wrong direction on a road.

I think there is a point where assisted driving is no longer practical, and either the evolution of autonomous cars has to stop, to allow the driver a continued say in how the car reacts to its environment, or move on to where there is just no place for time-sensitive human input any more. As drivers trust their cars more they will pay attention less (not to mention practice less, an even deeper problem for post-autonomous-car generations than for those before), and their ability to meaningfully react to stimuli will be roughly nil.

If we choose the former, we are probably arbitrarily limiting our safety in order to save the ego of a human driver. If we choose the latter, we will reach a point where we won't have a steering wheel for a driver to potentially become a hazard rather than a help.

Do we expect computer assistance to make much of a difference? Right now they handle the easy tasks like highway driving which don't account for many accidents, as far as I know.

That's a good point. There have been some recent stats (I don't have them on hand) showing AEB (automatic emergency braking) making serious improvements in collision and injury rates.

I'd bet they will just happen one day. "Super cruise control" will get an update that automates your whole trip, et voila.

If AlphaGo eventually beats humans at go I think it says more about the game (or the way humans play the game) than about AI in general.

I'd agree with the rest of your post, but this statement seems to imply there's some "general level" that can't be reached by the incremental approaches that yielded these results.

The progress we see here is incremental progress in a defined area, but it's also progress toward a more general and easier-to-implement approach - it's not yet "point and solve," but it might be a step toward "point and solve." Given how little "general intelligence" is understood, no one can now say for certain that we won't arrive at it through a series of these advances.

Let me expand a little more on what I meant and see if it makes more sense to you:

Much about Go/Weiqi is still unknown even to the top players. For example Ke Jie (the current highest ranked player) feels that the current komi (compensation given to white for playing second) is too generous and he prefers white (last year he won almost all his white games). This is why professional players seem very excited about AlphaGo. If AlphaGo can consistently beat top players it may teach us more about the game. It may discover or settle questions about josekis (pattern conventions). It may tell us what komi is fair. It could settle questions about how a particular board position should be valued. On the other hand since it uses patterns learned from human plays, this could also motivate new theories of play that could defeat past strategies. Or if no one can come up with ways to defeat AlphaGo it may indicate the valuation function it produces is approaching ideal.

But AlphaGo is very much about combining existing tools of ConvNets and MCTS, albeit in a highly innovative manner, to solve the search problem of Go. Its success or failure could teach us a lot about how amenable the game of Go is to such an approach (and potentially advance theories about the game), more than about how problems in AI can be solved in general. That is, what is learned here is very specific, because Go is a very visual game and/or humans play it in a very visual way, taking advantage of the massive parallelism of our vision system to quickly narrow down choices.

For those wondering about komi:

Standard komi is 6.5 points under the Japanese and Korean rules; under Chinese, Ing and AGA rules standard komi is 7.5 points


Really? I thought it was 5.5 back when I played go. I thought 5.5 was fair, so maybe he's right that 6.5 is too generous.

But of course the increase to 6.5 was done for good reasons. Perhaps the strategies for white just suit Ke Jie's style slightly better.

That's what I remembered it as, but it's been a while.

> I'd agree with the rest of your post, but this statement seems to imply there's some "general level" that can't be reached by the incremental approaches that yielded these results.

At this point in time, the belief in "incremental" general AI looks like a kind of pseudo-religion emerging in IT. People go way beyond postulating it as a possibility. They fervently defend it as some kind of obvious fact and flaunt this attitude as progressive.

What really disturbs me is the way these people bridge the gaping void between existing AI and biological brains. On one hand I see absolutely insane amounts of hype around artificial neural networks, exploding way beyond the optimism warranted by the actual research. On the other hand I see the insistence that biological brains are "nothing special". I wonder how deep that goes. Are these people ultimate sociopaths who truly believe that everyone around them is a mere pattern-matching device?

All this bullshit is actually detrimental to the usage of "AI" algorithms as programming techniques aimed at solving real-life problems. For one, many managers look at the hype and mysticism and conclude that AI is something that is too complex for mere mortals to handle. I've seen this on many occasions.

There are two different issues.

One is whether simple progress on neural nets is enough to close in on real intelligence, and I think those who really understand neural nets generally think not.

The other issue is whether incremental progress in general can achieve AI - there, I don't think anyone can be sure, especially given the relative vagueness of "incremental," and so I don't think one should dismiss incremental progress or uncritically assume it will succeed.

Computer Go had actually advanced a long way by using Monte Carlo tree search in particular. The pre-AlphaGo programs that AlphaGo defeated were much stronger than the computer Go programs from before the era of Monte Carlo tree search. Computer chess likewise was not conquered instantly by applying generic "tree search" but required quite a bit of tweaking of the various algorithms that were applied.

Go ranks are measured in "stones" that one player can give another as a handicap - except at the professional level, where play is more equal, the stone ranks are honorary, and Elo ratings are more accurate.

Monte Carlo tree search advanced computer Go from 2-3 kyu to 6-7 dan, a gain of about 8 stones. AlphaGo has apparently advanced that to 9-10 stones - professional level - by using neural networks to enhance the position evaluation and search policy used in Monte Carlo tree search.

In many ways, it seems to me that Monte Carlo Tree Search was the primary answer to how computers could deal with Go [1].

Chess was essentially conquered through a combination of alpha-beta and other smart pruning algorithms, incremental advances in hand-tuned position evaluation, and improved hardware [2].

So it seems like the "conquest" of Go has involved a more generalized, self-learned version of the original approach to chess (tree-search strategies plus position-evaluation heuristics). That might be enough for just about any deterministic game one can find.
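For readers unfamiliar with the technique, here is a minimal sketch of the four-step MCTS loop (selection, expansion, simulation, backpropagation) applied to a trivial subtraction game instead of Go; the game and all names are my own illustration, not anything from AlphaGo:

```python
import math
import random

# Toy stand-in for a game: start with N stones, players alternately
# remove 1-3 stones, and whoever takes the last stone wins.
def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # player = whose turn it is
        self.parent, self.move = parent, move      # move that led to this node
        self.children, self.untried = [], moves(stones)
        self.wins, self.visits = 0, 0

    def ucb1(self, c=1.4):
        # Exploitation term plus UCB1 exploration bonus.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(stones, player):
    # Simulation: play random moves to the end; return the winner.
    while stones > 0:
        stones -= random.choice(moves(stones))
        player = 1 - player
    return 1 - player  # the player who just took the last stone won

def mcts(stones, player, iters=2000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node entered by the winner's move.
        while node:
            node.visits += 1
            if winner != node.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move.
    return max(root.children, key=lambda n: n.visits).move

random.seed(0)
print(mcts(3, 0))  # with 3 stones left, taking all 3 wins immediately -> 3
```

AlphaGo's contribution, roughly, was to replace the random playouts and uniform move selection here with neural-network-learned value and policy estimates, which is why the result reads as "enhanced MCTS" rather than a wholly new algorithm.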

It should be noted that Arimaa, a game designed specifically to be hard for computers without having the large board of Go, was "conquered" last year but without any neural net techniques (apparently).




I wonder if there is any work going on to go back to Chess and try these same techniques that were used in AlphaGo.

It would be interesting to see how well this more generalized approach fares against the hand-tuned position-evaluation code that you spoke about.

Is MCTS algorithmically interesting, or is it only powerful because it is embarrassingly parallel and so can leverage all the computing power you have available?

Google did most of what it does now ten years ago, before its activity had much arguable "intelligence". It has used AI to enhance various existing services, but the improvements hardly seem earth-shattering to me [insert earth-shattering counterexample HERE]. Google's more pure AI products, such as the Google Car, are so far vaporware at best.

Calling Google an "AI company" seems like calling Ford in 1962 "a streamlining company" - which does violence to the concept even if Ford in 1962 made some money with streamlined cars.

I think there's a dramatic difference in search 10 years ago compared to now, due to better natural language understanding, which is very much an "AI" technology IMHO.

I've had to un-learn my search query habits from the past. I used to attempt to express queries in ways that I thought would help the search engine (limit it to keywords, suppress keywords that are likely to produce false positives, etc.). Now, it seems better to put in sentence fragments, and the old way of a few specific thought out keywords is actually worse at producing the result I want.

The difference is small to the average HN user (most of us are just as capable of expressing queries both ways, and we'll find what we're looking for) but the latter kind of query is far more accessible to the average person.

As you say, this entire situation has been supported by massive infusions of printed money intended to support the economy dating back from the last collapse.

And the massive bailout followed by quantitative easing/money-printing was already touted by Bernanke in his helicopter speech.

Which is to say that one can't imagine any response to the present embryonic crisis other than even more money being thrown at the problem.

And it seems unlikely this money will restore the status quo, just as the 2007+ money didn't restore the previous one. Rather, the primary trend - printed money concentrating into the hands of the already wealthy - seems likely to simply accelerate.

How long can the circus keep going? It might collapse at any moment yet I don't think anyone knows for sure.


Even though the vast majority of monetary transactions are done electronically, it is still at heart a paper-based system built upon the constraints and assumptions of the paper/print medium that has dominated for the past ~500 years.

"It is part of the age-old habit of using new means for old purposes instead of discovering what are the new goals contained in the new means."[1]

I'd imagine the house of cards won't come tumbling down until a viable electric alternative outcompetes the legacy system.

The old goals of the Federal Reserve system are primarily:

1. Maximize employment

2. Stable prices

3. Low interest rates

These goals are no longer applicable in the new electric age of automation and ephemeralization.

[1] Marshall McLuhan, The Medium is the Massage


Retail transactions may be "at heart paper" with those constraints, and they may be numerically the majority of transactions; however, financial transactions broadly involve a far larger amount of funds, and so the Fed's policy of money creation has in no way been limited by retail transactions' dependence on paper.

Have you heard of the "helicopter money speech"?[1] It seems required reading for anyone trying to understand modern monetary policy (though it's naturally only the start).

[1] http://www.federalreserve.gov/boarddocs/Speeches/2002/200211...


Deflation is defined as:

>a decrease in the general price level of goods and services

In the Gutenberg era of paper based processes and hierarchies, as stated in your cited speech, a reduction in the price of goods and services is seen as a bad thing. This is no longer the case. The assumption of scarcity of renewable physiological resources (level 1 of Maslow's hierarchy) is no longer valid:

>In technology's "invisible" world, inventors continually increase the quantity and quality of performed work per each volume or pound of material, erg of energy, and unit of worker and "overhead" time invested in each given increment of attained functional performance. This complex process we call progressive ephemeralization. In 1970, the sum total of increases in overall technological know-how and their comprehensive integration took humanity across the epochal but invisible threshold into a state of technically realizable and economically feasible universal success for all humanity.

-Buckminster Fuller

In the era of electric automation, deflation flips into ephemeralization because everything is increasingly/exponentially produced better, faster, and cheaper. A reduction in the general price level of goods and services is the purpose of automation.

This is why goal #2 of maintaining stable prices is now obsolete and self-defeating. It completely ignores automation and renewable resources.


How long can the circus keep going? It might collapse at any moment yet I don't think anyone knows for sure.

This statement is fundamental. It is very, very hard to time the market. In fact there is a famous saying attributed to John Maynard Keynes: "The market can stay irrational longer than you can stay solvent."


Thanks, I will add that Keynes quote to my collection


Why is it strange to think that over-eating could be considered a problem?

One might better phrase it as "today, many humans have the ability to consume as much as they wish of many kinds of foods, a situation somewhat unique in evolutionary history, and many problems seem to stem from this." I think a fair portion of nutrition experts think this. I don't know if it's a majority, but as the article mentions, consensus is hard to find and just about any opinion is hard to support.

And you seem to recommend something closer to the cranky fringe as an alternative to a decent article whose main message seems to be "very little is uncontroversial."


I don't disagree with your critique of the parent's preferred "source," but "over-eating" has a certain hand-waving component as well (IMO at least.)

If you eat 10 pounds of celery, there is no doubt that the vast majority of people would consider that over-eating but the negative health effects would at best be an abundance of fiber I suspect.


> 10 pounds of celery > an abundance of fiber

You seem to be misinformed. 10 pounds of celery is most probably a very bad idea. There is no such thing as "good food" if you push it to the limits and eat too much of it. Celery has a lot of active components - just smell and taste it and you will know. It is mainly good as an addition to a normal meal's main component, which is traditionally white rice or noodles in the East, bread or potato in the West.

Among these I would guess the best is rice. But even rice - you cannot eat 50 pounds of it without harm.

I think the main issue here is that food is not science, and should not be the subject of science. Some chemical reactions are, but not nutrition. Nutrition is a cultural thing, much more connected to health, well-being and social living than to chemical reactions, milligrams of whatever and parameters of our bodies.


10 pounds of celery would result in 40g (or thereabouts) of sugar intake, 4 liters or so of water, and round about the US RDA of vitamins A, B1, B2, B3, B6, B9, C, E and the minerals calcium, iron, magnesium, phosphorus, potassium, sodium, and zinc.

Granted, the consumption of vitamin K would be noticeably above the USRDA, although we're also not accounting for the reduction in hypertension observed in those that consume celery juice.

Food and nutrition may not be a science, but that is a result of incomplete understanding rather than some special magic that culture exerts on biochemistry.
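As a back-of-the-envelope check on the totals above, assuming roughly 95 g of water and 1 g of sugar per 100 g of raw celery (ballpark figures I'm assuming here; a nutrition database would give precise values):

```python
# Assumed ballpark composition of raw celery per 100 g: ~95 g water, ~1 g sugar.
GRAMS_PER_POUND = 453.6
WATER_PER_100G = 95.0
SUGAR_PER_100G = 1.0

total_g = 10 * GRAMS_PER_POUND  # ten pounds of celery, in grams
water_l = total_g * WATER_PER_100G / 100 / 1000
sugar_g = total_g * SUGAR_PER_100G / 100

print(f"{water_l:.1f} L water, {sugar_g:.0f} g sugar")  # 4.3 L water, 45 g sugar
```

That lands in the same ballpark as the figures quoted above: a lot of water and fiber, very little sugar.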


> incomplete understanding

The first act of ratio (reason) is to assert its own limits. Many things are outside the scope of science; for example, no science will ever explain why a novel or a painting is good, or why someone loves another. From the catastrophic results of nutrition-as-science (things like the USRDA), I would bet that the most important part of nutrition is outside the scope of science. Eating food is only marginally a matter of pouring x or y or z into one's body, just as reading a book is nothing like pouring a long string of words one by one into our memory.

We are no robots, after all.


It's not that unusual for large projects to proceed a fair distance before seemingly obvious problems become apparent. Bridges that fail to meet in the center come to mind.

I remember being at a meeting talking about the implementation of a chipset where each of the three chips required another chip to be running first. Global conditions can be more obvious to an outsider than to those involved in the many details of creating X big plan.

Why would a snow bank not be avoidable as well?

The Google car in particular is intended to follow a pre-defined path derived from a more detailed version of Google Maps, with obstacle avoidance being just deviations from that pre-defined plan. It seems like the situation described by the OP - the road structure itself changing - would present significant obstacles to that strategy. Essentially, rather than solving the problem of guiding a vehicle down the road using visible-spectrum image processing, Google avoided the problem with various stand-ins: lidar for obstacles and satellite maps for positioning. Clever, but that sort of thing will have failures that might be surprising - and surprisingly obvious.


