Google’s AlphaGo Defeats Chinese Go Master in Win for A.I. (nytimes.com)
123 points by zt on May 24, 2017 | 132 comments



Andrej Karpathy had some awesome tweets on this yesterday; you can see the thread in the links, but these two were my favorites:

"Yes AlphaGo only won by 0.5, but it was not at all a close game. It's an artifact of its training objective."[0]

and:

"it prefers to win by 0.5 with 99.9999999% chance instead of 10.0 with 99.99% chance."[1]

[0]https://twitter.com/karpathy/status/867075706827689985 [1]https://twitter.com/karpathy/status/867077807779717121


I know it's a totally different game, but it would be interesting to see AlphaGo play a game optimized for expected number of points, instead of low variance wins. I assume it would require retraining.


I would really love to see an AlphaGo with even the minimal tweak of maximizing points while maintaining a win % > 95%.
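A toy sketch of why the objective matters (the probabilities, margins, and the crude margin formula below are all made up for illustration, not anything from DeepMind):

    # Toy illustration with made-up numbers: a win-probability objective and an
    # expected-margin objective can disagree about which move is "best".

    candidate_moves = {
        # move: (probability of winning, point margin if it wins or loses)
        "safe_endgame_move": (0.999999999, 0.5),
        "greedy_invasion":   (0.9999,      10.0),
    }

    def win_probability(p_win, margin):
        return p_win

    def expected_margin(p_win, margin):
        # Crude symmetric payoff: win by `margin` or lose by `margin`.
        return p_win * margin - (1 - p_win) * margin

    print(max(candidate_moves, key=lambda m: win_probability(*candidate_moves[m])))  # safe_endgame_move
    print(max(candidate_moves, key=lambda m: expected_margin(*candidate_moves[m])))  # greedy_invasion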

I really wouldn't be shocked if, in those final moves, it was optimizing for 5 or 6 "9"s of success probability. While that's still true to its programming, it also obfuscates how good it really is.

That said, there is some benefit to not completely embarrassing the human player, who was previously known as the best but has volunteered to be beaten repeatedly on international TV.

Going into this Ke Jie knew he was going to lose, and still agreed.

Speaking as an inexperienced Go player, I would not be shocked if the new AlphaGo were 500-1000 Elo points better than Ke Jie [1]. Its true skill is being masked by its ultra-conservative style of play.

The original AlphaGo lost only a single game to Lee Sedol, and that was essentially just a bug, not because it was "weaker". DeepMind has said this completely retrained AlphaGo is significantly stronger than the original.

[1] Partial support for the possibly 1000 pt Elo advantage of Alpha Go. http://en.chessbase.com/post/alphago-vs-lee-sedol-history-in...


> that was essentially just a bug

I'm not sure that's right; everything was working correctly, it just didn't read out a low-probability move very deeply.


I'd agree it was an algorithmic bug and not an implementation bug.

From my reading, it essentially got AlphaGo into a state where it was no longer reading the board correctly. The algorithmic bug was: play a decent but extremely improbable move, and AlphaGo won't know how to respond.

Or by a similar argument, I think most people would say that if Lee Sedol hadn't played that move and the game had continued 'normally', he would've lost like the other games. The rarity of the move is why he won, not the move's "strength".

Essentially, they trained the system on data that was too specific. Their main fix was to retrain the next version from "scratch" instead of from moves humans are likely to make.


If I remember correctly, the problem wasn't that it didn't see the move, the problem was that in response to the move it played a few really bad exchanges, like the stupid-looking wedge in the bottom-left, and adding stones to a dead group on the right. Playing bad exchanges when behind is a bad idea even from AlphaGo's perspective. It's still not correct to call it a bug, though.


It really bugs me whenever I see this as evidence of AlphaGo playing in some fundamentally nonhuman way. It's a basic tenet of Go strategy that, if you are ahead in points, you play conservatively. Professional players are very good at counting territory and assessing the point values netted by various moves, especially in the endgame. They'll take a smaller margin and a safer win over a larger but riskier margin, too.


Sure, but:

A) Conservative play (by humans) tends to maintain the margin, not shrink it

B) Show me just one human player who would not waver under the psychological pressure of only leading by 0.5 points


Would that mean that a slight change in board evaluation rules would require Alpha Go to retrain all of its strategies?


So, effectively, it's playing with its food?


The game is available here for anyone interested - http://events.google.com/alphago2017/

It's a half-point win which doesn't seem like a lot, but AlphaGo is intentionally taking lower-risk paths to lock down the win in exchange for some win margin. If you watch the actual game and some commentary, it really feels like AlphaGo is playing at the next level. Some of its moves are so inhuman and subtle.

This is a very exciting time for Go; a lot of traditional wisdom has been shaken up in the last couple of years! I heard a story about a Korean pro study group that was doing nothing but studying AlphaGo's games for new insights for some time. I'm looking forward to seeing the future of play as extremely strong Go AI becomes more widely accessible.


Yes, it's exciting that a half-point win gives us no bound on the strength difference between the human player and AlphaGo -- the human player could play "twice as well" and still lose by 0.5.

The best way to test strength, assuming Ke Jie continues to lose, would be to start giving handicap stones until the winrate stabilizes. I'd guess that AlphaGo is no more than two stones stronger, but maybe that's just more bias towards humans..


In the match commentary they talk about why no one is likely to take them up on that offer: losing at a handicap would be devastating for the self-confidence of a pro.


I don't understand that. Should an Olympic weightlifter lose confidence if he can't beat a forklift?


They might if they've been a competitive weightlifter their whole life before forklifts are suddenly invented.


I think the difference is that Go is played on a psychological level as well, while weightlifting is (mainly) not. Somebody mentioned this in another Go thread on HN a few days ago: professional Go players do not like to play against "weak" players because they are afraid of the psychological damage an (unlikely) loss might have on their play.


You can imagine the same applying to MMA fighters, too. The outcome is never a sure thing; you can always catch a lucky punch from someone much less skilled than you and lose in the most embarrassing manner possible, by knockout. Hell, look at how Ronda Rousey's self-esteem was shattered by a single such loss.

Weightlifting isn't the best example because it's a big group competition without the individual 1-on-1 aspect to it, and chance plays less of a role. Yeah, you might feel out of sorts on competition day because you didn't peak in your training properly, lift 20 kg less than you're capable of, and subsequently not place, but it's not supremely embarrassing. You're still a very strong person, which you demonstrated. Rousey, though, just got demolished.



I don't think we're yet at the point where AI is equated by our psychology with a forklift or toaster.


Humans are considered a dominant species because of their intelligence, not their strength.


This! This would give us an idea of "how much stronger?" rather than just answering the question "is it stronger?"


According to Deepmind, this version of Alpha Go (aka Master) is 3 stones stronger than the version that defeated Lee Sedol. Prior to this match it had a 60-0 record against high professional dans (on faster game settings, but still). Right now it would be something like a 13-dan pro, almost impossible to beat.

The natural occurrence of a human being capable of defeating a bot like this would be very rare, and would take decades to train. In the case of this bot, you just set up a cluster and deploy the same software, to produce as many instances of this bot as you want in less than a day. In this sense, AI has a huge advantage.

Finally, by the time a human being can beat this version of the bot, there is going to be a much stronger version.


I would like to see the next game played with a one-stone handicap and the third game with a three-stone handicap.

But Ke Jie would probably take that as an insult.


I don't know about that.

While it might not be very gratifying to lose to a computer, players often enjoy the opportunity to play stronger players and learn something new.

Besides, the road to becoming a professional 9-dan involves a lot of losing and frustration. I don't think he is very thin-skinned in this respect.


This represents a fairly major milestone in game A.I., doesn't it?

I only took one A.I. class in school and a few others that involved games as projects, and I remember the lecturers/professors (10 years ago) mentioning several times that Go was the next-level target after chess due to its extremely high branching factor.


Yes. Of all common games that are deterministic and have no hidden state, Go has the largest search space; it's significantly larger than chess's. So this is pretty much the end of that branch of research.
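Back-of-the-envelope numbers (using commonly cited rough averages for branching factor and game length) for how much bigger Go's naive game tree is:

    import math

    # Rough, commonly cited averages: (branching factor b, typical game length d).
    games = {"chess": (35, 80), "go": (250, 150)}

    for name, (b, d) in games.items():
        # Naive game-tree size ~ b^d (ignores transpositions, pruning, etc.).
        print(f"{name}: ~{b}^{d} = about 10^{d * math.log10(b):.0f} positions")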


Thank you for mentioning hidden state; I feel like a lot of laypersons have the impression that Go is the hardest game to solve, period, without realizing that it's in a specific class of games where the entire state of the game is evident just by looking at the board (and in addition: is a two-player game, where each player takes exactly one move per turn). I honestly wonder whether a game like Stratego, which is much less competitive than Go but has tons of hidden state, would be harder for current AI techniques to compete at.

(EDIT: Specifically last year I was witness to a lot of threads regarding the quality of the AI in the video game Civilization 6 that asserted that poor AI in video games was no longer excusable since computers now beat experts at Go...)


I write the current most used AI mod in Civ 6, and have been doing Civ AI since 4.

The poor AI in Civ is, again and again, built around smart systems stacked on top of each other.

So imagine the software development: you decide you want to add 10 AI systems to run aspects of the AI. You build all 10 AI systems, each one doing its thing.

Then you add them all together, and they all clash. That's Civ AI.

It's almost deterministic, in that all the AI systems fight each other in a consistent way, so you could have skipped writing all your complicated AI, made the AI just follow a single decision tree, and gotten the same results. Except a single decision tree would follow logic; random systems competing with each other have no real logic.

Then when AI problems come about, the programmers tweak the systems to fix a single problem, which can maybe work, but will break many other things in the process.

So each Civ game, to fix the AI, you start by shutting off all the AI systems one by one. The goal being that when I ask the game to do X, X occurs.

Once you get all the stuff removed so you can actually make the game do something, you then create actions that make things happen.

That can give the game significantly, insanely good AI. The best example of this is a Civ 5 mod called Vox Populi, which has the most profound Civ AI anyone has come up with and takes ideas from all the Civ modders.

Getting the AI to a point where the human thinks it's good is not particularly hard if you focus on simple, easy-to-understand concepts, and not on large systems that are supposed to cover every possibility organically.
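To make the "single decision tree" point concrete, here's a hypothetical sketch of what one unit's turn could look like. None of the names are real Civ modding API calls; they're invented purely to illustrate ordered, predictable rules:

    # Hypothetical sketch (not actual Civ code or its modding API): one explicit,
    # ordered decision tree for a unit's turn instead of competing subsystems.

    def choose_action(unit, game):
        # First applicable rule wins, so behaviour is predictable and debuggable:
        # when you ask "why did it do X?", the answer is one branch in this tree.
        if unit.health_fraction() < 0.3 and game.safe_tile_near(unit):
            return ("retreat", game.safe_tile_near(unit))
        if game.enemy_in_range(unit) and unit.expected_damage_ratio(game) > 1.0:
            return ("attack", game.enemy_in_range(unit))
        if game.undefended_city_near(unit):
            return ("garrison", game.undefended_city_near(unit))
        return ("explore", game.nearest_unexplored_tile(unit))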


Thank you for your service! The current civ AI team is pointed in such a wrong direction... wish they would hire you guys instead! :)


I still wonder how well a computer could handle games like Magic: The Gathering, which don't just involve hidden information and randomness, but include cards that change the rules of the game itself, and where the interactions of cards can produce crazy effects that are hard to predict before they occur.

Add in the fact that each player needs to build their own deck (choosing from 20,000+ cards) and that new cards are printed regularly, significantly shifting the expected metagame, and building an AI that can routinely win in a larger card pool format like Modern or Vintage seems absurdly difficult.


MTG online enforces the rules AFAIK, so I'd assume they are not that complicated.


Poker is the canonical example of a hidden-state game and that has already fallen.

https://www.wired.com/2017/01/mystery-ai-just-crushed-best-h...

The list of things that humans do better than computers is getting very short very fast.


That makes for a good headline but it's not actually true. AI beat humans in two-player, unlimited stakes, no-limit hold-em.


And how does that fact in any way refute my claim?


"Poker" is a much broader category than "two-player, unlimited stakes, no-limit hold-em."

Particularly, it's usually a multiplayer game at the outset, and has many varieties other than hold 'em (which just happens to be the currently—and this is a fairly recent development—popular form.)



It's not the standard version of the game. If computers were consistently beating the best in the world at blitz chess but not under standard tournament rules, we wouldn't say that "chess is solved".


I didn't say poker was "solved", I said it had "fallen".

It's true that poker is not completely solved, but neither is chess. It is still not known whether white has a forced win. It is possible that some day this will be known, and if the answer is yes, then a human playing white may be able to learn the strategy and hence beat any machine. But that will be only marginally more interesting than the fact that a human player can force a draw in tic-tac-toe. The difference between "fallen" and "solved" is just not that interesting. Checkers is solved, but no one cares.

I see no reason to believe that the techniques used to beat the best human players in one variant of poker cannot be extended to beat the best human players in any variant of poker, or, for that matter, to any game with randomness and hidden state.


I would welcome the chance to play you in poker.


It limits your claim. Putting an AI in a poker tournament with 10 players at the table is not yet a solved problem.



Heads-up limit hold-em was solved years ago. No limit is a harder game.


Not likely. The training of the neural nets adjusts for induced randomness. Witness the high quality AIs for poker or backgammon, where winning a single game isn't needed for the bigger multi-game win.

I expect StarCraft will be the next professionally played game to experience destruction at the hands of AI. Then it will be clear that "fog of war" doesn't matter much in the decision tree from a probabilistic perspective.

The irony being that faced with a "fog of war" maybe you do want to underestimate your opponent because then, if you end up surviving, you can claim brilliant insight and leadership which bolsters the short-term position, even though in fact you simply were lucky. So in the open-ended problem of tactical maneuvering in business or war, AI might not have an advantage any time soon.


You'd have to limit its APM.


Yeah, I remember this vid with zerglings dodging siege tank shots: https://www.youtube.com/watch?v=IKVFZ28ybQs

The difference is quite drastic.


It would require a significantly different approach, but good progress is being made in that area as well. Top computers now beat humans at heads-up no limit hold 'em poker, for example: https://www.scientificamerican.com/article/time-to-fold-huma.... And that game probably has more hidden state than stratego.


>I honestly wonder whether a game like Stratego, which is much less competitive than Go but has tons of hidden state, would be harder for current AI techniques to compete at.

There don't seem to be good Stratego computer AIs (which actually surprised me a little). But I can easily believe that's at least in part because not a huge amount of effort has been put into it.


Nit: Go's game state isn't just defined by the board state and whose turn it is, but also by the history of all prior board states (which would still be public knowledge).


Tewari is a common technique for analyzing Go positions. You alter the order of the moves that lead to the same final position and, since all the stones (the pieces) are the same, if one sequence ends with a silly move by one player, while there are no silly moves by the other player in any sequence, then the first player made a mistake.

This means that the history of the game is pretty much irrelevant. Usually only the last two moves are important, because you can't recapture a stone and get to the very same position as two moves before. It's called the ko rule (another unfortunate word clash with English).

There is also the triple ko (three linked kos), which requires remembering more moves back, like repeated positions in chess. A Go game with a triple ko has no result: there is no draw, so in a tournament the game must be played again. It's very rare.


>A go game with a triple ko has no result

True under Japanese rules. Chinese rules at least theoretically use a superko rule. But many comments online say in practice the game is voided under Chinese rules too.

http://senseis.xmp.net/?Superko
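A minimal sketch of how a positional superko check can be implemented: remember every whole-board position seen so far and reject a move that recreates one. A real engine would use incremental Zobrist hashing rather than storing full boards, and rule sets differ on the details:

    # Minimal positional-superko sketch: a move is illegal if it recreates any
    # earlier whole-board position. Real engines use Zobrist hashing instead.

    def play_with_superko(position, move, apply_move, seen_positions):
        """position must be hashable (e.g. a tuple of tuples); apply_move returns
        the new position after captures are resolved."""
        new_position = apply_move(position, move)
        if new_position in seen_positions:
            raise ValueError("illegal move: repeats an earlier position (superko)")
        seen_positions.add(new_position)
        return new_position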


It's not just the branching factor. It's also the fact that the quality of moves in games like chess can be evaluated relatively quickly, say, within 10-20 turns. In Go, on the other hand, a stone placed at the beginning of the game may be critically important 100 turns later.


I wonder if playing Go on a much larger board (e.g. 37 x 37) would limit the ability of AlphaGo to explore branching significantly to tip the advantage back to humans. I'm not sure if A.I. is limited by branching sizes. And would human intuition of strategy on a 19 x 19 board map easily to the strategy of a 37 x 37 board? After all, Go is still Go no matter what size the board, whereas chess can only be called chess when played on the 8 x 8 board.


Board size would hamper old brute force algorithms.

Modern Go AI also has "intuitive" components just like we do (large neural networks trained to immediately provide a heat map of what should be the next best move). That NN is not limited by board size.

BTW, even when playing by "intuition" alone (no tree search), AlphaGo is still incredibly strong. It would still beat most amateur players that way.
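A toy version of that "heat map" idea: a purely convolutional scorer (here a single random, untrained 3x3 filter in numpy) assigns a score to every point and works unchanged on any board size. The real policy networks are of course much deeper and actually trained:

    import numpy as np

    def move_heatmap(board, kernel):
        """board: HxW array (+1 own stones, -1 opponent, 0 empty).
        Returns move probabilities over the board; works for any board size."""
        h, w = board.shape
        padded = np.pad(board, 1)                      # zero-pad the edges
        scores = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                scores[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
        scores[board != 0] = -np.inf                   # can't play on occupied points
        probs = np.exp(scores - scores.max())
        return probs / probs.sum()

    kernel = np.random.randn(3, 3)                     # stand-in for a trained filter
    print(move_heatmap(np.zeros((19, 19)), kernel).shape)   # (19, 19)
    print(move_heatmap(np.zeros((37, 37)), kernel).shape)   # (37, 37)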


This game could take forever to finish...

AlphaGo should be able to be retrained using different variables/game rules.


The real milestone was achieved last year, when AlphaGo was unveiled, and played (and won) against Lee Sedol.


It's so accurate to call this a Sputnik moment for China. I wonder if we'll see a face off between AlphaGo and a Chinese Go AI sometime in the future.


Yeah, totally agree. The fact that state media shut down the broadcast speaks to some loss of face.

Something organized in good spirits like an AI Olympics would be amazing to watch.


It would be awesome if AlphaGo were released as an online-playable version but with an adaptive difficulty. Either by choosing a desired rank or just setting it to "try to win 50% of the time" or similar.

I feel like that would lead to a new generation of players trained on new strategies from a blank slate. Could be interesting.

Disclosure: I only know the basics about playing Go :)
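One hypothetical way to get that "win about 50% of the time" behaviour out of an engine that already estimates win probabilities for each move (this is not a real AlphaGo feature, just a sketch): instead of always playing the highest-value move, play the legal move whose estimated win rate is closest to a target.

    # Hypothetical difficulty throttle (not a DeepMind feature): pick the move
    # whose estimated win probability is closest to a target, not the maximum.

    def throttled_move(move_values, target=0.5):
        """move_values: dict mapping each legal move to the engine's estimated
        win probability after playing it."""
        return min(move_values, key=lambda m: abs(move_values[m] - target))

    candidates = {"A": 0.93, "B": 0.61, "C": 0.48, "D": 0.12}
    print(throttled_move(candidates))              # "C": closest to a 50/50 game
    print(throttled_move(candidates, target=0.9))  # "A": near-full strength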


Humans are still better on a watt per watt basis!


Not for much longer! Apparently this iteration of AlphaGo is using 10x less computational power than the version that beat Lee Sedol.


10x more efficient and this version ran on a single machine, per the press conference.


Interesting, is there a link to this?


https://youtu.be/Z-HL5nppBnM?t=21580

More will be published by the end of this week.


But humans obtain their watts very, very inefficiently, so there's probably at least an order of magnitude to gain for the same kind of system-level efficiency. Consider a field to feed a human vs. a PV installation of the same physical size. And of course there are all the other ways to obtain electricity...


The surprisingly efficient thing about biology is that it will take care of itself after being released into an energy-dense environment...


Yes, whelks for example are known for their ability to flourish in supernovas.


That's ridiculous. Only tardigrades can survive supernovas.


If my company has its way, not for long ;)


"Going from the CPU to the GPU brought about enormous progress in Deep Learning. Now researchers grind their teeth against the constraints of the GPU.

Vathys changes that."

Indeed!


People who believe that AGI is too far away to be concerned about should keep in mind that very few people, even AI enthusiasts and experts, predicted 5 years ago that we would have a human-pro-level Go playing program now.

Prior to 2015, no programs could compete evenly with any among hundreds of Go professionals. Two years later, even the human World Champion admits that his competency is far below that of a computer program.

"I am quite convinced by this loss that AlphaGo is really strong. From AlphaGo there are lots of things that are worthwhile learning and exploring." -- Ke Jie

The time to start working on and funding AI Safety research is now.

Note: I am not saying that AGI is imminent. The point is that we do NOT know when it will emerge and AI Safety research is very difficult and will likely take a long time to complete.


This means as much in terms of progress towards AGI as Deep Blue's win over Kasparov did.


AlphaGo has won one game out of three so far, it's not over yet.


It's very likely that AlphaGo will win ALL three individual games, the pair game, and the team game, given all the commentary that I've been reading in the past few days.


You can bet it will win the pair game, but it will also lose it: an instance of AlphaGo is going to play in both pairs.


The one I'm most interested in is the 5v1 match (which may be what they were referring to?). If AlphaGo wins that one, then it's basically over.


I wouldn't look at it that way. Are five heads playing a single game of Go really better than one? I imagine there would be plenty of disagreements about moves and tactics.


There was a lot of talk in the Go world that the time controls for that game were too short; not enough time for discussion, etc.


It'll likely lose that game as well. Frankly, humans can't be 'boosted' so easily (unless the pros know something I don't).


I've watched the old version beating Lee Sedol. Ke Jie is not a whole lot stronger.

Let it go, man, cause it's gone. A new era has dawned.


From their Elo ratings, Ke Jie is expected to beat Lee Sedol 65% of the time. Last year's AlphaGo lost one of five games against Sedol. So Ke Jie and the old AlphaGo are in the same ballpark of strength. I think it's possible that Ke Jie might win one of the games.
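For reference, that 65% comes from the standard Elo expected-score formula; the rating gap it implies is roughly 108 points (the absolute ratings below are just illustrative placeholders, not the players' published numbers):

    # Standard Elo expected score for player A against player B.
    def expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    # A ~108-point gap gives the ~65% figure; absolute numbers are placeholders.
    print(round(expected_score(3600, 3492), 2))   # ~0.65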


AlphaGo already won the first game, which 5 years ago would have been considered impossible. It is basically over. Google can pour an order of magnitude more computing power into the algorithm to boost its performance if they choose to.


The question of performance as a function of computing power is an interesting one. Demis Hassabis claims that the returns diminish quickly when going above the system they are using. In fact, AlphaGo Ke Jie uses 10 times less power than AlphaGo Lee Sedol, though the architecture has changed to using Google's own TPUs, so it's not an easy comparison.

In either case, AlphaGo Ke Jie is said to be 3 stones stronger than AlphaGo Lee Sedol, which the team claims comes almost entirely from algorithmic improvements.


Google has stated previously that the Lee Sedol variant also used TPUs.


I suppose that invites the next challenge, a machine that can defeat a human with no more power consumption than a meat brain uses.


> He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.

Interesting.


In the future, only poets will have jobs.

j/k someone will have to program the poets


See "The First Sally -or- Trurl's Electronic Bard", page 12 of the pdf (43 of the book) http://raley.english.ucsb.edu/wp-content/uploads/Reading/Lem...

Keep in mind this is all a translation from Polish.



Poetry jobs are getting lost to outsourcing [1].

[1] http://watleyreview.com/2003/111103-2.html


Let's hope none of us will have 'jobs'.


I, for one, look forward to more articles starting with "It isn’t looking good for humanity." in the coming decades.


Tomorrow there will be another story on the flawed "centaurs" narrative, which I never understand. How does the Human + AlphaGo > AlphaGo narrative not fail the smell test?


The space of playing Go is rather large, and humans only need to be better at a small portion of it to improve upon AlphaGo. Human+computer was better than just computer at chess (at long time controls) for nearly 10 years after Deep Blue beat Kasparov.


That centaur narrative seems to have been debunked a while back: http://www.infinitychess.com/Page/Public/Article/DefaultArti...


That is for a recent tournament. Now computers are just better. But for up to 5-10 years, centaurs were better. Eventually human Go players won't be able to improve on computer evaluations, but that may not be for another few years.


Let's see. I think there are some centaur like tournaments going on for AlphaGo as well.


Now limit AlphaGo to the same power budget and see who wins.


Humans should just use more power. But they can't which is why machines are superior.


Humans can use more power, they're just a lot more limited in their potential. Michael Phelps gets up to something like 400 watts (averaged over the day) at peak training.


What's the difference between having a limited potential and being inferior?


A human eats maybe 2500 calories per day, right? That's about 100 watts, I think? That's a pretty serious restriction.


2500 kcals/day would be around 121 watts, yeah.

But computers have much lower idle consumption than humans. So if we assume AlphaGo idles at ~45 W, that would consume around 1000 kcal over 21 hours, leaving 1500 kcal for the 3 hours it was given to think, which would be more like 580 watts.

That sounds pretty doable.
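Redoing that arithmetic explicitly (1 kcal ≈ 4184 J; the 45 W idle figure is the assumption from the comment above, and the exact values come out a bit different from the rounded ones):

    KCAL_TO_J = 4184                                   # joules per kilocalorie

    # Human baseline: 2500 kcal spread over a whole day.
    print(2500 * KCAL_TO_J / (24 * 3600))              # ~121 W average

    # Assumption from the comment above: ~45 W idle for 21 hours.
    idle_kcal = 45 * 21 * 3600 / KCAL_TO_J             # ~813 kcal while idling
    playing_kcal = 2500 - idle_kcal                    # ~1687 kcal left for the game
    print(playing_kcal * KCAL_TO_J / (3 * 3600))       # ~650 W over the 3-hour game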


I wonder what games will be like ~10 years in the future. "Check the box to make sure you are human" will be the biggest feature.


Clearly this checkbox will be AI's next target.


Once we have AI writing AI, then we can say AI is on par with humans.


That seems like an odd version of the AI Effect [0], where "when a technique reaches mainstream use, it is no longer considered artificial intelligence".

[0] https://en.wikipedia.org/wiki/AI_effect


AI is a vague term. What we are trying to do is capture the human mind's ability to solve problems in general. Otherwise it's like solving a crossword puzzle and calling the solution AI. It's the solving that is of interest not the solution. These game playing algorithms are all trained on human games with algorithms designed by humans. The only computer contribution is the ability to churn through permutations really quickly, which is not intelligence. Intelligence is the reduction in state space so it is small enough for the churning to be of value. No AI can do this state space reduction for us yet. We always need a human.


The article makes a lot of the PRC censors trying to suppress news about the match. How much of this is a real tendency or cultural fear? How much of it is just an attempt to kneecap Google/Alphabet?


A little bit of both, I think. Here is what was apparently sent around:

http://chinadigitaltimes.net/2017/05/minitrue-no-live-covera...

and in Chinese:

http://chinadigitaltimes.net/chinese/2017/05/%E3%80%90%E7%9C...

"Regarding the go match between Ke Jie and AlphaGo, no website, without exception, may carry a live stream. If one has been announced in advance, please immediately withdraw it. Please convey the gist of this to sports channels. Again, we stress: this match may not be broadcast live in any form and without exception, including text commentary, photography, video streams, self-media accounts and so on. No website (including sports and technology channels) or desktop or mobile apps may issue news alerts or push notifications about the course or result of the match."

Sounds pretty harsh... will be interesting to see how the rest of the event goes.


I am not sure, but it's floating around.

https://www.google.com/search?q=%E6%9F%AF%E6%B4%81%E5%92%8CA...

Sites like Sina and Sogou are able to report, and those aren't small sites. They get millions of views every day.

The site you quoted is an anti-PRC news site. Not trying to discredit it, but it isn't a total ban. At least the result isn't censored, and probably won't be.


> Not trying to discredit it, but it isn't a total ban. At least the result isn't censored, and probably won't be.

That's the lowest bar in the world. It's a damn board game. The stream was six hours of two men sitting across a table from each other placing black and white stones. Optionally, some of the streams had commentators talking about how good the moves were as they were played. That such a thing could be subjected to censorship is absurd.

As for the result not being censored, well, it's literally a single bit of information. That was only censored because it's not practical to do so.


Agreed, that's why I mentioned "apparently", though it is what the NYTimes is referring to (although I guess they also have a bone to pick with the PRC).


I mentioned some of this censorship in https://news.ycombinator.com/item?id=14403809. A tieba (your random big BBS site; https://archive.fo/0mh1t#selection-4961.0-4961.16 reply #10) link from http://archive.is/tTgsx (Solidot, a Chinese version of Slashdot) points to a ban on mentioning Google by name and doing commentary on AI, and suggests doing an indirect livestream by showing a separate board instead. Someone's YouTube livestream mirror on Bilibili was supposedly taken down after the B site made it too public by putting it on the front page.

According to the tieba page, the ban was first noticed as CCTV5 deleted their Weibo (a microblogging service) post on the livestream.

Welcome to the 4th part of confidence doctrine -- cultural confidence.


That sounds absolutely absurd. Say what we may about the state of affairs in the United States, but if the media were ever forbidden from broadcasting an AI competition, which is effectively educational research in nature, I think there'd be a lot of uproar.


When I worked in China nearly a decade ago, the general mindset among my Party friends was "there's no way this will last." Most of them had sent their kids off to get Canadian citizenship because they weren't sure there would even be a PRC in the long term.

I haven't been back in a while but my guess is things have sort of "settled in" more. The Chinese Firewall is absurdly effective - last I heard it was difficult to even get a VPN through.


That mindset comes from generational memory of high-risk political instability. In the PRC, there has yet to emerge a population without a cohort that personally experienced first-hand, or was passed by oral history with strong impressions like second-hand recitations, political upheavals that involved substantial loss of life, on the order of thousands or more. Get caught, even if incidentally, on the wrong side, and there are dire consequences. What will be interesting to monitor is whether or not the children of your Party friends (the ones who were sent off to obtain Canadian citizenship) do the same with their children. If they all do, then that's not a good signal. If there is abatement of the trend, then that's encouraging.


That's what they said about the current state of affairs twenty years ago. Uproar follows sudden changes. Slow and steady makes the totalitarian state.


They're probably afraid some guy will jump up and shout anti-CCP 'propaganda'. It also means they'll lose face if they then proceed to put this guy in prison.


I doubt the authenticity of the source, it's an anti-PRC site so it might be biased, and the source the article referenced was from a forum instead of some Chinese government's official website. Actually, the contest was widely discussed on some of China's major websites like Weibo and Zhihu so I don't think it was totally banned.


Gee, even with Trump as president this makes me so happy I live in the United States


Google's been blocked since ~2010, due to its refusal to block and censor search results that the Chinese government considers 'inharmonious'.


My wife, who's Chinese, says it's because China wanted to protect its own market. I'm sure censorship was part of it, but think of 1 billion viewers and how much money that'd generate. The government decided that money should stay and circulate in China instead of going outside to Google.


Maybe it was an unholy synergy of both.


I admit I don't know much about the situation there. So, in mainland China, for approximately 7 years, nobody could use Google's search engine, or Gmail, for anything (barring extreme technical measures like Tor / VPN and so forth)?

Is this true? Could you talk about what the most common search engines in actual use are there?

Obviously I am asking about this as a total outsider. I've never even been to China.


It isn't just a search engine; there is an entire for-China-by-Chinese ecosystem [1]. To answer your direct question, yes, every Google online property is inaccessible within the PRC to non-registered commercial entities, and has been for the past 7 years. If you are an appropriately registered commercial entity that can afford it, my understanding is that dedicated lines (assumed monitored by the PRC) may be purchased with access to the outside world.

[1] http://www.mandarinzone.com/top-10-popular-chinese-websites/

Edit: my apologies, bad wording. Kudos to xiaoma for catching my mistake, thank you 小马 (my guess at your name in 中文)。Today, all Google properties I'm aware of are blocked. The blocking was implemented gradually across Google properties over the past 7 years. The search engine has been blocked for the past 7 years, since the beginning of the block.


Gmail wasn't blocked until later.


谢谢 (thank you)! Correction added to my post.


Thanks for answering my direct question and for that link, which is exactly what I was curious about.


Even conventional VPN is not enough. The Great Firewall of China (https://en.wikipedia.org/wiki/Great_Firewall) is a mix of DNS poisoning, deep packet inspection, and traffic and usage analysis based on real time ML. It is very smart and adaptive, and will block most mainstream VPN services, including IPSEC, standard OpenVPN, SSH tunnels, (of course) SOCKS and http proxies.

Besides blocking entire sections of the net outright (like Google address blocks), poisoning controversial domains, etc, even if it can't directly inspect the traffic due to good encryption (say in the instance of OpenVPN or IPSEC), it will slowly degrade and eventually null-route your traffic over the course of minutes, depending on its judgement of the likelihood (based on packet structure and history) that your activity isn't "normal" usage.

Currently the only functional ways of getting around the GFW are VPN through stunnel (TCP OpenVPN traffic re-wrapped in TLS, thus pretending to be HTTPS traffic and incurring triple TCP performance penalties), similar convoluted protocols like Shadowsocks, obfsproxy, and other China-specific tools.


It started with the search engine and was followed by all of Google's services, including but certainly not limited to Blogger, AdWords, the Play Store, Gmail and so on. So Chinese students who were applying to foreign universities with a Gmail account would wake up one day and find they had difficulty accessing their own mailboxes. (Don't ask me how I know.)

Last time I checked, the only Google service still available seemed to be Google Translate.

One project monitors how many popular sites are blocked in China:

https://en.greatfire.org/analyzer

Tor is also blocked, so it takes a considerable amount of effort to make it work. VPNs and other measures (SOCKS proxies, etc.) work, but they're also sinfully slow and break all the time.

The most common search engine there is Baidu: https://baidu.com. Bing has some market share as well.


What was the point of hosting this in China?


There is plenty of news coverage of this in China, both in English and in Chinese -

http://www.ecns.cn/2017/05-24/258802.shtml

http://www.ecns.cn/2017/05-18/258072.shtml

http://en.people.cn/n3/2017/0523/c90000-9219327.html

I see claims everywhere that "China is censoring coverage of the match", but anyone can just click on those links and see that it is untrue.



