
Annual reminder: the reason Larry Page originally started Google wasn't to solve search or become rich; it was to develop a sustainable source of income to fund infrastructure for ML research and to hire motivated ML researchers to develop the technology to the point where it would become AI in an externally recognizable way (say, a computer program that could play some interesting game better than anybody else, or solve a long-standing scientific problem). Everything else (search, ads, social, cloud, etc.) was tangential.

The first 15-20 years of Google didn't really have any interesting machine learning at all. There was SmartASS, SETI, and later Sibyl, which were really just large-scale variations on "build a model that predicts a value that allows us to make profit in a very specific area". There were other things, like Phil and later Rephil. Inside Google (not DeepMind), things didn't really get going at scale until somebody stuffed a bunch of GPUs into a workstation and showed you could train voice recognition really fast; that led to the early, extremely high-quality Android voice recognition and improved the quality of the existing voice models.

Around the same time, Jeff was experimenting with distributed CPU training, and at that point the ban on GPUs in Google servers was lifted, although, because Google couldn't source enough GPUs, they decided to start a program to make their own alternative (TPUs). This led to a revolution within Google and DeepMind (and X), allowing a flourishing of research in many directions that would have been more or less impossible just five years earlier.

Larry wasn't completely wrong in his long-term goal, but he got bored and promoted himself out of Google, leaving Sundar to deal with the messy details of implementing the singularity while also keeping the stock price up.


Many have tried this. It is definitely not in the class of "nobody else has thought of this" so much as "tons of people have thought of this, tried it, and smashed up against some major problems right away".

AIUI, in this case, the major issue is that it is very tempting to impose a constraint that all intermediate states the code passes through are semantically valid. While superficially appealing, this turns out to be a crippling constraint. Even the best of us tend to think in somewhat sloppier terms and then fix the code up after the fact, even on a line-by-line basis. Being forced to be completely valid 100% of the time turns out to be a big mismatch, and I am inclined to believe the mismatch is fundamental: only a vanishing fraction of a percent of humans, if any, think this way. Even professional mathematicians report that they work like this in practice, leaping ahead and then back-filling the rigor rather than strictly working forward one step at a time; if even they don't work strictly forward, who does?
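To make the mismatch concrete, here is a minimal Python sketch (the function and the edit states are hypothetical, not from any particular editor) contrasting a finished function with two plausible mid-edit states: one that no parser accepts at all, and one that parses but is semantically broken. An editor that forbids invalid intermediate states would block both, even though programmers pass through states like these constantly.

```python
import ast

# Finished, valid code.
before = (
    "def total(items):\n"
    "    s = 0\n"
    "    for x in items:\n"
    "        s += x\n"
    "    return s\n"
)

# Mid-edit state 1: the author is halfway through typing a line.
# This is not even syntactically valid, so a strictly-validating
# editor could never let the buffer reach this state.
half_typed = (
    "def total(items):\n"
    "    s = \n"
)

# Mid-edit state 2: a rename of `s` to `subtotal` that has only
# touched the first occurrence so far. It parses fine, but it is
# semantically broken: `s` is now read before it is ever assigned.
half_renamed = (
    "def total(items):\n"
    "    subtotal = 0\n"
    "    for x in items:\n"
    "        s += x\n"
    "    return s\n"
)

def parses(src: str) -> bool:
    """True if `src` is syntactically valid Python."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

print(parses(before))        # True
print(parses(half_typed))    # False
print(parses(half_renamed))  # True, yet the code is still wrong
```

Note that even a semantic checker layered on top of the parser only catches state 2 after the fact; it can't tell a deliberate "I'll fix it in a second" from a genuine bug, which is exactly why enforcing validity at every keystroke fights how people actually edit.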

Programming has settled on semantically-aware autocomplete and suggestions, and that's probably the actual optimum, not just a symptom of some kind of laziness.
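For contrast, a toy illustration of the "suggest, don't enforce" flavor of tooling, using only the standard library's rlcompleter (a real module, though real editors use far richer analysis; the namespace here is made up): completions are drawn from names that actually exist in scope, so the tool is semantically informed without ever forbidding an invalid buffer.

```python
import rlcompleter

# A pretend editing session: these are the names currently in scope.
namespace = {"total_count": 3, "total_sum": 10, "items": []}
completer = rlcompleter.Completer(namespace)

def completions(prefix):
    """Collect every completion rlcompleter offers for `prefix`,
    using its readline-style complete(text, state) protocol."""
    out = []
    state = 0
    while True:
        match = completer.complete(prefix, state)
        if match is None:
            break
        out.append(match)
        state += 1
    return sorted(out)

print(completions("tot"))  # ['total_count', 'total_sum']
```

The completer helps you type a valid name faster, but nothing stops you from typing an undefined one; that asymmetry is the settled design.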

It is possible that the problems are solvable if somebody really pushed through the development, but I'd advise anyone taking this on that if there's any fruit here, and there may well be, it is not low-hanging.

As saurik mentioned, the Lisp world via Emacs is the closest you can get to this, but again, AIUI you can still do anything you want; it's just that you have a lot of tools that encourage more semantic awareness, not that validity is enforced absolutely rigidly.

