Teenager Finds Classical Alternative to Quantum Recommendation Algorithm (quantamagazine.org)
824 points by okket 49 days ago | 216 comments



> “Tang is killing [Kerenidis and Prakash’s] quantum speedup, but then in another sense Tang is giving a big improvement and building on what they did. Tang never would have come up with this classical algorithm but for their quantum algorithm,” Aaronson said.

I think this deserves more emphasis: regardless of whether or not we need a quantum computer in the end, quantum computing is apparently useful as a model for doing research and advancing the state of our knowledge.


It's worth noting that the title of the paper does emphasize this: A quantum-inspired classical algorithm for recommendation systems.

The abstract does too: https://arxiv.org/abs/1807.04271


Humble scientist right there, live long and prosper brother.


I’m calling BS. We had ‘random projection’ long before anybody started dreaming up use cases for quantum. Seriously, people.

When we realize that ‘quantum’ computing is really just ‘analog’, we’ll see a resurgence in that too.

(Which is great actually, because modern analog computers are stunningly capable. And yes, they are the same thing when you realize that the hardware is exactly the same, and sampling away the noise has the same form as solving for decoherence.)
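(For readers unfamiliar with the term: "random projection" here presumably refers to Johnson–Lindenstrauss-style dimensionality reduction. A minimal numpy sketch, with all sizes chosen arbitrarily for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points in 5,000 dimensions, projected down to 400
n, d, k = 200, 5000, 400
X = rng.standard_normal((n, d))

# A dense Gaussian matrix scaled by 1/sqrt(k) approximately preserves
# pairwise distances (the Johnson-Lindenstrauss lemma)
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

# Distortion on one pair of points stays small
ratio = np.linalg.norm(Y[0] - Y[1]) / np.linalg.norm(X[0] - X[1])
print(round(ratio, 2))  # close to 1
```

(Whether this is the specific technique the parent has in mind is an assumption; it's the most common meaning of the phrase.)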


Scott Aaronson explained how this claim is wrong. Quantum computing is not the same thing as "analog computing". See his response to "a quantum computer would merely be a souped-up analog computer" here: https://www.scottaaronson.com/democritus/lec14.html

> And yes, they are the same thing when you realize that the hardware is exactly the same

Quantum computers don't actually exist yet. It's a theoretical model. There's no hardware.


I think it's fair to say quantum computers don't exist yet, but there is plenty of hardware. Although we're still at the wall-throwing stage and nothing seems to have stuck yet.

Even for the theoretical models, we have plenty of alternatives. The circuit model is the de-facto standard and a good thought framework, but there are real concerns that it may not be the model that translates into actual hardware. (Analogously, lambda calculus vs RTL.) It will be interesting to see what alternative or new theoretical models pop up once the quantum computer's equivalent of the transistor surfaces.


Aaronson is completely wrong here: there already exist many powerful analog computation models, with physical hardware backing them up (Apollo 11 had many). What's more, it's misleading to invoke the quantum threshold theorem to differentiate analog from quantum computation, when analog has practically unlimited resolution and error correction through sampling, while the theorem implies quantum needs 1,000-10,000 physical qubits for every logical qubit. If anything, computer scientists should become, and some like Kalai have become, skeptical of the possibility of quantum computation.

Another thought to consider: our computation latencies are currently restricted by sampling (clock) times, which are restricted by power (for heating reasons, which quantum mechanics predicts). Would the smarter bet be to pursue technology that uses exponentially less energy (analog), or technology that uses exponentially more energy with the same promised speedups as the former (quantum)?


The latter claim is not true. Quantum computers exist, but with lower capabilities than classical ones due to the small number of entangled qubits.


It's not guaranteed that the current hardware will scale to many more qubits. And they haven't passed the threshold to "quantum supremacy".


Sure (although it's likely), but I don't think it's fair to call quantum computing hardware entirely theoretical at this point.


The laws of E&M as generalized for electrical engineering can even be derived from quantum mechanics; the BJT equations and several others historically were. Fortunately, analog computing seems to be making a rudimentary comeback in the areas it is actually good at.


But the converse is not true. The laws of quantum mechanics cannot be derived from E&M. This means that the physics covered by QM is strictly larger than E&M. And as my former professor R. Jozsa showed, this allows for quantum algorithms that cannot be matched by any classical algorithm (see the Deutsch-Jozsa algorithm).

Analog computers in their ideal form (known as real computers, after the real numbers) are also strictly superior to classical computers. But unlike quantum computers, we know for sure that ideal analog computers cannot exist. This also follows from QM. (Intuitively, real numbers don't exist in physics, and everything ultimately becomes quantized.)
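(Aside for the curious: the Deutsch-Jozsa separation mentioned above is easy to see concretely. Its n=1 case, Deutsch's algorithm, can be simulated classically in a few lines of numpy by tracking the two-qubit state vector directly — such simulation is exponential in the number of qubits in general, which is the point. A hedged sketch:)

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Deutsch's algorithm (the n=1 case of Deutsch-Jozsa), simulated
    classically: decide whether f: {0,1} -> {0,1} is constant or
    balanced with a single application of the oracle."""
    # Two-qubit state |0>|1>; basis index is 2*x + y
    state = np.kron([1.0, 0.0], [0.0, 1.0])
    # Hadamard on both qubits
    state = np.kron(H, H) @ state
    # Oracle U_f: |x>|y> -> |x>|y XOR f(x)>  (a permutation matrix)
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = U @ state
    # Hadamard on the first qubit, then measure it
    state = np.kron(H, np.eye(2)) @ state
    p0 = state[0] ** 2 + state[1] ** 2  # probability first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```

The quantum circuit decides constant-vs-balanced with one oracle call, where any exact classical algorithm needs two.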


That's not true: there are idealizations of analog computers, such as the field computer model of analog computation†. We already have analog computation using continuous-time circuits that can solve 32nd-order differential equations in less than a millisecond††. We also don't know whether scalable quantum computation is even theoretically possible: the noise and sampling problem hasn't been settled (see Kalai for a convincing argument), and given the quantum threshold needed, it becomes suspect when you need the equivalent of 1,000 CPUs for every individual CPU just to reach a 0.1% probability of error.

By your same line of reasoning: we are able to measure real numbers, and in fact you can derive discrete numbers from real numbers while the converse is not true. Intuitively, then, discrete numbers don't exist and everything ultimately becomes real numbers. That follows the same rhetorical pattern, yet is completely untrue. The ability to derive representations from an axiom has nothing to do with the validity of those representations.

† https://www.eetimes.com/document.asp?doc_id=1138111 †† https://ieeexplore.ieee.org/document/7463004/


My argument was purely theoretical. I fully agree that analog computers currently have more practically attainable advantages over digital computers. I'm also less confident than most people of quantum computers ever overcoming the practical problems, some of which you mentioned. But I do not doubt that the physics itself offers superior computation, even if we may never access it.

> we are able to measure real numbers

I disagree with you here. I like to believe in the Church-Turing-Deutsch thesis, which implies that reality is ultimately discrete and real numbers are nonphysical. Time, for example, is indeed a real number in our best models of physics. But this is mostly because we don't know how to quantize it yet; it's one of the biggest open problems in physics. That said, the outcome could indeed be that it is a real number and CTD is wrong.


If we do end up finding a bunch of classical performance equivalents to quantum algorithms this could have huge implications for physics.


This is a bit speculative, but I suspect quantum theory is essentially god's programming language. It seems to crop up everywhere we look, like in psychology, biology, machine learning, computer science and general probability.


> essentially god's programming language

That metaphor seems to confuse people more than it clarifies. Did you perhaps mean "nature's programming language"?


Probably, though it could also be a way to say "The language in which algorithms from 'The Book'[1] are written."

It would be fun if quantum computing were equivalent to classical (less fun -- probabilistic) models with comparable complexity, just more intuitive. I guess lots of complexity conjectures can be phrased that way, though -- P vs NP for sure.

1: https://en.wikipedia.org/wiki/Proofs_from_THE_BOOK


There isn't any compelling reason to favor one person's fiction over another's. We can invoke memetic references to magic mechanisms all day long and accomplish little.


Erm, that's a little abstract for me, but I guess you're saying that aesthetic judgements of programs aren't worth much from a theoretical perspective?

If so, I'd say,

- It didn't really seem like anyone in the comment thread was trying to make a "compelling" argument for anything -- we're just chatting here.

- If we were interested in that sort of thing, we might note that minimal program size is a good measure of language complexity/information, and that "algorithms from the book" and different models of computation that might make programs easier to write are both "objectively interesting" from an information-theoretic standpoint.

EDIT: oh wait, I get it, you weren't being constructive, you just came here to make fun of a religion nobody was talking about. Never mind.


I think 'nature' would be confusing here, since we also use it for the concrete, manifest nature. As used here, and in many other phrasings, and by analogy with religion, 'god' is probably meant to be the principles and forces underlying manifest nature.


I think it requires some level of creationistic thinking to not recognize these principles and forces as nature itself.


Nobody is saying that they aren't part of nature. The distinction is between nature-the-principles and nature-manifest. Together these comprise Nature.

The law of gravity is a different kind of thing to a rock. It is not a physical object, it describes the ways physical objects behave. The number 3 is a different kind of thing to 3 apples. It is not a physical-natural object, it describes aspects of nature.

It's the difference between the rules of chess and the board and pieces. Both are essential parts of the game 'chess', but we can usefully distinguish them when we want to be more precise.


And if nature was a programmer, I don’t see how that’s different from god.


In your paradigm ”nature” is what ”god” ”programmed” with his ”quantum language”.


Not really. I think nature = universe = god. Again, a bit speculative.


QC is a description of how we think and perceive, necessarily, which may be a product of the environment, and using a notion of the latter to give a circular definition may be useful as food for thought, but nothing to talk about, really. Edit: ... unless applying or extending the theory, in which case the pan-ultimate macro scale is probably out of scope for now.


Deepak Chopra, is that you?



Jews hate him!


I’d assert that nature == universe == god is an incredibly sober rationalization. As a literary interpretation, it’s acute.


What's confusing about it?


What is a deity from the perspective of quantum computing? It seems to be forcing one’s own opinions on essentially an imagined character.


All abstraction is imagined. When someone uses the word God in this context, it's obvious the abstraction is about the root forces that cause reality to manifest.

There is no controversy here.


Clearly, the controversy is about whether reality needs a cause in order to manifest.


I think the definition for cause you're using is heavier than the definition I'm using.

This is probably because religions have traditionally framed extra-universal things as ordering events inside the universe, but it seems that is not possible if you exit spacetime.

In other words, branes, strings, gods, or anything we discover could, in my definition, "cause" reality to manifest without a temporal cause/effect coupling.

Perhaps I need a better word to remove the cause/effect temporal baggage


Why wouldn't it? Everything else does...


Why would it? "Everything else" may behave the way it does because it's embedded within reality. But reality in itself isn't "in" anything, so it may or may not have the same properties as ordinary objects. So it may or may not be legitimate to carry out that final inductive step. Who knows. Who cares.


Reality though consists of many tons of "everything else".


But in order to carry out the final inductive step, you have to invoke something outside of nature that causes nature to exist. And the problem is that we don't even know whether reason can be applied to things outside of nature. I mean, our rules of reasoning about objects reliably break down when we try to apply induction to objects even at the fringes of our ability to observe. What makes you so sure that we can reach the supernatural realm with reason?


>But in order to carry out the final inductive step, you have to invoke something outside of nature that causes nature to exist. And the problem is that we don't even know whether reason can be applied to things outside of nature.

In terms of proof, we also don't know whether reason can be applied to things within nature -- it's still depending on axioms. But we still believe it to be true and to be applicable everywhere inside nature regardless of specific context, and I see no reason (pun intended) not to extend this belief even outside nature.


There has to be some measure by which we can say that something is reasonable. And I think that measuring our reasoning against nature is the only way we can approach a valid model of the world. In contrast, we can't measure our reason against the supernatural realm. And like I said before, there are areas where our reasoning does break down even within nature (such as at the center of a black hole), and there we say "we don't know" rather than believe that our reasoning holds.


Personally, I prefer the thesis that everything exists.


Hmmm.

There is the set A. It contains all elements except itself.

There is a subset of A called P, which is all possible things.

There is a subset of P called M, which is all things that manifest in reality.

Traditionally we call elements of M "things that exist (or did exist or will exist)"

You're saying that P=M?

I find that position extraordinary and lacking evidence.


It's a common view among those who believe in parallel universes, whether spatial, quantum, or mathematical. It's sometimes described as 'everything that can happen does happen'.

If there's infinite matter arranged randomly then you would expect to eventually find one of everything finite.


If there is no controversy, then there is nothing being said.


That's controversial. If there's no controversy, you can still say tautologies :-)


Wat

We can say things that aren't disagreements. We can even disagree without controversy.


Do you ask that question when you hear Einstein's quote, "God doesn't play dice"? I suspect not, because everyone understands that it's a poetic way of expressing the belief that the universe isn't random. So why do you have such a pedantic and uncharitable interpretation of GP's comment, which obviously uses the word "God" in a similar way?

You seem to be having a knee-jerk reaction to the word "God". Your personal preference for the most secular language possible is not a valid reason to pick apart GP's phrasing.


First, he was wrong, and second, he said that in a very different culture. This isn’t the 1920s.


Wow, this discussion is going nowhere fast.

> First, he was wrong

I agree. And that's completely and totally irrelevant. I have no idea why you even mention it. Obviously what matters is that his meaning is clear.

> Second, he said that in a very different culture. This isn't the 1920s.

So your objection to GP's phrasing, in spite of it being completely inoffensive and having a perfectly clear meaning, is "because it's $CURRENT_YEAR".


Totally irrelevant but I wouldn’t agree Einstein was wrong.

QM without collapse is actually a fully deterministic theory. And it makes sense that the universe doesn’t play dice - because there is no source of randomness available.

It’s a functional language.


Yes, I suppose saying Einstein was wrong is too strong. It's more correct to say there's no evidence (so far) that he was right. Quantum collapse certainly appears random, but if we eventually find a deterministic cause, Einstein will be vindicated. It's impossible to prove that a process is random.

> there is no source of randomness available

I think that's going beyond what we can be sure of. We have no evidence that there is randomness, but we also have no evidence that there isn't.

QM may be fully deterministic if we disregard collapse, but as far as I'm aware, we can't do that. Collapse is still something we have to contend with given the current state of our knowledge.


I say there is no source of randomness because the universe is a closed system by definition. In this case by universe I mean all of existence. And closed systems cannot create randomness - it always comes from somewhere.

I suppose we can't rule out collapse totally, but we also can't rule out fairies pushing atoms around. It's just that there is no evidence for collapse, so it seems we can ignore it.


The statement "closed systems cannot create randomness" is only true if you assume those closed systems don't feature random collapse. But if the collapse itself is the source of randomness, then no external input is required, and the system is closed.


From an Occam perspective, a universe with fundamental randomness requires an unbounded amount of arbitrary data to specify. Deterministic theories are thus massively advantaged.


I don't see how. The amount of information needed to specify the current state of a system (the universe) does not depend on how that state was arrived at.

It takes one bit of information to specify that a coin is tails-up, regardless of whether it was placed that way deliberately or got that way as a result of a random coin toss.


It's possible that a deterministic universe might be specifiable in many fewer bits than the bits required for a complete description of its current state.

For example if you know you're in a universe where coins always land tail up then you need 0 bits to store that state.


That's a special case where the universe's state never changes. Any universe that changes over time requires information to encode the current state.

Of course you could make a similar claim about a universe where coin flips always alternate--current state is merely a function of time and initial conditions, and therefore no information is needed to store it. But then you're saying that deterministic universes contain no information at all except their initial state. That's fine, but I personally wouldn't put an axiom like that on the winning side of Occam's razor.


> deterministic universes contain no information at all except their initial state

How can it be otherwise? You can always completely describe the deterministic universe by its initial state plus a time value, even if its initial state appears much simpler than the later states.

A busy Game of Life scenario can arise from a simple starting point. If you don't know the rules, you're stuck describing the position of every dot; if you do, you can create a complete description with much less information.
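(The Game of Life point can be made concrete: the rules plus a five-cell initial pattern fully determine every later state, so the description stays tiny while the evolution goes on. A small numpy sketch, with a toroidal boundary chosen for simplicity:)

```python
import numpy as np

def step(grid):
    """One Game of Life step on a toroidal grid."""
    # Count live neighbors by summing the 8 shifted copies of the grid
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell lives if it has 3 neighbors, or is alive with 2 neighbors
    return ((n == 3) | (grid & (n == 2))).astype(int)

# A 5-cell glider on a 16x16 grid: tiny description, rich evolution
grid = np.zeros((16, 16), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(16):
    grid = step(grid)

print(grid.sum())  # the glider persists: still 5 live cells
```

The rules (a dozen lines) plus five coordinates describe every future state; listing each live cell at every generation would take far more information.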


> universe is a closed system by definition

By which one?

> randomness - it always comes from somewhere

Unless randomness is a fundamental property of matter.

> can't rule out fairies pushing atoms around

Wasn't that exactly what the Bell experiments disproved?


Fairies coordinating with ftl walkie talkies then.


I am providing context as to why the statement at the center of this is confusing. It still strikes me as putting an unshared perspective onto a concept in a way that doesn’t make sense to me as well as the person to which you were replying. If you continue to disagree, feel free to, but the core premise makes little sense to me.


Whether he was wrong or not about the underlying theory is irrelevant, as the discussion here is about whether this was confusing.

The date he said it isn't relevant either -- nor the culture at that date, since it doesn't affect whether this is "confusing" or not, and I don't see anything drastically different in our culture that would render it so. (Or even that would render a reference to God "offensive" or a bad metaphor today, if that was your argument.)


I don't think we know that he was wrong. Deterministic quantum theories do exist. Many-Worlds is deterministic.


I’m an atheist, and even I get his meaning and am unperturbed by his choice of terminology.


A debugger.

It can change state arbitrarily, introspect all the data and step forward in execution.


Perhaps they meant it in the sense that the Higgs boson is sometimes referred to as the “god particle”.

It works as a comparison in that quantum computing is often sensationalized by the media.


Well yeah, quantum mechanics in general is the theory of quantized probabilistic physical systems, so it makes sense that apparently probabilistic systems with an apparently finite set of component states that can only change discretely in magnitude can be modeled with the theories developed to address precisely that. I suspect the cause of your surprise is actually inverted (as in, it's not surprising that a lot of high-level systems work like that, but rather that the most fundamental elements of matter/energy do).


Not quite. A similar thing does surprise me: that quantum theory was discovered while researching particle physics and is only recently making inroads into other fields.

That it applies to physics too is expected, because one way to characterise the quantum language is that it defines what one system can possibly do or say to another system.

The source of those limitations is not yet completely clear, i.e., it hasn't been satisfactorily axiomatized. But we can get close, and it seems to be a logical constraint based on self-consistency. Assuming the universe must be self-consistent, it must also obey QM.


Bell's theorem says that no theory with local hidden variables (which would do away with randomness) can explain all the things QM explains.


I have to quibble a little bit, pointing specifically at psychology because it's first in your list. I don't know if there's an "official" demarcation for what comprises a quantum theory. But there's no psychological experiment or theory that hinges on the value of Planck's constant, or that demonstrates a shortcoming of classical mechanics.


Some human behavior is more naturally modelled in QT than in classical probability theory.[1] The major difference between quantum and classical is that you can have incompatible variables which don't share a probability space. Nothing to do with physics.

Also, in natural units the (reduced) Planck constant is 1, so it does show up a lot.

[1] https://arxiv.org/pdf/1711.00418.pdf


Not biology either. Anybody telling you that quantum coherence properties are relevant for low-energy states at room temperature is frankly uneducated in the matter.


I think you mean the “MIX” assembly language.


I just want to mention that the "recommendation systems" angle is quite unimportant here. This is a result in complexity theory, a resolution of the question "whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time" (as Scott Aaronson puts it [1]).

While this problem does occur in matrix-factorization-based recommenders, there are a lot of well-performing solutions for it that don't need quantum computers, and it's far from being the only method for recommendation.

[1] https://www.scottaaronson.com/blog/?p=3880
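(To make the problem statement concrete, here is a hedged numpy sketch of the standard classical baseline — a truncated SVD of a subsampled low-rank matrix. This is not Tang's sublinear algorithm, and all sizes and sampling rates are invented for illustration:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: preferences of 200 users over 100 items, rank 3
users, items, rank = 200, 100, 3
T = rng.standard_normal((users, rank)) @ rng.standard_normal((rank, items))

# We only observe a random 50% of the entries; the rest are zero-filled
p = 0.5
mask = rng.random((users, items)) < p
A = np.where(mask, T, 0.0)

# Truncated SVD of the rescaled sample recovers the low-rank structure
U, s, Vt = np.linalg.svd(A / p, full_matrices=False)
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Recommend user 0's highest-scoring unobserved item
candidates = np.flatnonzero(~mask[0])
best = candidates[np.argmax(approx[0, candidates])]
print(best)
```

The point of the complexity-theory question is that Tang's algorithm achieves essentially this reconstruct-and-sample behavior in time polylogarithmic in the matrix dimensions, which the dense SVD above certainly is not.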


> "whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time"

Could that be used in clustering algorithms in BioInformatics?

EDIT: Sorry, Aaronson's writing is just too funny not to put that quote in full context:

> On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well.


"research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well."

I'm not sure whether this is a typo, and he means to say that there might be more economically worthwhile applications, or whether he's trolling the people complaining that this is useless by pointing out that there are real economic benefits to it right now.


Neither.

He's trolling the people who regret that the result of this research is manipulating people, by pointing out that there is real potential for applications they would object to less.


It's a very broad definition of "manipulating people" that includes recommendation algorithms. The problem, as stated, seems to call for an implementation of a perfectly benign strategy for recommending content.

Clustering users by their ratings of various titles is the kind of thing any hobbyist statistician might do to help their friends find movies they like using online data sets.

There are methods to use data to improve people's experiences, and sometimes corporations employ those methods.


With a slight tweak, this is also the kind of algorithm that is used to personalize disinformation campaigns for large numbers of people. As Cambridge Analytica was famously trying to do in the 2016 election.

So yes, this algorithm is useful for manipulating people. And not just by using an extremely broad definition of the term.


You must admit that there is a very large difference between a friend doing something for me and a corporation doing the same thing.


What is the difference?


A friend actually cares about you, your best interests, and long-term well-being.

One's relationship with a corporation is very different. They don't know you or necessarily care very much about your long term interests.


One increases social connection, and the other doesn't.


And this is an example of trolling done right and for the good of (mostly?) everyone :-)


It is not a typo but a jab at wannabe elitists who are sneering at Ewin’s work because of its ready applicability to Amazon and Netflix recommendations — as though the discovery would be more worthwhile if it had no economically valuable uses.


Looking at the paper, I think the fact that this is a recommendation algorithm is fairly important.

The big thing is that, broadly, "the recommendation algorithm" is not some single problem in combinatorics. Rather, it's an initially ill-posed problem that yields concrete problems based on simplifying assumptions. Sure, the final, solved problem is as Aaronson put it, but because different researchers make different simplifying assumptions, that final problem actually hadn't been studied much; half the achievement of the paper is finding simplifying assumptions that are real-world-plausible and yield tractable problems ("Obviously, without any restrictions on what T looks like, this problem is ill-posed. We make this problem tractable through the standard assumption that T is close to a matrix of small rank k (constant or logarithmic in m and n). This reflects the intuition that users tend to fall into a small number of classes based on their preferences. With this assumption, it becomes possible to infer information from subsampled data. We can determine the classes a user lies in, and use information about those classes to give a likely-to-be-good recommendation").

The original quantum recommendation algorithm paper involved both a simplifying assumption about the sorts of consumers one models and a system of sampling to get a good approximation ("The literature on theoretical recommendation systems is fairly scant, because such algorithms quickly run up against the limits of their model, despite them being somewhat far from the results seen in practice.").

https://arxiv.org/pdf/1807.04271.pdf


Sometimes I think I'm smart because I can configure WebPack, and then I read an article like this.


I'd like to see him try that.


Heh, I can imagine him in front of some whiteboard, while some interviewer dude is like... "uh... the candidate does not even understand the basic WebPack.keepCompatWithWebPack0.1.11='comp,3t,all'. And I am not even joking. What's next, the candidate cannot count? If you don't know the basics, what good can you do? No hire. "


Can he vertically center a div though?


Sure, simply construct an evenly-distributed superposition of centerings, and let the divfunction collapse! It will work on average.


Flex!


Flexbox’s Best-Kept Secret (margin: auto) https://hackernoon.com/flexbox-s-best-kept-secret-bd3d892826...


tables!


Early 2000s strike back


If you are a teen and can do that, I'd also upvote you to #1



I like the last line:

> And what about the future of UT Arlington’s latest wunderkind, Ewin Tang? “I haven’t come up with anything that is really ingenious yet,” he says. Be patient; he’s only 12.

It only took 6 years. Not bad at all!


As someone who finished undergrad classes as a teenager, I totally failed to live up to my potential.


Don't sell yourself short. Lots of important stuff happens when inquiring minds age and have the benefit of cross-disciplinary experience and perspective, plus new social, temporal and fiscal resources.


It’s a probability game. The article states that he picked the problem that seemed easiest to him. Other people make different choices. And finally: even if you don’t achieve anything groundbreaking by the end of your life, it’s good that you tried your chances.


I was pretty confused by this link, until I realized that the part about Ewin Tang has been deleted from that page, which must have happened shortly after posting the link.

Archive.org has the original text: https://web.archive.org/web/20180725201359/http://www.uta.ed...

Quoting for posterity:

---8<---

Twelve-year-old Ewin Tang is the latest in a line of wunderkinds to begin their UT Arlington careers while other students their age are still in elementary school or junior high. History suggests he’ll continue to amaze.

Ewin Tang’s classmates tend to overlook the slight, bespectacled youth sitting in the front row until he answers the professor’s queries—all correctly. Then they ask their own questions. “Who is this guy?” “Why is he here?” And always, “How old is he?” At 12, Ewin, the son of bioengineering Professor Liping Tang, is the youngest student on campus and among the youngest in UT Arlington history. Since taking his first college courses at age 10, he has completed 20 hours, including classes in calculus and differential equations, all with a 4.0 GPA.

“Other students just seem kind of amazed,” he says. “They ask about my age, what I’m majoring in. Some of them actually take pictures of me. They’re pretty cool with it, though; they really don’t bother me a lot.” Although the age gap usually prevents Ewin from forming close friendships with his classmates, many are eager to work with him once they recognize his abilities.

Ewin’s college career began after he completed every math course available in his K-12 private school. His intellect had already prompted school officials to move him from third to seventh grade, but it was soon apparent that he needed more. After he scored 1920 on the SAT at age 10, his parents and school officials explored college enrollment.

Dr. Tang acknowledges that having an immensely bright child can be challenging. “There are no books, no guidance on exactly what to do. This (college for someone so young) is a totally gray area.”

The Tangs met with then-Provost Donald Bobbitt and later with Senior Vice Provost and Dean of Undergraduate Studies Michael Moore. Ewin’s first classes, an online course in history and an on-campus calculus class, were tests he passed easily.

“I think it makes a difference that I’m here,” Dr. Tang says. “Ewin has a place to go and a built-in support system.”

In addition to his University coursework, Ewin works part time in his dad’s nanotechnology laboratory. He is developing a probe to detect bacterial infection, something that would greatly assist in diagnosing diseases. His career plans involve science or engineering, but he hasn’t yet settled on a specialty.

“Our main concern when we began this was his social life,” says Dr. Tang, who notes that Ewin attends a private high school with students his own age for some courses and activities. “Academically he is fine, but we want him to stay in school and stay with kids his own age, to have friends his own age. So far it’s working out pretty well. Thanks to Dr. Moore, he’s having a very good experience.”


Ewin spends part of Monday, Wednesday, and Friday at the private school, where he takes classes and participates in soccer, basketball, cross country, and the Science Olympiad. He also attends UT Arlington on Monday, Wednesday, and Friday and does research in his father’s lab Tuesday and Thursday. As if that’s not enough, he works with a private tutor, studies Chinese, and plays the piano and erhu, a traditional Chinese instrument akin to a violin.

While it might seem that Ewin’s case is unique, Moore and his predecessors in the Provost’s Office have seen others. Because such students don’t meet traditional enrollment requirements, they are evaluated case by case.

“Typically, there is a detailed conversation with the parents about the challenges and rigors of college work as well as a thorough review of the student’s academic history,” Moore explains. “Obviously, we are looking for exceptional young men and women who show the ability to excel in the classroom as well as handle the collegiate environment.”

Over the past two decades, several of these exceptionally young and brilliant students have used UT Arlington as a springboard to success.

---8<---


Scott Aaronson posted about this on his blog

https://www.scottaaronson.com/blog/?p=3880

I like his update - a message to Hacker News commenters:

"On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well."


Funny!

It's as though we moderns are all grazing in a 100-acre meadow that's been grazed for two centuries. The easy stuff is gone. Even the magic beans are rare. The question is: with your magnifying glass, which seeds do you look at in hopes that they're nutritious?

One of my college advisors muttered, and I quote, "Einstein wasted 30 years of his life" (looking for a unified theory). Well now. Sadly, this advisor put graphite on paper for a whole career with little to show for it (apart from employment).

Mr. Nobel's name is -not- still recognized because of his economically worthy applications. And it's lucky for Kelvin that he invented a temperature scale.


That seems a bit passive-aggressive. I don't think economic value is what is in question, but rather the social impact of the application (or lack of positive social impact).

Applications can be both economically valuable and have positive social impact. Now recommender systems are in fact useful to discover new things, not just to entice people to buy more. E.g. finding new music, related research, or other things.


The tone of his comment seems to me he's just frustrated a community of programmers took the conversation down the path of debating the merits of recommendation systems rather than exploring the more general applicability of the algorithm in other problem spaces.


I agree with him. Half the time you'll enter an HN discussion and the most upvoted comment will be an unrelated bikeshedding discussion.

Usually one of these:

- "I love the idea but can we talk about how the website doesn't look good when CSS is disabled?"

- "25MB of JavaScript? Really? You know, websites used to..."

- "This website is impossible to read because the contrast ratio is too low and I am personally offended."

- "Did anyone else get an ad from this news website? Why does advertising exist?"

- "An obscure open source project uses the same name as this unrelated business. Have you no shame?"

- "The website looks weird on mobile, by which I mean my rooted Android phone running Opera."

- "The text is too big."

- "The text is too small."


When did "comment" transition into "thought-provoking well-defended essay"? Is there no water cooler talk on the internet anymore? I find interesting tidbits and interesting tangents here on HN and I appreciate it all!


I think the "most upvoted comment" part is important here. Tangents are indeed good to have, but I don't think they should occupy the most visible parts of the comment section.


That is in fact why the mods boot many of the most-upvoted comments to the bottom of the thread. I think it’s incorrect to say that half the time you’ll see such comments at the top.


OMG this is so true. Should be a HN bingo.


Contrast ratio is an accessibility issue. One of these things is not like the other.


Very true. But all the same, when I see an article with about 25 comments, click on it, and see that the text has poor contrast, I know exactly what all those comments are going to be saying. Probably nothing about the article at all. The first comment will be about how the site is hard to read - which is a very legitimate complaint. And then the next 24 will be complaining about the design of every other random website under the sun.


Good part is that they visit the link at all.


That’d be like criticizing the development of calculus because it could be applied to the development of atomic bombs. This is first and foremost a mathematical result. When people come up with a new algorithm for the orienteering problem it’s not primarily because they want to be better at orienteering; the same applies here. Your comment is well-written and almost totally irrelevant.


I chuckled when I read it and took it as tongue-in-cheek.


This result is really a benefit for startups building on Filecoin-type platforms, not for FAANG companies that are already at scale. Reducing the barrier to creating good recommendation algorithms is an important step in re-democratizing the web.


Note that this was a reference to a previous HN thread.


It's a satirical pun. "More economic" - "more economic what?"


From the paper,

> The only difference between Theorem 1 and its quantum equivalent in [13] is that the quantum algorithm has no ε approximation factors (so ε=0 and 1/ε does not appear in the runtime). Thus, we can say that our algorithm performs just as well, up to polynomial slowdown and ε approximation factors.

With ε being the accepted margin of error, Theorem 1 being the algorithm produced by Tang, and [13] being the paper's reference to Kerenidis and Prakash's quantum algorithm (kept as-is for fidelity to the quote).

This means that this algorithm is not really a substitute for the quantum algorithm: it requires accepting a level of error (something which might work in recommendation engines, but not necessarily outside that context), and it still has a polynomial slowdown relative to the quantum algorithm. One thing to note is that not all quantum algorithms perform exponentially better than their classical counterparts. In particular, some quantum search algorithms perform only quadratically better than the classical search algorithms. This means that even where quantum computers do not achieve an exponential speedup (and here they arguably still do, since the ε factor means the two algorithms don't give the same results), they are still expected to achieve a polynomial speedup, as they do in this case.

In conclusion, stating that this algorithm threatens the prospects of quantum computation is probably an exaggeration.
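For intuition, the classical primitive this line of work builds on is "length-squared" (l2-norm) row sampling in the style of Frieze, Kannan, and Vempala. Here is a toy sketch in Python; note this linear-scan version is purely illustrative and mine, since the actual algorithm never touches the full matrix and instead assumes a data structure that answers such samples in polylogarithmic time:

```python
import random

def l2_sample_row(A):
    """Sample a row index i of matrix A (a list of lists) with probability
    ||A_i||^2 / ||A||_F^2, the 'length-squared' sampling distribution.
    This toy version scans all of A; the real algorithm gets the same
    samples in polylog time from a precomputed tree of squared norms."""
    weights = [sum(x * x for x in row) for row in A]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(A) - 1  # guard against floating-point edge cases
```

So for A = [[1, 0], [0, 2]] the second row (squared norm 4) is sampled four times as often as the first (squared norm 1).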


Scott Aaronson wrote about this on his own blog too: https://www.scottaaronson.com/blog/?p=3880


Wow, good on this guy. From the title I thought it might have been a high-school student, but he's a PhD candidate. Incredible. What an accomplishment at 18 to be lecturing a room of PhD holders.


It is, but not as rare as you might think! I've been reading some of the old books/papers about SMPY from the 1970s, a talent search study which uses the SAT-M to screen for visuospatially-gifted middle schoolers; they then study them and try to accelerate their education with summer camps, advanced math classes, and grade-skipping and/or early enrollment in Johns Hopkins (where SMPY is based). SMPY was a big part of making acceleration for gifted children acceptable in America, when the conventional wisdom was that they should be forced into regular classes for their own good.

What they often did early on was, around age 14, SMPYers would be admitted to Johns Hopkins to typically start taking math and computer science courses, at which they would often excel and be at the top of the class, expected to go on to post-graduate work around 19, and fit in so well that fellow students and their teachers often (when surveyed or asked) didn't realize they were so young. If you look in OP, that describes Tang remarkably well.


> visuospatially-gifted

This is such a great term. I never thought about it before, or how it could relate to our definitions of intelligence and ability in science/tech. There are so many generic tests for intelligence; perhaps there should be more focus on this area.


Yes, that's one of the take-aways from SMPY. Focus on identifying visuospatial giftedness, as opposed to simply high scores overall, is especially good at predicting scientific accomplishment. (Hence, 'Study of Mathematically Precocious Youth'.)

You might ask, why use only the SAT-M, why not the other subtest, the SAT-V verbal test, since it should be equally difficult? I was surprised to find out that there was in fact a parallel study, 'SVPY', which did just that, but it got canned early on for disappointing results, apparently.


Thank you for this piece of history - mankind is only beginning to learn how to manage talent.


post-graduate work around 19

depends on the subject. Did computer science exist as a course in the 70's?


It certainly did, as that is what many of the kids were enrolled in...


yes


Skipped 4th through 6th grades. Impressive. Is this innate or did his parents do something to make this happen? Is there anything I'm supposed to be doing with my own kids? How/when do you know when your child has this potential?


Nature is important but people massively underestimate the nurturing component. Parents are most definitely key for these types of outcomes. If you're not familiar with the story already, read about the Polgar sisters. https://www.psychologytoday.com/us/articles/200507/the-grand...

To your question, my immediate answer was, "let them play outside". But it sounds dismissive of what you are asking. So the better answer is: provide them with the opportunity to learn, encourage their curiosity, praise them for their work not innate intelligence while still telling them that they are smart, create an environment that values and promotes education, create a program for them to follow when they show interest in a particular discipline, have great mentors help them develop, etc.

The truth is, there is always going to be a compromise between having a care-free childhood and top performance from an early age. You can't really have both no matter how smart your kids are. Asian parents stereotypically tend to opt for the latter end of the spectrum with, on average, very successful outcomes.


imho, I think ppl actually overestimate the role of parenting. Often extremely gifted children (as Tang and Tao are/were) will have their interests piqued by something, and then on their own volition learn everything they can about it. There isn't much prodding by the parents. For someone with an IQ of 140+, maybe doing calculus problems is more fun than video games.


The interest must come from the kid, but there are conditions that enable this interest. From a previously mentioned link in this thread [1], Tang's father is a bioengineer professor, which means:

1. Financial stability in the family, and it seems that both parents were emotionally supportive

2. The father is knowledgeable in sciences, that means he can orient the learning and use his network

3. The father let his son work in his nanotechnology lab

[1] http://www.uta.edu/utamagazine/archive-issues/2010-13/2012/1...


In the case of extreme outliers, I think you might be right. In the case of "regular" top performers, I think nurturing is underestimated by many (perhaps not us on HN).


I was around some people who did this growing up. In outlier cases like this it tends to get extremely obvious to anyone nearby, and then usually the onus is on the parents to push this process forward, keeping in mind what is best for the student holistically rather than solely educationally.

Of course other situations are not as clear and the judgement call aspect becomes more pronounced, but at the same time the potential downsides are usually much smaller, i.e. skipping one grade is nowhere near the same thing as skipping three.


This is someone who at the age of 10 could have gone to college. That's how smart he is. One-in-50-million IQ. I don't think parents are that involved, imho. This is the digital age: all you need is an internet connection to learn the same material everyone else has access to. This kid could have taught himself calculus online at 10. If a 10-year-old shows up to school knowing as much as a 20-year-old, then grades will be skipped... lots of them.


probably really high IQ and knew all the material


I wish an editor would step in:

(1) limit the use of 'exponentially faster', simply for the grating repetition. The phrase is used 5 times in the article.

(2) In an article that is about time complexity of algorithms, be precise about terms like 'exponentially faster'.

(3) In an article that is about quantum computing, don't use 'quantum' in its colloquial sense of 'significant'. That is 'quantum' in quantum computing is different than quantum in 'quantum increase'. The latter is a suspect usage, but potentially defensible as colloquial; mixing the two in the same paragraph reads poorly.


Where does it say "quantum increase" in the article? "Quantum speedup" does appear, but this has a specific technical meaning and it is not being used in the colloquial sense. It means that there is an efficient quantum algorithm for solving the computational task but no efficient classical one.


I consider this the colloquial sense: the speedup (or, more generically, the "increase" of throughput, quality, or other performance indicators) obtained by using quantum computers.

Unlike the irreducibly mathematical and very narrow concept of "exponential increase", which could be plausibly misunderstood, a quantum speedup (or increase) can be correctly abstracted to the trivial pattern of doing things in a different way which performs better, without really needing to understand the problem and the solution; it's very surprising to hear the term misused to mean "really big". How could it happen?


I think this might represent an example of another unique value of quantum computing: as a way to get us thinking about computing problems via a different paradigm that can perhaps be translated back to classical computing.


This is awesome work, and kudos to Tang, but I am also really impressed with how his advisor, Scott Aaronson, seems to have so selflessly supported his research and given him full credit. Even in Aaronson's quotes within the article he talks about it entirely as Tang's work, and he seems to have considered Tang's best interests in deciding how and when to let him present the work - even that he let Tang present it!

Given a different person, we may have read about "UT-Austin Professor and Quantum Researcher Finds Classical Alternative to Quantum Recommendation Algorithm" (with generic help from students in a footnote somewhere). It's great to see this type of collegial mentorship.


Keep in mind that Scott Aaronson is about as famous as CS professors come. He can no longer get more famous by doing great work himself. Instead, he gets more famous by having his students do great work and become famous as well. A more junior researcher (or a researcher outside the theory community), on the other hand, would have more incentive to claim the research as their own to become more famous themselves.

That's not to diminish Aaronson's generosity here, just to put it in the context of his actual career motivations.


> just to put it in the context of his actual career motivations.

Unless you have some deeper insights into his psychology, your guess is as good as mine: "Scott Aaronson is simply a decent human being."


I had no idea who Scott Aaronson is, so the context was useful to me.



The preprint has no mention of Scott Aaronson as an author though.

https://arxiv.org/abs/1807.04271


Scott is famous enough that people reading this article are more likely to remember the author as "Scott's student" than the author's name. Additionally, having famous and successful students has great career benefits when they champion your ideas (even when uncited).


Scott Aaronson is just one of the most awesome human beings ever. He is brilliant yet modest, and his writing is at once deeply enlightening and eminently accessible. The world would be a better place if more academics adopted him as a role model.


yeah i enjoy his blog as well


Just to provide a counterpoint based on the scant evidence of his blog (posts and comments): he is, of course, brilliant, and has a refreshing no-nonsense approach. Modest, he is not, not that there is anything wrong with that; someone I would call modest is, for example, Terence Tao, a far more accomplished academic. This is something that should be a "law", the more accomplished a person is the most it can "afford" to be modest. So for example Einstein was more modest than say Feynman, and Edward Witten is way way more modest than your typical string theorist.

The other thing I don't like about Aaronson is his weird fetish with STEM people; he seems to think scientists and technologists are somehow superior or more worthy than regular folk. I also don't agree with some of his opinions on the actions of the state of Israel, but I will avoid that, this being the Internet.


I've never known Scott to be anything but unfailingly modest, both in person and on his blog. What specifically are you referring to?


I had a look and the first thing I found was him calling for an academic boycott against New Zealand - presumably for NZs support of the UN resolution against Israel’s Palestinian settlements. This is an interesting approach to dealing with criticism.

https://www.scottaaronson.com/blog/?p=247


I think you're misunderstanding the post. It's a satire of similar such blog posts which are calling for the boycott of Israel. It's trying to (humorously) make the case that boycotting Israel makes about as much sense as boycotting New Zealand for their treatment of the native population, or likewise boycotting China over Tibet, etc, etc.


It's satire


> This is something that should be a "law", the more accomplished a person is the most it can "afford" to be modest. So for example Einstein was more modest than say Feynman, and Edward Witten is way way more modest than your typical string theorist.

Your examples seem different from your law. Your law says that a more accomplished person can afford to be more modest, whereas your examples suggest that a more accomplished person automatically is more modest. While it would be nice if there were some such causative effect, I think that your examples in evidence of one are very cherry picked (and speculative: do you really know whether or not Einstein was a modest person?).


It is kinda cool to think that Scott himself was a really young undergraduate at Cornell. I believe he was 16 when he was a freshman, and we would hear whispers of the child prodigy in the CS department. He then graduated in 3 years and has gone on to do great things. He is a really nice person, and I met him a few times since I was a CS undergrad at Cornell at the same time. He actually helped us run the programming contest we organized at Cornell. I presume if he had participated, he would probably have won it hands down :-)


Conjecture: for every quantum algorithm that is faster than the best current classical algorithm, there exists a yet-undiscovered classical algorithm that is equally fast.


The technical equivalent of this is that BQP = P. However, we have some evidence that this might be the case, namely Raz and Tal's recent oracle separation of BQP and the polynomial hierarchy


Oops, that should say "might not be the case...", i.e., that BQP != P


Not sure if you’re being serious, but there are already quantum algorithms that have been mathematically proven to not be dequantizable, i.e. have no possible classical version that's as fast.


What's your definition of "as fast"? We can't separate BQP from P right now. That is, we don't know any problems that run in polynomial time on a quantum computer, but provably have no polynomial time classical algorithms.


“As fast” in the big-O sense. Here’s what Scott Aaronson says in reply to one of the comments on his blogpost about this paper:

I should point out, though, that in many cases (Simon’s problem, period-finding, Forrelation, quantum walk on glued trees…), people have proved exponential separations between the quantum and randomized query complexities, which means that there certainly won’t be a dequantization in those cases along the same lines as what Ewin did.


But nobody can figure out anything useful that those algorithms would be good for.


For literally centuries, people thought that number theory had no practical applications. Same for a number of other fields of mathematics. Is quantum computing somewhat overhyped at this moment in history? Undoubtedly (but way less so than block chains, I would argue). But that doesn’t mean it won’t ever become practically useful.


The paper [1] states:

> There is a classical algorithm whose output distribution is O(ε)-close in total variation distance to the distribution given by l2-norm sampling from the ith row of a low-rank approximation D of A in query and time complexity [O(poly-func)] where δ is the probability of failure.

So the classical algorithm would give a "sufficiently close" approximation of what the QML algorithm would output? The article doesn't mention this at all. Is it fair to compare a quantum algorithm with an equivalent classical approximation algorithm?

From the quanta magazine article:

> Computer scientists had considered [the recommendation problem] to be one of the best examples of a problem that’s exponentially faster to solve on quantum computers — making it an important validation of the power of these futuristic machines. Now Tang has stripped that validation away.

Has he really?

[1] https://arxiv.org/pdf/1807.04271.pdf
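For readers unfamiliar with the metric in that guarantee: total variation distance is half the L1 distance between two distributions, i.e. the largest gap in probability the two can assign to any event. A minimal illustration in Python (function name and numbers are mine, not the paper's):

```python
def total_variation(p, q):
    """Total variation distance between two distributions on the same support:
    0 means identical, 1 means disjoint supports. 'O(eps)-close in total
    variation' means no observable event distinguishes the two samplers
    by more than O(eps) in probability."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# A sampler that is slightly off its target distribution:
exact = [0.5, 0.3, 0.2]
approx = [0.48, 0.32, 0.2]
print(round(total_variation(exact, approx), 6))  # 0.02
```

So an ε-close sampler can shift at most ε of probability mass, which is why the approximation may be acceptable for recommendations but matters when comparing against the exact quantum output.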


> As a consequence, we show that Kerenidis and Prakash’s quantum machine learning (QML) algorithm, one of the strongest candidates for provably exponential speedups in QML, does not, in fact, give an exponential speedup over classical algorithms.

The comparison is fair and the author is giving a metric to compare the algorithms too.


The quantum algorithm is exact if it works, the new classical algorithm is negligibly worse at a negligibly higher cost.

Even if there's a formal difference, practical performance is what matters when discussing useful real-world applications as motivating examples for new and expensive technology.


But why does the title focus on the person's age, instead of their skill? This should just read "Grad student finds classical alternative to quantum recommendation algorithm". Sure, he's young, but that's also 100% irrelevant to the actual work presented.


His youth is remarkable, so they remarked on it. I'm not sure what about this troubles you.


Given the audience of the magazine in question, writing the headline to focus on how young he is (which tells us nothing in relation to quantum computing) instead of the fact that this is a grad student (which DOES tell us something: it gives readers a frame of reference in terms of this person's academic knowledge and skillsets) is disappointing at best, and clickbait at worst.


It may not seem as noteworthy to the HN crowd given the demographics, however his age makes it all the more impressive.


What makes it impressive is that this is a grad student. By all means, compound that in the article by going "it's a teenager" but calling an 18 year old going on 19 a teenager is kiiind of pushing it for the sake of getting more clicks.

Yes, he's doing amazing work at a young age, and that's super special. But from a result perspective, this is special because it's a grad student doing the work that usually happens at the PhD or faculty level.


Because it makes more people click on the link.


true words.


both his work and the age at which he has done his work are impressive. Why do you think these two must be mutually exclusive?


Why are you imagining me saying they must be? I'm talking about the headline, not the article.

Highlighting his age in the headline tells us nothing, whereas highlighting that this is a grad student tells us why this is remarkable. Your age doesn't say anything about what your skillset is, whereas your academic level is a damn fine indicator in this context.


I would have liked to read this article, but the page refuses to load in my bog-standard Mozilla browser.

Ironically, there's an article in HN today about "The Bullshit Web." This is exactly what that article is about. Apparently Quanta Magazine is trying to do something clever, and that "clever" thing is stopping their content from reaching me.


Since I really had trouble finding a way to read the article (the source seems to have taken it down?), I just wanted to leave a small how-to on how I eventually succeeded:

  - Look up the page in wayback machine
  - Open the first snapshot
  - JS seems to delete the content immediately
  - Open developer tools
  - Reload the page
  - View the HTML response in the response preview window
Bullshit Web it is

Edit: Article seems to be back online now


And here I am, struggling to solve TopCoder problems....


Because that's what you get paid for as a techie (and in turn, what most tech companies relentlessly grill people for in the interview process): the ability to jump through hoops others have set up for you.

Not the ability to do original, groundbreaking work.


You can always console yourself with the knowledge that you* can rent a car and buy a beer in any state and he can't.

*I'm assuming you're at least 26.


... and listen to MC++ all night long


I don't get why they don't just prepare the dataset for optimal search beforehand, as done in computational geometry: a quadtree, a Voronoi diagram, or in this case clustering into k regions. K dimensions are not so hard to handle. Then search will always be logarithmic. Even Google prepares its data structures for fast queries, e.g. a reverse index.
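The parent's "pay the preprocessing cost once, then query fast" idea can be sketched with a hand-rolled k-d tree (stdlib only; in practice you'd reach for a library such as scipy's cKDTree). Note this illustrates exact nearest-neighbor search, not the sampling problem the paper actually solves, so treat it purely as an illustration of precomputing a search structure:

```python
import random
from math import dist  # Python 3.8+

def build_kdtree(points, depth=0):
    """Build a k-d tree by splitting on each coordinate axis in turn.
    Done once, up front, before any queries arrive."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbor query: descend toward the target, and prune the far
    subtree whenever the splitting plane is further away than the best
    candidate found so far (roughly logarithmic on average)."""
    if node is None:
        return best
    if best is None or dist(node["point"], target) < dist(best, target):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near_side, far_side = ((node["left"], node["right"]) if diff < 0
                           else (node["right"], node["left"]))
    best = nearest(near_side, target, depth + 1, best)
    if abs(diff) < dist(best, target):
        best = nearest(far_side, target, depth + 1, best)
    return best
```

Build the tree once over the dataset, then each `nearest(tree, q)` call avoids scanning every point, which is exactly the trade-off a reverse index makes too.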


I am impressed, what an interesting piece


Has anyone started an implementation of this yet in python, go, rust, etc?


From the paper:

"Second, while the recommendation system algorithm we give is asymptotically exponentially faster than previous algorithms, there are several aspects of this algorithm that make direct application infeasible in practice. First, the model assumptions are somewhat constrictive. It is unclear whether the algorithm still performs well when such assumptions are not satisfied. Second, the exponents and constant factors are large (mostly as a result of using Frieze, Kannan, and Vempala’s algorithm [10]). We believe that the “true” exponents are much smaller, but our technique is fairly roundabout and likely compounds exponents unnecessarily. These issues could be addressed with more straightforward analysis combined with more sophisticated techniques."


Can anyone explain the significance of this in a few sentences for the non-expert? I read the article but am struggling to understand.


This is amazing and he is just 18 years old!


Maybe now Netflix will show me titles I'm actually interested in.


Not enough content on Netflix, unfortunately.


The headline makes my eye twitch. It is technically correct, but it pushes his age a bit too much into the forefront. This is a bachelor's thesis supervised by Scott Aaronson, of all people.

It is going to be really interesting to see whether there are other quantum algorithms for which we can find classical counterparts that run in polynomial time.


Maybe I'm just easily impressed, but a 17-year-old who skipped three grades, is already a third-year university student, was talented enough to catch Scott's attention, and then addressed a technically challenging problem like this certainly warrants at least some recognition for the age at which he made this accomplishment.

Or maybe I'm just not smart enough to hang out on Hacker News.


The issue is that the title actually diminishes the accomplishment by focusing on the wrong thing. The work is impressive independent of the author's age. By emphasizing the age, the focus shifts from what he did, which he should be proud of, to something he will no longer have in two years.


As someone who has done research in quantum information, the age shifts this result from "very impressive" to "extremely impressive".


"Person discovers result in complexity theory" doesn't really beckon your mouse.


Precisely this. He was already awesome by getting a bachelor's so soon and going to grad school next year. This result is on top of an awesome beginning of a career, not just a thing a random teenager did.


Surely him being very young impresses more.

Kylian Mbappe won the World Cup a couple of weeks ago. He was 19. That makes it a bigger deal.

Generally people are impressed by accomplishments made by younger people. I suppose there is also a similar effect at the other end, for a similar reason: you're disadvantaged in accomplishing stuff if you're at either end of the age spectrum. You either had a lot less time to get to the top, or your faculties are waning precipitously.


The piece was written by a journalist. Journalists know how to write for a general audience. That means making the story about a human being. The general public loves reading stories about prodigies. They don't care at all about the "quantum advantage" of this or that algorithm.


>Maybe I'm just easily impressed but the idea of a 17 year old who skipped three grades and is already a third-year university student..

Not saying this applies to Tang. But would you be equally impressed by a 17-year-old from an upper-middle-class family whose parents recognized their gifts and used their influence to help them skip 3 grades and enroll in college early, vs. a 17-year-old from a lower-income family who is equally bright but graduates at 21 and receives no recognition for it due to an unheard-of advisor at a low-ranked school? On paper you'd be more amazed by the former, while the latter is equally bright but is sort of dismissed because of their situation.


Ok, so is your solution to be unimpressed by both, or... ?


Note that Aaronson also thought that a fast classical algorithm was impossible. Tang found the algorithm in spite of that false start.


Always be skeptical of "impossible" without proof.


I thought it was taking a poke at quantum computing by putting the emphasis on his age. But that could be straight up projection on my part - I admit to having serious doubts about both the relevance and practicality of quantum computers. Yes, how I interpret it is influenced by my own bias ;-)


When I was 18, I could operate an oven.


50% of land is uninhabited


Poor Chad Rigetti


Chinese geniuses have done it again


If he had discovered this algo a few years back, he might have won the $1 million netflix prize (https://en.wikipedia.org/wiki/Netflix_Prize).


The Netflix Prize was not concerned with the asymptotic complexity of the algorithms. It measured them only by their results.


It’s doubtful that the results would rank well, because they bin-sort people into types of movies, which is what most algorithms likely did anyhow. In fact, the results are probably awful if it’s not built to be optimal for such a challenge. Deep learning / similar ML would take some CPU but would likely outperform all brute-force or even non-ML quantum algorithms, because it’s able to elucidate knowledge of likes and whatever else ephemeral metadata or movie semantic analysis can be shoveled in.

Unfortunately, this seems like a “look, this trained mammal can eat off a plate, wow!” story. The kid probably had to endure those inane 6x6-and-up IQ red/white triangle block tests over and over. He won’t get much of a high-school life or college dorm/party experience.


I'm pretty sure Netflix can give him another million.


Pretty sure at the current rate star ML researchers are paid, a million for Tang would be low-balling him.


It could just be a handcuffless gift.

