I think this deserves more emphasis: regardless of whether or not we need a quantum computer in the end, quantum computing as a model is evidently useful to do research with and to advance the status quo of our knowledge.
The abstract does too: https://arxiv.org/abs/1807.04271
When we realize that ‘quantum’ computing is really just ‘analog’, we’ll see a resurgence in that too.
(Which is great actually, because modern analog computers are stunningly capable. And yes, they are the same thing when you realize that the hardware is exactly the same, and sampling away the noise has the same form as solving for decoherence.)
> And yes, they are the same thing when you realize that the hardware is exactly the same
Quantum computers don't actually exist yet. It's a theoretical model. There's no hardware.
Even for the theoretical models, we have plenty of alternative models. The circuit model is the de facto standard and a good thought framework, but there are real concerns that it may not be the model that translates into actual hardware. (Analogously, lambda calculus vs RTL.) It will be interesting to see what alternative or new theoretical models pop up once the quantum computer's equivalent of the transistor surfaces.
Another thought to consider: since our computation latencies are currently restricted by sampling (clock) times, which are in turn restricted by power (for heating reasons predicted by quantum mechanics), would the smarter bet be to pursue technology that uses exponentially less energy (analog), or technology that uses exponentially more energy with the same promised speedups as the former (quantum)?
Analog computers in their ideal form (known as real computers, after the real numbers) are also strictly superior to classical computers. But unlike quantum computers, we know for sure that ideal analog computers cannot exist. This also follows from QM. (Intuitively, real numbers don't exist in physics and everything ultimately becomes quantized.)
By your same line of reasoning: we are able to measure real numbers; in fact, you can derive discrete numbers from real numbers, while the converse is not true. Intuitively, discrete numbers don't exist and everything ultimately becomes real numbers. This is obviously a correct following of the stimulating rhetoric, yet completely untrue. The ability to derive from an axiom has nothing to do with the validity of the derived representations.
> we are able to measure real numbers
I disagree with you here. I like to believe in the Church-Turing-Deutsch thesis, which implies that reality is ultimately discrete and real numbers are nonphysical. The example you give, time, is indeed a real number in our best models of physics. But this is mostly because we don't know how to quantize it yet. It's one of the biggest open problems in physics. That said, the outcome could indeed be that it is a real number and CTD is wrong.
That metaphor seems to confuse people more than it clarifies. Did you perhaps mean "nature's programming language"?
It would be fun if quantum computing were equivalent to classical (less fun -- probabilistic) models with comparable complexity, just more intuitive. I guess lots of complexity conjectures can be phrased that way, though -- P vs NP for sure.
If so, I'd say,
- It didn't really seem like anyone in the comment thread was trying to make a "compelling" argument for anything -- we're just chatting here.
- If we were interested in that sort of thing, we might note that minimal program size is a good measure of language complexity/information, and that "algorithms from the book" and different models of computation that might make programs easier to write are both "objectively interesting" from an information-theoretic standpoint.
EDIT: oh wait, I get it, you weren't being constructive, you just came here to make fun of a religion nobody was talking about. Never mind.
The law of gravity is a different kind of thing to a rock. It is not a physical object, it describes the ways physical objects behave. The number 3 is a different kind of thing to 3 apples. It is not a physical-natural object, it describes aspects of nature.
It's the difference between the rules of chess and the board and pieces. Both are essential parts of the game 'chess', but we can usefully distinguish them when we want to be more precise.
There is no controversy here.
This is probably because religions have traditionally framed extra-universal things as ordering events inside the universe, but it seems that is not possible if you exit spacetime.
In other words, branes, strings, gods, or anything we discover could, in my definition, "cause" reality to manifest without a temporal cause/effect coupling.
Perhaps I need a better word to remove the cause/effect temporal baggage
In terms of proof, we also don't know whether reason can be applied to things within nature -- it still depends on axioms. But we still believe it to be true and to be applicable everywhere inside nature regardless of specific context, and I see no reason (pun intended) not to extend this belief even outside nature.
There is the set A. It contains all elements except itself.
There is a subset of A called P, which is all possible things.
There is a subset of P called M, which is all things that manifest in reality.
Traditionally we call elements of M "things that exist (or did exist or will exist)"
You're saying that P=M?
I find that position extraordinary and lacking evidence.
If there's infinite matter arranged randomly then you would expect to eventually find one of everything finite.
We can say things that aren't disagreements. We can even disagree without controversy.
You seem to be having a knee-jerk reaction to the word "God". Your personal preference for the most secular language possible is not a valid reason to pick apart GP's phrasing.
> First, he was wrong
I agree. And that's completely and totally irrelevant. I have no idea why you even mention it. Obviously what matters is that his meaning is clear.
> Second, he said that in a very different culture. This isn't the 1920s.
So your objection to GP's phrasing, in spite of it being completely inoffensive and having a perfectly clear meaning, is "because it's $CURRENT_YEAR".
QM without collapse is actually a fully deterministic theory. And it makes sense that the universe doesn’t play dice - because there is no source of randomness available.
It’s a functional language.
> there is no source of randomness available
I think that's going beyond what we can be sure of. We have no evidence that there is randomness, but we also have no evidence that there isn't.
QM may be fully deterministic if we disregard collapse, but as far as I'm aware, we can't do that. Collapse is still something we have to contend with given the current state of our knowledge.
I suppose we can't rule out collapse totally, but we also can't rule out fairies pushing atoms around. It's just that there is no evidence for collapse, so it seems we can ignore it.
It takes one bit of information to specify that a coin is tails-up, regardless of whether it was placed that way deliberately or got that way as a result of a random coin toss.
For example, if you know you're in a universe where coins always land tails-up, then you need 0 bits to store that state.
Of course you could make a similar claim about a universe where coin flips always alternate--current state is merely a function of time and initial conditions, and therefore no information is needed to store it. But then you're saying that deterministic universes contain no information at all except their initial state. That's fine, but I personally wouldn't put an axiom like that on the winning side of Occam's razor.
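For what it's worth, the bit counts being argued about in this sub-thread are just Shannon entropy; a quick sketch (toy distributions, nothing more):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A coin that always lands tails-up: no uncertainty, 0 bits to store.
always_tails = entropy([1.0])      # 0.0
# A fair coin toss: 1 bit needed to record the outcome.
fair_coin = entropy([0.5, 0.5])    # 1.0
```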
How can it be otherwise? You can always completely describe the deterministic universe by its initial state plus a time value, even if its initial state appears much simpler than the later states.
A busy game of life scenario can arise from a simple starting point. If you don't know the rules, you're stuck describing the position of every dot, if you do, then you can create a complete description with much less information.
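To make the Game of Life point concrete, here's a minimal sketch (standard B3/S23 rules, with a 5-cell glider as the simple starting point): the full rule set plus a tiny seed describes every later state, however busy it looks.

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells (B3/S23)."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The whole "description": ~10 lines of rules plus a 5-cell seed.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider reappears, shifted diagonally by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
```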
By which one?
> randomness - it always comes from somewhere
Unless randomness is a fundamental property of matter.
> can't rule out fairies pushing atoms around
Isn't that exactly what Bell's experiments disproved?
The date that he said isn't relevant either -- nor the culture at that date, since it doesn't affect whether this is "confusing" or not, and I don't see anything drastically different in our culture to render it so. (Or even to render a reference to God "offensive" or a bad metaphor today, if that was your argument)
It can change state arbitrarily, introspect all the data and step forward in execution.
It works as the comparison by which quantum computing is often sensationalized by the media.
That it applies to physics too is expected, because one way to characterise the quantum language is that it defines what one system can possibly do or say to another system.
The source of those limitations is not yet completely clear, i.e. it hasn't been satisfactorily axiomatized. But we can get close, and it seems to be a logical constraint based around self-consistency. Assuming the universe must be self-consistent, it must also obey QM.
Also, with natural units Planck's constant is 2π, so it does show up a lot.
While this problem does occur in matrix factorization based recommenders, there are a lot of solutions for it without quantum computers that perform well, and it's far from being the one method for recommendation.
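For anyone unfamiliar with the classical baseline being referred to: a minimal sketch of a low-rank (matrix factorization) recommender via truncated SVD, on a made-up toy ratings matrix (illustrative only, not any production method):

```python
import numpy as np

# Toy user x item ratings matrix (0 = unrated). Purely illustrative data.
A = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

# Rank-2 truncated SVD: the low-rank model behind matrix-factorization
# recommenders ("users fall into a small number of preference classes").
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Recommend for user 0: the unrated item with the highest predicted score.
user = 0
unrated = np.where(A[user] == 0)[0]
best = int(unrated[np.argmax(A_k[user, unrated])])
```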
Could that be used in clustering algorithms in bioinformatics?
EDIT: Sorry, Aaronson's writing is just too funny not to put that quote in full context:
> On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well.
I'm not sure whether this is a typo, and he means to say that there might be more economically worthwhile applications, or whether he's trolling the people complaining that this is useless by pointing out that there are real economic benefits to it right now.
He's trolling the people who regret that the result of this research is manipulating people, by pointing out that there is real potential for applications that they object to less.
Clustering users by their ratings of various titles is the kind of thing any hobbyist statistician might do to help their friends find movies they like using online data sets.
There are methods to use data to improve people's experiences, and sometimes corporations employ those methods.
So yes, this algorithm is useful for manipulating people. And not just by using an extremely broad definition of the term.
One's relationship with a corporation is very different. They don't know you or necessarily care very much about your long term interests.
The big thing is that, broadly, "the recommendation algorithm" is not some single problem in combinatorics. Rather, it is an initially ill-posed problem that yields concrete problems based on simplifying assumptions. I.e., sure, the FINAL, SOLVED problem is as Aaronson put it, but because different researchers make different simplifying assumptions, that final problem actually hadn't been studied that much - half the achievement of the paper is finding simplifying assumptions that are real-world-plausible and yield tractable problems ("Obviously, without any restrictions on what T looks like, this problem is ill-posed. We make this problem tractable through the standard assumption that T is close to a matrix of small rank k (constant or logarithmic in m and n). This reflects the intuition that users tend to fall into a small number of classes based on their preferences. With this assumption, it becomes possible to infer information from subsampled data. We can determine the classes a user lies in, and use information about those classes to give a likely-to-be-good recommendation")
The original quantum recommendation algorithm paper involved both a simplifying assumption about the sorts of consumers one was modeling and a system of sampling to get a good approximation ("The literature on theoretical recommendation systems is fairly scant, because such algorithms quickly run up against the limits of their model, despite them being somewhat far from the results seen in practice.")
> And what about the future of UT Arlington’s latest wunderkind, Ewin Tang? “I haven’t come up with anything that is really ingenious yet,” he says. Be patient; he’s only 12.
It only took 6 years. Not bad at all!
Archive.org has the original text: https://web.archive.org/web/20180725201359/http://www.uta.ed...
Quoting for posterity:
Twelve-year-old Ewin Tang is the latest in a line of wunderkinds to begin their UT Arlington careers while other students their age are still in elementary school or junior high. History suggests he’ll continue to amaze.
Ewin Tang’s classmates tend to overlook the slight, bespectacled youth sitting in the front row until he answers the professor’s queries—all correctly. Then they ask their own questions. “Who is this guy?” “Why is he here?” And always, “How old is he?” At 12, Ewin, the son of bioengineering Professor Liping Tang, is the youngest student on campus and among the youngest in UT Arlington history. Since taking his first college courses at age 10, he has completed 20 hours, including classes in calculus and differential equations, all with a 4.0 GPA.
“Other students just seem kind of amazed,” he says. “They ask about my age, what I’m majoring in. Some of them actually take pictures of me. They’re pretty cool with it, though; they really don’t bother me a lot.” Although the age gap usually prevents Ewin from forming close friendships with his classmates, many are eager to work with him once they recognize his abilities.
Ewin’s college career began after he completed every math course available in his K-12 private school. His intellect had already prompted school officials to move him from third to seventh grade, but it was soon apparent that he needed more. After he scored 1920 on the SAT at age 10, his parents and school officials explored college enrollment.
Dr. Tang acknowledges that having an immensely bright child can be challenging. “There are no books, no guidance on exactly what to do. This (college for someone so young) is a totally gray area.”
The Tangs met with then-Provost Donald Bobbitt and later with Senior Vice Provost and Dean of Undergraduate Studies Michael Moore. Ewin’s first classes, an online course in history and an on-campus calculus class, were tests he passed easily.
“I think it makes a difference that I’m here,” Dr. Tang says. “Ewin has a place to go and a built-in support system.”
In addition to his University coursework, Ewin works part time in his dad’s nanotechnology laboratory. He is developing a probe to detect bacterial infection, something that would greatly assist in diagnosing diseases. His career plans involve science or engineering, but he hasn’t yet settled on a specialty.
“Our main concern when we began this was his social life,” says Dr. Tang, who notes that Ewin attends a private high school with students his own age for some courses and activities. “Academically he is fine, but we want him to stay in school and stay with kids his own age, to have friends his own age. So far it’s working out pretty well. Thanks to Dr. Moore, he’s having a very good experience.”
Ewin spends part of Monday, Wednesday, and Friday at the private school, where he takes classes and participates in soccer, basketball, cross country, and the Science Olympiad. He also attends UT Arlington on Monday, Wednesday, and Friday and does research in his father’s lab Tuesday and Thursday. As if that’s not enough, he works with a private tutor, studies Chinese, and plays the piano and erhu, a traditional Chinese instrument akin to a violin.
While it might seem that Ewin’s case is unique, Moore and his predecessors in the Provost’s Office have seen others. Because such students don’t meet traditional enrollment requirements, they are evaluated case by case.
“Typically, there is a detailed conversation with the parents about the challenges and rigors of college work as well as a thorough review of the student’s academic history,” Moore explains. “Obviously, we are looking for exceptional young men and women who show the ability to excel in the classroom as well as handle the collegiate environment.”
Over the past two decades, several of these exceptionally young and brilliant students have used UT Arlington as a springboard to success.
I like his update - a message to Hacker News commenters:
"On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well."
It's as though we moderns are all grazing in a 100-acre meadow that's been grazed for two centuries. The easy stuff is gone. Even the magic beans are rare. The question is: with your magnifying glass, which seeds do you look at in hopes that they're nutritious?
One of my college advisors muttered, and I quote, "Einstein wasted 30 years of his life" (looking for a unified theory). Well now. Sadly, this advisor put graphite on paper for a whole career with little to show for it (apart from employment).
Mr. Nobel's name is -not- still recognized because of his economically worthy applications. And it's lucky for Kelvin that he invented a temperature scale.
Applications can be both economically valuable and have positive social impact. Now recommender systems are in fact useful to discover new things, not just to entice people to buy more. E.g. finding new music, related research, or other things.
Usually one of these:
- "I love the idea but can we talk about how the website doesn't look good when CSS is disabled?"
- "This website is impossible to read because the contrast ratio is too low and I am personally offended."
- "Did anyone else get an ad from this news website? Why does advertising exist?"
- "An obscure open source project uses the same name as this unrelated business. Have you no shame?"
- "The website looks weird on mobile, by which I mean my rooted Android phone running Opera."
- "The text is too big."
- "The text is too small."
> The only difference between Theorem 1 and its quantum equivalent in  is that the quantum algorithm has no ε approximation factors (so ε=0 and 1/ε does not appear in the runtime). Thus, we can say that our algorithm performs just as well, up to polynomial slowdown and ε approximation factors.
With ε being the accepted margin of error, Theorem 1 being the algorithm produced by Tang, and the blank space being a reference to Kerenidis and Prakash's quantum algorithm (left blank here for fidelity of quoting).
This means that this algorithm is not really a substitute for the quantum algorithm, because it requires a level of error to be accepted (something that might work in recommendation engines, but not necessarily outside that context), and it still has a polynomial slowdown over the quantum algorithm. One thing to note is that not all quantum algorithms perform exponentially better than their classical counterparts. In particular, some quantum search algorithms perform only quadratically better than the classical search algorithms. So even if an exponential speedup were not achieved by quantum computers (which it arguably still is here, because the ε factor means the two don't give the same results), they would still be expected to achieve a polynomial speedup, as they do in this case.
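The quadratic case mentioned above (Grover-style unstructured search) is easy to see with a back-of-envelope query count; the numbers below are illustrative only:

```python
import math

# Unstructured search over N items: a classical algorithm needs on the
# order of N queries in the worst case, while Grover's algorithm needs
# roughly (pi / 4) * sqrt(N) -- a quadratic, not exponential, speedup.
for N in (10**4, 10**8):
    classical = N
    grover = math.ceil((math.pi / 4) * math.sqrt(N))
    print(f"N={N}: classical ~{classical} queries, Grover ~{grover}")
```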
In conclusion, stating that this algorithm threatens the prospects of quantum computation is probably an exaggeration.
What they often did early on: around age 14, SMPYers would be admitted to Johns Hopkins, typically to start taking math and computer science courses, at which they would often excel and be at the top of the class; they were expected to go on to postgraduate work around 19, and fit in so well that fellow students and their teachers often (when surveyed or asked) didn't realize they were so young. If you look at OP, that describes Tang remarkably well.
This is such a great term. I never thought about it before, or about how it could relate to our definitions of intelligence and ability in science/tech. There are so many generic tests for intelligence that perhaps there should be more focus on this area.
You might ask, why use only the SAT-M, why not the other subtest, the SAT-V verbal test, since it should be equally difficult? I was surprised to find out that there was in fact a parallel study, 'SVPY', which did just that, but it got canned early on for disappointing results, apparently.
Depends on the subject. Did computer science exist as a course in the '70s?
To your question, my immediate answer was, "let them play outside". But it sounds dismissive of what you are asking. So the better answer is: provide them with the opportunity to learn, encourage their curiosity, praise them for their work not innate intelligence while still telling them that they are smart, create an environment that values and promotes education, create a program for them to follow when they show interest in a particular discipline, have great mentors help them develop, etc.
The truth is, there is always going to be a compromise between having a care-free childhood and top performance from an early age. You can't really have both no matter how smart your kids are. Asian parents stereotypically tend to opt for the latter end of the spectrum with, on average, very successful outcomes.
1. Financial stability in the family, and it seems that both parents were emotionally supportive
2. The father is knowledgeable in the sciences, which means he can orient the learning and use his network
3. The father let his son work in his nanotechnology lab
Of course other situations are not as clear and the judgement call aspect becomes more pronounced, but at the same time the potential downsides are usually much smaller, i.e. skipping one grade is nowhere near the same thing as skipping three.
(1) limit the use of 'exponentially faster', simply for the grating repetition. The phrase is used 5 times in the article.
(2) In an article that is about time complexity of algorithms, be precise about terms like 'exponentially faster'.
(3) In an article that is about quantum computing, don't use 'quantum' in its colloquial sense of 'significant'. That is, 'quantum' in quantum computing is different from 'quantum' in 'quantum increase'. The latter is a suspect usage, but potentially defensible as colloquial; mixing the two in the same paragraph reads poorly.
Unlike the irreducibly mathematical and very narrow concept of "exponential increase", which could be plausibly misunderstood, a quantum speedup (or increase) can be correctly abstracted to the trivial pattern of doing things in a different way which performs better, without really needing to understand the problem and the solution; it's very surprising to hear the term misused to mean "really big". How could it happen?
Given a different person, we may have read about "UT-Austin Professor and Quantum Researcher Finds Classical Alternative to Quantum Recommendation Algorithm" (with generic help from students in a footnote somewhere). It's great to see this type of collegial mentorship.
That's not to diminish Aaronson's generosity here, just to put it in the context of his actual career motivations.
Unless you have some deeper insights into his psychology, your guess is as good as mine: "Scott Aaronson is simply a decent human being."
The other thing I don't like about Aaronson is his weird fetish with STEM people; he seems to think scientists and technologists are somehow superior or more worthy than regular folk. I also don't agree with some of his opinions on the actions of the state of Israel, but I will avoid that, this being the Internet.
Your examples seem different from your law. Your law says that a more accomplished person can afford to be more modest, whereas your examples suggest that a more accomplished person automatically is more modest. While it would be nice if there were some such causative effect, I think that your examples in evidence of one are very cherry picked (and speculative: do you really know whether or not Einstein was a modest person?).
I should point out, though, that in many cases (Simon’s problem, period-finding, Forrelation, quantum walk on glued trees…), people have proved exponential separations between the quantum and randomized query complexities, which means that there certainly won’t be a dequantization in those cases along the same lines as what Ewin did.
> There is a classical algorithm whose output distribution is O(ε)-close in total variation distance to the distribution given by l2-norm sampling from the ith row of a low-rank approximation D of A in query and time complexity [O(poly-func)] where δ is the probability of failure.
So the classical algorithm would give a "sufficiently close" approximation of what the QML would output? The article doesn't mention this at all. Is it fair to compare a quantum algorithm with an equivalent classical approximation algorithm?
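For the curious, the two ingredients in that theorem statement are easy to sketch: l2-norm (length-squared) row sampling, the data-access primitive the classical algorithm assumes, and total variation distance, the closeness measure. Toy code, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))  # made-up data matrix

# l2-norm sampling: pick row i with probability ||A_i||^2 / ||A||_F^2,
# so rows with large norm are sampled proportionally more often.
row_norms2 = (A ** 2).sum(axis=1)
probs = row_norms2 / row_norms2.sum()
i = int(rng.choice(A.shape[0], p=probs))

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()
```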
From the quanta magazine article:
> Computer scientists had considered [the recommendation problem] to be one of the best examples of a problem that’s exponentially faster to solve on quantum computers — making it an important validation of the power of these futuristic machines. Now Tang has stripped that validation away.
Has he really?
The comparison is fair, and the author gives a metric by which to compare the algorithms, too.
Even if there's a formal difference, practical performance is what matters when discussing useful real-world applications as motivating examples for new and expensive technology.
Yes, he's doing amazing work at a young age, and that's super special. But from a result perspective, this is special because it's a grad student doing the work that usually happens at the phd or faculty level.
Highlighting his age in the headline tells us nothing, whereas highlighting that this is a grad student tells us why this is remarkable. Your age doesn't say anything about what your skillset is, whereas your academic level is a damn fine indicator in this context.
Ironically, there's an article in HN today about "The Bullshit Web." This is exactly what that article is about. Apparently Quanta Magazine is trying to do something clever, and that "clever" thing is stopping their content from reaching me.
- Look up the page in wayback machine
- Open the first snapshot
- JS seems to delete the content immediately
- Open developer tools
- Reload the page
- View the HTML response in the response preview window
Edit: Article seems to be back online now
Not the ability to do original, groundbreaking work.
*I'm assuming you're at least 26.
"Second, while the recommendation system algorithm we give is asymptotically exponentially faster than previous algorithms, there are several aspects of this algorithm that make direct application infeasible in practice. First, the model assumptions are somewhat constrictive. It is unclear whether the algorithm still performs well when such assumptions are not satisfied. Second, the exponents and constant factors are large (mostly as a result of using Frieze, Kannan, and Vempala’s algorithm ). We believe that the “true” exponents are much smaller, but our technique is fairly roundabout and likely compounds exponents unnecessarily. These issues could be addressed with more straightforward analysis combined with more […]"
It is going to be really interesting to see whether there are other quantum algorithms for which we can find classical algorithms that run in polynomial time.
Or maybe I'm just not smart enough to hang out on Hacker News.
Kylian Mbappe won the World Cup a couple of weeks ago. He was 19. That makes it a bigger deal.
Generally people are impressed by accomplishments made by younger people. I suppose there is also a similar effect at the other end, for a similar reason: you're disadvantaged in accomplishing stuff if you're at either end of the age spectrum. You either had a lot less time to get to the top, or your faculties are waning precipitously.
Not saying this applies to Tang. But would you be equally impressed by a 17 year old from an upper middle class family whose parents recognized their gifts and used their influence to assist them in skipping 3 grades and enrolling in college early vs 17 year old from lower income family who is equally bright as the former kid, but graduates at 21 and receives no recognition for it due to an unheard of advisor at a low ranked school? On paper you'd be more amazed by the former, while the latter is equally as bright but is sort of dismissed because of their situation.
Unfortunately, this seems like a "look, this trained mammal can eat off a plate, wow!" story. The kid probably had to endure those inane 6x6-and-up IQ red/white triangle block tests over and over. He won't get much of a high-school life or a college dorm/party experience.