>“My seemingly inconsequential moment came one night while washing dishes and using a bristle brush to clean a tall glass,” said IBM’s Edwin Pednault. “It suddenly occurred to me that if one looks at the gates applied to a given qubit in a grid circuit, the gates form a bristle-brush pattern where the bristles are the entangling gates that are being applied to that qubit.”
He then took this idea and applied it to simulating qubits.
So let us all applaud Louis F. Kutik, who, after inventing the plastic bristle brush in 1964, could not have surmised that it would one day inspire advances in quantum computing.
Funny, while I've found this to be true of intellectual "hard problems," I actually learned this principle from challenging video game bosses (I'm looking at you, Liquid Snake!).
The work is about simulating wide-but-shallow quantum circuits (e.g. 49 qubits with 27 layers of operations, or 56 qubits x 23 layers [edit: HN title much better now]). The article entirely fails to mention the depth restriction. (The paper is very upfront about it: [1].)
Personally, I happen to agree with Scott Aaronson's take [2]:
> The goal, with sampling-based quantum supremacy, was always to target the “sweet spot,” which we estimated at around 50 qubits, where classical simulation is still possible, but it’s clearly orders of magnitude more expensive than doing the experiment itself. If you like, the goal is to get as far as you can up the mountain of exponentiality, conditioned on people still being able to see you from the base. Why? Because you can. Because it’s there. Because it challenges those who think quantum computing will never scale: explain this, punks! But there’s no point unless you can verify the result.
Although doing that and pushing into a regime that no one can simulate until a couple years later sure wouldn't hurt.
> But there’s no point unless you can verify the result.
I thought there were quite a few problems (like factoring) for which it is hard to generate a correct answer, but easy to check whether a given answer is correct?
In theoretical computer science, NP is the class of problems where it is easy to check an answer but (presumably) difficult to generate an answer.
There are many, many problems in NP. However, quantum computers are not expected to be able to efficiently solve NP problems.
Factoring is thought to be an easier sort of NP problem, sometimes called NP-intermediate [1]. (This means it is in NP, but not NP-complete.) So quantum computers have a chance at factoring efficiently, and in fact they can (Shor's algorithm). There are other NP-intermediate problems, like discrete logarithm, that quantum computers can also solve, but NP-intermediate problems are not as common (in real life) as NP-complete problems, and as far as we know quantum computers still can't solve many of them.
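To make the "easy to check, hard to find" asymmetry concrete, here's a toy sketch (numbers are tiny illustrative examples, nothing cryptographic): verifying a claimed factorization is one multiplication, while finding the factors by naive trial division takes up to ~sqrt(n) steps, which is exponential in the bit-length of n.

```python
def check_factors(n, p, q):
    """Cheap verification: does p * q really equal n?"""
    return 1 < p < n and 1 < q < n and p * q == n

def factor(n):
    """Expensive search: naive trial division, up to ~sqrt(n) steps."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(check_factors(3845269, 1873, 2053))  # True (one multiplication)
print(factor(3845269))                     # (1873, 2053), found the slow way
```

Real factoring algorithms are far cleverer than trial division, but the gap between checking and searching is the point.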
A bit off topic to your question, but this reminds me: where are we, progress-wise, with the P vs. NP problem? I haven't thought to track the Millennium Problems in a while; I'm sure I would have heard if something substantial had arisen, though. I used to work on a specific variation of the problem, which was the dorm-room housing problem [1]. That particular scenario always stood out to me as something that should be relatively easy to solve, and I worked on it for a few years. Anyways, I'm way off topic now.
Every now and then there is a paper attempting to prove P=NP or P!=NP; I've seen several on Hacker News over the years. The most interesting link someone posted was this one, listing many such papers and nicely tagging each as [equal], [not equal], [undecidable], or [unprovable]:
I am especially intrigued by the argument made by Nicholas Argall: that the question of whether P = NP is incomplete in and of itself. [1] I think this rings true, on the basis that not all of these problems are the same or present the same criteria to solve. Understanding them is going to require thinking at a different depth, though. It's fun for me to think about and try to wrap my head around; I'm no expert, but I have a lot of fun with it.
The argument put forward in the link misrepresents Gödel's theorem.
> 6) Therefore, no consistent and complete definition of the set NP is possible Rationale: If we accept that the set NP can only be rigorously defined via a language, this conclusion follows from the premises above.
This doesn't even make sense. Definitions do not change a language's completeness or consistency; they simply relate a new concept to previously defined concepts.
> 7) Therefore, no consistent and complete statement of the problem of P=NP is possible Comment: A conclusion which is not only proven in this paper, but supported by the years of argument between mathematicians regarding the relevance of proposed answers to the problem.
This does not follow from Gödel's theorem. The fact that you need an incomplete system to reason about something does not mean the theorem you want to prove is unprovable.
The main problem with the argument is that it would apply equally well to theorems that have already been formally proven.
This seems to be mainly a collection of garbage papers and non-serious proofs by cranks and uninformed people. Not that it isn't interesting, but it doesn't say anything about actual mathematical progress.
There are, but quantum computers won't be able to solve those problems for many years to come, because error correction is required. Quantum supremacy will happen sooner, because for it you can choose more esoteric problems that are easier to sample.
The article’s description of whatever they’re theoretically doing is absolute fucking garbage, but it sounds like they’re probably doing some sort of compressed representation like Young Tableaux for efficiently representing a subset of quantum circuits. Unfortunately, it’s usually not the subset you’re interested in.
I've been working with 'Watson' products for a while now, and 'nothing came out of it' is an overstatement. They have some interesting products with useful business applications; their marketing just doesn't align with their portfolio. They use so many buzzwords to get the C-level hyped, but they never manage to live up to the promises. If you manage expectations, there are some really interesting opportunities, though never groundbreaking ones.
First of all, I figured out that there's no 'it'. It's a portfolio of products ranging from image recognition as a service to on-premises NLP and enterprise search. All these products have their own specific use cases; some of them have hardly any real-world use cases at all.
What I've recently been doing is using NLP for document classification. It's the type of problem you can solve with many different solutions, but 'Watson' is the one we're using.
That's the point being made; the "Watson" marketing and the "Watson" reality are at odds, but there is a Watson reality, even if it's more mundane than the marketing.
I rewrote my initial draft of that sentence to deny IBM their strategy of presenting Watson as a human name, which is part of the problem. You shouldn't really use a possessive with Watson, because nothing is Watson's: it isn't a person, grammatically or otherwise. Watson is a collective noun for a suite of products. Even if you nominally know better, it fools your brain if you let them frame Watson as the proper name of a person.
Let's just be clear about this: "Watson" as an entity does not exist. It is nothing but a brand, like "Microsoft" slapped in front of a disparate array of independent siloed products. Most of these are probably just acquisitions.
Watson Knowledge Studio - This is actually just something called SIRE (Statistical Information and Relation Extraction toolkit). The "Watson" in the name means nothing.
Watson Natural Language Understanding API (Note: this is just a product that was called AlchemyLanguage — something they bought and renamed to include the word "Watson") - I can tell you everything that makes it crippled: you can build a NERC model using Watson Knowledge Studio, but the API has no way for you to extract e.g. negations, sub-entity types, etc. — basically any entity attributes. Again, "Watson" means nothing; it's a rebranded acquisition.
Watson Analytics - It doesn't integrate with anything Watson-y except maybe Cognos. I.e., it's completely useless for doing any analytics on output from the other Watson NLU/classification products. The Social Media component has no way for you to integrate your own data. It's nothing but a big toy meant to be shown in demos; it is 100% useless in practice. I have no idea what they bought and rebranded here. I mean, c'mon: I'm using your Watson sentiment analysis crap on Tweets and Facebook, and your Watson Analytics product has no way for me to get this analyzed data back in to build visualizations and see trends? With a lot of effort, basically by somehow normalizing the JSON output from NLU, maybe — but there is no icon that says "Import Watson NLU data".
Watson Natural Language Classification - This is just a multi-label text classifier. It's not too bad, to be honest, except I don't like the limit on the number of texts per classifier and the limit on the number of characters per text. I also think they need to handle hierarchical multi-label classifications. My biggest problem is that I can't feed the output of NLC into any other Watson tool. How the fuck do I do my analytics? C'mon, IBM, throw a man a bone here; why must I do everything myself? Why can't I plug the output into Watson Analytics?
Basically everything is branded under a label called "Watson", but none of the products even _remotely_ integrate with one another. There are so many deep flaws in the products. The documentation often does not correspond to reality.
If someone at IBM would just sit down and try and make all these Watson products actually "talk to one another" this would be a really good enterprise quality suite of products. I really think they are close, but they're fucking it up in such a stupid way it dumbfounds me. I honestly don't think anyone at IBM will fix this. They are so big and so stupid at the same time.
Believe nothing you see at an IBM demo or any presentation. It is all smoke and mirrors. You will have to build something yourself to see the flaws.
This actually has a really long answer, but to give you a flavor: on an analog computer you can sort in O(n) (https://en.m.wikipedia.org/wiki/Spaghetti_sort), and you also have integration and differentiation as primitives.
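A digital simulation of spaghetti sort makes the analog advantage concrete: the "grab the tallest rod" step is O(1) for a hand hovering over the rods, but O(n) in software, so the serial simulation degrades to O(n²). This is a toy sketch, not how anyone would actually sort.

```python
def spaghetti_sort(values):
    """Simulate spaghetti sort: repeatedly grab the tallest rod.

    On the analog machine, cutting the rods is O(n) and each grab is O(1),
    giving O(n) overall. Simulated serially, each grab costs O(n),
    so this loop is O(n**2) -- the speedup lives in the hardware.
    """
    rods = list(values)
    grabbed = []
    while rods:
        tallest = max(rods)      # O(1) for the hand, O(n) in software
        grabbed.append(tallest)
        rods.remove(tallest)
    return grabbed[::-1]         # tallest-first grabs come out descending

print(spaghetti_sort([3, 1, 2]))  # [1, 2, 3]
```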
It was unclear to me what the impossibility is that is referred to by the article, but it appears to be the exponential nature of the required storage for the simulation:
> Simulating anything more than 49 qubits on a classical computer was thought impossible because of the limited memory classical computers have compared to their quantum counterparts. Because of the difference between classical and quantum computing, each time you add a qubit to a simulation it increases the necessary memory exponentially in the classical simulation.
If true, then the jump from a simulation of 49 to 56 was clearly not impossible in the first place.
Following some links, the term "impossibility" seems to have been introduced by New Scientist, but it isn't backed up by anything substantial, as far as I can tell.
Well, they explained it a couple of paragraphs later.
The previous record, a 45-qubit simulation, took 500 terabytes of memory. Each added qubit doubles the state-vector size, so a brute-force 49-qubit simulation would need about 16 times that — roughly 8 petabytes, which is impractical. But this one did 56 qubits with just 4.5 terabytes of memory.
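The arithmetic is easy to check: a full state vector over n qubits holds 2^n complex amplitudes, typically 16 bytes each (double-precision complex).

```python
def statevector_bytes(n_qubits, bytes_per_amp=16):
    """Memory for a brute-force state-vector simulation:
    2**n complex amplitudes, 16 bytes each (complex128)."""
    return (2 ** n_qubits) * bytes_per_amp

for n in (45, 49, 56):
    print(f"{n} qubits: {statevector_bytes(n) / 2**40:,.0f} TiB")
# 45 qubits: 512 TiB        -- the ~500 TB previous record
# 49 qubits: 8,192 TiB      -- 16x more; each qubit doubles it
# 56 qubits: 1,048,576 TiB  -- which is why the 4.5 TB result
#                              must avoid storing the full vector
```

The 4.5 TB figure only makes sense because the new method exploits circuit structure (the shallow depth) rather than holding all 2^56 amplitudes at once.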
So, what does this actually mean for quantum computing? I have hardly any understanding of the subject, so it seems odd to me that a breakthrough in simulation is so relevant. I also don't have a research background, so that may be why.
> With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality. To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity.