- Teaching obligations... which is fine with me, I _love_ teaching, but it earns you zero credits. Listening to other profs complain about teaching is tiresome and goes completely against my view of universities: places to build the next generations of good people.
- A lot of time spent acquiring funding; it doesn't matter whether you're employed at a research institute or a university. University profs probably don't get fired if they fail to acquire funding, but the pressure is there.
- A lot of time spent reviewing the work of your subordinates: master's and doctoral students' theses. (OK, it doesn't count toward the teaching quota, but it's part of teaching as I see it.)
- Publication pressure and inflation. Paper count earns you points, not quality or impact. This encourages authors to publish multiple partial papers instead of one "complete" paper. It's also devastating for the field, as searching for relevant papers (on any topic) becomes way harder.
This leaves you with little time for actual _research_ during work hours. Most of your official working hours are spent on administrative tasks and (co-)authoring papers. It is not an environment for someone like me, who likes to _build_ stuff as well. Income was only a secondary factor in my choosing industry over academia.
For me personally, the money hasn't been a hard thing to let go of. (Professors in the US get paid relatively well compared to academics in other countries, even places with much higher costs of living.) Nor has the stress. The thing that's been hardest is not being able to pick where you live. But even if academic jobs were more plentiful, the fact is there just aren't that many cities with multiple research universities.
The pay is much, much better.
Work life balance is waaay better.
Career prospects are better....
My biggest regrets in life are...getting a PhD, and going to university at all, in that order.
The opportunity cost of getting an education, instead of just reading a few basic books and getting entry-level tech certs, is on par with owning a house outright where I live, plus avoiding a decade of unbelievable stress and pressure to succeed in academia.
At every step I basically went more and more practical and further away from theory and science.
I think the problem here is that studying complex stuff is glorified in our society. We naively think that every problem can be solved by studying it hard enough and that the payoff scales at least linearly with study. It is true that we need science, but there is a universe of well-paying existing jobs (and jobs yet to be created) that are way easier and provide a decent payoff for workers and society.
I am convinced only a small fraction of people are suited for the lonely grind of academia. Most others would be better off getting more practical experience.
Smart people end up studying, so university degrees are associated with smart people. But the chance that they would have been successful without a science degree is pretty high for a lot of them.
The signal this sends to me is that science is, and (barring Cold-War-arms-race levels of public funding) always will be, nothing more than two broad groups:
- those without external material demands (no need to pay rent, etc.)
- the rare insane genius (driven by lifelong obsession)
Everyone else leaves for a real job once they realise they have bills to pay imho.
Science is not part of our society; it is a fringe luxury indulged in by an elite social class that fuels itself with pyramid-scheme-esque hordes of naïve young grad students, who are swiftly forced to choose between poverty and abandoning what they have wasted their youth training to become experts in.
Or in other words, higher education is the greatest waste of human capital since the world wars.
Or finance (or “crypto”)
IBM (can click through to see calibration of each processor): https://quantum-computing.ibm.com/services?skip=0&systems=al...
IonQ (overall benchmark): https://www.nature.com/articles/s41467-019-13534-2
A virtual Z is a book-keeping trick that changes the rotation axis of later gates.
When you're writing an algorithm, you tend to want as few operations as possible so you'd prefer a virtual Z. When you're doing a physics experiment, or trying to decouple from slowly-varying noise, you tend to want to actually affect the system and so would prefer a physical Z.
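To make the bookkeeping concrete, here's a minimal sketch of my own (illustrative names, not any real control-stack API): the virtual Z is stored in a per-qubit phase frame and folded into the phase of every later XY-plane pulse, using the identity that an Rz(t) followed by a pulse at phase p equals a pulse at phase p - t followed by the same Rz(t), so the Z can be deferred indefinitely.

```python
from math import pi

class PhaseFrame:
    """Sketch of virtual-Z bookkeeping for one qubit (hypothetical API)."""

    def __init__(self):
        self.offset = 0.0  # accumulated virtual-Z angle, in radians

    def virtual_z(self, angle):
        # "Apply" Rz(angle): zero duration, zero error, pure bookkeeping.
        self.offset += angle

    def xy_pulse(self, amplitude, phase):
        # Parameters actually sent to hardware: the pulse's rotation axis
        # is shifted by the accumulated virtual-Z angle.
        return amplitude, phase - self.offset

frame = PhaseFrame()
frame.virtual_z(pi / 2)          # a free Rz(pi/2)
print(frame.xy_pulse(1.0, 0.0))  # the next "X" pulse is emitted at phase -pi/2
```

This also shows why a virtual Z is instantaneous and error-free, and why it can't decouple you from noise: nothing physical happens to the qubit.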
The hard part of Shor's algorithm is computing `pow(g, e, N)` where g is a uniformly random number, e is a uniformly superposed number, and N is the number you want to factor. But note that 15 is one less than a power of 2, so modular arithmetic mod 15 can use special circuits that are surprisingly efficient. Since it's easy to check whether a number is one less than a power of 2, there's no obstacle to using this optimization on a "real" problem. On the other hand, most products of two primes aren't one less than a power of 2, so this case is absurdly rare. So is using the more optimized circuit allowed or isn't it?
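A small sketch of mine showing why mod-15 arithmetic is special: 15 = 2^4 - 1, so 2^4 ≡ 1 (mod 15), and multiplying by 2 modulo 15 is just a cyclic rotation of the 4-bit register, which a quantum circuit can implement with a few swaps rather than real arithmetic.

```python
def is_one_less_than_power_of_2(n):
    # n = 2**k - 1 exactly when n + 1 has a single set bit, i.e. n & (n+1) == 0.
    return n > 0 and (n & (n + 1)) == 0

def times_two_mod_15(x):
    # Multiplication by 2 mod 15 is a cyclic left rotation of 4 bits --
    # in a quantum circuit, essentially a relabeling or a few swap gates.
    return ((x << 1) | (x >> 3)) & 0b1111

assert is_one_less_than_power_of_2(15)
assert all(times_two_mod_15(x) == (2 * x) % 15 for x in range(15))
```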
It gets so much worse. It's been shown that, if you know the factors behind a factoring problem of any size, you can derive constant-sized quantum circuits that "solve" it. Basically, you can force Shor's algorithm into a trivial corner case and then implement the corner case. For all intents and purposes this makes the quantum computer irrelevant. E.g. I used this result for an April Fools' post where I "factored" the largest number ever "with" a quantum computer.
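The trick is roughly this (my own sketch of the idea, not code from the referenced post): if you already know N = p*q, the Chinese remainder theorem hands you a nontrivial square root of 1 mod N, i.e. a base whose period is exactly 2, so the "period finding" left for the quantum computer is trivial.

```python
from math import gcd

def cheating_base(p, q):
    # CRT: find x with x = 1 (mod p) and x = -1 (mod q). Then x**2 = 1 (mod p*q)
    # but x != +/-1, so the period of x is exactly 2 -- a trivial corner case.
    # (pow(a, -1, m) computes a modular inverse; Python 3.8+.)
    return (q * pow(q, -1, p) - p * pow(p, -1, q)) % (p * q)

p, q = 3, 5
x = cheating_base(p, q)                      # x = 4, and 4*4 = 16 = 1 (mod 15)
assert pow(x, 2, p * q) == 1
print(gcd(x - 1, p * q), gcd(x + 1, p * q))  # recovers 3 and 5
```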
The other problem with Shor's algorithm on small cases is that as soon as you get a "correct" output from the quantum part, the following classical part will succeed. But for small cases there are too few possible outputs. Even if the quantum computer is totally broken and generating random noise, the algorithm will still succeed pretty quickly. So you need to define success as beating some kind of noise metric, instead of as actually solving the problem.
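As a concrete sketch (mine, not from the comment): replace the quantum part with a random-number generator and run Shor's classical post-processing for N = 15; it still "factors" 15 almost immediately, which is why the benchmark has to be beating a noise metric rather than producing factors.

```python
import random
from math import gcd

N = 15
attempts = 0
while True:
    attempts += 1
    g = random.randrange(2, N)
    if gcd(g, N) != 1:
        factor = gcd(g, N)           # lucky guess shares a factor; no quantum step needed
        break
    r = random.randrange(1, 16)      # "broken quantum computer": pure noise as the period
    if r % 2 == 0 and pow(g, r // 2, N) != N - 1:
        f = gcd(pow(g, r // 2, N) - 1, N)
        if 1 < f < N:                # standard Shor post-processing check
            factor = f
            break
print(f"N = {factor} * {N // factor} after {attempts} attempts")
```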
Here is maybe something that will answer the intent of your question. I haven't checked on this particular chip, but from experience running things on various quantum computers and from the numbers in the data sheet, I can confidently say that this chip will struggle to count to 10 before suffering an error. A 4-qubit increment circuit is going to use at least 10 two-qubit gates, you need to increment 10 times, and the listed two-qubit gate error rates are around 1%. So presumably the success rate is going to be on the order of (99%)^(10*10) ≈ 37%.
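For reference, the back-of-envelope calculation with those assumed numbers (ten two-qubit gates per increment, ten increments, ~1% error per gate, as stated above):

```python
p_two_qubit_gate_ok = 0.99   # listed two-qubit gate fidelity (~1% error per gate)
gates_per_increment = 10     # lower bound for a 4-qubit increment circuit
increments = 10              # counting from 0 to 10

print(p_two_qubit_gate_ok ** (gates_per_increment * increments))  # ~0.366
```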
Disclaimer: I work on the Google quantum team.