Figure 6, which breaks down the time spent on different tasks, is very informative -- it suggests:
15% less active coding
5% less testing
8% less research and reading
4% more idle time
20% more AI interaction time
That 28% drop in coding/testing/research is why developers reported 20% less work. You might be spending 20% more time overall "working" while really being idle 5% more of the time, and feel like you've worked less because you were drinking coffee and eating a sandwich between waiting for the AI and reading AI output.
I think the AI skill boost comes from having workflows that let you shave off half that git-ops time, cut an extra 5% off coding, and, instead of sitting idle waiting, prompt parallel agents and do a bit more testing. Then you really are a 2x dev.
> You might be spending 20% more time overall "working" while really being idle 5% more of the time, and feel like you've worked less because you were drinking coffee and eating a sandwich between waiting for the AI and reading AI output.
This is going to be interesting long-term. Realistically, people don't spend anywhere close to 100% of their time working, and they take breaks after intense periods of work. So the real benefit calculation needs to include: the outcome itself, time spent interacting with the app, overlap of tasks while agents are running, work sustained over long periods, any skill degradation, LLM skills, etc. It's going to take a long time before we have real answers to most of those, much less their interactions.
I just realized the figure shows the time breakdown as a percentage of total time. It would be more useful to show absolute time (hours) for those side-by-side comparisons, since the implied hours would boost the AI bars' heights by 18%.
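Rough sketch of that conversion (the 8-hour baseline is a made-up number for illustration; the 18% total-time increase is the figure from the comment above):

    # Convert the per-category percentages into absolute hours.
    # Assumptions for illustration only: an 8-hour non-AI baseline
    # and ~18% more total time with AI.
    baseline_hours = 8.0
    ai_hours = baseline_hours * 1.18

    # Example category: 30% of time without AI vs 25% with AI.
    pct_without, pct_with = 0.30, 0.25
    print(pct_without * baseline_hours)  # 2.40 h without AI
    print(pct_with * ai_hours)           # 2.36 h with AI
    # A 5-point drop in share of an 18%-larger total is a nearly
    # constant absolute cost, which percentage bars hide.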
I built control computers for nuclear reactors. Those machines are not connected to a network and are guarded by multiple layers of men with machine guns. The system was designed to flawlessly run three boards, each with triple-modular-redundant processors in FPGA fabric, all nine processors instruction-synced with ECC down to the registers (including cycling the three areas of programmable fabric on the FPGAs). Each board is cycled and tested every month.
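For readers unfamiliar with triple modular redundancy, here's a minimal sketch of the 2-of-3 voting idea (plain software, not their FPGA design; the function and values are made up):

    # Three independent channels compute the same value; any 2-of-3
    # agreement wins, and total disagreement fails safe.
    def tmr_vote(a, b, c):
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no 2-of-3 majority; fail safe and halt")

    # Channel b has a flipped bit; the voter masks the fault.
    assert tmr_vote(0x2A, 0x3A, 0x2A) == 0x2A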
Well, the news says that DOGE randos are potentially exfiltrating the details of systems like that, as well as the financial details of many Americans, including those who hold the machine guns and probably suffer from substandard pay and bad economic prospects/job security as much as anyone else does.
Perhaps the safest assumption is that system reliability ultimately depends on quite a lot of factors that are not purely about careful engineering.
A bit off topic, but my uncle used to be security at a nuclear plant. Each year the Delta Force (his words) would conduct a surprise pentest. He said that although the guards were always tipped off, they never managed to stop the intruders.
The whole Hacker News support thing gives off good dev vibes, but we had a support ticket for overbilling go unanswered for months (01288863) despite email pleas to a salesperson, and then the ticket was closed without a response. That made it impossible for me to convince my org to keep using Cloudflare.
It’s worse than that. Everything leading up to this, and this reversal right now, is a perfect demonstration that the current US administration cannot be trusted and behaves in irrational ways. You cannot expect consistency or enduring policies. It’s all fickle and capricious. How are you supposed to do any planning with this?
This is all so obviously dumb, and I’m frankly astounded by so many people (especially here on HN) playing devil’s advocate or, I don’t know, honestly believing that this all makes sense.
Even if you agree with the stated goals (which are also somewhat incoherent, by the way), why do you think this implementation can achieve any of them?
I believe the point is power, and through that lens everything makes perfect sense. Trump is exercising available levers of global influence -- for good or for bad -- in a way that hasn't occurred since Hitler initiated World War II.
Tariffs are appealing to him because they are incredibly forceful blunt instruments over which he alone has almost complete control. They give him immense, immediate influence over the entire world. What we're seeing is that the US President today, if the full capacities of that office are pushed as far as possible without violence, is arguably one of (if not the) most powerful human beings ever.
Beyond this, Trump has said that one of his greatest weapons is uncertainty. He wants to be feared. Having people genuinely afraid of you is the next step of power that he is already flirting with by posting videos of people being blown up in warfare on social media.
And what will he do when China and the rest of the world, tired of his untreated narcissistic personality disorder, start selling US debt, say trillions of dollars in bonds in less than a week? I bet someone finally explained that to him today, which is why he reversed the policy.
The White House can pressure red states to offer companies sweetheart deals so the President can take credit. If they refuse, he pulls his support for them come election time.
Yes they will, because when industry can't bet on being tariff-free outside the US, they'll hedge their bets and keep at least some manufacturing here. The damage is already done in that regard. The instability is a warning shot across the bow for companies that import.
Welcome to Hacker News, Gil! I’m a big fan of your work in complexity theory and have thought long and hard about the entropy-influence conjecture, revisiting it again recently after Hao Huang’s marvelous proof of the sensitivity conjecture.
To answer your question about the hyped, fantastic claims: as you must know, the people who provide the funds for quantum computing research almost certainly do not understand the research they are funding, and they need a steady stream of fantastic “breakthroughs” as feedback to justify writing the checks.
This has made QC research ripe for applied physicists skilled in the art of bullshitting about Hilbert spaces. While I don’t doubt the integrity of a plurality of the scientists involved, I can say with certainty that approximately all of the people working on quantum computing research would not take me up on my bet of $2048 that RSA-2048 will not be factored by 2048, yet would happily accept $204,800,000 to make arrays of quantum-related artifacts. Investors require breakthroughs, or the physicists will lose their budget for liquid gases, which certainly exceeds $2048.
While there might be interesting science discovered along the way, I think of QC a little like alchemy: the promise of unlimited gold attracted both bullshitters and serious physicists (Newton included) for centuries, but it eventually emerged from physical law that turning lead into gold doesn't scale. Similarly, it would be useful to determine the scaling laws for quantum computers. How big would an RSA key need to be before even a QC would require more particles than exist in the universe to factor it in reasonable time? Is 2048 good enough that we can shelve all the peripheral number-theory research on post-quantum cryptography? Let’s not forget the mathematicians too!
You should not put any weight on surveys like this.
I'm an ML/AI researcher. I get similar surveys regularly. I don't reply, and neither do my colleagues. The people who do reply are a self-selected group heavily biased toward thinking that AGI will happen soon, or they have a financial interest in creating hype.
Most of the experts in that report have a direct financial benefit from claiming that this will happen really soon now.
I think Shor's algorithm scales linearly, if it's possible to do fine-grained control of the most significant bits. Some people don't think that's a problem, but if it is, then growing key sizes will be an effective defense.
There's a specific view (as I understand it) that QFTs don't scale (https://arxiv.org/abs/2306.10072), but some folks seem to dismiss this for reasons I just don't grok at all.
There are two major issues with the paper you linked.
First, it says if you can't do accurate rotations then you can't factor. But the premise is false. Quantum error correction allows you to do arbitrarily accurate rotations. Specifically, you can achieve arbitrarily accurate 45-degree rotations around the X and Z axes by using state distillation and gate teleportation [1], and all rotations can be approximated by a sequence of these 45-degree rotations with the error decreasing exponentially versus the length of the sequence [2]. The paper's justification for saying you can't do accurate rotations is that they think quantum mechanics will end up being wrong (see section 4).
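As a back-of-the-envelope illustration of that exponential convergence (the ~3*log2(1/eps) T-count is the Ross-Selinger-style synthesis constant, an assumption on my part, not a number from the paper):

    # Length of a Clifford+T sequence approximating one rotation to
    # precision eps, assuming a T-count of ~3*log2(1/eps).
    from math import log2

    for eps in (1e-3, 1e-6, 1e-10, 1e-15):
        print(f"eps={eps:.0e}  T-count ~ {3 * log2(1 / eps):.0f}")
    # eps=1e-03  T-count ~ 30
    # eps=1e-06  T-count ~ 60
    # eps=1e-10  T-count ~ 100
    # eps=1e-15  T-count ~ 149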
Second, even if you assume for the sake of argument that rotations are inherently noisy, the conclusion doesn't actually follow. The mistake the paper makes is to assume the algorithm will use the textbook QFT circuit [3], which uses n^2/2 rotations for an n-qubit QFT, allowing large amounts of error to accumulate. But in practice, because this QFT is at the end of a circuit, you would use the qubit-recycling QFT which only performs n rotations [4]. Alternatively, if rotations were really such a problem, you could perform ~30 rotations to prepare what's known as a phase gradient state. You can then achieve later rotations via the phase kickback of adding a constant into the phase gradient controlled by the qubit you want to rotate [5]. In other words, the paper asserts you need millions of rotations to factor a 2048 bit number when actually you only need dozens. Everything else can be done with Clifford+Toffoli gates.
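To put rough numbers on that (a sketch of the counting only, not of any real circuit; the ~30 figure is the phase-gradient preparation cost from the paragraph above):

    # Rotation counts for the QFT at the end of Shor's algorithm,
    # n = 2048.
    n = 2048
    textbook = n * (n - 1) // 2  # controlled rotations, full QFT
    recycled = n                 # qubit-recycling (semiclassical) QFT
    gradient = 30                # one-time phase-gradient state prep
    print(textbook, recycled, gradient)
    # 2096128 2048 30 -> "millions" vs "dozens"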
So yeah, this paper is not at all a convincing takedown of quantum factoring. The formal part focuses on the wrong cost and the informal justification is that they think quantum mechanics is wrong.
That's a tough argument given that there are already known algorithms to scale it. It's possible QM is just broken, but if it's not, it's hard to see how error correction wouldn't work.