
The final order implementing the judgement isn't out yet so I'm not going to go into too much detail here as yet, but there's additional publicly available information I can share:

The original claim: https://codon.org.uk/~mjg59/case/Claims.pdf

The defence and counterclaim: https://codon.org.uk/~mjg59/case/Defence_Counterclaim.pdf

The associated schedule of harassment: https://codon.org.uk/~mjg59/case/Schedule.pdf

The reply to the defence and counterclaim: https://codon.org.uk/~mjg59/case/Reply.pdf


One thing that didn't end up happening - the claim that he would have multiple people, including Linus Torvalds, Richard Stallman, Bruce Perens, and John Gilmore testify against me.


Do lawsuits ever really end, or do the parties just run out of money? Isn’t the SCO v. Linux thing still shambling on in some court?


Other than reaching agreement over the order to implement the judgement, this is likely over - my understanding is that an appeal could only occur if the judge made an error of law, and they would need to convince another judge of this before being granted permission to appeal.


What about the fact that - incredibly - the page defaming you is still up on their website? Surely the judge won't take kindly to that?


As of yet there isn't an order associated with the judgement - that's expected to be something negotiated between the parties. I'm unqualified to say what impact continuing to publish the material has on that process.


I think Jarndyce v Jarndyce is still going as well.


There's a lot of UEFI in the phone ecosystem - it's not the BIOS layer that's missing, it's the ACPI layer.


I've never seen UEFI in any mainstream Android device.

The problem is... in the x86 world, even the most modern systems around still ship with decades of garbage. INT 10h and VBE - every x86 system still speaks them, either directly in the card or emulated in BIOS/UEFI compatibility layers - so even a basic "hello world" can get video output; INT 09h/16h gives you keyboard input, 13h disk I/O, 14h a serial port.

That means that at least the initial bringup for a second-stage bootloader is incredibly easy, less than 40 lines of assembler code [1]. And when you write a tiny operating system, as long as you're content with that basic foundation of 640x480 and text output, it will run on any x86 compatible PC or server.

On anything ARM, that's outright impossible, and that's a large part of the problem. ARM is power efficient, but it comes at a serious cost. The low-level bringup will be handled by the board's ROM, similar to PC BIOS/EFI, but after control is passed to the OS it gets different - all the OS gets is the devicetree stating "here are the memory addresses and interfaces of the hardware in the system", but you still need to write drivers for each and every individual thing from the bottom up; there is no framework for even the most basic hardware interactions.

[1] https://gist.github.com/MyCatShoegazer/38dc3ee7db9627ff3a20e...


I have many x86 devices that don't provide a CSM, so no, it's untrue that it will run on any arbitrary x86 device. You can do something similar running entirely inside the UEFI boot services environment - and that'll work just as well on any of the large number of UEFI-based ARM phones.


isn't uefi used for all the modern qcom devices..?


Yes it is. It can be hard to spot if you don’t know it’s there though.


What's 09h/16h?


09h is the keyboard interrupt, the utterly basic interface [1] that only gives you scancodes and nothing else; 16h is the extended interface [2] that you need to deal with if you want to read/set shift and other special keys [3].

[1] http://www.techhelpmanual.com/106-int_09h__keyboard_interrup...

[2] http://www.techhelpmanual.com/228-int_16h__keyboard_services...

[3] http://www.techhelpmanual.com/58-keyboard_shift_status_flags...


Yeah, the requirement to build and provide device trees for most mobile devices is the huge issue. For all of the garbage we have gotten from buggy-ass ACPI tables on assorted PCs, it's absolutely true that it solved a lot of headaches with hardware discovery/enumeration.

It’s really too bad that ARM has adopted ACPI as part of their SystemReady certification. It does work, and not reinventing the wheel is always wise where feasible, but I think we could absolutely push something better.


The article says it supports enough 486 and Pentium instructions to boot modern Linux


This is beautiful, but the real takeaway should be that even proprietary software you only have binaries for is still mutable. The computer runs the code you want it to run. We always need to maintain that and prevent scenarios where general purpose computers stop being the default.


Cat's out of the bag there already. We all have general purpose computing devices in our pockets, locked down on purpose. Android used to allow you to gain admin rights but it's been getting more and more impossible to do so while still keeping most of your programs working. It's not only a cat-and-mouse game against "rooting detection" SDKs companies licence and plug into their apps out of a misguided duty of care, but it's especially bad with anything that uses Google's remote attestation lately.

Android is also about to lock down "sideloading", another "great" dysphemism for "installing software".

Moving the Overton window on this has been so successful that even people in our industry happily accepted the much-maligned dysphemisms of "jailbreaking" and "rooting" for what used to be called "local admin rights", and look upon such access as if it's only something pirates, criminals or malware spreaders would want to do.

I say this as someone who is running an Android phone with a kernel with some backported patches applied and compiled by myself. The fact that I can do it is great. The fact that the entire industry is trying to make it as frustrating as possible for me to do this under the guise of false premises such as "security" is disheartening.


We were always doing this kind of thing on these platforms. This is how we used to hack copy protection out of games.

Stepping through, line by line, editing the code and adding JMPs to get around the copy protection code after loading the magic numbers into the register...

Happy, happy times.


Then they started loading the protection code from disk doing tricky things. One I cracked recently was a pair of Commodore 1541 sectors that appeared to be the same logical sector (because the drive head is blind). It needed to hit both of them to compile the next portion of the loader. Naturally the segment up to that point was encrypted as well, but nothing survives a VICE breakpoint. https://oldvcr.blogspot.com/2023/08/cracking-designwares-gra...

Obviously this is nothing on things like V-MAX! and Rapidlok which even nowadays have variations that are tough to remaster.


That's beautiful.


That's how I first learned assembly. Armed with a monitor program that can disassemble and modify memory, I read and modified programs stepping through them. Mostly games, naturally. I never got an actual assembler/linker chain that would work and useful software was hard to come by.


> even proprietary software you only have binaries for is still mutable

POKE 35136, 0

thus it ever was.


Unfortunately the whole "open source" movement has diverted attention away from that and brainwashed countless would-be power-users and even developers into believing that they are powerless to do anything without the source code. It's convenient to have the source, but not necessary for freedom.


Meanwhile, roughly contemporaneously, the Motorola 68000 was a CPU with 32-bit registers and a 16-bit bus (there was also the 68008 which had an 8-bit bus) - but in the 80s both the PC and 68000 devices were generally referred to as 16-bit. I guess the argument is that the 8088 was a cut down 8086 (an unambiguously 16-bit CPU) while the 68k family didn't ship a 32-bit bus until the 68020, but it's interesting how handwavy terminology is here (see also the "64 bit" Atari Jaguar)


In the Atari ST article from yesterday I read that the 68000 was very pleasant to program for. I wonder how it compares to the 808x?


808x registers are all 16 bit, despite it having a 20 bit address space. That means you can't fit a full memory address in a single register, which means memory is split into 64K "segments" and you have a separate segment register that tells the CPU which segment you're referring to (segments can overlap, so this is distinct from banked memory). On its own that makes writing 808x code fucking miserable.
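The segment arithmetic above can be sketched numerically. A toy illustration (the example addresses are illustrative, not tied to any particular program):

```python
# Real-mode x86 forms a 20-bit physical address as segment * 16 + offset,
# wrapping at 1 MiB (the infamous A20 behaviour).
def physical(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

# Segments overlap every 16 bytes, so many segment:offset pairs name the
# same physical byte - this is why it's distinct from banked memory:
print(hex(physical(0xB800, 0x0000)))  # 0xb8000 - CGA text framebuffer
print(hex(physical(0xB000, 0x8000)))  # 0xb8000 again, different pair
print(hex(physical(0xFFFF, 0x0010)))  # 0x0 - wraps around past 1 MiB
```

The overlap is also why a "far" pointer (segment plus offset) has no unique canonical form, which is part of what made real-mode pointer comparisons so painful.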


Any programming in real mode with compilers meant that the programmer had to annotate pointers with near, far and huge. Plus you had to know if the stack segment was the same as the data segment...... ARRRRGGGGHHH.

I am glad that time is in the distant past now, although apparently plenty of brain cells remain dedicated to the 8088.


Indeed, this aspect of early x86 is something more people need to know about... It still has its remnants in low-level code. Of course, the 6502 and other 8-bit processors had similar issues trying to go beyond the 64K addressing limit, but only the x86 line both baked this into the architecture and continued to carry it forward for compatibility for many years.


If you want to implement UEFI secure boot and verify existing signed objects then you need to incorporate Microsoft-issued certificates into your firmware, but that's very different from needing Microsoft to be in the loop - the certificates are public, you can download them and stick them in anything.


In 2000 my neighbour built a small network using two Orinoco Gold cards - an ad-hoc[1] network between his laptop (a Sony with a Neomagic chipset, I don't remember the precise model but it was beautiful) and the desktop in his room, and this was

(a) utterly magical (b) his father was the son of someone very high up in one of the Scottish banks and so this was affordable for him and clearly outside the range of normal people

In 2001 I bought a set of Prism 2 based cards that let me run HostAP (https://hostap.epitest.fi/) and was able to build my own network that didn't rely on ad-hoc mode and so everything was better but the speed at which all of this changed was incredible - we went from infrastructure being out of the reach of normal humans to it being a small reach, and by 2005 we were in the territory of all laptops having it by default. It was an incredible phase shift.

[1] ad hoc was a way for wifi cards to talk to each other without there being an access point, and there was a period where operating systems would show ad-hoc devices as if they were access points, and Windows would remember the last ad-hoc network you'd joined and would advertise that if nothing else was available, and this led to "Free Internet Access" being something that would show up because it was an ad-hoc network someone else advertised and obviously you'd join that and then if you had no internet your laptop would broadcast it and someone else would join it and look the internet was actually genuinely worse in the past please stop assuming everything was better


Poor quality analogy: should ed25519 only have been incorporated into protocols in conjunction with another cryptographic primitive? Surely requiring a hybrid with ecdsa would be more secure? Why did djb not argue for everyone using ed25519 to use a hybrid? Was he trying to reduce security?

The reason this is a poor quality analogy is that fundamentally ecdsa and ed25519 are sufficiently similar that people had a high degree of confidence that there was no fundamental weakness in ed25519, and so it's fine - whereas for PQC the newer algorithms are meaningfully mathematically distinct, and the fact that SIKE turned out to be broken is evidence that we may not have enough experience and tooling to be confident that any of them are sufficiently secure in themselves and so a protocol using PQC should use a hybrid algorithm with something we have more confidence in. And the counter to that is that SIKE was meaningfully different in terms of what it is and does and cryptographers apparently have much more confidence in the security of Kyber, and hybrid algorithms are going to be more complicated to implement correctly, have worse performance, and so on.

And the short answer seems to be that a lot of experts, including several I know well and would absolutely attest are not under the control of the NSA, seem to feel that the security benefits of a hybrid approach don't justify the drawbacks. This is a decision where entirely reasonable people could disagree, and there are people other than djb who do disagree with it. But only djb has engaged in a campaign of insinuating that the NSA has been controlling the process with the goal of undermining security.
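For what it's worth, the hybrid construction under discussion is conceptually simple: derive the session secret from both shared secrets, so an attacker has to break both algorithms. A toy sketch - the function names, the placeholder byte strings, and the bare SHA-256 combiner are all illustrative assumptions here, not the actual combiner any TLS draft specifies:

```python
import hashlib

def hybrid_shared_secret(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # Hash the concatenation of both KEM outputs. As long as either
    # input stays secret, the digest is unpredictable, so breaking
    # only one of the two algorithms gains the attacker nothing.
    return hashlib.sha256(ss_classical + ss_pq).digest()

# Placeholder values standing in for e.g. X25519 and ML-KEM outputs:
key = hybrid_shared_secret(b"\x01" * 32, b"\x02" * 32)
```

Real combiners are more careful than this (fixed-length inputs, domain separation, binding in the ciphertexts), which is part of the complexity cost the anti-hybrid side points to.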


> seem to feel that the security benefits of a hybrid approach don't justify the drawbacks.

The problem with this statement to me is that we know at least one of the four finalists in the post-quantum cryptography competition is broken, so it's very hard to assign a high probability that the rest of the algorithms will be secure from another decade of advancement (this is not helped by the fact that, since the beginning of the contest, the lattice-based methods have lost a significant number of bits as better attacks have been discovered).


It's a touch odd to make a big deal of the fact that you've filed a complaint and fail to mention that it was formally rejected three days before you published the post: https://datatracker.ietf.org/group/iesg/appeals/artifact/146


Lots of respect to both you and the author, but the rejection gives no real response to any of the issues I see raised in the document.

It failed to raise my confidence at all.

> The IESG has concluded that there were no process failures by the SEC ADs. The IESG declines to directly address the complaint on the TLS WG document adoption matter. Instead, the appellant should refile their complaint with the SEC ADs in a manner which conforms to specified process.


I feel like if your argument is that the rules weren't followed, you have a pretty strong obligation to follow the rules in submitting your complaint.


Having served on boards, rejections on procedural grounds which fail to address engineering concerns which have been raised stink of a cop-out.


Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.


> Have you read the complaint? It's not about engineering concerns, it's about whether procedures were followed correctly.

This complaint? https://cr.yp.to/2025/20250812-non-hybrid.pdf

Engineering concerns start in section 2 and continue through section 4.

It seems you haven't read it.


There's a bunch of content that's not actually the complaint, and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.


> and then there's section 4 which is the actual complaint and is overwhelmingly about procedure.

Ah, yes, procedural complaints such as "The draft creates security risks." and "There are no principles supporting the adoption decision.", and "The draft increases software complexity."

I don't know what complaint you're reading, but you're working awful hard to ignore the engineering concerns presented in the one I've read and linked to.


As is made clear from the fact that those issues all link to the mailing list, these are not novel issues. They were raised during discussion, taken into account, and the draft authors concluded they were answered adequately. Complaining about them at this point is fundamentally a complaint that the process failed to take these issues into account appropriately, and the issues should be revisited. Given that this was raised to the IESG, who are not the point of contact for engineering issues, the response is focused on that. There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.


> There's a mechanism Dan can use to push for engineering decisions to be reviewed - he didn't do that.

This is the retort of every bureaucracy which fails to do the right thing, and signals to observers that procedure is being used to overrule engineering best practices. FYI.

I'm thankful for the work djb has put in to these complaints, as well as his attempts to work through process, successful or not, as otherwise I wouldn't be aware of these dangerous developments.

Excuses of any kind ring hollow in the presence of historical context around NSA and encryption standardization, and the engineering realities.


Hey, look, you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time, he just disagreed with the conclusions reached. He made a complaint to the ADs, who observed that he was using an email address with an autoresponder that asserted people may have to pay him $250 for him to read their email, and they (entirely justifiably) decided not to do that. Dan raised the issue to the next level up, who concluded that the ADs had behaved entirely reasonably in this respect and didn't comment on the engineering issues because it's not their job to in this context.

It's not a board's job to handle every engineering complaint themselves, simply because they are rarely the best suited people to handle engineering complaints. When something is raised to them it's a matter of determining whether the people whose job it is to make those decisions did so appropriately, and to facilitate review if necessary. In this case the entire procedural issue is clear - Dan didn't raise a complaint in the appropriate manner, there's still time for him to do so, there's no problem, and all the other complaints he made about the behaviour of the ADs were invalid.


> you're free to read the mailing list archives and observe that every issue Dan raised was discussed at the time

As was https://en.wikipedia.org/wiki/Dual_EC_DRBG which was ratified over similar objections.

That made it no less of a backdoor.

> it's not their job

As I said about excuses.


They're adhering to their charter. If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you - that's not the process, that's not his job, he's going to trust me to make good decisions unless presented with evidence I've misbehaved.

But as has been pointed out elsewhere, the distinction between the Dual EC DRBG objections and here is massive. The former had an obvious technical weakness that provided a clear mechanism for a back door, no technical justification for it was ever meaningfully presented, and also it wasn't an IETF discussion. The counterpoints to Dan's engineering complaints (such as they are) are easily accessible to everyone; Dan just chose not to mention them.


> unless presented with evidence

The complaint seems well referenced with evidence of poor engineering decisions to me.

> Dual EC DRBG ... had an obvious technical weakness that provided a clear mechanism for a back door

Removing an entire layer of well tested encryption qualifies as an obvious technical weakness to me. And as I've mentioned elsewhere in these comments, opens users up to a https://en.wikipedia.org/wiki/Downgrade_attack should flaws in the new cipher be found. There is a long history of such flaws being discovered, even after deployment. Several examples of which DJB references.

I see no cogent reason for such recklessness, and many reasons to avoid it.

Continued pointing toward "procedure" seems to cede the case.


Why don't we hybridise all crypto? We'd get more security if we required RSA+ECDSA+ED25519 at all times, right? Or is the answer that the benefits are small compared to the drawbacks? I am unqualified to provide an answer, but I suspect you are also, and the answer we have from a whole bunch of people who are qualified is that they think the benefits aren't worth it. So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.


FWIW, https://blog.cr.yp.to/20240102-hybrid.html reads to me like a more direct attempt to engage with the counterarguments.

I am curious what the costs are seen to be here. djb seems to make a decent argument that the code complexity and resource usage costs are less of an issue here, because PQ algorithms are already much more expensive and harder to implement than elliptic curve crypto. (So instead of the question being "why don't we triple our costs to implement three algorithms based on pretty much the same ideas", it's "why don't we take a 10% efficiency hit to supplement the new shiny algorithm with an established well-understood one".)

On the other hand, it seems pretty bad if personal or career cost was a factor here. The US government is, for better or worse, a pretty major stakeholder in a lot of companies. Like realistically most of the people qualified to opine on this have a fed in their reporting chain and/or are working at a company that cares about getting federal contracts. For whatever reason the US government is strongly anti-hybrid, so the cost of going against the grain on this might not feel worth it to them.


Which insinuations do you think are ludicrous? Is it not a matter of public record at this point that the NSA and NIST have lied to weaken cryptography standards?


The entirely unsupported insinuation that the customer Cisco is describing is the NSA. What's even supposed to be the motivation there? The NSA want weak crypto so they're going to buy a big pile of Ciscos that they'll never use but which will make people think it's secure? There are others, but on its own that should already be a red flag.


The article links a statement from an NSA official that explicitly says the NSA has been asking vendors for this, which seems like fairly strong support to me.


>So why is it fundamentally and obviously true for PQC? This isn't actually an engineering hill I'd die on, if more people I trust made clear arguments for why this is dangerous I'd take it very seriously, but right now we basically have djb against the entire world writing a blogpost that makes ludicrous insinuations and fails to actually engage with any of the counterarguments, and look just no.

As a response to this only: while djb's recent blog posts have adopted a slightly crackpotish writing style, PQC hybridization is not a fringe idea, and it isn't being deployed because of djb's rants.

Over in Europe, German BSI and French ANSSI both strongly recommend hybrid schemes. As noted in the blog, previous Google and Cloudflare experiments have deployed hybrids. This was at an earlier stage in the process, but the long history of lattices that is sometimes used as a (reasonable) argument against hybrids applied equally when those experiments were deployed, so here I'm arguing that the choice made at the time is still reasonable today, since the history hasn't changed.

Yes, there is also a more general "lots of PQC fell quite dramatically" sentiment at play that doesn't attempt to separate SIKE and MLKEM. That part I'm happy to see criticized, but I think the broader point stands. Hybrids are a reasonable position, actually. It's fine.


If anyone is curious about BSI's and ANSSI's positions referred to in this comment:

The german position:

https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...

"The quantum-safe mechanisms recommended in this Technical Guideline are generally not yet trusted to the same extent as the established classical mechanisms, since they have not been as well studied with regard to side-channel resistance and implementation security. To ensure the long-term security of a key agreement, this Technical Guideline therefore recommends the use of a hybrid key agreement mechanism that combines a quantum-safe and a classical mechanism."

The french position, also quoting the German position:

https://cyber.gouv.fr/sites/default/files/document/follow_up...

"As outlined in the previous position paper [1], ANSSI still strongly emphasizes the necessity of hybridation wherever post-quantum mitigation is needed both in the short and medium term. Indeed, even if the post-quantum algorithms have gained a lot of attention, they are still not mature enough to solely ensure the security"


> Why don't we hybridise all crypto?

So you've constructed a strawman. Another indication of ceding the argument.

> and the answer we have from a whole bunch of people who are qualified

The ultimate job of a manager or a board is to take responsibility for the decisions of the organization. All of your comments in this thread center around abdicating that responsibility to others.

> This isn't actually an engineering hill I'd die on

Could have fooled me.

> we basically have djb against the entire world

Many of your comments indicate to me that clashing personalities may be interfering with making the right engineering decision.


If the argument is "Why adopt a protocol that may rely on a weak algorithm without any additional protection" then I think it's up to you to demonstrate why that argument doesn't apply to any other scenario as well.


Again with the strawmen.

"Why adopt a protocol that may rely on a weak algorithm without any additional protection"

Does not accurately represent the situation at hand. And that seems intentional.

"Why weaken an existing protocol in ways we know may be exploitable?" is a more accurate representation. And I believe the burden of evidence lies on those arguing to do so.


Kyber is not known to be weaker than any other well used algorithm.


Another strawman. No one in this thread said Kyber was known to be weaker. Just that elliptic curve cryptography is well tested, better understood as a consequence of being used in production longer, and that removing it opens up transmissions made without both to attacks on the less widely used algorithm which would not otherwise be successful.

It really seems like you're trying not to hear what's been said.


As a friendly reminder, you're arguing with an apologist for the security-flawed approach that the NSA advocates for and wants.

There are absolutely NSA technical and psychological operations personnel who are on HN not just while at work, but for work, and this site is entirely in-scope for them to use rhetoric to try to advance their agenda, even in bad faith.

I'm not saying mjg59 is an NSA propagandist / covert influencer / astroturf / sockpuppet account, but they sure fail the duck test for sounding and acting like one.


ML-KEM as standardized by NIST is weaker than Kyber.


> If you show up to my manager demanding to know why I made a specific engineering decision, he's not going to tell you

Well, if you're working in a standards development organisation then your manager probably should.

It looks like (in the US at least) standards development organisations have to have (and follow) very robust transparency processes to not be default-liable for individual decisions.

(Unlike most organisations, such as the one you and your manager from your scenario come from)


This is just a bureaucracy making up fake excuses. qsecretary, the autoresponder, is way less annoying than having to create a new account everywhere on each SaaS platform. At least you know your mail arrived.

Everyone has no issues forcing other people to use 2FA, which preferably requires a smartphone, but a simple reply to qsecretary is something heinous.

The $250 are for spam and everyone apart from bureaucrats who want to smear someone as a group knows that this is 1990s bravado and hyperbole.


If you have nothing to hide, feel free to mail me your unlocked phone.


It's still nice that it was put up for completeness. And as we know this stuff has a long sordid history of people who are proponents of weakening encryption not giving up easily.


Boeing currently has an awkward gap between the 737 and the widebodies that was previously filled by the 757 - the 737 Max 10 (which still isn't certified!) only has about two thirds of the range of the A321XLR, and a slightly lower passenger capacity. Airlines that currently have 757 fleets and who need that range are going for Airbus instead, and Boeing just doesn't have an answer for it. So while, yes, any new Boeing design is likely to be fly by wire and composite and everything, it also seems likely that it's going to try to fit that market.

The 737 Max 7, the smallest of the Max series, is longer than the 737-200, the stretched version of the original design. A brand new design is going to be able to ignore that market (which basically doesn't exist any more, the Max 7 only has a handful of orders) and scale upwards to also be a 757 replacement. But it's also going to have basically no commonality with the 737, so it's going to have to genuinely be better than the Airbus product, because existing Boeing customers aren't going to benefit from being able to move existing pilots to it without retraining, or benefit from common maintenance plans and so on. It obviously should be better - the A320 program started over 40 years ago, it's not that much newer than the 737 - but given Boeing's series of failures in recent years and how painful the 787 program was, it's not impossible that they'll fuck this up entirely.


So given that the basic recipes for both the 737 and the A320 are pretty old by now, how much "better" could a new clean-sheet narrowbody realistically be, given recent advances in aircraft design?

And how much better would it _need_ to be, in order for large 737 operators to be convinced to place their next order for the new 7007? (yes, like nVidia, I trust they'll just add a number and start over when they run out of numbers).


This comment talks about next-gen engines as the driver for new airframe designs: https://news.ycombinator.com/item?id=45431849

Those engines will be a major driver of whether “better” is achieved, in particular maintenance costs and fuel economy will need to move the needle.

