Another bonus of wearing ear protection at concerts: the music will actually sound way better when it's not saturating your eardrums. I got a decent pair of silicone plugs and they make a huge difference: everything sounds clearer, and there's none of the sibilance and distortion I hear without them, which I had previously just attributed to the speakers or bad venue acoustics.

Which part of that architecture is impossible in Rust? That's an honest question; I'm wondering if I'm missing something.

From what I remember from my Unity days (which, granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and destroyed through the Unity API (Instantiate and Destroy). Similarly, components and references to them had to be retrieved using the GetComponent calls, which internally used handles rather than raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.

I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
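
To make that concrete, here's a minimal sketch of the generational-index handle pattern in Rust (all names here are illustrative, not any engine's actual API):

    // A handle is an index plus a generation; the generation is bumped when
    // a slot is vacated, so stale handles can never alias a new object.
    #[derive(Clone, Copy, PartialEq)]
    struct Handle {
        index: u32,
        generation: u32,
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    // Pre-allocated pool; objects are created/destroyed through it, much like
    // Unity's Instantiate/Destroy lifecycle outside the GC.
    struct Pool<T> {
        slots: Vec<Slot<T>>,
        free: Vec<u32>, // vacated slots, reused on create
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new(), free: Vec::new() }
        }

        fn create(&mut self, value: T) -> Handle {
            if let Some(index) = self.free.pop() {
                let slot = &mut self.slots[index as usize];
                slot.value = Some(value);
                Handle { index, generation: slot.generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
            }
        }

        fn destroy(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index as usize) {
                if slot.generation == h.generation && slot.value.is_some() {
                    slot.value = None;
                    slot.generation += 1; // invalidates all outstanding handles
                    self.free.push(h.index);
                }
            }
        }

        // Like GetComponent: a stale handle yields None instead of dangling.
        fn get(&self, h: Handle) -> Option<&T> {
            self.slots
                .get(h.index as usize)
                .filter(|s| s.generation == h.generation)
                .and_then(|s| s.value.as_ref())
        }
    }

A lookup through a destroyed handle just returns `None`, which is roughly the behavior Unity emulates in C# with its "fake null" destroyed objects, and nothing here fights the borrow checker.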

In my view, as someone who hasn't really interacted with or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overly ambitious API, as opposed to focusing on a simpler, less idealistic one, and since it is the poster child for Rust game engines, people keep tripping over those problems.


The repo has had 3 commits in the last 4 years or so. I don't think it's going to be developed out of alpha unless something suddenly changes.


A post trending well on HN is good publicity, and might be just that kind of change.


The intro document mentions:

> Here's the thing - the big vendors encrypt and sign their updates so that you cannot run your own microcode. A big discovery recently means that the authentication scheme is a lot weaker than intended, and you can now effectively "jailbreak" your CPU!

But there are no further details. I'd love to know the specifics too!


They accidentally used the example key from the AES-CMAC RFC (RFC 4493); the full details are in the accompanying blog post: https://bughunters.google.com/blog/5424842357473280/zen-and-...


Yikes! One would have expected a little more code review, or a design review, from a hardware manufacturer, especially for a security system, and one that people have been worried about since the Pentium FDIV bug.

I guess this one just slipped through the cracks?


Taking "never roll your own" too far.


I feel like using the example key isn’t really the big failure here.

They didn’t need a keyed hash at all; they needed a collision-resistant hash.

SHA-256 would have eliminated this vuln, and it already has a hardcoded “key” (its fixed initialization constants) built into it.

Using a secret key for CMAC would not have been more secure; it would have just meant that sophisticated hardware extraction of the key was required before this attack could be mounted.
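
To illustrate the point, a sketch using the RustCrypto crates (`aes`, `cmac`, `sha2`); the key below is the RFC 4493 example key and the payload is made up. Once the MAC key is public, anyone can mint a valid tag for arbitrary data, while a collision-resistant digest pinned by the vendor has no secret to leak in the first place:

    // Sketch only: why a publicly known CMAC key provides no authenticity,
    // while a pinned SHA-256 digest has no key that can leak.
    use aes::Aes128;
    use cmac::{Cmac, Mac};
    use sha2::{Digest, Sha256};

    fn main() {
        // The example key straight out of RFC 4493.
        let key: [u8; 16] = [
            0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6,
            0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c,
        ];
        let malicious_update = b"attacker-chosen microcode blob"; // hypothetical

        // With the key known, an attacker computes a tag the verifier accepts:
        let mut mac = Cmac::<Aes128>::new_from_slice(&key).unwrap();
        mac.update(malicious_update);
        let forged_tag = mac.finalize().into_bytes();
        println!("forged CMAC tag: {:02x?}", forged_tag.as_slice());

        // A plain hash needs no key at all; to sneak a payload past a digest
        // pinned in hardware, an attacker would need a second preimage.
        let digest = Sha256::digest(malicious_update);
        println!("SHA-256 digest:  {:02x?}", digest.as_slice());
    }

(The real Zen signing scheme wraps this in more machinery, per the blog post; this is just the keyed-vs-unkeyed point.)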


I suppose the reuse wasn't accidental; rather, they mistakenly thought the key doesn't matter for CMAC.


I believe lobste.rs is one site that's going to geoblock the UK, as a precautionary measure at least.


I thought that was a tech site; are they hosting porn now? I'd have thought they'd already police hate crimes, encouraging suicide, self-harm, and such?? Perhaps they have a special section where they encourage kids to huff glue?


You’re missing the point. The law is so vague and broad that it could be interpreted as covering content far more innocuous than the few extreme examples you listed here.


The 'if they have nothing to hide' argument? Really?

I look forward to reading your fully compliant risk assessment before interacting with this comment, lest it be judged to contain offensive, inappropriate, or pornographic content.


And again, the article explicitly mentions that, with a picture of a similar book:

> I don’t know how this first proto-Gorton was designed – unfortunately, Taylor, Taylor & Hobson’s history seems sparse and despite personal travels to U.K. archives, I haven’t found anything interesting – but I know simple technical writing standards existed already, and likely influenced the appearance of the newfangled routing font.


The issue is that the author presents the entire set of typefaces that are similar to Gorton as derived from Gorton, without presenting evidence to rule out the obvious alternative lineage: that, just like genuine Gorton, they too were derived from the various regional single-stroke letterforms that draftsmen everywhere were taught and used. Excellent examples of the draftsman's art abounded, and would have been so much more common than genuine Gorton and its genuine ancestors that it's hard to believe regional companies marketing their own type-engraving machines would have had to copy Gorton rather than local examples that were considered superior.


The article has a link to the licensing agreement between Taylor Hobson and Gorton, links to other posts explaining how Leroy bought Gorton machines, an interactive comparison where you can see the similarity of the letters, and dozens of photos and scans of docs where you can compare them yourself, too. I would say that is a lot of evidence presented.


But that evidence is not persuasive that the set of fonts the author calls Gorton is actually derived from the lineage he presents (see the diagram captioned “The Gorton quasisuperfamily”), rather than from freehand lettering that would have been much more widely used at the time. Remember that the letterforms themselves could not qualify for legal protection in the United States, so none of the licensing agreements offered as evidence were needed to acquire the fonts. We can conclude, then, that they were executed to acquire the machines and the patents behind them, for the purpose of introducing similar machines in a new market. The machine designs and methods of production were the hard part. The fonts were comparatively trivial to create and incorporate into a machine, whatever its design.

The author's comparison of the fonts actually argues against them being of the claimed lineage. Consider the many differences between the Taylor, Taylor & Hobson machine’s fonts and the Gorton machine’s fonts. If Gorton had a license to use the Hobson machine designs, which they did, they could have simply copied the TT&H fonts verbatim. But they clearly did not. Why not? I think it's likely that they simply preferred a different design, one closer to the letterforms that were more commonly used by draftsmen in the American market. In other words, the Gorton reference design was not the TT&H font design.

At least, that's my best guess based on the evidence presented.


This quote is where the author seems to reach the conclusion they want, that is, Gorton being the Proto-Indo-European of these drafting letterforms:

> Each of these reappearances made small changes to the shapes of some letters. Leroy’s ampersand was a departure from Gorton’s. Others softened the middle of the digit 3, and Wrico got rid of its distinctive shape altogether. Sometimes the tail of the Q got straightened, the other times K cleaned up. Punctuation – commas, quotes, question marks – was almost always redone. But even without hunting down the proof confirming the purchase of a Gorton’s pantograph or a Leroy template set as a starting point, the lineage of its lines was obvious. (The remixes riffed off of Gorton Condensed or the normal, squareish edition… and at times both. The extended version – not that popular to begin with – was often skipped.)


The concern isn't that Fedora packages and distributes an RPM, but that they also package their own Flatpak that overrides the official OBS one.


Ah! The conflict makes more sense then.


They do darken that fast (not so fast that you can't catch it with a high-speed camera, but much faster than a frame). Most of the apparent persistence of a CRT comes from the retina/camera exposure, not the phosphor. A CRT has a sharp peak of light that quickly falls off, but the peak is bright enough that, even though it is brief, when averaged out over a camera exposure it still appears bright enough to form an image.
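
A back-of-the-envelope sketch in Rust of that averaging effect (the numbers are made up, just to show the shape of the argument):

    // Toy model: a phosphor pulse decaying exponentially, integrated over one
    // 60 Hz frame, the way a camera exposure (or the retina) integrates it.
    fn main() {
        let peak = 1000.0;      // arbitrary peak luminance units
        let tau = 50e-6;        // hypothetical decay constant: 50 microseconds
        let frame = 1.0 / 60.0; // one frame's exposure, ~16.7 ms

        // Integral of peak * exp(-t/tau) from 0 to frame is ~ peak * tau.
        let integrated = peak * tau * (1.0 - (-frame / tau).exp());
        let frame_average = integrated / frame;

        // Prints ~3: the frame-averaged brightness is a tiny fraction of the
        // peak, which is why the phosphor has to flash so bright for the
        // exposure-averaged image to look normally lit.
        println!("frame-averaged luminance: {frame_average:.2}");
    }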


> With a free price tag, it could easily do that. But, it first needs to provide the required functionality, and today, it stubbornly refused to that, for ideological reasons.

Very confused by the article, even after re-reading it. The author keeps bringing up ideology throughout, but are any arguments or evidence given that this is a factor? The simplest explanation to me is that OOXML is a de facto proprietary format, and implementing full compatibility with it is simply a large technical undertaking that LibreOffice doesn't have the resources to achieve right now. They even hint at that themselves: "From what I've been able to decipher, no non-Microsoft Office program implements the full specification and follows it to the letter."


I agree with you. I disagree with the assertion from the author that LibreOffice refuses to implement the OOXML standard for ideological reasons; nothing was cited in the article supporting this assertion. If it were true that LibreOffice's developers only want to support open standards, then there wouldn't have been support for the pre-Office 2007 binary formats.

Yes, I believe that imperfect compatibility with Microsoft Office is holding LibreOffice back, but perfect compatibility isn't going to happen without massive resources. It's immense work being 100% compatible with a proprietary, under-documented standard. Without an army of developers, it's going to take a very long time. Think of how long it took Wine and ReactOS to get to today's usability, and those projects still have much work to do. Think of how long it took the HaikuOS people to get their project to release candidates.


It’s absolutely not true that LO devs only want to support open standards. They are still supporting .doc compatibility!

It’s difficult to be compatible with a software product that claims to follow an open standard but does not.


The article's reasoning is ridiculous. I'm quite sure if it were easy to implement OOXML, LibreOffice would have done it.

The first part of OOXML alone has more than 5,000 pages. Ideology or not, I highly doubt anyone would try to implement OOXML to the letter.


> I'm quite sure if it were easy to implement [feature A], LibreOffice would have done it

I lost this optimism a long time ago, when I realized how much denialism and how little actual factual, detailed knowledge there is among vocal proponents of Linux and free software. For example, the entire font-handling stack on Linux is a hot seething mess of horrible dysfunctionality, wrongly implemented and starting out from the wrong principles (IMHO), yet nobody is working to improve it because everyone keeps telling you it's a solved problem.

More specifically, years ago I hacked together solutions to script LibreOffice from the 'outside', as it were, because its macro system is such an utter, painful failure in so many ways. Not only was a single website written in Japanese the only source for a reasonable rundown of LibreOffice's API, it also introduced me, indirectly, to the fact that the people who initially wrote OpenOffice were apparently not comfortable with Java being a strictly OOP language, so they did their best to write code that to a degree circumvents that, leading to a terrifyingly bad user experience and a hint of the convoluted horrors that might lurk in the source code.

The glacial pace of LibreOffice's development, where the appearance of the tiniest of innovations leads, every few years, to great public announcements and a new version number, has convinced me that the community has painted itself into a corner with an unmanageable code base, and that the devs are likely in complete denial that the only way out would be a complete rewrite of the software, something that is, understandably, too hard to stomach.


I challenge you on each of these points:

You make a big claim that font handling is a “hot seething mess”, but you don't explain why. HarfBuzz and ICU are hardly what you describe.

LibreOffice’s API is extensively documented:

* https://api.libreoffice.org/

* the Developers Guide has been around for literally over a decade and is now located at https://wiki.documentfoundation.org/Documentation/DevGuide

LibreOffice is not based on Java. Java was, IMO, bolted on when Sun took over. As for it circumventing the “strict OO” nature of Java, that makes no sense and I challenge you to back this up with a technical explanation.

LibreOffice does incremental development. It’s not in any way glacial. Every release has hundreds of feature updates and bug fixes. The quality and compatibility with Office are getting better over time, and though there is still a ways to go, it’s hardly going slowly.

I’ve extensively looked at the LibreOffice source code, and I can tell you that a complete rewrite would be a major waste of time that would kill the project. The software has been around since the mid-1980s. It has accreted layers, but even so, those layers are pretty well defined. It’s why it can be ported to different architectures and desktop environments. It’s why it has so many language bindings.

I don’t think you have a very informed view of how LibreOffice is developed, or even how it works.


Sorry it took some time before I could come back to this.

When I said font handling, I should've really said fontconfig. FWIW, https://eev.ee/blog/2015/05/20/i-stared-into-the-fontconfig-... delivers a pretty good rundown of why fontconfig is problematic, and also correctly states that "the documentation is atrocious". It's simply a piece of software that attempts to do too much while having an insufficient understanding of the problem. One of fontconfig's foundational problems is that it gets priorities wrong: when you have a series of configuration files, which ones should prevail in case of conflict, the global ones or the local ones? fontconfig thinks it should be the global ones, meaning that whatever you configure, you have to specifically override another rule to make it stick. Also, for all its complexity in dealing with font names, file names, languages, and font styles, fontconfig omits the one obvious entity that is crucial for font rendering: the Unicode code point.

In short, CSS got it right: in CSS, you can define a `@font-face` family that is composed of various code point ranges, each associated with a font resource. This allows fine control over each displayed character. In contrast, fontconfig only lets you configure preferences of one font over another, where your second choice only comes into play when your first font is indeed missing a given code point. And that is really all it does; its complexity is just fluff, imagined functionality that isn't usable.
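
For reference, this is roughly what that looks like in CSS (hypothetical font names; a sketch, not a recommendation):

    /* Two @font-face rules compose one family out of code point ranges. */
    @font-face {
      font-family: "BodyText";
      src: local("Noto Sans");
      unicode-range: U+0000-00FF; /* Basic Latin and Latin-1 */
    }
    @font-face {
      font-family: "BodyText";
      src: local("Noto Sans JP");
      unicode-range: U+3000-9FFF; /* kana and common CJK ideographs */
    }
    body { font-family: "BodyText", sans-serif; }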

Two remarks are in order: first, it's been ten-plus years since I dealt with this stuff and tried to hack LibreOffice, so things might have changed. Also, I have to pull all of this from memory.

Second, not all of Linux font rendering is a mess (HarfBuzz, for one, is great), but, specifically and crucially, fontconfig is, and its documentation at http://www.freedesktop.org/software/fontconfig/fontconfig-us... has not changed a bit since then.

As for Libre/OpenOffice, the only documentation site that goes into the details of the API is still up and was last updated in 2023, so kudos for that. It's at https://openoffice3.web.fc2.com and is in Japanese; I did extensive searches back in the day and am sure this is the best I could find. I must also add that, for whatever reason, I targeted Apache OpenOffice back then, not LibreOffice, so that's one more thing that could conceivably make a difference here.

Here's the Hello World JavaScript example from the documentation:

    importClass(Packages.com.sun.star.uno.UnoRuntime)
    importClass(Packages.com.sun.star.text.XTextDocument)
    importClass(Packages.com.sun.star.text.XText)
    importClass(Packages.com.sun.star.text.XTextRange)
    importClass(Packages.com.sun.star.beans.XPropertySet)
    importClass(Packages.com.sun.star.awt.FontSlant)
    importClass(Packages.com.sun.star.awt.FontUnderline)

    oDoc        = XSCRIPTCONTEXT.getDocument()
    xTextDoc    = UnoRuntime.queryInterface(XTextDocument,oDoc)
    xText       = xTextDoc.getText()
    xTextRange  = xText.getEnd()
    pv          = UnoRuntime.queryInterface(XPropertySet, xTextRange)

    pv.setPropertyValue( "CharHeight",    16.0 ) // Double
    pv.setPropertyValue( "CharBackColor", new java.lang.Integer(1234567) )
    pv.setPropertyValue( "CharUnderline", new java.lang.Short(Packages.com.sun.star.awt.FontUnderline.WAVE) )
    pv.setPropertyValue( "CharPosture",   Packages.com.sun.star.awt.FontSlant.ITALIC )
    xTextRange.setString( "Hello World (in JavaScript)" )

So when I say that the API is convoluted and oozes Java OOP, I specifically mean the way you have to use `XSCRIPTCONTEXT` and `UnoRuntime.queryInterface()`. One might think this is caused by the use of a scripting language instead of OpenOffice's 'native' Java, but no:

    import com.sun.star.uno.UnoRuntime;
    import com.sun.star.frame.XModel;
    import com.sun.star.text.XTextDocument;
    import com.sun.star.text.XTextRange;
    import com.sun.star.text.XText;
    import com.sun.star.script.provider.XScriptContext;

    public class HelloWorld {
        public static void printHW(XScriptContext xScriptContext)
        {
            XModel xDocModel = xScriptContext.getDocument();

            // getting the text document object
            XTextDocument xtextdocument = (XTextDocument) UnoRuntime.queryInterface(
                XTextDocument.class, xDocModel);

            XText xText = xtextdocument.getText();
            XTextRange xTextRange = xText.getEnd();
            xTextRange.setString( "Hello World (in Java)" );
        }
    }

All of this is completely superfluous. Why would you call a method, pass it the `class` attribute of a class (?) and a 'model' (?) of the current document, and then cast the return value so it obtains / becomes an instance of that class? Again, that line is literally, after renaming:

    Foo foo = (Foo) UnoRuntime.queryInterface( Foo.class, FOO );

If that doesn't look contorted to you, I don't know what does.

When you say "LibreOffice is not based on Java", I will not challenge you on that, but just say that back in the day, when I tried to 'remotely control' LibreOffice from a scripting language, all I got to see was a Java API; whatever it was bolted onto didn't matter, like, at all. Likewise, I will not dispute different evaluations of development speed, or whether a rewrite would be a good thing, since those points are probably too subjective.


Wanted to point out that Linesight, the final project described in the article, released a new update last month, and it now beats world records on about a dozen maps, across official and user-made ones: https://www.youtube.com/watch?v=cUojVsCJ51I It's some really impressive stuff.


Brilliant, thanks for the update!

