moody__'s comments

A lot of discussion in this thread is pointing out that Chromium exists and that it would be hard for a company to properly fund a web browser without the backing of a tech giant whose more direct revenue stream is elsewhere. I think this showcases a larger issue with the web as it stands today. Why has building a browser for the "open web" become such a complex undertaking that it requires the graces of a tech giant just to keep pace? Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own? Right now it seems this is only possible if you accept that you'll be building on top of Chromium.

I know the focus by the DOJ here seems to be more on search and less on the technical control that Google has over the web experience through implementation complexity, but I can only hope that turning off the flow of free cash gives more "alternative" browsers some space to catch up. Things like Manifest V3 show that Google is no stranger to tightening the leash when innovation in web technologies impacts its bottom line; I'd like to have a web where this type of control isn't possible.


That was the goal, not an accident. The standard itself runs to a length comparable to the kloc count of a medium-sized serious project.

They've driven these numbers up to ensure that no one except them and their leashed pets could repeat it.

And here we are: you can have ten internet-enabled apps with text, images, and videos, all with basically the same functionality, but you can only copy nine of them.

You can’t even keep up with a simple fork.


> Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own?

Sure, we can have the original web with text and the occasional embedded photo. But if you want what amounts to a full-blown operating system, with a rock-solid sandbox plus an extremely performant virtual machine, that's going to be a high bar.


> Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own?

Of course it can, and it is being done: Linux Foundation Europe runs Servo, the GNOME Foundation runs WebKitGTK and Epiphany, and the Ladybird Browser Initiative runs Ladybird.


Only WebKitGTK is feature complete. Servo and especially Ladybird are still likely to run into missing features while browsing.

But they are in early development, and they are making great progress every month.

Despite existing since 2012 and getting funding from several companies, Servo development has been intermittent. It's now pretty usable, but it's not a success story in keeping pace with the tech giants.

No, Servo has only recently gained traction, and it's doing really well in 2024: https://blogs.igalia.com/mrego/servo-revival-2023-2024/

These guys think it can be done: https://ladybird.org

> A lot of discussion in this thread is pointing out that Chromium exists and that it would be hard for a company to properly fund a web browser without the backing of a tech giant whose more direct revenue stream is elsewhere.

This is not an issue though, is it?

Like how all those magazine subscriptions make their money off ads. The idea that a business can't survive on its own is fine, no?

If it's a singular tech giant, then that's a problem, but if Chrome had contracts with a dozen-plus companies, it would sound really sustainable.


> Like how all those magazine subscriptions make their money off ads. The idea that a business can't survive on its own is fine, no?

This is not quite the same. If a single magazine starts to become more ads than decent content, it is not insurmountable for another company to start a competitor. It's not ad income itself that is bad; it's that in the case of a web browser it is insurmountable for a company to build a competitor from scratch. It wasn't always the case, but because Google has dumped so much engineering into Chrome, they've effectively pulled up the ladder behind them.


I can't speak for OpenBSD specifically, but I can speak to some of my thoughts on why an operating system continues to use C. Supporting a language ecosystem is not easy; the fewer "default" languages needed to bootstrap the core system, the better. The nice part about C is that it's one of the few languages suited for both kernel space and user space. Out of the alternatives you listed, the only language that could even seriously be considered for kernel space is Rust, and even that took a lot of back and forth to get to that point in the Linux kernel. Higher-level languages have a larger range of assumptions, and you have to carry those accommodations into kernel space if you want to use them. There is also the issue of memory management in kernel space being a much more complicated environment than in user space. How do I teach the borrow checker about my MMU bring-up and maintenance?
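To make that last point concrete, here's a minimal sketch of what early page-table bring-up can look like. The names, flags, and 386-ish layout are all made up for illustration, not taken from any real kernel:

    /* Hypothetical early MMU bring-up: identity-map the first 4MB
     * with 4KB pages. Flags and layout are 386-ish but invented. */
    #define PTEP    0x001           /* present */
    #define PTEW    0x002           /* writable */
    #define NPTE    1024

    typedef unsigned long ulong;

    void
    mmuinit(ulong *pgdir, ulong *pgtab)
    {
        int i;

        /* After this, every physical page below 4MB is reachable
         * both as itself and through the new mapping; there is no
         * single "owner" for a borrow checker to reason about. */
        for(i = 0; i < NPTE; i++)
            pgtab[i] = (i * 4096) | PTEP | PTEW;
        pgdir[0] = (ulong)pgtab | PTEP | PTEW;

        /* loading the page directory register would happen here,
         * in inherently unchecked assembly */
    }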

I am also skeptical of your claim that removing memory bugs frees up brain space for logic bugs, at least for Rust. Rust has grown quite a number of language features that, in my experience, result in a higher cognitive load compared to C. If you seriously reduce your reliance on the C macro system (as Plan 9 has shown is possible), the language itself is quite simple.


I didn't directly mention third-party software, but when I talk about the various levels of default software, the implication is that systems with less built in typically rely more heavily on third-party software. Even those that ship a more batteries-included base still have to provide mechanisms for using third-party software, given the ecosystem.

> ... has a lot of third party software that would be hard to maintain along with the rest of the system

This is the point that the article is trying to challenge. I think 9front proves that it's doable.

> Most people don't care what commit your system is built from as long as it works as their programs expect it to.

The former helps the latter a lot. Everything is tested together, and for a lot of functionality there is only one option.


Your link for 9front mentions that ssh2 is not included. This is because the code was rewritten and the program is now just called ssh(1). Other features of ssh are accessible through sshfs(4) and sshnet(4). The only difference in features compared to the original Plan 9 is that 9front does not currently have code for an ssh server. I know some users who are interested in this capability, so it'll likely happen at some point.


There is a direct correlation between the amount of power exerted by a project like Debian over an upstream project and the amount of effort and upkeep required to do so. I think of this as a sliding scale between shipping with zero patches and a full-on fork. From my understanding, distribution patches on top of upstream projects tend to be just bug or portability fixes and stop short of adding features. The point I was trying to communicate was that in order to fully interact with the software, you either have to be part of the upstream community or essentially fork.

To illustrate how I think Plan 9 is different in this regard: a patch for 9front could include a new feature for our compilers and then also show how useful it is by using it within other parts of our code. In Plan 9 you can interact fully with every component.


What is said in this blog I think is true, but it is only a single piece of the perverse-incentive puzzle. Folks up in the C-suite have realized that they can just say they care about security and reap the benefits. In my experience, the average Joe is not going to inconvenience himself on account of some security breach, and if the company is at least _saying_ it cares, then Joe can write it off as incompetence and go about his day.

Which makes security spending like entertainment spending: when you have extra money, you spend it to make yourself, and potentially your customers, feel good. If the economy is bad, you lie about your security posture just like you lie about how much you care about the customer in general.


The "asynchronous state machine" name here is a bit strange, when searching for this term used elsewhere I couldn't find any formal definition what it is. Reading further in the README it looks like the author implies that it really just means a DFA? Not entirely sure.

I'd also like to add the Plan 9 implementation[0], which also uses the properties of UTF-8 as part of its state machine and anecdotally has been quite performant when I've used it.

[0] http://git.9front.org/plan9front/plan9front/107a7ba9717429ae...
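For a taste of what "using the properties of UTF-8" means in practice, here's a minimal sketch of the general idea. This is not the Plan 9 code, and overlong encodings and surrogates are deliberately not checked:

    /* Minimal UTF-8 validator sketch: the state is just the
     * number of continuation bytes still owed, which the leading
     * bits of each byte determine. Not the Plan 9 code. */
    int
    utf8valid(unsigned char *s, int n)
    {
        int i, need;

        need = 0;
        for(i = 0; i < n; i++){
            if(need > 0){
                if((s[i] & 0xC0) != 0x80)   /* want 10xxxxxx */
                    return 0;
                need--;
            }else if(s[i] < 0x80)
                ;                           /* 0xxxxxxx: ASCII */
            else if((s[i] & 0xE0) == 0xC0)
                need = 1;                   /* 110xxxxx */
            else if((s[i] & 0xF0) == 0xE0)
                need = 2;                   /* 1110xxxx */
            else if((s[i] & 0xF8) == 0xF0)
                need = 3;                   /* 11110xxx */
            else
                return 0;                   /* invalid lead byte */
        }
        return need == 0;
    }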


"Asynchronous" isn't part of the name of some really cool state machine :-) Its just an adjective and means the same as when you put it in front of any other noun.

A synchronous state machine is one where the incoming stream of events is always "in sync" with the state transitions, in the following sense:

1. When an event happens, the state machine can transition to the next state and perform any necessary actions before the next event happens

2. After the state machine has transitioned and performed any necessary actions, the program must wait for the next event to happen. It can't do anything else until it does.

An asynchronous state machine doesn't make the main program wait until the next event happens. It can go on and do other things in the meantime. It doesn't have to wait for the next event to arrive.

When the next event does arrive, the program pauses whatever else it is doing and hands control back to the state machine, which processes the event and then hands control back over to the main program.
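Here's a sketch of the difference in C (all names hypothetical, and transition() is assumed to exist): the synchronous machine owns the loop and blocks; the asynchronous one is inverted into a feed function, so control returns to the caller after every event.

    int transition(int state, int ev);      /* assumed to exist */

    /* Synchronous: the machine owns the loop and blocks in
     * getevent() until each event arrives. */
    void
    runsync(int (*getevent)(void))
    {
        int state, ev;

        state = 0;
        while((ev = getevent()) >= 0)       /* blocks here */
            state = transition(state, ev);
    }

    /* Asynchronous: inverted into a feed function. The caller
     * does other work and hands each event over as it arrives. */
    typedef struct Machine Machine;
    struct Machine { int state; };

    void
    feed(Machine *m, int ev)
    {
        m->state = transition(m->state, ev);
        /* returns immediately; control goes back to the caller */
    }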


I was not treating "asynchronous state machine" as a noun; even taken as a generic adjective, it doesn't make sense in this context. What "other things" is this wc2.c doing while the state machine is churning? There is no multithreading or multiprocessing going on here. So I find it hard to believe that this use of "asynchronous" falls within how I would generally see it used. As such I thought perhaps it referred to a specific methodology for designing the code, something akin to how the "lock-free" adjective implies a certain design sensibility.


// What "other things" is this wc2.c doing...

AFAICT, wc2.c isn't written to be an asynchronous state machine. It doesn't ever seem to transfer control to any other place.

// So I find it hard to believe that this use of "asynchronous" falls within how I would generally see it used

Yeah, you are legitimately confused. The post talks about asynchronous state machines, but wc2.c isn't an example of that. I'm sure this gave you a severe case of WTF?!??

// thought perhaps it referred to a specific methodology for designing the code

It does; that's exactly what it is: a programming methodology, or perhaps better put, a design pattern. But wc2.c isn't an example of code written using that methodology. Again, you are legitimately confused here, because the post talks about something and wc2.c isn't that.

Do you know Python? If you google "asynchronous programming in python" you'll get all kinds of blog posts and YouTube videos that explain the technique.


Why would the author of this repository make "wc2 - asynchronous state machine parsing" the header of his README if wc2 was not, by his own definition, an "asynchronous state machine"? I ask you to consider what is more likely: that your blanket definition of asynchronous is incorrect as applied here, or that the author is just fucking with us by adding random words as the description of his project.


Indeed this is very confusing! The program implements a pretty standard state machine (ok), but there is nothing apparently async here. The author alludes to combining the state machine with async IO in this paper (https://github.com/angea/pocorgtfo/blob/master/contents/arti...), but this implementation is just using fread to (synchronously) read a chunk of bytes.

Furthermore, given disk caching and memory mapping, I'm not convinced async IO would really be that astonishingly different, as individual reads are going to be amortized over pretty much the same bulk reads that the sample program is doing.

As the author says themselves, it seems the main win is hand-implementing the incremental UTF-8 parsing instead of calling a library/OS function.
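Schematically, the pattern in question is just this (a paraphrase, not the actual wc2.c source):

    /* Paraphrase of the synchronous pattern, not the wc2.c
     * source: fread blocks, then the whole chunk is pushed
     * through the state machine before the next read. */
    #include <stdio.h>

    int transition(int state, int c);   /* assumed to exist */

    void
    count(FILE *fp)
    {
        unsigned char buf[65536];
        size_t n, i;
        int state;

        state = 0;
        while((n = fread(buf, 1, sizeof buf, fp)) > 0)  /* blocks */
            for(i = 0; i < n; i++)
                state = transition(state, buf[i]);
    }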


> I ask you to consider what is more likely: that your blanket definition of asynchronous is incorrect as applied here, or that the author is just [elided] with us by adding random words

LMAO!!! Well, when you put it that way, I can't blame you for not believing me. Your skeptical mindset will no doubt serve you well in this era of deep fakes and AI hallucinations.

Alas, it is also an example of how this skepticism, however necessary, is going to slow down the sharing of information :-( It's the price we're going to pay for so much lying and such a breach of the social contract.

I assure you, however, wc2.c is not asynchronous. It would be nice if the author could step in here and clarify, because it is hella confusing.

I don't believe the author is effing with us either. Documentation and comments are not automatically synced with the code they describe, so it's easy for them to drift apart. Perhaps the author intends to implement asynchronous features in the future, or perhaps he changed his goals between when he wrote the README and when he wrote the code.


You'll have to ask him.


As I see it, state machines are particularly good for expressing logic in asynchronous systems. For instance, in the late 1980s I wrote assembly-language XMODEM implementations for the 6809 and the 80286, and since that kind of code is interrupt-driven, it is efficient to make a state machine that processes one character at a time. Today, when you use async/await, the compiler converts your code, loops and all, into a state machine.
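Roughly what that conversion looks like, hand-written (a hypothetical sketch, not actual compiler output): each await point becomes a state, and the loop survives as a transition back into the same state.

    /* Hand-compiled sketch of something like:
     *
     *     while((c = await nextbyte()) != '\n')
     *         n++;
     *
     * Hypothetical; not actual compiler output. */
    typedef struct Task Task;
    struct Task {
        int state;
        int n;
    };

    int                             /* returns 1 when finished */
    resume(Task *t, int c)
    {
        switch(t->state){
        case 0:                     /* awaiting the next byte */
            if(c == '\n'){
                t->state = 1;
                return 1;           /* done */
            }
            t->n++;
            return 0;               /* suspend until next byte */
        default:
            return 1;
        }
    }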


// 6809

The good old days :-)


"Parallel" seems to be a more popular term in the literature.


This is spot on; the issue is not the mistake per se but creating a code base that the team themselves are not familiar with. With intern- or team-member-generated code you can sit down with them, have them walk you through the code, and hear their reasoning, but you can't do that with an LLM. The author even admits to just mimicking the existing structure the LLM started with when they had to expand, which sounds like a first commit from a team member just getting familiar with some new code. Part of the benefit of writing your own code is that you can do it in a way that clicks for you. Hopefully this lets you debug and extend it efficiently. I don't know why someone would squander this opportunity.


Location: Des Moines, Iowa

Remote: Open

Willing to relocate: Unlikely

Technologies: Go, C, k8s, operating systems, embedded systems, networking, profiling, penetration testing/security analysis

Résumé/CV: Available upon request

Email: moody [at] posixcafe [dot] org

Hello, I would generally consider myself a fanatic of security, networking, and operating systems. I've worked professionally in penetration testing, software engineering, and currently DevOps. I've worked on a novel EDR bypass, on high-throughput data streams for a time-series database, and on managing complex k8s clusters full of operators. I also work on open source projects outside of work, specifically the 9front project, where I've worked on new security enhancements to the kernel, additions and bug fixes to our compilers, enhancements to our Unicode support, and more.


I really doubt anything happens. The three big reasons Nintendo attacked Yuzu were: it was a modern console, developers were making a profit working on it, and it violated the DMCA by shipping a circumvention for game encryption. No one at 9front is making any money from this work, and we fail to fit the other two due to the age of the systems.

