Hacker News | Too's comments

With the vendor/system split introduced by Project Treble in Android, it should be easier than ever to build your own system against a rich set of hardware abstractions that work on a wide range of devices. Assuming you are OK with still running a very thick slice of the stack as a proprietary vendor image.

Yes, you can run a GSI (and project Droidian does that), but then you're dependent on a downstream kernel and an Android-ish early boot environment, which will likely lead to pointless incompatibilities compared to a fully upstreamed approach.

Upstream doesn't care about Clang and Rust as much as Google does, so that isn't ever going to happen, however much people advocate for it.

The biggest hurdle to getting AOSP-kernel features into the upstream kernel is not Clang or Rust, it's cleaning up hacked-together kernel code in a way that makes it long-term acceptable to upstream maintainers. (And getting rid of userspace blobs for things like graphics.) Always has been, for as long as AOSP has been a thing.

You missed the point. If A, B and C are all open source you can actually fix them too. Just send a PR. Most projects are open to ensuring compatibility moves forward.

Just send a PR? What are you talking about? A project that requires an old version of a dependency either has technically valid reasons and is unlikely to upgrade just because one more user asks nicely, or is maintained at a slow pace and/or with low effort, so that even if you do the work, your patch is likely to be ignored (at least temporarily).

Reproducible builds and open source sound like a good thing.

I wouldn’t expect the reviewers to deal with every add-on’s bespoke snowflake build. Even less so if it requires access to a private module. Mozilla should provide a baseline of how a build is intended to be done; then extensions just have to follow this template. Though yes, you would expect them to have some familiarity with basic stuff like Yarn, and that the baseline supports a few of the most popular builders.


We use a relatively simple build. At the base of it, if you have Node and Yarn, a complete build is as easy as:

  yarn npm login
  yarn --immutable
  yarn build

Personally, I don't really find it reasonable to place demands on the build tooling of an external company.

I'm assuming you would also find it reasonable for Google to suddenly ship Chromium with a requirement that you use "google-pack" for all JS builds, or it won't run them?

To be entirely blunt, what exactly do you think is going to change when we're already giving them bare JS? It's not like we're shipping a binary blob here; we're literally handing them a zip file with perfectly fine, inspectable JavaScript inside it.

Further, do you realistically believe that a single low-grade QA/support engineer who can't even install the correct tooling is going to catch malware?

Because I read their matrix chats and I can fucking promise they aren't catching the malware all that fast....


> I don't really find it reasonable to place demands on build tooling for an external company.

I'm not sure I agree, plenty of OS distributions do this. If you want to distribute on Arch in the official AUR you're going to need a PKGBUILD file. The difference though is they make it very easy to integrate custom distribution channels where you can build the package however you want, and I would really love to see browsers move more in that direction. Requiring centrally managed signatures from a corporation to install extensions in a purportedly open and community-driven product is just absurd to me.


> I'm not sure I agree, plenty of OS distributions do this. If you want to distribute on Arch in the official AUR you're going to need a PKGBUILD file.

This is fine. This is actually also roughly in line with what you need for an extension (a manifest.json file).
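
For what it's worth, a minimal WebExtension manifest.json is roughly this shape (all names and values here are placeholders, not from any particular extension):

```json
{
  "manifest_version": 2,
  "name": "example-extension",
  "version": "1.0.0",
  "description": "Placeholder extension metadata",
  "background": {
    "scripts": ["background.js"]
  }
}
```

Like a PKGBUILD, it constrains the shape of the output, not how you produce it.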

What the poster here is proposing is rather this: you cannot build that PKGBUILD file using any tooling other than the standard. E.g., you want to script how that PKGBUILD file gets made? Fuck off, not allowed.

That's a COMPLETELY different take. It's not dictating limitations on the output (which I find reasonable as a required integration between products); it's dictating limitations on how a company produces that output (which I find to be monopoly behavior: why should they get to tell me what tools or processes to use? My output is the SAME.).


Docker

It seems reasonable that they'd have a requirement that there's a single file they'll run, maybe even with a predetermined name like ./build, and that's it.

The developer can then juggle all their dependencies and run make/yarn/npm/etc within that. It's really not different from having a CI build script.
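
As a sketch, such a hypothetical ./build entry point could look like this (the file name follows the convention suggested above; the commands inside are just illustrative — the developer puts whatever their toolchain needs there):

```shell
#!/bin/sh
# ./build -- the single entry point the reviewer runs.
# Everything below is the developer's own choice of tooling.
set -eu

yarn --immutable   # install pinned dependencies
yarn build         # produce the final extension bundle
```

The reviewer's side stays trivial: run ./build, diff the result against the submitted zip.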


Here is the original paper https://ieeexplore.ieee.org/ielx8/52/10629161/10629169.pdf. It’s not much longer and I found it easier to read.

In fact, the linked rehash feels uncanny, using similar words and phrases from the original that are otherwise unusual. Made by AI?


It didn't seem AI-like to me. It doesn't use bullet points the way LLMs usually do, the paragraphs are organized differently and have differing lengths, etc. It was a bit too random to be AI, in my opinion.

> In fact, the linked rehash feels uncanny, using similar words and phrases from the original, that are otherwise unusual. Made by AI?

I think it's just the author reusing those words?


Also odd is the expression 'jobs to be done'. At first glance this seemed like an allusion to the famous 'Jobs to be Done' Clayton Christensen talk [1] (well worth the 5-minute watch). But AFAICT it's just coincidental use of the phrase.

[1] https://www.youtube.com/watch?v=sfGtw2C95Ms&t=28s


They mention “high quality code” without defining it. I assume this is well defined internally?

And why is that not a goal of the "test team"?

That seems to be about 5000 words, and the submitted "summary" is about a third of that. Why summarise at all if the result is still that long?

Not me. Just a wild guess:

1. People are just sick of hearing yet another thing about AI.

2. Combine AI with EA and even more buzzwords and you’ll quickly multiply that effect.

3. It comes off as advertising, something that is usually highly frowned upon here. Even if the cause may or may not be noble. The crowd here wants to digest interesting content.


You missed quoting the next sentence, about providing a confidence metric.

Humans may be wrong a lot, but at least the vast majority will have the decency to say “I don’t know”, “I’m not sure”, “give me some time to think”, or “my best guess is”. In contrast, most LLMs today just spew out more hallucinations in full confidence.


What about OIDC does not scale?

The things you just listed sound like unnecessary complexity that inevitably leads to the “too big standard” problem, where every vendor and ID provider has their own half-assed, incomplete and incompatible implementation of the standard, or worse, one with security bugs. Something quite often seen with SAML.

That’s not to say that oidc or oauth doesn’t have alignment issues. See https://news.ycombinator.com/item?id=35713518 “We implemented OAuth for the 50 most popular APIs. TL;DR: It is still a mess”


I can recommend OpenTelemetry if you need a more comprehensive tool like this.

There is a whole library of so-called instrumentations that can monkeypatch standard functions and produce traces of them.

Traces can also propagate across processes and RPCs, giving you a complete picture even in a microservice architecture.
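
The monkeypatching idea can be sketched in plain Python. This is a toy stand-in, not the real OpenTelemetry API: a decorator replaces an existing function with a wrapper that records each call as a named, timed span, which is roughly what auto-instrumentation libraries do to functions like HTTP-client calls at import time.

```python
import functools
import time

SPANS = []  # collected trace spans: (name, duration_in_seconds)

def instrument(fn):
    """Wrap fn so each call is recorded as a span, the way an
    auto-instrumentation library monkeypatches a standard function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append((fn.__name__, time.perf_counter() - start))
    return wrapper

# A hypothetical application function we want traced.
def fetch_user(user_id):
    return {"id": user_id, "name": "example"}

# "Monkeypatch": rebind the name to the instrumented version.
fetch_user = instrument(fetch_user)

fetch_user(42)
fetch_user(7)
print([name for name, _ in SPANS])  # → ['fetch_user', 'fetch_user']
```

Real instrumentations additionally attach a trace ID to each span and inject it into outgoing request headers, which is what makes the traces line up across services.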


Is there an example?



They do have one of the better EVs on the market.


Freevalve, yes.

They’ve got loads of other cool outside-the-box solutions, like the Lightspeed transmission with 7 clutches, providing instant shifting between any gears and the ability to slip freely between them.

