
I agree the global experience WeWork provides is really unique.

The question is whether that product would have ever been sustainable—had the company been better managed.


Regus is still worth billions.


You just have to realise that given enough scale everything turns proprietary anyway.

The difference being whether you are internally maintaining your special snowflake, or else paying for a solution maintained for you by a 3rd party.


I feel the same way.

Python has been my goto language for a long time, but lately I've been holding off on writing new tools with it because in the back of my mind I have this nagging feeling that making them robust and portable will take too much work—and so I don't even bother getting started.

It's the classic trap: you get to ~99% pretty fast, but then the last 1% (packaging/distribution) takes forever.

But I'm still looking for a good alternative... Golang does the job—no question, but it doesn't spark joy for me.


While there is definitely a higher barrier to entry, once I got comfortable with Rust (and finally stole someone's working cross-compile/publish GitHub Actions for it), it has supplanted Golang in this use case because it does spark joy for me.


100% agreed.

Car lanes in the city consume a significant public resource (space) and encourage negative externalities (noise, heat, air pollution, etc.). They should absolutely be replaced with more space-efficient options: tram tracks & bike lanes.


I think it's important to separate rural and suburbia.

I agree that the sense of community can be high in rural areas. People who live there can be very strongly rooted in the place. Depopulation is taking its toll though.

Suburbia on the other hand...


I agree. They have weird food safety standards!

Especially when it comes to the fermented stuff. Cheese is a good example: most of the good stuff is severely restricted in the US. Idem for cured meat—they mostly don't allow non-pink salt cured meat. Cooking temperatures in restaurant settings are also on the very safe side (finding pink pork is rare for example).

All of this contributes to food that is often underwhelming taste-wise.


It seems like you are disagreeing with GP? They are saying "EU tends to restrict things until proven safe, whereas US tends to allow until proven unsafe"...while you are bringing up the example of cheese and meat, which are more permissive in the EU than in the US, from a food safety standpoint.


I was talking about the additional ingredients/chemicals example when referring to the EU being more restrictive, whereas the EU is more permissive with traditional foods (also in my comment), which I personally find to be positive.


> a grocery store chain

Here's your problem right there. There's not a real need for this, when you can just go to your local market.

Whole Foods works in the US because in most places this is the only option to obtain (quality) fresh produce. Not so in France.

It's a fundamentally different approach.


That doesn't make any sense to me.

Most instructions are not atomic, so even if you had runtime checks, something like a debugger could still inject invalid state in between your checks, making them ineffective.

If real-time code injection is your threat model, I don't see how runtime checks would get you anywhere.
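
To make that concrete, here's a minimal Go sketch (the iota-based Color type is a hypothetical stand-in, not the actual code from this thread): the gap between a runtime check and the subsequent use is exactly where injected state slips through.

    package main

    import "fmt"

    // Hypothetical enum-style type, standing in for the example discussed
    // in this thread.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func (c Color) valid() bool { return c >= Red && c <= Blue }

    func describe(c *Color) {
        if !c.valid() { // runtime check passes here...
            panic("invalid color")
        }
        // ...but check-then-use is not atomic: a debugger (or anything else
        // that can write this process's memory) can overwrite *c right here,
        // between the check and the use, making the check ineffective
        // against that threat model.
        fmt.Println("color:", *c)
    }

    func main() {
        c := Green
        describe(&c)
    }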


You seem to be suggesting that (to use a common phrase) if something can be done, it should be done. The previous poster's example regarding a debugger was to challenge the notion that there was any meaningful reason to constrain non-accidental type subversion. You seem to be saying it's absurd to care about someone using a debugger to inject invalid code at runtime because there's nothing you can do about it, thus implying if you can do something, you should. But that only begs the question--why should you? To some people, defending against debugger code injection is no more absurd than defending against other examples where the programmer deliberately alters code to subvert static type checking. If you disagree then articulating more meaningful distinctions among cases would be worthwhile. Mere ability is generally not considered sufficient justification alone to do something.


This is an inane comparison. Casting as demonstrated in the example is normal and has to be expected. Go types aren't as strict as types in other languages. Code can't assume otherwise.


> Casting as demonstrated in the example is normal and has to be expected.

Can you give an example where someone has done similar casts accidentally? It seems like it would be hard to accidentally typo. That leaves malice, which seems difficult to defend against, in light of the kinds of system calls that are available to a program.


Sure, by parsing a string to a Color value.
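
For illustration, a minimal Go sketch of that failure mode (again assuming a typical iota-based Color, not the actual code from upthread): the conversion compiles without complaint, and nothing constrains the result to the declared constants.

    package main

    import (
        "fmt"
        "strconv"
    )

    // Hypothetical enum-style type, assumed for illustration.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func main() {
        // Parsing external input and converting it straight to a Color is
        // perfectly ordinary Go; nothing checks the range.
        n, err := strconv.Atoi("42")
        if err != nil {
            panic(err)
        }
        c := Color(n)
        fmt.Println(c) // prints 42: a Color that is none of Red, Green, Blue
    }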


The example given wasn't a simple cast of a value, and definitely not an implicit coercion. The example is more like C-style type punning where you explicitly cast a pointer to the value and then write through the dereferenced pointer.
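
For concreteness, here's a minimal Go sketch of that kind of punning, simplified to a plain int-backed Color rather than the sealed-interface variant this subthread assumes; note how deliberate the incantation is.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Hypothetical enum-style type, simplified for illustration.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func main() {
        c := Red
        // Reinterpret the Color's address and write through the dereferenced
        // pointer: the C-style punning described above. Hard to type by
        // accident.
        p := (*int)(unsafe.Pointer(&c))
        *p = 1234
        fmt.Println(c) // prints 1234: no declared constant has this value
    }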

I don't doubt that there are niches where such explicit type coercion patterns are common in Go and susceptible to mistakes, but I doubt usage of constant identifiers is such a niche.

Rust is currently the standard bearer for strong, static type safety, and it even has both the enum types and pattern matching construct which Go lacks. AFAIU, you can use unsafe{} Rust code to perform a similar type punning trick, successfully assigning an invalid value to an enum object. I don't know if Rust's code generator always inserts runtime validity checks in match statements on an enum value without a catchall/default case, but certainly it's possible for Rust code to have an explicit if/else chain that at compile time appears comprehensive but which would neither panic nor produce the expected behavior. Does that mean Rust programmers shouldn't rely on Rust's static typing, instead always adding explicit code to handle unknown/invalid enum values?

Maybe the assertion that Go code should have such checks is more reasonable than for Rust code, but you haven't explained how. At least to me, the simplest, minimal code to achieve the subversion in both Go and Rust seems similarly stilted and similarly unlikely to be written by mistake. (To be clear, the context of this subthread as I understand it assumes the interface method hack, the subversion of which requires the type punning.)


I don't know what point you're trying to make, beyond just objecting to the points that I'm trying to make. Not interested in continuing.


Can you point at a git commit where someone had a similar bug implemented by accident and fixed it? The code you posted earlier seems to be something that would be incredibly hard to write without malice. And if you're considering malice, you also need to consider mmap, foreign function interfaces, writes to /dev/mem, and other similarly perverse mechanisms.


As an ex-Novell/SUSE employee this makes sense to me. Upstream is supposed to keep marching onwards.

Backporting is _so_ much work. And it's unfortunately not sexy work either, so it's always going to be hard attracting unpaid contributors for it.

If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.


Unfortunately, none of these companies providing 10+ years of maintenance are doing so for most embedded devices. We either need to get SoC vendors to update their kernel baselines regularly (this is hard; we've been trying for a decade and not seen much progress), or alternatively get them to backport fixes and patches (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)


The effect of the EU’s Cyber Resilience Act will be interesting. It requires maintenance, at least for security fixes.


Exactly. It sounds like currently there's no money to be made supporting old embedded devices (in the consumer space at least), because no one is on the hook for long term maintenance.

Regulations _could_ change the incentives, and create a market for long term servicing. Regulations are hard to get right though...


Old devices getting support from the Linux team is only one of the ways this can play out though. Some other ways I can think of:

- Old devices are phased out sooner.

- Moving away from Linux towards proprietary embedded OSes that do provide support.

- No doubt some companies will try just ignoring the regulations.


Or maybe vendors will be incentivized to actually upstream kernel patches, plus stop making 10 different models every year for weird market segmentation reasons.


“Old devices are phased out sooner” seems like an OK solution with some caveats.

It is nice that it makes the cost of not supporting things visible to the users. Assuming “phased out” means the device will actually stop operating, “Company X’s devices have a short lifetime” is an easy thing for people to understand.

I suspect consumers will look for brands that don’t have this reputation, which should give those well behaved brands a boost.

Although, if it does turn out that just letting devices die is the common solution, maybe something will need to be done to account for the additional e-waste that is generated.

Moving toward proprietary OSes; hey, if it solves the problem… although, I don’t see why they’d have an advantage in keeping things up to date.

It is possible that companies will just break the law but then, that’s true of any law.


They could also provide the devices without an OS, and point you at the recommended open source ISO to download.


If it is not standard x86, then good luck.


Nobody will move away from Linux, and old devices won't be phased out sooner, at least not in a significant manner.


This won’t make more money available for supporting old devices; it’ll just make the long-term profitability of any device significantly lower, and therefore mean less competition and innovation.

A smarter regulation would have been to require non-commercial-use firmware source disclosure, allowing non-competitive long-term maintenance by owners.


Who is responsible for complying with it? If a Chinese or American manufacturer of an embedded device that does not have a presence in the EU fails to provide updates what happens?

How many of the companies producing this stuff have the skills to fix kernel security bugs?

It will end up being tickbox regulatory compliance and will create barriers to competition, especially from FOSS: https://pyfound.blogspot.com/2023/04/the-eus-proposed-cra-la...


Not sure who the "we" is that you refer to, but Google (and Samsung, and other Android manufacturers, as well as companies building other Linux-based embedded/IoT devices) could band together and create a "Corporate Embedded Linux Consortium", and pool some money together to pay developers to maintain old kernel versions.

If the mainline kernel devs are uncomfortable allowing those to be official kernel.org releases, that's fine: the CELC can host the new versions themselves and call the versions something like "5.10.95-celc" or whatever.

I don't get why this is so difficult for people to grasp: if you want long-term maintenance of something, then pay people to maintain it long-term. It's a frighteningly simple concept.

But yes, it'd be better for SoC vendors to track upstream more closely, and actually release updates for newer kernel versions, instead of the usual practice of locking each chip to whatever already-old kernel they choose from the start. Or, the golden ideal: SoC vendors should upstream their changes. But fat chance of that happening any time soon.

> (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)

I found this statement kinda funny. If the original situation was that they wouldn't take updates from the stable kernels, then what were all those unpaid developers even maintaining them for? It's bad enough that it's (for most people) unrewarding work that they weren't getting paid for... but then few people were actually making use of it? Ouch. No wonder they're giving up, regardless of any progress made with the SoC vendors.


>SoC vendors should upstream their changes. But fat chance of that happening any time soon

I honestly do not understand why SoC vendors don't put in the extra 1% of effort to upstream their stuff. I've seen (and worked with) vendor software that lags 3 to 4 years behind upstream, and if you diff it against upstream it's like 10 small commits (granted, these commits are generally hot garbage).


Isn’t this what the Civil Infrastructure Platform (CIP) initiative [0] was also proposing? Maintenance of Linux kernels on the 10+ year horizon aimed at industrial use cases. Has backing from Toshiba, Hitachi, Bosch, Siemens, Renesas, etc, though a marked lack of chip vendors as members. Not really sure how well it is going though.

[0] https://wiki.linuxfoundation.org/civilinfrastructureplatform...


> Unfortunately, none of these companies providing 10+ years of maintenance are doing so for most embedded devices.

But why do these devices need special kernels anyway? Isn't that the real problem?


The Linux kernel doesn't have a stable ABI for device drivers. Device manufacturers either can't or won't publish their drivers as part of the mainline kernel, which is why they fork Linux instead.


If they got their fixes upstream, they would be pulled into the same tree that CentOS and RHEL build from.


Which really makes you understand why the LGPL was a defeat.


What do you mean? The Linux kernel isn't licensed under the LGPL.


> If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.

Is that even feasible for projects like the Android kernel that distribute their fork to vendors, when RedHat forbids redistribution of their source code?


> it's unfortunately not sexy work

What comprises sexy work in programming?


I don't know what the definition of "sexy work" is, but rewarding work in programming is solving interesting problems.

Backporting fixes isn't generally interesting problem solving.


This is the root of so much of our software quality problem. “I want to work on something shiny” outweighs “I have pride in this software and want to keep it healthy.”


Personally I love working on legacy software, I actually dislike greenfield projects, but even in the context of legacy software and system maintenance, backporting fixes would still not rate highly or provide much in the way of interesting work for me.


I'd say there are enough software developers who enjoy doing the latter. It's mostly the external motivation (both in community standing and in payments) that pushes people towards shiny new things.


And so everyone has his own kink. I'd love to backport patches maintaining some legacy code base.


Are you flagging the word "sexy" or are you asking whether some important projects are fun and exciting and other important projects boring?

Surely maintaining 40-year-old bank COBOL code is important, but it's not considered fun and exciting. Rewriting half of Skia from C++ into Rust is arguably not important at all, but it's exciting to the point that it could reasonably make the front page of HN.


New features, typically.


The tech is still rapidly evolving though—I'm not sure it's realistic to expect standardisation here for a while.

Manufacturers are experimenting with form factors, perf-to-weight ratios, noise levels, weatherproofing, connectivity features, anti-theft mechanisms, ABS, etc.

Battery connectivity and charging is where I'd personally like to see more standardisation—even if there are obvious form factor challenges. Standardisation here could drive costs down, which would benefit the entire market. Maybe the EU should start leaning in here.

