Hacker News | lmm's comments

Not if the bike goes significantly slower than the e-bike. Which is the whole point.

Idk, I think baby becomes pancake no matter the speed. With my naive understanding of the physics involved, the weight of the teenager and their bike compared to the relatively tiny baby is going to be the deciding factor here.

> my first motorbike ride and car drive at 60kmh were terrifying

Maybe those should be more tightly regulated.


> Why do e-bikes become better or more safe when you have to rotate your legs?

Because you're directly engaged in operating them. Electric handcycles are also legal, the problem isn't which body part it is, it's whether you're moving muscles to move your bike - and, perhaps more importantly, that your bike will stop accelerating when you stop your body.


The victim is the one who's injured or dead.

Must be nice to live in a world simpler than the one I do. Your broad generalization has so many deficiencies that I actually deleted what I was writing. There are countless exceptions to your hasty generalization.

If that's how you want to redefine the term, sure, ok. But the "victim" in this particular scenario is the one at fault.

Sure, if you commit suicide you're a victim of yourself.

You seem to be unwilling to assign blame to the operator of hazardous machinery for running people over. Why is that? "They should know better"? Kids? Disabled people unable to use their mobility devices on the unplowed sidewalks? Animals?

If you cannot control yourself enough to slow down around people, you need to get out of the driver's seat. If you cannot stop before a potential crash, you were going too fast. 100% of the time. It literally does not matter if someone jumps in front of you. If you're going to choose the statistically most violent mode of transportation by multiple orders of magnitude, that is your responsibility.


Certainly very hard to defend against that when the messenger you're using won't let you use a device you control.


Early '90s maybe. By the late '90s people knew tests were a good idea, and many even applied that in practice.


This all sounds very clever, but in practice no, most of the problems are code problems and if you do functional programming you really can avoid getting paged at 3AM because everything broke. (Even if you don't solve the deployment problem the article talks about - you're not deploying at 3AM!). It's like those 400IQ articles about how Rust won't fix all your security problems because it can't catch semantic errors, whereas meanwhile in the real world 70% of vulnerabilities are basic memory safety issues.

And doing the things the article suggests would be mostly useless, because the problem is always the boundary between your neat little world and the outside. You can't solve the semantic drift problem by being more careful about which versions of the code you allow to coexist, because the vast majority of semantic drift problems happen when end users (or third parties you integrate with) change their mind about what one of the fields in their/your data model means. There are actually bitemporal database systems that version the code as well as the data, letting you compute what you thought the value was last Thursday with the code you were running last Thursday ("bank python"). That solves some problems, but not as many as you might think.


> It's like those 400IQ articles about how Rust won't fix all your security problems because it can't catch semantic errors, whereas meanwhile in the real world 70% of vulnerabilities are basic memory safety issues.

even setting aside the memory safety stuff, having sum types and compiler-enforced error handling is also not nothing...


The thing is, you're gaining a bunch of knowledge about compiler internals and optimisations, but those aren't necessarily specified or preserved, so it's questionable how valuable that experience actually is. The next release of the compiler might rewrite the optimiser, or introduce a new pass, and so your knowledge goes out of date. And even if you have perfect knowledge of the optimiser and can write code that's UB according to the standard but will be optimised correctly by this specific compiler... would that actually be a good idea?

All of that is less true in the microcontroller world where compilers change more slowly and your product will likely be locked to a specific compiler version for its entire lifecycle anyway (and certainly you don't have to worry about end users compiling with a different compiler). In that case maybe getting deeply involved in your compiler's internals makes more sense.


Learning about how compilers optimize code isn't really knowledge that goes out of date. Yes, things get reshuffled or new ideas appear, but everything builds on what's already there.

You'd never want (except in extreme desperation) to use this knowledge to justify undefined behavior in your code. You use it to make sure you don't have any UB around! Being able to answer questions like "I wrote a null pointer check, why isn't it showing up anywhere in the assembly?" can be really helpful for resolving problems.


You're not just learning the specific thing this compiler does on this code, but also the sorts of things compilers can do, in general.


The US would spend 20 years arguing about which agency's jurisdiction it was, and ignore the dead babies?

No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.


Of course a bug is negative business value. Perhaps the benefit of shipping faster was worth the cost of introducing bugs, but that doesn't make it not a cost.


If a bug is present but there is no one who encounters it, is it negative business value?


That’s not how this goes.

Because the entire codebase is crap, each user encounters a different bug. So now all your customers are mad, but they're all mad for different reasons, and support is powerless to do anything about it. The problems pile up, but they can't be solved without a competent rewrite. This is a bad place to be.

And at some level of sloppiness you can get load-bearing bugs, where an unknown amount of behavior depends on core logic being dead wrong. Yes, I've encountered that one…


> That’s not how this goes.

Once you gain some professional experience working with software development, you'll understand that that's exactly how it goes.

I think you are failing to understand the "soft" in "software". Changing software is trivial. All software has bugs, but the only ones being worked on are those which a) are deemed worthy of being worked on and b) have customer impact.

> So now all your customers are mad, but they’re all mad for different reasons, and support is powerless to do anything about it.

That's not how it works. You are somehow assuming software isn't maintained. What do you think software developers do for a living?


Nothing I just described was hypothetical. I've been the developer on the rewrite crew, the EM determining if there's anything to salvage, and the client with a list of critical bugs that weren't getting fixed and that ultimately killed the contract.

If you haven’t seen anything reach that level of tech debt with active clients, well, lucky you.


If you can see the future and know no-one will ever encounter it, maybe not. But in the real world you presumably think there's some risk (unless no-one is using this codebase at all - but in that case the whole thing has negative business value, since it's incurring some cost and providing no benefit).

