
The obvious use case for unsafe is to implement alternative memory regimes that don’t exist in Rust already, so you can write safe abstractions over them.

Rust doesn’t have the kind of high-performance garbage collection you’d want for this, so starting with unsafe makes perfect sense to me. Hopefully they keep the unsafe layer small to minimise mistakes, but the approach seems reasonable.
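To give a flavour of what I mean by a safe abstraction over unsafe, here’s a minimal hypothetical sketch (my own illustration, nothing to do with the project’s actual code): a manually managed buffer where all the raw-pointer work is confined to one small type and callers only ever see bounds-checked methods.

    use std::alloc::{alloc_zeroed, dealloc, Layout};
    use std::ptr::NonNull;

    /// A tiny manually managed byte buffer. All `unsafe` lives in here;
    /// callers only get safe, bounds-checked methods.
    pub struct RawBuf {
        ptr: NonNull<u8>,
        len: usize,
    }

    impl RawBuf {
        pub fn new(len: usize) -> Self {
            assert!(len > 0, "zero-sized buffers not supported");
            let layout = Layout::array::<u8>(len).unwrap();
            // SAFETY: the layout has non-zero size, asserted above.
            let raw = unsafe { alloc_zeroed(layout) };
            let ptr = NonNull::new(raw).expect("allocation failed");
            RawBuf { ptr, len }
        }

        /// Safe read: the raw pointer never escapes this type.
        pub fn get(&self, i: usize) -> Option<u8> {
            if i < self.len {
                // SAFETY: i is in bounds and the allocation outlives &self.
                Some(unsafe { *self.ptr.as_ptr().add(i) })
            } else {
                None
            }
        }

        /// Safe write: bounds-checked, and &mut self guarantees exclusivity.
        pub fn set(&mut self, i: usize, v: u8) {
            assert!(i < self.len, "index out of bounds");
            // SAFETY: i is in bounds and we have exclusive access.
            unsafe { *self.ptr.as_ptr().add(i) = v };
        }
    }

    impl Drop for RawBuf {
        fn drop(&mut self) {
            let layout = Layout::array::<u8>(self.len).unwrap();
            // SAFETY: ptr was allocated in `new` with this exact layout.
            unsafe { dealloc(self.ptr.as_ptr(), layout) };
        }
    }

Once a wrapper like this is proven sound, everything built on top of it can stay in safe Rust, which is the whole point of keeping the unsafe layer small.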


I'm curious whether it can be done entirely in Rust, though. Maybe some assembly instructions are required, e.g. for trapping or setting memory fences.


If it comes to that, Rust has excellent support for inline assembly.


But how well does it play with memory fences?


You don’t even need inline assembly for those: https://doc.rust-lang.org/stable/std/sync/atomic/fn.fence.ht...
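For example (my own sketch, not taken from the linked docs): relaxed atomics plus a release/acquire fence pair give you the classic publish/consume pattern with no assembly at all.

    use std::sync::atomic::{fence, AtomicBool, AtomicU32, Ordering};

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    // Writer: publish DATA, then raise the READY flag.
    fn publish(value: u32) {
        DATA.store(value, Ordering::Relaxed);
        // Release fence: the DATA store cannot be reordered past the flag store.
        fence(Ordering::Release);
        READY.store(true, Ordering::Relaxed);
    }

    // Reader: if the flag is observed, the acquire fence guarantees DATA is visible.
    fn try_read() -> Option<u32> {
        if READY.load(Ordering::Relaxed) {
            fence(Ordering::Acquire);
            Some(DATA.load(Ordering::Relaxed))
        } else {
            None
        }
    }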


You make a strong case for voice, but that doesn’t necessarily invalidate their argument; they never said voice should be replaced.

Here are some ideas:

1. A data side channel.

2. Use it to send the originator of each message, with a unique note on the receiving end per sender so pilots don’t need to check visually, but also shown on their display so a corrupted or suspicious sender can be verified in desperate circumstances (rather than the current case of “that cannot be done at all”).

3. Digital audio, allowing actual high-quality audio, which we know improves comprehension and which should not be optional in this context.

4. Take some lessons from modern comms systems on how to handle overlapping transmissions, plus the extra bandwidth from digital, so overlap is handled gracefully (I realise the real-time nature prevents being too clever, but perhaps block all but the first to speak and play a tone if you’re being blocked), perhaps with some sensible overrides like ATC and anyone declaring an emergency getting priority. Currently overlap obliterates both messages, and it’s possible for senders to not even know their message was lost. This has contributed to accidents. While basic direct radio transmissions cannot avoid this, smart algorithms with some networking could definitely reduce the failure cases to very rare and extreme scenarios.

5. Let ATC interact with the flight planners on aircraft: show the aircraft’s actual locally programmed flight plan to ATC, with clear icons if it differs from the filed plan ATC has, and perhaps, as an emergency-only measure, allow ATC to submit a flight plan to the aircraft (not replacing the active plan of course, just as a suggestion/support for struggling pilots: “since you have not understood my instructions 3 times, please review the submitted plan on your flight computer and note how it differs from what you programmed”).

6. Aircraft usually know where they are and which ATC they’re meant to be communicating with, so have the data channels talk even when the audio channel is not set correctly. If incompetent pilots forget to switch channel, you can force an alarm instead of launching a fighter jet, or just have a button for “connect to correct ATC” and a red light when you’re not on the correct one.

That’s just the ideas I’ve come up with off the top of my head. No. 4 is probably quite hard to get right, and no. 5 could add workload, so both should be done carefully. But it’s hard to believe the current system is technically optimal, or even vaguely close to it.
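Purely to illustrate what idea 2 might look like on the wire, here’s a hypothetical sketch; every field name and size is invented and it bears no relation to any real avionics protocol.

    /// Hypothetical sender-tagged frame for the data side channel (idea 2).
    /// Every field name and size here is invented for illustration.
    struct TaggedVoiceFrame {
        /// Unique originator ID, e.g. an aircraft address or ATC station ID.
        sender_id: u32,
        /// Sequence number so receivers can detect dropped transmissions
        /// (the “sender never knows the message was lost” problem).
        sequence: u16,
        /// Set when the sender has declared an emergency, so receivers can
        /// prioritise it over overlapping transmissions (idea 4).
        emergency: bool,
        /// Compressed digital audio payload (idea 3).
        audio: Vec<u8>,
    }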

Admittedly, I know the real reason is that having one working system for everyone is better than a theoretically great system that is barely implemented, plus a complicated mess of handoffs between the two. But with care they can absolutely improve things; it just feels like progress is moving a few decades slower than it should be.


The article explains the weaknesses of the password-centric approach:

> whether by phishing or exploiting the fact the passwords are weak or have been reused

1. Phishing is harder when you only ever enter your password into one place, and that one place is designed to be secure and consistent.

2. It is much easier to have exactly one strong password than unique strong passwords for every website.

Is it better than a vault full of random passwords? Probably not, beyond pressuring the user into using the more secure method.


Not disputing the obvious advantages, but since you asked:

Being forced to maintain compatibility for all previously written APIs (and quite a large array of private details and undocumented features that applications ended up depending on) means Windows is quite restricted in how it can evolve.

As a random example, any developer who has written significant cross-platform software can attest that the file system on Windows is painfully slow compared to other platforms (Microsoft actually had to add a virtual file system to Git after they transitioned to it, because they have a massive repo that would struggle on any OS but choked especially badly on Windows). The main cause (at least according to one Windows dev blog post I remember reading) is that Windows added APIs to make it easy to react to filesystem changes. That’s an obviously useful feature, but in retrospect it was a major error: so much depends on the filesystem that giving anything the ability to delay filesystem interaction really hurts everything. But now lots of software is built on that feature, so they’re stuck with it.
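To make the “react to filesystem changes” point concrete, this is roughly what user code does with those APIs. A rough sketch using the cross-platform notify crate (which, as far as I know, sits on ReadDirectoryChangesW on Windows); it’s my own illustration, not anything from that blog post.

    // Assumes the `notify` crate (e.g. notify = "6") in Cargo.toml.
    use notify::{recommended_watcher, Event, RecursiveMode, Result, Watcher};
    use std::path::Path;

    fn main() -> Result<()> {
        // The closure fires for every change the OS reports under the watched path.
        let mut watcher = recommended_watcher(|res: Result<Event>| match res {
            Ok(event) => println!("fs change: {:?}", event),
            Err(e) => eprintln!("watch error: {:?}", e),
        })?;

        // Watch the current directory and everything below it.
        watcher.watch(Path::new("."), RecursiveMode::Recursive)?;

        // Keep the process alive so events keep arriving.
        std::thread::sleep(std::time::Duration::from_secs(60));
        Ok(())
    }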

On the other hand, I believe the Linux kernel has very strict compatibility requirements; they just don’t extend to the rest of the OS, so it’s not as if there’s one strict rule for how it’s all handled.

Linux has the obvious advantage that almost all the software will have source code available, meaning the cost of recompiling most of your apps for each update with adjusted APIs is much smaller.

And for old software that you need, there are always VMs.


Kind of a bad example, firstly because you are comparing Windows with the Linux kernel. The Linux kernel has excellent backwards compatibility: every feature introduced will be kept if removing it could break a userland application.

Linus is very adamant about "not breaking userspace"

The main problem with backwards compatibility (imho) is glibc. You could always ship your software with all the dynamic libs that you need, but glibc makes that hard because it likes to change in awkward ways and break things.


Glibc is one of the few userspace libraries with backwards compatibility in the form of symbol versioning. Any program compiled for glibc 2.1 (1999!) and later, using the publicly exposed parts of the ABI, will run on modern glibc.

The trouble is usually with other dynamically linked libraries not being available anymore on modern distributions.


There’s a classic Yes Minister skit on how dubious polls can be: https://youtube.com/watch?v=ahgjEjJkZks&t=45s


Easy: provide high-quality output when being tested for a new task. The moment you are done outperforming the competition in the tests and have hit production, you slowly ramp down quality, perhaps with exceptions when the queries look like more testing.

Same problem as AI safety, but the actual problem is now the corporate greed of the humans behind the AI rather than an actual AGI trying to manipulate you.


This is confusing on so many levels.


See also: Volkswagen emissions test scandal


I assume they’re referring to ag-gag laws; https://en.wikipedia.org/wiki/Ag-gag gives a reasonable background by the looks of it.


And yet their statement makes perfect sense to me.

Caching and lower-level calls are generic solutions that work everywhere, but they are also generally the last and worst way to optimise (which is why they need such careful analysis, since they so often have the opposite effect).

Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor, of course; as a rule of thumb it’s probably still wise to measure your improvements, but if you manage to delete an O(n^2) loop then you really don’t need a profiler to tell you that you’ve made things better.
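A contrived sketch of the kind of change I mean, where the win is obvious without a profiler:

    use std::collections::HashSet;

    // O(n^2): compare every pair.
    fn has_duplicate_quadratic(items: &[u64]) -> bool {
        for i in 0..items.len() {
            for j in (i + 1)..items.len() {
                if items[i] == items[j] {
                    return true;
                }
            }
        }
        false
    }

    // O(n): one pass with a hash set; `insert` returns false on a repeat.
    fn has_duplicate_linear(items: &[u64]) -> bool {
        let mut seen = HashSet::with_capacity(items.len());
        items.iter().any(|x| !seen.insert(x))
    }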


Which is exactly what I said: I am only arguing against their use of "intuitively".


Given the web’s much wider remit than PDF, it has support for accessibility tools and much better non-visual handling than PDF does, so the comparison isn’t entirely fair, I think. If a website doesn’t handle Lynx well, there’s a good chance it doesn’t handle accessibility well either.


“Newfound”? People have been fighting this debate for decades. I remember having this exact argument at school 20 years ago; back then the price arguments were bullshit, but people didn’t care enough about the environment, so there wasn’t any pressure to change.

Nowadays the price arguments are… complex. But for the first time people actually care enough about the environment that nuclear is no longer competing poorly with coal (except in Germany).

The exact maths of comparing pricing is complicated, given that energy storage costs vary so much depending on the inputs (try looking up storage costs for a 100% solar/wind grid during a once-in-a-decade lull; it’d make nuclear look great, but obviously for a slightly mixed grid and more typical conditions, storage might be reasonably priced versus nuclear).

Anyway, I’m only mildly interested in nuclear now that it’s just a sideshow to renewables, but I think it’s far from being a slam dunk either way. If some country or politician is more interested in nuclear, fair play to them, I say go for it. We’re not in a comfortable position right now, so any movement away from fossil fuels is a win regardless of where we end up (within reason, of course).

No doubt the debate will only be resolved once and for all when fusion turns up and actually makes fission genuinely irrelevant (and even then, fission might be cheaper for quite a while).


Or just accept fossil fuels, biofuels, hydrogen or hydrogen derivatives for the “once-in-a-decade lull”.

We need to solve climate change; that means maximizing our impact at each step along the way.

