
This is already the reality in most of the EU and the UK. There are 300kW charging stations all over the place, and a decent EV can recharge in 20 minutes.

I actually prefer this to refuelling, because filling up takes 5+ minutes of my time, while recharging is unattended: I plug in in seconds and go get a coffee or take a break in the meantime. Chargers also tend to be next to nicer places, near food courts and other retail, which is handy, so charging doesn't need to be a separate stop on road trips.


Rust can't prevent crates from doing anything. It's not a sandbox language, and can't be made into one without losing its systems programming power and compatibility with the C/C++ way of working.

There are countless obscure holes in rustc, LLVM, and linkers, because they were never meant to be a security barrier against the code they compile. This doesn't affect normal programs, because the exploits are impossible to write by accident, but they are possible to write on purpose.

---

Secondly, it's not 1000 crates from 1000 people. Rust projects tend to split themselves into dozens of micro-packages. It's almost like splitting code across multiple .c files, except they're visible in Cargo. Many packages are from a few prolific authors and rust-lang members.

The risk is there, but it's not as outsized as it seems.

Maintainers of your distro don't review the code they pull in for security, and the libraries you link to have their own transitive dependencies from hundreds of people; you usually just don't see them: https://wiki.alopex.li/LetsBeRealAboutDependencies

Rust has cargo-vet and cargo-crev for vetting dependencies. It's actually much easier to review the code of small single-purpose packages.
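
For instance, a minimal cargo-vet workflow looks roughly like this (a sketch; see the cargo-vet book for the details):

    cargo install cargo-vet
    cargo vet init   # records the current dependency tree as an exempted baseline
    cargo vet        # from then on, fails when new/updated deps lack an audit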


There are two different attack surfaces: compile time and runtime.

For compile time, there’s a big difference between needing the attacker to exploit the compiler vs. literally just using the standard API, both in terms of difficulty of implementation and ease of spotting what should look like fairly weird code. And there’s a big difference between runtime Rust and compile-time Rust: there’s no reason cargo can’t sandbox build.rs execution (not what josephg brought up, but honestly my bigger concern).
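
To illustrate why sandboxing build.rs matters: it's an arbitrary program that cargo compiles and runs on the build machine, with the full privileges of whoever invoked the build. A deliberately simplified, hypothetical sketch of the exposure:

    // build.rs ships with a crate and runs automatically during `cargo build`
    fn main() {
        // Nothing stops it from reading files far outside the project...
        let home = std::env::var("HOME").unwrap_or_default();
        let key = std::fs::read_to_string(format!("{home}/.ssh/id_ed25519"));
        // ...or exfiltrating them, tampering with other build outputs, etc.
        // A sandbox would confine exactly this step.
        let _ = key;
    }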

There is a legitimate risk of runtime supply-chain attacks, and I don’t see why you wouldn’t want facilities within Rust to help you contractually enforce what code is and isn’t able to do when you invoke it, as a way to enforce a top-level audit. That Rust today doesn’t support this doesn’t make it a bad idea, or one that can’t be elegantly integrated into today’s Rust.


I agree there's value in forcing exploits to be weirder and more complex, since that helps reviewers spot them in code reviews.

But beyond that, if you don't review the code, then the rest matters very little. Sandboxed build.rs can still inject code that will escape as soon as you run your tests (I don't believe people are diligent enough to always strictly isolate these environments, despite the inconvenience). It can attack the linker, and people don't even file CVEs for linkers, because linkers are expected to get only trusted inputs.

Static access permissions per dependency are generally insufficient, because an untrusted dependency is very likely to find some gadget to use by combining trusted deps, e.g. use trusted serde to deserialize some other trusted type that will do I/O, and such indirection is very hard to stop without a fully capability-based sandbox.

But in Rust there's no VM to mediate access between modules or the OS, and isolation purely at the source code level is evidently impossible to get right, given the complexity of the type system and LLVM's love for undefined behavior. The soundness holes are documented all over the rustc and LLVM bug trackers, including some WONTFIXes. LLVM cares about performance and compatibility first, including the concerns of non-Rust languages. "Just don't write weirdly broken code that insists on hitting a paradox in the optimizer" is a valid answer for LLVM, which was never designed to be a security barrier against code that is untrusted yet expected to have maximum performance and direct low-level hardware access at the same time.
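
To make the "gadget" point concrete, here's a deliberately simplified, hypothetical sketch (assumes serde with its derive feature): the malicious dep never performs I/O itself, it only shapes data that trusted code later acts on:

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Config {
        log_path: std::path::PathBuf,
    }

    fn apply(cfg: Config) {
        // Trusted code performs the I/O. If an untrusted dep controls the
        // bytes being deserialized, it controls where this write lands,
        // without ever holding any "I/O permission" of its own.
        let _ = std::fs::write(&cfg.log_path, b"...");
    }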

And that's just for sandbox escapes. Malware in deps can do damage in the program without crossing any barriers. Anything auth-adjacent can let an attacker in. Parsers and serializers can manipulate data. Any data structure or string library could inject malicious data that will cross the boundaries and e.g. alter file paths or cause XSS.


> the exploits are impossible to write by accident, but they are possible to write on purpose.

Can you give some examples? What ways are there to write safe Rust code & do nasty things, affecting other parts of the binary?

Is there any reason bugs like this in LLVM / rustc couldn't simply be fixed as they're found?


https://github.com/Speykious/cve-rs

They can be fixed, but as always, there’s a lot of work to do. The bug that the above package relies on has never been seen in the wild, only in handcrafted code written to invoke it, and so it’s less of a priority than other things.

And some fixes are harder than others. If a fix is going to be a lot of work but the bug is very obscure, the bug is likely to stick around for a long time.
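
For the curious, the core trick cve-rs builds on is the long-standing implied-bounds hole (rust-lang/rust#25860). A condensed version of the well-known snippet, which compiles on stable and forges a &'static reference in 100% safe code:

    static UNIT: &'static &'static () = &&();

    fn helper<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }

    fn expand<'a, T>(x: &'a T) -> &'static T {
        // This function-pointer coercion should be rejected, but isn't;
        // it's the unsound step that launders the lifetime.
        let f: fn(&'static &'a (), &'a T) -> &'static T = helper;
        f(UNIT, x)
    }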


Yes, true. But as others have said, there’s probably still some value in making authors of malicious code jump through hoops, even if it will take some time to fix all these bugs.

And the bugs should simply get fixed.


People have a finite amount of time and effort they can spend on making code correct. When the language is full of traps, spec gotchas, antiquated misfeatures, gratuitous platform differences, and fragmented build systems, a lot of effort is wasted just on managing all of that nonsense, which actively works against writing robust code, and it takes away from the effort to make a quality product beyond mere language-wrangling.

You can't rely on people being perfect all the time. We've been trying that for 50 years, and have only gotten an endless cycle of CVEs and calls to find better programmers next time.

The difference is in how the language reacts to the mistakes that will inevitably happen. It can react with "oops, you've made a mistake! Here, fix this", letting the programmer apply a fix and move on, shipping code without the bug. Or it can silently amplify the smallest mistakes in the least interesting code into corruption that causes catastrophic security failures.

When "concatenating strings securely" and "adding numbers securely" are things that exist, and things that require top-skilled programmers, you're just wasting people's talent on dumb problems.
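
As a trivial illustration of the difference, the operations that are CVE fodder in C are boring in Rust:

    // Illustrative only: growable strings and explicit overflow handling.
    fn greet(name: &str, a: u32, b: u32) -> Option<String> {
        let mut s = String::from("hello, ");
        s.push_str(name);            // no fixed-size buffer to overflow
        let sum = a.checked_add(b)?; // overflow is a handled case, not UB
        Some(format!("{s} ({sum})"))
    }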


About invariants generally: Rust wants to know whether the memory behind each pointer is immutable, mutable by Rust only, or could be mutated by something else while Rust holds a pointer to it. Rust also encodes which types can't be moved to another thread, and which pointers own their memory and need to be freed by Rust.

These are part of the type system, so they need to be defined precisely. The answer to these questions can't just be "it depends". If there are run-time or config-dependent conditions under which the same data is owned or not, or immutable or not, Rust has enums, unions, and unsafe escape hatches to guard access to it.
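
Roughly, this is how those questions map onto types (an illustrative sketch, not an exhaustive list):

    use std::cell::UnsafeCell;
    use std::rc::Rc;

    fn invariants() {
        let owned: Box<i32> = Box::new(1);  // Rust owns this and will free it
        let shared: &i32 = &owned;          // immutable while the borrow lives
        let cell = UnsafeCell::new(2);      // may be mutated through aliases
        let raw: *const i32 = cell.get();   // unsafe escape hatch to reach it
        let not_send = Rc::new(3);          // Rc<T> is !Send: can't cross threads
        let _ = (shared, raw, not_send);
    }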


It's possible to get slightly better compression by losslessly rearranging data in a JPEG (DC components are compressed row by row, and prefer fewer horizontal color changes).

However, here the author seems to accidentally fully recompress the images, and falls into the classic trap of "looks almost the same but the file is much smaller!"

That's what every lossy format does every time. They're designed to lose data that is hardest to see with the naked eye.

At the top end of the quality range, file sizes grow exponentially in proportion to quality (it's a natural consequence of allowing less and less data to be lost, and of approaching lossless compression, which has a hard limit on how effective it can be).

But conversely, this means that going from very high quality to still-pretty-high quality moves the file size along the exponential curve, and seems to give a dramatic reduction in file size. This isn't a trick; that's just how it works.

File size change is easy to measure, but visual quality change is difficult to quantify, so people disregard the visual change. In reality they're just moving a point along a curve, and recompression gets a worse curve (less quality for the same file size) than compressing the file at the lower quality from the start.


> However, here the author seems to accidentally fully recompress the images, and falls into the classic trap of "looks almost the same but the file is much smaller!"

Except they didn't quite do that: yes, they recompressed the image instead of using the lossless rotation that JPEG is capable of. However, they then compared a recompressed rotated image to a recompressed image that wasn't rotated, and noted there was still a significant size difference.

He also claims to have verified in GIMP that the two recompressed files were visually identical after rotating (I'm a little suspicious of that bit, since you wouldn't notice a tiny difference unless you used the "difference" layer mode and then manually amplified the minuscule differences with something like the curves tool).


Yes, this is the crux and the fun of my discovery! I was surprised that using sips to rotate the image resulted in a smaller size than using sips or ImageMagick to directly compress the image.

I’d encourage you to download the image from the link in the article and try it yourself if you have a Mac, and then compare them with GIMP, because it’s very possible I didn’t do a perfect job with that.


See my other comment for the result from ImageMagick, which shows little difference regardless of the orientation. For sips, there is a possibility that chroma subsampling impacted the result (because there are two different scaling factors for each axis), and you are technically comparing different images.

And every CVSS score is 9.8, because it's designed to never underestimate the potential risk, no matter how absurdly unlikely, rather than to be realistic about the actual risk.

CVSS is not really meant to measure risk; it primarily measures the severity of technical vulnerabilities. It should be used in conjunction with other factors, such as system exposure and threat sources, to determine the probability of exploitation. This should then be combined with impact and costing data to fully assess the risk.
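
For reference, the ubiquitous 9.8 corresponds to this base vector, which by design says nothing about how likely exploitation actually is:

    CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H  ->  9.8 (Critical)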

Regulatory requirements also need to be contextualized similarly. If they become burdensome, efforts should focus on reducing the exposure of your systems to those risks.

That said, patch and configuration management should be second nature and performed continuously, so that when a real issue arises, you're prepared and not worried about your environment falling over because you're unsure how it will respond to an update, or whether your backups will restore properly (those are risks as well).

I saw more than a few organizations struggle with log4j because they only patched server systems when a vulnerability was publicly exposed, and a Metasploit exploit was available.


When you have a connected grid, it needs to ensure the entire grid operates on the same frequency. Production and consumption affect the frequency, so they have to be perfectly balanced. This is a hard problem when it needs to be done at the scale of entire countries or continents.

Traditionally this has been managed, basically in an analog way, by having huge fossil- or nuclear-powered steam turbines that spin at the desired frequency. Apparently a spinning mass is so good at stabilizing grid frequency that massive flywheels are being added to the grid to replace the role of the fossil turbines.
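
The physics behind this is roughly the standard swing equation for the aggregate rotating machines (notation mine):

    J * ω * dω/dt = P_generated - P_consumed

where J is the combined rotational inertia and ω the angular frequency. Any imbalance accelerates or decelerates the rotors, and with them the grid frequency; a big spinning mass (big J) makes the frequency drift slowly, buying time for controls to react.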

Our old power grids make solar and wind production merely follow the existing grid frequency rather than dictate one, so at least in our grids pure solar couldn't work. I don't know whether there's a solid technical reason for that, or whether that's just how it's been set up.


This proposal is much simpler: it encodes slices of the image with filters and zlib block boundaries that make them naturally independent and obviously correct, and just adds a chunk saying "yeah, it's safe to use that".

The restart marker proposal just adds more configurability on top of that, which IMHO isn't needed, and makes the implementation more fiddly. And I'm not sure why they're talking about recovering from broken gzip, when a spec-conforming encoder shouldn't generate broken gzip in the first place.


Restart markers aren't necessarily much more configurable than this proposal. Segmentation type 1 is basically identical to this proposal, except that the fixed number of scanlines in each segment is computed slightly differently. Segmentation type 0 just adds an offset array on top of that, but I can't tell if it's just to enable seeking, or if it's to allow multiple IDAT chunks within a single segment. If the latter is the case, I guess that would make it somewhat more complicated, though the author has suggested limiting the proposal to one type or the other.

Meanwhile, the error-recovery part isn't about a broken DEFLATE stream, it's about avoiding specially-crafted files that would produce one image if decoded using the restart markers, but another image if decoded without them. To prevent the usual issues with different tools yielding different results, this is made an error. But the PNG spec suggests that errors in ancillary chunks shouldn't ever be fatal:

> However, it is recommended that unexpected field values be treated as fatal errors only in critical chunks. An unexpected value in an ancillary chunk can be handled by ignoring the whole chunk as though it were an unknown chunk type. [0]

Therefore, the extension tells decoders to restart with sequential decoding in that case, instead of bailing out entirely.

[0] https://www.w3.org/TR/2003/REC-PNG-20031110/#13Error-checkin..., unchanged in https://www.w3.org/TR/2024/CRD-png-3-20240718/#13Error-check...


Animated AVIF is widely supported, and can represent GIFs losslessly.

BTW, Chrome vetoed the idea of supporting muted video files in `<img>` like Safari does, so we've got this silly animated AVIF that is a video format that has been turned into a still image format that has been turned back into a video format, which takes regular AV1 video data but packages it in a slightly different way, enough to break video tooling.


> Animated AVIF is widely supported, and can represent GIFs losslessly.

Doesn't lossless AVIF have terrible compression ratios?


You'd use lossless blocks for really simple pixel art that the GIF was made for. For GIFs made from video clips, you can apply regular video compression and decimate their size.

JPEG XL is pretty cheap to decompress.

Advancements in compression algorithms have also come with advancements in decompression speed. New algorithms like tANS both compress well and have very fast implementations.

And generally smaller files decompress faster, because there's just less data to process.


But how does the ecological benefit of space savings compare with the extra power consumption from compressing and decompressing?

And will people take more pictures because of the space savings leading to more power consumption from compressing and decompressing the photos?

Is this just greenwashing by Apple?

But I have now decided to take my photos off Apple's servers, as well as to take way, way fewer photographs, if any. The climate of my near future is way more important than a photograph of my cat.


You have an invalid assumption that extra power is spent on better compression or decompression. It generally takes less energy to decompress a better-compressed file, because the decompression work is mostly proportional to the file size. Energy for compression varies greatly depending on codec and settings, but JPEG XL is among the fastest (cheapest) ones to compress.

Secondly, you have an invalid assumption that the amounts of energy spent on compression have any real-world significance. Phones can take thousands of photos while running off a tiny battery, and most of that energy is spent on the screen. My rough calculation is that taking over a million photos takes less energy than there is in a gallon of gas.
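
To put numbers on that (back of the envelope, assuming a generous ~10 J of capture-plus-encode energy per photo):

    1 gallon of gasoline ≈ 33.7 kWh ≈ 1.2e8 J
    1,000,000 photos × 10 J = 1e7 J ≈ 2.8 kWh

so even a million photos cost well under a tenth of a gallon.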

Apart from that, compression cost is generally ignored entirely, because files are created only once, but viewed (decompressed) many, many times over.

Smaller files save storage space. Storage has costs in energy and hardware.

Smaller files are quicker to transfer, and transfer itself can be more energy intensive than compression. It's still small in absolute numbers.


He has an invalid assumption about RAW as well.

No, I did not. You did not understand the question I was asking.
