Hacker News | n42's comments

Regardless of the fact that Apple’s laptops were shipping with Thunderbolt years before they were “forced” to add USB-C ports to iPhones, or that Thunderbolt and USB-C are orthogonal, Apple was involved in the creation of Thunderbolt in the first place.

And yet they're still producing and selling the iPhone 14 Plus 512GB on apple.com for £1000 in 2025, with USB 2.0 speeds from the year 2000.

The iPhone 16 is also USB 2.0 (not the Pro though). While it's kinda crummy for a phone of its price, it's not alone. A lot of lower end Android phones/devices are still USB 2.0 with C ports.

I just started using it when 1.0 launched. The stock configuration is basically perfect for me, with a few minor tweaks and a theme:

    confirm-close-surface = false
    macos-titlebar-style = tabs
    theme = IR_Black


> I am really surprised almost no one is doubling down on something like this.

I've thought a lot about this – every time a new one is posted. I wish we could live in a world where this is what STEM education looks like. I think that, ultimately, it's just very high labor cost, and edtech is not known for being highly lucrative.

Bartosz does these as a labor of love, and the world is better off for it.


this blog, its author, and the work are being idolized everywhere in this thread as artisanal and the peak of craft.


This is cylindrical, which the author says is different


genuine question: how much can the frontend really impact compile times at the end of the day? I would guess most of the time spent compiling is on the backend, but IANA compiler engineer


> How much can the frontend really impact compile times?

Your question is addressed in this blog post: Faster compilation with the parallel front-end in nightly (https://blog.rust-lang.org/2023/11/09/parallel-rustc.html). In the example shown there:

- Serial took 10.2 seconds in the frontend and 6.2 seconds in the backend.

- Parallel took 5.9 seconds in the frontend (-42%) and 5.3 seconds in the backend (-14.5%).

So to answer your question, parallelising the frontend can have a substantial impact.

You are right that the frontend is only a subset of the total compilation time - backend and linking time matter too. Happily, those are being worked on as well!

There is an alternate backend called cranelift (https://bjorn3.github.io/2024/11/14/progress-report-nov-2024...) that compiles faster than LLVM, at the cost of producing slower binaries. That's a good trade-off for debug builds.
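Per the linked progress report, opting into Cranelift for debug builds looks roughly like this (nightly-only at the time of writing, and assumes you've run `rustup component add rustc-codegen-cranelift-preview`):

```toml
# Cargo.toml
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
```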

And there's work on changing the default linker to lld (https://blog.rust-lang.org/2024/05/17/enabling-rust-lld-on-l...).
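Until that lands by default, a common way to opt into lld yourself is via rustflags (this assumes lld is installed and a Linux target; adjust the target triple for your platform):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```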

We may see all 3 efforts land in the next year :)


One obvious example of this would be C++, where a smarter frontend that doesn't do textual inclusion of a million lines would significantly improve compile times.


But that's a low-hanging-fruit "optimization", no? Once you get around it, and this has been the case forever, the bottleneck is in backend code generation. So if Rust has already solved this, the area where they can improve most is the backend?

Most C++ builds I have worked with, all of them multi-million-line codebases, were actually bottlenecked by high memory pressure, in part due to heavy template use and symbol visibility.


I think the history of C++ implementations shows that it's not low hanging fruit, it's a huge effort to implement and the payoffs aren't game changing.


Modules? Yeah, I don't expect them to be a game changer either. Exactly because of the reasons I mentioned.


that makes sense. are Rust's macros handled in the frontend? if so, I could see how multithreading could have a dramatic impact


this brings me joy.


does anyone understand how this fits in with the weird mess of SPIR-V/WGSL/Vulkan/GLSL/HLSL?

is this supposed to take its place as the officially blessed frontend language for SPIR-V? or is this just another standard attempting to replace the other standards (insert XKCD)?


Slang is a language largely derived from HLSL. It's a solid leap in capability, with more powerful features like generics. As a language, it brings shader languages out of the stone age of plain procedural C and adds tools to help make writing composable shader code less awful.

Slang the compiler is also very cool because it can directly target multiple different languages. Slang can compile to (among other things) HLSL, GLSL, SPIR-V, MSL and WGSL directly. The next closest game in town is DXC which compiles HLSL to DXIL or SPIR-V, with most game engines using a soup of SPIRV-Cross and other translators to convert the SPIR-V to other languages like MSL.

It's shaping up to be a solid choice for a single-source shader pipeline. Especially if you care about being portable to MSL as the current 'state of the art' for turning HLSL into MSL is a fragile chain of several tools.


> Largely as a language it brings shader languages out of the stone ages of plain procedural C

It's not the first to do so though, Metal's shading language is C++14 with some restrictions and some enhancements (which IMHO is overkill though, except for the ability to stamp out different specializations from the same template code).


https://github.com/shader-slang/slang

It's kind of both. It can generate high-level shader languages as well as SPIR-V, according to the docs.

I interpret this announcement as more about the governance of the open source project than brand new anything. The project has been around for years and there are even some SIGGRAPH papers in this thing.


Yeah the news here is that Khronos is finally throwing their weight behind something modern as the "default" Vulkan shader language, as GLSL has been stagnant for years.


Apple is currently my favorite for content, so to each their own I guess.


What content? They have almost none. Granted, with the segmentation in the streaming market, none of the big players have much going on, but at least you can usually find something to watch on the others. Once you catch up with the 2 or 3 interesting ones on Apple TV+, there is basically nothing to watch.


This includes movies, but browsing their "originals" list, so far I (or my wife) have enjoyed (in no particular order):

  Slow Horses
  Lessons in Chemistry
  Severance
  Ted Lasso
  Shrinking
  Mythic Quest
  Roar
  CODA
  The Problem with Jon Stewart
And we are looking forward to checking out, in no particular order:

  Palm Royale
  For All Mankind
  Criminal Record
  Killers of the Flower Moon
  Foundation
  Strange Planet
  The Morning Show
  Greyhound
They're not all knockout hits. Overall we just feel safer trying something new there. If we don't like it, it's usually just because it's not for us, and less because it feels like an algorithmic, soulless cash grab.


They have much less content, but if you value quality over quantity, I don't think that's an issue.

When I look at the Apple TV+ catalog, I can see myself giving an honest shake to _most_ of their content. I simply can't say the same for other services. I understand why they are flooding the zone if they have the resources, but there's a finite amount of content one human can watch.

If Apple releases 50 shows and 80% are up my alley, I get the same amount of enjoyable content as Netflix releasing 400 shows but only 10% are worth my time.


I think my issue is that it's not 80% up my alley. I see a bunch of decent looking shows that are probably good for their audience, I'm just not their audience. But even for stuff where I'm their audience, they often feel to be lacking something.


What do you consider the purpose of a UUID?


Asynchronous unique ID generation.


You can have both asynchrony and sequence by encoding thread ID in the UUID too, and make the sequence a thread local state.
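A minimal sketch of that idea in Python. The field widths and layout here are my own choice for illustration, not any standard UUID version: a millisecond timestamp, the OS thread id, and a thread-local counter packed into a 128-bit value.

```python
import threading
import time
import uuid

_local = threading.local()  # per-thread state, no locking needed

def thread_seq_uuid() -> uuid.UUID:
    """Build a 128-bit ID from timestamp + thread id + thread-local counter."""
    seq = getattr(_local, "seq", 0)
    _local.seq = (seq + 1) & 0xFFFF                 # 16-bit per-thread counter
    ts = int(time.time() * 1000) & ((1 << 48) - 1)  # 48-bit ms timestamp
    tid = threading.get_ident() & ((1 << 48) - 1)   # 48-bit thread id
    value = (ts << 80) | (tid << 32) | (_local.seq << 16)
    return uuid.UUID(int=value)
```

IDs generated on the same thread are strictly ordered by the counter, and different threads never collide on the thread-id field (modulo the reboot/reuse caveats raised downthread).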



Then, just use thread ID and integer sequence pairs instead of trying to stuff them into an arbitrary binary format.


Thread IDs get repeated across reboots. Integer sequence may also repeat in a distributed scenario, unless you want a massive bottleneck. You do need other stuff (timestamps, random number, etc.).


Yes, but that’s what parent proposed? :shrug:

