Possibly that's mostly out of familiarity with the language? The only thing in your example that does things in-place (and thus looks out of place) is Sort(), but that's the way I'd expect it to work anyway. If you take that one away from the list, all of them behave similarly to each other and return the modified slice:

    slices.Compact(s)       // modified slice in the return value is ignored
    s  = slices.Compact(s)  // s now points to the new changed slice
    s := slices.Compact(s)  // s already exists, this is not valid Go syntax.
    slices.Delete(s, …)     // modified slice in the return value is ignored
    s = slices.Delete(s, …) // s now points to the new changed slice

EDIT: Would prefer people not to downvote actual discussion. In this case there was indeed a good argument made in the reply that these also modify the underlying slice, but it's not like I was being rude in the comment.


> The only thing in your example that does things in-place (and thus looks out-of-place)

That is not really correct, and is much of the issue: Compact and Delete operate in-place on the backing buffer while copying the slice metadata. This is the same issue as the append() builtin and why it's so fraught (before you even get into all the liveness stuff).
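
A quick toy sketch of what that means in practice (assuming the usual fmt and slices imports; note that newer Go versions also zero the abandoned tail elements):

    s := []int{1, 1, 2, 2, 3}
    t := slices.Compact(s)
    fmt.Println(t) // [1 2 3]
    // s still has length 5, but Compact rewrote its backing array in place,
    // so it now reads [1 2 3 2 3] (or [1 2 3 0 0] on newer Go versions)
    // instead of the original [1 1 2 2 3]
    fmt.Println(s)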

> s := slices.Compact(s) // s already exists, this is not valid Go syntax.

    s := []int{}
    if true {
        s := slices.Compact(s)
        _ = s
    }


Ah, my bad for not reading the article before the comments :)

As that's the case, it's indeed hard to defend it. Data-structure-wise it still kind of makes sense, since as you mentioned the slice metadata is changed, but the fact that it basically makes the old slice invalid is rather annoying.

For the := example, sure, it's a bit of a far-fetched example and likely would not pass any code review, but there are cases where shadowing is indeed valid. So is the `s := slices.Compact(s)` in this example not working as expected then?

EDIT: looking at another reply to the parent, the := example is likely trying to point out that using := also modifies the slice, and thus the original slice is broken when one tries to use it in the original scope. That's really bad practice, but true indeed.
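
A contrived sketch of that failure mode (hypothetical values, assuming the usual fmt and slices imports):

    s := []int{1, 1, 2}
    if true {
        s := slices.Compact(s) // shadows the outer s
        _ = s
    }
    // the outer s still has length 3, but Compact rewrote its backing
    // array, so it no longer holds the original [1 1 2]
    fmt.Println(s)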


No, that's wrong. Multiple news sites are getting that wrong, but Apple's announcement is very clear that the fee applies even to the App Store:

> Core Technology Fee — iOS apps distributed from the App Store and/or an alternative app marketplace will pay €0.50 for each first annual install per year over a 1 million threshold.

source: https://www.apple.com/newsroom/2024/01/apple-announces-chang...
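
To put a rough number on it (my own arithmetic, not Apple's): an app with, say, 3 million first annual installs under the new terms would owe (3,000,000 - 1,000,000) × €0.50 = €1,000,000 per year, regardless of whether those installs came through the App Store or an alternative marketplace.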


It's opt-in:

> Developers can choose to adopt these new business terms, or stay on Apple’s existing terms. Developers must adopt the new business terms for EU apps to use the new capabilities for alternative distribution or alternative payment processing.

So you either choose the US/Global Business Model (what we know today), or the EU Business Model (what was announced today, with flexibility but high fees). You can distribute on the App Store either way.


At least based on my observations it's been common practice in ML papers for some years already: usually a GitHub-hosted project page and a repository with the same information are released first, and then the code lands in that repo at some point afterwards.

I don't feel that's an issue. A lot more people get to see what's happening on the bleeding edge than if the authors just released the paper without an accompanying demo page, and sooner than if they waited for the code to be ready for release. Of course one can argue that "they should just release whatever code they have instantly", but that's their choice if they want to clean it up, remove secrets, etc.


45 it/s (~0.1 s per image) on a 7900 XTX here, so a dedicated GPU is still an order of magnitude faster than the Macs, albeit at much higher power draw. Being only 10x slower while untethered is quite a nice outcome.


While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper as an example of "5-10x as performant".

https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuse for being double the price when it comes to Whisper inference, with the 7900 XTX being directly comparable to the 4080, albeit with higher power draw. To be fair it's not using ROCm but Direct3D 11, but for the performance/price argument's sake that detail is not relevant.

EDIT: Also, using CTranslate2 as an example is not great, as it's actually a good showcase of why ROCm is so far behind CUDA: it's all about adoption of the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then need additional effort to add ROCm support, which projects with a small number of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no one is working on getting ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )


> While I agree that it's much more effort to get things working on AMD cards than it is with Nvidia, I was a bit surprised to see this comment mention Whisper as an example of "5-10x as performant".

It easily is. See the benchmarks[0] from faster-whisper, which uses CTranslate2. That's 5x faster than the OpenAI reference code on a Tesla V100. Needless to say, something like a 4080 easily multiplies that.

> https://www.tomshardware.com/news/whisper-audio-transcriptio... is a good example of Nvidia having no excuse for being double the price when it comes to Whisper inference, with the 7900 XTX being directly comparable to the 4080, albeit with higher power draw. To be fair it's not using ROCm but Direct3D 11, but for the performance/price argument's sake that detail is not relevant.

With all due respect to the author of the article, this is "my first entry into ML" territory. They talk about a 5-10 second delay; my project can do sub-1-second times[1] even with ancient GPUs thanks to CTranslate2. I don't have an RTX 4080, but if you look at the performance stats for the closest thing (RTX 4090) the performance numbers are positively bonkers - completely untouchable for anything ROCm based. Same goes for the other projects I linked; lmdeploy does over 100 tokens/s in a single session with Llama 2 13B on my RTX 4090 and almost 600 tokens/s across eight simultaneous sessions.

> EDIT: Also, using CTranslate2 as an example is not great, as it's actually a good showcase of why ROCm is so far behind CUDA: it's all about adoption of the tech and getting the popular libraries to support it. Things usually get implemented in CUDA first and then need additional effort to add ROCm support, which projects with a small number of (possibly hobbyist) maintainers might not have available. There's even an issue in CTranslate2 where they clearly state no one is working on getting ROCm supported in the library. ( https://github.com/OpenNMT/CTranslate2/issues/1072#issuecomm... )

I don't understand what you're saying here. It, along with the other projects I linked here[2], is a fantastic example of just how far behind the ROCm ecosystem is. ROCm isn't even on the radar for most of them, as your linked issue highlights.

Things always get implemented in CUDA first (ten years in this space and I've never seen ROCm first) and ROCm users either wait months (minimum) for sub-par performance or never get it at all.

[0] - https://github.com/guillaumekln/faster-whisper#benchmark

[1] - https://heywillow.io/components/willow-inference-server/#ben...

[2] - https://news.ycombinator.com/item?id=37793635#37798902


I'm not familiar with the author, but it seems like they have a nearly identical post from 2021 as well: https://lunduke.substack.com/p/linux-foundation-spends-just-...


I was planning on mentioning Zoom as well. The Linux client especially is insanely bad; IIRC it also drew itself on top of everything.

My suggestion, on Linux at least, is to use the web client. Just take the URL, do a 's#/j/#/wc/join/#' on it, and open it in a browser of your choice. You'll need to copy the password manually, and sometimes it might require a captcha etc., but at least it's somewhat usable.
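
For example (with a made-up meeting ID), that substitution turns https://zoom.us/j/123456789 into https://zoom.us/wc/join/123456789.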


This works on MacOS as well (in Firefox or Chrome). No way in hell I’m installing Zoom on a Mac.


Can't speak for everyone, but since pretty much everyone seems to have unlimited data and the speeds aren't that bad, consumer behavior probably doesn't change at all whether you're on Wi-Fi or not.

That said, a 35 GB average still seems rather high. I'm interested in how much I'm using, but I only have usage data from the past four months, and those months aren't really representative of what I would call normal use.


According to my iPhone, I've used 7.7 GB since last December. Of course, I've been spending more time at home than usual, but even without a global pandemic, I'm mostly at home or at work, both where I have Wi-Fi and fixed Gigabit connectivity.

I'm definitely not the biggest consumer of data, but at least I don't have to change my browsing behavior at all whether I'm on Wi-Fi or not. There's also the luxury that whenever I'm out with my laptop, I just tether instead of trying to figure out how to connect to the local Wi-Fi.


Using only basic functionality that's easy enough to remember, I guess I'd go with something like

`echo -e "foo bar baz" | tr -s ' ' | rev | cut -d ' ' -f 1-2 | rev | awk '{print $2 " " $1}'`
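
If I'm reading my own pipeline right, that should print `baz bar`, i.e. the last two fields swapped.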

Everything except the awk part is something that I use all the time and is easy to type & remember.

To be honest I'd use `choose` if it was available everywhere, but for string manipulation I can't justify using nonstandard tools since they aren't always available.

Every now and then there are some new ones I actually start to use. For example, `ripgrep` mostly replaced `grep -R` for me some time ago; a lot of that has to do with the fact that if `rg` is not found I can fall back to normal grep and get the same result, just a bit slower.

I guess my point is that while I do appreciate innovation & making better tooling, the hard part is always getting the tool to where it's most needed.


Color me interested. I've been wanting to use IPFS for a similar use case, but there's always something that annoys me or documentation that's lacking.

While reading the Hyperdrive/Hyperswarm docs I actually managed to find everything I wanted to know without much effort (most of all, it seems like setting this up on a private network should be doable).

