Hacker News | osa1's comments

Is blink an interpreter for x86_64 instructions, or does it compile basic blocks to the host architecture?

I had a look at the source code but I'm not sure how it works. It looks a bit too small (50 kloc C + 6.6 kloc headers) to have code generators for all of the supported host architectures.


It's an interpreter, but it does support JIT to x64 and arm. There are some details here: https://github.com/jart/blink/?tab=readme-ov-file#technical-...

On the x64-playground website it's just running as an interpreter, inside WebAssembly.


Good stuff, but I don't use the follow feature on Spotify. Can it use number of plays of songs/artists to make the playlists?


Thanks!

Right now it's not possible, but I'll put it on my list of features to add. Unfortunately, the play history Spotify provides is very inaccurate and incomplete - so suggestions based only on that would be quite limited :( It's the same issue with the statistics; they're best-effort.

Having said that, there is another feature you could use: if you have or follow any playlists, you can include artists from them. Make sure to have the `Index Playlist Artists` option enabled (it gets enabled automatically if you follow <100 artists) and tick the `Include playlist artists` setting when creating your playlists.


Thanks for the response.

Another question: instead of getting my follows from my Spotify, could it let me type the artists I'm interested in?

I really want to use it (I'm also not happy with Spotify's recommendations), but my follow list is mainly for podcasts. Maybe just letting the user enter the artist names (instead of getting them from Spotify follow lists) would be easier to support?


Not currently, no - I might add it at some point, but I feel like the Spotify UI is better tailored for that, so I'm not sure.

The current functionality revolves around genres, and artists are derived from those selections. There are additional filters for album/track release dates, and you can exclude genres or specific artists - but it all comes from your library, not from the whole Spotify pool. That was a deliberate decision: my gripe was that I have a massive library and wasn't listening to its entirety.

You can achieve somewhat similar functionality by creating a playlist and adding a single song from any artist you want to that playlist.

As to the following list - the podcast and artist libraries are not the same, and you access them in different places in Spotify. If you click on your profile and go to Following, you'll only see artists/friends. Moreover, they are behind separate APIs/access scopes, and I only scrape the artists you follow.

If you want, give it a go - you might find it useful. You can delete your account at any time; I don't keep any of your data once you do.


I don't understand. The link could've come from anywhere (for example from a HN comment). How does just clicking on it give your package credentials to someone else? Is NPM also at fault here? I'd naively think that this shouldn't be possible.

For example, GitHub asks for 2FA when I change certain repo settings (or when deleting a repo etc.) even when I'm logged in. Maybe NPM needs to do the same?


OP entered their credentials and TOTP code, which the attacker proxied to the real npmjs.com

FWIW npmjs does support FIDO2 including hard tokens like Yubikey.

They do not force re-auth when issuing an access token with publish rights, which is probably how the attackers compromised the packages. iirc GitHub does force re-auth when you request an access token.


> They do not force re-auth when issuing an access token with publish rights, which is probably how the attackers compromised the packages

I'm surprised by this. Yeah, GitHub definitely forces you to re-auth when accessing certain settings.


As OC mentioned elsewhere, it was a targeted TOTP proxy attack.


So, he clicked the link and then entered his correct TOTP? How would manually typing the URL instead of clicking the link have mitigated this?


They wouldn't have manually typed the exact URL from the email, they would have just typed in npmjs.com which would ensure they ended up on the real NPM site. Or even if they did type out the exact URL from the email, it would have made them much more likely to notice that it was not the real NPM URL.


By CPS do you mean lightweight threads + meeting point channels? (i.e. both the reader and writer get blocked until they meet at the read/write call) Or something else?

Why is CPS better and lower level than async/await?


Async/await tends to be IO-bound. In Zig's case, how can I know what to use and when? Say I do a DB call, that's clearly IO, and later do a heavy CPU thing - now how do I know that the CPU thing doesn't block inside my async IO thing?

I guess it's pure luck whether the IO implementation can handle both, but who knows?


> how do i know that the CPU thing does not block inside my async io thing?

I think it's common sense not to interleave IO with long-running CPU work, hence sans-IO.

If you want to go that route, we already have solutions: goroutines and beam processes.


This was my point. With Go it doesn't matter: I can do both IO and CPU work with Go's concurrency model (CSP), but with async/await I usually cannot. I assume this is not something Zig is planning on, as it seems to be all about IO.
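
To make the point concrete, here's a minimal, hypothetical Go sketch (function names are made up for illustration): the runtime preempts and schedules goroutines across OS threads, so a CPU-heavy goroutine doesn't stall a concurrent IO-style one.

    package main

    import (
        "fmt"
        "time"
    )

    // cpuHeavy burns CPU; the Go scheduler preempts it so other
    // goroutines keep making progress.
    func cpuHeavy(done chan<- int) {
        sum := 0
        for i := 0; i < 50_000_000; i++ {
            sum += i % 7
        }
        done <- sum
    }

    // ioLike simulates a blocking IO call (e.g. a DB query).
    func ioLike(done chan<- string) {
        time.Sleep(50 * time.Millisecond)
        done <- "db result"
    }

    func main() {
        cpu := make(chan int)
        io := make(chan string)
        go cpuHeavy(cpu)
        go ioLike(io)
        // Neither goroutine blocks the other; we just wait for both.
        fmt.Println(<-io, <-cpu)
    }

Same calling code either way - the caller never has to know which kind of work is behind the channel.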


Because it allows multiple topologies of producers and consumers.


No idea what that means.. Do you have a concrete example of what CPS allows and async/await doesn't?


I believe async/await means you have a single consumer (caller) and a single producer (callee) and only a single value will be produced (resolved).

With CPS you may send and receive many times over to whomever and from whomever you like.

In JavaScript you may write..

    const fetchData = async () => {
        // snip
        return data;
    }

    const data = await fetchData();
And in Go you might express the same like..

    channel := make(chan int)

    go func() {
        // snip
        channel <- data
    }()

    data := <-channel
But you could also, for example, keep sending data and send it to as many consumers as you like..

    go func() {
        for { // Infinite loop:
            // snip
            channel1 <- data
            channel2 <- data
            channel3 <- data
        }
    }()
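
A runnable version of that multi-consumer sketch might look like this (the channel names and produced values are invented for the example):

    package main

    import "fmt"

    func main() {
        channel1 := make(chan int)
        channel2 := make(chan int)

        // One producer sends each value to two consumers.
        go func() {
            for data := 1; data <= 3; data++ {
                channel1 <- data
                channel2 <- data
            }
            close(channel1)
            close(channel2)
        }()

        // Two independent consumers, running concurrently.
        done := make(chan bool)
        go func() {
            for v := range channel1 {
                fmt.Println("consumer 1 got", v)
            }
            done <- true
        }()
        for v := range channel2 {
            fmt.Println("consumer 2 got", v)
        }
        <-done
    }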


I don't have too much experience in GUIs and I've been looking at iced and gpui recently for a new GUI application.

iced has some nice looking apps written using it, but they all seem "laggy" on my high-end Linux box somehow. I'm wondering if this is a limitation of the immediate mode GUIs, or something related to my system, or an issue in the apps?

For example, drag-and-drops or selecting text with mouse lag behind the mouse cursor.

I'm wondering if I'm not holding it right somehow?


iced is not immediate mode

which apps are laggy? iced is fast AF. try running the game of life example in release mode and crank the speed to max. there is literally no lag

could be something wrong with your GPU. join the Discord and we'll help you debug.

EDIT: I'm told you could be having vsync issues. try `ICED_PRESENT_MODE=immediate`


Exactly. Another case where this happens is with credit/point based systems for things like settlement/citizenship that effectively allows governments to discriminate freely based on vague criteria.


I think point-based systems are the fairest and least arbitrary, since points are usually awarded for things like age, degree, and language proficiency. That's the least discriminatory way to steer immigration.


From what I've read - but have not looked into myself - Australia has been using this system for some time and very much wants to move on from it, as it has not worked well in practice.


https://en.liberapay.com/explore/recipients - the largest is Liberapay itself. Second largest is the "infosec.exchange" Mastodon instance.


infosec.exchange was first when I made that comment. I guess this HN post helped get a lot more support for Liberapay.


Absolutely not a bad thing! I actually joined in myself, I'm convinced this is the best way to process donations!


Personally I disagree with the technical stack, but I love the project and the people behind it so I too wanna support it as much as possible.

It's hard to be "the Wikipedia of X", but once a tool has achieved Wikipedia status, competitors have basically no chance. A truly crowd-sourced, people-funded solution is irreplaceable once it's mature and fully established.


The article doesn't mention rust-analyzer (the language server); I wonder how much of this infrastructure is also used by rust-analyzer?


(I work on rust-analyzer)

We don't share the query infrastructure in rust-analyzer, but rust-analyzer is a query-driven system. We use Salsa (https://github.com/salsa-rs/salsa/), which is strongly inspired by rustc's query system, down to the underlying red/green algorithm (https://rustc-dev-guide.rust-lang.org/queries/incremental-co...), hence the name: Salsa.
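
The core idea behind such query systems can be sketched roughly like this - a toy, hypothetical Go illustration (not Salsa's or rustc's actual design): query results are memoized with revision stamps, and a cached result is reused ("green") whenever the inputs it depends on haven't changed since it was last verified.

    package main

    import "fmt"

    type memoEntry struct {
        value      string
        verifiedAt int
    }

    // DB is a toy revision-stamped database: inputs bump the global
    // revision, and derived queries recompute only when stale.
    type DB struct {
        revision int
        inputs   map[string]string
        changed  map[string]int // revision at which each input last changed
        memo     map[string]memoEntry
        computes int // counts actual recomputations, for demonstration
    }

    func NewDB() *DB {
        return &DB{
            inputs:  map[string]string{},
            changed: map[string]int{},
            memo:    map[string]memoEntry{},
        }
    }

    func (db *DB) SetInput(key, value string) {
        db.revision++
        db.inputs[key] = value
        db.changed[key] = db.revision
    }

    // LenQuery is a trivial derived query over one input. If the input
    // hasn't changed since the memo was verified, the cached value is
    // reused without recomputing ("green").
    func (db *DB) LenQuery(key string) int {
        if m, ok := db.memo[key]; ok && db.changed[key] <= m.verifiedAt {
            return len(m.value)
        }
        db.computes++
        v := db.inputs[key]
        db.memo[key] = memoEntry{value: v, verifiedAt: db.revision}
        return len(v)
    }

    func main() {
        db := NewDB()
        db.SetInput("file.rs", "fn main() {}")
        fmt.Println(db.LenQuery("file.rs")) // first call: recomputes
        fmt.Println(db.LenQuery("file.rs")) // cached
        db.SetInput("other.rs", "mod x;")   // unrelated change
        fmt.Println(db.LenQuery("file.rs")) // still green: cached
        fmt.Println("recomputations:", db.computes)
    }

The real red/green algorithm additionally tracks per-query dependency edges and re-validates them transitively; this sketch only captures the "reuse if nothing you read changed" intuition.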


Will rust-analyzer ever be safe to run on untrusted code?


let's see when the first vi plugin for it will come out..


quick tip: d$ is the same as D

