
Oh man, soulseek brings back serious nostalgia for me. Is it still pretty active?


I've only been using it for a few years, but it's pretty great for finding and discovering music.


The languages listed (Pure Data, Max/MSP, Reaktor) are all visual languages where you connect up boxes with wires, similar to LabVIEW. I usually use the term "dataflow language" to refer to these kinds of languages, but I'm not actually sure whether there are also text-based dataflow languages. Maybe Faust (which is also focused on audio DSP)?


What you probably want is to place a virtual listener/receiver where the NPC is, so the engine uses the same rendering for the NPC that it does for the player. Then you put an envelope detector on that signal (rough sketch at the end of this comment) and you have an approximation of how much sound is reaching that NPC.

You could also render the player sound separately from environmental sounds. So the NPC only hears the player if their sound exceeds the environmental noise.

You could get real fancy and do some directional processing on the NPC so they'll head in the apparent direction of the sound.

It’s probably overkill though. You could probably do a much cheaper approximation and it would feel about the same to the player. It would be fun to test it out and see though!
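
Roughly what I have in mind, as a toy sketch (not any real engine's API; it assumes you can tap the player and environment audio rendered at the NPC's virtual listener as separate sample buffers):

    import math

    def one_pole_coeffs(sample_rate, attack_ms=5.0, release_ms=200.0):
        # Standard one-pole envelope-follower smoothing coefficients.
        attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
        release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
        return attack, release

    class NpcListener:
        # Envelope followers on the audio rendered at a virtual listener
        # placed at the NPC, with player and environment on separate buses.
        def __init__(self, sample_rate, threshold_db=-30.0):
            self.attack, self.release = one_pole_coeffs(sample_rate)
            self.threshold = 10.0 ** (threshold_db / 20.0)
            self.player_env = 0.0
            self.ambient_env = 0.0

        def _follow(self, env, x):
            coeff = self.attack if x > env else self.release
            return coeff * env + (1.0 - coeff) * x

        def hears_player(self, player_block, ambient_block):
            heard = False
            for p, a in zip(player_block, ambient_block):
                self.player_env = self._follow(self.player_env, abs(p))
                self.ambient_env = self._follow(self.ambient_env, abs(a))
                # The NPC notices the player only if their level clears an
                # absolute threshold and rises above the ambient noise floor.
                if self.player_env > self.threshold and self.player_env > self.ambient_env:
                    heard = True
            return heard

You'd call hears_player() once per audio block and feed the result into the NPC's AI; the attack/release times and threshold are placeholders you'd tune by ear.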


Sounds like Stephen Baxter's Manifold: Time. She's a squid, not an octopus, but same idea. Great book!


The status quo they describe is a printing process where material is deposited and then UV-cured, followed by some kind of scraping step. That seems like a hybrid of FDM and lithographic printing from a vat of goo a la FormLabs.

Is that a common process?


Industrial equipment uses this technique often, for different purposes like gluing. It might seem obvious, but the scraping is an artifact of the UV curing; it happens with SLA printers too. There's a reason the vat bottom is an FEP film: it has lubricating properties like PTFE, so each layer peels off easily. Also, fun fact, that's why Carbon3D printers are so damn fast: they have a semi-permeable film that lets oxygen through, which creates a thin layer of uncured, still-liquid resin between the film and the part, so they don't have to wait to peel or scrape anything.


I think if I were going to the trouble of dockerizing and isolating all my tools, I wouldn’t want to rely on someone else’s registry of dockerfiles.

This reminds me a lot of the cycle of sandboxing:

https://xkcd.com/2044/


Yeah, this is my understanding also. I don't understand how this is better than the publisher signing the content with their own key though, rather than relying on a special microphone.

I wish the article were clearer about what threat model they're trying to address.


> In our setting, we can compute a function that computes the hashes of the inputs and outputs the edited audio from the inputs. By revealing the hashes, we can be assured that the inputs match the recorded audio!

I don't understand how this works, or what exactly is being proven here. For instance, you could silence the given inputs and inject some other unrelated audio, so the fact that your output hash incorporates the input hashes doesn't seem very meaningful.

I figure I must be missing something here.


The editing can be checked. Basically the verifier is given the input hash, the editing function (e.g. noise removal), and the output. Using a zk-SNARK, the verifier can be convinced that yes, indeed, the output is the result of applying that edit to the input, and the input hash is the result of hashing that same input.
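
For what it's worth, the statement being proven is roughly this (a toy sketch in plain Python, not a real SNARK circuit; edit() here is just a stand-in for whatever fixed, public editing function is agreed on):

    import hashlib

    def edit(audio: bytes) -> bytes:
        # Stand-in for the agreed-upon, publicly known edit (e.g. noise removal).
        return audio

    def relation_holds(raw_audio: bytes, input_hash: bytes, edited_output: bytes) -> bool:
        # The prover convinces the verifier that both conditions hold for
        # some raw_audio, without revealing raw_audio itself.
        return (hashlib.sha256(raw_audio).digest() == input_hash
                and edit(raw_audio) == edited_output)

The verifier only ever sees input_hash, edited_output, and the edit function; the SNARK convinces them that relation_holds() is true for some private raw_audio.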


k: a super-terse language inspired by APL and Lisp

q: a slightly less terse language written in k, with similar semantics. The vendor (Kx) supports q, not k.

kdb+: a column-oriented database bundled with q


Your comment makes it sound like they're using some task-specific library, but these look like pretty generic standard-library functions to me.

