
Our model generates CLIP image embeddings from fMRI signals. Those embeddings can be used for retrieval (using cosine similarity, for example) or passed into a pretrained diffusion model that takes in CLIP image embeddings and generates an image (it's a bit more complicated than that, but that's the gist; read the blog post for more info).
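
To make the retrieval step concrete, here's a minimal sketch of cosine-similarity retrieval over CLIP image embeddings (the tensors, shapes, and names below are made up for illustration, not taken from our codebase):

    import torch
    import torch.nn.functional as F

    # Hypothetical data: one CLIP image embedding predicted from an fMRI scan,
    # plus a gallery of CLIP embeddings for the candidate images.
    predicted = torch.randn(1, 768)    # model output for one scan
    gallery = torch.randn(1000, 768)   # embeddings of 1000 candidate images

    # Cosine similarity is the dot product of L2-normalized vectors.
    predicted = F.normalize(predicted, dim=-1)
    gallery = F.normalize(gallery, dim=-1)
    scores = predicted @ gallery.T     # shape (1, 1000)

    # Retrieval: return the candidate image with the highest similarity.
    best_match = scores.argmax(dim=-1)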

So we are doing both reconstruction and retrieval.

The reconstruction achieves SOTA results. The retrieval demonstrates that the image embeddings contain fine-grained information: it's not that the embedding merely says "this is a picture of a teddy bear" and the diffusion model then generates some random teddy bear picture.

I think the zebra example really highlights that. The generated image embedding matches the exact zebra image that was seen by the person. If the model could only say "it's a zebra picture," it wouldn't be able to do that. Instead, the model is picking up on fine-grained info present in the fMRI signal.
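
For what it's worth, one way to quantify this is top-1 retrieval accuracy: for each scan, check whether the embedding predicted from fMRI is closest to the embedding of the exact image that was shown. A rough sketch (the batch size and shapes are illustrative, not the paper's actual evaluation code):

    import torch
    import torch.nn.functional as F

    # Hypothetical batch: N embeddings predicted from fMRI, and the N
    # ground-truth CLIP embeddings of the images that were actually shown.
    N = 300
    preds = F.normalize(torch.randn(N, 768), dim=-1)
    truth = F.normalize(torch.randn(N, 768), dim=-1)

    sims = preds @ truth.T       # (N, N) matrix of cosine similarities
    ranks = sims.argmax(dim=-1)  # index of the best-matching image per scan
    top1 = (ranks == torch.arange(N)).float().mean()  # fraction matched exactly

A model that only captured coarse categories ("some zebra") would score near chance here, since many candidates share a category; matching the exact image requires fine-grained information.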

The blog post has more information, and the paper itself has even more, so please check them out! :)




So what's the output if I show a completely novel image to the subject? E.g. a picture of my armpit covered in blue paint?


Why are you building this, and what kind of ethical considerations have you taken, if any?


I'm curious: what answers would you find acceptable? I'm not being snarky - I genuinely struggle with this line of thinking. People seem to find "if I don't, then someone else will" to be an unacceptable answer, but it seems to me to be fairly central.

There's an inevitability about most scientific discoveries (there are notable exceptions, but they are few), and unless we're talking about something with a capital outlay in the trillions of dollars, it's going to happen whether we like it or not - short of a global totalitarian state capable of deep scrutiny of all research.


> People seem to find "if I don't then someone else will" to be an unacceptable answer but it seems to me to be fairly central.

Because you can use this as a cop-out for truly heinous work, e.g. gain-of-function research, autonomous weapons, chemical weapons, etc. It's not a coherent worldview for someone who actually cares about doing good.


I think you've hit upon some interesting examples. Maybe the way to look at this is cost vs "benefit" (in the broadest sense of the word).

When research has an obvious and immediate negative outcome that's a cost. The difficulty/expense of the research is also a cost.

The "benefit" would be the incentive to know the outcome. This may be profit, military advantage, academic kudos etc.

Maybe the problem with the type of research being discussed here is that there isn't necessarily any agreement that the outcome is negative. For many people, I suspect this will remove a lot of the weight on the "cost" side of things.

I'm not making a specific point here - I'm actually trying to work this out in my head as I write.


> I think you've hit upon some interesting examples. Maybe the way to look at this is cost vs "benefit" (in the broadest sense of the word).

This is obviously a better framework to be in.

"If I don't do it someone else will" is really fraught and that's why people reject it.

So one would really need to ask whether there is a net benefit to having a "mind reading" system out in the world. In fact, I find it hard to think of positive use cases that aren't just dwarfed by the possibility of Orwellian/panopticon-type hellscapes.


> In fact I find it hard to think of positive use cases

Firstly - forcing people to think of positive use cases up front is a terrible way to think about science. Most discoveries would have failed this test.

Secondly - can you really not? Off the top of my head:

a) Research tools for psychology and other disciplines

b) Assistive devices for the severely disabled

c) An entirely new form of human-computer interface with many possible areas of application


As I mentioned, do any of those outweigh the possibility that some three-letter agency might start mass-scanning US citizens for what amounts to thought crime? The very fundamental idea of privacy would cease to exist.


That's a very big leap. If we're at the stage where a three letter agency can put you in an fMRI machine, then we're probably also at the stage where they can beat you with a rubber hose until you confess.

My point is that there's already a wide variety of things a future draconian state can do. This doesn't seem to move the dial very much.


I'm not suggesting I have some ability to judge whatever the answer is; I'm just curious, because TFA didn't include a lot of detail on this point except some vague bullet points at the end.



