Facebook has a mysterious team working on tech that sounds like mind reading (businessinsider.com)
72 points by pmcpinto on Jan 13, 2017 | 56 comments



I don't think so.

Unless they figure out a non-invasive technique that spatially samples brain activity at high resolution and high speed (think sub-millimeter @ several kHz), and that is also cheap and wearable, there is NO way to extract any high-level information. It just doesn't work.

Current non-invasive methods mostly involve EEG - electrodes on the scalp that measure electrical potentials generated by the brain. Each potential is a superposition of the activity of vast numbers of neurons - almost all spatial information is lost, and you can't get much out of it at all, even with infinite computing power.

It's comparable to trying to read the contents and activity of a CPU by attaching sensors to its surface - but without knowing the type of CPU, the OS, what programs are running, what the user is doing, etc. It's clear that this can't work.

Mind reading simply sounds sooo good in a news article and makes people dream of a sci-fi-esque future, so I don't expect these stories to go away soon.


> It's comparable to trying to read the contents and activity of a CPU by attaching sensors to its surface - but without knowing the type of CPU, the OS, what programs are running, what the user is doing, etc.

but you could easily map the bulk EEG data of an individual person in different brain states. just have them go through a calibration routine where they are prompted "think about apples" and then you record what comes back. now, you know what it looks like when they're thinking about apples in an intentional way. it's a start.

there's still plenty of noise, but the signal is going to look the same every time, if you can find it. the other question is whether your detection apparatus is going to be able to tease out the difference between "thinking about apples" versus "thinking about oranges". at present, my guess is no, which speaks to your point.
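
to make that concrete, here's a minimal sketch of the calibrate-then-classify idea, assuming pre-epoched EEG trials and scikit-learn. purely illustrative - whether it beats chance on real recordings is exactly the open question:

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler

  def bandpower_features(epochs, fs=250.0):
      # log power in theta/alpha/beta bands, per channel
      bands = [(4, 8), (8, 13), (13, 30)]
      freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
      psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
      feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
               for lo, hi in bands]
      return np.concatenate(feats, axis=-1)  # (n_trials, n_channels * 3)

  # fake data standing in for epochs recorded during the two prompts:
  # (n_trials, n_channels, n_samples) per condition
  rng = np.random.default_rng(0)
  apples = rng.standard_normal((40, 8, 500))
  oranges = rng.standard_normal((40, 8, 500))

  X = np.concatenate([bandpower_features(apples), bandpower_features(oranges)])
  y = np.r_[np.zeros(40), np.ones(40)]
  clf = make_pipeline(StandardScaler(), LogisticRegression())
  print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on noise, i.e. chance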

i get the feeling these problems are going to be solved as time goes on...


"... where they are prompted "think about apples" and then you record what comes back. "

What if you, one day, bite into an apple that has a worm in it? You'll probably feel different about apples after that, and your calibration data is now useless.

As I said, just consider the sparse information you get out of an EEG or fMRI and compare it to the vast information that makes up your thoughts, memories, feelings, etc. It's simple mathematics: it won't work unless you find a way to extract much more data non-invasively. On top of that, you most likely won't get away with a single calibration.
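
A back-of-envelope calculation makes the mismatch concrete (the channel count and sampling rate are typical research-EEG figures; the neuron count is the usual ~86 billion estimate, and the firing rate is an order-of-magnitude assumption):

  # rough data-rate comparison behind the "simple mathematics" argument
  eeg_channels = 64
  sample_rate_hz = 1_000
  eeg_samples_per_s = eeg_channels * sample_rate_hz        # 6.4e4

  neurons = 86e9
  mean_firing_rate_hz = 1.0  # order-of-magnitude assumption
  neural_events_per_s = neurons * mean_firing_rate_hz      # ~8.6e10

  print(f"undersampling: ~{neural_events_per_s / eeg_samples_per_s:.0e}x")
  # -> ~1e+06x: even ignoring spatial mixing, EEG captures about a
  # millionth of the raw activity it is trying to summarize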

Yes, there are awesome results, like controlling a cursor or artificial limbs with your brain. But that only works because your brain learns how to control the system - not the other way around.

Continuing that thought, I can imagine a system where the user learns how to express certain emotions to a computer - but that still leaves open the question of how to induce them in another user, which is a completely different problem that we haven't even touched yet.


> As I said, just consider the sparse information you get out of an EEG or fMRI and compare it to the vast information that makes up your thoughts, memories, feelings, etc. It's simple mathematics: it won't work unless you find a way to extract much more data non-invasively.

This reminds me of a study I just found where some researchers, as a thought experiment, tried to reverse-engineer a MOS Technology 6502 CPU using the same techniques used in neuroscience. Their argument seems to be that if we can't even reverse-engineer a totally known system with our tools, we can hardly claim to be able to reverse-engineer a system whose core 'design' we don't even really have a concept of:

http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...


> What if you, one day, bite into an apple that has a worm in it? You'll probably feel different about apples after that, and your calibration data is now useless.

More likely: what if you think about an apple while hungry? Or while in the mood for one vs. not?


> The signal is going to look the same every time.

What makes you think it will look the same? I don't think it is deterministic enough to be practical. Even so, would it generalize to different people?


it'd be calibrated on a per-person basis. if a person can't get it to recognize something that it was correctly identifying previously, then they add the current pattern to the calibration library.

clunky, but it'd work.


No, it would not work. Neither fMRI nor "bulk EEG data" (whatever that is) reproduces brain activity patterns for even the most basic ideation; I have no idea why you would think they do. This is fantasy.


Researchers already do this in the neuroimaging community. The technique is called MVPA - multivariate pattern analysis (see http://www.pymvpa.org/index.html for an implementation). The problems of machine learning in general also apply here, as nom pointed out: classifiers applied to brain images are only as good as the training datasets used.
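
For the curious, a minimal sketch of what MVPA-style decoding looks like, using scikit-learn rather than PyMVPA itself (the per-trial voxel patterns below are random stand-ins, so accuracy will hover at chance):

  import numpy as np
  from sklearn.model_selection import cross_val_score
  from sklearn.svm import LinearSVC

  rng = np.random.default_rng(0)
  n_trials, n_voxels = 80, 500
  X = rng.standard_normal((n_trials, n_voxels))  # per-trial voxel patterns
  y = rng.integers(0, 2, n_trials)               # condition labels (A vs B)

  # cross-validated decoding accuracy; as noted above, the classifier
  # is only as good as the training data it sees
  scores = cross_val_score(LinearSVC(), X, y, cv=5)
  print(scores.mean())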


How about—to cross-pollinate current front-page articles—optogenetics? Rather than a virus that turns light into electrical pulses, you'd want a virus that takes power from ambient RF energy, and emits RF energy on a different frequency attenuated by the strength of the local intra-synaptic electrochemical gradient. Your neurons would report their state over RFID!


Is there a "someone's razor" for sensational, speculative "news"?

Chances are they are looking at something like this (http://www.bbc.co.uk/news/science-environment-12990211 - controlling a mouse cursor with your thoughts, circa 2011) to make a better interface for the Oculus Rift, rather than (I can't believe this is a quote) "the mind reading and telepathy of science fiction movies".


I'd like to propose "Abstract Jumping" as the term here. We see it a lot when journalists just read the abstract of a scientific paper or the overview of a project and then draw conclusions.


Yeah, that needs to be a term. It's a pattern I'm sure many of us have seen in coverage of scientific studies too:

Facebook post: "Scientists prove X causes Y"

Various layers of science news: "Scientists link X and Y"

Abstract: "We observed a moderate correlation between X and Y. The results are surprising because they contradict previous studies of similar populations."

6 months later and there are new-age parenting guides about why you belong in jail if you let your kids anywhere near X.


Betteridge's Law of Headlines? There's no question mark at the end of this one, but I think the Law is at least conceptually close.


Something to ponder for a future generation: how important, really, is our physical being? Reproduction, social interaction, and sustenance are transitioning from physical necessity, to augmentation, toward replacement. We talk of AI as replacing humanity, but it could be that they are inexorably merging and will evolve along their own path that really belongs to them. One would have expected the Neanderthals to have experienced existential stress about the emergence of sapiens. It may be that, in evolutionary terms, bipedal man's time is very slowly nearing an end.


I am very pleased by this post. I am 60; people around me are aging, and it is sometimes not a very pleasant experience. What is important when you are very old and infirm? I think it is to still be able to interact with other people. The time we spend on social networks, including this one, shows that interaction is what matters. Here it is very limited, but what if we could couple HN with some kind of "social network butler" to interact with the physical world? It would fulfill both the need for social interaction and the capacity to do stuff. After that, the need for a physical body, at least for people who are very old, would not be so important. And if young people can enjoy living in the fast lane, I am sure new experiences can be created in virtual worlds that would match, if not replace, the experiences brought by the physical body.


What I really like about your post is the reminder that virtual/cyber interactions are meaningful and supportive of the human condition for people whose ability to interact may be limited by age, geography, health, etc.


this very much reminds me of the Black Mirror episode "San Junipero"


This reads like a big joke. I do not think Facebook could get remotely close to figuring out this technology. But watch me get quoted in a post in 2022 about how naive we were.


> I do not think Facebook could get remotely close to figuring out this technology.

Why not? Their money hires engineers the same as anyone else's.


I just mean it doesn't pass my personal "what do I think is possible in the near future?" test. I'm not claiming to be objective, just a gut reaction that we might quietly read how this project team was transitioned into some other project in a couple years. Nothing wrong with that at all, by the way. I love that companies fund careers for people to explore big possibility ideas and experiments like this.


I suspect there are an awful lot of very good engineering types who would never go and work for Facebook. No matter how much you paid them (well, let's say bounded at the upper end by a very, very large number).

Although (nearly?) everyone has their price, few people with _serious_ technical chops want to play on Facebook's side of the garden wall. I suspect Facebook worries about this.

[edit to remove oops; double negative]


"few people with _serious_ technical chops want to play on Facebook's side of the garden wall"

I fear you suffer from some form of bias. Or maybe I do; however, I know some folks there who, in the devops or data engineering world, get it. I mean, you had Taner (who also lent a hand building battle.net's infrastructure), Keith Adams, JPC, David Reiss, John Allen, Eric Huang, Sam Rash, etc.

Their largest Hadoop cluster has _several_ hundred PB of data. You don't manage or run things at that scale without having some chops. Period. If you read the Under the Hood series, you might be able to see a different viewpoint.


The ethical viewpoint is very clear though. They just chose to be morally bankrupt.


The director of Facebook AI Research is Yann LeCun, one of the top minds in the field of deep learning.


Who else is working in this field, and throwing this much money at it? If this is your dream job that pays your dream salary, and nobody else is offering you the same pay or experience, I think you just might consider working for Facebook. You can always leave.


Do you have any examples?


science != engineering


Alexa, remind me in 8 years.


Alexa, go unclog the toilet... oh wait, you can't? ok, not asking for much, go fold my laundry!.. No?!

Is Pluto a Planet? Wait, I want a real home assistant, god damn it!


Surprised they didn't mention Mary Lou Jepsen, who worked as a Facebook VP but is now working on her own startup related to mind reading, opnwatr.io. Here's a talk she gave at the Media Lab recently which hints at this: https://youtu.be/VS810aV_PW4

Forgot: there's a TEDx talk that's more focused on it: https://youtu.be/BP_b4yzxp80


Exactly. I have a very strong hunch this is what they're referring to.


While I was reading 1984, I had to put it down because it was so depressing. It would be the end result of our experimentation with the surveillance state, coupled with AI management that outperforms even the smartest human being.

In the end, it won't be our willful submission to AI overlords but economies of scale heavily valued by our capitalist economy that will be our undoing.

but this also means untold riches for those who build & sell the new AI-driven economies of scale. we've previously seen titans arise from the industrial revolution by doing the same thing.

It's no time to be a Luddite.


I think one way to help with this is the democratisation of A.I. It's not a silver bullet for the problems you describe, but it's a step in the right direction. The key is that the technology is available to everyone very cheaply.

Having worked with hundreds of data scientists and data engineers over the last couple of years the common theme is that they want to get into A.I. but configuration hell or poor documentation has stopped 50-80% of them from being able to experiment.

So I built SignalBox - a deep learning web platform with a set of "blueprints" for common tasks. It deploys to a bog-standard Linux platform and, once deployed, has a web interface for generating neural networks, evaluating them, and training them in parallel. It couples this with common ingestion patterns and data collectors, and from any point you can jump into IPython and start modifying the code for yourself. This gives newbies a pre-built, optionally GPU-accelerated platform that they can play with, but also the flexibility of jumping to code so they can grow.

You know what I would love? 10 million pounds. Yes, I am dreaming. With that I would release SignalBox for free; as it is, due to the capitalistic nature of society, I'm forced to sell it, which I am doing.

If I had 10 million, I would pursue some of the advances in computational chemistry for drug discovery; I do this for fun when I'm not working on the platform. Did you know that an estimated 10^8 molecules have been synthesised, whereas the space of potential drug-like molecules is estimated at between 10^23 and 10^60? I think there's real potential here. I would also like to explore epistasis modelling, which I have been reading a lot of papers on, and to keep auditing code from some PhD students I am mentoring; it's showing some good promise.

I don't want to work for a living; I want to spend 100% of my time using AI to help large numbers of people. But I've set my goal at 10 million first - that should be enough for me to never have to worry about money ever again, and to buy enough GPU servers to advance the state of the art.


I have no idea why your comment was downvoted, I thought it was highly relevant and reflected on what I had written.

First of all, I'm super curious about your product; I've reached out using the email in my profile. You are right on the money: documentation and the heterogeneous nature of configuring and getting ML/DL packages up and running are a large blocker. Personally, the training and setup just to begin experimenting was a frustrating experience for me.

I think democratization is already taking place with open source alternatives like https://deepdetect.com/, I've only been able to install and play with the API and I'm trying to evaluate it further by building an app with it. But it hits that pain point in that it actually enables me to be in a position where I can now begin experimenting without having to deal with configuration and documentation noise.

£10 million (~$12m USD) is certainly not impossible, and I think you are on your way. I'd love to have $28 million USD; I think I could live a comfortable life without having to work. The marginal utility of income really falls off beyond $70,000 USD/year according to the stats; multiplying by 40 years yields $2.8m USD, not adjusted for inflation, so just to be safe I multiply it by 10. I'd love it if it could be 100x or even 1000x that, but the probability is pretty damn slim (but not impossible).

Here's to both of us for a successful 2017!


Does anyone know why "8" seems to be such an important number at Facebook (f8, Building 8, and I believe they were using Fedora 8 the first time they disclosed their server operating system, though that is probably coincidental)?


Because in Slovenian you pronounce 8 almost the same way as "awesome" in English ;) (true, but not the reason)


This is a guess, but the only thing I could think of: 8 is a lucky number in Chinese culture, and Zuckerberg is married to Priscilla Chan, who has Chinese heritage.


I read it was something like their hackathons started at 8 PM.


An existential reminder that bowing to Facebook is our eventual "fate/F8"?


Facebook has 8 letters?


I'm fairly sure this is the primary reason, just like a11y is shorthand for accessibility and i18n is short for internationalization. It also helps that F8 looks like "FB" and sounds like "fate" but those were likely just bonuses.


F8 looks like FB? Dunno, just a guess.


>“One day, I believe we’ll be able to send full rich thoughts to each other directly using technology," Zuckerberg said during a June 2015 Q&A. "You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like."

This sounds like a real possible future, but how could this be implemented so you only transmit the thoughts you want transmitted? Some sort of internal dialogue with an implant like "OK Facebook", "OK John, please end your thought by imagining a pound sign"?


In the same way that lifting your arm is different than thinking about lifting your arm, I imagine you could train your mind so that transmitting a thought is different than thinking a thought. And hopefully it would turn off if you fall asleep.


I could see that happening, but I feel like slip-ups or crossovers ("oh shit, I meant to think that, not think and transmit it") could still happen sometimes. Also, I wonder what it'd be like for someone with a psychotic mental disorder...

But yeah, who knows, maybe the two processes will be so distinct that it's not an issue.


I guess a better analogy would be talking. You can think about talking or you can talk, and I'm not sure anyone could really explain what the difference is, but it is relatively unusual to speak by accident while you're conscious. I can imagine training myself to add a new mode of "talking" that actually transmitted a thought.

Slip-ups would definitely still happen, but I think they would be more expected and less of a big deal with such a fluid communication system. And maybe you could build in a delay that reads your thought back to you before it sends.


If we can solve the problems that allow thought understanding, then catching a mental slip should be as doable as a spell checker.


This doesn't sound ominous at all. At least Facebook doesn't have a history of experimenting with changing its users' minds for them. https://www.theguardian.com/technology/2014/oct/02/facebook-...


Along these lines, has anyone heard of Neurable? http://neurable.com/

They raised some money (https://techcrunch.com/2016/12/21/neurable-seed-funding-brai...), and are from my town.


Relevant:

Eben Moglen - Why Freedom of Thought Requires Free Media and Why Free Media Require Free Technology

Video [0] https://archive.org/details/EbenMoglen-WhyFreedomOfThoughtRe...

Transcript [1] https://benjamin.sonntag.fr/Moglen-at-Re-Publica-Freedom-of-...


Next step: Thought Police


what's so breakthrough about this? the scientific community has already demonstrated the ability to map brain wave patterns to thoughts, or to specific words.


aka fMRI


fMRI is not mind reading. Not even close. Task-based fMRI makes repeated observations of a proxy for neural activity (blood flow) until you have enough data to potentially localize task-specific activations to some part of the brain. Resting-state fMRI tries to extract basic, enduring networks in a given individual's or group's brain while trying to silence all the noise from breathing, head motion, etc. Did I mention that not all slices in a functional scan are acquired at the same time, so you also have to interpolate so that the data look as if they were all acquired at once?
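
To make that last point concrete, here is a toy sketch of slice-timing correction: resampling each slice's time series onto a common reference grid via interpolation (numpy/scipy; the TR, slice order, and data are all made up):

  import numpy as np
  from scipy.interpolate import interp1d

  TR = 2.0                    # seconds per volume (assumed)
  n_vols, n_slices = 100, 30
  # ascending acquisition: slice s is sampled slice_times[s] seconds into the TR
  slice_times = np.linspace(0, TR, n_slices, endpoint=False)

  # fake data: (n_slices, n_voxels_per_slice, n_vols)
  rng = np.random.default_rng(0)
  data = rng.standard_normal((n_slices, 64, n_vols))

  acq_grid = np.arange(n_vols) * TR  # nominal volume onsets
  corrected = np.empty_like(data)
  for s in range(n_slices):
      # this slice was really sampled at acq_grid + slice_times[s];
      # interpolate it back onto the reference grid
      f = interp1d(acq_grid + slice_times[s], data[s], axis=-1,
                   kind="linear", fill_value="extrapolate")
      corrected[s] = f(acq_grid)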

So no, not mind reading at all. Only in the vaguest sense at best.


This proxy argument can also be applied to EEG.

Anyway, the signal you get from fMRI has been enough to roughly reconstruct sequences of images a subject is watching:

http://news.berkeley.edu/2011/09/22/brain-movies/

EEG can't even come close.



