Unless they figure out a non-invasive technique that spatially samples brain activity with a high resolution at high speeds (think sub-millimeter @ several kHz), that is also cheap and wearable, there is NO way you can extract any high level information. It just doesn't work.
Current non-invasive methods mostly involve EEG - electrodes that measure the electrical potential across the brain surface. The potentials are a mix of the activity of every single neuron - almost all information is lost and you can't get much out of it at all, even with infinite computing power.
It's comparable to the task of reading the contents and activity of a CPU by attaching sensors to the surface - but without knowing the type of CPU, OS, what programs are running, what the user is doing etc. It's clear that it can't work.
Mind reading simply sounds sooo good in a news article and makes people dream of a sci-fi-esque future, I don't expect it to go away soon.
but you could easily map the bulk EEG data of an individual person in different brain states. just have them go through a calibration routine where they are prompted "think about apples" and then you record what comes back. now, you know what it looks like when they're thinking about apples in an intentional way. it's a start.
there's still plenty of noise, but the signal is going to look the same every time, if you can find it. the other question is whether your detection apparatus is going to be able to tease out the difference between "thinking about apples" versus "thinking about oranges". at present, my guess is no, which speaks to your point.
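To make the calibration idea concrete, here's a toy sketch of per-subject template matching (a nearest-centroid classifier). The "band-power features" are simulated Gaussian clusters standing in for real EEG recordings, which would need heavy preprocessing; everything here is illustrative, not how any real BCI necessarily works:

```python
import numpy as np

rng = np.random.default_rng(0)

def record_trials(mean, n_trials=50, n_features=8):
    # Stand-in for recording EEG band-power features during a prompt.
    # Real signals would be far noisier and non-stationary.
    return rng.normal(loc=mean, scale=1.0, size=(n_trials, n_features))

# Calibration routine: prompt the subject and record each brain state.
apples  = record_trials(mean=0.0)   # "think about apples"
oranges = record_trials(mean=2.0)   # "think about oranges"

# "Calibration" = store the mean feature vector (template) per prompt.
templates = {"apples": apples.mean(axis=0), "oranges": oranges.mean(axis=0)}

def classify(trial):
    # Nearest-template matching: pick the prompt whose average
    # signature is closest to the new recording.
    return min(templates, key=lambda k: np.linalg.norm(trial - templates[k]))

# A fresh "apples" trial should usually land nearest the apples template.
new_trial = record_trials(mean=0.0, n_trials=1)[0]
print(classify(new_trial))
```

The catch is exactly the one raised above: this only works if the two states produce reliably separable signatures in the features you can actually measure, which for bulk EEG is very much in doubt.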
i get the feeling these problems are going to be solved as time goes on...
What if you, one day, bite into an apple that has a worm in it? You'll probably feel differently about apples after that, and your calibration data is now useless.
As I said, just consider the sparse information you get out of an EEG or fMRI and compare it to the vast information that makes up your thoughts, memories, feelings etc. It's simple mathematics: it won't work unless you find a way to extract much more data non-invasively. On top of that, you most likely won't get away with a single calibration.
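A back-of-envelope version of that mismatch, using illustrative numbers (a typical 64-channel research EEG rig and the commonly cited ~86 billion neuron count; these are assumptions, not a rigorous bound):

```python
# Raw data rate of a decent non-invasive EEG setup (assumed numbers).
channels, sample_rate_hz, bits_per_sample = 64, 256, 16
eeg_bits_per_sec = channels * sample_rate_hz * bits_per_sample  # 262,144 bit/s

# Even granting the brain just 1 bit of "state" per neuron per second:
neurons = 86e9
brain_bits_per_sec = neurons * 1

print(eeg_bits_per_sec)
print(brain_bits_per_sec / eeg_bits_per_sec)  # shortfall of ~5 orders of magnitude
```

And that overstates the EEG side, since the channels are heavily correlated spatial averages rather than independent measurements.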
Yes there are awesome results, like controlling a cursor or artificial limbs with your brain. But that only works because your brain learns how to control it - not the other way around.
Continuing that thought, I can imagine a system where the user can learn how to express certain emotions to a computer - but that still leaves open the question of how to induce them in another user, which is a completely different task that we haven't even touched yet.
This reminds me of a study I just found where some researchers, as a thought experiment, tried to reverse-engineer a MOS Technology 6502 CPU using the same techniques used in neuroscience. Their argument is that if we can't even reverse-engineer a fully known system with our tools, we can hardly claim to be able to reverse-engineer a system whose core 'design' we don't even really have a concept of:
More likely: what if you think about an apple while hungry? Or while in the mood for one vs. not?
What makes you think it will look the same? I don't think it's deterministic enough to be practical. Even if it were, would it generalize to different people?
clunky, but it'd work.
Chances are they are looking at something like this (http://www.bbc.co.uk/news/science-environment-12990211 -- control a mouse cursor with your thoughts anno 2011) to make a better interface for the Oculus Rift than (I can't believe this is a quote) "the mind reading and telepathy of science fiction movies".
Facebook post: "Scientists prove X causes Y"
Various layers of science news: "Scientists link X and Y"
Abstract: "We observed a moderate correlation between X and Y. The results are surprising because they contradict previous studies of similar populations."
6 months later and there are new-age parenting guides about why you belong in jail if you let your kids anywhere near X.
Why not? Their money hires engineers the same as anyone else's.
Although (nearly?) everyone has their price, few people with _serious_ technical chops want to play on Facebook's side of the garden wall. I suspect Facebook worries about this.
[edit to remove oops; double negative]
I fear you suffer from some form of bias. Or maybe I do; however, I know some folks there who, in the devops or data engineering world, get it. I mean you had Taner (who also lent his hand to building battle.net's infrastructure), Keith Adams, JPC, David Reiss, John Allen, eric huang, sam rash, etc.
Their largest Hadoop cluster has _several_ hundred PB of data. You don't manage / run things at that scale without having some chops. Period. If you read the Under the Hood series, you might be able to see a different viewpoint.
Is Pluto a Planet? Wait, I want a real home assistant, god damn it!
Forgot there's a TEDx talk that's more focused on it: https://youtu.be/BP_b4yzxp80
In the end, it won't be our willful submission to AI overlords but economies of scale heavily valued by our capitalist economy that will be our undoing.
but this also means untold riches for those who build & sell the new AI-driven economies of scale. we've previously seen titans arise from the industrial revolution by doing the same thing.
It's no time to be a Luddite.
Having worked with hundreds of data scientists and data engineers over the last couple of years, I've seen a common theme: they want to get into A.I., but configuration hell or poor documentation has stopped 50-80% of them from being able to experiment.
So I built SignalBox - a Deep Learning web platform with a set of "blueprints" for common tasks. It deploys to a bog-standard Linux platform and, once deployed, has a web interface for generating neural networks, evaluating them, and training them in parallel. It's coupled with common ingestion patterns and data collectors, and from any point you can jump into IPython and start modifying the code yourself. This gives newbies a pre-built, optionally GPU-accelerated platform that they can play with, plus the flexibility of jumping into code so they can grow.
You know what I would love? 10 million pounds. Yes, I am dreaming. Then I would release SignalBox for free; but due to the capitalistic nature of society, I'm forced to sell it, which I am doing.
If I had 10 million, I would pursue some of the advances in computational chemistry for drug discovery; I do this for fun when I'm not working on the platform. Did you know it's estimated that only about 10^8 molecules have ever been synthesised, whereas the space of potential drug-like molecules is estimated at between 10^23 and 10^60? I think there's real potential here. I would also like to explore epistasis modelling, which I have been reading a lot of papers on, and continue auditing code from some PhD students I am mentoring; it's showing some good promise.
I don't want to work for a living; I want to spend 100% of my time using AI to help large numbers of people. But I've set my goal at 10 million first - that should be enough for me to never have to worry about money again, and to buy enough GPU servers to advance the state of the art.
First of all, I'm super curious about your product; I've reached out using the email in my profile. You are right on the money: documentation and the heterogeneous nature of configuring and getting ML/DL packages up and running are a large blocker. For me personally, the setup just to begin experimenting was a frustrating experience.
I think democratization is already taking place with open-source alternatives like https://deepdetect.com/. So far I've only been able to install and play with the API, and I'm trying to evaluate it further by building an app with it. But it hits that pain point: it actually puts me in a position where I can begin experimenting without having to deal with configuration and documentation noise.
£10 million (~$12m USD) is certainly not impossible, and I think you are on your way. I'd love to have $28 million USD; I think I could live a comfortable life without having to work. The marginal utility of income reportedly falls off beyond about $70,000 USD/year; multiplying by 40 years yields $2.8m USD, not adjusted for inflation, so just to be safe I multiply it by 10. I'd love it if it could be 100x or even 1000x that, but the probability of that is pretty damn slim (though not impossible).
Here's to both of us for a successful 2017!
This sounds like a real possible future, but how could this be implemented so you only transmit the thoughts you want transmitted? Some sort of internal dialogue with an implant like "OK Facebook", "OK John, please end your thought by imagining a pound sign"?
But yeah, who knows, maybe the two processes will be so distinct that it's not an issue.
Slip-ups would definitely still happen, but I think they would be more expected and less of a big deal with such a fluid communication system. And maybe you could build in a delay that reads your thought back to you before it sends.
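A toy sketch of that read-back-with-delay idea - all names here are hypothetical, it just illustrates the confirm-before-send flow:

```python
from collections import deque

class ThoughtBuffer:
    """Hypothetical gate: nothing is transmitted until the user
    explicitly confirms the thought after it's echoed back."""
    def __init__(self):
        self.pending = deque()
        self.sent = []

    def capture(self, thought):
        self.pending.append(thought)   # held, not yet transmitted
        return f"echo: {thought}"      # read the thought back to the user

    def confirm(self):
        if self.pending:
            self.sent.append(self.pending.popleft())

    def discard(self):
        if self.pending:
            self.pending.popleft()     # slip-up caught before it goes out

buf = ThoughtBuffer()
buf.capture("meet at noon#")           # '#' = imagined end-of-thought marker
buf.confirm()
buf.capture("ugh, not them again#")    # a stray thought...
buf.discard()                          # ...caught during read-back
print(buf.sent)
```

The hard part, of course, isn't the buffer - it's reliably telling an intentional "send this" thought apart from everything else going on in the signal.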
They raised some money (https://techcrunch.com/2016/12/21/neurable-seed-funding-brai...), and are from my town.
Eben Moglen - Why Freedom of Thought Requires Free Media and Why Free Media Require Free Technology
Video  https://archive.org/details/EbenMoglen-WhyFreedomOfThoughtRe...
Transcript  https://benjamin.sonntag.fr/Moglen-at-Re-Publica-Freedom-of-...
So no, not mind reading at all. Only in the vaguest sense at best.
Anyways, the signal you get from fMRI is enough to reconstruct sequences of images:
EEG can't even come close.