
Introduction to Modern Brain-Computer Interface Design - MichaelAO
http://sccn.ucsd.edu/wiki/Introduction_To_Modern_Brain-Computer_Interface_Design
======
etrautmann
This course appears to ignore a large portion of the field, such as the
intracortical recordings used by Braingate [1] and all of the neural
prosthetics work using implanted electrodes. The methods are quite different
since EEG is incredibly noisy and low bandwidth in comparison to more invasive
sensors.

[1] [http://braingate2.org/](http://braingate2.org/)

~~~
kcoul
I'm of two minds on that note:

On one hand, I somehow feel it's appropriate that an open online course limit
itself to methods that are accessible outside of medical research labs and
hospitals.

On the other hand, I agree completely that someone could walk away from this
course never understanding that they would be dealing with data of
considerably lower fidelity, to the point where failing to take that variable
into account could easily lead to false conclusions.

Maybe this is why part of me is "glad" I cracked my head open at a young age
and still have a significant gap there in the skull bone - I can get higher
amplitude signal placing an electrode there than anywhere else on my head!

------
kcoul
I've been looking for a good modern overview like this ever since I became
interested in the OpenBCI project, which finally has some devices shipping.

During my CS degree I wrote one paper with numerous citations from the field
of neuroscience, as I was trying to make a case for ways to change the way we
teach and learn so as to build more robust memory models of the things we are
trying to memorize. The case I used in my paper was pieces of notated sheet
music, but I believe the same principles could hold in areas like language
learning (whether computer languages or human) or mathematics equally well.

I'd like to build/buy a good enough EEG to show that specific patterns emerge
when we achieve the specific kind of focus that allows for this optimally
efficient kind of learning to take place. (The implication being that if we
are able to induce this type of brainwave pattern rather than expect the
individual to achieve it on their own, then we might be able to make a
significant step forward in the field of educational neuroscience).

[http://en.wikipedia.org/wiki/Educational_neuroscience](http://en.wikipedia.org/wiki/Educational_neuroscience)

------
cinquemb
In my lab we're using 128-channel BioSemi hardware[0], and we wish the cheaper
commercial headsets could come even close to what we're using. We've played
with/hacked the Emotiv and Muse and they're both crap (from a
muscle-contribution-to-the-signal perspective)… even with the signal we're
getting from the 128 leads (plus 4 facial), we still feel like our hands are
tied behind our backs, because it's not like we're measuring the neurons
directly, but maybe we'll get somewhere.

And then there's the fact that these headsets need people to be as still as
possible… so overall I think we're still years out from anything good that's
commercially cheap (at least something that the research will back from a
non-muscle-contribution standpoint). I'm interested in OpenBCI, which I think
will drive prices down in the long run (along with projects like Open Ephys).
Maybe one of these days we'll convince our PI to let us open-source something
to one of these projects, since we're definitely relying on them (liblsl,
libboost…) lol.

I think something using nano-electrode arrays[1] will probably be best for
having confidence in the signal, and less likely to damage the brain. I was
talking to Graham Yelton over at Sandia about their project, to try to get a
pulse on where things are:

 _"Originally the nano-electrode array project targeted traces of dissolved
lead and arsenic species in drinking water. Since then we have used modified
versions of the array platform for impedance and capacitance studies of
biofilm formation and vapor-phase detection, respectively. We have also
applied the concept for thermo-electric nano-wires arrays, thermo-interface
templates, and electrochromics (micro-pixilation). Steve’s group has deep
expertise and focuses on biosensing, beyond my limited bio sensor knowledge."_

I would love to see that work combined with the nano-mechanical systems work
going on at MIT[2], a la deep-brain nano neural stimulation (tDCS? meh). Who
knows, maybe people are working on it now, but sometimes trying to figure out
what's going on where is another problem in itself…

[0] [http://www.biosemi.com/](http://www.biosemi.com/)

[1]
[http://www.sandia.gov/mstc/MsensorSensorMsystems/technical-i...](http://www.sandia.gov/mstc/MsensorSensorMsystems/technical-information/nano-electrode-arrays.html)

[2]
[http://meche.mit.edu/research/micronano/](http://meche.mit.edu/research/micronano/)

------
proveanegative
Do you think it will ever be possible for humans to control devices reliably
and quickly with EEG input (or noninvasive BCIs in general)?

~~~
Houshalter
MRI technology is improving over time, but I don't know how practical it would
be in everyday usage. fNIRS can potentially do the same thing much more
cheaply. I believe it's restricted to only the first few cm of brain tissue,
but that might be plenty.

With fMRI you can even reconstruct entire videos of what a person is seeing
(e.g.
[https://www.youtube.com/watch?v=nsjDnYxJ0bo](https://www.youtube.com/watch?v=nsjDnYxJ0bo)).
Controlling something simpler should be easier.

~~~
21echoes
the problem, of course, being that this is what an MRI machine looks like:
[http://neurophilosophy.files.wordpress.com/2006/09/img_0968....](http://neurophilosophy.files.wordpress.com/2006/09/img_0968.jpg)

------
MichaelAO
I came across this while taking a look at the Muse SDK. I've been practicing
daily meditation/qigong and my western mind wants to augment my efforts with
some type of biofeedback device. My first thought was to use the Muse with the
Oculus rift. Regardless, the brain-computer interface seems important. We
might look back 10 years from now and consider it strange that our devices
didn't explicitly take into account things like our emotional state.

After watching Bret Victor's most recent talk, I thought, "This is awesome. I
totally get where you're coming from, but show me something tangible." Maybe
brain-computer interface can point us in the right direction.

~~~
kcoul
I agree. The emerging field of data science can help us to make more sense of
the patterns and lack thereof that can become apparent after high fidelity
capture of our brainwaves.

On that note, I would have to say I was a bit disappointed by the NeuroSky
MindWave Mobile unit I received. Despite being able to bypass their simplified
attention/meditation/blinking reduction of the raw input signals and capture
the {delta, theta, ..., gamma} filtered bands for further processing, many
people (myself included) complained about the apparently low quality of the
data, which showed large amounts of delta-band activity and little else. I
resigned myself to the conclusion that the low-cost interface was simplified
to the point of being a novelty and not much more.
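For anyone else poking at raw output from one of these headsets, the kind of band split described above can be approximated in a few lines. This is only a minimal sketch of the general technique: the band edges and the plain-periodogram approach are my own illustrative choices, not NeuroSky's actual pipeline.

```python
import numpy as np

# Illustrative EEG band edges in Hz; exact conventions vary between sources.
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def relative_band_powers(sig, fs):
    """Share of total spectral power falling in each band,
    from a plain periodogram of a single-channel recording."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    total = psd[1:].sum()  # skip the DC bin
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
        for name, (lo, hi) in BANDS.items()
    }

# Sanity check: a synthetic 10 Hz oscillation plus a little noise
# should show up almost entirely as alpha-band power.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
powers = relative_band_powers(sig, fs)
```

A check like this against a known synthetic signal is a quick way to tell whether a headset's "delta dominance" is real physiology or just a noisy front end.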

Now I'm narrowing it down to OpenBCI vs. Muse vs. Emotiv. Picking the right
one for the handful of projects I want to use an EEG for is proving difficult.
I'm simultaneously attracted to the ability to take matters more into my own
hands with OpenBCI, and to the ability to just get started right away with a
product like the Muse or EPOC.

I'm curious whether you were planning to use the Rift for photic entrainment.
Besides a project or two that never seemed to take off, like Ocunaut, I
haven't seen anyone else intending to get into photic stimulation with the
Rift, so if you do plan to, it would be great to collaborate somehow.

Some research has indicated that photic entrainment might be even more
effective than the various kinds of aural entrainment (binaural, monaural,
isochronic).

[http://www.mindalive.com/articleone.htm](http://www.mindalive.com/articleone.htm)
(1/3rd of the way down, Frederick, Lubar, Rasey, Brim, & Blackburn, 1999)
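For context on what these modalities actually are: a binaural beat, for instance, is nothing more than two pure tones, one per ear, whose frequencies differ by the target entrainment rate. A minimal numpy sketch (the carrier and beat values here are illustrative, not taken from the study):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, fs=44100, seconds=2.0):
    """One pure sine tone per ear; their frequency difference
    (beat_hz) is what the listener perceives as the slow 'beat'."""
    t = np.arange(int(fs * seconds)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer

# e.g. a 10 Hz "alpha-range" beat riding on a 200 Hz carrier
stereo = binaural_beat(carrier_hz=200.0, beat_hz=10.0)
```

Monaural and isochronic variants differ mainly in where the amplitude modulation happens (mixed before the ear vs. pulsed on/off), which is why they can be compared head-to-head in studies like the one above.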

My impressions from that study are that eyes-closed, photic-only entrainment
worked best for two reasons:

1\. Closing the eyelids blocks out extraneous light, which reduces
interference.

2\. No one knew how to properly combine the entrainment modalities yet.

However, I believe that if a proper theoretical model is applied, AVE
techniques might then become more effective, perhaps even more so if tactile
audio were involved as well.

[http://en.wikipedia.org/wiki/Audio-visual_entrainment](http://en.wikipedia.org/wiki/Audio-visual_entrainment)

[http://en.wikipedia.org/wiki/Tactile_sound](http://en.wikipedia.org/wiki/Tactile_sound)

To me, a unit like the DK1 is not really usable for anything besides this sort
of thing. After all, panning anywhere near as quickly as a game would require
would make most people nauseated fairly quickly on a DK1. (The DK2 is much
better in this regard, and Crystal Cove hopefully even more so.)

But the one saving grace for the DK1 was things done very slowly and
deliberately, like a relaxed exploration of the Tuscany demo or something like
this: [http://guidedmeditationvr.com/](http://guidedmeditationvr.com/)

~~~
MichaelAO
You hit the nail on the head as far as what I was thinking with the Rift.

After experimenting with binaural beats out of the Monroe institute, I think a
similar auditory experience is important. You're definitely right that
together they could be quite powerful. My email is in my profile if you'd like
to chat more.

Right now, I'm leaning towards the Muse. Does anyone have any suggestions for
a hacker friendly EEG?

~~~
heynk
I backed the Indiegogo campaign with hopes that Muse would be very
hacker-friendly, but so far their SDK support is limited. They provide very
low-level interfaces for interacting with the data over Bluetooth, but are
behind schedule on releasing their 'LibMuse', which promises to be a
higher-level SDK (in multiple languages) to help people build apps. I don't
have links right now, but after browsing their forums it seems they have
de-prioritized their developer tools in favor of their own in-house apps. This
is disappointing both as a developer who'd like to harness my Muse and as a
consumer who wants to do more with my Muse than their single Calm app [1].

[1] [https://itunes.apple.com/us/app/muse-calm/id849841170?mt=8](https://itunes.apple.com/us/app/muse-calm/id849841170?mt=8)

