
NextMind is building a real-time brain computer interface - vo2maxer
https://venturebeat.com/2020/01/05/nextmind-is-building-a-real-time-brain-computer-interface-unveils-dev-kit-for-399/
======
cowteriyaki
I'm intrigued by the electrodes they claim have 4 times better SNR than
conventional electrodes. AFAIK there have been numerous attempts at dry
electrodes (electrodes w/o gel for impedance control), but due to higher
impedance, their SNR has always been worse than that of wet electrodes. The
main advantage of dry electrodes has been faster prep time & convenience, at
the cost of signal fidelity. I'd be interested in technical details, but the
information on their website doesn't really let on much. Wonder how they
actually compare to market leaders in research-grade electrodes (e.g.
BrainProducts, Neuroscan).
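For intuition on why higher impedance hurts SNR: the thermal (Johnson-Nyquist) noise floor of an electrode scales with the square root of its impedance. A minimal sketch with purely illustrative impedance values (real electrode noise also includes electrochemical and motion artifacts, so this is a lower bound, not a full model):

```python
import math

def thermal_noise_uV(impedance_ohms, bandwidth_hz=100.0, temp_k=310.0):
    """RMS Johnson-Nyquist noise in microvolts: v = sqrt(4*k*T*R*B)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k_B * temp_k * impedance_ohms * bandwidth_hz) * 1e6

# Illustrative values: wet electrodes ~5 kOhm, dry often 10-100x higher
for label, z in [("wet (5 kOhm)", 5e3), ("dry (500 kOhm)", 500e3)]:
    print(f"{label}: {thermal_noise_uV(z):.3f} uV RMS over 100 Hz")
```

With a 100x impedance increase, the thermal floor rises 10x — still small next to typical 10-100 uV EEG amplitudes, which is why the bigger practical problems with dry electrodes are contact stability and motion artifacts rather than thermal noise alone.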

The founder's previous research seems to be focused on topics related to
consciousness and sleep. I was hoping he had published articles about
electrode materials, but I can't find any. Maybe it's a trade secret... Anyone
with more info on this?

Also, their electrodes seem to be mostly focused on the occipital area. I get
the impression they'd be focused more on visual components than on cognition
and decision making (e.g. P300).
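Occipital placement does suggest visual evoked responses (e.g. SSVEP-style flicker detection) rather than cognitive components. As a toy illustration of that idea — not their actual pipeline — here's a naive FFT-based detector on a synthetic signal (sampling rate, frequencies, and noise level are all made up):

```python
import numpy as np

np.random.seed(0)  # deterministic toy signal

def ssvep_detect(signal, fs, candidate_freqs):
    """Pick the candidate flicker frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic "occipital" trace: a 12 Hz flicker response buried in noise
fs = 250
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
print(ssvep_detect(sig, fs, [8.0, 10.0, 12.0, 15.0]))  # prints 12.0
```

In an SSVEP interface, each on-screen target flickers at a different frequency, so detecting the dominant frequency in the occipital signal tells you which target the user is looking at — no decision-related components like the P300 needed.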

~~~
skummetmaelk
It doesn't even matter what the SNR is. EEG is an extremely low bandwidth
signal that measures average activity of BILLIONS of neurons. It's not
possible to do anything significant with it. These guys are just capitalizing
on hype and they will be able to build some toy demos, but never anything
really useful.

~~~
bobowzki
As an anesthesiologist working with EEG to monitor anesthesia, thank you for
this comment.

I wouldn't say you can't do anything significant; it's a bit like listening to
a computer with a stethoscope. You can certainly deduce something from fan and
PSU noise etc., but it's VERY crude.

------
oddity
What are the best publications, journals, twitter feeds, etc to follow if I
want to understand the state of the art in BCI? How closely does shipping
hardware track the state of the art? Is the research mostly industry driven at
this point or is it still blue-sky?

~~~
neural_thing
There are 3 serious companies - Kernel, Neuralink, Paradromics (Ctrl-labs does
not make brain interfaces, per se).

If you want to look at what's coming next - check out what DARPA is funding
through their NESD and N^3 programs.

The hardware currently being shipped is awful, including the device in the
original link.

I expect industry-relevant technologies to be showcased in under two years,
with commercial products following shortly.

There is a HUGE gap in time-to-market between invasive and non-invasive
technologies for regulatory reasons.

------
siljon
Imagine using this with CTRL kit wrist band

NextMind's interaction seemed very slow, so it would be difficult to use it
for things like moving the mouse or writing text. This might get quicker in
the future, where you could imagine whole words and have them written out, but
I think they'd need some kind of timer safeguard so that random words or
actions don't happen.

This is something I imagine the CTRL-labs wrist band could handle a lot
better, since it deals with hand motions and seemed quicker to respond.

So NextMind would be used for choosing what I want to interact with, while the
CTRL kit wrist band handles how I interact.

------
anonytrary
...that communicates at such a low bitrate, it isn't even practical for the
most interesting use-cases. When you see big claims like this and all they
have is an EEG, you realize these guys didn't do their homework. EEGs are
physically limited in their resolution. You'll only capture very large scale
oscillations, hence the low bitrate.
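One common way to put a number on "low bitrate" is the Wolpaw information transfer rate used in the BCI literature. A sketch with illustrative figures — the target count, accuracy, and trial time below are assumptions for the example, not NextMind's specs:

```python
import math

def wolpaw_itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection, scaled to bits per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# Illustrative EEG-class numbers: 4 targets, 85% accuracy, 3 s per selection
print(round(wolpaw_itr_bits_per_min(4, 0.85, 3.0), 1))  # prints 23.0
```

At ~23 bits per minute you're far below even a slow typist (a few hundred bits per minute), which is roughly the ceiling complaint being made here.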

~~~
quantics
I disagree. Even if you could record only a couple of bits (essentially being
able to click with your mind), wouldn't this be useful already?

~~~
nopriorarrests
If you still need to point to the right area of the screen using a trackpad or
mouse, and can only perform the click with your mind, I can't see how that is
useful.

~~~
nexuist
This is assuming you are operating a computer and not any other kind of
machine. It would be very easy to assign multiple actions to different click
patterns. For example, while driving:

- Click once to skip to the next song

- Click twice to go back to the previous song

- Click thrice to pause

While using any kind of industrial machine:

- Click once to emergency stop

While doing anything that involves both hands:

- Click once to perform an auxiliary action

Being able to click without having to physically perform anything can
significantly reduce reaction time. If this system can be made reliable, there
are a ton of dangerous or complex jobs that can be simplified and made safer
with this technology.
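The click-pattern idea above is easy to sketch as a small dispatcher; the action names and timing window here are invented for illustration:

```python
import time

class ClickPatternDispatcher:
    """Map the number of clicks within a time window to a named action."""
    def __init__(self, actions, window_s=0.6):
        self.actions = actions  # e.g. {1: "next_song", 2: "prev_song"}
        self.window_s = window_s
        self._clicks = []

    def click(self, now=None):
        """Record one detected 'mental click' with a timestamp."""
        now = time.monotonic() if now is None else now
        self._clicks.append(now)

    def resolve(self, now=None):
        """Count clicks inside the window, reset, and return the action."""
        now = time.monotonic() if now is None else now
        count = sum(1 for t in self._clicks if now - t <= self.window_s)
        self._clicks.clear()
        return self.actions.get(count)

d = ClickPatternDispatcher({1: "next_song", 2: "prev_song", 3: "pause"})
d.click(now=0.0)
d.click(now=0.2)
print(d.resolve(now=0.5))  # prints prev_song
```

The reliability caveat matters here: a false positive on the "emergency stop" mapping is itself a hazard, so a real system would gate single-click actions behind a confidence threshold.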

~~~
nopriorarrests
>there are a ton of dangerous or complex jobs that can be simplified and made
safer with this technology.

Honestly, this is a good point. However, just to play devil's advocate: for
emergency situations you don't need a real-time brain-computer interface. You
can just measure the adrenaline level or heart rate (which is trivially easy
with present tech), and once it spikes, you shut down the whole machine.

My point being, a complicated brain link is not needed to send what is,
essentially, an "SOS" signal.
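The threshold idea really is that trivial; a toy sketch (the baseline and spike ratio are invented numbers, not medical guidance):

```python
def should_emergency_stop(bpm_samples, baseline_bpm, spike_ratio=1.4):
    """Trigger if the latest reading jumps well above the wearer's baseline."""
    if not bpm_samples:
        return False
    return bpm_samples[-1] >= baseline_bpm * spike_ratio

print(should_emergency_stop([72, 75, 74], baseline_bpm=72))   # prints False
print(should_emergency_stop([72, 75, 130], baseline_bpm=72))  # prints True
```

Of course, heart rate also spikes from exertion or startling noises, so a deployed system would need to distinguish those from genuine emergencies — but the sensing side is indeed commodity hardware.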

------
Osiris30
Seems similar to the Swiss startup MindMaze. A 2016 intro article after their
initial fundraise:
[https://www.forbes.com/sites/aarontilley/2016/02/17/mindmaze...](https://www.forbes.com/sites/aarontilley/2016/02/17/mindmaze-raises-100-million-at-a-1-billion-valuation-for-neural-virtual-reality/)

And a more recent Wired article:
[https://www.wired.co.uk/article/mind-maze-virtual-reality-br...](https://www.wired.co.uk/article/mind-maze-virtual-reality-brain-stroke-patient-neuroplasticity)

------
stedaniels
Any brain-computer interface needs to be locally hosted, non-cloud-dependent,
and open source. A proprietary license can protect the code, but anything
touching the brain should be open, transparent, and auditable.

~~~
quantics
From what I can tell, it will be locally hosted and non-cloud-dependent. Don't
know about the open-source part, though (we can guess not).

Regarding the second part of your comment: an EEG device like this will have
access to _way less_ information (even with state-of-the-art signal processing
techniques) than, say, an Apple Watch or a Fitbit, which have access to tons
of biometric information (pulse, ECG, sleep data, constant geolocation, living
habits, plus much more if you also upload your weight and blood pressure
measurements to the companion app). Yet the vast majority of the public didn't
seem too concerned when those came out.

------
imvetri
If you aren't calm enough to listen to your own mind's voice and get it out,

putting an interface in there ain't gonna make any difference.

------
ericand
I'm surprised they didn't list testing the effectiveness of advertising as a
use case.

~~~
ALittleLight
They did discuss the case of an alarm becoming more insistent if a pilot
didn't notice it. I think it's a small hop from that to "We will only charge
you for the ads that people notice in our ad supported VR game" type stuff.

I assume you could also use this for ad targeting if you could get people to
watch the right content with their brain-reading VR helmets on. "This person
focused a bit more on the Cadillac driving through the screen than is typical.
Let's show them a Cadillac ad."

~~~
nexuist
>I assume you could also use this for ad targeting

You could also use this for ad blocking, replacing ads in your vision with
cute images of animals :)

------
NinoScript
What would I need to know to be able to play with one of these?

