
Soli - jonbaer
https://atap.google.com/soli/
======
jasonhong
I recently showed some videos of Soli in the HCI class I teach. Students
immediately hit upon the two major issues I wanted to discuss (I was pretty
proud!).

The first is learnability. A big problem with gestures is that there is no
clear affordance as to what kinds of gestures you can do, or any clear
feedback. For feedback, one could couple Soli's input with a visual display,
but at that point, it's not clear if there is a big advantage over a
touchscreen, unless the display is really small.

The second is what's known as the Midas touch problem. How can the system
differentiate between intentional gestures meant as input and incidental
gestures? The example I used was the new Mercedes cars that have gesture
recognition. While I was doing a test drive, the salesperson started waving
his hands as part of his normal speech, and that accidentally raised the
volume. Odds are very high Soli will have the same problem. One possibility is
to activate Soli via a button, but that would defeat a lot of the purpose of
gestures. Another is to use speech to activate, which might work out. Yet
another possibility is that you have to do a special gesture "hotword", sort
of like how Alexa is activated by saying its name.

At any rate, these problems are not insurmountable, but they definitely add
to the learning curve and detract from the reliability and overall utility of
these gesture-based interfaces.
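
The gesture-"hotword" idea can be sketched as a simple gate (a toy
illustration, not Soli's actual pipeline; the gesture names and window length
here are made up): ignore everything until a deliberate activation gesture is
seen, then accept commands for a short window.

```python
# Toy gesture-"hotword" gate: all gestures are ignored until a deliberate
# activation gesture is seen; commands are then accepted for a short window.
# Gesture names and the window length are hypothetical.

ACTIVATION = "double_tap_air"
WINDOW_S = 5.0

class GestureGate:
    def __init__(self):
        self.active_until = 0.0  # timestamp until which commands are accepted

    def handle(self, gesture: str, now: float) -> str:
        if gesture == ACTIVATION:
            self.active_until = now + WINDOW_S
            return "activated"
        if now < self.active_until:
            return f"execute:{gesture}"
        return "ignored"  # incidental waving does nothing

gate = GestureGate()
print(gate.handle("swipe", 0.0))     # ignored -- no activation yet
print(gate.handle(ACTIVATION, 1.0))  # activated
print(gate.handle("swipe", 2.0))     # execute:swipe
print(gate.handle("swipe", 10.0))    # ignored -- window expired
```

The same Midas touch trade-off shows up here too: the activation gesture
itself must be distinctive enough not to occur by accident.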

~~~
snewman
As predicted in the Hitchhiker's Guide to the Galaxy: "...an electric pencil
flew across the cabin and through the radio's on/off-sensitive airspace."

~~~
cannam
It certainly seems on-point.

"A loud clatter of gunk music flooded through the Heart of Gold cabin as
Zaphod searched the sub-etha radio wavebands for news of himself. The machine
was rather difficult to operate. For years radios had been operated by means
of pressing buttons and turning dials; then as the technology became more
sophisticated the controls were made touch-sensitive - you merely had to brush
the panels with your fingers; now all you had to do was wave your hand in the
general direction of the components and hope. It saved a lot of muscular
expenditure of course, but meant that you had to sit infuriatingly still if
you wanted to keep listening to the same programme."

This is from 1979 of course.

~~~
james_s_tayler
That's uncannily accurate.

------
GistNoesis
It is a nice piece of technology. It is a 60 GHz millimeter-wave radar. It is
a privacy nightmare. It is already shipped.

Radar uses electromagnetic waves (like Wi-Fi, but at higher frequency), so it
can go through walls. And while the typical range for gesture recognition is
less than a meter, it probably can go at least 10 times as far by boosting
the gain of the amplifier; it is not constrained the way a theremin would be,
because it is already working in the far-field region of the antenna.

Because it works at such a high frequency (but not so high that it can no
longer go through walls), it has many very small antenna arrays and can sense
sub-millimeter movements even from far away. It also has beam-forming
capabilities, meaning it can focus the direction in which it senses. Because
it is radar, things which move are of interest and are easily filtered from
the background.

Typically, this piece of technology already can, or will soon be able to:
sense how many humans are around it, where they are, how fast they breathe,
how fast their hearts are beating, and who they are by computing some
heart-based ID.

It is low-power and always-on, 360° with focusable attention. It is cheap
because it can be made on a chip. (Edit: fixing typos)

~~~
arghwhat
> 60 GHz millimeter-wave radar

> so it can go through walls

60 GHz RF doesn't really pass through _anything_ , which is the entire reason
that the frequency is used for radar. A radar needs powerful reflections to
detect things. Penetration impedes its operation, and would be akin to a
camera that is out of focus.

I'm also unsure if the resolution of this chip is noteworthy.

~~~
mlyle
Yup, the range claims are silly, too. Radar has an inverse fourth power with
distance, so to go 10x as far by adding power, you need 10000x as much power,
which is quite a challenge at 60GHz.
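
The arithmetic behind this, for anyone who wants to check it (standard
radar-equation scaling, nothing specific to Soli):

```python
import math

# Radar echoes fall off as 1/R^4 (inverse square out, inverse square back),
# so extending range by a factor k via transmit power alone costs k^4.

def power_ratio_for_range(k: float) -> float:
    """Transmit-power multiplier needed to reach k times the range."""
    return k ** 4

ratio = power_ratio_for_range(10)
print(ratio)                   # 10000
print(10 * math.log10(ratio))  # 40.0 dB of extra transmit power
```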

~~~
GistNoesis
You are right that radar follows an inverse fourth power with distance
(inverse square for the wave to reach the reflecting object, then inverse
square again for the reflected wave to come back to your antenna).

For $3,
[https://fr.aliexpress.com/item/32786483344.html](https://fr.aliexpress.com/item/32786483344.html)
gets you a 6 GHz radar that works out to 12 meters using 30 mA, so my range
claims are not that silly.

When there is no obstacle, 6 GHz vs 60 GHz doesn't matter.

~~~
mlyle
> When there is no obstacle, 6 GHz vs 60 GHz doesn't matter.

Power amplifier efficiency for 6GHz and 60GHz is not the same-- not even
close.

> For $3,
> [https://fr.aliexpress.com/item/32786483344.html](https://fr.aliexpress.com/item/32786483344.html)
> gets you a 6 GHz radar that works out to 12 meters using 30 mA, so my range
> claims are not that silly.

This is a deflection and ridiculous. We're talking about spotting e.g.
gestures. It falls apart with distance because of both angular resolution and
inverse-fourth power.

I've been involved in the design of mmwave radars. If it was easy to spot and
precisely track small objects at 10m, we'd be doing it...

~~~
GistNoesis
>If it was easy to spot and precisely track small objects at 10m,

That's not the claim I'm making. I agree that this chip won't do gesture
recognition at 10 m, but I'm quite convinced that it can pick up human
movement signals if someone tries to make it do so.

>I've been involved in the design of mmwave radars

I don't have this level of expertise. But I'll be really surprised if we
couldn't reach those levels of amplification. Gaining 12 dB will double the
range. We can extend the antenna array or use more expensive low-noise
amplifiers. For cars, there are 30 GHz Doppler radars, and the distance can
go to at least 50 m.
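
The 12 dB figure is consistent with the inverse-fourth-power law discussed
above; a quick sanity check:

```python
import math

# Doubling radar range at 1/R^4 costs 2^4 = 16x power, i.e. about 12 dB.
extra_db = 10 * math.log10(2 ** 4)
print(round(extra_db, 2))  # 12.04
```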

From the referenced paper (A Highly Integrated 60 GHz 6-Channel Transceiver
With Antenna in Package for Smart Sensing and Short-Range Communications,
[https://sci-hub.tw/https://doi.org/10.1109/JSSC.2016.2585621](https://sci-
hub.tw/https://doi.org/10.1109/JSSC.2016.2585621)): "In this work a 60 GHz
4-channel receiver 2-channel transmitter packaged chip targeting high
resolution sensing systems and capable of supporting large bandwidth
communication channels is presented. The SiGe technology used offers a low 1/f
noise which is essential to the functionality of the chip in frequency
modulated continuous wave (FMCW) systems and Doppler radar with a sensing
range below 10 m."

"While we have not explored this in-depth we would like to highlight the
similarities to recent work exploiting FMCW RF technology to coarsely ‘image’
users through a wall [1]"

~~~
mlyle
Your second quote seems out of context and doesn't occur in the paper you
cite.

Your first quote says "sensing range below 10m".

Yes, it is possible to make long-range 60 GHz systems -- largely through
antenna gain and lenses. Yes, we could build an entirely different radar to
track people's gross movement -- and could have 20 years ago, too -- but that
has largely nothing to do with the original system or your original claim.

If I wanted to image people at low resolution through a wall, 60GHz is about
the last thing I'd pick. Drywall alone has an attenuation of about 3 dB/cm,
and remember we have to cross the wall twice. You run out of loss budget
really quick. Suggesting one can go 10x as far (10000x power alone) and
through walls is... creative.
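
As a toy loss budget (the 3 dB/cm attenuation figure is the one quoted above;
the wall thickness is an assumed example):

```python
# Two-way loss through drywall at 60 GHz, plus the extra path loss of going
# 10x as far. The wall thickness here is a hypothetical example value.
ATTN_DB_PER_CM = 3.0  # drywall attenuation quoted above
wall_cm = 1.6         # assumed single sheet of drywall
crossings = 2         # the signal crosses the wall out and back

wall_loss_db = ATTN_DB_PER_CM * wall_cm * crossings
extra_range_loss_db = 40.0  # 10x range at inverse-fourth power

print(wall_loss_db)                        # 9.6 dB for the wall alone
print(wall_loss_db + extra_range_loss_db)  # 49.6 dB total to make up
```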

If you want to track people through a wall, use UHF. It works pretty well and
is pretty easy.

~~~
GistNoesis
>10000x power alone

Sorry to dig this up so late, but there is a common misconception when
reasoning with power (which you might suffer from):

What matters is not power; what matters is what we measure. We are measuring
an electric field in volts, whose square is proportional to the power. Going
10x as far means measuring a voltage 100x smaller. Consider just the dynamic
range: if your analog-to-digital converter is reading 8-bit values (values
going up to 256), then with a 100x voltage reduction you get a much smaller
blip, going from 0 to only about 2.
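
The voltage arithmetic here can be made explicit (a toy calculation; the
8-bit converter is the example from the comment):

```python
# Received power falls as 1/R^4 and measured voltage as sqrt(power),
# so 10x the range means a voltage 10^2 = 100x smaller.
range_factor = 10
voltage_drop = range_factor ** 2  # 100x smaller field at the antenna

# An echo that once filled an 8-bit ADC (0..255 counts) shrinks to:
full_scale_counts = 255
print(full_scale_counts // voltage_drop)  # 2 counts -- the "0 to 2" blip
```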

~~~
mlyle
What matters is signal to noise ratio. There is not really background noise in
UHF and up; instead there is thermal noise in the receive amplifier.

If you have SNR of 6dB over some integration interval which gave you
acceptable results, and you have 40dB of additional path loss, you're now
-34dB, and you need to make that up. There's no "taking half" because we're
talking about only a 100x difference in receive voltage.

Put another way, we already "took half" -- our SNR of 6 dB (quadruple the
power) was already only double the voltage. Our metrics for SNR already take
what you're talking about into account.

Dynamic range can be a factor, but you generally have some kind of automatic
gain control that increases effective dynamic range (so that in the absence
of signal, you have noise much bigger than 1 LSB showing up at the
converter). The conversion dynamic range "width" of the converter only
matters when there are other in-band transmitters around that you need to
reject (because they limit how far you can turn up the initial gain before
saturating the converter).

Note also, not that it relates to what we said at all-- you can detect signals
much smaller than 1LSB because of dither.
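
The dB bookkeeping in the first paragraph, spelled out (the numbers are the
ones used above):

```python
# SNR budgets add and subtract in dB: start from a working margin and
# subtract the additional path loss to see what must be made up.
working_snr_db = 6.0       # margin that gave "acceptable results"
extra_path_loss_db = 40.0  # 10x range at inverse-fourth power

shortfall_db = working_snr_db - extra_path_loss_db
print(shortfall_db)  # -34.0 -- the deficit that must be recovered
```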

------
lm28469
Not sure what to think of it. It seems awesome at first glance, but then the
examples are skipping songs and hand-waving Pokémon. Feels a lot like a
solution looking for a problem.

~~~
fasicle
Would be useful when cooking with mess on your hands and trying to scroll
through a recipe.

~~~
degenerate
This is the _one and only_ suggestion so far that makes sense. It would be
nice when cooking. If I was sitting in a bus/train/plane/concert/<public
space>, there is no chance in hell I am waving my hands at a phone. I would
even feel silly doing it at home.

~~~
magnamerc
You can extend this to mechanics/technicians. Manipulating a large screen when
your hands are full of grease.

~~~
rvnx
But can't the front camera do that with a simple gesture classifier?

~~~
lewiseason
Probably, but that doesn't sell $650 phones

~~~
rvnx
I like the privacy aspect: "it's not a camera, it doesn't take pictures."

It does send millimetre waves, though. Could that mean we could hack Soli to
see through clothes, like a full-body scanner does?

------
daxterspeed
I'd love to see PC devkits for this device. Something as simple as a USB 3
device you plug in and place under your monitor. Perhaps it will make its
first "PC" appearance in a Google Chromebook?

I can see a lot of _eccentric_ users figuring out interesting ways to
integrate many of these gestures into their workflow. Perhaps for navigating
in 3D space or switching between workspaces?

[edit] There used to be a developers page which showed that devkits exist.
[http://web.archive.org/web/20181110202503/http://atap.google...](http://web.archive.org/web/20181110202503/http://atap.google.com/soli/developers/)
Showcase video
[https://www.youtube.com/watch?v=H41A_IWZwZI](https://www.youtube.com/watch?v=H41A_IWZwZI)

~~~
throwaway_bad
It will just join the graveyard of gimmicky VR/AR motion controllers.

Leap Motion, Kinect, etc. Those can track precise skeletal gestures too.

I think the differentiation here is low-power, always-on operation, attached
to the phone, with a high field of view.

~~~
landa
The Kinect caused a huge wave of innovation. It was a convenient and low-cost
source of RGBD data, and many robotics labs got a few when it was released.

~~~
0xffff2
Could you expand on the result of said innovation? Where would I see Kinect
driven innovation in everyday life?

~~~
cushychicken
Much of that technology was improved and miniaturized, and incorporated into
the Apple Face ID bar at the top of the iPhone X.

~~~
Judgmentality
Personally, I feel Face ID is a step backwards from fingerprint readers. It
doesn't work as often for me (facial hair, hat, lighting, hoodie, and
sometimes it's just finicky), and there are privacy concerns. The fingerprint
reader isn't perfect, but for me it was better: I can touch it while pulling
the phone out of my pocket, and it's unlocked when I open it.

I'm frustrated with Google's choice to copy Apple and remove this feature from
the Pixel 4. It's literally the only reason I'm not buying one.

~~~
machello13
What privacy concerns are there?

~~~
Judgmentality
I guess it depends on whether or not you consider a 3D scan of your face
personal data. As face tracking becomes more prevalent, I'd say this is the
worst thing you could voluntarily give away. Your face can be scanned just by
walking through a crowd in public; a fingerprint is only usable when you
physically touch something (and the digital version is always prompted, so
you can't be identified in a crowd unexpectedly like you can with a face).

~~~
machello13
I would agree facial recognition in a general sense has privacy implications —
but in the case of Face ID I think the implementation is sufficiently secure.

~~~
Judgmentality
I agree Apple is one of the few companies that takes security seriously. That
said, it only takes one hack or leak to compromise your biometric data
forever. You can't change it as easily as a password. And you're voluntarily
giving it up... for what? A worse unlocking experience? If it weren't for the
novelty factor, I am confident most people would not use it -- probably why
the hardware is removing it as an option. Also, it's no secret those
companies want the extra data for training better models, and this is a
"free" way to get really high-quality data.

As far as things to bemoan in tech this is pretty low on the totem pole. I'm
just annoyed because I wanted to buy a new phone and can't find one that has
what I want at any price.

EDIT: I saw this article seconds after finishing this comment, which seems
ironic.

[https://www.bbc.com/news/technology-50080586](https://www.bbc.com/news/technology-50080586)

~~~
machello13
I think you should read up a bit more on the technical implementation of Face
ID. The data never leaves the phone and, even if an attacker had physical
access to the phone, they could not get the information. Apple is getting no
training data from it.

> If it weren't for the novelty factor I am confident most people would not
> use it

I think this is really incorrect. You really think everyone is using Touch ID
and Face ID only because of the novelty factor, and not because it's
significantly more convenient (and, at least in many cases, more secure)? That
if it wasn't "fun", everyone would be completely okay going back to entering
6-digit passcodes?

~~~
Judgmentality
> You really think everyone is using Touch ID and Face ID only because of the
> novelty factor, and not because it's significantly more convenient (and, at
> least in many cases, more secure)? That if it wasn't "fun", everyone would
> be completely okay going back to entering 6-digit passcodes?

I meant compared to using a fingerprint reader, but even then, yes. I
returned an iPhone because it failed to unlock in the dark often enough that
I just got sick of it. My partner complains of the same thing, and vows to
buy a cheaper phone next time because of it.

> I think you should read up a bit more on the technical implementation of
> Face ID. The data never leaves the phone and, even if an attacker had
> physical access to the phone, they could not get the information. Apple is
> getting no training data from it.

Thank you for this, it's useful. It still doesn't convince me that it's
impenetrable, though, just that the data would be incredibly difficult to
obtain.

~~~
landa
Face ID works way better for me. I couldn't get the fingerprint reader to
recognize my stupid sweaty fingers sometimes, but Face ID always works for me.

You can unlock your phone in the dark. It doesn't rely on visible light, as it
shines its own infrared light on you. You were probably holding your phone too
close to your face.

~~~
Judgmentality
I know how the sensor array works - I've personally built structured light
sensors for robots.

It doesn't work for me. I'm glad it works for you.

------
mortenjorck
As is often the case, this is some very interesting technology, but for now,
we’ll only see it used in some novelty applications.

An increased level of spatial awareness for phones will be huge in the coming
decade. However, it will almost certainly be a result of sensor fusion between
a Soli-like radar sensor, a FaceID-like ToF sensor, enhanced positioning and
pose detection, RGB cameras, microphones, and a lot of ML to assemble a
comprehensive picture of environmental context and user intent.

Radar is one more piece of the puzzle in building products that can read the
same cues we naturally use to communicate with other humans. Instead of
telling a voice assistant "Alexa, turn down the volume," where you have to
use a phonetic trigger and all the system has to go on is audio, imagine
something more natural: you look in the direction of the hardware, say "turn
it down a bit," and make a pinching gesture with your hand. The system can
assemble all these pieces (you were looking at it, you spoke in its
direction, you gestured) and, with a sufficiently trained neural network,
make a more conclusive determination of your intent.
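
That kind of multi-cue fusion can be sketched very crudely (a toy
illustration with made-up confidence scores and weights; in practice this
would be the trained network, not a fixed linear blend):

```python
# Toy sensor-fusion sketch: each modality reports a confidence in [0, 1]
# that the user is addressing the device, and a fused score gates whether
# the gesture is acted on. All names and numbers here are hypothetical.

def fused_intent(gaze: float, speech: float, gesture: float,
                 weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of per-modality confidences in [0, 1]."""
    return sum(w * s for w, s in zip(weights, (gaze, speech, gesture)))

# Looking at the device, speaking toward it, and pinching:
print(fused_intent(0.9, 0.8, 0.85) > 0.7)  # True -> turn the volume down

# Pinching while looking away and silent (the Midas touch case):
print(fused_intent(0.1, 0.0, 0.85) > 0.7)  # False -> ignore
```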

------
spectramax
Consider those who grew up with remote controls of this kind:
[https://i.imgur.com/AIqz63k.jpg](https://i.imgur.com/AIqz63k.jpg)

After a few days, the user develops a muscle memory of sorts. The user
doesn't even have to look at the controller, and all actions (and feedback)
are executed through the tactile interface. From cockpits to nuclear power
plants to the home TV remote control, there is absolutely nothing that
replaces physical buttons, encoders, sliders, and toggles.

I haven't formally studied UI/UX, but these are important:

- Feedback for an action

- Predictable steps to take an action (muscle memory)

- Fast response

- Expose current state (sliders and toggles do this)

There should be 0% ambiguity, or the user gets frustrated. Any piece of
technology that puts impedance in this process is no fucking good. The user
shouldn't have to "guess and wait" to find out whether the device recognized
their swipe gesture. A physical button _guarantees_ that the action was
performed, by means of feedback. Nope, sound feedback or taptic stuff still
isn't as good as the click of a button. It can be, but no one engineers it
well; for example, the MacBook trackpad that "clicks" without moving is
excellent.

Seeing touch screens (one exception: the phone), capacitive buttons, gesture
controls, etc. everywhere makes me sad, because it has nothing to do with UX
and everything to do with the bottom line (cost) and marketing, and in this
case perhaps better ad tracking? I will put this in plain words: don't trust
a company that sells ads at the same time as building hardware. Either sell
ads or sell hardware, not both. Google already serves software which relies
on trading off privacy (even if it is anonymized). When it comes to hardware,
I freak out, and there is no way in hell this thing sits in my home.

~~~
Fernicia
>there is absolutely nothing that replaces physical buttons, encoders, sliders
and toggles

Even space, versatility, & cost aside, there are definitely tasks that a touch
screen does better. Using a map with a touch screen is incredibly intuitive
compared to a mouse. (I cannot imagine using sliders or other "physical"
interfaces)

>A physical button guarantees that the action was performed by the means of
feedback

So you've never experienced pressing a button on a TV remote and nothing
happening? On touch screens I can see whether the app responded to my
interaction. On many button interfaces I cannot.

>taptic stuff still isn't as good as the click of a button

Not sure why you're so confident with this when most people I tell are
surprised that their new Macbook touchpad is entirely haptic and not actually
moving.

>Don't trust a company that sells ads at the same time as building hardware

So by this definition Apple is worse than Google for privacy?

~~~
Crinus
> Using a map with a touch screen is incredibly intuitive compared to a mouse.
> (I cannot imagine using sliders or other "physical" interfaces)

What sort of map use do you have in mind? I find panning (drag and move) and
zooming (mouse wheel) maps with the mouse very intuitive.

~~~
carlinmack
Swiping and zooming a map on a phone or tablet is quicker and far more
intuitive than clicking and dragging with a mouse. You can't scroll to a
precise depth without moving your mouse to a specific UI slider, whereas with
touch you control it with the spread of your fingers.
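
The "spread of your fingers" control is just a ratio of touch-point
distances (a toy sketch, not any particular map library's API):

```python
import math

def pinch_scale(p1, p2, q1, q2):
    """Zoom factor from two touch points moving from (p1, p2) to (q1, q2)."""
    return math.dist(q1, q2) / math.dist(p1, p2)

# Fingers spread from 100 px apart to 200 px apart: a continuous 2x zoom,
# with no need to reach for a separate slider or discrete wheel steps.
print(pinch_scale((0, 0), (100, 0), (0, 0), (200, 0)))  # 2.0
```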

------
ranie93
The "dial" gesture is incredibly subtle. Kudos to them if it works reliably.

MKBHD doesn't seem convinced of the efficacy of the sensor* from his Pixel 4
video:
[https://youtu.be/sKJ4i7p-o-4?t=326](https://youtu.be/sKJ4i7p-o-4?t=326)

* From Analemma_'s comment below, the Pixel 4 doesn't seem to be running the full-blown chip.

~~~
Analemma_
Ars' report [0] seemed to indicate that the gestures work great with the full-
size Soli chip, but not the miniaturized one they had to cram into the phone.

[0]: [https://arstechnica.com/gadgets/2019/10/pixel-4-hands-on-
pro...](https://arstechnica.com/gadgets/2019/10/pixel-4-hands-on-project-soli-
just-seems-like-wii-waggle/)

------
doctoboggan
I just watched MKBHD's Pixel 4 video, and he said the wave gestures worked
maybe 10% of the time. If that is true, I hope it is a bug, as shipping
something like that should never happen.

------
danicgross
Interestingly, this caused the Pixel 4 to get banned in India:
[https://www.androidpolice.com/2019/10/15/india-wont-get-
the-...](https://www.androidpolice.com/2019/10/15/india-wont-get-the-google-
pixel-4-as-soli-fails-to-secure-governments-nod/)

~~~
s_y_n_t_a_x
Radar is just turned off in Japan. I wonder why India banned it.

~~~
drusepth
Save you a click:

>The radar uses a 60GHz frequency band to attain the advertised accuracy, and
that’s exactly where the problem lies. India has reserved this mmWave band
only for military and government use for now and it needs to un-license this
frequency before allowing civilian use for applications like Soli.

>The report adds that Google did consider disabling the radar for the units
sold in India, but it still wouldn’t have guaranteed a sales permit, and
removing the hardware wasn’t an option.

------
matthberg
I wonder how this will impact fingerprinting of users from a privacy and
security perspective. It could be useful as a means of identity verification
based on physical properties of the user beyond just their fingerprint or
iris. Yet it would also be a massive privacy concern, particularly since it
advertises 360° sensing.

------
simonebrunozzi
I'm Italian, and I can tell you us Italians have waited for this for at least
two thousand years!

We can finally stop pretending our sounds are the language, and get back to
using only gestures :)

------
throwaway5752
Oh my god, an extraordinary technical feat, but _do not want_. Seriously,
someone go out there and charge $1000 for a dumb 50" television. Give me a
phone that doesn't have an assistant or this Soli and I will pay a premium
for it.

------
shantly
Gonna require "the cloud" so they can collect free machine learning model
training data and spy on us, I assume?

~~~
pacala
No, it’s doing all the heavy lifting on device, only sending the compressed
output to the cloud for spying, err, offering you a better service.

~~~
shadowgovt
TBH, when this data is shipped up to the cloud, it may be used for
customization that _could_ be fed to the ad engine, but it's especially
important for refining the algorithm.

A lot of the leaps in high-fidelity human-computer interaction (voice, face,
and likely this new gesture system) have been made by having enough data
about real-world interactions to train the models. It's how a company finds
out about the ten thousand things that happen in the real world that its lab
models missed, and gets its algorithm from 90% accuracy to 99.9% accuracy.

------
fortran77
It would be nice to bring "hover" back to touch applications. You can have
hover with a mouse but not with your finger. Good for tooltip-style help.

------
arxpoetica
Relevant cross post from Bret Victor and hands:
[http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...](http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/)

This technology seems promising in the face of his ideas.

~~~
angleofrepose
A step in the right direction, maybe, but it's missing half of the equation.
And this seems like a space where the whole is greater than the sum of its
parts: we aren't halfway there. You may be using your hands to gesture, but
you aren't feeling or manipulating anything. Every single picture in that
post has a physical thing in the hand. This product never has a physical
thing in the hand.

------
lalos
They should use this technology to charge for YouTube video ad views
depending on how many people are watching the ad on a phone. It will be
interesting to see how multi-user presence gets used as an API.

~~~
akersten
Or prevent a show from being screened to more than two people! Imagine how
many missed royalties can now be collected! With Google Pay integration, the
user can even be charged automatically to avoid the inconvenience of having to
apply for a screening license.

~~~
ssully
I know you are joking, but I am pretty sure this was a patent Microsoft filed
in conjunction with Kinect. I could be wrong about whether it was Microsoft
or not, but it was definitely about a camera noticing how many people were
watching a digital product/ad.

~~~
skeptical900067
It was Microsoft.

------
boltzmannbrain
Slightly off topic: Where can one find these minimalist cartoons/sketches of
people for websites? Ideally free, but those options appear super cartoonish
and limited.

~~~
kirmerzlikin
something like this - [https://www.humaaans.com/](https://www.humaaans.com/) ?

~~~
boltzmannbrain
Perfect, thank you :)

------
bityard
Mods: can we change the headline to include a few words on what a soli is?

------
stefan_
So, um, where is the technology that lets my phone float in free space as I
execute these gestures, like in the video?

~~~
ben_w
Well, there is this if levitation is _really_ a must-have feature for you:
[https://www.amazon.co.uk/MASUNN-Magnetic-Levitation-
Revoluti...](https://www.amazon.co.uk/MASUNN-Magnetic-Levitation-Revolution-
Technology/dp/B074Z59C6Z/ref=mp_s_a_1_3?keywords=magnetic+levitation+display&qid=1571240589&sr=8-3)

------
flmontpetit
How does waving your hands in the air "feel more human" than using your
opposed-thumb-equipped hands to interact with some material object?

------
lucideer
> _Private_

> _Soli is not a camera and doesn’t capture any visual images._

This is a privacy misconception that really needs to die (something also
discussed in the W3C ambient light sensor thread recently on HN frontpage).
Sensing involves privacy implications. By. Definition.

~~~
pdpi
We don't even need to resort to "by definition" here.

They describe the tech as "detect[ing] objects and motion through various
materials". We're effectively talking about the equivalent of TSA full-body
scanners. How's that for privacy concerns!?

Also, for a bit of pedantry: if this is literally "radar" as they say it is,
then the argument that "Soli is not a camera and doesn’t capture any visual
images" is arguing semantics, at the level of splitting hairs over which EM
frequency ranges count as a camera and/or visual imagery.

~~~
aidenn0
The site doesn't say what band they are using, but their research articles
contain a lot of work in the 60 GHz range, which is certainly a high enough
frequency to capture images, particularly if you use synthetic aperture
techniques (you can already do inertial movement sensing on a phone to aid
with this).

------
teddyh
Remember the gorilla arm.

[http://www.catb.org/~esr/jargon/html/G/gorilla-
arm.html](http://www.catb.org/~esr/jargon/html/G/gorilla-arm.html)

------
6gvONxR4sf7o
I think a killer app for this could be small screens. You could have a UI
with pseudo-"buttons" around the edge of the screen, large enough to see but
too small to actually press, letting the interactions take place just off
screen. This could effectively make a 2x2-inch screen like a watch a couple
of inches larger in terms of the interactions available. Right now my watch's
design is almost entirely constrained by having UI elements big enough to
easily touch, taking up precious screen space.

------
melling
Google finally ships a research project in a real product, and people don't
seem to understand the usefulness.

Human nature for some reason often devolves to "what's the point?"

Here's the 2015 announcement. There's a lot of potential. Maybe we'd all
benefit from "imagine what's now possible."

[https://youtu.be/0QNiZfSsPc0](https://youtu.be/0QNiZfSsPc0)

~~~
meowface
It's really cool and innovative technology, but I also struggle to see the
point for the common user. If this were a computer monitor or projector
screen, sure. But my phone is pretty much always in my hand if it's being used
in any way. If it's already in my hand, what do I gain from this?

I can see how it'd be very useful for people who mount their phones on some
sort of distant stand. But I feel like that's pretty uncommon; though maybe
I'm just ignorant of how common it actually is. It's probably much more common
for tablets, like for watching videos/movies at a distance. I do that, and I
could definitely see myself using this with a tablet. But for something small
that's pretty much always either in my pocket or in my hand?

~~~
kllrnohj
Your phone isn't _always_ in your hand. In fact, your phone isn't in your hand
for the vast majority of the day most likely. And it's in those scenarios that
something like this could be cool if it works well.

If it can detect your presence before you actually pick it up it could do
things like fire up the face auth systems to give quicker unlock. It can auto-
lock if you set it down and walk away without manually turning it off or
waiting for the timeout. If the alarm is going off it can lower the volume if
it detects your presence before you actually dismiss or snooze the alarm.

There's all sorts of possibilities for all the times where you are not holding
your phone. If it works well, that is.
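
Those scenarios amount to a simple event-driven policy; a toy sketch (the
event names are made up for illustration, not Soli's actual API):

```python
# Hypothetical presence events driving phone behavior.
def on_radar_event(event: str, phone: dict) -> None:
    if event == "user_approaching":
        phone["face_auth_prewarmed"] = True  # quicker unlock on pickup
    elif event == "user_left":
        phone["locked"] = True               # no timeout needed
    elif event == "user_reaching" and phone.get("alarm_ringing"):
        phone["alarm_volume"] = "low"        # soften before dismissal

phone = {"alarm_ringing": True}
on_radar_event("user_reaching", phone)
print(phone["alarm_volume"])  # low
on_radar_event("user_left", phone)
print(phone["locked"])        # True
```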

------
earth2mars
Why is no one talking about the health and safety of this technology? Not
that it will necessarily cause any issues, but I'm interested and curious:
are there any peer-reviewed studies on how prolonged exposure to 60 GHz in
close proximity to the body affects it? If there are none, I would love to
jump on the ship, because it is so cool! But if there is something, or there
is simply no evidence either way, then the community or the experts should
talk about it and push for proper studies, before folks exposed to Soli get
cancer ten years from now!

~~~
igornadj
There's nothing special about 60 GHz; it's just like any other radio
transmitter that's been around for decades. The safety is well understood,
and apart from some pseudoscience around it (as there has been for every new
tech, including microwave ovens, cellphones, 5G, etc.), there are no
concerns.

------
llamataboot
I've tinkered around with gesture recognition with the Magic Leap quite a bit
for art projects. I hope this gets released as a standalone product with an
SDK!

~~~
tastroder
According to the Google cache, they had an SDK at one point [1] -- guess that
went down the drain after monetization. The chip itself seems to be an
Infineon product [0] you could buy, but the interesting part will likely be
the software stack. From a research perspective this is certainly cool tech.
But it seems kind of odd to have an ad/landing page for an aspect of their
tech that you can only get as a gimmicky phone feature. All the "it's not a
camera" wording makes me feel like this is just a marketing campaign to say
"see, we're not creepy".

[0]
[https://www.infineon.com/cms/en/product/promopages/60GHz/](https://www.infineon.com/cms/en/product/promopages/60GHz/)

[1]
[https://webcache.googleusercontent.com/search?q=cache:8OjB0R...](https://webcache.googleusercontent.com/search?q=cache:8OjB0RVLk5IJ:https://atap.google.com/soli/developers/+&cd=1&hl=en&ct=clnk)

------
rhacker
I think this is better for something like a Nintendo. To put it in a phone is
kinda invasive. I would hope that at minimum the chip can only be specifically
activated by the user in specific apps. To just have it on all the time is
kinda crazy. Are they going to disclose when they can identify WHO is using it
rather than just multiple bodies, etc...

------
Meai
I think this could be synergistic with voice commands. It won't be useful on
its own, but sometimes you don't want to say every command out loud, so doing
a little annoyed wave feels more natural. Soli on its own is not going to sell
phones, in my opinion, but maybe Google is building up to a larger, more aware
device that can ultimately guess your entire mood, behavior, and position in
the room and do various things based on that. Like a sort of virtual person
that would detect if you look worried and then ask you how you are feeling
today. That's a futuristic direction that I can see this going towards. For
example, one use case that I would probably like is a phone that tracks my
posture during workouts and, if it's really advanced, could give me advice or
fire me up on my last reps. Like a personal trainer. Right now it seems we
aren't anywhere near that, but this is my guess at their strategic direction.

------
classified
So now Pixel phones have the perfect spy chip and they call it "private"
because it doesn't record images. But then they don't say what data it records
and what they do with it. What they do have is boatloads of stupid buzzwords
for the dumbest phone user imaginable. That doesn't bode well at all.

------
aantix
I might start sweating if it takes that much energy just to move to the next
track.

Edit: Not sure why this got voted down? Hold your arm in an upright position,
swat to the side a few hundred times, you would feel at the very least, muscle
fatigue.

For repetitive tasks, you want efficiency of movement, not dramatic movement.

~~~
drusepth
I think you're getting voted down because "holding your arm in an upright
position and swatting to the side a few hundred times" seems like an
exaggeration of what's required to "move to the next track".

I would imagine to move to the next track in a song you'd be using an action
from the "Active" demos at the bottom, which may require some kind of initial
motion to enter the "active" state, but also seem like they'd work with your
arms resting at their sides, and be literally almost no energy whatsoever to
e.g. swipe your thumb along your index finger once.

------
Copenjin
Note at the bottom:

> 1) Pixel 4’s Soli implementation does not include all capabilities and
> interactions described here

~~~
rtkwe
I wonder if that's a software or hardware limitation.

~~~
Copenjin
I hope it's hardware or they would be selling vaporware.

------
berdon
This seems an awful lot like the eye/head tracking in the Amazon Fire phone
and that went over super well.

This type of input makes sense in certain applications/platforms but a phone
isn't one of them - at least not with current phone usage patterns.

------
yalogin
Why is this needed? I can see there are some niche needs like when you are
cooking and cannot touch the screen. But that is what voice is for. What
exactly is Soli solving? Looks like over-engineering to me, simply because
they can.

~~~
sturmeh
Let's not pretend voice isn't also a huge novelty.

Yes, the gestures are mostly a novelty, but the fact is, this is a radar, not
a gesture sensor.

It would be capable of determining if you're in a dangerous situation (by
sensing assault / weapons), it could help a blind person avoid walking into
walls, or simply chime when it falls out of your pocket.

Some of these have very high power requirements which is why it's not entirely
practical to bake it into the phone as a common feature, but they're all
entirely within the realm of possibility.

------
m3kw9
The screen is right there but you want me to not touch the screen to interact.

~~~
rtkwe
You can still do it the normal way. It's a neat idea, and new interaction
methods coming up mean we can experiment and maybe find things that work even
better. Soli, if they work out the kinks, would be great because you could
increase the interaction area available without having to make a bigger
device.

------
suyash
I'm curious if there are any health studies done around this. What about the
dangers of EMF radiation when it's on all the time so close to your body, not
to mention in your pocket all the time?

------
theAS
This doesn't make sense for phones. But when it becomes reliable, it will be a
good use case for smartwatches. A watch already has less screen area and
limited options for button input, so gestures make sense.

------
tomekjapan
Lots of negativity in the comments here, but I personally would love to have a
way of interacting with the phone even when my fingers are wet or dirty.

------
puranjay
> Soli is aware of your presence.

I can't wait until I get my official Google-Amazon Big Brother living room
portrait with built-in Soli and Alexa

------
nexuist
>Motion Sense will be available in select countries.

Just...why? What is the point of region locking this? It's only hardware,
isn't it?

~~~
opencl
Google does love pointless region locking but this is probably due to RF
certification requirements from each country's equivalent to the FCC.

------
glenvdb
In an earlier iteration of the web page they had really cool examples of
controlling sliders, radial dials, panning surfaces, highly specific pinch
gestures etc.

All of those have been removed and now it's just waving. It's cool tech and
requires solving lots of hard problems, but it's a shame it falls so short of
the initial vision.

~~~
sharcerer
Still there, some of them.

------
luizfzs
I was impressed when I saw it back in 2012. It took them a while to ship it
into an actual product.

------
jiofih
The technology is amazing, as seen in previous demos, but how much of it is
actually possible in the phone? The video is very underwhelming, showing a
simple wave motion that was possible using IR; Motorola had it even before
smartphones, over a decade ago.

------
anderspitman
All I want is a small Bluetooth device (maybe wrist-mounted) with a few
buttons, a slider, some sort of rotary sensor like a mouse wheel, and
accelerometer. Then give me complete control in my phone over how the software
interprets the actions.

------
taf2
It will be so fun to watch my 2-year-old play with this. If it's good, we'll
be happy; if it's bad, it'll be like the iOS 7 upgrade from 6, when my other,
then-2-year-old, was so frustrated she threw the iPad down the street,
shattering it into many parts.

------
rland
I could see this being extremely useful in the health space, since you could
interact without touching anything and transmitting bad bugs.

------
reaperducer
Meanwhile, deep in the bowels of Google...

    
    
      if (handwave.slowdown > 0.10) {
          flag("arthritis");
          alert(advertisers);
      }

------
est31
This is a beautiful piece of tech for any ad company looking to collect
reaction information to ads (like google):

* It works at low power. Turning on the front camera and processing the visual data generated needs large amounts of power.

* It can't be blocked as easily as a front facing camera.

* People are less aware that they are being surveilled, which makes them less motivated to block it.

* Yes, it's not capturing any facial expressions, but the information generated probably still gives valuable cues to advertisers.

Also, get ready for unskippable ads pausing when you aren't looking into the
direction of the smartphone.

~~~
axaxs
Ugh. I remember when Black Mirror was viewed as creepy and dystopian...

See:
[https://en.wikipedia.org/wiki/Fifteen_Million_Merits](https://en.wikipedia.org/wiki/Fifteen_Million_Merits)

~~~
shantly
Fifteen Million Merits is one of their episodes that's _not_ about the future
at all, the way I read it. More of an allegory about _now_ (or, _now_ in 2011,
anyway).

~~~
axaxs
That's an interesting take that I could completely see now.

When I watched it, I guess I took it more at face value, that in the future
when everything is automated, only celebrities and 'power generators' will
exist, and of course the rest of the stuff like unskippable ads and in-your-
face pornography.

~~~
shantly
I think the lack of "enforcers" of any kind throughout the episode is key to
understanding it. The closest thing we really see is the Cuppliance drink. The
stage hands hesitatingly start to step in near the end, but are waved off and
aren't exactly jack-booted thugs anyway.

Those demoted aren't dragged away crying. Our protagonist does property damage
without apparent censure (one is left to suppose the screens get replaced,
eventually, while he's out, maybe or maybe not with a charge to his merit
account, similar to the way "detritus" is taken care of).

This is why I don't think there's much room for ambiguity re: whether the
outdoors at the end is real. If it's not then that's undercutting what's been
the key theme of the rest of the episode, of timid acquiescence to a very
_gently_ coercive system. If it's real then that's the punch line of the whole
thing—these people could just leave, and they don't because of some
combination of social norms, fear of the unknown, fear of losing what
(pointedly crappy and meaningless—the avatar skins and such) comforts they
have, and (one may assume) some soft but effective persuasion techniques that
might come up if they tried. Since assuming it's the latter is consistent with
the rest of the episode and makes it much stronger, it doesn't make much sense
that it'd be anything else or even be deliberately left ambiguous (why?).

It's all about how control, and a "trapped" state, can occur absent anyone
waving guns around. With more than a little commentary on social class,
celebrity, electronic entertainment, and the hollowness of participation in an
economic system and society heavy on alienation (the bikes), along the way.

[EDIT] I just skimmed the first few minutes again because I remembered there
being a whole thing about fruit, and not only that, the opening's _full_ of
cartoony imagery of the green outdoors. The outdoors at the end is definitely
intended to be real, not fake or even ambiguous, unless we assume incompetence
on the part of the creators. It's the antithesis of the wholly false, not-
even-trying-to-look-real outdoors scenes that are all over the beginning,
which _did_ leave room to wonder what the world's like outside this
environment—the ending gives us the answer.

~~~
axaxs
I've always been jealous of people who can see so deeply into things, I
absolutely cannot unless it's pointed out. I think my mind wanders off when
watching, so I miss the details. In any event, thank you very much for the
write-up, this makes that episode so much more intriguing to me.

~~~
shantly
Haha, no problem, but it's mostly just practice. The key thing seems to be
recognizing that there's anything worth picking out in a piece of art to begin
with—those with little practice don't notice it anywhere, or else will see it
everywhere because of course it's possible to bullshit deep themes into an
average episode of NCIS if you're so inclined—then questioning _why_ for
various decisions. If you're not sure what something means in a thematic or
thesis sense, look at which possibility or possibilities go best with the rest
of what you've seen (or read, or heard—works for books and music, too).

FWIW I find most of the rest of Black Mirror a lot less _rich_ than Fifteen
Million Merits, though I like most of it. I couldn't write as much on most of
the episodes as that one, because I'm just not sure there's as much there.
Most are a fairly straight route up to usually some kind of twist, with a
little worth saying about the relation of the twist to the rest but mostly
what they're up to is pretty _on the surface_. Nothing wrong with that, and
the fact that it consistently tries to say _anything_ at all puts it above
most TV, so far as that goes.

To expand a little into other episodes, though, to give some more idea of what
I mean by asking _why_ about choices creators make in art: take The National
Anthem, for example. When a Certain Pivotal Scene (you know the one) happens,
the creators could have depicted it several ways without changing the actual
story. A few possibilities: 1) actually show it, entirely, 2) show it but crop
out the worst of it, 3) keep the camera nearby, say on people in the room or
in the next room over, or on the fictional production crew "shooting" the
event, 4) any of the above but show only a little and cut away, 5) just skip
straight to after, don't have any footage that takes place during it at all,
among other options. There was a _choice_ to make, and the one that was made
isn't exactly revolutionary, but also isn't one of the most obvious ones: keep
the camera going for all of it (or maybe just most? I can't recall for sure)
after showing us every moment at the scene of the event leading up to it, _but
only show the faces of people watching it on TV_. Why? It wasn't the only way
they could have avoided putting the event itself onscreen, why do it that way?

Showing people watching a screen, especially something sensational that's been
teased from the very first scene of the episode, and holding the camera on it
that long does a couple things, I think: in the general case, it invites
identity with or comparison between the real life viewer and the viewer in the
show; in this particular case, I think the show is also delivering on a kind
of implicit promise of horrible spectacle by _showing us the most disgusting
thing that's happening in that moment_, by the creator's judgement. Anyone
watching the show, even if they hope the event won't happen or that it won't
be depicted at all if it does (which is hopefully most people?) _was still
held in suspense and to some degree entertained_ by that will they/won't they
dynamic. The result is that this choice both casts judgement on the viewers-
in-the-show _and_ prevents the real-life show watcher from casting them as
Other and distancing themselves from those voyeurs. This option was chosen by
the show's creators because it conveys a message different from what others
would have (most of the other options above wouldn't convey much at all,
without some further effort, but would advance the plot just the same) and
does so effectively. That single choice causes multiple effects on and
messages to the viewer, working toward one end.

To take that a step further, both reinforcing the likelihood that this was
intentional and possibly adding insight into other episodes, the show
_repeatedly_ returns to themes along these lines, of the viewer-as-complicit.
White Bear's a huge one, obviously, featuring punishment essentially for the
act of watching a crime (and filming! The show also loves to jab at itself and
its creators pretty viciously, as Fifteen Million Merits manages to also do in
addition to everything else) with a kind of Greek hell of... having crimes
done to them while "normal people" watch and record, which is called justice.
A couple others tread similar ground—Shut Up and Dance, notably.

Moving into more tenuous territory: San Junipero _may also_ be doing something
like this, forcing the viewer into more direct confrontation with some message
in it by identifying them with some element of or character in the show, at
the end, just a bit. The long sequence of server farms and robot arms at the
end definitely adds a sense of melancholy and unease over the happy ending and
neatly reframes our own emotions about the episode, which is probably the main
thing it's intended to do, _but_ that also looks an awful lot like any modern-
day server farm. Like where Netflix episodes come from. Like the episode you
just watched, finding temporary joy in this fiction while sitting, apparently
lifeless... and oh look it's suggesting another show or movie, how nice, this
could just go on forever, couldn't it? And that's the season Black Mirror
transitioned to Netflix, I'm pretty sure.

That's quite a stretch from the pretty clear use of those themes and mechanisms
in other episodes and I wouldn't say I'm anywhere near as sure of it as some
of the other stuff, since it's pretty abstract and there are other, stronger
ways to explain _why_ the scene is there and why it looks the way it does...
_but then again_ Bandersnatch came out a few years later and hits some of the
same notes _explicitly_, which makes it just a tad less likely that that
_wasn't_ on the creators' minds when they decided how the end of San Junipero
should go, and a little more likely that that connection _was_ intended.

------
lelf
> _Motion Sense will evolve as well. Motion Sense will be available in select
> countries²._

What does this have to do with the country?!

~~~
freehunter
According to other comments, various countries have banned the technology. So
that's why it's not available everywhere.

------
prirun
I wish Google would figure out how to do speech-to-text without mistakes
before they start dabbling in radar.

------
gkfasdfasdf
I wish they could use this to turn off touch input on the screen when I'm not
looking at the phone.

------
hiccuphippo
This reminds me of the cellphone surveillance system from that one Batman movie.
Scary stuff.

------
imglorp
I wonder if that's proximity sensors or always-on gesture recognition doing
image processing off the front camera.

Either way, gesturing left/right/up/down doesn't seem to add more than
touching the device. My Moto G6 has that now, using the fingerprint sensor as
a directional swipe button, love it.

~~~
Klathmon
They specifically point out on the page that it's not a camera, doesn't
capture any visual images, even works through some solid materials, and can
sense in all directions around the device.

This looks literally perfect for something like a smart display. I have a
google home smart display on my desk, and I like to keep it back far enough
that it's a bit of a reach to touch the screen and interact with it. If I
could just wave in front of it, that would improve things significantly! But
add in the other benefits of something like this (just being able to sense
where a person is around a device like a smart speaker or display seems like
it could be extremely useful for better sound projection and better microphone
listening), and it seems like it really could be a game changer.

I am wondering why they decided to first release it in a phone, where it seems
like it has the least benefits...

~~~
stestagg
> I am wondering why they decided to first release it in a phone, where it
> seems like it has the least benefits...

Google wanting something in your phone that allows it to track engagement? I
can't think why.. :)

> Engaged > Soli anticipates when you want to interact.

~~~
shantly
Plus a stationary or semi-stationary version only lets them collect data from
wherever you place it. Put it in phones and pretty soon you've got whatever
sort of data this thing picks up from damn near every room, every street,
every trail, every car, every _everything_ anywhere that people exist. Data to
train your machine learning algos, for free. God knows what else—data to map
every square centimeter of every environment in the modern world, for all we
know.

Their priorities are spying-first, typically, so this is unsurprising.

~~~
shadowgovt
It may help to understand Google's priorities by following up on this Hacker
News link
([https://news.ycombinator.com/item?id=21271087](https://news.ycombinator.com/item?id=21271087)).

Their goal is ubiquitous access. To get to that goal, they're collecting a lot
of data about the world and their users to figure out where, when, how, and
why users want data to optimize getting it to them. And yes, it probably
serves their ad model too, but there's more to it than that; Google is helmed
by a futurist and employs futurists, and is looking towards a not-too-distant
future of always-on personal networks enhancing what a person can do in their
day-to-day.

~~~
loriverkutya
Their goal is the same as every company's goal, earn money.

If they introduce anything new, the first question is, how that will make
money for them. If it is collecting data, the question is, how they can use
that data to earn more money.

Also, as has been mentioned by someone in the comments, I'm really not keen on
being surrounded by another set of devices that can "see" me; CCTV is more
than enough already. I also don't want to invite Google into my home to map it
out.

Probably Pixel 4 owners are going to be asked to turn off their phone and put
it into a box next to the door when they want to come into my flat.

~~~
shadowgovt
Kind of but not exactly. Money is the lifeblood that the corporation needs to
survive; earning money is the goal in the sense that the purpose of human life
is "eat food."

The fact that the corporation is still basically privately owned though
publicly traded (in the sense that the founders retain a controlling stock
percentage) means that they can use the money as a means to whatever ends the
founders wind the company up and point it at. They know the game is over if
they run out of money, but that doesn't mean the game they're playing is
"Build maximum monetary value for shareholders" any more than the game you or
I are playing daily is "What will I have for dinner." They have enough
controlling interest to vote that that's not what the company's primary goal
is, in practice.

Now, why would anyone who _isn't_ them play that game by buying GOOG/L stock?
Because in spite of the company's goal not being "maximize revenue," it's very
good at generating both revenue and product people care about, and the people
trading its stock are excited about that. They get a piece of the action, even
if they don't actually call the shots.

------
bitwize
The sensor sucks because it's small. Plus as a radar it's subject to
regulatory requirements. Google should have gone with a lidar-based solution,
but the concept of putting lidar in a cellphone is probably heavily protected
Apple IP.

~~~
mileycyrusXOXO
Lidar requires moving parts. Not an ideal solution for something that needs to
be low powered and compact.

~~~
kins
There may be solid state LIDAR available?
[https://www.businesswire.com/news/home/20161213005517/en/Vel...](https://www.businesswire.com/news/home/20161213005517/en/Velodyne-
LiDAR-Announces-Breakthrough-Design-Miniaturized-Low-Cost)

------
paul7986
Wonder if this was created exclusively in house or did Google ATAP do this
again....

[https://news.ycombinator.com/item?id=18566929](https://news.ycombinator.com/item?id=18566929)

------
itsfirat
Just touch the screen. It's already in your hands!

~~~
LeonM
Unless it is implemented in anything different than a handheld device

Unless you have disabilities

Unless you are not allowed to touch the device (like while driving a car)

etc, etc.

------
ppeetteerr
I can't take Google seriously. Every time they add a new feature, my mind goes
"and then they will sell you ads on this geospatial information".

------
crazygringo
Good lord... this is a really cool piece of technology and half the comments
here are just complaints that it will enable better ad tracking or erode
privacy or that touch screens are good enough.

This is HN. This is _cool technology_. Can we just stop to appreciate what
cool new interfaces or game concepts we might be able to build with this,
rather than jumping on the knee-jerk Google hate train?

~~~
Kiro
I agree. I completely understand privacy criticism but we don't need to fill
the whole thread with unsubstantial sarcasm.

~~~
throwawaylolx
For many, privacy concerns are more important than cool tech, and this is not
a Show HN.

------
aloknnikhil
I honestly don't get why you'd need a motion sensor on a handheld device.
Setting aside the limited applications of such an input, you'd literally need
to keep the device stationary for the motion sensing to be effective. It's a
usability nightmare.

> [https://www.youtube.com/watch?v=KnRbXWojW7c](https://www.youtube.com/watch?v=KnRbXWojW7c)

This ad not only fails to sell the feature but also makes using the
touchscreen look easier.

~~~
jcfrei
One useful thing I could imagine is for cooking. If you've got dirty hands,
you'll (hopefully) be able to unlock the phone hands-free and browse through a
recipe.

~~~
lm28469
They probably dumped hundreds of thousands of man-hours and hundreds of
millions of dollars on this so we can skip songs while cooking. Every time I
see a project like that released by a major tech company I lose a bit of my
faith in tech.

------
mlevental
I'm sure the answer is no but can I buy just the chip?

~~~
lukifer
That's what I want to know as well. This github[0] mentions the sensor as a
standalone item, and this appears to be the manufacturer[1]. This[2] is the
closest I could find for purchase (24GHz, as opposed to Soli's 60GHz, and
nearly 300 bucks to boot).

[0] [https://github.com/simonwsw/deep-soli](https://github.com/simonwsw/deep-
soli)

[1]
[https://www.infineon.com/cms/en/product/promopages/60GHz/](https://www.infineon.com/cms/en/product/promopages/60GHz/)

[2]
[https://www.avnet.com/shop/emea/products/infineon/demodistan...](https://www.avnet.com/shop/emea/products/infineon/demodistance2gotobo1-3074457345633899287/)

------
lgleason
It feels like a marketing gimmick vs a useful application. Probably will be
cancelled in a year or two.

~~~
dang
" _Please don 't post shallow dismissals, especially of other people's work. A
good critical comment teaches us something._"

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

------
junkthemighty
Consumers need to boycott this stuff. This chip has one purpose: surveillance.
All the "features" on this press release are contrived and useless, but they
are hoping it's shiny enough that we like it and want it.

~~~
drusepth
Why do consumers "need" to boycott this stuff? I want this stuff, and I want
it cheap and readily available -- and hopefully more advanced over time. The
"features" on this press release look useful to me as a start to something
much better later.

~~~
qzx_pierri
Some people find joy in maintaining their privacy. They also ruin some of the
fun, because "regular" people (non privacy enthusiasts) find joy in cool new
technology like this. They make a good point, but they also probably didn't
read the documentation, because it clearly says sensor data is never sent to
Google.

~~~
junkthemighty
Why should we believe that this product _doesn't_ serve Google's core
business--mass data collection -> advertisement?

Like another commenter said, this would be a lot more innocuous coming from
Nintendo or something.

