
Forget virtual assistants, Asteria wants to be your AI friend - nathantross
http://www.wareable.com/meet-the-boss/forget-virtual-assistants-asteria-wants-to-be-your-ai-friend
======
pyrophane
They are already taking device pre-orders and selling coins for their own
cryptocurrency, but I wouldn't give these guys any money right now. Other than
blacking out our collective tech-hype bingo board, they've provided very
little real information about how they intend to make any of this work.

From what they promise I'd expect their team to be stacked with AI
researchers, but it looks like just a CEO, a COO and a single PhD "advisor."
Who's going to actually build all this? Maybe that's why their "careers" page
shows that they are looking for everything from embedded systems to ML
engineers.

Seems like either a money grab or an overly idealistic founding team happy to
promise the world and figure out how to deliver it later.

Edit: nathantross, you posted this and you're the COO, right? Wanna respond?

~~~
pyrophane
As a thought exercise, here's how a hypothetical scam product (we'll call it
Hysteria) could work.

1. Market Hysteria as a revolutionary product in whatever space is currently
attracting the most hype. Get everyone excited by promising to bring a popular
science fiction book/film to life.

2. Spend half an afternoon cloning Bitcoin to create a new cryptocurrency
linked to your product and start selling the coins. Offer a limited-time
"early mover" price to capitalize on FOMO.

3. Work your press contacts to create more buzz around Hysteria.

4. Once you've collected a few million dollars selling coins, go out of
business. You haven't really defrauded anyone (you delivered the promised
coins, and it isn't your fault they didn't turn out to be worth anything), and
in an industry where the mantra is "fail fast" your activity isn't likely to
attract much attention.

~~~
WhitneyLand
How about doing all of this except build something real with a solid business
plan? For some of us the hype is the hardest part.

------
acobster
> Eventually they're predicting. Then you can do things like ambient
> intelligence where you can provide services and you can provide products or
> experiences to people before they know they need it.

So, it's your friend...but it's also there to sell you stuff. But just think
of it as your friend.

It sounds like they don't quite know what they're selling or how it's going to
be useful to people (which they kinda admit). I could see getting utility from
a VA that also suggests services for specific needs, but a friendship with
"ambient intelligence" behind it figuring out how it's going to chum up its
next product placement? If it's really a "true AI" why not sell it on that
merit alone?

~~~
fallous
It would be like your "friend" the Amway sales rep, or the annoying insurance
salesman who always tries to bring conversations back to his bottom line. The
same goes for "friends" who are obsessed with politics, for whom all things
devolve into their particular obsession.

As if we don't have enough narcissistic people to deal with every day. ;)

------
visarga
A clip on camera you can take with you while jogging or in a museum, and it
talks to you? It would be pretty embarrassing to be seen with it. This would
work out better in professional settings, such as in a hospital, providing
help to doctors.

~~~
ravenstine
I think it depends on the interaction. If it behaves more like a toy than a
tool, I could see it being kind of embarrassing. Then again, imagine just how
annoying it would be to have people walking around in a museum saying
"Asteria, tell me about this painting." and then the device blurts out some
description from Google without using an inside voice.

~~~
kuschku
> and then the device blurts out some description from Google without using an
> inside voice.

Doing that, it would sound like an American tourist.

(There are lots of articles from US expats in Europe, and European expats in
the US, noting how Americans tend to speak a lot louder than Europeans in
quiet settings, from museums to restaurants.)

This leads to an interesting question: Which culture should a voice assistant
follow? Should there be multiple variants of each assistant?

~~~
acobster
Detecting and adapting to volume isn't that big a challenge compared to
natural language processing. But if you mean something more subtle, like
discretion or taboo... that's probably much harder than NLP.
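
To illustrate why volume adaptation is the comparatively easy part, here's a
toy sketch (function names and the 0.05 threshold are invented for
illustration, not from any real assistant):

```python
import math

def rms_level(samples):
    """Root-mean-square amplitude of a chunk of audio samples (range -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def reply_gain(ambient_samples, quiet_threshold=0.05):
    # In a quiet room (low ambient RMS), drop to an "inside voice";
    # otherwise speak at normal volume.
    return 0.3 if rms_level(ambient_samples) < quiet_threshold else 1.0

print(reply_gain([0.01, -0.02, 0.015, -0.01]))  # quiet museum -> 0.3
print(reply_gain([0.4, -0.5, 0.45, -0.35]))     # noisy street -> 1.0
```

A few lines of signal processing; the hard part is knowing when loudness is
socially appropriate, which is culture, not DSP.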

~~~
kuschku
Yes, and also adapting to how different cultures conceptualize things.

That starts with phrases, but it also applies to other concepts: different
cultures even have different orientation systems. Some use cardinal
directions (north, east, south, west), some use relative directions (front,
right, back, left), and so on.

~~~
schoen
I've only heard of one example of the cardinal directions being used as a main
orientation reference in day-to-day language:
[http://www.nytimes.com/2010/08/29/magazine/29language-t.html...](http://www.nytimes.com/2010/08/29/magazine/29language-t.html?_r=0)

I never read the research about this but examples are
[http://anthroweb.ucsd.edu/~jhaviland/Publications/ETHOSw.Dia...](http://anthroweb.ucsd.edu/~jhaviland/Publications/ETHOSw.Diags.pdf)
and
[http://pubman.mpdl.mpg.de/pubman/item/escidoc:66622:3/compon...](http://pubman.mpdl.mpg.de/pubman/item/escidoc:66622:3/component/escidoc:66623/1997_Spatial_description_in_Guugu_Yimithirr.pdf)
(very interesting stuff!).

Edit: I would _highly_ recommend reading the 2nd paper (which includes some
practical experiments testing how Guugu Yimithirr speakers thought about and
remembered spatial positions and orientations). It's astonishing.

~~~
kuschku
The thing is, it doesn’t stop there.

Even colors depend heavily on culture. The ancient Greeks are believed not to
have distinguished between yellow and green, and other cultures draw similar
distinctions differently.

Internationalization is a lot harder than it seems to most people. And then
there’s also accessibility.

Even with traditional UIs, where everything is hand-made, it's already an
extremely large task, but a conversational UI is far more personal.

It has to deal with things like how much privacy or directness each culture
expects, and with taboos; it has to have a full perceptual model of the
person who will hear it in order to handle all of this properly.

------
peter303
Talking cars and appliances were a fad a decade ago. They drove consumers
crazy, and many disabled the features. Consumers only want emergency alerts
and answers to inquiries, not a BFF in their toaster.

~~~
gadders
10 years? More like 30 :-)
[https://en.m.wikipedia.org/wiki/Austin_Maestro](https://en.m.wikipedia.org/wiki/Austin_Maestro)

------
sanxiyn
I heartily recommend Kill Process (2016) by William Hertling, in which a
startup trying to bootstrap a social network uses AI to avoid the
empty-network problem. The novel describes the "done right" version of this
pretty well.

Right now, you can find the "done wrong" version on "dating sites" populated
by chatbots.

~~~
tedmiston
Sounds like an interesting dystopia, and definitely geared toward programmers.

Synopsis from the publisher:

> By day, Angie, a twenty-year veteran of the tech industry, is a data analyst
> at Tomo, the world's largest social networking company; by night, she
> exploits her database access to profile domestic abusers and kill the worst
> of them. She can't change her own traumatic past, but she can save other
> women.

> When Tomo introduces a deceptive new product that preys on users’ fears to
> drive up its own revenue, Angie sees Tomo for what it really is—another evil
> abuser. Using her coding and hacking expertise, she decides to destroy Tomo
> by building a new social network that is completely distributed,
> compartmentalized, and unstoppable. If she succeeds, it will be the end of
> all centralized power in the Internet.

> But how can an anti-social, one-armed programmer with too many dark secrets
> succeed when the world’s largest tech company is out to crush her and a no-
> name government black ops agency sets a psychopath to look into her growing
> digital footprint?

[https://www.goodreads.com/book/show/30658546-kill-process](https://www.goodreads.com/book/show/30658546-kill-process)

~~~
sapphireblue
Am I the only one who thinks that dystopian scifi got boring a decade or two
ago, while utopian scifi is an almost entirely neglected genre?

I prefer the techno-optimistic point of view shown here:
[http://foundersfund.com/anatomy-of-next/](http://foundersfund.com/anatomy-of-next/)

~~~
shostack
I've wondered if that correlates with many of the old themes of such stories
becoming mainstream realities that, it turns out, people don't care much
about.

------
behnamoh
I'm sick and tired of all these technologies that pretend to be AI when
they're really just some ML or DL (deep learning) algorithms...

Some words have become so vague and ambiguous in the computer world that
sometimes I wish we would stop using them altogether: who is a hacker, what
is AI, what is the Cloud, and so on.

Siri, Google Now, Cortana, Amazon Echo and the others claim to be
"intelligent" in some sense, but they're only as smart as their programmers.

Please just stop labeling your next super cool algorithm an "AI".

~~~
crashedsnow
I agree the term has become somewhat throw-away, but to be fair any AI system
probably IS just ML/DL algorithms. That is, ML/DL are avenues to create AI. I
think most people would agree that AlphaGo is an example of AI and was
achieved via DL, so by that definition it's just some DL algorithms.

~~~
behnamoh
>> ...any AI system probably IS just ML/DL algorithms...

I don't think so. ML/DL is just the beginning; better AI approaches will be
discovered in the future. Note that artificial neural networks are only
simplified simulations of some aspects of reality; they're not complete yet.
Many intricacies remain to be researched.

~~~
crashedsnow
I agree that AI != ML/DL, that's not what I'm saying. Tests like the Turing
test don't prescribe anything about implementation. I mean, if you could roll
together some Excel macros that passed the test then great. My point is that
if you pull the curtain back on most "AI" systems today, at worst you'll
find some sort of basic adaptive learning system, and at best you'll find a
[deep] neural net doing supervised and/or unsupervised learning.

I think the general spirit of the root of this comment thread is valid though.
There's a lot of "we're using AI and machine learning!" going on when in fact
all they're doing is remembering how frequently you pushed the blue button,
then recommending the blue button.
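
That kind of "AI" fits in a dozen lines. A hypothetical sketch (class and
method names invented here):

```python
from collections import Counter

class ButtonRecommender:
    """Toy "AI": remembers which button you pressed most often and recommends it."""

    def __init__(self):
        self.presses = Counter()

    def observe(self, button):
        # "Training": count one more press of this button.
        self.presses[button] += 1

    def recommend(self):
        # "Inference": return the single most frequent button so far.
        return self.presses.most_common(1)[0][0]

rec = ButtonRecommender()
for press in ["blue", "red", "blue", "blue", "green"]:
    rec.observe(press)

print(rec.recommend())  # blue
```

A frequency counter with a press release, in other words.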

------
electic
I am going to take a step back from the technical merits of this and say the
whole thing seems sad. I don't need a dog collar on my neck or a badge on my
shirt. And you know life is over when you spend your time talking and hanging
out with an AI bot. Our society has become so isolated because people are on
Facebook and Instagram instead of talking to each other.

~~~
zachlatta
Disagree. Growing up on the internet gave me a sense of community and
belonging that I never found in my hometown.

The internet and social media can both be incredibly connecting things (that
is their purpose after all, right?).

~~~
tedmiston
Coming from a small town where software companies don't even exist — this
x1000.

------
iampims
Really curious how this will pan out, as everything said in the article
screams that they have no clue how difficult making the device is going to
be, regardless of how much "AI" runs on it.

Hardware is really hard.

~~~
neurotech1
Not disagreeing that hardware is hard, but IMHO it's possible to get a
comparable level of AI/DL performance in the "Asteria" device using readily
available technology like Zynq[0]-based FPGA boards such as the
Parallella[1]. I got my Parallella board from Kickstarter about 2 years ago.

[0] [http://www.xilinx.com/products/silicon-devices/soc/zynq-7000.html](http://www.xilinx.com/products/silicon-devices/soc/zynq-7000.html)

[1] [http://www.parallella.org/](http://www.parallella.org/)

~~~
dharma1
I've got a Parallella knocking about not doing anything. Did anyone end up
writing any interesting software for it that makes it worth putting to use?

~~~
neurotech1
I think some people did, but I mainly used mine as an FPGA dev board. A $99
Zynq board is still good value, even without using the custom Epiphany
processor.

------
bb101
Reminds me of John Varley's book Steel Beach. Set in the future, humans live
on the Moon and everyone on Luna is connected to the central computer (CC)
which behaves at once as government, friend, guide, psychologist,
encyclopaedia and diary.

Fascinating stuff. Other topics in the book: nanotechnology and bioengineering
as everyday commodities, gender fluidity and the CC-human relationship.
Reminds me that the book is due for a reread!

------
thetest3r
Reminds me of the movie Her....

------
jazztoobs
There is a 0% chance that this team can ship AI features. It's probably
inevitable, but still too bad, that the AI space attracts so much noise. It
makes it hard to see the teams doing real work.

------
c1ph3rS0n
To me it's just Star Trek tech coming to reality again.

------
dharma1
Someone watched "Her" and thought it'd be a good idea to do a Kickstarter.

