

Viv Is a New Artificial Intelligence from the Inventors of Siri - anigbrowl
http://www.esquire.com/lifestyle/a34630/viv-artificial-intelligence-0515

======
_benedict
All they show is its ability to do some specialized airline searching, which
presumably means that is the current limit of its functionality. Potentially
useful, for sure, but hardly a "New Artificial Intelligence".

Expanding it to generalized product searching, or to finding concepts on the
internet, is a whole 'nother ball game that, frankly, we haven't a clue how to
do. A specialized tie-in to a well-understood product space like airline
ticketing, with a natural language processing engine attached, is hardly world
changing.

------
sravfeyn
Is the possibility of human level AI a highly accepted Conjecture? Why does
almost everyone use that term so loosely?

~~~
dlss
1. Brains are physical systems

2. Physical systems can be simulated

3. Brains exhibit human-level intelligence

Therefore the possibility of human level AI is real. Hopefully something
better than a simulated brain is possible though :p

~~~
mark_l_watson
I have been a paid practitioner of AI since the early 1980s but I have a
fundamental difficulty with the creation of a "real AI" that thinks as we do.

I follow the Idealism philosophy (which is the opposite of Materialism) that
posits that the fundamental property in the universe is universal mind or
consciousness and our individual thoughts and ego are derived from a filtering
operation: we as individuals filter out a small part of this collective
unconscious (to use Carl Jung's term) and that is what gives us our
identity while we are alive. All of modern physics would be layered on top of
the underlying mind or consciousness.

If these ideas interest you, I suggest reading "Why Materialism is Baloney" by
physicist and computer scientist Bernardo Kastrup.

All this said, I think "real AIs" are inevitable (even if far in the future)
but I think they will seem very alien to us.

~~~
deeviant
Philosophy is not the right tool for attacking AI science, although it has
taken it upon itself to do so. I would sum up my general argument for this as:
"AI has enormous philosophical implications, but philosophical thought has very
little influence on AI development (other than determining which research
branches get social blessing)."

The philosophical lines often seem to boil down to something like, "but even
if it were intelligent, it wouldn't _know_ what it's like to think like a
human, thus it isn't thinking like a human", but this seems like a really weak
line of thought to me. I don't, nor ever could, know what it's like to be you
or any other person, but I can certainly relate and be related to by others as
a human.

There is overwhelming evidence that the brain's operation (and thus its
resultant product: consciousness and human thought) _is a physical process_.
There is no physics that suggests we can't replicate this process on a
different substrate (i.e., electronic computing of some shape or another), and
there is no physics that suggests we can't make such a system mimic the
physical processes of a human brain. It seems more likely, in that case, that
the difference and difficulty of relating to such an artificial system would
be similar to that of relating to just another, different human being.

~~~
mark_l_watson
There are many theoretical physicists who argue that consciousness is tied to
quantum mechanical effects that are not supported on conventional computers
(e.g., my Dad's friend Henry Stapp, who was in the physics department at
Berkeley with him, and of course Roger Penrose).

My Dad, who is a member of the National Academy of Science, does not agree
with me re: Idealism vs. Materialism, but that just makes for more interesting
conversations :-)

------
kitd
This looks rather similar to IBM Watson, but with the added input of the
end-user's preferences.

~~~
mark_l_watson
I have used IBM Watson and this seems very different (and also in very early
stage development).

This seems more like Google Now than Siri. My friends with iPhones just ask
Siri questions. Google Now also takes actions: it creates info cards, adds
items from my emails to my calendar, provides traffic warnings, etc.

One problem that does not get enough discussion is false positives when taking
actions. I saw on my calendar the other day that I was supposed to be checking
into a hotel in some random city that day. It turns out that a customer had
sent me an email that included his travel plans. Voila! Google Now 'thought'
that I was taking a trip.
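That failure mode is easy to reproduce with a naive extractor. A minimal sketch (a hypothetical regex heuristic, not Google Now's actual pipeline): any email line that mentions checking into a hotel on a date is treated as the mailbox owner's event, with no check of _whose_ itinerary it actually is.

```python
import re

def extract_hotel_checkins(email_body):
    """Naively extract hotel check-in events from email text.

    Hypothetical heuristic for illustration: any mention of
    checking in to a capitalized hotel name on a date becomes
    a calendar event for the mailbox owner -- the sender/owner
    distinction is never checked, producing false positives.
    """
    pattern = re.compile(
        r"check(?:ing)?\s+in(?:to)?\s+(?:to\s+)?(?:the\s+)?"
        r"(?P<hotel>[A-Z][A-Za-z ]+?)\s+on\s+"
        r"(?P<date>[A-Z]\w+ \d{1,2})"
    )
    return [
        {"hotel": m.group("hotel").strip(), "date": m.group("date")}
        for m in pattern.finditer(email_body)
    ]

# A customer's email describing THEIR travel plans:
email = ("Hi Mark, I'll be checking in to the Grand Plaza on May 12, "
         "then we can meet.")
print(extract_hotel_checkins(email))
# → [{'hotel': 'Grand Plaza', 'date': 'May 12'}]
```

The extractor dutifully files the customer's trip on the owner's calendar; fixing it requires modeling who the itinerary belongs to, which is a much harder problem than matching the text.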

