

Powerset? - anaphoric

Does anyone know what's going on with Powerset?
I tried to join their Powerlabs program to get a closer look, but so far I haven't heard anything.
Anyone know when they are going to actually launch?

======
henning
I don't see what they know that Google doesn't. Google certainly has no
shortage of people familiar with symbolic natural language processing methods.
It's not like Google hasn't read the major papers and books in that field. If
the Xerox technology Powerset licensed has been published anywhere, Peter
Norvig and lots of other people there have almost certainly read it very
carefully.

~~~
socmoth
you could take what you said and apply it to Google and their published
PageRank, or to the established players before them.

As long as there is a lot of money in search, there will be a lot of people
competing for it, and people funding them. At the very least it will
keep Google on its toes, which is good.

~~~
testapplication
PageRank is published, but patented.
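
Since PageRank is published, the core iteration is easy to sketch. Below is a toy power-iteration version in Python; the example graph, damping factor, and iteration count are made up for illustration, and real implementations work at vastly larger scale.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        # Every page keeps a (1 - damping) baseline share of rank.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page link graph: a -> b, c; b -> c; c -> a.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # the page receiving the most rank
```

The ranks form a probability distribution (they sum to 1), and the page linked from the most rank-heavy places wins.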

------
blader
Anaphoric: Our consumer web search product is still some time away, but we
have launched Powerlabs to a small group of users and we are inviting more
every week. I'll see if I can get you into the next batch.

~~~
anaphoric
Thanks and sorry about the public grousing to get myself invited :-).

I understand that you guys are taking on a profound and very important
challenge, and I very much wish the venture success. I think NLIs will
eventually become a dominant interface solution for many tasks, search
included. I look forward to seeing what you guys have.

BTW are you doing anything in domain specific search? Feel free to reply at
mjm@anaphoric.com or mjm@cs.umu.se.

Regards, MM

~~~
blader
We are demoing some domain specific use cases in Labs, but the goal is
ultimately an open box search engine at web scale.

~~~
anaphoric
Yes, it was my impression that it aimed at the open domain. BTW, are you
doing any dialogue modeling?

The reason why I asked about closed domain work is because that's what I am
working on. See <http://www.youtube.com/watch?v=fWio8bHq4wQ>

~~~
blader
Can you email me at schen (at my company's domain) so I can send you an
invite?

------
dfranke
After they finish porting it to Hurd.

------
plinkplonk
They apparently use Ruby for their front end and Rails for their internal
tools. I am a little curious as to how that turned out; I haven't seen any
blog posts recently.

With all due respect, I doubt that they have the brainpower to create
something to dethrone Google. Google has some incredible people working on search.

A couple of years ago, when the Yahoo vs. Google war for search dominance was
at its peak, I asked a friend of mine who worked at Yahoo, "Who is your
equivalent of Peter Norvig?" (Norvig was then Google's Director of Search
Quality; today he is Director of Research.) After some hemming and hawing
he told me, "Well, we don't have anyone like that, but then we are a media
company, not a search company." ("We are a media company" was the mantra Terry
Semel was repeating at Yahoo then.) I knew right then that Yahoo would never
beat Google in search.

I wonder what the Powerset guys tell themselves? I find the internal
mythologies of companies fascinating.

~~~
bsg
From what I understand from the presentation given at the Singularity Summit
by Barney Pell, Powerset's NLP technology was developed over 30 years by
Xerox. (You can listen to the audio at
<http://www.singinst.org/media/singularitysummit2007> ). That could
conceivably give them an edge, or at least a short term lead. :o)

~~~
anaphoric
Honestly, it may be somewhat heretical to say, but I seriously doubt that an
old Xerox patent really has that much relevance.

The academic community has been working with semantic indexes for quite some
time. I know many of those involved. The real question is whether they have
developed something fundamental recently.

As for performance, yes, perhaps they have some innovations. But with NLIs it's
usability that matters most in the end.

At a more technical level, I think the question is how expressive and
consistent the logical form they are mapping to is. If they have developed a
parser that maps to an LF that can support actual inference, then that would
mean something. But then they need an open domain strategy to actually reason
over such expressions in a meaningful/useful way. We will see.
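
To make that concrete, here is a minimal toy of what "an LF that supports inference" means: once a sentence is mapped to a relation tuple, even one hand-written rule licenses a conclusion that was never stated. The facts, rule, and predicate names below are invented for illustration and do not reflect Powerset's actual system.

```python
# A logical form as a set of relation tuples, e.g. invented(edison, phonograph).
facts = {("invented", "edison", "phonograph")}

def apply_rule(facts):
    # Hand-written rule: invented(X, Y) -> inventor(X)
    derived = {("inventor", x) for (rel, x, _y) in facts if rel == "invented"}
    return facts | derived

closed = apply_rule(facts)
print(("inventor", "edison") in closed)  # True: inferred, never stated directly
```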

~~~
neilk
To me, the idea that they went back to a thirty-year-old algorithm is the one
credible thing in the whole story. Almost nobody reads old papers in computer
science.

Somewhere out there in the libraries are the computing equivalents of
transparent aluminum, but you can barely get _researchers_ to look at this
stuff, let alone Joe Javahead.

~~~
anaphoric
Good point. And actually, from another post in this thread it sounds like it is
more than a patent/algorithm; it is a painstakingly built system. So yes,
perhaps they have something. We will see.

As for your comment about researchers not being aware of the literature, I
agree 100%. I review from time to time, and the number of papers reinventing
the wheel (and doing so in a sloppy way) is staggering. I think the problem is
that too many researchers are concerned mainly with racking up as many papers
as possible to beat the tenure clock and/or impress their rivals.

Evaluators somehow need to stop bean counting publications as a measure of
merit. The problem they face is they don't know how else to evaluate...

------
ESYudkowsky
An important point to remember about Artificial Intelligence technology:
Stanford's Stanley robot, which won the DARPA Grand Challenge by driving across
a desert course, was based on a conceptual revolution that started 20 years
ago, in 1987: the Bayesian revolution. It wasn't based on a big new idea; it
was based on a big two-decades-old idea which it took that long to finally
get right.

The latest cool robots an' stuff that the media wants to report on are
generally based on AI ideas decades old, and the newest, most brilliant ideas
of AI today may not yield impressive technology for years to come.

If PARC developed Powerset's ideas 30 years ago, that might just be par for
the course.

------
rokhayakebe
Powerset is simply a bunch of hype and money thrown out the window. You could
build a much better engine overnight using the Yahoo Answers and AnswerBag
APIs.

~~~
anaphoric
Yes, I too am skeptical. I work in the area of natural language interfaces,
and when I hear their claims I cringe.

My own feeling is that NLIs can be useful, but really only in closed domains.
In fact my own efforts are toward NLIs to relational databases.

If I understand their approach, they are building semantic indices over large
sets of documents (e.g. Wikipedia). Sure, you can match the user's query
against these indices, but the inference thereafter must surely be very weak.
Any functionality this gives could probably be achieved with simple IR-based
techniques.
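
The contrast I have in mind can be sketched in a few lines: a plain keyword inverted index versus a "semantic" index keyed on extracted (subject, relation, object) triples. This is a hypothetical toy with hand-written triples standing in for a parser; it is not Powerset's actual design.

```python
from collections import defaultdict

docs = {
    1: "Edison invented the phonograph",
    2: "The phonograph was popular",
}

# Classic IR: inverted index from word -> doc ids.
keyword_index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        keyword_index[word].add(doc_id)

# "Semantic" index: triples a parser would extract (hand-written here).
triples = {("edison", "invented", "phonograph"): {1}}

def keyword_search(word):
    return keyword_index[word.lower()]

def semantic_search(subject=None, relation=None, obj=None):
    # Match triples where every specified slot agrees.
    return {doc
            for (s, r, o), ids in triples.items()
            if subject in (None, s) and relation in (None, r) and obj in (None, o)
            for doc in ids}

print(keyword_search("phonograph"))          # both docs mention the word
print(semantic_search(relation="invented"))  # only the doc asserting the relation
```

Keyword matching returns every document containing the word; the triple index answers a relational question, but only as far as the extracted triples reach, which is exactly where the inference gets thin.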

Still, I am curious about how this will all go down once they launch a public
interface, and I will be pleased if they actually show something of value.
Time will tell.

------
rami
<http://www.lexxe.com/>

