
No, I tend to think your preferences are common.

Until a robot voice gets an appealing personality that people enjoy talking to, people will avoid interacting with "robo voices" if at all possible. Even though many interactions with voice recognition applications have reached the point of being "seamless", I still dislike the things.

Another factor is that voice isn't very desirable for things that involve only formally specifying something. If you type out the keyword preferences for recommendations, you can see what they are and change them easily. If you interact by voice, you'd need to have your preferences tediously read back to you.

Even if a person has a human assistant to perform various chores for them, the value of that assistant would derive from their being able to make choices you wouldn't want to overtly make or think about yourself - someone who would buy clothes for a person who has no interest in fashion but needs to look good, for example. Even in this case, you'd want the preferences for your assistant spelled out as much as it made sense. Once robot voices are pleasant enough, people will probably still want to supplement any interactions with "notes about the conversation" that the person could also edit.


Author of the system here.

I do agree that speech-based interfaces can be very cumbersome. Part of this is that current systems aren't perfect yet but part of it is also that speaking aloud is just plain strenuous for people. At least the second thing isn't going to go away.

Some of the things you touched on are certainly valid points, but spoken language systems also have big advantages. For example, having to spend effort to fully form a thought (to articulate it coherently) forces users to spend a second thinking about what they actually want. Also, and this was a part of this research, the spoken voice carries quite cool side-channel information that can be exploited to aid recommendation.

In any case, please note that the current UI (which, btw, did show the gathered preferences to some extent by coloring / bolding the shown specs proportionally to the level of satisfaction of your determined preferences / the level of influence) is just a research UI. It does not implement the best I could do, but rather implements an interface that allowed good interpretability of the results of the conducted study (which, for example, excluded any multi-modal efforts that would have potentially made drawing conclusions harder).
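(As an aside, for readers wondering what "coloring / bolding proportionally to satisfaction" could look like: a toy sketch in Python follows. This is not the actual research code - the attribute names and scores are invented purely for illustration.)

    # Toy sketch: map a per-attribute "satisfaction" score in [0, 1]
    # to a font weight and opacity for display. Attribute names and
    # scores below are made up; this is not the study's real UI code.
    specs = {"battery life": 0.9, "screen size": 0.4, "weight": 0.15}

    def style_for(score):
        weight = "bold" if score > 0.7 else "normal"
        opacity = 0.3 + 0.7 * score   # keep low-influence specs faint but visible
        return {"font-weight": weight, "opacity": round(opacity, 2)}

    for attribute, score in specs.items():
        print(attribute, style_for(score))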


Well,

Most of the examples are along the lines of "without the axiom of choice, one could assume X and not reach a contradiction" - you could assume you had an infinite set with no countably infinite subset, etc.

Which is a little different from the OP, which says something must exist. Within reason, more axioms increase the number of things that definitely are the case, but fewer axioms increase the number of things that might be the case.
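(For reference, the symbolic form of the "infinite set with no countably infinite subset" example mentioned above - a standard fact about models without choice, written in LaTeX notation:)

    % Without the axiom of choice it is consistent that there is a set A which
    % is infinite yet admits no injection from the naturals, i.e. A has no
    % countably infinite subset (an infinite, Dedekind-finite set).
    \exists A \,\big( A \text{ is infinite} \;\wedge\; \neg\exists\, f : \mathbb{N} \hookrightarrow A \big)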


It's basically a demonstration that sets with the axiom of choice and no other restrictions are wild things.

Of course, if one restricts consideration to "measurable sets" and other objects of standard geometry, the paradox can't happen.
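(Assuming the paradox being discussed is Banach-Tarski, here is a rough statement of it for readers who haven't seen it - the pieces involved are non-measurable, which is why restricting to measurable sets blocks it:)

    % Banach-Tarski, roughly: the closed unit ball B in R^3 can be cut into
    % finitely many disjoint pieces which, moved only by rotations and
    % translations g_i, h_i, reassemble into two copies of the original ball.
    B = \bigcup_{i=1}^{n} A_i \ (\text{disjoint}), \qquad
    \bigcup_{i=1}^{k} g_i A_i = B, \qquad
    \bigcup_{i=k+1}^{n} h_i A_i = B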


And discrete spaces, I believe. And there are compelling reasons that the real world is better modeled by a very fine discrete space.

This kind of argument seems like it's only appealing through its similarity to one of Aesop's Fables.

I think, as far as we know, ants don't even ask themselves the question "what is this ant hill?". Human cosmologists, on the other hand, seek an explanation for everything that they see in the cosmos. The only way some alien phenomenon would be ignored would be if it seemed to have the qualities of some apparently easily explained phenomenon. This is certainly possible but it seems unlikely it would happen by accident - cosmology's models for ordinary, unexceptional stars, nebulae and galaxies are all fairly detailed.


Maybe, just maybe, the fact that we're not seeing much is actually indicative of a fairly rich sea of life in the galaxy, communicating point-to-point with each other.

Except for the small question of how those many civilizations would have found each other in the first place.


True. But nearer the galactic core where stars are so much closer together? SETI might have a much higher hit rate there. We are in kind of a backwater out here. It's easy to forget about the big city when you're out in the sticks.

It's not very likely though. Occam's Razor alone suggests that the reason for loneliness is that there aren't a plethora of civilizations all talking to each other over narrowband channels. And we'd be very unlikely to pick up the omnidirectional beacon of a lonely civilization way out here anyway.


It's a fictional exploration of the intersection of everyday interactions and concepts of truth.

When I read this kind of thing, it usually seems to me that it's bringing out how people generally stumble on a whole range of philosophical and moral assumptions inherited from the Western philosophical tradition when they start talking about "telling the truth".

A simple "riposte" to the fictional "What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter"

- "Well, the niceties here might be called 'protocol' - I'm not definitely saying that X but rather beginning a 'handshaking' process that may let us do that if each of us are comfortable with. Protocols are what grease social interactions. Which is to be say that if a mature adult is disgusted by a guy saying 'I'd like to fuck you', it's because they are inherently disgusted by sex but rather they're disgusted by someone making an intimate demand with no protocol, IE making the demand at the wrong time"

Which is to say our everyday interactions don't involve the rational constructs of true-false logic but rather are more like game-theoretic maneuvers.

And for the last question, "You're saying you wouldn't fuck me right now", the answer would be "So you're saying you would like to know all of my reactions to a rather extreme hypothetical situation, without, say, letting me know any of your reactions. Well, that's what protocol is for: letting two people demonstrate their range of reactions equally, so one person's maneuvers aren't just available for the other to use without that first person getting compensation".

But anyway, the thing that makes the hypothetical girl's actions compelling is that, however out-of-line they might be, they are using the view of truth coming from Plato - statements are either true or false, their truth or falsehood doesn't depend on circumstance, etc. Of course, if you view the "I'd like to take you to see a play" thing as protocol, then the game is just silly.

-- And this is to say that the hypothetical solutions to the dilemma just dig the hole deeper by trying to modify mathematical logic for inappropriate purposes.

Instead, I'd recommend an evolutionary-game theoretic approach to understanding everyday interactions.

See: https://books.google.com/books/about/Game_Theory_Evolving.ht...


I think there is a lot more to the story than just what was in the first bit. Very interesting analysis of that part though!

Well,

It's a fairly complete solution (imho) to the dilemmas in the first part, which is to say that I don't find the failed efforts at a solution contained in the later part to be that interesting.


Nothing guarantees that they weren't monitoring this from the start. Nothing guarantees they haven't set up the successor to this board already themselves.

We know the FBI actually engages in systematic lying about how they gain information about criminals - see "parallel construction".

We know that in efforts to end drug cartels, the DEA will enter into long-term alliances with one cartel to eliminate others.[1] We know the US tolerates the rule of the drug-trafficking caste in Afghanistan in the pursuit of "higher ideals".

Just in general, the paradox of mafias, cartels and so forth is that it is easier to use state power to take over such operations than to root them out. Moreover, taking them over has many appealing aspects. The problem is that the more the state moves into simply managing rackets, the greater the temptation to corruption gets, for both the high policy makers and for the low level operatives (remember the FBI agent engaging in his own embezzlement etc. during the Silk Road investigation). But this problem has been around for a long time; the state is by now probably at a "steady state" in its corruption.

So it seems very possible that the FBI already knows what comes next but can only tip its hand when the next raid comes in another two years.

[1]http://www.businessinsider.com/the-us-government-and-the-sin...


>The problem is that the more the state moves into simply managing rackets, the greater the temptation to corruption gets, for both the high policy makers and for the low level operatives

Well, that is if you ignore the far larger elephant in the room of the government running an illegal operation. It reminds me of the times I hear of the FBI taking over some Tor server hosting abusive images and continuing to host it as a honeypot. By their own admission, looking at, hosting, sharing, etc. those images constitutes concrete abuse of a child, yet they directly engage in such. It would be like if the FBI busted up a brothel with children and kept running it to catch more criminals. The ends in no way justify the means.


The ends don't justify the means, yet it is also useful to explain, to the people who think they do, that nefarious means generally only lead to nefarious ends.

This just seems like fogging the issue.

Calculus is not for economics, but there are guides to calculus for economics majors. A given language isn't word processing, but a "guide to writing a word processor in Pascal" is not unimaginable.

The parent asked for a guide to using functional programming for the web back end. I'm not sure why such a thing couldn't exist.



It's fascinating that now that things like deep learning have massive traction, people like LeCun and Bottou (both now at Facebook), who apparently pioneered the stuff, are taking a critical position on it - critical not being negative or dismissive but rather a "we have to see the limitations and go beyond them" approach.

See: LeCun's "What's Wrong With Deep Learning?" https://drive.google.com/file/d/0BxKBnD5y2M8NVHRiVXBnOVpiYUk...



To be fair, he's been talking about the relationship between theory and empiricism with regards to neural networks for some time now. See, for instance, page 12 here [1] (the titles of his talks all seem to be provocative questions; almost in a tongue-in-cheek sort of way).

One of the problems (in my opinion) with networks is that the analysis has always been post hoc. Personally, I think it's preferable to build a method starting from theory [2] which can then be tested empirically to see if the assumptions of the theory hold. Then augment the theory, the method, and experiment again.

Now, there's nothing inherently wrong with post hoc analysis - it's just a different starting point in the loop of science. However, because we didn't start from theory, the burden is then to extract some theory from empirical observation. Again IMO, this can be problematic because:

1) It more easily allows for confirmation bias.

2) It leads to a multitude of fragmented theories.

The second is why everything surrounding neural networks seems so incredibly ad hoc.

[1]: https://www.cs.nyu.edu/~yann/talks/lecun-20071207-nonconvex....

[2]: The principles could be based on statistical learning (see SVM), neurophysiology (see work by Poggio or Olshausen), mathematical invariants (see work by Mallat), etc...



Thanks for the post, I was hoping for more discussion of this.

I'd be even more pessimistic about one's ability to go forward from empirical observation of opaque mechanisms.

Aside from your incisive observations, there's the point that if you have a "good" "working" "theory of how neural networks operate", what is it a "theory of"? It's dependent on the mechanisms that gather the test data, the sort of answer that a certain kind of person wants out of the test data and so forth - the "epistemological" questions you didn't answer and couldn't answer will come back to bite you.

I'd add that SVMs do seem more firmly founded, but their ultimate tweak, the kernel trick plus projection onto feature space, is basically ad-hoc too - though still much closer to a "real" probability model etc. The problem with SVMs is that they wind up more or less equivalent to a 1st-order neural network and thus they don't scale - once data becomes truly huge, they require too much storage.
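(A quick sketch of the storage point, assuming a standard RBF kernel and an arbitrary sample count: a naively materialized kernel matrix is n x n, so memory grows quadratically with the data.)

    # Sketch: kernel methods that materialize the Gram matrix need O(n^2) memory.
    import numpy as np

    n, d = 2000, 50                       # arbitrary sample count and feature dimension
    X = np.random.randn(n, d)
    gamma = 1.0 / d

    sq_norms = (X ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * np.maximum(sq_dists, 0.0))   # RBF Gram matrix, shape (n, n)

    print(K.shape, round(K.nbytes / 1e6), "MB")      # 10x more samples -> ~100x more memory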

Ironically, I think the best single overall critique of AI efforts was articulated by Paul Allen[1]. The problem is that in building large systems, people encounter a "complexity" barrier that prevents further progress. Creating more complex systems to tackle that tends to fail as people wind up understanding less and less of their own complex systems.

The problem with all the neurophysiological models is that raw neurons are very complex things and one doesn't know immediately which parts even carry meaningful signals, a problem made worse by not having a model of what those "meaningful signals" might be.

Consider that if aliens looked at human-made microchips and tried to model them fully, they might get the clock signal and various nonlinearities in the transistors right but still have enough computation errors that no program would run on the model.

Another good argument is that all our methodology hinges on classical Western epistemology and a change in that may be necessary[2].

[1] http://www.technologyreview.com/view/425733/paul-allen-the-s...

[2] http://aeon.co/magazine/technology/david-deutsch-artificial-...



> if you have a "good" "working" "theory of how neural networks operate", what is it a "theory of"?

I think we should make the distinction between theory pertaining to a task and theory pertaining to methods that perform (or approximate) the task. Certainly, the former can be incorporated into the latter, so the boundary is fuzzy. Actually, the previous point is quite important, because known principles of the task can be expressed mathematically and incorporated functionally into the approximation method. In this sense, a network architecture could arise naturally. In fact, it sort of does with anything with a cascade-type pattern.

On the other hand, if we're going to talk about the method (networks, in particular) independently of the task, this is more difficult. The question is now: is the network model remarkable in some sense? Meaning: is there some class of functions which are "best" or "more efficiently represented" by network approximations, and what are the properties of the class that make this the case? Yoshua Bengio has touched on this with regards to depth from the point of view of circuit theory, but the argument is basically: "here's a couple of circuits which are more efficiently represented by increased depth, therefore deep = good always". It would be more interesting if there were a more rigorous analysis from a function approximation view. Perhaps literature exists on this. I'm not sure - I'm sort of rambling now.
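(For what it's worth, a concrete instance of that circuit-theory flavor of argument, stated in LaTeX: the n-bit parity function is cheap for deep circuits but exponentially expensive at depth 2.)

    \mathrm{PARITY}(x_1,\dots,x_n) = x_1 \oplus x_2 \oplus \cdots \oplus x_n
    % Any DNF (depth-2) formula for PARITY needs 2^{n-1} terms, since every
    % term must fix all n variables; a balanced tree of XOR gates computes it
    % with O(n) gates and depth O(log n) - the "depth buys efficiency" pattern.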

> kernel trick plus projection onto feature space, is basically ad-hoc too

The choice of kernel - yes I agree, but the driving theory of the method is to maximize the margin, not choose the best kernel.
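(For reference, the margin-maximization objective being referred to, in its standard soft-margin form; the kernel only enters through the feature map phi, or equivalently through the dual.)

    % Soft-margin SVM primal: minimizing ||w||^2 maximizes the margin 2/||w||.
    \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
    \quad\text{s.t.}\quad y_i\,(w^\top \phi(x_i) + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0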

> Creating more complex systems to tackle that tends to fail as people wind up understanding less and less of their own complex systems.

Interesting. Maybe there's something going on with the relationship between entropy and complexity.



It seems Léon Bottou is an important figure in machine learning. His personal website has interesting stuff too.

https://research.facebook.com/researchers/1558013787807218/l...

http://leon.bottou.org/

