
Perhaps the argument should be steelmanned: we should generally avoid using algorithms so complex that they aren't glass boxes. I doubt that the idea to "simply follow gradients" proves neural networks to be glass boxes, because the output of that process is still too complex. And we are clearly onto something here. If we can generate artificially hallucinated pictures today, it is not unreasonable to assume that computers will be able to hallucinate entire action sequences (including motor programs and all kinds of modalities) in a decade or two. Combining such a hallucination technique with reinforcement learning might be a key to general intelligence. I think it is highly unethical that there is almost no democratic control over what is being developed at Google, Facebook et al. in secrecy. The most recent XKCD comic is quite relevant: http://xkcd.com/1539/



> I think it is highly unethical that there is almost no democratic control over what is being developed at Google, Facebook et al. in secrecy. The most recent XKCD comic is quite relevant: http://xkcd.com/1539/

I consider myself to be very left of center, but I can't imagine what form of 'democratic control' you think is necessary over the research that Google and Facebook do.

I do not fault Google or Facebook for planning on time-scales longer than most governments. Governments ought to be doing this level of long-term planning, but are not (at least publicly).


It's a tricky ethical area. The Google post cites several research papers that seem to provide more than enough information to replicate these results or get similar ones, which is good, because I think everyone should be able to explore these tools - I stick by my view from yesterday that this may be a scientific breakthrough.

At the same time, I can see the basis for some anxiety, because it's not hard to imagine proprietary research going a few steps further and developing some sort of general intelligence, or even a limited but extremely high-powered intelligence, that would confer an overwhelming commercial advantage and/or a political one. Suppose, as an exercise, that one developed an algorithm to maximize persuasiveness by first leading readers/listeners into a quiescent, semi-hypnotic state and then making the commercial or political pitch. There's certainly a potential for abuse.

In Europe this sort of thing tends to bring up the precautionary principle, the idea that you shouldn't do something without oversight and demonstrated minimization of risk. I think that's highly limiting, but expect some pushback against Google over this. Of course, I don't think democracy is all that wonderful either but then I'm a bit of a misanthrope.


> The Google post cites several research papers that seem to provide more than enough information to replicate these results or get similar ones, which is good

I agree that it is good, but even though the scientific theories and algorithms seem to be "open", access to both Google's computing power and its data sets is not.

So one could replicate these experiments, but not quite on the scale that Google does. I'm not at all sure whether it's practically possible for a single (really clever) person with a high-end CPU/GPU machine (and possibly some $$$ for Cloud Computing instances) to replicate something similar to the results in this blogpost.

The recognition nets used in the blogpost seem to be trained on a tremendous number of examples in order to be able to "hallucinate" (or classify) such a great variety of animal species, for instance.


> I'm not at all sure whether it's practically possible for a single (really clever) person with a high-end CPU/GPU machine (and possibly some $$$ for Cloud Computing instances) to replicate something similar to the results in this blogpost.

It's very possible.

GoogLeNet [1] is available as a pretrained example in Caffe's model zoo: "BVLC GoogLeNet in models/bvlc_googlenet: GoogLeNet trained on ILSVRC 2012, almost exactly as described in Going Deeper with Convolutions by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)"

[1] http://caffe.berkeleyvision.org/model_zoo.html
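
Rough sketch of how one might do it: the "hallucination" is gradient ascent in image space on an already-trained net, so no retraining is needed and a single GPU (or a patient CPU) suffices. A minimal, illustrative version using Caffe's Python bindings, assuming the bvlc_googlenet weights above; the file paths, layer name and step size are placeholders, not the exact procedure from the blogpost:

    # Input-space gradient ascent on a pretrained GoogLeNet (illustrative sketch).
    # Paths, layer name and step size are assumptions, not Google's exact recipe.
    # Note: the deploy prototxt may need "force_backward: true" so that gradients
    # actually reach the data blob.
    import numpy as np
    import caffe

    net = caffe.Classifier('deploy.prototxt', 'bvlc_googlenet.caffemodel',
                           mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet BGR mean
                           channel_swap=(2, 1, 0))                  # RGB -> BGR

    def make_step(net, layer='inception_4c/output', step_size=1.5):
        src, dst = net.blobs['data'], net.blobs[layer]
        net.forward(end=layer)        # run the image up to the chosen layer
        dst.diff[:] = dst.data        # objective: amplify whatever already activates
        net.backward(start=layer)     # gradient of that objective w.r.t. the pixels
        g = src.diff[0]
        src.data[:] += step_size / np.abs(g).mean() * g  # normalized ascent step

    # Start from gray-ish noise and iterate; the net "dreams" features into the image.
    net.blobs['data'].reshape(1, 3, 224, 224)
    net.blobs['data'].data[...] = np.random.uniform(100, 150, (1, 3, 224, 224))
    for _ in range(20):
        make_step(net)

The expensive part was training the recognition net on ImageNet-scale data in the first place; running a loop like this over a downloaded model is cheap by comparison.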


I'm just questioning whether an autopilot with a profit maximization heuristic is the best tool to guide technological progress. With democratic control I don't necessarily mean our current democratic systems, but any kind of decentralization of decision making by voting. Yeah, I know that's vague, but given what appears to be at stake it seems unreasonable not to consider alternatives.


I'm reading Rationality: From AI to Zombies, and it goes through exactly this argument. Here's the original post:

http://lesswrong.com/lw/jb/applause_lights/


Fine, my suggestion to solve this problem democratically was an applause light. It was an unfinished thought and a call to action. I agree that it didn't convey any new information; I just wanted to express my distrust towards these kinds of appeasement statements from people who are working on these technologies. Being able to peek at different layers of a CNN doesn't change the fact that NNs are in many regards opaque to us (and possibly always will be, due to their complexity). Statements of the sort "I have never understood those charges" make it sound like they are pretty much ignorant of the potential risks associated with not knowing exactly what your program does (perhaps that is defensible with regard to current technology, but I could imagine more advanced systems arriving sooner than generally anticipated).
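
To make the "peeking" point concrete: inspecting an intermediate layer is easy, but what comes back is a large block of numbers, not an explanation. A minimal sketch with Caffe's Python interface, assuming the same bvlc_googlenet model mentioned above; the paths and layer name are placeholders:

    # Peeking at intermediate activations of a trained CNN (illustrative sketch).
    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'bvlc_googlenet.caffemodel', caffe.TEST)

    # Feed a stand-in image (random noise here) and run a forward pass.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    net.forward()

    # One mid-level layer: a 4-D tensor of activations, e.g. (batch, 256, 28, 28).
    acts = net.blobs['inception_3a/output'].data
    print(acts.shape, float(acts.mean()))

Millions of such numbers per image is what "looking inside" the network actually gives you.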


Well, the problem with "avoid using algorithms which are so complex that they aren't glass boxes" is that for quite a few problems the choice is between a machine learning solution that essentially gives you a black box and an understandable algorithm that doesn't get anywhere near state-of-the-art accuracy and is practically useless. Speech recognition, for example.


> Combining such a hallucination technique with reinforcement learning might be a key to general intelligence.

Knowing that the most common parallel effect of induced hallucination via psychotropics is ego-loss (complete loss of subjective self-identity) [0], maybe they need to try completely inverse processes in order to create a sense of ego in a machine... Because what's real intelligence but one's sense of self?

[0] https://en.wikipedia.org/wiki/Ego_death


I would argue that a sense of experience is a necessary precursor for that, and also that intelligence and consciousness are two different things, although (if I read you right) the latter certainly informs the former. Barry Sanders' A Is for Ox has many well-sourced musings on the emergence of consciousness as a product of literacy vs. a purely oral tradition, which you might find interesting, and of course I think everyone needs to read Jaynes, Dennett, and Hofstadter on these topics.


Great, more "ZOMG I'm skirred of AI" FUD. Stop being so afraid of the future.


There are plenty of reasons to be concerned about AI. Stop dismissing arguments because you don't like their conclusions. There is no law of the universe that the future can't suck.



