Research Priorities for Artificial Intelligence: An Open Letter (futureoflife.org)
93 points by ghosh on Jan 11, 2015 | 74 comments



The research priorities document should be read first... http://futureoflife.org/static/data/documents/research_prior...

without that the open letter makes almost no sense. The open letter is really just a way for the signatories to say "I support the research priorities document."


Right. The main author is Stuart Russell, author of "Artificial Intelligence: A Modern Approach", which is a major intro-to-AI text. Two main concerns are expressed - that AI, whatever it is, should be "beneficial" and "robust". Those are good points.

There are technical problems on the "robust" side. Most AI systems are still rather brittle. Machine learning systems are brittle in ways that are not well understood yet - small changes in input data can produce huge changes in results. There was an article on HN about this recently, showing mis-recognized images. The paper talks a lot about formal verification, but it's not clear that will help. "Robust", though, is an engineering problem, and can probably be solved.

"Beneficial" is an economic and social problem. That's going to be much tougher, because the political issue will be "beneficial for whom?" For discussion purposes, consider that a near-term use of AI is investment asset allocation. Machine learning is already being used in that area, but so far mostly for technical analysis. What will a society look like where AIs determine where investment goes, based purely on maximizing investment return? The concept of the corporation as having as its sole goal maximizing shareholder value, combined with AIs making the decisions to make it happen, is rather scary. Yet if such companies are more successful, they win. That's capitalism.


Russell is one author. The other is Peter Norvig.


Just to avoid confusion: Norvig is the co-author of AIMA, the AI textbook. The research priorities document that is the subject of this thread was drafted by Stuart Russell, Daniel Dewey, and Max Tegmark, though Norvig (along with many others) is signed on via the open letter.


It's hard to design any system (or research program for that matter) if the goal isn't well defined.

If the research is aimed at ensuring that AI is aligned with human interests, another priority should be researching what we deem those interests to be.

The Law and Ethics research section of the research priorities document is centered on mechanisms, with a couple of nods to policy, but with the seeming assumption that the question of what is in the human interest has already been answered.

But the answer certainly isn't obvious. Different sets of humans have wildly diverging and fundamental disagreements over what is right or wrong for humans and our future.

I doubt we're going to find a consensus in the time we have. It will probably make more sense to just have smart people define a set of good principles. This is hard enough that it should be considered a research task, and I would think it would be at the top of the priority list.

Otherwise, how do we know that the research we are doing on other problems even makes sense? If you just stub out the "human wishes" component of all this as one simple "human approves" button, then you haven't solved anything. We need a robust definition of what "aligned with human interests" means.


The current most-widely-accepted answer to this is "indirect normativity". This is basically saying that human values are complex enough that we should instruct an AI to study humans to figure out what they are, rather than try to program values in directly and risk getting them wrong.

I'd be a lot more comfortable if metaethics and related philosophy advanced to the point where we didn't have to rely on indirect normativity. But I don't think this is something you can just throw money and people at, because this is a field sufficiently difficult and hard to judge that work which isn't first-rate tends to detract from progress by adding noise rather than helping.


>This is basically saying that human values are complex enough that we should instruct an AI to study humans to figure out what they are, rather than try to program values in directly and risk getting them wrong.

Oh god no. A thousand times no.

Humans are masters at hiding sociopathy behind cultural and political obfuscation.

A machine that looks at human leaders and generalises human values from them is absolutely guaranteed to be a monster.

>I'd be a lot more comfortable if metaethics and related philosophy advanced to the point where we didn't have to rely on indirect normativity.

We already have any number of perfectly sustainable ethical systems. What we don't have is the ability to follow them in practice - partly because all political systems tend to reward power and influence with more power and influence, and partly because even when there are token checks and balances (e.g. in the form of representative democracy) it's so easy to game them with appeals to irrationality and emotion.

This rarely works out well for most of the population.

But I don't much like the content of the proposal. I think you can have extremely useful intelligence amplification without any need for agency or personality.

Much better modelling along the lines of 'If you do this, this will be the outcome' would be a game changer on its own.


Indirect normativity is a losing strategy because you don't know what the resultant ethical system will be. https://ordinaryideas.wordpress.com/2012/04/21/indirect-norm...

Humanity has already solved ethics, so there's no reason for us to give up and ask AI to define ethics for us.


>Indirect normativity is a losing strategy because you don't know what the resultant ethical system will be.

That's the point...

>Humanity has already solved ethics

Where on earth do you get this idea?


Awareness of ethical systems. What makes you think ethics isn't solved?

Here's a relevant essay http://medium.com/@yanazendo/how-to-be-lazy-and-successful-a...


That's not even close to a formalization of human morality. Such a thing probably isn't even possible. See this: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/


Use ethic monoids to combine seemingly disparate ethical systems. https://medium.com/@yanazendo/ethic-monoids-913c3046079c


"Monoids", you keep using that word. I don't think it means what you think it means. You've basically said "use monoids" without a word of elaboration how to interpret ethics, or really anything, in this formalism.


Intuitively I get it, but it doesn't seem all that useful. I'd love to see an example of it in action.


The ACM and the IEEE want to do business with each other. Business partnerships cannot exist unless all parties agree to follow the same ethical system. Unfortunately, ACM and IEEE have different Codes of Ethics. To do business, they must define an ethical system that they both agree upon.

The simplest monoid combines ethical systems by including only the clauses that appear in both Codes.

So, the ACM+IEEE Code of Ethics can simply be defined as the intersection of the ACM Code of Ethics and the IEEE Code of Ethics.

Then ACM knows what to expect from IEEE, and vice versa.

If we want AI to be ethical, we must be able to express ethics computationally, and we must be able to formally verify the correctness of ethical systems.

So, we should use monoids.
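
A minimal sketch of the intersection idea described above (hypothetical Python; the clause names are made up for illustration, and strictly speaking intersection only forms a monoid once you fix a universe of all possible clauses to act as the identity element):

    from functools import reduce

    # Treat each Code of Ethics as a set of clauses.
    ACM = {"avoid harm", "be honest", "respect privacy", "honor confidentiality"}
    IEEE = {"avoid harm", "be honest", "respect privacy", "reject bribery"}

    def combine(a, b):
        # Associative merge of two codes: keep only the shared clauses.
        return a & b

    # The ACM+IEEE code is the intersection of the two codes.
    shared = reduce(combine, [ACM, IEEE])
    print(sorted(shared))  # ['avoid harm', 'be honest', 'respect privacy']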


And, not one mention of the fact that a huge portion of current commercial AI research is going into figuring out how to get people to click more ads or buy more things. "Deep learning" research at Google, Amazon, Netflix, Microsoft, and many others, is centered on advertising and commerce. They aren't calling it AI, but they're hiring AI people to do it and they're working with AI techniques and tools.

I consider this dangerous, in a way that I can't really put my finger on. Is it merely that some of the best minds in the field are being distracted by selling more crap that people don't need? Or is it that AI will have at its developmental core a model of inducing consumption as its prime directive? Or, am I just an asshole that doesn't like too many things?


As someone in the deep learning space who's not working at one of those companies (deep learning on Hadoop with a Cloudera/Red Hat-style model is my game), I'd like to add here that while these labs are doing stuff with ads, it's not necessarily all that they are focused on.

I've visited a few of the teams, and many of the things being published out of there go well beyond ads: say, the recent real-time vision stuff with cars published by Yann LeCun's group, or the voice stuff coming out of Baidu.

That being said: I think you're ignoring the structure of the data involved with click-through rates and the like.

Reconstructing sparse data and building recommenders is among the hardest problems in AI. Couple that with the unstructured data (media like text and audio) being used at scale, and we can see some real landmark results.
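
For what it's worth, here's a toy numpy sketch of that sparse-reconstruction/recommender problem: fill in a mostly-unobserved ratings matrix by fitting low-rank factors with gradient descent. The sizes and hyperparameters are illustrative only, not from any real system:

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 50, 40, 5
    R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # "true" ratings 1..5
    mask = rng.random((n_users, n_items)) < 0.1                    # only ~10% observed

    U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item factors
    lr, reg = 0.01, 0.05

    for _ in range(500):
        err = mask * (R - U @ V.T)                 # error on observed cells only
        U += lr * (err @ V - reg * U)              # gradient step on user factors
        V += lr * (err.T @ U - reg * V)            # gradient step on item factors

    pred = U @ V.T                                 # dense reconstruction
    print(np.abs(R - pred)[mask].mean())           # mean error on the observed cells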

Real value is being created alongside some of the commercial research being done.

As for use cases of deep learning outside of ads, my customers are working on many cool problems that don't involve targeting. I've done training as well as deployments with a variety of companies now. Media-centric products coupled with users yield some of the coolest datasets out there. Let me say that search and similarity among data is by far the best application of this stuff outside of yet another conv net classifier.

There's a practical side to this stuff that doesn't involve ads. I hope that helps a bit.


> There's a practical side to this stuff that doesn't involve ads.

Just have a look at the AIMA book (http://aima.cs.berkeley.edu/); it's full of applications not related to advertising (both of its authors, Russell and Norvig, have signed the letter). I'm going to implement many of them on my blog (http://ai-maker.com/), so if you want to have a lot of fun with AI, join me in this learning quest.


I did most of my first homework assignments out of that book many years ago ;). I'm actually writing my own book on deep learning with O'Reilly now. Plenty to do ;)


I'd be thrilled to hear your story with AI and O'Reilly. It must be very encouraging to have this project with such an important publisher.


Email's in profile.


I think you're misunderstanding something fundamental about how AI research works. The techniques, tools, and ideas can be applied to anything by anyone. If Google releases a deep learning framework, it can be used for cancer research or economic planning or anything else. Just because it was developed in the context of advertising doesn't mean that the applications are limited to that domain.

The idea that "some of the greatest minds of our generation are spending their time figuring out how to get people to click on more ads" is a bit of a myth. They're figuring out new algorithms, faster methods and new technological concepts that are being applied to ads first. The truth is that the research divisions of Google, Microsoft, IBM, etc publish a lot of research that isn't related to advertising at all, possibly the majority of what they publish. The best and brightest engineers are interested in engineering and theory, and despite working for advertising companies they tend to come up with good ideas that can be used in a wide variety of applications.


And, I think you're overly optimistic about the generality of some of these problems and the effort going into them.

I believe a significant portion of the implementation effort is going toward more effectively delivering ads and driving consumption, whether they're also writing a paper about indirectly related research or not. Google's most profitable business, by far, is their ads business. They would be crazy to not expend significant money and effort to retain their lead in that space. Other stuff that comes out of it (as nice as it often is, and as much as I enjoy it) is often just a pleasant side effect.

But, I could be wrong. I would also like to point out that I don't have other ideas on funding AI or big data research. I'm just uncomfortable with it, though I don't expect others to necessarily share that discomfort.


It's also going into surveillance, warfare, predictive policing, etc. I believe this is mainly because only big companies or deep-pocketed individuals have real access, but there could come a "golden age" some time later when the general public can access truly powerful APIs and the like, and then we would see a whole new frontier emerge. That is, if they can generalize the algorithms and make a workable product that doesn't need so much manual training and oversight by the engineers creating the AI.


The problem with AGI research is that there have, to date, been few ways to survive as an AGI researcher. So once the commercial platforms became available to apply machine learning/AI whatever you want to call it, there was a way to continue Narrow AI research projects with the possible hope that maybe someday these narrow approaches would help lead to AGI.

In fact the whole AGI/HLAI/Strong AI thing has gotten a bad rap so many times that the majority of super-serious researchers won't even talk about it with any seriousness, a la Norvig et al. Norvig arguably openly mocks AGI-focused groups.

So until there is real interest from some party that has the money to put several billion into hardcore research, we will continue to see AI development live largely in the commercial world. Open source is a possibility, as with OpenCog, but history indicates that the bulk of the "big" minds and money don't go that way.


It must be hard to resist a big paycheck; we're all human, heads of families, and we want to provide the best for our loved ones.

I see a veneer of disdain toward Norvig's attitude in your words. I don't know much about this criticism, but to me he has done a lot of good for us researchers with his AIMA book. In fact, it is the backbone of my blog: http://ai-maker.com/. In the end, Goertzel (an AGI guru) works again for a financial prediction firm, doesn't he?


I believe this article to be relevant here: http://m.theatlantic.com/magazine/archive/2013/11/the-man-wh...


I very much share your sentiment. The research unfortunately seems to be done where there is the most commercial drive for it, which nowadays means tricking people into parting with their money.

> I consider this dangerous, in a way that I can't really put my finger on. Is it merely that some of the best minds in the field are being distracted by selling more crap that people don't need? Or is it that AI will have at its developmental core a model of inducing consumption as its prime directive? Or, am I just an asshole that doesn't like too many things?

http://slatestarcodex.com/2014/07/30/meditations-on-moloch/

This is long, but very, very good and may clear up some confusions.


I think the danger lies in our tendency to ignore the big picture. We work in teams and groups and we create these bubbles where everyone just assumes that a basic set of things has been figured out by other people. You can second-guess what's going on around you, but it's not your job to test if the entire thing makes any sense at all (that recent CIA report might be an example of this). If you work at some tech giant doing AI research, you focus on the next task. Even asking something like "should we really be gathering that info about customers?" will either get you fired or bullied (I imagine) - because that's some lawyer's job, or your boss's, or even that of your country's legislator. Who are you to doubt the direction your entire field is going, anyway?

Some of the best AI researchers working on this problem probably won't stop the community from pushing towards it. The majority of people, in general, are happy just doing their job.


Lots of innovation in the early days of the internet came from porn. Whatever you may think of porn, I think those innovations have arguably had wider benefits than just the porn industry. Similarly, AI originally built to maximize ad clicks and online purchases might eventually be applied to more "noble" causes.


I'm put in mind of a story about this I read... presumably about two years ago: http://www.kaleidotrope.net/archives/autumn-2012/dead-mercha...


I really like their short summary of the automated driving problem:

If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.

I'm convinced this is an area that deserves serious thought. I've mentioned it previously on here, but it's quite possible to develop "AI" driving software that makes dramatic improvements to the overall safety outcome, yet is dramatically less safe than manual driving in certain (rare) circumstances. As a related example, there was a laptop manufacturer a few years ago who developed a "face unlock" feature and neglected to train it on people with darker skin. That was unfortunate, but hardly life threatening. In a car it's easy to imagine much worse outcomes.


> the automated driving problem:

>> If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.

> I'm convinced this is an area that deserves serious thought.

My fringe opinion on this topic is that automated cars actually simplify insurance and liability considerably.

The case history of product liability isn't common knowledge, nor, perhaps, are the history and fundamentals of insurance, but both areas are instructive. Each hints that when accidents are caused by automated or designed processes, rather than spontaneous human decisions, everything gets a lot less messy.

Here's a case very commonly taught to law students around the country: http://online.ceb.com/calcases/C2/24C2d453.htm

In it, a Coke bottle explodes, and Justice Traynor explains in concurrence essentially how the industrial revolution had shifted the optimal way to allocate the cost of freak accidents. There's a parallel shift for automation.

If you have manufacturers absorb all the risk and damage of accidents involving automated cars, then the incentives are in the right place. This doesn't work right now. You can't just automatically assign manufacturers all liability with manual cars, because after an accident you have to sort out how much of a dumbass or how drunk the driver was, with lengthy determinations as to percentages of fault for all involved.

Automation strips that whole problem out of the equation (so long as one party wasn't driving a manual car). It would drastically simplify liability and insurance in this area.

Corner cases where a highly uncommon set of conditions prompt accidents? We have those now. Look at all the cases about delayed recalls in the auto industry. They're almost all about whether they expected the set of conditions to occur or not. Companies already deal with these issues, because we're already using complex machines.


I hate the liability view of error handling.

I completely understand that knowing failure rates and likely liability payouts lets manufacturers plan.

However, I think it is completely unacceptable to consider a particular type of predictable "accident" acceptable because it is rare.

I think standards and safety testing are needed as well as liability laws.


> consider a particular type of predictable "accident" acceptable

That doesn't actually happen in modern liability cases, because the system shares your intuitions.

If a car manufacturer's defense in a class action lawsuit is, "Yeah, but it's only a few people," the court will bankrupt them with punitive damages on the spot.

This isn't about giving people a free pass to hurt small numbers of people, it's about assuming they're responsible when they make things that hurt people, rather than wasting time figuring out responsibility from first principles after every single incident.


This reminds me of Waleed Aly's fantastic talk on Ethics & Technology in which he explored multiple ethical implications of self-driving cars.

Unfortunately, I can't find a recording of his talk (which I saw at Above All Human conf a few weeks ago) but here's an article about it, which may or may not capture the essence for other interested people: http://www.afr.com/p/tech-gadgets/see_an_ethicist_first_wale...


To be clear, this is research priorities for "given that we have proto-AI technologies, how do we manage the impact on society and make sure the full-AI tech doesn't do something vastly unintended."

Which is somewhat worthwhile, but presupposes the existence of general AI techniques, which is an incredibly low-investment area. Seriously, the amount of research being pointed at anything that's plausibly "full AI" in nature is minuscule. If anything it's lower than in the 80s (and maybe appropriately, given the lack of payoff for the more general work).

(I could go on in detail about how there is approximately zero research being devoted to, eg, automated refactoring of large codebases and other stuff that you'd expect to be absolutely essential to general AI. But I'll just stop with the naked contention.)

And it's rather silly to conjecture effects when you don't even know the order of magnitude of the timelines, the resources required, or the bootstrapability of the technology. There is a huge difference between "a Top100 cluster can emulate a human brain on a 1/100 timeframe" and "a cell phone possesses general-purpose optimization algorithms that are better at learning arbitrary tasks than your average human".


If you read the linked research priorities document, you can see that this isn't really about "general AI techniques"... it's about steady progression along the lines of things that exist but aren't necessarily on the market yet (e.g. self-driving cars).


> I could go on in detail about how there is approximately zero research being devoted to, eg, automated refactoring of large codebases and other stuff that you'd expect to be absolutely essential to general AI.

The programming languages community has a fairly significant amount of effort devoted to automated refactoring of large codebases; there is even a DARPA program for this: http://www.darpa.mil/Our_Work/I2O/Programs/Mining_and_Unders...

This isn't seen as "AI" work, though...


That's the classic AI success story problem in a nutshell: anything successful from the AI field gets rebranded as something else.


In this case the situation is different. AI researchers didn't succeed at program analysis; they gave up. @fiatmoney is right: it's no longer considered an AI problem. But computer scientists haven't solved it either, because they keep bumping their heads against the computability ceiling. It's very clear to me that AI is actually required for general, accurate program analysis.


Why? We get general and accurate program analysis all the time using just unification and simple inference rules...
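
For anyone curious, here's a toy Python illustration of the unification step at the heart of, e.g., Hindley-Milner type inference (the term encoding is invented for this example, and there's no occurs check):

    def resolve(t, subst):
        # Follow variable bindings until we reach a non-variable or an unbound variable.
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst=None):
        # Unify two terms. Variables are strings starting with '?';
        # compound terms are tuples like ('list', '?x').
        # Returns a substitution dict, or None if the terms clash.
        subst = dict(subst or {})
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            return subst
        if isinstance(a, str) and a.startswith('?'):
            subst[a] = b
            return subst
        if isinstance(b, str) and b.startswith('?'):
            subst[b] = a
            return subst
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None  # clash: analogous to a type error

    print(unify(('list', '?x'), ('list', 'int')))          # {'?x': 'int'}
    print(unify(('list', '?x'), ('pair', 'int', 'bool')))  # None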


No, you don't. You may be able to infer some properties of some programs, but you can't take an arbitrary program of real-world size and answer an arbitrary question of a kind that a human maintainer of that program would want to know; for example, under what circumstances, if any, it ever performs an invalid operation -- such as an array bounds violation, throwing a runtime exception, whatever is considered invalid in whatever language it's written in.

That's what I mean by "general and accurate".


Yes, you can, in many dimensions. One of those we call "type checking". Others are the myriad uses of model checking and symbolic execution to identify errors in programs automatically. Can you identify all errors? No, but that has nothing to do with weak or strong AI. You can definitely learn general properties about general programs using today's technology.


I'm certainly not saying that static analysis is useless; that would be an odd statement considering it's the field that pays my salary!

But there's still a very big gap between what we can do and what we want to do. None of the extant static analyzers found the Heartbleed bug, for example.

What I'm saying is that to close that gap will require AI. I don't think it's just a matter of better algorithms.


The PDF mentions some short-term topics relating to automation of jobs, self-driving cars, and autonomous weapons.


The attached paper ("Research priorities for...") reads like a white paper intended for an NSF program office, with the idea that it might be a skeleton for a research initiative and CFP from NSF.

I also have to say that it is a very bureaucratic response to a deep issue. It seems narrow in comparison with the scope of the issue. Maybe that's what happens when you get a bunch of specialists together.


The tacit assumption here is that we can predict what aspects of AI will be most beneficial. But if you look at the inventions/discoveries that have most affected the world over the past few decades I think you'll find that they are largely unpredictable, e.g. the internet, computers, antibiotics, etc. It's actually difficult to find a non-incremental improvement that was planned. We have such a glut of scientists right now that it doesn't make sense to focus them in any specific direction. AI is a popular field and people are going to be working in it, regardless of whether we adopt a set of research priorities or not.


> adopt a set of research priorities

Try this explanation on for size: have the whole group, with a lot of signatures, etc., come out with a big statement, a paper, a position/policy paper, on what the AI research priorities are.

Then, presto, bingo, any prof looking for grant money can pick one or a few of those priorities, adjust the direction of his work, and claim, both to the granting agency writing the check and to the journals publishing his papers, that his work is on the priorities.

And just what is the solid basis for the priorities? Sure, that paper. And, reading that paper, it looks solid, right? I mean, it very much looks like the right stuff, right?

Or, maybe an AI researcher wants to do something else. So, they can claim that their work is leading edge, disruptive, original, crashes through old barriers, is better than the conventional, etc., right? And maybe that approach will get grant money, accepted papers, tenure, etc.

Sounds to me like a way to help get grants and get papers published and, thus, to keep the parade, party, whatever going.

Meanwhile, back to good engineering, etc.!

That is, if the paper looks weak for its claimed, obvious purpose, then maybe look for the most promising non-obvious purpose! E.g., some people are good at manipulation, and some people are gullible. Or, "Always look for the hidden agenda."! Or, don't be too easy to manipulate!


Eric Drexler wrote rather accurately about the importance of a global hypertext publishing network back in the mid-80s (following earlier less-measured but not crazy writings by Ted Nelson); he also projected that one could exist by the mid-90s. (He avoided timeframes when analyzing other technologies.) It's not so much that there are no good futurists as that "you can't tell people anything": http://habitatchronicles.com/2004/04/you-cant-tell-people-an...


I was in a group that did some research in AI, worked with several famous companies, met lots of smart people, wrote and shipped some software, published a lot of papers, gave a paper at an AAAI IAAI conference at Stanford.

Summary: The good papers at the Stanford conference were really just good cases of traditional engineering, statistics, etc. Anything like artificial intelligence (AI) beyond just such traditional approaches had next to no significant role. By then, AI just looked like a new label for old bottles of wine, some good, some bad.

The "Open Letter" and the paper it links to look like more of the same: The solid, promising work is traditional approaches in good engineering, statistics, optimization, etc. The rest presents severe problems in specification, testing, verification, even security.

Maybe the best thing to do with AI that is not just good engineering, etc., that is, software and systems that are built but not really designed or well tested, is at least to be careful about security and keep the thing locked up in a strong padded cell and, especially where any harm could be done, take any of its output with many grains of salt and as just a suggestion.

It appears that, bluntly, we just do not have even as much as a weak little hollow hint of a tiny clue about how intelligence works, not even for a kitty cat.

When I was in the field, an excuse was that artificial intelligence was not really our work or our accomplishment but only our long-term goal. Hmm ....


The challenge isn't creating the "Intelligence" but creating a virtual "Environment" that trains the artificial agent on tasks through which it learns about and alters its environment.

With a purely computational environment, all tasks that can be learned/mastered reduce to basic complexity problems, e.g. prime factorization. To our "natural" intelligence this seems a contrived skill, but consider: better prime factorization would allow the artificial agent to mine all the bitcoins, break passwords, etc., making the AI very rich "in the real world". And compared to a human calculator, even the worst computer looks like a super-intelligence in this domain.

So to learn the skills we consider important, there needs to be a way to make the AI experience and alter a non-virtual world, i.e. the physical (macro) world we live in: basically some physical robot that does I/O between the environment and the intelligence. And until the AI has some reason to optimize for behavior in the non-virtual world (have the most kids instead of the most bitcoins), natural and artificial intelligence will remain alien to each other, as neither will recognize the other's reality.

With this insight, I'd diagnose the bottleneck in AI to be in engineering robotics at least as complex as insects, to which we're a long way off.


Complexity doesn't solve problems. Making our robots more complex isn't going to make AI smarter, it's going to make it more confused.

I think your 'insight' is likely to be a false analogy.


If anybody has interest in working on some of the Econ-related issues, please tweet me at @brandonjcarl


I think the top priority should be how a hierarchy of weak AI agents can form strong AI with the ability to rewrite its own notions and not become unstable.

I've seen very little about this, except for a video where (as I recall) a multilayer neural network can be trained while holding certain connections at a constant value. That way the network can be trained to do different tasks and then put back into those modes by setting those training connections to various values during runtime:

https://www.youtube.com/watch?v=VdIURAu1-aU

Eventually another neural net could be trained to know which values to hold to get different behaviors. It sounds like one of the goals is to reuse logic for similar tasks, and eventually be able to adapt in novel situations like our brains do. I believe there was also a bit about how time figured into all of this, but I haven’t watched it for a while.
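
Here is a hypothetical PyTorch sketch of one reading of that idea: a few extra "mode" inputs are held at a constant value while training on each task, and setting them differently at runtime switches the network between the learned behaviors. This is just an illustration, not necessarily what the video does:

    import torch
    import torch.nn as nn

    class ModedNet(nn.Module):
        def __init__(self, n_inputs=4, n_modes=2, n_hidden=16, n_outputs=1):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(n_inputs + n_modes, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_outputs),
            )

        def forward(self, x, mode):
            # 'mode' is a constant vector per task, e.g. [1, 0] or [0, 1].
            return self.body(torch.cat([x, mode.expand(x.shape[0], -1)], dim=1))

    net = ModedNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Train on task A with the mode inputs clamped to [1, 0] ...
    x = torch.randn(32, 4)
    y_a = x.sum(dim=1, keepdim=True)  # toy target for task A
    loss = nn.functional.mse_loss(net(x, torch.tensor([1.0, 0.0])), y_a)
    opt.zero_grad(); loss.backward(); opt.step()

    # ... then switch behavior at runtime by setting the mode inputs differently.
    out_b = net(x, torch.tensor([0.0, 1.0]))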

Anyway, I’m thinking that a lot of what we’ve learned about version control systems like git could be used to let an AI brainstorm in a sandbox and replace its own code as it improves, and always be able to revert or merge if needed. Throw in some genetic programming and decent lisp machines with 1000 cores and we could really have something.

When I think about this sort of "real work", as opposed to the daily minutiae our industry gets mired in, it makes me sad inside.


This aggregation/collaboration of weak agents... isn't it boosting? It's certainly a recommended approach to building robust systems; see Pedro Domingos' paper (tip number 10):

http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf
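
(For reference, a minimal scikit-learn sketch of boosting in that sense: many weak learners combined into one stronger model. The dataset and settings are purely illustrative; note the parameter is named `estimator` in recent scikit-learn and `base_estimator` in older versions.)

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    weak = DecisionTreeClassifier(max_depth=1)                 # a "weak agent": a decision stump
    strong = AdaBoostClassifier(estimator=weak, n_estimators=100)
    strong.fit(X, y)
    print(strong.score(X, y))                                  # accuracy of the combined model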

But I had the impression that the letter had a more transcendental feel than just focusing on a particular tuning technique.


I'm glad to read such a positive attitude towards AI... I recently discussed in another thread whether AI was indeed pushing people into the unemployment lines (an opinion from an MIT Tech Review piece). Links as follows:

https://news.ycombinator.com/item?id=8866253

Parent: https://news.ycombinator.com/item?id=8863279

As I said, I am so firmly convinced that AI has so much good to do that I just created a blog (http://ai-maker.com/) solely dedicated to AI and its applications, and I'm going to dedicate my spare time over the following years to growing this side project into something awesome, because that's where AI is leading us.


You have posted four links to your blog just in this thread. Forgive me if you find this rude, but I think that's at least two more than necessary.


Sorry for that. I just found it appropriate for each of the different sub-discussions (and I just removed one of them). When threads get massive with text, I sometimes miss interesting stuff and I didn't want this to happen to other readers.

I'm starting this side project and recently HN has been on fire wrt AI. I'm seriously willing to put a lot of time into this, and the more people I can help, the more I can learn.

Sorry again if that bothered you or any of the other watchers here.


I propose Brian's Law - any AI sufficiently advanced to learn faster than humans would learn from human history and stop talking to us.


In Silico Cognitive Biomimicry: What Artificial Intelligence Could Be http://genopharmix.com/biomimetic-cognition/in_silico_cognit...


Mimicking a portion of human cognition should be #1. For example:

What do you think of when you think of the word "sky"? Our machines and algorithms need to produce an answer similar to yours.
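
One rough sketch of how today's tools approximate that kind of association is nearest neighbours in a word-embedding space (this assumes the gensim library and its downloadable GloVe vectors; it's an illustration, not a claim that this matches human cognition):

    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")   # small pretrained GloVe word vectors
    print(vectors.most_similar("sky", topn=5))     # the five words nearest to "sky"

Whether such word lists count as an answer "similar to yours" is, of course, exactly the open question.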


We don't want computers to be as dumb as we are. We already know how to make human brains that can answer such questions poorly; what good does it do to have a machine do it too?


There are cases where it would help to have realistic simulations instead of real humans. E.g. artificial companions, negotiations, public relations, or experimentation that would be unethical on living humans (unless we end up with a theory and evidence indicating that machines can be sentient). Or there may be some huge task that would benefit from having trillions of "people" working on it.


That's true, but there is no known way to verify proper operation of those kinds of machines. What happens if your artificial companion stops liking you? What if your negotiation robot throws a tantrum?

Well, then the robot isn't doing what we want it to do. But you asked for human behavior, and that's what you got. There is not really any point in making robots do what humans do. We want them to do what humans can't do -- even if we want them to seem human while they do it.

Addendum:

>(unless we end up with a theory and evidence indicating that machines can be sentient).

I think we'd sooner find a theory indicating that sentience is either not well-defined or not a useful concept for making decisions. If we found that ants were sentient, we'd still step on them and not care.

This kind of talk is exactly the kind of reason I feel most people are useless for understanding ethics: you can't make decisions based on the answers of arbitrary grammatical yes/no questions. It's the same reason mathematicians don't really care if P=NP is true or not. You can just suppose one way or the other and you'll be no better off without a proof.

The value in the proof is an advancement in conceptualization and language. Similarly, an important advancement in ethics/morality/AI will very likely make your concerns about sentience obsolete in a way that should be obvious to you.


My goodness, we are dumb enough not to produce/engineer a machine that's smarter than us.

Think about this.

The human brain is the best pattern matcher in existence; let's use biomimicry, like the human race has done in the past, to make breakthroughs.


Out of all the things to worry about, AI is at the bottom of my list. Third-world countries, declining air quality, war, etc. are all things more deserving of our time and energy.


I agree that it's hard to take seriously because it sounds like science-fiction. I mean, right now that's what it is. And I'm sure many people who read HN have managed to learn how to catch themselves working on an interesting problem instead of one that needs to be fixed. And this sure sounds interesting.

But the problem with AI is that it's unlike anything we've ever faced before and that once it becomes clear that it can actually overshadow environmental and political problems, it'll already be too late. Imagine the Manhattan project people finding out that there's a high probability of a single nuclear bomb actually taking out the entire planet.

But that's not all... suppose the entire world already depends on nuclear power, the basic tools to build that bomb already exist everywhere and the first test might be around 1-3 decades in the future.

I'm all for AI researchers allocating some of their time to preventing that sort of thing. If that's even possible.


Wrong. You underestimate the power of AI. AI profoundly affects economics, since economics is fundamentally computational.


I think a plain "Wrong" mistakenly assumes you both have the same definition of AI.

I suspect the parent comment meant "of all things, an Intelligence Explosion by general-purpose AI", which, to be fair, even the group set up here to study it does not seriously believe in.

You are right in that it's almost impossible to deny the impact that some AI research has led to, but that is the field of AI, not the existence of "real" AI.

You might both be talking past each other.


> I suspect the parent comment meant "of all things, an Intelligence Explosion by general-purpose AI", which, to be fair, even the group set up here to study it does not seriously believe in.

The question of whether it will end up actually happening is unclear, but my understanding is that FLI takes the possibility of an intelligence explosion pretty seriously.

This is a difficult question to meaningfully engage with, without a lot of background research. Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" is a good entry point into the subject.


Disclaimer: maybe I'm not the target audience of this letter.

What a terrible letter. Hemingway grade is "20, bad".

451 words, but the only hint of what the authors want is "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do ... In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today." But, reading this, I just think, "Sure, that sounds good."

There's an attachment (which I won't read), but it would have been nice if the original document contained some information about what the point of this is, so I'm motivated to read it.


The attachment is the letter.




