What should we learn from past AI forecasts? (openphilanthropy.org)
72 points by ghosh on May 28, 2016 | 55 comments



What I find interesting is that Hacker News manages to hold the mentality that there's a startup bubble (this time it's not different), while at the same time holding the mentality that AI is going to take over (this time it's different). In aggregate, that is; I'm not saying everyone holds this view, but it seems to be the plurality view.

Many of the arguments used to argue that AI is different this time around are the exact same arguments you could use to justify that it's not a startup bubble this time.

My guess is that something about the fantastical sci-fi aspect of AI captures the deep-seated imagination and wishful thinking of many engineers, whereas the rush of money, the business people, and the focus on sales and growth in startups draw out deep-seated disdain from many engineers.


I might take a guess that, when sensational amounts of money are flying to and fro, there's a degree of truth to the observation that humans trading imaginary numbers or paper tokens or what-have-you is a centuries-old game that consistently harkens back to simple gambling. Taking calculated risks and swindling fools with asymmetric information is just stupid human tricks in redux.

Machines that get up and do things for us with minimal hand-holding really would be something unprecedented, new and different. And truly disruptive to the economy, more so than the boiler-room poker games of investment and High Finance, which are disruptive, if only beneficial to a self-serving few.

Whether or not the reality of such a thing mirrors our imaginations, if it ever happens, things might get really, really weird thereafter, and good luck guessing how.

We'll be confronted by deceptive situations while circumstances change around us, and to that mix of confusion, add the already comedic/tragic stupid human tricks that continue to beguile so many.


The reason HN probably thinks that with the startup bubble this time it's not different, while with AI this time it's different, is because that's what the facts indicate and HN tends to be well informed.

The bubble is the same old story, and it's debatable whether it even is a proper bubble or just some frothy valuations. Such periods of overvaluation have happened many times and will come and go.

The AI thing is quite different. If you accept the human brain is effectively a biological computer and that normal computers get more powerful every year then inevitably there's a point when they get more powerful (in terms of processing power and memory) than the biological ones. That is happening over roughly the next twenty years, and it has never happened before. It's a one-off, inevitable event that will happen once and only once in human history, of a significance comparable to, say, the evolution of multicellular life forms. It should be interesting to see.
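For what it's worth, here is a rough back-of-the-envelope version of that timing claim, using the brain-compute figure Kurzweil gives later in this thread and assumed 2016 hardware numbers; every figure below is an assumption, not something established here:

    import math

    # Rough extrapolation; every number is an assumption, not a measurement.
    brain_flops   = 20e15   # Kurzweil-style brain-compute estimate, cited later in the thread
    chip_2016     = 10e12   # ~10 TFLOPS for a high-end 2016 accelerator (assumed)
    doubling_time = 2.0     # assumed years per doubling of single-device performance

    doublings = math.log2(brain_flops / chip_2016)
    print(round(doublings), "doublings, roughly", round(doublings * doubling_time), "years")
    # -> about 11 doublings, i.e. on the order of twenty years

Change any of the assumed inputs and the answer moves, but under these figures it lands in the "next couple of decades" range the comment describes.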


>If you accept the human brain is effectively a biological computer and that normal computers get more powerful every year then inevitably there's a point when they get more powerful (in terms of processing power and memory) than the biological ones.

That doesn't give you anything like the right algorithms to run on the computers. Moore's Law has been ending, so that free ride to avoid taking asymptotic complexity seriously is over.


Actually, the facts show "this time it's different" for both. Every indicator tells us this startup boom is nothing like the past bubbles and is actually quite sustainable. (In fact your comment seems to indicate you hold this view, which is not the HN groupthink view I'm referring to.)


I think what captures the imagination is that, fundamentally, AI is about understanding ourselves, what makes us tick. If we can reproduce the human mind, we can understand the human mind.

It's up for debate which will come first. I think this is what drives people, though, even if they may not have thought about it that way or tried to articulate it that way in their thinking.


Yes! I agree 100% and have been saying the same thing.

I too wonder if seeking AI is just another quest for the answer to whether things are fated or not, or whether true randomness exists or not. Are we just atoms bouncing around? Or is there something else?

I think most people will agree there is something more. I don't think the computerized AI as we perceive building it today will result in some new AI being.

And that should not stop us from trying; however, we ought to be honest with our fellow humans in the non-ML world that true AI is not around the corner. As soon as we collectively make a promise on which we can't deliver, and the public realizes that, our funding dries up. 2001 wasn't a bubble because computers and the internet aren't awesome; it was a bubble because we told people that tech would be awesome by 2001. In reality, it became awesome gradually, and now in 2016 we're at the point of realizing 2001's promises.

Under promise, over deliver = stability

Over promise, under deliver = instability


Of course we're just atoms bouncing around. The interesting parts are what sorts of atoms you need and how, precisely, they bounce around.


OK, but if we're just atoms that obey physics, where does free will come into play? Neurons make electrical impulses or something, right? How do we control that so that it's possible for us to choose either a hamburger or a pizza?


>OK, but if we're just atoms that obey physics, where does free will come into play?

https://en.wikipedia.org/wiki/Compatibilism . Or to say more on the matter, free will is the ability to engage in counterfactual reasoning, and thus select actions according to motivations and understandings of the world rather than having actions determined without an understanding of the world. "Free will" says that the causal path from our circumstances to our actions passes through our motivations and our knowledge, thus making our intentions causal factors in the world.


That just sounds like a redefinition of free will to me. Free will and determinism aren't combinable, in my opinion. If you say there's free will alongside something else, then that's the free will view. Determinism says only one thing is possible, and free will says anything is possible. If you say that both just one thing is possible and also anything is possible, well, to me, that sounds a lot like you're saying anything is possible.


>That just sounds like a redefinition of free will to me.

Only if you start by assuming that "free" will must, necessarily, be supernatural. But of course, if you start that way, you've started everything in bad faith, because you're going to reject any possible view of the world that relies on science instead of magic.

>free will says anything is possible

No sensible definition of free will says that you can will yourself to desire slood (a substance you've never heard of because it doesn't actually exist and has no properties even as a hypothetical) or will yourself out of the influence of gravity.


> Only if you start by assuming that "free" will must, necessarily, be supernatural. But of course, if you start that way, you've started everything in bad faith, because you're going to reject any possible view of the world that relies on science instead of magic.

I'm willing to admit there are parts of science we don't understand, and that people often label this as "magic". In my opinion, sometimes humans call it religion and use it as a form of hope as we continue in the quest to know the unknown. Admittedly, any tool can be used to do harm too, and clearly in 2016, many of us can see how many religions have been perverted. Yet at the same time I think religious folks tend to levy the same criticism of strict scientists.

Anyway, I have no beef with either side, and I know I won't be able to convince anyone here that science can also be viewed as a religion with its own biases. Plus I believe that most of the time science is great. I just don't know about this quest for true AI. It feels religion-y to me.

> No sensible definition of free will says that you can will yourself to desire slood (a substance you've never heard of because it doesn't actually exist and has no properties even as a hypothetical) or will yourself out of the influence of gravity.

Right, no, I didn't mean it like that. The thing I wanted to share is that free will and determinism cannot co-exist, in my opinion, for the reason stated in my last comment.

Say I have a choice - pizza or pasta. If I choose pizza, then free will says I also could've chosen pasta. Determinism says if I chose pizza, then the "choice" was always going to be pizza. If you say both that I could've chosen pizza or pasta, and that I was fated to choose pizza, that makes no sense to me.

[aside]

It sounds like quantum mechanics, which I'm unable to observe myself because I don't have the training or equipment to do something like a double slit test as demoed here [1]. Also I haven't studied quantum mechanics at all.

And, I'm not sure how trustable Dr. Quantum is as a source of information =). It's the first I've learned of quantum theory. I found another that says the same thing from the Royal Institution [2], which seems more reputable, but something tells me these findings haven't been replicated sufficiently to be fact. Or I could be 100% wrong.

[/aside]

Anyway, I'm interested in learning more, so if you have resources you think can point me in an interesting direction, I'm happy to check them out.

[1] https://www.youtube.com/watch?v=Q1YqgPAtzho

[2] https://www.youtube.com/watch?v=A9tKncAdlHQ


>Anyway, I have no beef with either side, and I know I won't be able to convince anyone here that science can also be viewed as a religion with its own biases. Plus I believe that most of the time science is great. I just don't know about this quest for true AI. It feels religion-y to me.

People definitely have a tendency to anthropomorphize "true AI" that is very religion-y. This doesn't mean that we need to throw out science as a religion among religions; it means that we need to throw out quasi-religious thought about "true AI" and be careful whenever we step off the sure ground established by current scientific knowledge.

>Say I have a choice - pizza or pasta. If I choose pizza, then free will says I also could've chosen pasta. Determinism says if I chose pizza, then the "choice" was always going to be pizza. If you say both that I could've chosen pizza or pasta, and that I was fated to choose pizza, that makes no sense to me.

That's not what compatibilism says. Compatibilism says: you weren't fated to choose pizza, period, and if we reran the whole experiment enough times, so to speak, you would in fact choose pizza some of the time and pasta some of the time, without our being able to predict better than just collecting percentage statistics.


> People definitely have a tendency to anthropomorphize "true AI" that is very religion-y. This doesn't mean that we need to throw out science as a religion among religions; it means that we need to throw out quasi-religious thought about "true AI" and be careful whenever we step off the sure ground established by current scientific knowledge.

Okay, I agree with that. I like the description of religion-y AI as anthropomorphizing. I'd say the same thing about throwing out pieces of religions rather than the whole thing. I'm not religious myself, by the way. I just think if they went back to being just about hope and support then that would be a good thing. So many other values have been tacked on that some seem to have desecrated themselves.

> That's not what compatibilism says. Compatibilism says: you weren't fated to choose pizza, period, and if we reran the whole experiment enough times, so to speak, you would in fact choose pizza some of the time and pasta some of the time, without our being able to predict better than just collecting percentage statistics.

Still sounds like free will to me. I guess my brain's not ready to interpret what you're saying.


The problem with taking the aggregate view is exactly what you run into - you get awkward, inconsistent world views.

Most of the LessWrong style AI-safety people would probably heavily disagree with what you wrote, because you imply that AI-safety is a topic that matters now because "this time it's different". On the contrary, I think most people would say AI is still quite a ways out, as in a few decades, and that the arguments about it don't necessarily depend on the current state of AI. The state we're in is certainly seen as a warning sign, but not proof that we're a few years away from true AI.


I think it's that optimism is supported over pessimism. You're more likely to read optimistic comments about an undetermined thing than you are to read negative ones.

Negativity about the future is generally downvoted or unsupported.

Those who know AI isn't around the corner are fewer, because it takes an ML researcher to know that. They need time to spread a message of positivity to the masses: while AI isn't about to be developed, there are still some really great things coming.


The plurality opinion on HN seems to be that it's a startup bubble (or at least it's a very very common opinion) - but this is pessimism.


There's a big readership on HN who are good programmers etc working for tech companies, but who don't have a lot of skills outside of that. Many are reliant on a good job in order to feel like they have social standing as well. The biggest existential threat for them is some kind of employment bubble. That fear washes over this space fairly regularly, in waves every 2 years or so.

I don't think the bubble is a deeply held belief, or even something that most of us intrinsically care about. It's just human nature to be paranoid about your food source.


You're right. Different communities will have different views. I'd argue we call it a bubble because we perceive other people's excessive optimism as potentially harmful. Therefore, saying "bubble" is optimistic for us on HN.

If you can't jump through the mental hoops I'm making, I don't blame you. This is just what my brain thinks.


Computers are millions of times faster than 30 years ago, let alone 60 when the field started. There are that many decades of new knowledge and research. There is a dozen times more funding than in the past. AI research is actually producing things of economic value now. And the field has moved past brute force search and dumb expert systems, to deep general learning algorithms.

Of course this time it's different. The only question is how different?

I would say that HN is cautiously optimistic about the future of AI, in aggregate. With extreme opinions on both sides, of course. But there are many skeptical articles that reach the front page, and there are many skeptical comments like yours at the top of every thread about it.

The opinion also seems to be "in my lifetime" or "in the next few decades", not in the immediate short-term future. That's a pretty long time, and a lot could happen in that time. Saying that AI will definitely not happen in that time is just as overconfident as saying that it definitely will.

Lastly, saying "people in the past who said 'this time it's different' were wrong once, therefore anyone who says that is wrong", is a fully general counterargument. You are applying it to vastly different domains (short term economic bubbles vs long term technology predictions). And you are glossing over all the details of the arguments.


>And the field has moved past brute force search and dumb expert systems, to deep general learning algorithms.

I would still describe deep neural networks as brute-force statistically-weighted function search.


For sufficiently vague and useless definitions of "brute force", literally everything is "brute force". Backprop was such a big advancement because it allowed training nets much faster than, say, hill climbing, which itself is much faster than actual brute force.
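To make the comparison concrete, here is a minimal toy sketch of gradient-following versus blind random search on the same small least-squares fit; it is not meant to represent any real network, only the relative efficiency of the two search strategies, and all numbers are illustrative:

    import numpy as np

    # Toy comparison: gradient descent vs. random hill climbing on the same
    # 10-parameter least-squares fit, with the same iteration budget.
    np.random.seed(0)
    X = np.random.normal(size=(500, 10))
    true_w = np.arange(1.0, 11.0)                  # ground-truth weights 1..10
    y = X @ true_w + 0.1 * np.random.normal(size=500)

    def loss(w):
        return np.mean((X @ w - y) ** 2)

    # Gradient descent: follow the analytic gradient of the squared error.
    w = np.zeros(10)
    for _ in range(300):
        w -= 0.05 * (2.0 / len(y)) * (X.T @ (X @ w - y))

    # Hill climbing: random perturbation, kept only if the loss improves.
    hw, best = np.zeros(10), loss(np.zeros(10))
    for _ in range(300):
        cand = hw + np.random.normal(0.0, 0.1, size=10)
        if loss(cand) < best:
            hw, best = cand, loss(cand)

    print("gradient descent loss:", loss(w))   # close to the noise floor (~0.01)
    print("hill climbing loss:   ", best)      # far higher for the same budget

The point of the sketch is just that following a gradient extracts far more progress per evaluation than accept/reject perturbation, which in turn beats exhaustively enumerating parameter settings.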


> "in my lifetime" or "in the next few decades"

is actually quite optimistic. You've put a definite timeframe on something extremely nebulous. I actually think the HN groupthink plurality is even more optimistic than that (the next 10 years), however.


It's not that optimistic. Even if you assume a totally uninformative prior, it's still reasonably likely to happen in our lifetime. E.g. the Copernican method/prior gives an estimate of 50% probability within the next 65 years, and an even more optimistic prediction if you add other factors: http://lesswrong.com/lw/mxy/using_the_copernican_mediocrity_...
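For readers unfamiliar with the argument, here is a hedged sketch of the Gott-style "Copernican" delta-t estimate behind that kind of number; the 1950 start date for the field is my assumption, and the linked post's exact inputs may differ:

    # Gott-style delta-t estimate; the 1950 start date is my assumption.
    t_past = 2016 - 1950          # ~66 years of AI research observed so far

    # Median under the Copernican prior: 50% chance the remaining duration is
    # shorter than the past duration, i.e. roughly another 65 years.
    print("50% point: about", t_past, "years")

    # More generally, with confidence c the future duration t_f satisfies
    #   t_past * (1 - c) / (1 + c)  <  t_f  <  t_past * (1 + c) / (1 - c)
    c = 0.5
    print("50% interval:", t_past * (1 - c) / (1 + c), "to", t_past * (1 + c) / (1 - c), "years")
    # -> 22.0 to 198.0 years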

You must have some strong additional information to make such a pessimistic prediction.

I really don't know where you get the impression that the majority of HNers expect strong AI in 10 years. That's just ridiculous. I would guess maybe less than 5% think that. Yes, there are a lot of optimistic articles on AI, but they are almost all about weak AI.


The Copernican prior is totally speculative.


Or perhaps this time it is different. I think most people think human-level AI is not just around the corner. But automating a lot of the mental processes that constitute work? Possibly not very far away at all. That's where the hype is coming from. And for me, I think there is a very good chance the hype is justified...


I think the best plan (for now) is to just make NNs et al. as accessible as possible and have everyone try different things. It'd be cool to see accessible interfaces for pay-per-training on good hardware (TPUs?) tied in to some library. I think "this time it's different" is right mainly because we can try things much more quickly, with a lot more people.


> The mentality that AI is going to take over, this time it's different (in aggregate, I'm not saying everyone holds this view)

Do you remember the saying that it's very difficult to make someone understand something if their salary depends on NOT understanding it?

"Data Science" is the latest hot industry to be in, if you can keep the hype going, promise some miracles of machine learning then you can score a big payday. That's what's really happening here.


You can't keep hype going without some actual successes. If everyone gets on the hype train without researching where it is going, it'll overheat.

Which I guess is bound to happen. But it is possible to fight those forces by critiquing frauds, and to prolong these "AI golden years" as long as we can in order to promote more advances.


Sure you can, for a few years; the trick is to time your exit just before you have to deliver, and just as the next tech fashion is getting started.

The Cloud guys are doing the same thing, right now. As are the DevOps snake oil salesmen. In fact if you can pull off DevOps Machine Learning in the Cloud, you will make a fortune before the music stops.


In my experience, actively working in a field gives decent insight into its direction. Then again, I was still in school during the dot-com era, so I'm not sure what happened there. If you're investing, yeah, timing's tricky for sure.

And perhaps that's true for starting a business as well. I'm simply speaking as a lowly employee who drifts where my nose takes me


In this game being an employee is no different from being an investor, you need to sense which way the wind is blowing and position yourself accordingly.

Here's an example: there is no cloud. It's just a computer somewhere else that you rent time on. They were doing this in the 1970s, when they were called computer bureaus. But put the right buzzwords on your CV and KER-CHING!


I guess I am lucky. I studied machine learning starting in 2006 and haven't felt the pinch. Always seem to be jobs available. I know I'm destined to learn this lesson but I really don't want to :-P


No! Be quiet and let me raise money via overfitting my datasets and making short demo vids!

More seriously though, even if there's a lot of undeserved hype (I don't think so, I just think non-technical people are making up foolish expectations), all this work and investment into technical development of statistical machine learning algorithms and software is going to pay off. Much in the same way that overinvestment in internet infrastructure during the dot-com bubble paid off.


> undeserved hype (I don't think so, I just think non-technical people are making up foolish expectations),

That's what undeserved hype is and where it comes from.

> Much in the same way that overinvestment in internet infrastructure during the dot-com bubble paid off.

Sure. The longer the hype can be tempered with articles like this, the more reasonable the investments, the bigger the pay off.


Yeah, exactly. I am actually solving problems with ML these days.


The difference this time is that 1) AI is profitable, and 2) everything is on a much larger scale.

As I've mentioned before, I went through Stanford CS in 1985, just as it was becoming clear that Expert Systems were not going to yield Strong AI Real Soon Now. Back then, AI research was about 20 people at Stanford, and similar sized departments at MIT and CMU, with a few other small academic groups and some people at SRI. There were a few small AI startups including Teknowledge and Denning Robotics; few if any survived. Everybody was operating on a very small scale, and nobody was shipping a useful product. Most of this was funded by the US DoD.

Now, machine learning, etc., is huge. There are hundreds of companies and hundreds of thousands of people in the field. Billions are being spent. Companies are shipping products and making money. This makes the field self-sustaining and keeps it moving forward. With all those people thinking, there's forward progress.

Also, we now have enough compute power to get something done. Kurzweil claims a human brain needs about 20 petaFLOPS. That's about one aisle in a data center today.
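A rough sanity check of the "one aisle" figure, with every hardware number below an assumption rather than a quoted spec:

    # Sanity check of "one aisle"; every figure below is assumed, not quoted.
    brain_flops      = 20e15   # Kurzweil's estimate, as cited above
    flops_per_gpu    = 10e12   # ~10 TFLOPS per 2016-era accelerator (assumed)
    gpus_per_server  = 8       # assumed dense GPU server
    servers_per_rack = 10      # assumed, power and cooling permitting
    racks_per_aisle  = 25      # assumed

    gpus    = brain_flops / flops_per_gpu      # 2000 accelerators
    servers = gpus / gpus_per_server           # 250 servers
    racks   = servers / servers_per_rack       # 25 racks
    print(gpus, "GPUs ->", servers, "servers ->", racks, "racks ->",
          racks / racks_per_aisle, "aisle(s)")

The conclusion is sensitive to the assumed figures, but under these it lands in the right ballpark: on the order of one aisle, give or take a small factor.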


>> Also, we now have enough compute power to get something done. Kurzweil claims a human brain needs about 20 petaFLOPS. That's about one aisle in a data center today.

It's a bit late over here, so what I want to say may not come out as I mean it, but basically: who cares how much computing power we have? Most people are not trying to simulate the human brain anymore. And even if Kurzweil's maths were good (I doubt it - the whole idea of measuring anything about brains in bytes sounds just silly), it would just mean that all the computing power in the world is no good if you don't know how to connect the dots.

And we don't. Lots of computing power is good for machine learning, obviously, but strong AI is never going to happen like that, by brute-forcing crude approximations from huge data sets while pushing our hardware to its limits. That's a dumb way to make progress. Basic computer science says that if you want to go anywhere with a problem, you need to solve it in linear time at the very worst. Look at what we have instead: machine learning algorithms' complexities start at O(n²) and go up. We just don't have good algorithms, we don't have good solutions, we don't have good anything.

We have "more powerful computers" today. So what. Today we solve the stuff that's possible to solve, the low-hanging fruit of problems. Tomorrow, the next problem we try to tackle is ten times bigger, a hundred times harder, we need machines a thousand times more powerful... and it takes us another fifty years to make progress.

So, no, it doesn't matter how powerful our computers are. We can't cheat our way around the hard problems. Because they're just that: hard. We're just gonna have to be a lot smarter than we are right now.


> Today we solve the stuff that's possible to solve, the low-hanging fruit of problems. Tomorrow, the next problem we try to tackle is ten times bigger, a hundred times harder, we need machines a thousand times more powerful...

Nah, if you can't handle the huge amount of data, it's possible to just switch to a sparse model or do MC-like sampling. Take AlphaGo as an example of that: a huge state space, yet it was tractable to beat the human expert. That way the network doesn't scale linearly with the size of the domain being learned.
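As a tiny illustration of the sampling point (this is the generic Monte Carlo idea, not AlphaGo's actual method, and the "score" function is purely hypothetical): estimate a statistic over a state space far too large to enumerate by scoring a random sample of states.

    import random

    # Generic Monte Carlo illustration (not AlphaGo's actual method): estimate
    # an average over all 2**64 bit-strings by scoring a random sample instead
    # of enumerating the space.
    def score(state):
        return bin(state).count("1")   # hypothetical per-state evaluation: set bits

    n_samples = 100000
    total = sum(score(random.getrandbits(64)) for _ in range(n_samples))
    print(total / n_samples)   # ~32.0, the true mean, from a vanishing fraction of states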

These kinds of solutions don't rely on improving the hardware. What is needed is datasets, competitions and grants to get people to try and improve state-of-the-art results on them. It's been demonstrated that when a good benchmark appears, a lot of papers follow and top results improve massively. Another useful component would be simulators for training agents in RL.

A promising direction is extending neural networks with memory and attention, in order to focus their work more efficiently and to access external knowledge bases. As we improve these knowledge bases and ontologies, all we have to learn is how to operate on them.

Thus, improvements can come in various ways: sampling, sparsity, external knowledge bases, better research frameworks. Improving the hardware (such as having a better GPU card or a dedicated device) is just one factor.


>> Nah, if you can't handle the huge amount of data, it's possible to just switch to a sparse model or do MC-like sampling... That way the network doesn't scale linearly with the size of the domain being learned.

That's useful when your domain is finite, like in your example, Go. If you're dealing with a non-finite domain, like language, MC won't save you. When you sample from a huge domain, you eventually get something manageable. When you sample from an infinite domain - you get back an infinite domain.

That's why approximating infinite processes is hard: because you can only approximate infinity with itself. And all the computing power in the world will not save you.

>> It's been demonstrated that when a good benchmark appears, a lot of papers follow and top results improve massively.

Mnyeah, I don't know about that. It's useful to have a motivator but on the other hand the competitions become self-fulfilling prophecies, the datasets come with biases that the real world has no obligation to abide by and the competitors tend to optimise for beating the competition rather than solving the problem per se.

So you read about near-perfect results on a staple dataset, so good that it's meaningless to improve on them - 98.6% or something. Then you wait and wait to see the same results in everyday use, but when the systems are deployed in the real world their performance goes way down, so you have a system that got 99-ish on the staple dataset but 60-ish in production, as many others did before it. What have we gained, in practice? We learned how to beat a competition. That's just a waste of time.

And it's even worse because it distracts everyone, just like you say: the press, researchers, grant money...

Well, OK, I'm not saying the competitions are a waste of time, as such. But overfitting to them is a big problem in practice.

>> A promising direction is extending neural networks with memory and attention

That's what I'm talking about, isn't it? Just raw computing power won't do anything. We need to get smarter. So I'm not disagreeing with you, I'm disagreeing with the tendency to throw a bunch of data at a bunch of GPUs and say we've made progress because the whole thing runs faster. You may run faster on a bike, but you won't outrun a horse.

(Oh dear, now someone's gonna point me to a video of a man on a bike outrunning a horse. Fine, internets. You win).


Computing power absolutely does matter, because it allows us to run more complicated experiments in a reasonable amount of time, which is crucial for moving research forward. Today we work on the low-hanging fruit so that tomorrow we can reach for something higher. As a side note, your comment about runtime complexity does not make much sense when there exist problems which provably cannot be solved in linear time. It is dangerous to discourage research on that simplistic basis; we could have much more powerful POMDP solvers today (for instance) if people hadn't been scared off by overblown claims of intractability fifteen years ago.


>> your comment about runtime complexity does not make much sense when there exist problems which provably cannot be solved in linear time.

Look, it's obvious the human mind manages to solve such problems in sub-linear time. We can do language, image processing and a bunch of other things still much better than our algorithms. And that's because our algorithms are going the dumb way and trying to learn approximations of probably infinite processes from data, when that's impossible to do in linear time or better. In the short term, sure, throwing lots of computing power at that kind of problem speeds things up. In the long term it just bogs everything down.

Take vision, for instance (my knowledge of image processing is very shaky but). CNNs have made huge strides in image recognition etc, and they're wonderful and magickal, but the human mind still does all that a CNN does, in a fraction of the time and with added context and meaning on top. I look at an image of a cat and I know what a cat is. A CNN identifies an image as being the image of a cat and... that's it. It just maps a bunch of pixels to a string. And it takes the CNN a month or two to train at the cost of a few thousand dollars, it takes me a split second at the cost of a few calories.

It would take me less than a second to learn to identify a new animal, or any thing, from an image, and you wouldn't have to show me fifteen hundred different images of the same thing in different contexts, different lighting conditions or different poses. If you show me an image of an aardvark, even a bad-ish drawing of one, I'll know an aardvark when I see it _in the flesh_ with very high probability and very high confidence. Hell, case in point: I know what an aardvark is because I saw one in a Pink Panther cartoon once.

What we do when we train with huge datasets and thousands of GPUs is just wasteful; it's brute forcing and it's dumb. We're only progressing because the state of the art is primitive and we can make baby steps that look like huge strides.

>> It is dangerous to discourage research on that simplistic basis

It's more dangerous to focus all research efforts on a dead end.


It takes many months to train a human brain so that it would recognize what a cat is, far more than a few calories - and it needs not only a huge amount of data but also the ability to experiment; e.g. we have evidence that passive seeing alone, without any movement or interaction, is not sufficient for a mammal brain to learn to "see" usefully.

Your argument about classifying images trivially excludes the large amount of data and training that any human brain experiences during early childhood.


>> Your argument about classifying images trivially excludes the large amount of data and training that any human brain experiences during early childhood.

Not at all. That's exactly what I mean when I say that the way our brain does image recognition also takes into account context.

Our algorithms are pushing the limits of our computing hardware and yet they have no way to deal with the context a human toddler already has collected in his or her brain.

>> It takes many months to train a human brain so that it would recognize what a cat is, far more than a few calories

I noted it would take _me_ less than a second to learn to identify a new animal from an image. Obviously my brain is already trained, if you like: it has a context, some sort of general knowledge of the world that is still far, far from what a computer can handle.

I'm guessing you thought I was talking about something else, a toddler's brain maybe?


> just as it was becoming clear that Expert Systems were not going to yield Strong AI Real Soon Now

It's fairly clear that deep learning isn't going to yield Strong AI Real Soon Now.

If it's profitable, it will be sustainable anyway.


Yup. The major difference between deep learning and e.g. expert systems is not that it enables General AI in the near future, but that expert systems didn't really work for practical narrow, niche AI/ML tasks at a commercially usable quality, while deep learning does in many areas.


One of my favorite comic artists put it more appropriately than I could - in the form of a flow chart: http://www.smbc-comics.com/index.php?id=4122

TL;DR - once strong AI arrives, we're as good as dead. ;-) Can't argue with a flow chart!


I remember reading the famous book by McCorduck and Feigenbaum about the Japanese fifth generation project in the early 1980s. That book had a strong influence on my career, and I recall that I resolved to become an AI expert. As it happened, the Japanese fifth generation project achieved pretty much nothing.


Well, some projects fail entirely, and their people go on to do other great things based on what they learned.

Some people from Thinking Machines Corporation (founded 1983) [1] went on to build a cool parallelizable programming paradigm that even those without programming experience could utilize with relative ease. It's called Ab Initio (founded 1995) and is largely used as a Data Warehousing / ETL (Extract, Transform, Load) tool for getting data from databases, running a lot of the same operations on it, then sending it to another database.

> Thinking Machines alumni ("thunkos") helped create several parallel computing software start-ups, including Ab Initio Software, independent to this day; and Applied Parallel Technologies, which was later renamed Torrent Systems and acquired by Ascential Software, which was in turn acquired by IBM. [1]

Ab Initio has a GUI front end for code that looks like a data flow diagram. The "graphs" are converted to Korn shell (ksh) scripts that grab the data from various sources. I guess the ksh implementation made it easy to parallelize.

I always thought this was cool because this company had found a way to allow inexperienced programmers, and even some other STEM graduates with no programming experience, to write software that operated on data in parallel, which is generally considered a hard thing to do, even today in some cases, let alone in the early 2000s.

They're very secretive, so many people don't know them, but I believe they are very successful. In the late 1990s and early 2000s, they had clients like AOL and DoubleClick, which is basically where all the advertising revenue was going. All that processing and recalculating was handled by this software.

I know because my first job out of college was to use this software on Fannie Mae's loans database. That system was SUPER complicated, but working with Ab Initio made it a cakewalk compared to using, say, Java, which a previous contractor had tried and failed to do.

[1] https://en.wikipedia.org/wiki/Thinking_Machines_Corporation

[2] https://en.wikipedia.org/wiki/Ab_Initio_Software


I think it was Bruce Porter in an AI class at UT Austin who described the fifth generation project as "we need to do machine translation and planning and...so therefore we will build a Prolog machine."


Some of the decisions made were considered strange even at the time: the choice of Prolog instead of Lisp seemed to be largely down to it not being American. The main hardware they wanted to develop was dataflow, for which Prolog isn't a particularly good match.


Our bar for achieving true Artificial Intelligence is set way too high, in my opinion. This talk from CCC 2015 is very relevant: https://www.youtube.com/watch?v=DATQKB7656E


lol, yes it is. Can't believe you're being downvoted on HN. It just shows how hyped AI is. People can't handle disagreement.

This video looks good to me from the intro. I'm going to check it out. Thanks!


"That we don't know what the hell we're talking about."



