The AI Battle (aifuture.substack.com)
79 points by ajafari1 on July 18, 2022 | 128 comments



This very much requires acceptance of the worse-is-better model. That "beautiful" drawing of an astronaut riding a horse is aesthetically... crap. It may take a person 5 hours to paint or draw that, but, arguably, a person wouldn't.

GPT-3 might generate a novel, but, generally speaking, the prose is jarring and awful to read, because there is no intent (danger: philosophically loaded word) behind it. It's all automated collages and they all feel... cheap.

The world is beginning to discover that polyester isn't very good. It's bad for the skin, bad for the environment, and terrible for human health in general (microplastics generated from washing polyester give a medium for increases in algal blooms in coastal waters... etc.). It is cheap, but we're going to figure out it isn't good, and stop making or buying it (I hope).

DALL-E and GPT-3 are the polyester of design and creative construction. They're cheap; aesthetically uncomfortable. We'll turn to humans for high-quality design until we actually get AGI, and those systems are not a path to AGI.


> those systems are not a path to AGI

If that's the actual meat of your claim, it's weird to just put it unsupported at the end. If you think it's an aside you've got a big problem, because without it the rest is pretty weak.

We saw this with chess. Anybody who was clinging on to the idea that well, the machines can't really play chess, because technically there do seem to be a few humans who are better, was screwed the moment the machines began routinely beating grandmasters. Either this categorically isn't something that the machines can do, or it is, and so it's important that they can do it at all. We shouldn't expect any half measures on this.

Do you consider Rob Liefeld an artist? What do you reckon the chances are that the machine can get human anatomy right more often than Rob Liefeld? Rob is a human, so it seems like that should give him an advantage. However, unlike Rob, DALL-E has learned, by seeing lots of existing pictures of humans, that they don't look the way Rob draws them...


On "not the path to AGI": Gary Marcus on the Mindscape podcast is worth a listen.

https://www.youtube.com/watch?v=ANRnuT9nLEE&list=PLrxfgDEc2N...

Transcript: https://www.preposterousuniverse.com/podcast/2022/02/14/184-...

"...And then there’s natural language understanding and reasoning, and I would say we have not really made progress at all. GPT-3, which we may wanna talk about, gives the illusion of having natural language of understanding, but I don’t really think that it does. And we are nowhere near, for example, an all-purpose general assistant. ..."


> I don’t really think that it does

Either the quote is poorly chosen or reading this article is not worth the time.


That's a false dichotomy. Select the name Gary Marcus, right-click, and search (on my browser it defaults to DuckDuckGo, but that returns the right result).

The Mindscape Podcast is hosted by Sean Carroll. You have a very sharp quantum physicist interviewing an expert in the field of AI research.

The podcast is worth the time, and the quote is representative of an expert's take on the matter. He elaborates, but I don't need to write an essay just to argue on the internet.


For anyone who's reading the above comment, the additional context that slowmovintarget hasn't provided is that Gary Marcus supports a school of thought in AI that opposes the currently popular school of thought that favors deep neural networks. A frequently contested point is what each school thinks is the path to AGI. Marcus is a well-known figure in AI.


This argumentation is based on the "feeling" that something doesn't understand things the way humans do, and that this is sufficient to say it is inferior. It's a very old argument against AGI, and it's as boring as it ever was: it relies on human exceptionalism and on the concept of the soul (i.e. something humans have and other things can never have, which cannot be quantified or understood). It is compelling only to those who are religious or who still hold religious tendencies.


>> Anybody who was clinging on to the idea that well, the machines can't really play chess, because technically there do seem to be a few humans who are better, was screwed the moment the machines began routinely beating grandmasters.

Who was "clinging on to the idea that well, the machines can't really play chess" and when was that?


Made an account so I could mention David Levy’s infamous bet: https://en.m.wikipedia.org/wiki/David_Levy_(chess_player)#Co...

That’s the way it is with AI: first people say X is impossible for computers, then they say X is hard, then they say X doesn’t work for certain edge cases, then they say we’ve known all along that computers could do X.


Why "infamous"? In any case, Levy did win his bet when there had been no computer chess engine that could beat him in ten years after betting against McCarthy and Michie [1]. Despite that he acknowledge that chess engines had improved further than he had thought:

>> Levy wrote, "I had proved that my 1968 assessment had been correct, but on the other hand my opponent in this match was very, very much stronger than I had thought possible when I started the bet."[37] He observed that, "Now nothing would surprise me (very much)."[38]

(Edit: the quote is from your WP link.)

So that at least really does not match the pattern you mention. Levy's evaluation of chess engines pretty much held up, all the way to Kasparov's loss to Deep Blue. As far as I can tell he never said anything like "the machines can't really play chess", as the OP suggested. He just made a prediction about how much they could advance in ten years. And he was right.

I would therefore like more justification for your assertion that "that's the way it is with AI". I would also like a clarification: who are the "people" who say all those things? How representative are they of experts and researchers?

Who makes all those naive predictions about the impossibility of AI that are later proven wrong? For the time being, it seems that naive predictions can be traced more directly to luminaries of the field, like Alan Turing or Marvin Minsky [2], who predicted that AI is "just around the corner", rather than to skeptics and naysayers who say it won't happen, as is usually suggested.

And then of course, there's Rodney Brooks' dated predictions, so far standing the test of time (although that's a short time!) [3].

_______________

[1] Totally coincidentally, Donald Michie was my thesis advisor's thesis advisor, so I'm, like, his academic grandchild.

[2] See: https://web.eecs.umich.edu/~kuipers/opinions/AI-progress.htm...

"In 1958, Herbert Simon and Allen Newell wrote, “within ten years a digital computer will be the world’s chess champion”; note that this is 10 years before Levy and McCarthy's bet.

[3] https://rodneybrooks.com/predictions-scorecard-2022-january-...


Well, Levy won the bet, but just barely. More importantly, his dismissive attitude towards AI mastering chess (“[…] the idea of an electronic world champion belongs only in the pages of a science fiction book.”) was deeply shaken.


Sure, but I just think it's all a bit more nuanced than the way the OPs make it out to be. There's always a lot of speculation that goes both ways: either AI is just around the corner, or it's never gonna happen. And there are plenty of opinions and educated guesses in the middle, always.


Right on. When do you expect a productized AI that can replace your average skilled programmer?


I'm not sure what average looks like. Possibly last August? Possibly some time around 2035? It'll be a moving target though, as anyone worse than the AI isn't likely to stay in the role for long.


Here's me almost exactly six years ago, predicting "most coding and design tasks will be automated within three to five years."

https://news.ycombinator.com/item?id=11718300

If we count GitHub Copilot as the tip of the spear then I was too optimistic.

> GitHub Copilot ... was first announced by GitHub on 29 June 2021

https://en.wikipedia.org/wiki/GitHub_Copilot


You were too pessimistic! Automatic programming, a.k.a. program synthesis, has been a thing since the early days of AI and computer science. For example, the idea of deductive program synthesis, where a program is generated wholesale from a complete specification in a formal language that is not a computer language, goes back at least as far as Alonzo Church himself, in 1957:

https://en.wikipedia.org/wiki/Program_synthesis#Origin

Much work has been done since then, and inductive program synthesis from incomplete specifications consisting of input/output examples (a form of machine learning) has been possible for quite some time. This is the second time this week I've pointed to the report by Gulwani et al. for a modern overview (of both kinds):

Program Synthesis

https://www.semanticscholar.org/paper/Program-Synthesis-Gulw...
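To make "inductive synthesis from input/output examples" concrete, here is a toy sketch (my own illustration, far simpler than the systems surveyed in that report): enumerate a tiny space of candidate programs and keep the first one consistent with every example.

    # Toy enumerative synthesizer (illustrative only): search a tiny
    # hypothesis space of arithmetic programs until one is consistent
    # with all input/output examples in the incomplete specification.
    CANDIDATES = {
        "x + {c}": lambda x, c: x + c,
        "x * {c}": lambda x, c: x * c,
        "x * x + {c}": lambda x, c: x * x + c,
    }

    def synthesize(examples, max_const=10):
        """Return the first candidate program matching every example."""
        for template, f in CANDIDATES.items():
            for c in range(-max_const, max_const + 1):
                if all(f(x, c) == y for x, y in examples):
                    return template.format(c=c)
        return None

    # The "specification" is just examples; the synthesizer generalizes.
    print(synthesize([(1, 2), (2, 4), (3, 6)]))  # -> x * 2
    print(synthesize([(0, 1), (1, 2), (5, 6)]))  # -> x + 1

Real systems search vastly larger program spaces with clever pruning, but the contract is the same: examples in, program out.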

Rather than Copilot being the "tip of the spear", it is really a step back: a system that can only generate code but not check its correctness, unlike pretty much every program synthesis system since forever. Although this may actually be an advantage for its commercial application (since it makes it easier to manage expectations of the system's performance), it is not really any indication of progress, in any way, shape or form. In truth, Copilot is famous today because it is backed by a large company like Microsoft, and because earlier work is not well known by most people who get their AI news from blogs and podcasts and who are not very well aware of the history of the field. But, "tip of the spear"? Oh, no. Unless it's a very blunt, toy spear, more like a tech demo of a spear.

And this is by no means restricted to program synthesis and Copilot. For example, the first self-driving car to drive on a stretch of real road, with real traffic, without human intervention, was Ernst Dickmanns' 1995 robot car:

https://people.idsia.ch/~juergen/robotcars.html

Again: 1995. And yet, fully autonomous self-driving cars are still not here. The revolution hasn't happened and progress has only crawled marginally forward.


Dickmanns' car does Autobahn (~US freeway) traffic under controlled conditions. It doesn't need to understand junctions or traffic signals; there aren't any. It doesn't need to understand pedestrians; there aren't any. It doesn't need to understand bicycles; there aren't any. What happens if things go wrong? In practice a human takes the wheel; this is not a system capable of safely leaving traffic when it can't cope, so it was never demonstrated without what we'd today call a "safety driver". That's not autonomous self-driving cars minus a little bit more research, it's improved cruise control.

Waymo has a taxi service. It's losing money doing that, and it's only in a few select places, but it's doing autonomous journeys, with no "safety driver", on city streets. It understands junctions and traffic signals, pedestrians and cyclists, because on city streets those things are commonplace. It's doing the hard part.

I don't think that constitutes "crawled marginally forward", it's a considerable advance.


That's right, this is your field isn't it? Cheers!

Gosh, the robot car work was incredible, I can't believe I've never heard of it before!

- - - -

Copilot isn't cutting edge, but its significance is just that it's a mass-market tool. It will be interesting to see how well it does, and whether users find it worthwhile overall after a couple of years. Will it be improved to the point where it starts to compete with its users?

I've asked other programmers before: if you could write a program that replaced you, would you do it?

(One of the reasons I like Schmidhuber is that his goal, since early on, is to "Create an automatic scientist, and then retire.")

To me the ultimate effect of programming is to obviate the programmer.


Ah, my field is Inductive Logic Programming - a sub-sub-sub-sub field of program synthesis. But I need to know the basics!

I think Copilot can be used very effectively, as long as its capabilities and their limitations are communicated clearly. For instance, I think it can make a great boilerplate generator, as long as users stick to short code snippets.

Well, I don't know about replacing programmers. I think that's Sci-Fi, for the time being, and for a while longer still. What I'm more interested in is creating tools to help programmers do their job. Copilot does that already, btw, I'm not dissing it. I'm just pointing out it doesn't represent a sudden shift in capabilities, to be clear.

>> (One of the reasons I like Schmidhuber is that his goal, since early on, is to "Create an automatic scientist, and then retire.")

I didn't know Schmidhuber had said that. My thesis advisor, Stephen Muggleton, was part of an interdisciplinary team who created a robot scientist that can develop its own theories and then choose, and run, the experiments to prove them:

https://en.wikipedia.org/wiki/Robot_Scientist

Another one of those things that are not well-known, I guess. I wasn't involved with that, btw, but I think recent advances could make for a much more powerful system. I am considering something similar as a research project, post-doc.


Good lord! Sometimes I think the only thing that has prevented the techno-singularity happening already is the tendency of humanity to strenuously ignore prior art.

Cheers!


A wise old inventor told me, early in my career, that “people always significantly overestimate the rate of progress in the short run, and greatly underestimate it in the long run.”

Text-conditional image generation is still a fairly new thing; we don't know what its upper limits will prove to be. Also, it seems likely that, whatever level of capability these models reach, humans will learn how to use them effectively as tools for their own purposes. Some people might use them for quickly preflighting ideas, others to create boilerplate; some people may use them to speed up their workflow or scale up productivity. People didn't know what all the use cases for personal computers would be when they were introduced in the mid-'70s, or what capabilities they would possess decades later.


Nice! You were very close. With any exponential curve, it takes forever to get to point A, but once there, it's an immediate jump to point Z.


Cheers!

I'm going to try to work up a decent reply to your article, but it might take me a day or two.

Have you read Wendell Berry's "What are People For?" (that essay, not the whole eponymous book)?

- - - -

The (open) secret at the heart of AI is that no AI can answer: "What is good?" ( https://news.ycombinator.com/item?id=31720621 )

AKA "Lucky for whom?" Teela Brown, fictional character in Larry Niven novels

Spoiler alert!

This link gives details of the character that are spoilers: https://larryniven.fandom.com/wiki/Teela_Brown

Unfortunately those details are the point I'm trying to make, and I'm a huge Niven fan so I'm not going to spoil Ringworld, but it ties in with the (ultimately metaphysical) question, "What is good?"


The average is so bad it barely even has to produce code that works, so that's not really a bar I'm interested in seeing cleared.


AI vs average programmer, maybe 10 years.

Commercially viable AI vs skilled engineers, that’s going to take longer.

I wouldn’t be surprised if we could replace the average BA with an AI right now.


I agree that the horse picture sucks. Other people might say that they like Dall-E’s output on Twitter, for signalling purposes, but I still think they don’t really believe that.

Still, it’s only a matter of time till nicer graphics are possible. Most graphic designers and artists are still worse than this thing, especially the type that bought a $200 drawing tablet on Amazon and sells their services on Fiverr. Those people are no better than DALL-E, just like most copywriters are no better than GPT-3.

Even if they are in fact better it’s only a few years till that changes. Deepmind can train a new model faster than you can go back to grad school.


Exactly. Any content writer AI, developer AI, graphic designer AI just has to be good enough to cause an economic labor shift. There are plenty of non big tech companies that want affordable labor that can just get the job done. It doesn't need to be insanely good, although imo it will be in short order anyway.


The way I have put it to my artist friends is in the form of a litmus test: if I sat at a local farmers market selling this art, people would likely buy some of it.

If the delineation can only be made because you are an art major or a practicing artist, that is not really compelling to your own market.

We are on the third generation of these types of "AI" and they are already past the point where people would exchange cash for a print of this work. It is only a matter of time until people are using these to generate the general picture and then drawing it using their preferred medium (e.g. oil, watercolor).


Not a bad idea: use the AI to generate some sketch for the user, then the user finishes it. You might be able to teach art this way, slowly reduce the help the AI gives a student over time until they don’t need the AI anymore, or perhaps at some point they’ll realize they like coloring more than drawing and just keep asking the AI to produce sketches they can fill in with color, depth, and texture.


This time has come already. Many contemporary painters already leverage this. Some artists, like Jon Rafman, have trained their own models to generate digital imagery.

I do think this can be thought of more like a sketchbook or a camera at the end of the day, since real contemporary art collectors will not go for a print from DALL-E 2 or Midjourney so readily. Make a painting from it and, if it's a decent rendition, it will likely sell.


Nicer graphics are already possible from freely available Colab notebooks like Stable Diffusion. DALL-E has the crudest interface possible; it's more a demonstration that the idea works at all than something you could use.


And a strategic way for OpenAI to garner more than 1M businesses and people on their waitlist, as I outline in the article.


How about now? Check out the Midjourney Discord bot. You can keep on doing variations and tweaks until you get one you like.


This is the classic (and flawed) "it's gotta be perfect otherwise it's useless" argument that you see in other AI fields, such as self-driving.

You don't have to be anywhere near perfect to cause tremendous disruption. "Polyester" grade artwork will be perfectly acceptable for a lot of use cases.

Just like how self-driving will decimate the trucking workforce, all while people continue to screech about how "it can't handle snow therefore it's useless".


Polyester was hugely disruptive. I'm not saying we can ignore this, or even that it is useless.


I don't think these sorts of tools will ever be used to generate a novel or something on that scale. We're building really sexy autocomplete tools that creators will use to fill in the blanks much like the great masters of the renaissance used apprentices to do much of the work in their masterpieces. People will outline what they want, then ask the AI to fill in the blanks, and iteratively refine the result.

As long as we can miniaturize the models sufficiently that individuals and small companies can get output that's comparable to what big corporations with deep pockets can get, these sorts of AI tools have the potential to revolutionize creativity.


Modern word processors haven’t suddenly and dramatically increased the number of great books available. They save a lot of time and effort relative to a typewriter, but such drudgery isn’t the bottleneck on creativity that you’re suggesting.


I know a lot of writers who are very good at creating an outline and describing what's going on, but poor at actually sitting down and getting words on paper for any extended period of time. These same people can read and critique/edit/etc. endlessly. I think it's a fairly common problem, because writer's block is the number one topic in most forums for writers. Having a tool that takes an outline and generates a rough draft of a chapter that can be iterated on would make a huge difference for the non-Stephen Kings among us.


This is a great use case and in fact I would pay for this service. Sitting and barfing up text can be fun when I'm inspired, but I'm frequently not inspired but would still like to make progress on my stories.


It’s cheap to pay someone to create a rough draft from an outline, no need for AI. However, doing so isn’t very helpful.


Ever is an exceedingly long time. I would be shocked if 100 years from now, we didn't have AI-authored bestselling novels. Even then, they might not be literary masterpieces, but certainly AI will be able to write formulaic stuff that sells really well.

If you don't believe that, just consider where technology was 100 years ago and what the response would have been if you'd described DALL-E in its current incarnation and asked people if they thought that would ever come to pass.


I think it'll always be "human decides what book is about, cues AI, then gives feedback to AI to refine output." The cues will just need to be less specific and less carefully crafted, and the amount of feedback required will go down. Maybe eventually AI will be able to one-shot amazing novels, but they'll still need tastemakers to read the output and promote it, which isn't really much faster than a tastemaker asking for what they want directly, then reading/requesting changes.


DALL-E has already been outdone by Imagen.


Never is a long time.


You say that and yet every year billions in box office revenues go to movies that are aesthetically equivalent to the image of the astronaut riding a horse.

It also seems silly to compare billion parameter AI models to polyester and try to reduce their value to the trite "artificial is bad" argument which has plagued human thinking for centuries.


Agree, this is like those AI anime character generator game startups that YC seems to enjoy investing in so much. They think they can leverage AI face generation to "somehow" make a game/experience/show/app/nft/whatever when that is the extent of their tech. With no consideration at all of any of the actual work that goes into making anything actually valuable. But I mean, who's the idiot - the company that boasts these claims, or the ones giving them money?


>AI anime character generator game startups that YC seems to enjoy investing in

Wait, YC invests in this space? If it isn't any trouble, could you point me to a few?


Parent comment reads as if they lack the mental faculties to grasp basic English. It’s no wonder that a cursory Google search is beyond their abilities.


You might be disappointed to find out the aesthetic tastes of _most people_ :)


I like the way you are thinking, and if you are talking about a snapshot of today, that is spot on. However, these models are getting better exponentially. See the improvement since GPT-3, just 2 years ago, on this benchmark: https://paperswithcode.com/sota/multi-task-language-understa....

I hope I am wrong so that society has more time to adapt to the worker displacement and downstream policy issues.


I've read that many artists have to do dumb commissions for porn all the time. So there might be a market for "art" no human would wanna draw anyway.


> That "beautiful" drawing of an an astronaut riding a horse is aesthetically... crap

I’ve walked through several of the world’s top art museums. Most art is aesthetically crap.


> DALL-E and GPT-3 are the polyester of design and creative construction. They're cheap; aesthetically uncomfortable. We'll turn to humans for high-quality design until we actually get AGI, and those systems are not a path to AGI.

I wish this were true, because it assumes a lot about humanity living up to its highest potential in terms of QC; but the truth is we're talking about digital content creation, which has been churning out obscene amounts of data on a daily basis in the Internet era. And this means that it isn't about 'the best of the best'; it's actually a race to the bottom in terms of 'good enough' source material, which is as disposable and fleeting as its consumers' attention spans.

As a space nerd (it's in my bio) I've followed James Webb since Cassini was decommissioned/crashed and I found out that the necessary cryogenic tests to make JWST work were underway. But as impressive as I find the images we're seeing (and they are breathtaking), I'm still spending way more time on the DALL-E 2 subreddit admiring all the prompts, because I still don't have access to it myself and I like the sheer novelty of it all.

You want a closer analogue than polyester clothes? Try food: processed junk, and by extension the American diet, has made us awash in excess. Rather than creating more discerning consumers with all the options that modern chemistry, biology and ag science have to offer, shifting the market to a more sustainable and higher-quality point, we instead face the stark reality that the top two killers in most developed countries are heart disease and diabetes, both directly correlated with the overconsumption of cheap junk. Obesity remains one of the biggest threats to the overall quality and longevity of an individual's life!

I study AI and ML and I have many artist friends (I used to be a cook, so it's not a far leap), and we often discuss that this is inevitable: the inability to excel and stand out in just one medium, due to the advancement of technology.

It's not enough to be a creative who focuses just on music, or painting, or sculpture, if you want anything more than a self-funded gallery where you try to sell your pieces to the attendees (often at a loss) in order to market yourself, in the hope of a larger commission that lets you quit your day job (or at least take a hiatus). Social media has made itself the gallery of what may not be the best artists in the world, but the most prolific, and often those with the best marketing and the most followers, who stay at the top of the trending lists in your discipline.

It's something we encountered sooner in the culinary world, as food porn became ubiquitous pre-COVID, so I see it quite clearly. It wasn't enough to deliver amazing service and provide a good meal sourced from artisans and local farmers, and to grow by word of mouth; now you had to play the social media game and become influencers, or host influencers and have them 'engage' with their demographic to expand your clientele and reservations for that quarterly push. In our case it helped to be related to a certain tech mogul who dominates social media, and to ride coattails wherever possible. And even that wasn't enough, as you still had to pay off Yelp to keep bad reviews at bay in a perverse game, while gamifying Google reviews, etc. But we had tech giants buying the place out on a weekly basis, proving it works.

It's becoming such that it's less about the execution of the art or the craft itself, and more about doing N other things to remain a signal amongst all the noise, while the art becomes a secondary, possibly tertiary, part of what keeps the lights on. Techies wanted disruption, and this is what it looks like in the 21st century.

I think this may be required reading for kids just graduating HS and thinking about what to do, as it gives a sobering view of what has happened in just a short span when it comes to AI in certain, often precarious, industries [0], and how it can potentially shape culture itself, as so much of AI growth is in the surveillance economy.

0: https://www.technologyreview.com/supertopic/ai-colonialism-s...


It's not going to happen, for a very simple reason: the personal identity of the artist is gaining importance in the age of mass-produced crap (produced not just by machines, but also by other people).

(Today) consuming music or art isn't just about the work itself; it's even more about the person and culture surrounding it, and there is no culture or interesting narrative in machine-produced art. AI art is not authentic; it has no location in time or space.

We know this to be true because it's already possible to generate music that even experts can't distinguish from human creators [1], yet nobody listens to bot-Bach; everyone listens to the real one. The only people who watch bot chess competitions are AI enthusiasts, not chess fans, and so on. So I'm kind of tired of treating this as a futuristic doomsday scenario when the present has already shown that the human element in and of itself isn't replaceable, and is even increasingly the distinguishing factor. Radio stations, instead of licensing expensive music, could already simply churn out machine-produced tracks. Yet how many are doing that?

The reason is not a lack of quality (some questionable popular artists should cure everyone of that delusion); it's the fact that as soon as the listeners were made aware that the music was machine generated, they would lose all interest immediately. Because art that lacks a creator is just a reproduction; it lacks uniqueness and context.

AI bloggers should consult Walter Benjamin[2] rather than internet futurists, because thinking about the nature of mechanically reproducible art isn't a novelty of ML technology, it's been with us ever since the industrial era.

[1]https://www.openculture.com/2018/01/artificial-intelligence-...

[2]https://en.wikipedia.org/wiki/The_Work_of_Art_in_the_Age_of_...


> The only people who watch bot chess competitions are AI enthusiasts, not chess fans

Actually that’s not entirely true. Grandmasters frequently study AI openings and use them to search for new ideas to bring against their opponents.

While I generally agree with your sentiments, I see more of a synergistic future, where humans take inspiration from computers (and consequently it gets fed back into the machine).


Indeed. And the artist/creative who can guide DALL-E or GPT-3 will be as exciting as the composer who can make a room dance with music from a synthesizer...or the artist who can make compelling images with a camera.


I think the article is overly sensationalist, but I think the gist is totally realistic. For example, there's a huge chunk of design work that's not super critical and often badly done. I can imagine a simple, AI-assisted tool that creates posters, simple websites or flyers in the company's style. But it won't replace graphic designers yet, and won't for some time. I think we are not yet completely ready to create even a simple tool; there are still engineering challenges to overcome (I am not sure what the technical interface to the ML would look like; I can't imagine text is enough), but I think we might be very close to having at least something resembling a product.

I know a thing or two about machine learning... and one thing I've learned is that it's very hard to predict the future, even the immediate future. Some problems turn out to be surprisingly easy, but others we just can't solve yet, even super basic things like learning multiple tasks in a sequential but unknown order without forgetting stuff. I find it hard to predict what ML will look like in three years, except that DALL-E 3 will be much better. If we have a breakthrough on one of those fundamental problems where we struggle with the basics, then I think we will open new possibilities quite quickly. But it also might be that they require new approaches that will take years or decades to develop.


I'm not sure how the author could be so blind to the simple fact that computer generated articles and images are not useful. I mean if you want to replicate low quality blog spam and poorly written local news blurbs, maybe you could con a few companies into using your service. Or if you want to destroy the stock image market or the album art market (maybe that kind of "disruption" is somehow beneficial to you?) I guess more power to you. But beyond those very narrow and largely meaningless endeavors, I think computational statistics (what the author calls "artificial intelligence") has yet to prove itself in literally all of the use cases the author brings up.


OpenAI published an article about some of the things people are using DALL-E for:

https://openai.com/blog/dall-e-2-extending-creativity/

I thought this quote was particularly insightful:

> “Conceptualizing one’s ideas is one of the most gatekept processes in the modern world,” Kamp says. “Everyone has ideas — not everyone has access to training or encouragement enough to confidently render them. I feel empowered by the ability to creatively iterate on a feeling or idea, and I deeply believe that all people deserve that sense of empowerment.”

Images of astronauts on horses may have questionable commercial use or artistic merit, but DALL-E excels at concept art. The true value here isn’t using those images directly, it’s using them to brainstorm and test out creative ideas that can later be turned into something real (a product, a movie, art, etc.) by a human.


I don't follow any of that. If I have an idea about a new type of toaster...how does DALL-E help me by giving me a bunch of computer generated toaster images? If I want to make a painting of a solar system floating in front of a nebula, I could ask DALL-E to do it, and maybe I'll get some ideas on composition/style, but is that really such a "gatekept" process? And does it even matter? Is my art better or more meaningful after DALL-E gives me some random references?


Toasters are probably not a great example, but concept art is a big deal in most product development / creative industries. Tech, cars, Hollywood, fashion, anime, etc.

The process usually involves finding and hiring an artist and working with them as they create lots and lots of designs until you find one you like, and then more sketches until the design is refined. So you either need to have a significant amount of money to hire artists for an extended project, or the artistic ability to do it yourself.

Using DALL-E instead, you get 4 high quality images in 20 seconds. If you don't like what it comes up with, you adjust your prompt, add more detailed instructions, and keep experimenting. And anyone can do this, regardless of artistic ability, and it's free (at least for now).


That's not how DALL-E works. It doesn't understand specifics well enough to give you something usable in any of those fields. A car company is not going to use a DALL-E image as a car design. It might have a professional designer who themselves uses DALL-E to look for something interesting, but that is not "disrupting" the car concept drawing market like the author claims. DALL-E isn't capable of that. Same with tech, Hollywood, etc. For fashion, I suppose you could just create whatever weird thing DALL-E throws up, but again this isn't any kind of disruption; it's just a novelty thing.


I am not sure how you can be so derogatory yet apparently so ignorant. Have you even seen what DALL-E 2 (or Imagen) can generate? DALL-E 2 does not have to 'prove itself'; it is clearly on par with human designers in terms of quality and price. There is even a product on the market right now called Midjourney that is worse in quality than DALL-E 2, yet is growing incredibly fast and already breaks even economically.

If NLP tasks make a similar jump in quality as DALL-E 2 did when compared to DALL-E, then they too might disrupt the writing industry.


DALL-E 2 is not in any way "on par" with human designers, unless they are designers of the things I already mentioned, like album art or stock photos. Even with the latter it's not particularly good unless the subject doesn't require fine detail to look decent. Midjourney is not breaking even because it creates useful, high-quality art. It is breaking even because of the novelty of DALL-E, and I don't think it will last particularly long in the spotlight.


Midjourney != DALLE2

One has a much smaller customer base than the other and effectively amounts to, as you suggest, a hype club that finetunes open works and monetizes by giving early access to members.

The other is backed by Microsoft and currently scales to 100,000 users.

To answer your question: not everyone spent their time learning to master art and design. The idea that this stuff is trivial and devoid of value for a beginner is nonsense. It is useful anytime a coder is in need of custom assets matching an English description. Will it offend some artistic sensibilities? Probably, but have you looked around lately? Nobody cares about that anyway, unfortunately.


AI that can do creative work is scary for its implications for humanity. Are we, at best, biological AIs?

I've played around with several of them. These artwork AIs are way scarier than GPT-3 et al. because they seem to do something so creative, with such good, visible results.

I don't think the genie goes back in the bottle. I think humanity has an existential crisis on its hands.

What's the point in doing things if the AI can do it better than you? Do those things become relegated to some increasingly unpopular niche?

I just have increasing amounts of questions and fear.


> Are we, at best, biological AIs?

imho yes. But don't have an existential crisis just yet. It's a very narrow set of tasks where the AI systems do well in a way we didn't expect from computers. But it's well chosen; there are still only a few tasks where significant progress has been made. They really can't make "logical deductions" at all and we have no idea how. Everything you see is learned via a massive amount of labeled data. But imagine you are a tradesman repairing machines. We have no idea how to build a system to which you could explain the basics of how to repair a simple machine and have it understand what's going on. We can't learn from explanations. It turns out "AI" is not a single skill we have to master, but a set of orthogonal capabilities, of which we have managed to conquer a single one. This doesn't mean the next one is around the corner.

I think it's a bit like the early mechanical age, when people thought everything in the future would be mechanical (those fun retro-future posts). In my opinion we are in a similar situation. We are surprised by machines automating tasks that we thought were reserved for humans, and we just extrapolate, but it's not that simple. We have to find out exactly where to extrapolate to, and it very much looks like there are bounds to our current approach.


> We have no idea how to build a system to which you could explain the basics of how to repair a simple machine and have it understand what's going on.

90% of biological humans probably couldn't assemble IKEA furniture from the instructions either.

>I think it's a bit like the early mechanical age where people thought everything in the future will be mechanical (those fun retro-future posts).

I like this, and I'm definitely quoting it.


> They really can't make "logical deductions" at all and we have no idea how.

What exactly do you mean by this? Because this sounds like the exact opposite of the problem AI has — logic is the easy part, and has been working in machines since they were clockwork and punched cards and is the foundation for 100% of the functionality of modern computers, but natural language comprehension is only just starting to be possible now, and only at a fairly rudimentary level.


Maybe I could have phrased it differently; I'll try my best to explain. Keep in mind that this is open research; it can change quickly with breakthroughs. What you mean is strictly "following" logic, by executing code or combining axioms like in Prolog. What I want to get at is maybe better described as "reasoning": learning by thinking about stuff and combining knowledge, not by example. Our current models can't do this at all; this was all the rage of old-school, logic-based AI (but that also didn't work at all, hence the AI winter). Just think about the difference between learning to play tennis (repetition and exercise, learning from errors without much reasoning) and my IKEA furniture example, which you are expected to assemble on your first try, without guidance or repetition. It turns out that we can solve, through repetition and exercise, a lot of problems that were previously thought to have a lot to do with reasoning, like DALL-E 2 or GPT-3; this involves huge amounts of data and long training times. Is it all solvable by repetition and exercise? It doesn't look like it. The learning process is so fundamentally different that we have no idea how to build systems that learn by explanation and have the ability to "think hard about a problem". Some researchers are convinced it can be done, but we currently can't do this at all, and there's not really an indication that it is possible using our current approaches.

My personal opinion is that we now have a hammer and everything looks like a nail. I don't think everything is a nail, but surprisingly many problems are, if you phrase the problem correctly. In practice this means that if we can gather enough training data, then a lot of problems suddenly become solvable, but this is not possible for all problems. If we cannot gather enough training data, then we have a problem we just cannot solve, and there's no indication that it is solvable with current tools. The system would have to "reason" and "think hard" about the problem, and we can't do that. All those fancy things work by ever-increasing datasets. This is currently a hard limit, and I can perfectly imagine that we have just solved one of the ingredients for better AI. And just like rolling a die: if you have rolled two 6s in a row, the probability of another 6 is still 1/6. If we need another breakthrough, it can take years or decades, and just because we made one in 2012 doesn't mean the next will happen in 2022.


People run for fun, even though cars beat us for both range and speed before the end of the Ottoman Empire and the departure of Ireland from the United Kingdom.

We'll do creative things for fun too, even if/when AI convincingly bests us in such endeavours by the margin it does in chess.

I, for example, still sometimes make musical noises, even though my musical talent is so bad that I wrote a procedural generator better than me back in 2009 — https://youtu.be/depj8C21YHg and https://youtu.be/k7RM7GsGeUI won't impress anyone, but me thinking I'm worse when I try to do it myself doesn't stop me having fun.


> What's the point in doing things if the AI can do it better than you?

“countries engage in international trade even when one country's workers are more efficient at producing every single good than workers in other countries”

“Widely regarded as one of the most powerful[7] yet counter-intuitive[8] insights in economics”

https://en.wikipedia.org/wiki/Comparative_advantage
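By way of a toy worked example (numbers mine, purely illustrative): even if an AI is absolutely better at both of two tasks, opportunity costs still differ, so specialization and trade raise total output.

    # Hours needed per unit of output (illustrative numbers).
    # The AI is absolutely better at BOTH tasks than the human.
    hours = {
        "AI":    {"code": 1, "docs": 2},
        "human": {"code": 4, "docs": 5},
    }

    # Opportunity cost of one unit of code, measured in docs forgone.
    for who, h in hours.items():
        print(f"{who}: 1 unit of code costs {h['code'] / h['docs']:.2f} docs")

    # AI: 0.50 docs; human: 0.80 docs. The AI's comparative advantage
    # is in code, the human's in docs, so total output is higher when
    # the AI specializes in code and the human in docs.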


> What's the point in doing things if the AI can do it better than you?

If doing something makes you happy, that can be reason enough for doing a thing. A huge portion of people already engage in "useless" hobbies with no meaningful external outputs. And in the vast portion of those situations, experts and/or machines already exist that can do far better than a hobbyist. Yet people still have hobbies, because hobbies are fun!

Ultimately one's purpose in life can only really be defined by oneself. As human cognition/capability is increasingly shown to be wholly unremarkable, I expect more and more people to turn to lives of leisure, indulgence and self-fulfillment.

This is all contingent on AI being harnessed for our collective benefit, instead of the egos of a select few. That's the facet of AI development that keeps me up at night. AI, AGI, and eventually ASI will be human-intent amplifiers of a magnitude barely conceivable by most. There is no steady-state where things remain as they are today. We are going to either end up with a utopia, or one of a myriad of possible dystopias.


How do you know a huge portion of people engage in useless hobbies with no meaningful external outputs? I really don't even know how one would go about figuring that out.

I think I see the opposite in my part of the United States. A life without purpose leads to rampant drug use, malaise, and decay.

I think an AI on demand that can outcompete people in most of their hobbies would be an absolute game changer, and doesn't currently exist. If that kind of automation reached the average consumer, I think the vast majority of people would give up those hobbies as well.


> How do you know a huge portion of people engage in useless hobbies with no meaningful external outputs?

How many hobbyists' hobbies all meet the thread-relevant standard of usefulness with meaningful external outputs (e.g. producing creative work with comparable or greater value to people outside their social circle than hypothetical superb AI alternatives... or the abundance of writing and painting and recorded media and manufactured goods and software already out there available at little or no cost)? What proportion of the population would you say undertook literally no activities which could be described as hobbies? The difference between those two numbers is your portion of people who engage in "useless" hobbies with no meaningful output.

People still play chess badly against human opponents even though computers can play it perfectly (or with dialled down difficulty); even using computers with perfectly adequate chess playing programs installed to seek out remote human opponents. People spend hours trying to play Stairway to Heaven borderline adequately despite the fact anyone can listen to Stairway to Heaven played by Jimmy Page on demand, and has been able to for very little outlay for half a century now. Even if fans could ever be persuaded that the music the AI was generating was superior to that created by Led Zeppelin, why would people interested in playing music cease to be interested in playing music?

A hobby is something people choose to do primarily for enjoyment rather than profit; it's almost a tautology that [further] reducing the potential for profit by spamming the space with AI-generated outputs isn't going to greatly discourage people from doing it.


We still have decades, maybe even a century or two, before humans become irrelevant. We should be happy we got our lives in right before.

https://marshallbrain.com/second-intelligent-species#:~:text....


> What's the point in doing things if the AI can do it better than you?

What's the point in doing things if someone else can do it better than you?


> What's the point in doing things if the AI can do it better than you?

That's not how jobs work (comparative advantage). An AI won't replace you, simply because there's something better it could be doing than replacing you. And, no, it can't do everything at once - that would be a perpetual motion machine.


As someone working on some of this stuff:

(1) Open source models for image generation that are comparable to what DALLE2 or Imagen can spit out will be available by the end of next year. There will be nothing for OpenAI or Google to monetize.

(2) These models aren't magic. Often, to get the content generation you want, you need to fine-tune the models (a rough sketch of what that looks like follows this list). 21st-century graphic designers are well aware of AI image generation and are looking forward to streamlining animation, etc. Half of the people I know doing ML pipelines right now are actually graphic designers. In a few years, what formerly took whole studios will be able to be done by a single person. Want to make your own animated movie? With enough effort, fine-tuning, and going through the generation of millions of images, it will soon be possible.

(3) By finetuning with your own unreleased images, you will always get an image model better at making your style of art than whatever is available to the public. You can use your proprietary fine tuned model to create art you enjoy or sell.

(4) Everyone should keep in mind that 50 years ago, to make a high definition film required a studio and millions of dollars to pay production crew. We're getting to the point where a single person can do that on their phone. I don't think the future is as dark as what everyone thinks it will be, and shaking out the tedium of jobs, even art, is going to help redirect our collective brain effort to more important tasks or more beautiful works.
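For point (2), here is a minimal sketch of what fine-tuning means mechanically (assumptions mine: a toy PyTorch network standing in for a pretrained generator; real diffusion fine-tuning pipelines are considerably more involved).

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for a pretrained backbone (a real one would be loaded
    # from a checkpoint). Freeze the early layers, adapt the rest.
    model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
    for p in model[0].parameters():
        p.requires_grad = False  # keep the pretrained features fixed

    # Your own unreleased images, flattened to vectors for this toy.
    data = DataLoader(TensorDataset(torch.randn(256, 64)), batch_size=32)

    opt = optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    loss_fn = nn.MSELoss()

    for epoch in range(3):
        for (batch,) in data:
            opt.zero_grad()
            loss = loss_fn(model(batch), batch)  # toy reconstruction objective
            loss.backward()
            opt.step()

The point is just that adapting a pretrained model to your own private data is a small, cheap loop, not a from-scratch training run.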


> We're getting to the point where a single person can do that on their phone.

50 years ago we had Godard and À bout de souffle (in fact, 60 years); the man was doing his filming thing on the streets of Paris almost single-handedly. Nowadays we don't have anyone close to Godard when it comes to creativity. Yeah, the tech is there; the creativity most certainly is not.

Again, there's no AI that can give us what Eisenstein and Prokofiev did [1] with far fewer technical resources. The real creative classes of today should be fine; the problem is (as already stated above) that we have forgotten how to be really creative. Again, tech won't help us with that.

[1] https://youtu.be/IcPixaWL2Pg?t=85


> There will be nothing for OpenAI or Google to monetize.

I disagree. FB/Yandex are releasing massive text transformers to the public, but even with the model available to download you need a DGX rack to run inference. Google/OpenAI can make money by charging for access to their hardware.


If we don't implement universal basic income, we are utterly screwed.

Markets where one side is desperate are inelastic, which means that a small change in supply or demand can cause huge price swings. Usually, this works against those who are out of power; for example, oil producers can cause price spikes if they can hold a cartel together. The job market is also extremely inelastic. We might only see 10 or 20 percent of truck-driving jobs automated out of existence by 2032, but that will have devastating effects on wages.
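Back-of-envelope version of that claim (the numbers and the linear approximation are mine):

    # Elasticity e ~= (% change in quantity) / (% change in price), so
    # the price swing needed to absorb a quantity shock is roughly
    # shock / e: inelastic markets (small e) swing hard.
    def price_swing(quantity_shock_pct, elasticity):
        return quantity_shock_pct / elasticity

    print(price_swing(10, 1.0))  # elastic-ish market: ~10% wage move
    print(price_swing(10, 0.2))  # inelastic labor supply: ~50% wage move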

The idea that this only threatens low- and middle-skill workers is also absurd. Look at programming. Agile Scrum has given managers the ability to replace those expensive, fussy high-skill experts with chain gangs of far less capable people who barely qualify as software engineers. If it can happen to software engineers, it can happen to anyone, and the broad-based effects on the labor market (collapses of one industry triggering refugee crises into other ones) are going to be horrible.

Anything you do as a subordinate can and will be automated, if not entirely, at least enough to make possible a massive wage cut. Of course, that's not a bad thing. On its own, it's the opposite. The problem is that the financial penalties associated with automation invariably go to the workers, and the benefits go only to capital.

We have about ten years left on the "if ya doesn't work, ya doesn't eat" model.


Thanks for reading my deep dive, and thanks for your comment. If we zoom out to one year ago, most non-AI-savvy people would have considered it preposterous that a graphic designer's work could be automated by AI anytime soon. I believe, as you do, that we are in the same position with software development becoming automated. Most tech people consider it preposterous right now, but watch it happen in the next couple of years.

My point about highly skilled creative workers retaining their jobs is that they may have a year or two of lead time before they, too, get automated by the AI job suite.


I don't think basic income will save anything. I think it's very likely that creativity and consciousness are tied together.

We might very well be on the way to making human beings as a whole obsolete. Non-expert human labor has been devalued by robotic automation, and now human minds might be devalued by AIs.


Is there any historical precedent for a short term, big dive in the wages of a certain sector based on a 20% decline in demand for that labor? If that were the expected outcome, wouldn't one have predicted that the economy-wide declines in the workforce in the Great Recession would have sunk wages?


I don't know if the numbers match (e.g. 20%) but this is basically what happened in the Great Depression. The advent of industrial nitrogen fixation led to increased agricultural productivity, which caused food prices to decline, which seems like it should be a good thing, but led to the impoverishment of many farmers who could no longer compete.

The Great Recession did shrink wages, although for office workers it mostly produced permanent increases in work demands (Millennial nightmare jobs vs. cushy Boomer jobs which, by the way, won't be backfilled when those who hold them vacate) with wages merely flat.


Dark times ahead.

If you haven’t already made it big with crypto or startups, you are toast and will be trapped in the permanent UBI underclass. That assumes there will be a UBI at all. Combined with inflation, this will mean that only those with $5 million or more in the bank will live what still counts as a middle-class lifestyle. Those with less money will be forced into poverty. Many will turn to petty crime for survival.

I predict mass unrest, beginning in the global south and spreading to the global north. The anti-racist protests of 2020, the farmer protests in India and Europe, and the Sri Lankan anti-corruption protests all show how this will play out. The state may be forced to accept the demands of the protesters.

Only an AI security system could let the state win, again showing how very important the AI advantage is.

The resulting political turmoil may be exploitable for AI alignment purposes. Those of us who are concerned with AI alignment and automation must ally with whichever political faction wins the fight.

That will be our last chance to implement UBI, make AI alignment mandatory and buy a few more decades for humanity.

I hate AI so much, man.


Things will be shaken up, the way they were when the steam engine was invented. But there's no reason to believe we're headed for a dystopic future.

The result of this development and the underlying trend is vastly increased wealth creation. That's a good thing. You'd have to be a luddite to think otherwise.

Read e.g. https://moores.samaltman.com/, where Sam Altman (OpenAI CEO) suggests a scheme for wealth taxation that gives everyone direct ownership of the AI-created wealth and the machinery to make it.

I'm pretty confident that there would be strong democratic support for a scheme along these lines, given how few are able to participate in the highest echelons of the wealth creation in the kind of AI-driven society that will happen before AI becomes truly autonomous.


The difference is that the steam engine created jobs that went to humans.

Strong AI, if it does create jobs, does not necessarily create jobs for humans. Any jobs that it creates could also be done by AI, cutting humans out of the growth of the economy.

Now I am in fact a luddite if it means (most) humans become second class citizens and especially if there’s no transhumanist enhancement option for us to stay competitive. Even more so if I’m among the people locked in the UBI class.

I like Altman’s welfare plan. I think it’s one of the easier ways to prevent the riots and emergent fascism that I predicted in other posts.


> there's no reason to believe we're headed for a dystopic future.

Oh idk. The recent shifts in wealth distribution, the gig economy, and the cost-of-living crisis squeezing the bottom end sure feel a little dystopian to me


Because graphic designers and software engineers will have some of their work AI-assisted...there will be no more jobs for anyone and the world is going to riot?

It's alarmist nonsense like this that I expect to find on Reddit.


I do believe AI alignment and safety is a huge concern, and unfortunately things are moving so fast that there is not enough time for society to adapt.

However, if we can safeguard that, I am very optimistic about what a sufficiently advanced AI (we don't even need AGI) can do to help with climate change, medical advances, etc. Costs of goods will plummet as things become more abundant.

The question is which path are we on, the utopia or dystopia, or somewhere in between?


Even if alignment works and AI is "safe", what does that actually mean?

Frank Herbert wrote about AI in Dune: 1. "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

If a few people control superintelligent systems, they could probably control all your thinking. Would these machines run all the media, all the politics, all of society? These machines could likely persuade you into any belief or action.

2. "Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments"

These early art generating AIs will only be the beginning to superhuman visual and audio art. All music, movies, games and whatever will come might be created by AI at a superhuman level. Will they control our sense of beauty and will you find human art to be dull in comparison?

This is just some perspective on what might be wrong with an AI utopia.


We’re headed for dystopia.

We desperately need to go transhuman with Musk’s Neuralink tech, or we die as slaves.

Once AI approaches AGI it will enable a highly dangerous form of fascism. Society can be controlled top down by a dictatorship using AI to oppress whoever it doesn’t like. That means the recently impoverished masses will be at the mercy of whichever fascist or totalitarian government is in charge.

You have to do transhumanism to make yourself strong enough to resist machine fascism, or you somehow slow technological progress so that mass technological unemployment can’t happen fast enough to cause the civil-war-meets-economic-riots scenario I alluded to in my last post.

Note that I’m not saying the riots inherently lead to fascism, but that fascism may emerge as a reaction by current economic elites against the uprisings, or as a result of an extremist takeover caused by the crisis. The latter would be like the Bolshevik or Nazi takeovers in Russia or Weimar Germany.

If you slow progress down enough there also can’t be a point where machines progress past human level without humans also upgrading themselves at the same speed.

The crux of my argument is that progress in AI is dangerous because of the direct economic consequences and second, third order risks from political chaos.

If I had my way, GPT-3 wouldn’t have happened until we already had brain-computer interfaces at an economically useful level. Unfortunately, neuroscience looks way too hard for that to happen.

I would not at all be surprised if Neuralink isn’t even economically useful by the time AGI happens.

We don’t have the equivalent of the scaling hypothesis for transhumanism yet. Scaling neural networks is simply the best way to make them better, and it’s dead simple: you just throw money at the problem.
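To make "throw money at it" concrete, here is a toy sketch of the power-law shape the scaling hypothesis describes (the constants are roughly the fits reported in the Kaplan et al. 2020 scaling-laws paper; treat the numbers as purely illustrative, not a real forecast):

    # Toy version of a neural scaling law: test loss falls as a
    # smooth power of parameter count, so a bigger model is a
    # better model. Constants approximate the Kaplan et al. (2020)
    # fits and are shown only to illustrate the shape.
    def loss(n_params, n_c=8.8e13, alpha=0.076):
        return (n_c / n_params) ** alpha

    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")

The point is that the curve is smooth and predictable: spend 10x more, get a reliably lower loss, no new ideas required.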

We need something like that for whatever tech can preserve human agency i.e. Neuralink.


We are moving too fast to put in adequate safeguards unfortunately. Demis Hassabis, founder of DeepMind, recently mentioned something interesting on a Lex Fridman interview. They are going to double down on safety since the algorithm side has moved much faster than expected.

Moreover, Max Tegmark, MIT scientist and author of the famous book Life 3.0 said this on a podcast last month: "Frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety. I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it. Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff."


Do you have a link to this interview? Would be interesting to hear more about what DeepMind is currently up to.


It won’t happen because of Jevons’ paradox and comparative advantage.

If everyone was unemployed then who would buy AI produced output?


I think there's some confusion over the terms graphic design, art, and image generation, which are somewhat overlapping but distinct fields. DALL-E 2 doesn't really do graphic design, but a future version of it might.


New AI applications by OpenAI and Google are poised to take over creative jobs and much more. This will be a major disruption to jobs, the economy, and life as a whole. Here’s my take on how it will impact your job or business, your investments, and your life.


People miss the most direct application of language-model-based AI, IMO: a ton of sales and CSR jobs are fundamentally numbers games following a script with escalation paths, with the existing scripts aimed at cutting your losses because man-hours are expensive. That's exactly the kind of thing you can get a large amount of training data for (existing interactions are recorded and transcribable), and an implementation opens up a lot of value because the calculus flips: it becomes worthwhile to pursue increasingly marginal opportunities, because suddenly your time is cheap.


>>But, how big of a role will human designers have in the future as AI designers take on the lion’s share of the work? Will it take 5 hours for a designer to interface with a client for just one image? I think not. Let’s be extremely generous and say it takes 1 hour of client-tech interface time for a given image. AI just eliminated 4 out of every 5 hours of a designer’s workday. There is no way the entire gap can be filled with a worker simply taking on more assignments. If you multiply out the disruption across hundreds of thousands of designers and artists, then well, you get the picture.

If one could get graphic design work that is 20 times better for one-fifth the cost, one would commission far more graphic design work. Easily five times as much, given the number of scenarios where the cost-to-benefit ratio would suddenly make it worthwhile.

Something similar has been happening in software development for decades. New frameworks, higher-level programming languages, etc. are all ways of automating more programming, by enabling the reuse of progressively more complex pre-made functions.

The result of this automation, and commensurate increase in programming productivity, has not been programmers becoming unemployed. It has been vastly more complex programs, allowing software, and software developers, to be utilized in more domains.


First thing I thought of; it's 7 years old and more prophetic by the day: https://m.youtube.com/watch?v=7Pq-S557XQU

It's only a matter of time. Five years, twenty - we need to prepare for mass unemployment.


Any bets on when the majority of software development is automated? My estimate, based on the exponential improvement rate of AI on various benchmarks like MMLU and MATH, is within 3 years. I know that might sound crazy, but a year ago it would have been crazy to think that graphic design jobs could be automated by an AI in 2022.

Feel free to disagree and discuss why.


Pick a baseline, like 1970, and call the level of coding productivity based on its contemporary languages, processes, and tooling "0% automated". Wouldn't we already, in 2022, be well over "50% automated", perhaps even 80% or 90%? So we may indeed see some improvement in 3 years, but it would be more accurate to call that a move from 90% to 95% automation or something, not 0% to 51%.

Think about agriculture, for example: against a preindustrial baseline we are at least 98% automated now. Likely also quite high for textiles and other industrial production. It often seems that in discussions of near-future automation, the vast gains in productivity seen over the past two centuries are ignored.


> Any bets on when the majority of software development is automated? My estimate based on the exponential improvement rate of AI on various tests like MMLU and MATH is within 3 years.

An AI solution will always target bang for buck first, so _potentially_ web development or CRUD apps have the greatest probability of being automated.

Nevertheless, there is a whole zoo under "software development": bioinformatics applications, fast trading software, medical ventilators, hardware debugging with software interfacing with real-world sensors, etc.

Even if your scenario comes true, there will be so much domain-specific knowledge that you would still need some sort of business/requirements analyst plus architect to steer AI agents toward any meaningful product.

Maybe the future of software is not in the writing but in the designing for the specific domain. Which, come to think of it, is not that bad a job, and probably not that far off from what a Principal/Staff Engineer does nowadays.


You must be using an unusual definition of automated. I’d be willing to bet $10 against your 3-year estimate if you can define it more precisely. I doubt it will happen before language models achieve human-level perplexity, which should happen around 2038.

Also, can graphic design jobs really be automated by an AI in 2022? Cause I’d sure love to stop begging people to make icons for me lol


I think the biggest issue is that those language models are still very limited by the transformer context window (2k tokens, and a word usually comes out to around 3 tokens), and there is no visible improvement on this front.

If your problem or code base doesn't fit into roughly 700 words, you're out of luck.
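A quick way to see how fast the window fills up: count tokens with a BPE tokenizer. A minimal sketch using Hugging Face's GPT-2 tokenizer as a stand-in for GPT-3's (they are closely related); the file name is a hypothetical placeholder:

    # pip install transformers
    from transformers import GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    # Hypothetical source file you want the model to reason about.
    text = open("my_module.py").read()
    n_tokens = len(tokenizer.encode(text))

    # GPT-3's context window in 2022: 2048 tokens, shared between
    # the prompt and the completion.
    print(n_tokens, "tokens; fits:", n_tokens < 2048)

Run that on any nontrivial module and you'll see how little of a real code base actually fits.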


Well, what if it gets 99.9% automated, but that means each programmer can now produce like a team of 50 programmers, DevOps engineers, PMs, and QA people from today?


That's the common counterpoint, but is there 50x the demand for developers? I work in tech and I know how hard-pressed people are for developer talent, but the gap is not more than a couple-fold. I doubt companies and startups would have enough work to hire >=5x the developers. Thus, AI could be very disruptive to that job function.


A huge amount of software development is CRUD, and that will be automated very quickly, and soon.


The "AI battle" only rages for those who don't make AI; those that write AI know that battle is lost, we will not have an AGI in our lifetimes. We lack fundamental components, such as artificial comprehension. Lacking any form of comprehension, weak or not, all AI technology can ever produce is idiot savants and personality theater. It is not capable of writing original software, beyond the use of pre-composable components, because lacking comprehension it is not writing anything. At the most it can predict given a lot of examples of similar, but it still does not comprehend its own predictions.

AI is a tool for creative people to use; it cannot replace them, unless you want a really boring product, because by design AI rehashes and recombines. It does not have a creative capability in its capacity. To create requires comprehension, and lacking comprehension it is just mimicry.


What makes human behavior distinct from "personality theater"? What makes human comprehension different from advanced layers of mimicry and cross-functional adaptation of learned experiences? Is this not what AI does?


How would you falsify this hypothesis? What specific task do you think AI will be unable to do in our lifetimes because of this lacking of comprehension?


One cannot explain anything to an AI; it has no comprehension. You're talking to nothing, a brick. Lacking comprehension, using "intelligence" in the name of AI is a marketing joke. I write the stuff, at a very high level. All the "software engineer careers are doomed" crap is nonsense. Lacking comprehension, AI is an idiot savant and a damn good actor, and nothing more, because there is literally nothing inside.


So what specific task will AI definitely not be able to do in 5 years?


The majority of what people want, because people will expect to be able to explain what they want, and that is speaking to an uncomprehending wall. Lacking comprehension is an incredibly fundamental limitation.

For example: the question of yours I am answering. The act of answering it requires comprehension of the question. Lacking comprehension, only a lookup of canned answers to expected questions is possible.


Can you explain what you mean by comprehension?


A scientific foundation for artificial comprehension does not exist.

When comprehension tasks are discussed and researched within the AI field, what is meant is a statistical calculus applied to large bodies of text. The "comprehension" task is to predict follow-on text given starting text, which does not require any comprehension of the content of the text, only statistical probabilities over an extremely large training set of human-written text. This is, essentially, a misappropriation of the word comprehension by AI researchers, a common practice. The fact that this type of software development is called artificial intelligence and not applied statistical calculus is an example of how successful these appropriations have been.

Quantum computers or not, we still need some type of underlying scientific account of how comprehension itself functions.

Consider for a moment, when thinking about a yet-to-be-understood concept, what that activity of trying to understand is composed of. A person will decompose a complex concept into smaller concepts, mentally create virtual, logical versions of those smaller concepts, and then experimentally, in their imagination, perform trial combinations of the raw concepts to determine whether the complex idea can be reproduced. This process is roughly how comprehension is achieved, and it is a universal process human reasoning can apply to any situation.

Human science and technology has no such fully transferable and operational representation of raw concepts and ideas. The closest we have is software, and the form of software capable of artificial comprehension is yet to be formally defined. What we call artificial intelligence is not software capable of decomposing complex ideas into smaller concepts, because modern AI does not work with the content, the meaning, of the data it is trained on; it works with the landscape in which that data exists. Modern AI identifies the landscape in which the data associated with something of interest exists, and through the training of an algorithm that something of interest can be identified, with some level of statistical confidence. All of this can and does take place without any comprehension capacity within the software; it is all just sophisticated pattern matching. It's an idiot savant that can do but can't tell you how or why, and is not even aware it is doing what it does.

Comprehension is the recreation of an observed behavior or event, virtually, within one's imagination, with that recreation built from ingredients that, when combined in some new, unique, never-before-seen manner, reproduce the observed behavior or event. Comprehension is the process of mentally reverse-engineering reality. Modern AI has nothing capable of such a grand calculus.


tl;dr: I think this is all just an early "metaverse" enlightenment, and I look forward to a cool digital future.

1. I'm not sure about all the doom & gloom. This more or less feels like a digital revolution (I work mostly in the style-transfer space myself). While AI-generated assets are awesome/compelling/shocking, the utility hasn't so much extended into complete automation of meaningful outcomes as accelerated/enhanced digital workflows. And I think "meaningful" outcomes/digital creations will simply advance to require new skillsets.

2. A lot of engineering is still required in the applied domains. E.g., applying the real style of some modern artist in new, generated ways is rather complex and super manual (retraining models built on WikiArt, creating artist-specific datasets, etc.). Even for pure-AI creation, the best generated artwork still requires a ton of nontrivial configuration, trial and error, etc.; basically a whole new knowledge domain and skillset to acquire. CLIP and the like, paired with diffusion setups, have made things seem super simple, but a lot of work still goes into anything meaningful.

3. AI is a utility right now, and probably always will be. Adobe's integrations into tools like Photoshop have been sweet value-adds for designers (harmonization, color transfers, the big one imo being Super Resolution, and in-painting seems to be catching big steam). Software eats software: Photoshop will continue to photoshop, and new capabilities will arise as old ones become automated.


Are AIs capable of creating something really unique, or is it just a washing machine spinning on old ideas?


It can apply patterns learned in different contexts to new contexts, which has the potential to create things that are unique in a sense.


^ quite a good way of concretizing the concept of interpolating between training samples.
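A minimal sketch of what interpolating in a learned latent space looks like. The latent vectors and the decoder are hypothetical placeholders, not any particular model's API:

    import numpy as np

    # Hypothetical latent codes for two training samples, e.g.
    # embeddings produced by an image or text encoder.
    z_a = np.random.randn(512)
    z_b = np.random.randn(512)

    # Points along the line between them were never in the training
    # set, yet stay inside the learned distribution: "new" outputs
    # blended from old ones.
    for t in np.linspace(0.0, 1.0, 5):
        z = (1 - t) * z_a + t * z_b
        print(t, z[:3])  # decode(z) would render the sample here

Whether that counts as "creating something unique" is exactly the question upthread.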


These particular ML models are trained on human-generated data, which means the model is a reflection of that data.

This doesn't mean that the model can't create something original, though. There's a video on YouTube where a Linus Tech Tips graphic designer competes with DALL-E 2, and I found it interesting that the human graphic designer did exactly what some accused the AI of doing: finding images on the internet and kitbashing them to satisfy the prompt. The AI, on the other hand, could create a completely original image without any source material, even if its understanding of visual concepts is merely an "interpolation" of the training data.


> which means the model is a reflection of that data

Sorry for the image, but a deconstructed animal (here's a paw, one eye in that glass, muscle fibers sorted on that shelf) is not "what it was" anymore: it's incomparable to the original, so it's not really a reflection.

"To go to the other side" is not meaningful, it made sense in the joke and in limited other contexts.


Can an android experience fear?


Try asking it for a red sphere on top of a green cube next to a yellow cone; it falls apart instantly. Extrapolation only goes so far. It's like a Photoshop artist who gets the background and then copies elements from the internet to paste into it.


All of these models (GPT-3, DALL-E) massively infringe on copyright, and I expect them to be demolished in court.


So do the brains of all artists who have learned from and been inspired by copyrighted works.



