Superhuman: What can AI do in 30 minutes? (oneusefulthing.substack.com)
320 points by EMM_386 on March 26, 2023 | 313 comments



I'm not an AI skeptic (this stuff will change the world), but I'm not as impressed as the author.

The primary problem, which seems common to LLMs asked to do this stuff, is "very high level output" - a content smoothie, with few features that are particularly specific to the prompt. The marketing campaign in the OP is so generic, you can `s/Saturn Parable/Any other educational product/` and it's still "fine". Similarly the emails - there are one or two sentences that are product-specific, and a bunch of fluff. If I paid a marketing agency for this, I'd be very disappointed.

The LLM isn't demonstrating much more than "Generic templating ability over a large range of templates" in this instance. Whilst that's probably 50% of the jobs in the world, such jobs were already at risk of someone searching for "Basic X template" and expanding the placeholders themselves. I think I could do a similar job in 30 minutes by doing exactly that.

LLM's main wins seem to be providing a single unified entry point to all the templates in the universe. It's a "Universal UI", rather than a "Content creator". I guess I shouldn't discount the value of such a thing, once we get the "Sometimes it just lies" problem under control.

The most interesting immediate thing here is the image generation - that's pretty good, and a big saving over scraping through stock images. I suspect the demise of stock image providers to be the first palpable win for generative AIs, if the copyright question doesn't bog this whole field down.


You note that this will change the world, but then talk about GPT-4 specifically.

The leap from GPT-2 to 3 was enormous. 3 to 4 was enormous, and we're not even using the 32k context or image input yet. 4 to 5 will likely be as disruptive, if not more.

This isn’t about 4. We’re in the iPhone 1 era of LLMs. This is about what the world will look like in one or two decades. And there’s a good chance this comment might age poorly.

That’s a scary thought. I was skeptical of AI, and still am. But it seems undeniable that the world is in for a big awakening. This might be as big of a transformation to society as the introduction of microprocessors.


>This isn’t about 4. We’re in the iPhone 1 era of LLMs.

Well, on the other hand, the iPhone 14 isn't that different. Same as how a '60s car and a modern Tesla aren't that different. Evolutionarily, marginally better, yes. More convenient, yes. But nothing life-changing or necessary. Which is why some folks can even get by reverting to a dumb phone (whereas they wouldn't dream of going pre-electricity or pre-antibiotics).

Also, we were hearing the same about VR in the early 90s, and again in the mid 2010s. Still crickets.


Huh? The iPhone 1 was a toy and lots of people laughed at the users. Today a modern phone is a requirement to be a member of society. It is how I pay for things. It is needed for most of my interactions with friends/family. It is the diary of my life, and the repository of my good memories, with its near unlimited video/image storage at a quality only dreamed of when the iPhone 1 came out. Take away a person's iPhone 1 and they weren't fazed much; taking away a person's iPhone 14 is a LIFE CHANGING experience today. In 10 years, taking away your AI will be on the same level: you will function, but at a much more frustrating level.


>Huh? The iPhone 1 was a toy and lots of people laughed at the users.

Nothing "toy" about it, it was the most advanced phone on the market. The people who laughed were just the handful of idiots that would laugh because "Apple, har har har" and then go buy the same thing from another vendor. The same kind of Zune buying crowd.

>Today a modern phone is a requirement to be a member of society.

You'd be surprised.

>It is how I pay for things. It is needed for most of my interactions with friends/family. It is the diary of my life, and the repository of my good memories with its near unlimited video/image storage at a quality only dreamed of when the iPhone 1 came out.

None of those are essential, even for a 21st century lifestyle; some of those are indulgent, others are detrimental. In any case, nothing revolutionary, unless one thinks "I can pay by taking out my phone and pointing it at the gizmo at the cashier" is something far greater than "I can pay by getting out my credit card and pointing it at the gizmo at the cashier" (or, god forbid, paying cash and not being tracked).


>Nothing "toy" about it, it was the most advanced phone on the market.

In no way was the original iPhone the most advanced phone on the market. Many other smartphones before it and at the time were way more advanced in features and what they could do. What the first iPhone did was make it all easy and accessible to everybody, not just nerds. That was the killer feature which made it take over the world.


There was no usable web browsing on a phone before the iPhone. It had the most advanced browser.

There were no iPod-level music players on a phone before the iPhone. There were crappy music players you can revisit and compare.

Mail apps on phones were crap.

Messaging was crap, in tiny little screens.

Just a few things.

People reviewing and getting the iPhone at the time were wowed and thought of it as magic. It was the people who didn't have it who dismissed it out of hand, because it had a touch screen or because their unusable Windows Mobile phone had some crappy third-party software and they didn't get it. Of course all of those people got either the iPhone or an Android clone of it very soon and never looked back.


Lmao, I had phones before the iPhone, Ericssons especially, that had decent (i.e. usable) browsers, could play MP3 files, etc. And you could install apps on them over WAP, take photos/videos, etc.

iPhone couldn't take videos as people have already mentioned, couldn't install any 3rd party apps to start with (because Mr. Jobs didn't believe in it), no selfie camera, no torch.

All the iPhone did was streamline people's interaction with the phone, with a large multi-touch display and a simple, intuitive (not anymore though) operating system. They definitely improved things, but in the way that Apple usually does; wait for other companies to do the things, then take the cream of the crop, iterate/improve on them, wrap them up/lock into ecosystem (which some people like) and ship.


This is exactly my point. Everything you have mentioned was available on smartphones before the iphone, iphone just made them more accessible and easier to use. Still stands that the iphone was in no way the most advanced smartphone on the market, it just put a prettier and easier to use interface on these features which led to reviewers being 'wowed'.


I had a Sidekick, and it had a semi-reasonable browser. But clearly, the iPhone was much better.


> Nothing "toy" about it, it was the most advanced phone on the market. The people who laughed were just the handful of idiots that would laugh because "Apple, har har har"

"toy" doesn't have to mean cheap or low-tech.

The point is that at the time, a lot of people didn't really believe that phones could be that revolutionary - and laughed at the iPhone because, compared to the BlackBerry, it had next to no functionality.


Both the iPhone and the iPod arguably took a few generations to really hit their stride. I had a fairly new Treo in 2007 and I just didn't see any compelling reason to upgrade until the 3GS. I had nothing against Apple (I owned a 4G iPod). I just didn't have a compelling reason to upgrade. Verizon also probably had a better network at the time around where I lived and traveled.


This is wrong. I waited in line for mine. It was quite clearly a toy. It was a cool but barely usable tech demo and it was completely outclassed in features and usefulness by contemporary devices like the Motorola Q.

It showed the way forward, but it was a frustratingly limited device and everyone around at the time recognized that immediately.


> This is wrong.

Can we just accept that these are opinions? I also waited in line for the first iPhone, and it was by far better than any other phone I owned at the time. True, I was not a "CrackBerry" addict as was common for a certain class of worker in the 00s, but the ability to browse the "real" web in a way that was not completely hobbled was just night and day better than other phones at the time.


I agree that the browser was revolutionary - but it was only really useable at home, on my wifi. The 2.5G wireless hobbled it so badly I never used it for anything more than basic info lookups. Web apps were just too painful to use and I could rarely complete an entire transaction with all the loading and back and forth. In the pre-app store era, that really mattered since the web was the only way to get anything done.


Agree with everything you've written, but that's why I said it was an opinion as to how important this was.

At that point around 2007, the vast majority of the time I was in range of WiFi: at home, at work, or at a place with public WiFi like a coffee shop/library. Totally agree that the 2.5G made everything super slow, but honestly, in retrospect, that almost seems like a feature vs a bug. I would only pull out my phone on a cell connection for very targeted actions, e.g. pulling up maps, looking for phone numbers or business hours of operation, sending/reading email (as email was a batch operation, the slow connection didn't have too much of an impact), etc. Point being that since it was a "costly" endeavor, I would only use it for things I was really intentional about. Versus now, when I'll pull out my phone at the slightest twinge of boredom and scroll, scroll, scroll through HN, Facebook, etc.


The first iPhone was 2G when Europe had had 3G since 2003. It didn't have copy and paste. It did have a touchscreen that no other phone had. It was basically an iPod touch with a phone and a 2G modem. My Nokia N70 was a better phone. I waited 4 years to buy a phone with a touchscreen; when I thought they were mature enough, I bought a Samsung S2.


>It didn't have copy and paste.

Most people in 2006, just before the iPhone came out, didn't have copy and paste either. They still typed T9 style like it was 1996.


Exactly. iPhone was a toy, 20th century internet was a toy, electricity was a toy.

On the other hand nuclear fusion, self-driving cars, and bitcoin were the things to change the world as we know it in the next decade or so.

Things that change the world tend to be hard to recognize as such when we first see them.


I feel like my life would be less convenient without my phone, but I'd also probably be happier. So idk. There are easy alternatives, like using the website on a laptop; you just can't pay your bills on the train without a smartphone.

I’m actually getting rid of the cell-phone plan on my iPhone, keeping it as WiFi only, and getting a dumb phone for calls. It may suck but I’m trying it as a 6 month experiment, so we’ll see!


I used an iPad the same way for a couple of years with no problems. I have internet with Shaw and they have city-wide hotspots, so I really could get internet access basically anywhere in town. Now there are lots of VoIP phone providers; you can easily get a number and use it to talk with people.


The first iPhone didn’t even take video out of the box. That is how I learned about jailbreaking because my phone company sold me video messaging on my plan and the phone didn’t take videos! Lol. But if you jailbroke the phone you could get cycorder from Cydia and take videos.

Yes there were other “smart” phones at the time but it truly felt like social media blew up in size with the introduction of the iPhone. And that was revolutionary.


> 3 to 4 was enormous... This isn’t about 4. We’re in the iPhone 1 era of LLMs.

GPT3 is great, but I can't reasonably say that 4 is such a huge advance over 3 in my experience so far. Apparently it's better at some things according to the marketing, but for actual usage I can't qualitatively say 4 is an "enormous" advance over 3. It seems to face the same major shortcomings, and it produces qualitatively the same results.

That brings me to the iPhone bit. Yes, the iPhone was a huge advance, but today, looking at an iPhone 14, it largely has the same form/function/features as early iPhones. If you extrapolated the trajectory of the early iPhones, you'd conclude that in 2023 they would be 1mm thick and transparent with a holodisplay or something. But instead, in the year 2023, my iPhone 14 looks and functions largely like my old iPhone 4. I mean, it does more stuff better, but I'm still using it to browse the net, text, take pictures, and use the maps app -- the same stuff that made the original iPhone revolutionary.


This sentiment pops up with most somewhat-new technology, but in my experience the plateaus come quickly. Going with the iPhone: the first was transformative, and it continued to improve, but the functional difference between the iPhone 4 and the iPhone 14 is not that great. Think of the wow factor of showing one or the other to someone from 100 years ago.


The entrenchment of smartphones in society dramatically increased between iPhone 4 and 14. Technical capability is just one axis.

Still, I think LLMs are different than phones in terms of scaling. Faster processor speeds don't necessarily result in more user value for phones, but scaling up LLMs seems to predictably improve performance/accuracy.


>scaling up LLMs seems to predictably improve performance/accuracy.

Isn't this always the case before hitting diminishing returns?


There are no signs of diminishing returns just yet though, and no one knows if that will be at GPT-5 or GPT-5000. I suspect the performance will keep increasing drastically at least until we have a model that's been trained with essentially all available text, video and audio data. Who knows what will happen after we have something that's been trained on all of YouTube. After this maybe we (or an AI) will have figured out how to keep improving without any more data.


>There are no signs of diminishing returns just yet though

Sure about that? GPT-4 doesn't seem 5 times better than 3, much less 10x, despite having 5-10x the parameters.


Yeah it is: GPT-3.5 scored around the 10th percentile on the bar exam, GPT-4 scored around the 90th, and it's much better at math. Plus, having 4x the context alone gives it much more power.

It's just that it's different in capabilities. ChatGPT delivers different results, and both have unique characteristics.

GPT-4 being able to not only take images as input but also decipher what's in them is another huge advancement.

Gen-2, another AI, can create amazing videos from a text prompt. Any director or filmmaker wannabe with more prowess at creating the story than filming it can now just use AI to create the film from their vision.

Even more exciting is the speed at which things are progressing. It was supposed to take 8 years to get ChatGPT-quality training down to a ~$400k price instead of millions; Stanford did it in 6 weeks with LLaMA and Alpaca, fine-tuned for under $600, and it can run (slowly) on home PCs.


>but scaling up LLMs seem to predictability improve performance/accuracy.

what exactly is performance/accuracy in slogan generation?


Analyzing thousands of trends, both industry/niche specific and society wide. Tracking campaigns that work by monitoring social media likes, references to past slogans, etc. Potentially dedicating thousands of years worth of brain power and analysis to the coffee shop down the street's new slogan.


gpt does a lot more than slogans...

Currently using it like driving a junior programmer.

After GPT has written some functions to my specs in natural language, I can say, for example:

- "add unit tests" - it writes tests for all the functions. Not perfect, but not bad for an instruction that short.
- "rewrite x to include y", etc.
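
For flavor, a minimal hypothetical sketch of that exchange - a small function written to a natural-language spec, and the kind of pytest-style test GPT tends to come back with after "add unit tests" (both the function and the test are my own illustration, not actual GPT output):

```python
# A function GPT might produce from a natural-language spec:
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The kind of test it returns after the instruction "add unit tests";
# run with pytest.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces  ") == "extra-spaces"
    assert slugify("") == ""
```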


The original post way back was talking about marketing; they were underwhelmed. I recently generated some slogans. They sucked.

When someone mentions predictability/accuracy, how does that apply to marketing slogans? I know how it applies to writing unit tests. The unit-test writing comes pretty close to the original poster's description of GPT as filling out templates. The sucky slogans I got were also very template-like.

Would accuracy be if the slogans did not suck?

At any rate, there seem to be a lot of things people want to use it for where the terms accuracy/predictability don't make much sense. So making claims based on those qualities naturally causes me to ask how they apply to all these cases - such as slogan generation, where accuracy and predictability are not normally metrics that apply.


I think it's an open question how much better the LLMs will get. However, we should expect adoption and integration to radically transform their usage. Who knows what the Google Maps of LLMs will be.


It’s interesting how well your description follows the Gartner Hype Cycle, but applied to products instead of hype itself:

https://en.wikipedia.org/wiki/Gartner_hype_cycle


Wasn't the leap from GPT-2 to GPT-3 bigger than from GPT-3 to GPT-4?

Like self-driving cars, the leaps were enormous until they hit a wall, and we still don't have full self-driving.


The first few days I didn't think there was much difference, but after using GPT-4 a lot I think the leap is huge. For things where I would previously use Stack Overflow or some other Stack Exchange, I now use GPT-4 and get a working answer almost every single time, with GPT-3.5 the frequency of working answers was much smaller.


If someone were to ask me (which nobody ever does), GPT-4 was more about cashing in on the hype train than pushing the boundaries.

A little better but a lot “safer” to cut down on the articles on how it’s trying to steal someone’s man.


You are working under an assumption that this tech is an O(n) or better computational regime.

Ask ChatGPT: “Assume the perspective of an expert in CS and Deep Learning. What are the scaling characteristic (use LLMs and Transformer models if you need to be specific) of deep learning ? Expect answer in terms of Big O notation. Tabulate results in two rows, respectively “training” and “inference”. For columns, provide scaling characteristic for CPU, IO, Network, Disk Space, and time. ”

This should get you big Os for n being the size of input (i.e. context size). You can then ask for follow up with n being the model size.

Spoiler, the best scaling number in that entire estimate set is quadratic. Be “scared” when a breakthrough in model architecture and pipeline gets near linear.
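
For intuition on where that quadratic comes from, here is a minimal numpy sketch of self-attention (my illustration, with arbitrary dimensions): the score matrix is n x n in the context length, so compute and memory grow quadratically with n.

```python
import numpy as np

def self_attention(Q, K, V):
    # The (n, n) score matrix is the quadratic bottleneck:
    # every token attends to every other token.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 2048, 64  # context length, head dimension (illustrative values)
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = self_attention(Q, K, V)  # intermediate scores: n*n floats, i.e. O(n^2)
```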


I have a sinking suspicion we're not in the iPhone era of LLMs, we're in the "in mice" era of LLMs. I can't elucidate why, but this strikes me as the sort of thing that could either blow up (like the iPhone) or fizzle out (like Theranos), or blow up and then fizzle out (like crypto). And it's too early to know yet which it'll be. Hackernews is making lots of change-everything predictions, but Hackernews was like that about Theranos, crypto, and self-driving cars too.


I wasn't impressed or motivated by the original iPhone when it was new.

I don't know about the future, but by analogy with the past I would say that GPT-3 was the original iPhone (neat tech demo but I didn't really care), ChatGPT is the iPhone 3G, and GPT-4 is the 3GS.

Looking at the sales graphs on Wikipedia (Q1 2012) I think it took until the 4S to transition from "the next big thing" to "the big thing".

Analogies only rhyme rather than replicate, so don't assume GPT needs exactly two more versions to do the same; might be more, might be less, and Uncanny Valley might trigger a Butlerian Jihad at the last possible moment before fully-general AGI.


I would have said the 3GS but, yeah, sometime around then.

Similarly, the original iPod was a not obviously remarkable pocket music player in a fairly crowded field.


Decreasing marginal returns though.

We were a bit blown away with 'Siri' - I mean, it could understand what you said, and 'get the weather'.

I think we're going to start feeling the limits of this soon.

It will be pervasive though.


> one or two decades

More like one or two years at this rate.


Yet the decades will come.

I think progress is sigmoidal rather than exponential, and it’s very hard to tell the difference in the early stages. But even sigmoidal progress with smartphones was enough to completely upend online society. We adapted, of course, but it looks nothing like it did in 2003. We’re all still using the internet; that’s basically it.

Point is, it could slow down, assuming that AGI isn’t waiting like a cat in a corner. But it’ll still displace a tremendous amount of intellectual work.


Remember the crypto hype cycle?


Except crypto lacked utility for most people, even early on.

I've spent the last couple of days creating python scripts to automate parts of my business. I'm not a developer (though technical enough to help point GPT in the right direction sometimes when it's getting stuck on problems) and have written <100 lines of python in my life.

I'm using image generation AI regularly to create images for my marketing emails, and when I've got writer's block it helps with the text too.

Right now the iPhone 1 is a great analogy - it was cool, but it was really subpar for using a lot of the internet, because most of the web wasn't mobile-optimized. GPT takes some coaxing to get it where you want, like you had to do a lot of pinching to zoom in on websites on your phone. In a few generations, this is going to be as seamless to use as the iPhone 5 was compared to the first gen.


> Except crypto lacked utility for most people, even early on.

Every crypto person said the opposite. They said exactly what ChatGPT-hype people are saying now.


chatgpt has already provided more value to users than all crypto combined


not if you're a teenager who is into using banned substances on the weekend


When I was a teenager, back in the 80s, it didn’t take magic internet money to buy drugs.

In fact, every drug buy I ever heard of (second hand, of course) involved fiat currencies.


You are no longer a teenager in the 80s my friend.


Source?


What sort of source are you looking for? What would satisfy your question?


A source that quantifies value with cited examples and statistics. I ask because the crypto crowd argue there's utility and value there too.


And how is that going? Died down? It's really amusing when I encounter tech-savvy individuals who think crypto is hype; little do they realize it's stronger than ever and central bankers are becoming increasingly concerned.


I invoked the crypto hype cycle, not crypto itself. Don't conflate the two topics of discussion.


Understood. However I’d still argue there is a reason for the hype and I believe that is the case with the GPT LLMs also.


Concerned the scams and fraud might bleed over into the real economy.


https://www.coindesk.com/policy/2023/03/24/federal-reserve-s...

I don’t think that’s what they are actually worried about. I would also like to point out that the biggest scams, FTX for example, are simply traditional Ponzi schemes with a crypto front, they have all been executed entirely using regular banking systems and due to the incompetence of those regulators. Bitcoin itself is rock solid and constantly gaining users and influence.


Microprocessors? Geoffrey Hinton (I think that's his first name), the godfather of AI, recently said this is like the invention of the wheel, agriculture, or electricity.

I.e., even more foundational for everything that's coming in the future. AI will be as essential as electricity.


I would like to follow up on "Universal UI": with Microsoft including this stuff in Office 365, it will kill all kinds of apps/plugins.

This is huge and as a software developer I am now not worried that GPT or AI will write code instead of me.

Big change will be that big companies/small companies/average people won't need as many applications anymore. Software devs, judging by various comments, fixate too much on "AI writing code", whereas the real threat is that lots of code will never be needed anymore.


> the real threat is that lots of code will never be needed anymore.

That's a very good point.

Also, I am working in a very small team, developing a free app, for a nonprofit.

I will be suggesting to our CEO that he consider ways to use AI to multiply the various things we need to do in order to polish and launch the app.

We have a tiny, part-time team (except for Yours Truly), so there's a ton of "polishing the fenders" stuff that takes forever. I will suggest that he consider using ChatGPT (or some of the other engines) to do some of this work.


Why not just... Use ChatGPT and get the work done


If you think so. It seems many people think this.

Time will tell if this PoV is valid. I can tell you that a flashy, sexy demo is not the same thing as shipping code.

A number of comments state that the quality of the output is fairly sparse, and amateurish, but this was also a very fast, thirty-minute demo of a marketing workflow, subjected to basic AI tools.

This article was the equivalent of those "Write an app in two hours" seminar/bootcamps.

Valid, but also constrained by the need to teach, and to get done within a certain amount of time. Very strict guardrails, and keep your hands inside the car at all times.

I have taken many, many of these courses, and have given a few. I'm quite aware of the difference between what we produce in a class, and what I'd hand to a customer.

What I think we'll be seeing, quite soon, is "one-person shops," acting as studios/agencies that will take on jobs normally done by large shops.

Like bootcamp grads that go out thinking that they can now deliver a full-fat app to customers, many will fail.

But some will succeed. Lots of smart, hungry people, out there.

We'll look at what can be done with these tools (which, I should add, are still very much in their infancy. You ain't seen nuthin', yet). I don't think they'll be able to write the deliverables, yet, but that's OK. I think we may be able to leverage them to make those deliverables much more polished and robust.


I mean, if the work could get done without ChatGPT, then it's not getting done any magnitude faster with ChatGPT; but it may help reduce the intermittent brain farts, by letting you ask about more than Stack Overflow has results for.


Go create a “system” with GPT. You’re going to see a ton of, “I’m sorry, you’re right, the SQL statement is referencing a column that doesn’t exist.” Etc…

Right now, it’s amazing for getting some boilerplate very quickly (so is create-react-app, etc).

It’s bad at context as the problem grows and very bad at subtle nuances.

Working with GPT today is like having a super fast and somewhat sloppy developer sitting next to you.

“Shipping” anything it creates means a LOT of review to make sure no false assumptions are present.

I have been “writing code” with it nonstop for weeks now.

Yes, it’s incredible, but it also has serious limitations (at least for now).


I wonder if there is a way to get ChatGPT to check its own work. It has been useful as a method of finding new literature for science, but the occasional completely made-up references can be frustrating.


You can ask it to check its work, or to do the same task three times and compare them.

But these error checks still have similar errors and hallucinations to the basic output, from my personal experience

It’s not obvious that this recycling refines the output

Try this for yourself
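
One minimal way to "try this for yourself" through the API rather than the chat UI - a sketch using the early-2023 openai Python client (assumes openai.api_key is set; n=3 requests three independent completions of the same task):

```python
import openai  # pip install openai (the 0.27-era client)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "List three key papers on topic X, with full citations."}],
    n=3,             # three independent attempts at the same task
    temperature=0.7,
)
answers = [choice.message.content for choice in resp.choices]
# Cross-check: a reference appearing in all three runs is less likely to be
# hallucinated -- though, as noted above, correlated errors can survive this.
```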


> Go create a “system” with GPT. You’re going to see a ton of, “I’m sorry, you’re right, the SQL statement is referencing a column that doesn’t exist.” Etc…

So, you don’t mean “create a ‘system’”, you mean use the UI to talk with ChatGPT about creating a system, rather than using the API and connecting it to tools so it can build the system, verify its behavior, and get feedback that way rather than through conversation with a human user?


I don’t see a difference regarding the work required. If the results are coming from a chat interface or an API, the same problems exist.

There aren’t any tools that I know of that can validate that GPT has correctly interpreted the prompt without any problems related to subtle (or overt) misunderstandings.

This being the case, there’s a lot of back and forth and careful validation necessary before anything ships.


that was actually my point; it's not like you'd ask your CEO for permission to do work that was supplemented with StackOverflow; so just...do the thing that needs to get done, using the sources required to get'r'done


Don't forget the CEO might ask who will fix the bugs in the app...


Some people have made a career out of being good at reading, debugging, and fixing complex incoherent code that was written by other people. I imagine those will thrive in the near future.


I suspect that AI will become fairly good at bug-testing and fixing.

I would not be surprised to see AI testing and diagnostics, integrated into IDEs.

For example, UI testing. Right now, it's next to worthless, as it's basically scripting and screengrab analysis.

An AI tester can do a much better job of simulating a user, and analyzing the behavior of the app. Of course, it will be a real skill to set up the boundaries and heuristics for the testing, but it could be very cool.

I suspect that AI will also find a place in security; both in hardening and red-team testing, and in blackhat probing.


You're missing a huge market that just opened up. Writing "plugins" for ChatGPT. Given an API GPT-4 can now use it to complete various tasks. They've shown a demo of it using a search command and a calc command but there is no limit to what these could be. Better dust off those CLI skills since you'll mostly be dealing with text input and output.
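
To make that concrete: a plugin is essentially an ordinary HTTP API plus a machine-readable description the model reads to decide when and how to call it. Here's a hedged sketch of a toy backend (the endpoint names and logic are my invention; FastAPI serves a generated OpenAPI spec at /openapi.json, which is the kind of description the plugin system consumes):

```python
from fastapi import FastAPI

app = FastAPI(title="Todo Plugin")  # hypothetical plugin backend
TODOS: list[str] = []

@app.get("/todos")
def list_todos() -> list[str]:
    """Return all todo items. The docstring doubles as the endpoint
    description the model sees in the generated OpenAPI spec."""
    return TODOS

@app.post("/todos")
def add_todo(item: str) -> dict:
    """Add a todo item and report the new count."""
    TODOS.append(item)
    return {"ok": True, "count": len(TODOS)}
```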


Not sure CLI skills are relevant; it seems like OpenAI is pushing for JSON REST APIs. Maybe because that's what GPT-4 has seen more of.


> OpenAI is pushing for JSON REST APIs

Which is probably one of the easiest types of code to autogenerate.

In fact, we already have tools to generate APIs from a model. And a model could be produced by AI given human (language) inputs.


Do not underestimate the enormous amount of dysfunctional logic in the non-dev population. You don't code what they ask, you code what they need. That doesn't change with AI; it gets worse.


LinkedIn has started pushing generated content it would like me to edit; I think they said they use ChatGPT. All of the content is "okay" but there's no depth; the handful of articles I read each had a section that repeated an earlier idea with different buzzwords.

I agree, it's impressive how it can generate readable text that provides an overview of an idea. But the overview misses key points, or highlights things that aren't really central. For a lot of things, doing something simple like reading a Wikipedia page is likely more productive.


That's pretty much what I've found when I've used it to try writing articles. They're mostly not wrong. But they lack nuance, examples, links, research references, quotes, depth, etc. They're generally shallow and formulaic. Might I consider using one as a stub to get me started? Sure. On the other hand, that means I need to deliberately break out from the formula. I'm not sure how much time it would save me at the end of the day, but I may give it a try on something real one of these days.


I'm looking forward to people being hired to edit ChatGPT output... and watching them prompt ChatGPT itself, or another AI model, to do their work for them.


I mean you say that but this is the kind of shit work that 99% of the population does.

Look at the reddit UI, do you really think that it’s better than something GPT could toss out in 10 minutes?


Isn't Reddit's UI hostile by design?


> Look at the reddit UI, do you really think that it’s better than something GPT could toss out in 10 minutes?

Is this some kind of a joke? I'm pretty sure whole Reddit's UI team can't be replaced by GPT.


Whether or not they could, I'm pretty sure they should.


These are different statements.


yes! I doubt ChatGPT in its current form can run A/B tests to make a design that meets Reddit's goals


Looking at their downtime manifesto from a few days ago - ChatGPT would probably spit out a better infra design in like 5m…


The most immediate practical result might be that online product reviews written by AI will be indistinguishable from those written by humans, essentially making online reviews useless to consumers as a source of reliably independent information about various products and services.

On the other hand, perhaps AI could help with due diligence types of inquiries from an independent standpoint? A real-time online AI research assistant with web scraping capabilities would be interesting.


That's not completely correct. The review world splits roughly into two broad categories: the "expert person" variant (think fro-knows-photo) and marketplace reviews (the reviews you find on Amazon, etc).

Marketplace reviews can be guarded reasonably well by requiring that the reviewing account actually purchased the same SKU.*

Expert-person reviews are based on trust. There is a destructive correlation between spamming AI reviews and building a valuable brand as an expert person. So you cannot really do a hard play on AI here; maybe a soft one, but that would arguably just be "going with the times".

* Some might ask "why does Amazon still have issues with review spam?" The answer is that Amazon actually has a pretty firm grip on it; nonetheless, the marketplace's incentives seem to be such that some slippage is acceptable/helpful for them.


> The primary problem, which seems common to LLMs asked to do this stuff, is "very high level output" - a content smoothie, with few features that are particularly specific to the prompt. The marketing campaign in the OP is so generic, you can `s/Saturn Parable/Any other educational product` and it's still "fine". Similarly the emails - there are 1 to 2 sentences that are product specific, and a bunch of fluff. If I paid a marketing agency for this, I'd be very disappointed.

a) no-one's telling you to just throw the AI output up onto a website unedited; b) does it not give you at least a bit of pause how quickly this is advancing right now?


> no-one's telling you to just throw the AI output up on to a website unedited

Is there a genuine problem that we're solving here?

"Quickly and cheaply create a large volume of mediocre content" will definitely appeal to certain entrepreneurial types, but were we actually short of mediocre content? What genuine problem are we solving?

Apart from a further lowering of the bar for certain entrepreneurial types to get rich(er) faster, that is.


quantity has a quality all its own.


[flagged]


> The article you're commenting on is only a very surface level use of AI tools.

This really is the problem.

Where are the examples of people spending actual, considerable amounts of time using these tools to generate meaningful content? Why are articles like this one being upvoted instead?

If what you say is true, shouldn’t we be seeing a flood of amazing stuff created using this tech? Where is it all?

Serious question. I can’t find any.

Where’s “awesome made by gpt”?

Every search result is this kind of low bar “I spent zero effort and got some kind of mediocre output and was amazed!”

The effort to create content like this has dropped to zero (and let's be real, spending 30 minutes using GPT and documenting the results is basically zero effort), so I'm frankly pretty bummed to see it rising to the top of HN.


On the one hand, GPT-4 has only been out for two weeks. Give it time.

On the other hand, if someone is going to claim that GPT-4 can make awesome things all by itself, instead of just toy demos and mediocre content - then they should be prepared to defend their claim with examples. Otherwise it’s just speculation.


Surely, in two weeks you could devote more than 30 minutes to the effort, right?

Surely?


When's the last time you built, shipped, and wrote an article about an amazing product in less than 2 weeks? 4 months? That's how new these technologies are, and you'd have had to be using them and building a product with them from day 1 for those timelines to pan out. Give it a year and there'll be so many articles you'll be using GPT-4 to summarize them so you can read them all.


I think it might be like makeup and plastic surgery. When it’s done well, you don’t notice it.

I was talking to someone the other day who maintains the social media presence for a motorbike repair shop. She’s already using chatgpt to help her “simplify” the language she posts on Facebook to their customers. Another person I know is an academic. She just busted a dozen or so of her students for using chatgpt to cheat at a university assignment. (She caught students cheating by checking the references - chatgpt hallucinated a lot of fake papers).

I wonder how much content from chatgpt we’re already consuming regularly. How many comments here on HN or articles are written by it? When it’s done well, how would you know?


Thanks for sharing this. It’s exactly what I was curious about, how to get in the head of a good prompt engineer.

Because I've been playing with it and in some areas been blown away, and in others (like helping me organize my week) underwhelmed.

But I know where ever I’ve been underwhelmed, it’s extremely likely I just gave it insufficient input.

Using this, I want it to help organize my day, week, month.


This is just a get rich quick scheme advert, flagged.


A fair-sized pause, sure. But if the argument is "Don't throw it up unedited", and what it provides me is bare-bones generic/junior stuff, I'm not sure of the huge win at this point in time. The world wasn't short of "generic low-grade copy" templates before LLMs. It just saves a few steps in the copy/paste.

Of course, GPT5…


>Of course, GPT5…

I'm not sure GPT-5 will feel appreciably different on this type of task, necessarily. GPT-4 feels a lot like GPT-3 for a pretty wide variety of things; it's when you get to higher-complexity tasks that you start to see differences.


>If I paid a marketing agency for this, I'd be very disappointed.

You'd be. Most people wouldn't even notice or care, whether it's the ones paying the marketing agency, or the ones being shown the content as potential customers.


One little wrinkle I will add to your point (which is a very good one, just so we're clear) is that ChatGPT does away with needing to know how to ask your questions, where to find your answers, how to implement them, etc.

I can literally tell it "write me basic code to do 'x,' now explain how to add it to my squarespace site." In many cases it will just spit out exactly what to do. As we all know, part of knowing how to do your job well is knowing how to find answers. In some ways ChatGPT makes that step even easier. At least in the use cases I have found so far.


> If I paid a marketing agency for this, I'd be very disappointed.

A marketing agency would have spent at least a few weeks gathering specific information about your company; the article allotted two minutes.

How many more minutes would you need to devote to giving ChatGPT more specific information before it could match the marketing agency's output? Not weeks surely. What's the cost difference in both time and money? Multiple orders of magnitude.


> If I paid a marketing agency for this, I'd be very disappointed.

But you didn't. You paid only a few cents. You're probably not using it for a million-dollar ad campaign, but many people could use it to make their communications look better in less time. Same thing as using a stock PowerPoint template instead of paying an agency to create one. Good enough for most of us.


>I guess I shouldn't discount the value of such a thing, once we get the "Sometimes it just lies" problem under control.

Considering the model doesn't "think" or understand abstract concepts, could we ever expect this?


More training data and token lengths seem to help, given how GPT-4 scores better on a lot of standardized tests than 3 and 3.5 do.

We don’t necessarily need to teach it not to lie, but just to improve accuracy through better training and training data. It (probably) won’t ever be 100% reliable, but what is? Google searches can be inaccurate, same with Wikipedia and other encyclopedias.


The model does think but only when you tell it to think out loud.

This is less a weird quirk of the training data or a One Weird Trick That Makes Your Matrices Sentient, and more a limitation of the model architecture. Neural networks do not have the capability to implement 'for loops'; the only looping construct is the process that runs the model repeatedly on each token. When you tell the model to "think out loud", you're telling it to use prior tokens as for-loop state.

Another limitation is that the model can't backtrack. That is, if it says something wrong, that lie is now set in stone and it can't jump back and correct it, so you get confidently wrong behavior. I have to wonder if you could just tell the model to pretend it has a backspace button, so that it could still see the wrong data and avoid the pitfalls it dropped into before.
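
A quick sketch of what "think out loud" looks like in practice (early-2023 openai client, assumes an API key is set; the question and wording are my own example). The only difference between the two calls is that the second gives the model room to accumulate scratch state in its own output tokens:

```python
import openai  # pip install openai (the 0.27-era client)

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

q = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
     "than the ball. How much does the ball cost?")
direct = ask(q)                                  # must answer in one shot
stepwise = ask(q + "\nThink out loud, step by step, before answering.")
# In the second call, earlier reasoning tokens feed back in as context --
# the outer token loop acting as the model's only 'for loop'.
```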


I have yet to hear a definition of "think" or "understand" for which this is true.


This is where I landed too.

GPT-generated "ideas" strike me as the product of a swipe file - only instead of being full of persuasive, high-performing marketing materials… it's just fast and prolific.


I don't think "templates" is a good descriptor to use here, since the number of required templates is beyond any reasonable number. Just taking 10,000 words, there are 10^40 possible ten-word sentences. A ridiculous percentage of those are nonsense, but even if the sensible fraction is only 0.000000000000000001%, that leaves 10^20 sentences to template for, which no modern computer can accommodate.

I'm not sure what a better metaphor is; each thing I consider ends up defaulting back to what a ML model actually is, without really "metaphoring" at all. But not templates.
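
The arithmetic checks out, for what it's worth - a tiny Python sanity check:

```python
words = 10_000
sentences = words ** 10            # ordered ten-word sequences
assert sentences == 10 ** 40
sensible = sentences // 10 ** 20   # keep 0.000000000000000001% of them
assert sensible == 10 ** 20        # still far too many to enumerate as templates
```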


"I suspect the demise of stock image providers to be the first palpable win for generative AIs, if the copyright question doesn't bog this whole field down"

I'm surprised the copyright issues aren't given more attention. It's technically not legal (in the US) to modify copyrighted images without the author's permission. I don't see how it's possible that systems like DALL-E haven't already done that. There's a near 0% chance that they weren't trained on at least one copyrighted image.


Human photographers are also trained on copyrighted images.

They look at countless numbers of them and learn what is the correct "professional style", etc. This is why you can instantly recognize most stock photos, because they all follow the "stock photo template".


The difference is that AI models so closely recapitulate specific features in copyrighted images that stock image company watermarks show through [0]. This is several levels beyond a human artist implicitly getting inspiration from copyrighted images, and more on the level of that artist explicitly copy/pasting specific pixels from them.

[0] https://news.ycombinator.com/item?id=32573523


The models are probabilistic, they replicate the most common features that they've seen. Guess what shows up in a lot of images?


That's exactly my point — they replicate highly specific features in images with such fidelity that their training is not analogous to humans' artistic inspiration.


They replicate common features. If you paint the same happy little tree in your picture as thousands of other people, then it will probably show up in an image produced by a model trained on those images - but your tree is hardly unique then, is it?


How is the ai supposed to know these watermarks aren't a style element? They're present in tens of thousands of input images, after all. Therefore, I'd say this is a bad example of an AI literally copying from one specific source. It's similar to it using Arial letters: they're everywhere in the source data.


> How is the ai supposed to know these watermarks aren't a style element?

Because of the “i”.


The i stands for imagination/ignorance at the moment. Intelligence (or something indistinguishable from it) doesn't seem too far away but isn't here yet.

So all we have is a dumb bot that can appropriate styles and ideas. Revolutionary, but not quite to the extent needed to sue it for copyright.


It's more like a human than copy-paste. Please read about how it works first.


Copyright law doesn't work like that for photos. When you take a photo of something you become the owner of the image.

In the context of AI, the issue is specifically with using a copyrighted image and creating something new based off of that. That is explicitly illegal for human artists.


> something new based off of that

But where do you draw the line? If AI imagines 3 people around a business table in front of a flip chart, is that copyright infringement on similar stock photos? Note that in the AI created image, the people are unique, they never existed, the business table is unique, the flip chart is unique, and in general you can't point to any existing photo it was trained over and say "it just copied this item here".

If so, why isn't it also copyright infringement when a human photographer stages another similar shot?


"But where do you draw the line"

Well that's sort of the whole thing with copyright law. It's fairly arbitrary. Copyright specifically forbids derivative works: "A derivative work is a work based on or derived from one or more already existing works."

It's vague on purpose because copyright infringements generally need to be handled on a case by case basis.

Now there are AI's trained on images that are copyrighted. If the image is copyrighted, should the AI have been allowed to train on it?

The reason human training/inspiration isn't specifically forbidden is because it can't be. Things leave impressions on us whether we like it or not. Regardless, we can't prove where someone's inspiration came from.

But the act of training an AI on copyrighted images is deliberate. I feel that's a key difference.


> The reason human training/inspiration isn't specifically forbidden is because it can't be. Things leave impressions on us whether we like it or not. Regardless, we can't prove where someone's inspiration came from.

And there's plenty of cases saying that if you're too inspired, that's illegal and/or you owe damages/royalties.

https://ethicsunwrapped.utexas.edu/case-study/blurred-lines-...


Then the AI is performing a sort of collage of copyrighted work and the AI / prompt writer would not own the copyright to the derivative work. If a photographer stages a photo based on an existing photo, and it shares enough features with the original work, it likely would be copyright infringement.


The court has already ruled that you can't own the derivative work anyways, because copyright law requires an individual artist. If I ask bob to make a picture for me, bob actually owns the copyright to start (but can assign it to me). I don't automatically get given copyright because I 'prompted' bob with what I wanted drawn (draw me a mouse). Copyright is given to the artist on the artists specific output.

If I ask an AI for a picture, there is no artist 'bob' to be assigned ownership under copyright law and therefor it's not copyrightable under existing law.

Funny how originally all these pro-AI-art people were anti-copyright law, but I can see them sometime soon lobbying for MORE restrictive copyright law (granting it in a larger pool of circumstances, hence making more things copyrightable) so that they can overcome this.


Why are you comparing a product that's powered by web scraping and GPUs and hundreds of millions of dollars to a human being? This is a product.


Style cannot be copyrighted. It's perfectly legal for me to draw something in the style of another author.


It's explicitly allowed to create new work based on photographs, assuming the resulting work is not too similar to the original.

> For example: if they base their painting on an oft photographed or painted location, generic subject matter, or an image that has been taken by numerous photographers they would likely not be violating copyright law.

> However: if they create their painting, illustration or other work of art from a specific photograph or if your photography is known for a particular unique style, and their images are readily identifiable with you as the photographer, and an artist copies one of your photographic compositions or incorporates your photographic style into their painting or illustration they may be liable for copyright infringement.

https://www.thelawtog.com/blogs/news/what-do-i-do-if-someone...

Because AI rarely recreates images 1:1, it is unlikely to violate any copyrights.


"incorporates your photographic style into their painting or illustration"

Seems pretty cut and paste to me. If it has trained on my images and then uses that trained dataset to generate new images, those images are in violation. Using training sets that include unlicensed copyrighted works requires attribution and licensing. To be legal otherwise, the end user/AI company would have to prove in a court of law that it would still have generated that specific image without training on my copyrighted work, which I can't see the users/company being able to do.


> Using training sets that include unlicensed copyrighted works requires attribution and licensing

Is there a ruling for this? That would be similar to saying that using a school book requires attribution and licensing for your education.


It is not illegal for a human to look at something another human created and learn composition, strokes, lighting, etc... and then apply it to their own future creations. This is all the AI is doing.


I disagree.

Taking copyrighted images and dumping them into a machine learning model is deliberate usage. The AI isn't a person, so it doesn't draw on past experience by happenstance.


Still, AI is just a tool. It's like saying I could draw in the style of another author, but only if I do it on parchment.


AI is just a lossy form of storing copyrighted work and using pieces of it for future output. It definitely requires licensing of the works stored (I mean 'trained on') if they're used outside of 'personal use'. I can't just re-compress tons of pictures into crappy JPG format and then use them however I'd like. I also can't just come up with a new format for machine-storing copyrighted images to be used for creating derivative works, call it AI, and say it's 'different'. The AI company has to be able to prove in a court of law it could have generated the image if it hadn't been trained on my copyrighted work. We already covered this area of law with sampling in music: if you don't want to contend over ownership with the owner of the 'sample', you either license it or... don't use it.


If it is storing the copyrighted work, then I'm sure you could point to which part of the weights corresponds to a specific work, right? The same way you could if we were to "re-compress tons of pictures into crappy JPG format", or if we were "sampling music". Oh, you can't do it? Then I'm afraid it's not the same.


It's hugely different - imagine the number of decisions a person makes when making an oil-painting - each stroke is somewhat influenced by past experience but also by the current state of the painting, their emotional state etc. The AI is just directly interpolating based on past input.

Making the two processes equivalent is very reductive.


The AI is a product created by a company. A vacuum sucking up the scraped remnants of the internet. Hundreds of millions of dollars are spent to pull this off. Stop acting like this is a human or anything resembling one. This is a product and not a person.


Yes, it can be illegal. It happens plenty of time in music, where artists produce songs which are too similar to previously existing songs, and owe damages.


Am I allowed to take an image, apply a lossy algorithm (say JPG) to it, and then use it as my own for business purposes? Nope. You say learn; I say apply a lossy algo and then use the result for business purposes. Seems like a clear copyright violation.


This kind of 'training' is not at all equivalent. There's a reason copyright places value on the expression of an idea (i.e. taking the photo) - image-making is difficult and was a valuable skill, even for a stock photo.


Getty's case is active in the court system in multiple jurisdictions; until we get the outcome of those, we're not going to have a resolution of this. Unless countries legislate/decide to allow training on publicly accessible documents, e.g. as Fair Use/Fair Dealing or whatever.

In short, the copyright issues appear to be getting a lot of attention? Legal precedent takes time.


This will take years for the courts to figure out. In the meantime, Adobe Firefly has apparently not been trained on anything copyrighted, so people who are nervous about lawsuits will use that.


Isn’t it just fair use? Reading the four factor test for fair use it seems like these generative models should be able to pass the test, if each artwork contributes only a small part to a transformative model that generates novel output. The onus will be on demonstrating that the model does not reproduce works wholesale on demand, which currently they sometimes still do.

Arguably also, the copy is achieved at generation time, not training time, so the copyright violation is not in making the model or distributing it, but in using it to create copies of artworks. The human artist is the same: in their brain is encoded the knowledge to create forbidden works, but it is only the act of creating the work which is illegal, not the ability. The model creators might still be liable for contributory infringement though.

Anyway, I reject the notion that any use of unlicensed copyrighted works in training models is wrong. That to me seems like the homeopathic theory of copyright; it's just silly. If copyright worked that way, we might as well write off AGI ever being legal.


Should the model be allowed to train on the copyrighted image in the first place? I think, the answer is no. If I'm an artist, I don't volunteer my art for you to do what you please.

Now consider that these systems are already being used for profit, before this matter has even been settled.


I, for one, am preparing for an era of mediocre content in every field ML can be applied to.


This. Let me know when AI can write sales emails that convert better than emails written by the world's best copywriters.


But why? Most people don’t need the bar set that high. Most people just need B+ writing with half decent conversions.

If you can get 80% of what you want with a cheap or free tool vs 100% with a full-time salaried employee/expensive freelancers, well, most people are going to pick the former.

I do this as a video editor all the time. If I have a fast turnaround, oftentimes I will just drop in a LUT or use auto color correction in my NLE. Of course I will sand down the edges afterwards, but it's not like I'm going to give every single video that crosses my desk the full color grading treatment. Not everything requires that.


For a supposedly smart group of individuals who are good at leaning into the latest innovations, I feel most HN comments as of late are completely, ignorantly dismissive of LLMs simply because LLMs aren't "amazingly perfect" in their current state.

Yet… we are talking about a tool that came out literally FOUR MONTHS AGO. And the huge advancement on that came out TWO WEEKS AGO.

Yet everyone here continues to proclaim, "it's not even that good, honestly," as though no progress will ever be made from this current moment in time.

I feel like I'm taking crazy pills.


This is honestly so interesting to me because I feel like I have the exact opposite experience. I have never experienced the level of hype there is for LLMs both here on HN and in society at large. I see people talking about how amazing they are, how they will replace everyone in a matter of years and even achieve super intelligence. I see people argue strongly against even the mildest LLM skepticism. I use these tools every day because they are useful but I'm also blown away by how often the answers are completely wrong, like not even close. I sometimes feel like I am crazy or everyone is using an entirely different product to me.

But the fact that you feel the exact opposite shows that maybe this is just an artifact of cognitive bias.


That makes two of us, then; I feel pretty much the same way as you. I might be completely wrong about all this, but it is getting to the point where it feels like I'm being gaslit when I compare it with my own experience with these tools so far.


Crafting good prompts makes a big difference. Also, after it gives you an answer, say you think there's an error and ask it to check its own result. It will often correct itself. This self-reflection ability leading to more accurate outputs was pointed out in the new Reflexion paper:

https://nanothoughts.substack.com/p/reflecting-on-reflexion

https://twitter.com/Orwelian84/status/1639859947948363777
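
For anyone who wants to try it, here's a minimal sketch of that ask-it-to-check-itself loop, using the openai Python package as it existed in early 2023; the model name and prompts are just placeholders:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    def ask(messages):
        resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        return resp["choices"][0]["message"]["content"]

    question = "How many prime numbers are there between 100 and 120?"
    messages = [{"role": "user", "content": question}]
    first_answer = ask(messages)

    # Feed the answer back and ask the model to critique its own work.
    messages += [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": "I think there may be an error in that "
                                    "answer. Check it step by step and "
                                    "correct it if needed."},
    ]
    print(ask(messages))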



Thanks for the links, appreciate it!


Genuine question as I definitely need more to balance my own thinking —

What has your experience with GPT been? For me, GPT-3 was not really useful as a software dev.

But GPT-4 is miles ahead of that. It’s helped me write code maybe 4-8x faster than usual, and has even allowed me to debug existing issues far, far quicker and more accurately than I’d ever be able to on my own.

Part of the gap very well might be my own mediocrity with development. I wouldn’t dispute that folks with far superior skills and novel day-to-day challenges might be unimpressed.

But as an average dev writing pretty boring code (REST APIs and system integration mostly), I’ve been blown away by GPT-4. I am pretty well compensated and have been in the field for 10 years, too; but I am aware of my own shortcomings.


I've rarely seen completely wrong answers. When I did see them it was because I was too prescriptive.

Like, I might ask "using this library, implement that feature" in the hopes that it has learned of some way to do a thing I haven't been able to figure out. In those cases I see it hallucinate, which I assume means it's just combining information from multiple distinct environments.

If I'm not too specific, it does a pretty good job.

IMO its biggest fault is that it is not good at admitting it doesn't know something. If they can crank up the minimum confidence values (or whatever, the values used to guess the next token), maybe we'll see better results.
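
The "confidence values" alluded to above do exist: the (older) completions endpoint will return per-token log-probabilities, which you can inspect as a rough confidence signal. A hedged sketch (the model name is illustrative; the chat endpoint didn't expose this at the time):

    import math
    import openai

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="The capital of Australia is",
        max_tokens=5,
        logprobs=1,  # return the log-probability of each sampled token
    )
    lp = resp["choices"][0]["logprobs"]
    for token, logprob in zip(lp["tokens"], lp["token_logprobs"]):
        print(f"{token!r}: p~{math.exp(logprob):.2f}")  # low p = low confidence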


> But the fact that you feel the exact opposite shows that maybe this is just an artifact of cognitive bias.

Or social media/advertising engagement algorithms doing their work. After all, you get more engagement with negative emotions than positive ones.

It seems plausible that you and the OP are at least slightly on different sides of the LLM issue, and so you and the OP could literally be seeing two different realities crafted by engagement algorithms, because it detects that each of you pay more attention to the other side that you disagree with, and that snowballs into seeing only the other side and thinking you're taking crazy pills.


It depends on which one you use. Bard, for example, is kind of crap.


> As though no progress will ever be made from this current moment in time.

The issue is that the folks promoting chatGPT are for the most part incredibly dishonest. E.g. this entire blog post is about the AI having written a sales email, with zero written about how well it actually converted. The author is claiming that the AI can do a superhuman amount of work in 30 minutes, but we don't actually know if it did any work at all.

How can we even know whether OpenAI is making progress if we don't know how good it is in its current state? Back when Go AI was far less good than even the average club player, we at least knew what rank the AI was playing at. Whereas right now the ChatGPT equivalent is basically that it's putting stones on the board in a way that looks somewhat like a real game, but you're not allowed to know what level it's playing at.


I don't understand this perspective; the only way it makes sense is if you haven't bothered to really try to use the tool to get something done. There are literally a hundred million+ people out there using it to get shit done. I didn't sign up until the GPT-4 release because I had tried 3.5 and thought it was ok but limited. I had more luck using the API and custom prompts with GPT-3, but GPT-4 converted me.

I've been using it daily and for increasingly more stuff. It's so good that I'm actively trying to break the habit of using search engines to look up docs, error messages, and SO answers.

Mandatory disclaimer: it doesn't always produce perfect output that works on the first try, but you know what? The old search result method doesn't work on the first try either, especially if I have to read bad or limited documentation, parse a somewhat related SO answer into the solution I need, or search through miles of issues about the error message where everyone is just complaining and no one has found a solution. The big difference is that GPT is a hundred times faster, provides better explanations, and even when it's wrong it gives me enough information to figure out the problem on my own.

Sorry but it's just frustrating seeing "how can we know it's better?" when it's right f-ing there in front of you. Maybe you don't want to spend $20USD to try it out, fine whatever, wait until it's free to use but don't make lazy negative remarks from a place of ignorance.


Exactly. It’s similar to someone saying Google can do a superhuman amount of work.

“All I had to do is search for a sample product launch email and POOF it appeared! Just had to fill in the company name.”

It’s the same thing with the minor code snippets being “written” by ChatGPT. Any real programmer knows that Google could give you pretty much the same thing. And they also know how complicated their actual job is; it goes well beyond the simple prompts people are using, which everyone has been googling for over a decade now.

It’s all hyperbole. This technology is just an evolutionary improvement on Google.


That's one application. You don't find it at least a little jaw dropping that the same program that is "an evolutionary improvement on Google" for programming tasks can also translate text, write (bad but improving) poetry, give reasonable relationship advice, creatively answer questions of the form "what do <x> and <y> have in common" for even radically different x and y, and correctly solve analogy puzzles? All in fluent English?


English? I've been prompting GPT-4 in Spanish all along, and it responds in impeccable Spanish. I can ask it to change the writing style of an essay to famous Latin American writers and it does it admirably. Translating back and forth between English and Spanish respecting meaning and context, even subtleties, no problem. Better than DeepL.

If y'all have been focusing on GPT-4 coding abilities, I ask you to try it with literature-based prompts. GPT-4 is an exceptional writer, summarizer, and style corrector.


I'm not going to claim I know where this is going right now with LLMs, but: throughout history, I don't think anyone really saw automation coming for their own jobs until it happened. I think people tend to think of their craft as an 'art' rather than a product. They miss that other people would MUCH prefer it to be automated, and that people will be willing to put up with slightly inferior output if it's practically free in comparison.


This is already a meme but I'll repeat it here "hard pills to swallow: if you're afraid of ChatGPT taking your job then maybe you're not very good at your job"

What this means to me is that if you see the parts of your job that are basically drudgery as being what you're good at then you should worry. If instead you see them as a necessary evil that you have to do in order to accomplish the bigger goals you're aiming for then you should be happy.


That's the kind of take that I was referring to. Yes, if GPT could replace you now, your job is not very skilled. But if investors see a signal that software might be automatable, they are going to throw mountains of cash at that and they'll be gunning for your job as much as anyone else's.

I've seen a lot of comments on here like "well LLMs are good at writing snippets, but could never write or maintain large codebases to accomplish a larger goal" - but are you SURE about that 'never' given the current generation is already doing things people suspected they could never do? "But they'll never be able to manage / interpret stakeholders", Are you SURE that LLMs will have to adapt to fit stakeholders, and not the other way round? I don't know for sure, and even if this is coming, I've no idea on the timelines. But I'm not completely writing it off as a possibility anymore either.


> I've seen a lot of comments on here like "well LLMs are good at writing snippets, but could never write or maintain large codebases to accomplish a larger goal" - but are you SURE about that 'never' given the current generation is already doing things people suspected they could never do? "But they'll never be able to manage / interpret stakeholders", Are you SURE that LLMs will have to adapt to fit stakeholders, and not the other way round?

That sounds a lot like the self driving cheerleaders five or ten years ago. That work so far has resulted in some awesome features like adaptive cruise control and parking assist but it fell far short of what the hype was promising to deliver by now.

Five or ten years later Mercedes is the only company getting ready to ship level three self driving. Level four and five are still a pipe dream, practically restricted to a few companies like Waymo in a few controlled environments like Phoenix and San Francisco.

GPT4 is great and I can't wait to see what 32K or even 100K/1M token models can do, but I fear we're about to hit the point where progress grinds to a halt because going further requires something closer to AGI than what we have now.


As I say, I'm not sure about anything here, I just think it's risky to write the idea off right now just because it requires progress we can't foresee.

I also don't think the comparison quite works, because no one is saying that we need to get down to zero humans for this to be profoundly disruptive, just enough humans to code review and make relatively small changes. I wouldn't be amazed if that's what software engineering becomes in the coming decades.


I think there's a good chance that the next version of GPT will be better at writing code than a junior level dev. I don't think this is going to result in a net loss of jobs. As software development becomes less expensive per unit volume I think demand will increase so much that both the total number of jobs and average salaries are going to increase.


Yes, because our entitlement expands even faster than GPT's capabilities.


But it's not free. The keyholders might be making it cheap right now, but once the quarterly reports aren't showing enough growth, it'll all go up.


I don't buy that at all - you should look at what happened in the past, not to what's happening in the present. Over time, prices will go DOWN not up, due to improvements in hardware, engineering and probably smaller, more specialised models. A lot of investment will go into making that happen, like any other automation. If OpenAI did want to try and price gouge, then they'd just make a gap in the market for someone else to fill. Also - just consider how absurdly high SWE salaries are - it would have to get orders of magnitude more expensive than GPT 3.5 (say) for it not to be 'basically free' in comparison.


People are generally very resistant to change. Our minds are literally programmed against it... acknowledging an enormous change happening very quickly triggers all sorts of yucky things in our lizard brains. So we try to rationalize things and find some way of explaining to ourselves that "no, this isn't actually a big change, my model of the world is still rock solid" as a pure survival mechanism.


I would have thought filtering people on "being in tech" would have weeded out the people resistant to change, so I'm surprised by the negativity on anything innovative here, whether it's EVs, self-driving cars, AI, crypto (although I can forgive this one), or VR.


People in tech are still just people.


100000% this. HN attracts some smart people but they're still often very flawed people. Myself included.


LLMs were invented four months ago?

Honestly, I feel the opposite. I'm sick of the endless fawning over ChatGPT because it can print code that exists in a Stack Overflow answer somewhere.

I'm also cautious about extrapolating. Constant improvement, let alone exponential, is far from a guarantee, but from what the LLM acolytes would have you believe it's pretty much a given that GPT6 will be an AGI before 2030.


> I feel like I’m taking crazy pills

I feel the same way but on the other side. All I see are non-technical or quasi-technical people using AI tools to perform work that is x% better than Google could do.

The only entity at risk of being displaced from this technology for the foreseeable future is Google.

The only thing left to discern is what % better this technology is than Google’s antiquated algorithms.

Think about it for a second. Putting the right query into Google could give similar templated results. This technology is just an evolutionary improvement on that.


That's just wrong, have you used GPT-4? Last I checked I can't ask Google to invent a new programming language specification for me and get a result in under a minute. I then asked it to write a short tutorial for that language in the style of Learn X in Y minutes and got a good result. I then asked it to write bubble sort in the new language and got a result. Comments like this make me feel like I'm taking crazy pills but really I just know that there are a bunch of lazy curmudgeons out there that can't be bothered to look for themselves.


> That's just wrong, have you used GPT-4?

Yes. A lot.

> Last I checked I can't ask Google to invent a new programming language specification for me

I have literally never done this in my job. Ever. I've been a professional software developer for multiple decades.

> I then asked it to write a short tutorial for that language in the style of Learn X in Y minutes

So you can't Google, "Tutorial How to Learn X in Y minutes", get a result that has a completely viable format and details and then do the remaining 15% of the work to fill in your specific items?

> I then asked it to write bubble sort in the new language and got a result

Do you know how many times I've written a bubble sort in my actual software profession?

The use cases you're bringing up are very typical of what people bring up with GPT-4: contrived stuff without real-world application, or something that is an evolutionary improvement on Google.

Overinflated claims of what GPT-4 can do cause real harm to our industry. There are business owners making decisions now based on the smoke-and-mirrors demos that people are showing, which have convinced them that they won't need developers in a few years. I literally met with a computer science major who was considering switching majors because of GPT-4.

Over the next decade, GPT-4 will be an amazing productivity enhancer for actual software engineers. Just like Google has been but with significant improvements.


> There are business owners making decisions now based on the smoke and mirror demos that people are showing that has convinced them that they won't need developers in a few years

If they aren't doing their due diligence on this then that's their problem. If my example isn't good enough then provide one of your own that you couldn't get to work.


While I’m aware of my own mediocre programming skills and by no means claim to be an expert, I’ve been a software engineer for 10+ years and have done pretty well for myself. I’d hope that doesn’t qualify as “quasi-technical”, but maybe you are lucky enough to work on novel problems that require advanced problem solving every day.

Regardless, I’d argue that GPT-4 is actually far better at programming assistance, understanding concepts (it’s phenomenal at explaining things when prompted within a context), writing in general, and kick-starting creative pursuits than it is being a Google-replacement (for now, at least).

Have you seen or used GPT-4? What has your experience been? What has it failed at, or rather, what would you wish to see in such a system that might make you go, “huh, ok — that is pretty cool.”


I was just thinking this same thing.

I think the sentiment is warranted in some contexts, but in others it just seems dismissive.

For instance, I am not impressed by ChatGPT's code output. It seems to be incapable of understanding the nuance that is required to modify known or similar solutions to fit a novel problem. In that sense, I don't think it's doing much more than a search engine. It could be it just hasn't had enough training examples. It could also be that there is something uniquely more difficult in regards to solving novel problems via code (I doubt this).

But to get a marketing campaign and a website (albeit in mediocre quality) from text prompts is truly amazing imo. A lot of people are missing the point that these models are in the toddler stage of their life.


Completely agreed. Plenty of criticism focuses on specific algorithms or frameworks, as though there’s something inherent to LLMs that render them unable to improve in that regard.

Other objections, such as “if your job can be replaced by an algorithm, you weren’t particularly valuable in the first place” or “software development is much broader than writing code” are irrelevant to the question of whether a large portion of developers will be replaced.

I don’t think they will, given the world’s appetite for software, but it might become a less prestigious and lucrative profession on average.


Why do you think it will become less prestigious? This technology is going to 10x the abilities of every developer and since we are now all guiding an AI to write code instead of typing it out ourselves it's going to become even more important that software engineers earn the "engineer" part of their title through education or experience. More than ever we will need to know how to describe the problem and the solution and how to think critically about the code that the AI produces. The people with the knowledge to do so are going to become even more valuable since they will be among the most productive but even the fresh out of bootcamp junior will be more productive as a prompt-monkey instead of a code-monkey.


My reasoning starts from the idea that “prompting” is essentially programming in a higher-level language, and it will eventually replace traditional coding. At a certain point, we’ll be stating business requirements, and the algorithm will take care of logic and resource provisioning.

Longer-term, what we now consider tech skills will be replaced with communication skills and business domain knowledge. This will cause an influx of workers from different professions and walks of life. As the field starts encompassing a broader spectrum of work, the barrier for entry will be lowered, and there’ll be more work and more practitioners.

There will still be high-paid jobs, but on average, software development will become a more traditional middle-class profession.

This is all speculation on my part, of course.


People were saying this exact thing in 1960 about COBOL. The job of programmer was going to go extinct because business users could write their own software.

We've been through a number of iterations of the same pipe dream since then, but it always turns out that the actually hard problem in programming is figuring out the requirements in full detail without handwaving and glossing over anything, and translating them into unambiguous instructions. And "workers from different professions and walks of life" just inevitably suck at that.

Whether this time it really is different will hinge on whether LLMs can really figure out the handwavey parts, or whether those will be exactly where they will always make up shit and be confidently wrong.


No, higher-level programming languages didn’t make developers obsolete, but they’ve been increasingly lowering the barrier for entry into the field. Plenty of business people can write basic SQL queries nowadays, and Python is jokingly referred to as pseudocode.

Even before the recent ML advances, there’s been a shift towards involving a broader, less skilled workforce as the amount of work expands. Hence the people switching careers and getting gainful employment after a few months or even weeks in a coding bootcamp. Don’t think that was as common in the assembly coding days.

So while none of those advances suddenly destroyed the profession, there’s been a gradual change to include a broader spectrum of practitioners. I don’t expect the LLMs to revolutionize the field in two months, but I feel like it’s safe to extrapolate that this is where it’s headed eventually.


You’re implicitly trying to make a prediction on where the technology is going based on present trends, something that the industry has never been good at.

Remember how 3D printing was supposed to be ubiquitous by now? Or how self-driving cars would lead to an economic apocalypse because of how reliant the economy is on truckers? Remember all the predictions of bitcoin going to $100k? Remember how an AI startup called “The Grid” was making news for their AI website builder back in 2016 (edit: 2014)?

Don’t even get me started on VR and mixed reality. Remember Magic Leap?

My takeaway from all this, other than that futurists are hucksters, is that progress is actually quite slow, and relies on sudden, unpredictable breakthroughs. I mean, without the iPhone, we arguably wouldn’t have smartphone apps, the death of Flash, or responsive websites. Without this single pivotal product, tech as we know it would be a very different place. I know it’s a stretch to some to call the iPhone a breakthrough, but its impact has been pretty huge.

The catch is knowing when something is a real breakthrough, and when it isn’t. I genuinely thought that the Oculus was at the time, and yet here we are, years later, and it seems like nothing has changed, aside from incremental improvements in VR display technology and a very niche VR gaming community.

GPT-4 is clearly impressive from a technical standpoint, useful even, but where does it really go from here? Does the technology take off, or does it plateau in its present state?

Is it VR and 3D printers all over again?


And remember Transistors?


Paradoxically the hype was much lower for transistors.


The root of the problem is that no matter how hard you want to believe, the LLM doesn’t actually know anything.

If you gave an idiot something very intelligent to say and he read it out loud perfectly, people might be very impressed too. That’s GPT.


An AI version of Popehat's Law of Goats applies. If an AI attached to a nuclear command center issues an order to launch the nukes, somebody (likely including us) is fucked. It doesn't matter if the AI "knew" anything, if it deliberated over its target selections, if it assessed and prioritized the most likely threats. Something happened and now we have nukes in the air.

If an AI reliably produces good code in response to prompts, it is a better programmer than most humans. If an AI reliably produces prose that's free of errors, well organized, and summarizes the issues asked for, it is a good writer. Irrespective of what we can say it "knows" or doesn't know.


I used to think this but now I'm fairly convinced that it "knows" somewhat less than someone who was locked in a tiny dark room with no input except the ability to read a lot of text from the internet would know if that was their whole life. I don't believe it has a sense of self or consciousness, just that it possesses whatever knowledge is embedded in written text. Maybe a better analogy would be if you could cut out someone's language centre and put that into a jar hooked up to text input and output. It's not a whole mind but it sure feels like it's a piece of a mind that can do mind-like stuff.


You can compare GPT-4's limitations to Helen Keller's. Someone who is deaf and blind can still reason as well as someone with "all inputs enabled". Helen Keller still had a "full mind."


That's why I included the part about being locked in a small room with only text for input. People who are deaf and blind still interact with the world through their other senses. GPT-4 has no other senses and no body.


That’s just the illusion of LLM drawing you in deeper. Knowing the correct thing to say is not the same as knowing things.


And what is “knowing”?

Everyone repeats the retort you gave, yet I’ve yet to see a clear definition of “knowing”.


Are you talking about the difference between memory and reasoning? It's a bit hard to understand what you mean by knowing the correct thing vs knowing things. Both mean you know things, correct or not.


When you know something you know it in multiple different contexts and forms, not just how it relates in response to a stimulus, or a prompt.


Still not sure what you're talking about that's different from what GPT can do. It's very good at transferring from one context to another while retaining the same intent or meaning. Could you give an example of something you think it can't do?


Innovation.

Could GPT be given some screenshots of a game you want to play and then code it up?

Could you run through a demo of some competitor’s app and have it make something similar but better?


I'm using it right now to help write a game that I had an idea for. I'm writing in a programming language that isn't the same one I use daily, and I'm using a graphics library that I've only used once before to make a small game, and GPT has been a massive help with this. It's helped me solve some tricky problems, like getting a shader to work the way I wanted it to, and I've used it to create first drafts of all of the code so far. I guess that's not pure innovation, but it sure as hell has a better grasp on a lot of the stuff it's writing than I did at first.

It can't just look at a picture and produce the exact game you want, but neither could I. I'd have to ask you a bunch of questions about what you wanted the gameplay to be like, and whether you wanted to release it for PC or console or both; I'd have to get an artist to create a whole bunch of concept art and then ask you to approve the ones you like; and then I'd need to implement all the code and playtest it with you and make changes, etc.

It's a bit unfair that you want this tool to do more than a single person could just to prove that it "knows something". Just because it isn't 100% autonomous doesn't mean it has 0 knowledge or ability.


Read the Microsoft Research paper on GPT-4. Some extremely intelligent behavior emerged from just text that has nothing to do with the text it was trained on.


But what does it mean to “know” something?

And when something gives the increasingly-accurate illusion of knowing, I fail to see how it matters (with regard to impact on society and overall utility).

I’m not saying GPT-4 is this amazingly accurate, near perfect model. But if you extend the timeline a bit, it’ll be able to become more and more accurate across a broader range of domains.

Furthermore, how can we prove a human “knows” something?


Ask anyone who has hired someone who says all the right things and seems intelligent, but has no experience or skills in what they actually talk about.

When I write code, I don’t just focus on solving the problem at hand. I think about things like: how will another human interpret this, how maintainable will this be, what are the pitfalls down the line, what are the consequences of this, any side effects, performance implications, costs, etc… things GPT does not know.


But still, what does it mean to know something?

And your point about humans lying about knowledge only to be found inexperienced is quite the opposite of an LLM (albeit there is the hallucination problem, but GPT-4 is a massive improvement there):

These models do have “experience” aka their training data. And I would argue with most every one of your examples of things that GPT doesn’t know.

You can ask it about performance implications, side effects, costs. It’s quite good at all that right now even! Imagine the future just a few years out.


When asked about performance implications, it gives fairly shallow, generic explanations; it doesn’t do true “deep dives”. These are just built from training data of other explanations.

There is no “getting better” from this. If you gave a monkey a typewriter and it occasionally typed words randomly, you wouldn’t say “Wow, this is just what it can do now, imagine several years out!”


Continue asking it to provide details and it can. Or, prior to asking it about performance, ask it to respond with as much detail as it can and have it include details you specifically want to see.

Comparing GPT-4 to a monkey with a typewriter, and claiming the absolute of “there’s no getting better from this” when we’ve literally seen dramatic progress in just months?

I think you’re missing out on some of the utility this stuff can actually provide.


No, you see, it needs to do these things on its own, unprompted. It has to consider multiple solutions to problems it encounters and choose the best one, not just the most probable one. It’s not made to evaluate things that way; you can’t hand it multiple implementations and ask it to weigh the pros and cons of the different approaches and recommend the best one for what you’re trying to do. You can’t hand it your code for code review and ask what you could improve and expect to get a response that isn’t just fabricated from what other people have said in code reviews.

And it will never do those things, because it’s an LLM and there are limits to what LLMs can do. There is no “getting better”, it will only sound better.

If it’s going to replace programming, the prompts simply cannot be more laborious than writing the damn code yourself in the first place.


Think 50 LLMs with different personalities and focus points talking to each other, mixed with stuff like Wolfram. You can instruct them to “invoke” tools. An outside system parses their “tool use” and injects results. You can get quite crazy with this.

LLMs are just one part of a much larger looping system that can do these things you speak of: be active and seek things out. Of course, it’s all illusory, but I’m sorry, I think it’s no different with myself.
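
A minimal sketch of what such an outer loop might look like; the CALL(...) convention and the toy calculator tool are invented here for illustration, not any real protocol:

    import re
    import openai

    def call_model(transcript):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": transcript}],
        )
        return resp["choices"][0]["message"]["content"]

    TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool; eval is unsafe

    transcript = (
        "You may write CALL(calculator, <expression>) to compute something.\n"
        "Question: what is 17 * 23 + 5?"
    )
    for _ in range(5):  # cap the number of round trips
        output = call_model(transcript)
        match = re.search(r"CALL\((\w+),\s*(.+?)\)", output)
        if not match:
            print(output)  # the model answered directly
            break
        tool, arg = match.groups()
        result = TOOLS[tool](arg)                         # run the requested tool
        transcript += f"\n{output}\nRESULT: {result}\n"   # inject the result back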

By the way, it actually gives ok reviews on novel code, so I’m not sure what you mean. At some point nothing is truly novel, even innovation is composing existing “patterns” (at whatever abstraction level).


I would like to see examples of these “ok code reviews”. Everything I’ve seen has been fairly plain and not too insightful.


> There is no “getting better” from this. If you gave a monkey a type writer and it occasionally typed words randomly you wouldn’t say “Wow this is just what it can do now, imagine several years out!”

So thinking that ChatGPT could gain understanding is as crazy as the idea that primates could learn to use tools or type words?


What does it mean to "know" something, and why does GPT not fit the criteria?


Yep, but that’s true for HN and almost every new piece of tech. The general reaction is always “meh, it’s not 100% perfect so it should be trashed”.


On the other hand we finally got full self-driving cars by now. Or did we?

https://jalopnik.com/elon-musk-promises-full-self-driving-ne...


If you’re on HN, I would assume that unlike regular customers, the journey towards a technical achievement should be as exciting to you as the final achievement itself.

ChatGPT is a giant step forward in the journey towards AGI. Tesla’s cars, for all their flaws, are big steps forward in EVs and even self-driving.

I personally find that exciting enough.


> Yet everyone here continues to proclaim, “it’s not even that good , honestly.” As though no progress will ever be made from this current moment in time.

As a non-expert in the field I was hesitant last year to disagree with the legions of experts who denounced Blake Lemoine and his claims about Google's AI being alive. I know enough to know, though, of the AI effect <https://en.wikipedia.org/wiki/AI_effect>, a longstanding tradition/bad habit of advances being dismissed by those in the field itself as "not real AI". Anyone, expert or not, in 1950, 1960, or even 1970 who was told that before the turn of the century a computer would defeat the world chess champion would conclude that said feat must have come as part of a breakthrough in AGI. Same if told that by 2015 many people would have in their homes, and carry around in their pockets, devices that can respond to spoken queries on a variety of topics.

To put it another way, I was hesitant to be as self-assuredly certain about how to define consciousness, intelligence, and sentience—and what it takes for them to emerge—as the experts who denounced Lemoine. The recent GPT breakthroughs have made me more so.

I found this recent Sabine Hossenfelder video interesting. <https://www.youtube.com/watch?v=cP5zGh2fui0>


I’m with you. I believe we haven’t even got started in exploring what you can do with the existing tools, let alone what new and improved ones we are about to get.

I can imagine LLMs becoming a UI in front of just about everything. Instead of googling the ffmpeg flags you need, you’ll just ask your terminal to walk you through it.
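
Something like this is already easy to prototype. A hypothetical "ask your terminal" helper; the script name, system prompt, and model are all assumptions for illustration:

    # Usage: python ask_ffmpeg.py "extract the audio from in.mp4 as mp3"
    import sys
    import openai

    request = " ".join(sys.argv[1:])
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Reply with a single ffmpeg command "
                                          "and nothing else."},
            {"role": "user", "content": request},
        ],
    )
    print(resp["choices"][0]["message"]["content"])  # review before running!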


> Yet everyone here continues to proclaim, “it’s not even that good , honestly.”

We must be reading different HN. This is not at all what I'm seeing. As of now, the first comment I'm seeing which is dismissive is the sixth from the top, while your comment is second from the top.


I feel like it's a tale as old as time that people have very different experiences reading HN despite it being the same content for everyone. I assume due to some elements of human psychology.

Ask an HN person whether the website is pro-spaceman or anti-spaceman. A lot of people feel very strongly one way or the other.


I would be careful with the extrapolation... - https://xkcd.com/605/


> who are good at leaning into the latest innovations,

Maybe not true.

There are many threads where new products got comments of "useless" and then launched successfully. HN users are smart, picky, and not representative of the crowd.


This might be how the singularity plays out.


I asked Google Bard to tell me a short story about a computer hacker. Here was the result, completely unedited... Blew me away:

Sure, here is a short story about a computer hacker:

Once upon a time, there was a young hacker named Kevin who loved to explore the world of computers. He was always curious about how things worked, and he loved to find new ways to break into systems. One day, Kevin was exploring a new website when he came across a hidden file. The file contained a list of passwords, and Kevin was curious to see what they were for. He tried a few of the passwords, and to his surprise, they all worked! Kevin had just hacked into a major company's computer system.

Kevin was excited about his discovery, but he also knew that he had done something wrong. He had hacked into a system without permission, and he could get in a lot of trouble. But Kevin couldn't help himself. He was too curious about what he could find in the system. He started exploring the files, and he quickly found out that the company was up to some shady business. They were involved in illegal activities, and Kevin knew that he had to do something to stop them.

Kevin decided to hack into the company's website and post a warning to their customers. He wanted to let them know that the company was not trustworthy, and that they should avoid doing business with them. Kevin knew that his actions would get him in trouble, but he didn't care. He was doing what he thought was right, and he was willing to face the consequences.

The company was furious when they found out what Kevin had done. They threatened to sue him, and they even tried to have him arrested. But Kevin was not afraid. He knew that he had done the right thing, and he was willing to fight for what he believed in.

In the end, Kevin was able to expose the company's illegal activities, and he helped to protect the company's customers. He also learned a valuable lesson about the importance of doing what is right, even when it is difficult.


but isn't this "story" deeply boring?


Yes, but it seemed like the prompt asked for a "boring" story. I guess you could make it more interesting by writing a more detailed prompt.


The prompt didn't ask for a boring story; nothing suggests that at all.


FREE KEVIN


Everyone seems to think their particular niche is going to be the last to be commoditised. Yes, you've saved some time marketing your product. But your product is just going to get replaced by equally lazy people who type "write a business simulation to teach my students leadership" into ChatGPT. And then those people are going to get replaced by students not going to business school and just asking ChatGPT for a business curriculum directly. Except those people aren't going to get jobs because all the actual businesses are using ChatGPT instead of employees. Except those businesses' products also just got commoditised so they can't pay their ChatGPT bill.

ChatGPT is not a tool with which you can build a bigger moat. Huge amounts of money are going to be made in the short term, but in the long term, I think your work being amenable to aid or replacement by LLMs is an indication you should be looking for higher ground, even if it's just to survive some plausible AI Winter.


I agree, most are missing this aspect. If AI progresses as the proponents assume, continually exponential, then this is different from all prior disruptions.

It is accelerating technological evolution, meaning there is no island of stability on the other side. There is no adapting to the change and then moving along for a while; it is continuous. What makes this disruption different from all others is that AI is not a narrow disruption. It is a disruption for everything, because at its core it is a machine for the replication of skill and technology, a concept that has never existed with any prior technological disruption.

I've described this somewhat as the shrinking innovation, disruption and adaption cycles that leave us completely unable to keep up.

"Climbing the skill ladder is going to look more like running on a treadmill at the gym. No matter how fast you run, you aren’t moving, AI is still right behind you learning everything that you can do."

https://dakara.substack.com/p/ai-and-the-end-to-all-things


If you use ChatGPT as prescriptive, that it tells you what to do, then you're only as good as what you put in and how good ChatGPT is.

But if you use it as a coach/consultant/pair-programmer/R&D exploration/brainstorming session, then you have instant access to an "expert" in any field.

That's something that can supercharge the productive output of any worker. And/or lead to dismissal of most of the team...


When I was young I read a load of books and wrote a bunch of waffle about them (essays). The waffle wasn't much use to anyone except myself because the process of producing the waffle made me do some basic thinking.

So much human activity is about structuring our own thoughts so we can ascend to a higher level of activity and insight.

I mean everyone knows marketing plans are BS but they're part of a process of group thought.

I am genuinely amazed at some of the chats I've had with AI but I hope the outcome will simply be a clarification of what we are all actually doing.


Yes, we risk AI eating the low-hanging fruit, meaning humans can't grow the skills to pick off the much harder-to-reach fruit. This could stunt human growth, whilst causing a large shift of wealth to those with the resources to train large ML-based systems.

If we don't share the gains, then overall I see this as possibly being a loss for society.


A terse bullet list suffices to guide thought. A 5000-word essay is busywork.


The completed website looks really bad and generic, though; I'd argue that even with the time saved it's a net loss. Since the generated images and video look uncanny/fake, I'd assume it's a scam website.

Not gonna lie, it's kinda fatiguing seeing people hype lazily generated AI content as really good, when it's more often than not mediocre. I don't know if it's because people are intentionally hyping their results, or if they have poor taste/standards.

I think the real lesson here is less is more, and I'm afraid with generative AI there's gonna be so much churn of content, we'll all become fatigued.


I feel that misses the point. This is a demonstration of what these systems can potentially do.

The website doesn't look great. The emails are a bit generic. But we are in the very early stages of these models. I think the fact that a website can be generated from text prompts is remarkable.

I wouldn't scold my toddler for not walking very steadily.


I agree, it's a cool proof of concept of what can be achieved right now. Maybe I'm being less charitable than you, but I think the author's main focus was on growth hacking and business productivity gains, which works in this case if you ignore/don't care about the nuances/quality of what you're generating.


Mediocre was probably better than the guy's best effort in a domain he didn't understand, given a couple of days. That's a net productivity boost for him.


I personally wouldn't want to ship mediocre content that looks like that, so for me all he did was waste 30 minutes prompt-generating marketing BS and weird-looking images. Don't get me wrong, I think AI can definitely be a productivity and inspiration boost in the right hands, but it's going to be abused so much that I don't know if in the future I'll be able to read a long paragraph on my screen without wondering if it was written by a human (or even read by one prior).


As per Nilay Patel's comments about SEO spam hustlers, the web is going to become a vast(er) ocean of shit.

https://www.theverge.com/2019/7/2/19063562/ai-text-generatio...

https://www.theverge.com/2023/1/19/23562966/cnet-ai-written-...


Would you be better off working from only the prompts, or from the AI PoC, if the OP sent it to you and asked you to improve it for ~5,000 or something?


Could you do better in ~10 minutes?

Sure, all the content is mediocre, but it's enough to start, and you could choose where to invest your budget (video editing, artwork, website) to improve on it. Could you produce an actual PoC like this in half an hour?


It feels like there's going to be a whole lot more marketing content that's not getting read.


Yeah, the end result is boring. I'd rather watch a 2-minute Loom of someone using the product, which would have taken them less time than the AI approach.


Perhaps this will only further devalue and misalign the already in-shambles communication between producer and consumer.

For example, I’d trust 1 video review over 1000 text reviews, and even then, a family member’s opinion over 1000 video reviews. The number of “agendas” out there is staggering, and it feels naive to assume that most of them would align with mine.


> (Prompt#2) Pretend you are marketing genius. We are going to launch the Saturn Parable. You should give me a document that oulines an email marketing campaign and a single webpage to promote the game.

Cool, so let's see what Bing spits out.

> I’m sorry but I’m not able to create a document that outlines an email marketing campaign and a single webpage to promote the game. However, I can suggest some steps that you can follow to create an email marketing campaign and a single webpage to promote the game.

> snip

---------

What the hell is up with this? I copy/paste the prompts exactly as outlined in this blogpost, and I get completely different results.

I know I'm not the only one having this issue. But it makes me extremely distrustful of these blog posts. If I can't replicate their prompts or how they work, how the hell am I supposed to believe them?

And yes, I did start with: "Look up the business simulation market. Look up Wharton Interactive's Saturn Parable" as prompt#1, so that Bing/ChatGPT already had Saturn Parable "in its memory".

-------

EDIT: I tried these two prompts in "Creative Bing Chat mode". I got the first sentence out of Bing Chat, but the rest of it failed. There must have been a timeout of some kind, because Bing Chat just hangs and fails.


1000 people using the same prompt will get 1000 different results. LLMs are completely non-deterministic.


I repeated the experiment 3 or 4 times before posting, with slightly different phrasings.

I know this thing isn't deterministic. But never have I even gotten past step 2 of this blog post.

I dunno. Has anyone else gotten the prompts to work as the blog post alleges? If so, can you share your prompts?


Why are they non-deterministic? Is randomness intentionally injected? Or is it intrinsic to the approach?


Actually, LLMs are completely deterministic. Their output is a list of possible words, ordered by probability. If you always choose the highest-ranked word for the next iteration, they will always generate the same sequence for the same prompt. In all current implementations, a small amount of randomness (called "temperature") is added. In practice, the higher the temperature, the more probable it is that a word further down the list is selected, rather than the top one.
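
A toy sketch of that mechanism (the scores for the three candidate tokens are made up; at temperature 0 the top-ranked token always wins, so the output is deterministic):

    import numpy as np

    def sample(logits, temperature):
        if temperature == 0:
            return int(np.argmax(logits))          # greedy: always the top token
        scaled = np.asarray(logits) / temperature  # low T sharpens, high T flattens
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()                       # softmax over candidate tokens
        return int(np.random.choice(len(probs), p=probs))

    logits = [2.0, 1.0, 0.5]    # made-up scores for three candidate tokens
    print(sample(logits, 0))    # always token 0
    print(sample(logits, 1.5))  # sometimes tokens 1 or 2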


Check out the temperature docs in the API reference manual: https://platform.openai.com/docs/api-reference/completions/c...
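
For example, pinning it near-deterministic through the API looks something like this (the parameter name is from the docs above; outputs become much more repeatable, though not strictly guaranteed identical):

    import openai

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Name three prime numbers."}],
        temperature=0,  # always pick the highest-probability token
    )
    print(resp["choices"][0]["message"]["content"])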


I wonder if some kind of Blockchain proof of work could be used to verify chat results in the future.

Especially if the models grow quicker with realtime data.


The problem is that, business-wise, this doesn't increase your ability to do stuff.

Superficially it does, but that would only be the case if ChatGPT were available exclusively to you. You have no business advantage from it over anyone else using it.

What it does do, is devalue your skills (if you had the skills before to manually create all that material), and devalue the final product.

Once, having such online marketing material used to be a great asset, even if it was crudely made by today's standards (think 1996). Later, as there were all kinds of services to help create one (with assets, etc.), more people that could do it for cheaper, template-based website builders and so on, it was far less valuable. Now, with ChatGPT churning good-enough material in 30 minutes, it would be as valuable as spam.

In fact, there will be (already is) a huge industry of automated AI spam content, including fake companies with marketing material. Not the hand-built/template-built kind of today, where someone has made 30-50 slightly different BS companies to e.g. scam or drop-ship, but one where every scam artist can have thousands of them within an hour, and have them be topical on the latest products and trends, complete with "chat support" and everything. And of course any person who has some BS idea, but couldn't even use Wix to make a website before, much less write copy for it, will now have one (they probably won't be able to use ChatGPT directly, but they'll be able to use a turnkey "make me a website" GPT-powered version of something like Wix).

The business advantage of using GPT to boost your productivity is like getting rich by the UN giving everybody on Earth 100 billion dollars!


So, congrats. You "created" something without any aesthetic or marketing value. Yes, LLMs help, but the idea that time to market alone will provide the real value in a competitive reality is laughable. And no, when the prompt designer/engineer has no taste and no more than average skill, the results will not be mind-bending or a guaranteed hit.

A force multiplier is not a quality multiplier, and the availability of these tools will generate mediocre junk at the speed of light. Soon we will have to add another blocker to our information journeys, this time for AI-generated marketing content.


Particularly on the issue of coding with the help of these models, I am especially concerned with Kernighan's law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

After seeing developers gainfully employed justifying their horrible SQL with "I don't know, the ORM wrote it"; others mentioning they've never heard of transaction isolation levels; the old mantra of "NodeJS never blocks, so your concurrency issues are gone"; and even, recently and incredibly, having to argue with somebody about a cloud architecture design where, surprising as it might seem, the argument from the other side at one point was "but ChatGPT says..."... Taking all this into account, I strongly recommend you keep brushing up your skills.

I predict consultants and developers able to fix the bugs in these ChatGPT-driven development applications will be rewarded with hourly rates that will make FAANG salaries look like Monopoly money...


Experience in software projects plus AI gives a clever consultant a force multiplier that is unheard of. A person who knows how things are made but lacks time can leverage AI to boost their productivity by several multiples. What a time to be alive. It feels like I have been given a new career overnight.


AI can filter through thousands of legal case files and court documents to pinpoint exactly which cases are relevant to a current case that a lawyer may be advising on! It took me exactly 30 minutes to go through 428 case files... :)


Did it? How do you know it didn't confidently miss important documents?

My experience so far has just been asking ChatGPT questions and then researching it myself to confirm what it says, so maybe I'm missing something. But it has been confidently wrong on important details a large enough percentage of the time (right now) to make it absolutely not a fire-and-forget tool.

The worst part is the confidence: it's like having a coworker that just straight up lies to your face randomly. Even if it's only 5% of the time you basically can't trust anything they say, and so you need to double check all of it.

This doesn't make it useless, but it means it lends itself to "hard to do but easy to verify" tasks. Which afaict your example is not: you can verify the documents it picked out are relevant, but not that the documents that it didn't, weren't.


Coworkers give me wrong answers a lot more than 5% of the time.


On the other hand I can usually come up with my own estimate of how trustworthy the answer is when a human gave it to me, e.g. thanks to:

* their reputation with respect to the question domain (if I ask a basic C++ question to a C++ expert I'll trust them)

* their own communicated confidence and how good they are at seeing their own shortcomings (if they say "but don't quote me on that, better ask this other person who knows more" it's fine)

5% of bad answers doesn't matter if, 99% of the time, I knew I should look further. ChatGPT and others are missing this confidence indicator, and they seem to answer just as confidently no matter what.

To be clear I don't see a fundamental reason why LLMs couldn't compute some measure of confidence (which will itself be wrong from time to time but with less impact) so I expect this to be solved eventually.


Base GPT-4 already did this (confidence about something directly correlated with its ability to solve the problem/answer the question correctly). You can read the technical paper. But the hammer of alignment (RLHF) took it away.


How much better is this than a keyword search across the documents?


I've seen people use "code words" exactly to avoid being found by regulatory keyword searches.

AI can figure it out from context; a keyword search will miss it. And no, you don't know the "code words".


Keyword search doesn’t understand the context of what you’re asking for.


And when it’s a probabilistic LLM instead of a keyword search, how do you know you didn’t miss anything?


As opposed to manually trawling through thousands of documents, based on a keyword that may, or may not, appear in the context of the case file?

It's a huge boon, but nothing is perfect. If it's that important that absolutely nothing is missed, maybe try multiple approaches concurrently. Take this as a value add, not a replacement.


If you don't already grok all the legal jargon, the LLM is probably better than a keyword search.

I'm not a lawyer, so when I search British copyright law, I wonder why those forms of words don't result in the staff of search engines and social media sites being arrested and imprisoned:

https://www.gov.uk/government/publications/intellectual-prop...

Although, probably best to ask a lawyer for legal advice even if they end up using GPT-n themselves; they've probably got public liability insurance if they get something wrong.


Humans miss things all the time. It's okay to make mistakes.


It’s not replacing a human, it’s replacing a keyword search.


Can it? Or are you just imagining that it might be able to one day?


How in the world did you feed GPT those 428 case files? There's nothing in the API to make it ingest that much info.


I'm guessing the author is computing embeddings https://platform.openai.com/docs/api-reference/embeddings
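
If that's what's happening, the rough recipe would be: embed each document once, embed the query, rank by similarity, and send only the top matches to the model. A sketch under those assumptions (the model name is from OpenAI's docs; the documents and query are placeholders):

    import numpy as np
    import openai

    def embed(text):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return np.array(resp["data"][0]["embedding"])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    documents = ["case file 1 text...", "case file 2 text...", "case file 3 text..."]
    doc_vectors = [embed(d) for d in documents]

    query_vec = embed("breach of fiduciary duty by a company director")
    ranked = sorted(zip(documents, doc_vectors),
                    key=lambda dv: cosine(query_vec, dv[1]), reverse=True)
    print(ranked[0][0])  # most relevant document; feed the top few to the model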


Things will improve, to be sure, and the volume of word vomit is impressive, but much like when laser printers democratized publishing, you're going to initially end up with memos using 24 (all of the available) fonts.

I turn on a 'certain kind' of movie or TV show and it becomes apparent that the scriptwriter really isn't an FBI agent, or doctor, or IT person. They have a feature-length story, with amazing CGI, lighting, 4K video, and Dolby... but the writing isn't believable and the end product is shit.

I'm wondering if this will allow more people to do more things, but the things that are turned out will look average, and the people that dedicate the time, learning, and skill to %product% will still turn out things that stand out.

I have a similar old-man gripe about CGI and music... many more people have the ability to do the thing, and the end result is that the thing becomes commonplace and loses some of its wonder. You have more and more people turning out music, and as a result, more and more people can't make a living making music.


> I gave myself 30 minutes, and tried to accomplish as much as I could during that time…

Have you watched nerdforge? “I spent 1, 10 and 100 hours on X…”

Excellent content. High rated. Highly successful.

Here’s the thing: yes. Spending 40 seconds clicking on the “generate image” button can indeed produce some random crap.

…but, I’ve already seen that. I’ve done it. Here’s a pro tip: any content that takes you 30 minutes to do is something almost no one is interested in.

It’s too trivial.

So, here's my challenge: ok, now go and spend 10 and 100 hours to see how far you can actually take it if you devote real effort to using these tools to actually do something.

It might get a few clicks today, but the barrier to spending 30 minutes to generate this kind of stuff has become so low that it’s basically worthless.

We get it. You can click on the generate content button.

Does it scale? Can it generate prompts for itself when you don’t have time to do it yourself? Can you refine the content so it’s not so bland and generic?

…or is that a problem for GPT5?


Finally the war has been won. This wunderwaffe will destroy the front-end developers once and for all.

“ChatGPT, please update the website to the latest meme framework”

And with that they were all obsolete


I’m not worried. There’s much more to the job than pedantic code reorganizing. As a matter of fact, it seems to be good at what I’d like not to do as a frontend dev.


More like super spam generator.

Humans are good at coming up with good explanations.

There is nothing superhuman about generating a lot of crap marketing noise in 30 minutes.


Yep, dead on.


"Historically, these sorts of disruptions lead to short-term issues, and long-term employment growth"

I think the pace of AI advances may turn these disruptions into long-term issues and not lead to employment growth.

Yes, ChatGPT can be a multiplier on human productivity. But that's assuming you can learn how to use it correctly. The author here seems to know how to scale the complexity of the prompts in such a way that they get meaningful output (a complete website). That sort of prompting is a skillset in itself.

The problem as I see it is: how long is that skillset relevant before an even more advanced LLM comes along, forcing you to re-learn how to interact with it? Now consider that the next advancement need not come from an LLM, but from some entirely different system that doesn't work via prompts. That skilled prompter needs a new skillset.

Technological innovations of the past have always come slowly enough that people could learn and master them before something new came along. It seems like things are moving so quickly in the AI space that may no longer be true.

With all that said, an example that gives me hope is chess. The rise of chess AIs has not killed chess in any sense. The players of the last few years are actually much stronger than pre-AI. Can the best players in the world beat Stockfish or AlphaZero? Not even close. And so far that hasn't mattered. People still prefer to watch two human players.


It's a very important point that the insanely rapid pace of change is itself a huge problem. We're still in the same (northern hemisphere) school year in which ChatGPT was released; teachers and students just started trying to understand its impact; and the underlying model has already been replaced with something much more powerful --- TWICE --- GPT4 and "tool-using" mode.


We’re just starting the first “performance cycle” at work since ChatGPT came out and I strongly suspect next year’s won’t be based on producing so many written documents (self reflection, peer feedback, etc).


I was thinking about the chess analogy too, and I don't think it holds. Chess is a pastime; people do it for fun. Once AI has consumed all our jobs, sure, you can manually tap out some code for fun, but no one is going to pay you for that.


Don't get me wrong, LLMs are pretty breathtaking. In my opinion, just the way they parse natural language, "understand" it, and generate a meaningful response is absolutely impressive and would have been considered out of reach just a few years ago.

But I feel underwhelmed every time I see people coming up with examples of how LLMs are going to revolutionize the job market.

Prototypes for sites are nothing new; it's not as if you couldn't create a good-looking mock site in record time over the last 10 years. The rest of the article is also underwhelming: AI generating content for a marketing strategy isn't impressive, honestly, and I'd guess similar tools already existed.


Recently I used ChatGPT to “improve” the introduction of my game http://www.squashbyte.com. I wrote the initial introduction and asked ChatGPT to make it better. I speak good English, but I am not a native English speaker, so I was impressed by the outcome. I was also blown away by how ChatGPT answered some questions I asked after feeding the initial text to it. But now I wonder: should I keep using ChatGPT for things like this? Will it deteriorate my writing or thinking ability if I keep outsourcing it to AI?


Seems like the reliability of the output is a big factor. If it is highly reliable then you can learn from the changes and so on.


Mediocrity has never been so easy to achieve!


A task for me on Monday: justify why you just requested 3 laptops specced for 32GB RAM.

AI answer: More ram is better. Computer go faster with bigger files and less swap. Business needs demand. (Useless answer)

What the boss wants: I assess that software X requires 32GB to perform the task we need. We tried with 8, hard fail. We tried with 16 and it crashed every 30 min. And we haven't asked for new laptops in years, so you owe us.


This might be an issue with your prompt. Here's what I get for "Our software is crashing constantly when run on laptops with 8GB ram and every 30 minutes when run on laptops with 16Gb ram. Further, our laptops are multiple years old and require frequent maintenance. Write a brief email to my boss asking for approval to purchase 3 laptops with 32GB ram." It's a bit more formal-ish than I'd prefer but with a little tweaking it'd probably get the job done.

"Dear [Boss's name],

I hope this email finds you well. I am writing to request your approval for the purchase of 3 laptops with 32GB RAM.

As you are aware, we have been experiencing frequent software crashes, particularly on laptops with 8GB of RAM, and even on those with 16GB of RAM. This is affecting our productivity and causing delays in our work. Additionally, our current laptops are several years old and require frequent maintenance.

To address these issues and ensure that we can work efficiently and effectively, I believe it is essential that we invest in new laptops with increased RAM capacity. With 32GB of RAM, we will be able to run our software smoothly and reduce the frequency of crashes.

I have researched several options and identified three laptops that meet our requirements and are within our budget. I have attached the details and prices of each laptop for your review.

I would appreciate your prompt approval for this purchase so that we can continue to work without interruptions and complete our projects within the given timeline.

Thank you for your attention to this matter.

Best regards, [Your Name]"
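And if the tweaking gets repetitive, the same request works over the API, where a system message is the easy lever for the tone. A sketch with the Python client, reusing the prompt above verbatim (the model name is just a stand-in for whatever you have access to):

    import openai

    prompt = ("Our software is crashing constantly when run on laptops "
              "with 8GB ram and every 30 minutes when run on laptops "
              "with 16Gb ram. Further, our laptops are multiple years "
              "old and require frequent maintenance. Write a brief email "
              "to my boss asking for approval to purchase 3 laptops "
              "with 32GB ram.")

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The system message addresses the formality complaint:
            # ask for plain, direct workplace prose.
            {"role": "system",
             "content": "Write short, plain workplace emails. "
                        "No stiff boilerplate."},
            {"role": "user", "content": prompt},
        ],
    )
    print(resp["choices"][0]["message"]["content"])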


That answer would not fly in our organization (military). It reads like someone who isn't part of our organization, making little assumptions about our organizational structure. For instance, we would never "invest". We "acquire" equipment to meet needs. Likewise, nobody would ever talk of something impacting work. Everything is written in terms of operational requirements and missions. Even "approval" isn't appropriate. My boss will "support" my request that will go to another part of the organization for assessment and procurement. If I used language from an AI that did not understand our specific organizational mannerisms, it would be flagged as so odd that my boss would probably drop by my office for an explanation.

This is typical of generalized AI. Everything it writes sounds like it comes from a total stranger. It has no internal voice, no distinct style. It sounds like it comes from a child writing an essay ... which is basically what it is.


I found the main generated content, the campaigns, too plain and generic to be useful, and I doubt a better result could be pulled out without real effort, even with better models. The article does provide some inspiration about what an AI-enhanced workflow will look like, but it also shows the weakness: all the results look pretty generic and skeletal, and further polishing from pros is needed. IMO, the future will look more like everyone doing their own job better with the help of AI, rather than everyone doing all the work with AI and no one being needed.


Reading this, I'm drawn to the idea that if LLMs are so attractive to us as a productivity boost, and if they achieve that by synthesizing gobs of similar human output, then we have too often been asking humans to produce the same output as one another, and we're still doing it. Finally, a tool: I no longer have to be the automaton. That's a blessing and an indictment.

I don't know if I'll feel the same way tomorrow; just thought it was a perspective worth sharing.


The current state of AI is amazing and can be a time saver, no doubt.

With respect to the ongoing argument about the social and economic disruption of this “tool,” we need to remember this:

Over short periods of time, we “feel” like the slope of change is relatively flat. But it's not. The slope of change is getting steeper and steeper every day. Technological change and capability are exponential.

I have seen a crazy amount of change in my lifetime, and it’s accelerating.


Ethan has been an excellent source for keeping up with the advances in AI on his twitter account, through the lens of both academia (he's a professor) and "industry" (he professors in an MBA program).

I recommend him as a follow: https://twitter.com/emollick/status/1636454151272931337


Today I spent 2 hours cursing at GPT-4 for not being able to fix a stupid indexing mistake in the code it wrote. Just like coding before.

It's helpful, and I wouldn't have the energy to work on this hobby project without GPT. But for now at least, I eventually have to understand every line of non-trivial code.


Of course these aren't production-worthy artifacts. But what if the marketing team sent in their context plus the personalized context of the individual they are targeting, multiplied by millions of people? Custom, on-brand marketing. This is what LLMs are actually good at.
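A rough sketch of that loop, with made-up recipient records and brand notes (nothing here is a real pipeline, just the shape of one):

    import openai

    # Hypothetical CRM records; in practice these come from your own data.
    recipients = [
        {"name": "Ava", "role": "teacher", "interest": "space science"},
        {"name": "Sam", "role": "parent", "interest": "homeschooling"},
    ]

    BRAND_VOICE = ("Friendly and curious, never a hard sell. "
                   "Always mention the free trial.")

    # One generation call per recipient: shared brand context in the
    # system message, per-person context in the user message.
    for r in recipients:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "You write marketing emails. " + BRAND_VOICE},
                {"role": "user",
                 "content": (f"Write a short email to {r['name']}, a "
                             f"{r['role']} interested in {r['interest']}, "
                             f"about our product.")},
            ],
        )
        print(resp["choices"][0]["message"]["content"])

Whether anyone wants a million of those emails is another question, but the marginal cost per personalized variant really does drop to roughly one API call.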


> It then proceeded to create the four emails. without me doing anything.

I wonder how quickly we're going to start putting ML-generated email directly into the spam folder? I personally would like to have at least a marking of such content in my inbox.


Oh, the content arms race is tightening!

I have found myself visiting far fewer websites since I incorporated ChatGPT into my workflows. Why would I visit some random dev blog that promises me an answer when I don't need to? And now I don't have to be annoyed by that pesky popup asking me to subscribe.

It will be the same with marketing: it might be easier to create the content, but way fewer people will even see it.

It is going to be interesting to see whether there will be an implosion or more of a fizzle-out.

It is also going to be interesting to see how marketers will target LLMs.


How do you send the emails to willing recipients?

I doubt AI can find those recipients, and that's the most critical element.


This is a bunch of fluff. I am sold on the groundbreaking possibilities of AI but not impressed by this.


Well, there are going to be a lot of GeoCities-style websites coming back, as the GPT clients can certainly output valid HTML without any acquired taste when it comes to web design.

Regardless of the role... GPT as a helping tool, absolutely!

But as a replacement for the role... you're setting yourself up as a joke.


Yes, this is not particularly great aesthetically. But visually it is much better than what a lot of "full stack" (i.e. backend) developers can knock up, no matter how long you give them, as a lot of developers have no interest in making things look nice, or even usable, as long as they get to tick off another story point. It is definitely better than anyone without any development experience could produce. It is also better than what an experienced designer could knock up in 30 minutes. And it is rapidly improving. I genuinely think that five years from now almost all developers, front or backend, will be obsolete, in the same way that buggy whip makers are now.


Then what do we software engineers do, if we are obsolete? College diplomas are worthless outside of self-improvement. UBI isn't coming. On top of that, I'm in the country I live in through a working visa. If what you say is true… I see this on HN every day at this point, and it fills me with enough anxiety. As an engineer I like to think through different scenarios, but there is no good scenario with AI. The worst one is: I lose my visa, job, house, and at that point I lose 10+ years of my life. I've thought about just ending it at that point… talking to a therapist over that, obviously, but there are days when that dread is there. There is the thread about the 3D artist essentially replaced by AI. Due to my visa I can't legally work a blue-collar job that would be safe until a robot comes. I don't want to move back to a country I don't even know anymore. Death doesn't sound bad…


I feel you, but remember this: every technological upheaval has led to a higher quality of life (mostly).

Meaning, banging out code for a living might go extinct, but you might instead become assisted by the computer and focus on higher-level tasks like architecture design, communications and... probably, rewriting half of what the AI did.

When I started, being a web developer was NOT a thing! There were some classical graphic designers who came from print and started doing web stuff, and it was regarded as dirty work.

Even today a lot of system developers will look at web developers and say this isn't programming.

Things are in flux, always, so embrace the change, don't deny it, and trust that there will always be something you can add value to later on. The worst thing you can do is become a denier of change and lock yourself up.

That said, a certain common sense is required; jumping on the bandwagon and calling yourself a web3 crypto bro developer isn't going to do much for your resume. So sit tight, admire the show, and learn what you can.


None of us know how this is going to play out. I feel anxious too but let’s at least see what happens. Right now the world is still much the same as it was before LLMs arrived. Take some time to look at the flowers. They are still there.


I hope so... I really do. Looking at HN, and I understand nobody knows what the future holds, I think a lot of pain is going to come before we get to those flowers. For me, for other people... To me, the most disgusting comments I've seen on HN have been from SV people who say I should have saved up enough as a buffer. That isn't the case for those of us SWEs who live abroad. Not all of us make SV salaries. I could go on and on about this. I really just hope there is a light at the end of the tunnel for us in the short term, because if it is coming for our jobs in 5 years or less, many of us in society will be left to rot.


Life is always better than death. If things really do get bad, you won't be alone. Take solace in the fact that there will be a huge community of people in the same boat as you, with whom you can connect and work.


It's going to raise the bar for people on a lot of fronts. Also, people will get bored with bland, boring fluff pretty quickly. That's always the challenge for marketing people: how do you stand out from the crowd? The generated movie is actually pretty bland and boring. It's mainly impressive because it almost, but not quite, looks realistic. It's basically a talking head moving between five or so positions, with an expressionless, bland face, narrating some generic marketing pitch. It's not bad. But it's also not very good. It's marketing box-ticking ("we've got to have the talking-head thing on the website somewhere"), but it's not going to achieve any miracles in terms of engagement.

That's the problem with this stuff: it's formulaic, unimaginative, etc. Like a lot of real-world marketing. If you look at what companies actually do, it's mostly pretty low quality right now. For every well-run marketing campaign there are hundreds of poorly thought out, cringeworthy campaigns trying to imitate what the good ones do.

So, AI is going to run circles around that crowd, just like any competent marketing person would, except an AI will do it a lot cheaper. This is going to decimate the market for incompetent charlatans and create a new market for directors who can work the tools more effectively and cheaply.

That kind of is the point. These tools are really effective in the hands of a skilled professional who knows what to ask for and has a notion of what good and bad look like. It's an iterative process of asking and refining and directing that lets them take a lot of shortcuts.

Imagine Steven Spielberg directing a movie, but without the actors, camera people, post-production, makeup, lights, CGI, and all the rest. That is kind of what this could be. How would somebody like that use AI to produce a movie? Well, he'd be directing and refining and iterating, and getting results a lot quicker than is possible now. Maybe he'd raise the ambition level a little and ask for things that are really hard right now. But in the end he'd produce a movie that is hopefully very entertaining and interesting to watch.

Now imagine a young, inexperienced director with some vague ambition to become a better director. Would that person be able to produce something with the tools? Sure. And they'd learn a lot in the process. As you iterate, you better yourself; it's not a one-way street. The more you engage with some activity, the better you get at it. We'll have a lot of very skilled directors in a few years. And they won't just be directing movies.

And now imagine a very cynical third-rate director who produces straight-to-DVD content for the masses. No budget, hilariously bad scripts, actors who don't give a shit and can't act, etc. That guy is going to produce some amazing results. But there will be so much of it that it won't have any value.


Apparently Mr. Spielberg does exactly this: he seems to be on the advisory board of a company for generating movies from phone footage [1].

We will probably have tons of movies with Avatar-like graphics, unfortunately, also with Avatar-like story, since that's really the hard part: encapsulating emotion in the artifact.

[1] https://wonderdynamics.com/#team


The bigger question is: what can you train it to do in 30 minutes?



