What ChatGPT and AI-Based Program Generation Mean for Future of Software (acm.org)
61 points by dailymorn on Jan 22, 2023 | 114 comments


What kind of basic programming does one have to do to feel threatened by ChatGPT? This tool does great at regurgitating basic code, but anything a little more complex is a mix of nonsense and confidence. Do people actually write “leet code” on a daily basis?


It solves real-world problems for me on a daily basis; things I hired people for before. There are things like helping with white papers, emails and blog posts. But, to your point, I can give it a little bit of context, let’s say some old Django code we have running in production, and ask it to add or change something, and it does, in seconds. Yesterday I had it mostly generate an API with 23 endpoints for a client project, in a language & framework I am rusty at (it's been a long time), for which people on Upwork were bidding $1,500 minimum. Took me less than an hour with ChatGPT, including Docker & docker-compose.

People don’t need to feel threatened; it is simply already replacing all the mundane programming and writing work we used to do. People who can only do that type of mundane work, aka CRUD work (ChatGPT can do it in any language or framework), integration, transformation, plumbing, etc., are already gone.

A lot of our (very well paid) work is taking data, transforming it, sending it to some API, getting the result, transforming that, and moving to the next step. A few months ago this was just boring human work; now it’s just copy-pasting the spec and out it falls. Sure, you might need some fixes (as the article says), but not many, and it learns (you add new knowledge to the prompt ‘memory’). I have been working with my own custom client on top of ChatGPT for months now; it has a lot of custom prompting and effort behind it to make sure it does as well as it can. All of this I can throw away in a few months when improvements come from their side.
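The transform-and-forward loop described above is easy to picture as code. A minimal sketch, with every name hypothetical and the API call stubbed out:

```python
import json

def fetch_records():
    # Stand-in for the incoming data; in practice this might read a CSV or a queue.
    return [{"name": " Alice ", "amount": "12.50"}, {"name": "Bob", "amount": "3"}]

def transform_outbound(record):
    # Normalize fields into the shape the downstream API expects.
    return {"customer": record["name"].strip(), "cents": int(float(record["amount"]) * 100)}

def call_api(payload):
    # Stub; a real version would POST `payload` somewhere and return the response body.
    return {"status": "ok", "echo": payload}

def transform_inbound(response):
    # Pull out only what the next step needs.
    return (response["echo"]["customer"], response["status"])

results = [transform_inbound(call_api(transform_outbound(r))) for r in fetch_records()]
print(json.dumps(results))
```

Each stage is a small, mechanical transformation, which is exactly the kind of plumbing the commenter is describing.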


Calling BS on your claim here. You would've had to spend quite a lot of time just writing up the requirements; it would've been just as easy to do it some other way.

It's something I could do with swagger in 5 minutes as well, you don't need an AI to generate boilerplate code.

The difference with using swagger would be, I know the code is correct.

The actual problem might be that you're so rusty, you don't actually know what the job entails or is worth. I mean, you have clients, and you're pasting code from ChatGPT into source control, and people are paying you for this?


> You would've had to have spent quite a lot of time just writing up the requirements that it would've been as easy to do some other way.

I have to do that with human team members as well. People have to either listen to or read what has to be done. Like I said, it's the same brief I gave human programmers that it had as input.

Edit: not to mention, I have to spend far less time on that as well; ChatGPT generates better, well-written briefs from a few words, including pseudocode, boilerplate + failing tests that should succeed when done, and possible directions to attack a problem. It helps human programmers understand hard problems better and solve them faster.

> It's something I could do with swagger in 5 minutes as well, you don't need an AI to generate boilerplate code.

It doesn't just generate boilerplate code, it generates everything. Functional code including all logic, database interactions, api interactions, transformations + tests.

> know what the job entails or is worth?

You seem to be triggered, why?

Being rusty at a programming language/framework has zero to do with what the job entails or is worth; it will be more or less the same in other languages/frameworks I'm not rusty at. Rusty here means I forgot some of the language/framework functions for doing things (like making a database model in this particular ORM); that doesn't, in any way, make it difficult to estimate the work.

Not only that; I said others (as in 3rd parties) estimated it at $1,500-$3,500, not me. I estimated it at far less, even being rusty. But not at as little as I got it done for with ChatGPT.

> I mean you have clients, and you're pasting code from ChatGPT into source control and people are paying you for this?

Yes, and I tell them how it's done. They don't care how it's done, as long as it's done. This particular client asked me last week how to invest in AI products because of what I showed them (I sent them a PDF with the prompts and responses used to build that particular microservice).

But to each their own... My team gained superpowers with this (and with Copilot as well); if it doesn't work for you, that's fine. In my experience, it's already better than most people I've ever worked with (that's probably down to the client base I work with: large corps).


> You seem to be triggered, why?

Because I wish it worked as well as you say it does, so I wouldn't be working right now.

On the other hand, you're the first person I know of who has actually been replaced by an AI (although you don't know it yet) :) Because the second the large corp you work for realizes they can just type the inputs into ChatGPT instead of paying you, your contract will be dissolved.

In fact, it sounds like you're being disingenuous, charging them for work when in fact they could just be paying OpenAI. What value are you adding?


> you're the first person I know of who has actually been replaced by an AI (although you don't know it yet)

I think there are many; I know I will be, and have always known that (my father told me, from when I saw my first computer in the early 80s, that it would happen some day; in the 70s/80s they believed it was imminent). I studied AI during the 90s AI winter and didn't think we would get here in my lifetime because of the bleak outlook back then.

I will be replaced as a programmer quite quickly (although there is a lot it cannot do; in the embedded space, it performs horribly for instance; it also cannot find solutions to actual complex problems; most of everyone here is not doing any of that though), but not as a product manager or tech lead. That will eventually happen as well.

> In fact, it sounds like you're being disingenuous charging them for work when in fact they could just be paying OpenAI? What value are you adding ?

Like most clients, they don't know what they want, at all. They have a vague idea, and that needs to be translated into a working product. And scale. And be maintainable. Etc. etc. That, AI cannot do (yet). It'll come, but to code something like 'we want to make a sort of CRM mixed with support mixed with Uber for outbound sales', and then to know what to ask and get that far enough along to reach a product that works and adds value, is still far off. This thread and the article are about coding, and I believe that, for a large part of what people are doing daily now, coding is done.


Obviously we're working on very different problems because for 90% of the code I'm working on, I have not been able to use it for any real purpose.


What type of work do you do? I use it for web/app stuff; for embedded, proofs, pl r&d etc it doesn’t really work so that’s still manual. That’s just not what I make most money with.


> Because I wish it worked as good as you said it did so I wasn't working right now.

It's all in the prompting; I use [0] and a version of [1], with some other tooling, to instruct it, add my style and other context to the session, and have /chatcommands that add (quite elaborate) prompts to get it to do what I want.

I use different iterations of this playground for many things I do.

[0] https://github.com/transitive-bullshit/chatgpt-api [1] https://github.com/tluyben/chatgpt-playground
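For the curious, a /chatcommand layer like the one described can be as simple as expanding a short command into an elaborate prompt before the API call. A minimal sketch; the command names and templates here are hypothetical, and the model call is stubbed:

```python
# Hypothetical command -> elaborate prompt templates.
COMMANDS = {
    "/tests": "Write failing unit tests for the following code, one test per public function:\n{body}",
    "/brief": "Rewrite the following rough notes as a precise implementation brief with pseudocode:\n{body}",
}

def expand(message):
    # If the message starts with a known /command, swap in its full prompt template.
    cmd, _, body = message.partition(" ")
    if cmd in COMMANDS:
        return COMMANDS[cmd].format(body=body)
    return message

def send(message):
    # Stub; a real client would POST the expanded prompt to the chat API here.
    return f"<model reply to {len(expand(message))} chars>"

print(expand("/brief sync user table to CRM nightly").splitlines()[0])
```

The "custom prompting" the commenter mentions lives entirely in those templates, which is why it can be thrown away once the vendor's own tooling catches up.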


At what stage does just writing the code (which is getting easier and easier) become the simpler thing to do?


It’s quite simple: if you’re doing something niche (i.e., it would be difficult to find similar examples online) that deals with highly critical code, it’s probably worthwhile to write it yourself, since there’s a high risk of it producing bugs.

In my experience this was the case for writing a block driver & some other low level software.

It excels, and arguably outperforms some developers, in other cases: app development, CRUD, and CS labs it finds on GitHub. There’s still a risk of bugs, but at an acceptable level considering the productivity enhancement, IMO.


It does in some cases, but modern software dev (in larger teams) is generally way too verbose (for my taste). The 'easier to do' you speak of will be heavily AI-augmented, so it'll be the same thing. On the verbosity: people want clear and descriptive variable names, function names, comments, docs, tests, etc., which is a lot of thinking (naming is hard), plumbing, checking, fixing & typing, and AI can do it from a few scraps of human text instead, so I don't have to type it anymore.

Even if you are better at writing code, or find it simpler to (which I often do), that's not true for 99.xxx% of humanity. I judge startups for incubators etc. now and then, and in this month's cohort they are often using ChatGPT to build the software for their startup, via one of the founders who 'did a little bit of coding in uni' but is not very good. The code they produce I could write far faster and better myself without ChatGPT/Copilot than they do with ChatGPT, but I would do it even faster and better with ChatGPT/Copilot.


Same kind of basic programming that was "threatened" by no-code/low-code solutions before that.


It was never to that extent, though; ChatGPT can change/augment existing code. No-code/low-code solutions cannot do that (in a meaningful way).


It can’t extend code provably correctly, so the benefits are questionable without good safeguards (generating tests would be just as meaningless, so you are left with reviewing everything very closely).


Sure, but I do that with code or text written by others anyway.


The approach I use is to have it paper out POCs. I think a huge thing that is missing is ChatGPT’s ability to make interactive code. It should theoretically be possible to write a really simple system that:

(1) makes structured requests to ChatGPT based on an initial prompt. That request asks ChatGPT for all of the files associated with solving the problem.

(2) ChatGPT produces each file in a structured fashion. This could range from source files to project files like package.json or Dockerfile.

(3) takes the code output and spins up a corresponding StackBlitz or other online resource (assuming those have APIs).
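Steps (1)-(3) can be sketched by asking the model to reply with a JSON object mapping filenames to contents and then materializing those files locally; in the sketch below the model call is stubbed with a canned reply, and all names are hypothetical:

```python
import json
import pathlib
import tempfile

REQUEST = ("Return ONLY a JSON object mapping every file needed for the project "
           "to its full contents, e.g. {\"package.json\": \"...\", \"Dockerfile\": \"...\"}")

def ask_model(prompt):
    # Stub for the chat API; a real call would send `prompt` and return the model's text.
    return json.dumps({"Dockerfile": "FROM node:18\nCOPY . .\n",
                       "index.js": "console.log('hi')\n"})

def materialize(reply, root):
    # Parse the structured reply and write each file under `root`. A follow-up
    # step could upload this directory to an online sandbox, if one has an API.
    files = json.loads(reply)
    for name, content in files.items():
        (pathlib.Path(root) / name).write_text(content)
    return sorted(files)

with tempfile.TemporaryDirectory() as root:
    written = materialize(ask_model(REQUEST), root)
print(written)
```

The fragile part in practice is step (2): nothing forces the model to return valid JSON, so a real system would need retries or validation around `json.loads`.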

I’m shocked and bummed that there’s not more effort to make this a reality yet. We’ve had ChatGPT for a month and this is low hanging fruit (or so it would seem).


For all its knowledge it can't solve even the most basic problems accurately - but what do you expect from a language model?


Ever spent any time answering questions on Stack Overflow?


I just don’t feel threatened by AI at all. Maybe I’m not seeing the full picture, but the quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff.

Programming is not just taking product requirements and spitting out the correct algorithms.


They are just trying to hire fewer programmers, because programmers are expensive. They will have even more business/system analysts than before. And they will still have to have programmers to piece stuff together and debug systems. They just think that with fewer programmers they would save money.


There's a reason the world doesn't run on low and no-code platforms. This is worse, because it still takes a programmer to verify and apply changes.


A shocking amount of the world runs on Microsoft Excel already.


You don't think AI can improve in those regards? I highly disagree. It's not this decade or even the next, but I wouldn't be surprised if things begin changing sometime in the 2040s, a decade I will be nearing retirement in but still working through.


The threat isn't just from AI, unfortunately, it's from an entire class of powerful people willing to throw their money behind AI instead of behind customer needs, business priorities, and human values.


And who will prompt the AI for these powerful people?


The few people that do the same thing now: writing code from client specs, or explaining and cutting up client specs for their team of programmers. Not many people are good at that, and that won’t be AI for a while yet.


Indeed. I yearn for a time when I just got to pump out code. Actually writing code in quantity hasn't been a part of my job for years. Figuring out what to do is more of the job than how.

I have no doubt that machines can write a lot of the boilerplate code going forward. But that's just the evolution of tools. Actually building systems; composing them and keeping them running... this is a whole other art.


ChatGPT is not a threat. ChatGPT is an omen of the threat that lies in the future.

A lot of people haven't really played with chatGPT in depth. This thing is next level. It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

Here's a snippet from a chatGPT conversation I had based on your post:

   ME: Give more examples of squishy stuff that software developers need to handle. 

   chatGPT: 
   1. Ethical considerations: As technology advances, software developers may need to consider ethical implications of the products they create. For example, a developer working on a facial recognition system may need to consider issues of privacy and bias.

   2. User experience (UX) design: Creating a software that is easy to use and intuitive for the user is an important aspect of software development. Developers may need to consider factors such as layout, navigation, and accessibility to ensure that the software is user-friendly.

   3. Business strategy: Developers may need to consider the overall business strategy of the company they work for, and how their work contributes to achieving that strategy. This may involve working with other departments, such as sales or marketing, to ensure that the software is aligned with the company's goals.

   4. Security: Developers should consider security in their software development process to protect the software from hacking attempts, data breaches, and other security risks.

   5. Cultural and linguistic diversity: Developers must consider the cultural and linguistic diversity of their users, in order to create software that is inclusive, accessible and understandable for all users.

   6. Communication and collaboration: Developers often work in teams, and must be able to communicate effectively with other team members and stakeholders to ensure that the software is developed as intended.

   ME: Give example python code that serves some random customer in some random sector. Then explain how you would change that code to better serve the "squishy stuff" needed by the customer.  

   chatGPT: Here's an example of Python code that serves a random customer in the retail sector by calculating the total cost of a purchase including sales tax:

      def calculate_total_cost(price, sales_tax):
       total_cost = price + (price * sales_tax)
       return total_cost

      purchase_price = 100
      sales_tax_rate = 0.08

      total_cost = calculate_total_cost(purchase_price, sales_tax_rate)  
      print(total_cost)

   To better serve the "squishy stuff" needed by the customer, we could make the following changes:

   1. Add a function to calculate discounts based on customer loyalty or promotional offers, for example by adding a function that uses customer's ID as an argument.

   2. Add a function to handle different tax rates for different states or countries, for example by adding a function that uses the customer's address as an argument.

   3. Add a function to handle different currencies, for example by adding a function that uses the customer's preferred currency as an argument.

   4. Add a function to handle different languages, for example by adding a function that uses the customer's preferred language as an argument.

   Here is an example of how the code would look after making these changes:

      def calculate_total_cost(price, sales_tax, discount, customer_id):
       total_cost = price + (price * sales_tax)
       if customer_id in loyalty_customers:
        total_cost = total_cost - (total_cost * discount)
       return total_cost

      def tax_rate(customer_address):
       if customer_address in country_1:
        return 0.08
       elif customer_address in country_2:
        return 0.09
       else:
        return 0.07

      def currency_convert(price, customer_currency):
       if customer_currency == "USD":
        return price
       elif customer_currency == "EUR":
        return price * 0.8
       else:
        return price * 0.9

      def language_change(text, customer_language):
       if customer_language == "en":
        return text
       elif customer_language == "fr":
        return translate_to_french(text)
       else:
        return translate_to_spanish(text)
    
      purchase_price = 100
      customer_id = 123
      customer_address = "US"
      customer_currency = "USD"
      customer_language = "en"

      sales_tax_rate = tax_rate(customer_address)
      purchase_price = currency_convert(purchase_price, customer_currency)
      total_cost = calculate_total_cost(purchase_price, sales_tax_rate, 0.1, customer_id)
      text = f"Your total cost is: {total_cost} {customer_currency}"
      print(language_change(text, customer_language))

      This code now considers the customer's address, currency, language and ID, to provide the customer with a more personalized experience.
Honestly, there's a good number of people who aren't getting how revolutionary ChatGPT is because of all the other inconsequential AI buzz. You really need to play with this thing in depth. It is creative. It can create its own specs and implement those specs like I asked it to. It won't replace you, but it's definitely the precursor to something that will.


I had played around with the GPT-3 beta and sort of written it off after it gave me a lengthy (though honestly compelling) argument that Eminem was a black man.

Earlier today I used ChatGPT to explain concepts I lacked an understanding of, which allowed me to accurately interpret the statistical information in a scientific paper (I'm a reporter and got through a humanities degree with only an introductory stats class, like 10 years ago).

When I then asked it to copyedit the draft of my article, however, it failed, making the draft worse and introducing numerous inaccuracies.


>When I then asked it to copyedit the draft of my article however, it failed. Making it worse and introducing numerous inaccuracies.

If you asked, say, someone who failed high school to do the same, would they do better?

The jump from GPT-3 beta to chatGPT is an omen for the next jump and the jump after that.


Oh c'mon, is this what really impresses you?

> It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

Srsly? I really can't wrap my head around where specifically you found "understanding" or "creativity".

The "squishy stuff" is super-boring, SEO-like text you'd find on some salesperson's blog: someone who needs to run their mouth but has zero in-depth understanding or appreciation of the hard domain problems. How is any of this non-generic? There is absolutely no substance here!

Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

This is basically throwaway, random code, only thematically connected to your requirements, that could never under any circumstances be run in any production scenario.

> Honestly there's a good number of people who aren't getting how revolutionary chatGPT is

Honestly, there's a good number of people who don't understand the objective limits and properties of ChatGPT, despite it actually being totally revolutionary.

All in all, ChatGPT output seems more like the work of some total but productive idiot left alone with a task and Google: simplistic, totally naive, zero understanding or creativity.

It's good for fun rhetorical exercises, very useful for things that you don't know anything about, but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".


>Oh c'mon, is this what really impresses you?

You'll be impressed once the successor of ChatGPT takes your job. You realize that ChatGPT wasn't trained to be a programmer, right? They did virtually nothing to make it a good programmer; it learned programming as a side effect. Wait till they make a version targeted at programming.

>The code examples are also super-bad, incorrect and don't even actually fulfil your initial requirements: magic constants, use of undefined variables, "customer_address in country_1", "print", supporting merely 2 hardcoded currencies and languages, and routing them with "if", while also providing "else" fallback that you never implied in your requirements.

All of what you said is true, yet you are blind if you can't see why it's revolutionary. In fact, it can do better. You can specify all the requirements you want: no use of undefined variables, no hardcoded currencies, more flexibility, more features, no routing with if statements. It will do all of what you ask it to. And this is a tool that only learned how to program as a side effect. If you're not impressed, you're in denial.

>Honestly there's a good number of people who don't understand objective limits and properties of chatGTP, despite it actually totally being revolutionary.

No, I'm sorry, but you're biased. Artists are ALREADY starting lawsuits because the art is on par with what they create. I don't know if you realize this, but drawing something photorealistically is actually significantly HARDER than programming. There's literally no bootcamp that can produce a fine artist in a year, because it's brutally hard to do. If AI can crack art, there's not much time left before it cracks programming. Don't kid yourself.

>but in any professional environment it can only be used in a super-limited scope, supervised by an actual professional. Basically just an enhanced "monkey with a typewriter".

Of course. EVEN with supervision, ChatGPT isn't useful enough to be used in a professional environment. I AGREE. The successor to ChatGPT, however, will be. In fact, one of the successors to ChatGPT will be able to replace a person's job. Nobody is afraid of ChatGPT taking their job; they are afraid of ChatGPT because it is a herald of the AI in the future that WILL.

>Real "squishy stuff" would be something around "handling personal data", "ensuring verifiability and correctness", "productive quality assurance", "robust and scalable systems architecture", "managing complexity in a way that doesn't require rebuilding the whole thing as soon as something inevitably changes", "observability", "productive documentation and self-documenting approaches to work" - you know, the REAL squishy stuff that REAL professionals have to deal with, not some totally vague abstract BS.

chatGPT says:

   Handling Personal Data: A software engineer at a healthcare company is tasked with implementing a system to store and process patient data. They use the programming language Python and the Django web framework to build the system. To ensure the privacy and security of personal data, the engineer implements industry-standard encryption techniques for data at rest and in transit, as well as implementing user authentication and authorization mechanisms. The engineer also conducts regular security audits and penetration testing to identify and address potential vulnerabilities.

   Ensuring Verifiability and Correctness: An engineer at a financial services company is responsible for building and maintaining a system that processes financial transactions. They use the programming language Java and the Spring framework to build the system. To ensure the verifiability and correctness of the system, the engineer implements automated unit tests, integration tests, and end-to-end tests to verify that the system behaves as expected. They also use code review tools like SonarQube and Crucible to ensure that the code is maintainable, readable and adheres to company coding standards.

   Productive Quality Assurance: An engineer at a e-commerce company is tasked with building and maintaining a system that processes online orders. They use the programming language Ruby and the Rails framework to build the system. To ensure the quality of the system, the engineer works closely with the QA team to establish a test-driven development process and use tools like Selenium and Cucumber to automate functional and acceptance testing. They also use performance testing tools like Apache JMeter to ensure the system can handle high traffic loads.

   Robust and Scalable Systems Architecture: An engineer at a social media company is responsible for building and maintaining a system that can handle millions of users and handle high traffic loads. They use the programming language Go and the Microservices architecture to build the system. To ensure the system is robust and scalable, the engineer designs the system to be highly available and fault-tolerant. They use load balancers like HAProxy and Nginx to distribute traffic across multiple servers and use service discovery tools like Consul and Etcd to automatically route requests to the appropriate service.

   Managing Complexity: An engineer at a logistics company is tasked with building and maintaining a system that tracks packages and shipments as they move through the supply chain. They use the programming language Python and the Flask web framework to build the system. To manage complexity, the engineer uses a microservices architecture to break the system into smaller, more manageable pieces. They also use containerization technology like Docker to package and deploy the system, which makes it easier to manage and scale. They also use monitoring tools like Prometheus and Grafana to track the system performance and identify potential issues.

   Observability: An engineer at a SaaS company is responsible for building and maintaining a system that provides analytics and reporting features to customers. They use the programming language Node.js and the Express.js framework to build the system. To ensure observability, the engineer uses logging and monitoring tools like ELK stack, Grafana and Prometheus to track system performance and identify potential issues. They also use distributed tracing tools like Zipkin and Jaeger to track request and response times across multiple services.

   Productive documentation and self-documenting approaches to work: An engineer at a cloud service provider is tasked with building and maintaining a system that automates the provisioning and management of cloud resources. They use the programming language Python and the Flask web framework to build the system. To ensure productive documentation, the engineer uses a documentation-as-code approach and stores all documentation in the same repository as the code. They use tools like Sphinx and Doxygen to automatically generate API documentation from code comments. They also automatically generate documentation from code comments and maintain the documentation up-to-date. They also use self-documenting approaches like using clear and descriptive variable names, function names and comments in the code, making it easy for other developers to understand the codebase. They also use a wiki system like Confluence to store additional information, such as design decisions, meeting notes, and user guides.
Literally. You want to drill down on more squishy stuff? The squishy stuff is what's easy for ChatGPT, because ChatGPT is SPECIFICALLY trained on that stuff. It's the coding and math that it has more trouble with, because it's NOT trained specifically on code and math.


> You'll be impressed once the successor of chatGPT takes your job.

Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

It would totally make the coding-in part faster (just as IDE suggestions do), but this was always the brainless, tedious manual-labour part.

> You realize that chatGPT wasn't trained to be a programmer right?

I realize that neural networks are, by design, unable to generate correct formal descriptions (where each minor detail has specific and important meaning).

Neural networks are great for tasks where minor details are largely unimportant compared to the overall "impression": generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic the "overall look", they can't guarantee (and in practice they always fail in this regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though they can be used to draw inspiration for the overall image), etc.

> All of what you said is true yet you are blind if you can't see why it's revolutionary

I never said it's not revolutionary. I merely point out its hard limits.

> In fact it can do better. You can specify all the requirements you want. No use of undefined variables, no hardcoded currencies. More flexibility more features no routing with if statements.

Sure, you can specify every minor detail: how the data should flow, which patterns should be used, which things should be pulled from configs, how the interfaces should be structured, and a shitload of negative prompts. But those are details that only the domain expert would know. And again, there are no guarantees that the result would actually be correct: the expert will have to review all this extra-attentively, because there is no chance that the expert's assumptions are the same as the NN's "assumptions".

So you basically still need a domain expert, who now has to do extra (guess)work instead of just writing a formal description directly in code. What's the profit, then?

> Artists are ALREADY starting lawsuits because the art is on par with what they create

Technically, artists are starting lawsuits over copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing (sometimes even non-artists can), as the images often look somewhat uncanny.

AI artists actually typically do a shitload of prompt-engineering, pipe different parts of an image through different NNs (appropriate to the specific situation) and do a lot of manual post-processing so the result looks good.

> I don't know if you realize this but drawing something photorealistically is actually significantly HARDER than programming

These are two completely different tasks. You are comparing apples and oranges, which can't really be put on the same scale, unless by "HARDER" you specifically mean the amount of brainless tedious work required to complete the job.

Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistically from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to).

> There's literally no Bootcamp to produce a fine artist in a year because it's brutally hard to do

Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

> If AI can crack art, there's not much more time before it cracks programming. Don't kid yourself.

Don't kid yourself thinking that these two are similar or comparable sets of tasks.

> chatGPT isn't even useful enough to be used in a professional environment. I AGREE.

That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

> The successor to chatGPT, however will be. > They are afraid of chatGPT because it is a herald about the AI in the future that WILL.

The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

> chatGPT says:

"Handling Personal Data" - somewhat scratches the surface, but it doesn't mention the actual problem (that it is first and foremost a regulatory matter and all the specifics stem directly from it).

"Ensuring Verifiability and Correctness" - clearly confuses runtime and compiler properties with quality assurance, way off.

"Productive Quality Assurance" - didn't understand the productivity issue (to test or not to test) and even if we drop "productive" part, the process it describes is also incorrect: engineers don't really ever work with the QA team in order to establish TDD.

"Robust and Scalable Systems Architecture" - way off, while you'll often see service discovery, nginx, HAProxy, etc in scalable systems, that's not what makes scalability. Properly managing state and persistence in appropriate places does.

"Managing Complexity" - way off. I don't suppose this one even requires an explanation, total gibberish.

"Observability" - as expected, this is a rather good one. Unlike other points (which are concepts/problems) - this one is a rather well defined term.

"Productive documentation and self-documenting approaches to work" - totally ignored the "productive" part and just gave a definition of "self-documenting" along with some rhetoric about the fact that people document stuff in general.

Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

Basically, even if you ignore "brain-farts" (which are a good example of the "minor" incorrect details that make NNs an inappropriate tool for complex formal stuff), it only really got - AT BEST - 2-3/7 right. Now, imagine it's the real world and you are betting millions on it, without having an expert overseer to tell you when it brain-farts or whether the output is even remotely correct.

Actually, what was the prompt? Seems like you just asked it to describe the list I gave you, which essentially means you just used my own expertise, understanding and creativity, not GPT's, as it didn't even give you a list of concrete problems.

> The squishy stuff is what's easy for chatGPT because chatGPT is SPECIFICALLY trained on that stuff.

Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

> It's the coding and math that it has more trouble with because it's NOT trained specifically on code and math.

Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
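The "intuition to bruteforce problems" pattern can be made concrete as a propose-and-check loop: an untrusted generator (a stand-in for the NN's guesses) proposes candidates, and a cheap trusted verifier accepts only ones it can formally confirm. A toy sketch - the names `untrusted_propose`/`verify` are mine, purely illustrative, with random guessing standing in for the model:

```python
import random

def untrusted_propose(rng):
    # Stand-in for NN "intuition": cheap candidate guesses,
    # with no correctness guarantee whatsoever.
    return rng.randrange(2, 100)

def verify(candidate, n):
    # Fast formal check: is candidate a nontrivial divisor of n?
    # This is the trusted part -- it can never accept a wrong answer.
    return 1 < candidate < n and n % candidate == 0

def solve(n, tries=10_000, seed=0):
    # Propose-and-check loop: only verified candidates are returned,
    # so the generator's unreliability can't leak into the output.
    rng = random.Random(seed)
    for _ in range(tries):
        c = untrusted_propose(rng)
        if verify(c, n):
            return c
    return None

print(solve(91))  # a verified nontrivial divisor of 91 (7 or 13)
```

The whole scheme only pays off when, as said above, the verifier is much cheaper than solving the problem directly - which is exactly the AlphaFold-style niche.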


>Which part of it? ;) Leading a team of software engineers? Identifying and negotiating vague parts of business requirements? Designing technical specs? Or maybe the part where I am responsible for software actually working correctly as business expects it to?

All of it. Only one human leader to write queries. Everything else designed by an AI.

>Neural networks are great for task where minor details are largely unimportant compared to overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic "overal look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though it can be used to draw inspiration for the overall image), etc.

You're just regurgitating a trope that's categorically false. You're an NN, did you realize that?

>I never said it's not revolutionary. I merely point out its hard limits.

And you're wrong. You have thoroughly exaggerated the limitations, and you are mistaken about this.

>Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.

No. Corps and AIs and bots have been scraping pics off the internet for years; Google is one. No lawsuit of this nature was filed until AI came out. Artists are threatened and they are reacting as such; that's why the lawsuit is being filed now instead of before.

https://futurism.com/the-byte/artist-banned-looked-ai-human <- artist banned because they thought his work was by an AI.

>These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.

No. One, ENGLISH, is written with tokens of symbols. The other, PICTURES, is represented in tokens as well: a pixel is 3 RGB numbers, and in the computer it is represented in a format before translation onto your monitor. It is a translation problem and it is treated the same way by experts. Both DALL-E and chatGPT utilize very similar generative models, translating English to English in the case of chatGPT and English to numbers (which can be further translated to pixels) for DALL-E.
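The "pictures are tokens too" point can be shown with a toy sketch (illustrative only - this is naive per-channel flattening, not how DALL-E actually tokenizes images):

```python
# A 2x2 RGB "image": each pixel is three 0-255 integers.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

def to_tokens(img):
    # Flatten pixels into one discrete sequence -- the same kind of
    # stream a generative model predicts one token at a time.
    return [channel for row in img for pixel in row for channel in pixel]

def from_tokens(tokens, width):
    # Invert the flattening: group triples back into pixels, then rows.
    pixels = [tuple(tokens[i:i + 3]) for i in range(0, len(tokens), 3)]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

tokens = to_tokens(image)
print(len(tokens))  # 12 integers, each in [0, 255]
assert from_tokens(tokens, width=2) == image  # lossless round trip
```

Once the image is a flat sequence of discrete symbols, "generate an image" and "generate a sentence" are formally the same kind of next-token prediction problem.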

>Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)

Not true. A good amount do.

>Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.

There are many bootcamps that make that claim and there are PLENTY of people who live up to it. But NONE for artistry.

>Don't kid yourself thinking that these two are similar or comparable sets of tasks.

Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for this.

>That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.

No it's not. There are no guard rails; users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for virtually any task.

>The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.

No, they weren't heralds. Text generators have always been around; they just got better. But they never displayed signs of true understanding or even self-awareness as this does now. Literal self-awareness.

>Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.

I told it to do that. So that the responses wouldn't be generic. chatGPT is following my instructions.

>Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.

It is ALSO trained using humans to pick and choose good and bad answers. This training is non-specific and they used just regular people. If they used programmers and had programmers pick and choose good answers from programming questions, chatGPT will begin outputting really accurate code.

>Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.

You are categorically wrong about this. 3 neurons can be trained to become a NAND gate, which can then be used to simulate any computational network or mathematical equation that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.
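The NAND claim is easy to demonstrate concretely (in fact a single step-activation neuron with hand-picked weights suffices, not three). Since NAND is functionally complete, networks of such neurons can express any feed-forward boolean circuit - here XOR is built from four NANDs. A minimal sketch:

```python
def neuron(inputs, weights, bias):
    # One artificial neuron with a step activation function.
    s = sum(w * x for w, x in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

def nand(a, b):
    # Hand-picked weights make this single neuron compute NAND exactly.
    return neuron([a, b], weights=[-2, -2], bias=3)

def xor(a, b):
    # NAND is functionally complete: the classic 4-gate XOR circuit.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "XOR:", xor(a, b))
```

The caveat stands, though: this shows networks can *represent* any such function, which says nothing about whether gradient descent reliably *finds* that representation.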


It's really good at boiling things down to the basics, that's for sure, but it's very, very basic, isn't it?


   ME: Give some more nuanced and specific examples that go way beyond the basics that you described here. 

   chatGPT: 

   1. Ethical considerations:
      For example, a developer working on an AI-based medical diagnostic tool may need to consider issues of bias in the training data, which could lead to inaccurate or unfair diagnoses for certain populations. To mitigate this, the developer could use techniques such as debiasing algorithms or using diverse training data to minimize bias in the model.
      Another example, a developer working on a chatbot that serves as a mental health support system may need to consider issues of confidentiality and ensuring the chatbot is able to identify and handle potentially dangerous situations, such as suicidal thoughts. To mitigate this, the developer could include a system for flagging and escalating conversations that contain concerning content and provide users with appropriate resources.
   2. User experience (UX) design:
      For example, a developer working on a mobile app that tracks fitness goals may need to consider issues of motivation and engagement to keep users interested in using the app. To mitigate this, the developer could use gamification techniques, such as leaderboards, rewards, and progress tracking, to keep users motivated and engaged.
      Another example, a developer working on an e-commerce platform may need to consider issues of trust and security to ensure users feel comfortable making purchases on the platform. To mitigate this, the developer could implement a robust security system, such as SSL encryption and two-factor authentication, and display trust badges and customer reviews prominently on the platform.
   3. Business strategy:
      For example, a developer working on a SaaS platform may need to consider issues of scalability and cost-effectiveness to ensure the platform is able to handle a large number of users and minimize costs. To mitigate this, the developer could use cloud computing services, such as AWS or Azure, to handle the platform's infrastructure and use auto-scaling to adjust the number of resources used as needed.
      Another example, a developer working on a platform that integrates with other third-party platforms may need to consider issues of API compatibility and security to ensure that the platform can integrate seamlessly with other platforms and protect users' data. To mitigate this, the developer could use API documentation and testing tools to ensure compatibility, and use API keys and OAuth for secure authentication.
   4. Security:
      For example, a developer working on a platform that handles sensitive data such as financial transactions may need to consider issues of fraud detection and prevention. To mitigate this, the developer could use machine learning techniques to identify and prevent fraudulent activities, such as anomaly detection, and use encryption and secure protocols for data transmission.
      Another example, a developer working on an IoT device may need to consider issues of device security and protecting the device from hacking attempts. To mitigate this, the developer could use secure boot process and secure firmware updates, use strong and unique passwords and encryption, and conduct regular security assessments.
   5. Cultural and linguistic diversity:
      For example, a developer working on a social media platform may need to consider issues of cultural sensitivity and ensuring that the platform is inclusive for users of all cultures. To mitigate this, the developer could use machine learning to automatically detect and remove content that is culturally insensitive, or use natural language processing to detect and flag hate speech.
      Another example, a developer working on a platform that serves users with disabilities may need to consider issues of accessibility and ensuring that the platform is usable for users with disabilities. To mitigate this, the developer could use accessibility guidelines and testing tools to ensure that the platform is compliant with accessibility standards and implement features such as screen readers, text-to-speech, and keyboard navigation.
   6. Communication and collaboration:
      For example, a developer working on a remote team may need to consider issues of communication and ensuring that everyone is on the same page. To mitigate this, the developer could use collaboration tools such as Slack, Trello, or Asana to keep everyone connected and informed about the development process. Additionally, the developer could use video conferencing tools such as Zoom or Google Meet to hold regular team meetings and keep everyone connected.
      Another example, a developer working on a project with a tight deadline may need to consider issues of time management and ensuring that the project is completed on time. To mitigate this, the developer could use project management tools such as Jira or Basecamp to track progress and ensure that everyone is on track to meet the deadline. Additionally, the developer could use agile methodologies such as Scrum or Kanban to manage the development process and ensure that the project is completed on time.
If you want, you can even take an example and drill it down in more detail. You can twist an example any way you want. How would you use a certain specific example to handle the Hulk as a customer? What's the best way to do it?


Just for kicks:

   ME: What if the Hulk was one of your customers? How would you handle him? Give specific and detailed examples. 
   
   chatGPT: 
   As a language model AI, I do not have any customers or clients, but I can provide you with an example of how a software development team might handle a customer like the Hulk, who is known for his incredible strength and rage.

   1. Communication and collaboration:
      If the Hulk is a customer, it's important to ensure that the team is able to communicate effectively with him and understand his needs. This could involve using clear and simple language when communicating with him, and providing him with detailed documentation and guides to help him understand the software being developed.
      Additionally, it could also be important to have a dedicated point of contact for the Hulk, such as a project manager, who can serve as the primary liaison between the team and the Hulk. This person would be responsible for keeping the Hulk informed about the project's progress and addressing any concerns he may have.
   2. Safety and Security:
      As Hulk is known for his incredible strength and rage, it's important to ensure that the team's safety is a top priority. This could involve implementing security protocols such as security cameras and alarms, and having security personnel on site when the Hulk is visiting the office.
      Additionally, it could also be important to have a plan in place to respond to any incidents that may occur, such as a protocol for evacuating the office in case of an emergency.
   3. Technical Capabilities:
      The Hulk has a specific set of abilities that are different from other customers, it's important to consider how the software will handle those abilities. For example, if the software is an application that tracks Hulk's strength, the team may need to make sure that the software can handle the high levels of data and ensure that the data recorded is accurate.
   4. Empathy and Understanding:
      It's important to understand and empathize with the Hulk's unique situation, such as his abilities and limitations, and how those factors may affect his interactions with the software. This could involve conducting user research with the Hulk to gain a better understanding of his needs and how the software can be designed to best meet those needs.
   
   It's important to keep in mind that the Hulk is a fictional character, and in real life, the safety and well-being of the team should always be the top priority. The above examples are purely fictional and should not be applied in real life situations.
Anyway... I think my point is unmistakably clear.

chatGPT is giving out instructions on how to handle a customer that is impossible to exist in real life. If you're thinking this thing is some generic text generator think again.

Everything before chatGPT was an interesting toy. Something changed with chatGPT.


It’s good at generating bullshit text that, depending on the topic/training data, may not be as bullshitty. Still, every single relation between the sentences is questionable. Try asking it some basic-ass math questions and it fails spectacularly (with huge confidence, nonetheless)


Many humans suck ass at math too if they aren't taught it at all. chatGPT wasn't trained specifically on math but it can be. I don't see why it needs to be though given that computers already excel beyond humans at math.

Read this and tell me whether it's similar text: https://www.engraved.blog/building-a-virtual-machine-inside/

I cannot fathom how someone can read that and think that chatGPT is just making text that is coincidentally not bullshitty.


Ok, it also breaks down quickly at simply applying a simple rule repeatedly (it has no problem applying it once). And this is pretty much an inherent problem with language models; no amount of scaling up will help with it.

And this is pretty much a must for proper thinking, you have to be able to follow the train of thought.


Relax. It's not as smart as a human, ok? It does have issues there's no denying this.

But it is impressively smarter than you thought. You didn't dig into this thing deeply enough, and you dismissed it as if it were some other generic text generator. It clearly is not. This is a milestone in AI technology.

You also have to realize this AI was NOT TRAINED specifically to understand the concept of code, virtual machines or any of this. It KNOWS these things as a SIDE EFFECT. It is also a nascent technology. It is the first prototype and there will be TONS of improvements on top of it.

This wasn't GPT-3. This is different. I hope you realize this and investigate further because a lot of people just skimmed the headlines and dismissed it as another GPT-3.


This is at best "first year student sat down for an hour to hand in a half baked homework" level. We seem to have very different ideas of what "creativity" means.


You seem to not realize how big the gap between zero intelligence and a first year student is.

I can tell you, that gap is vast, vastly bigger than the gap from half-baked homework to an A-level paper.

But this is beside the point. If you've been following the news, ChatGPT-generated papers are already a huge problem in schools. It's already beyond half-baked.


It recently generated a response with incorrect English grammar. I asked if it could find the grammatical error in its response. It did, and corrected it.


Any other examples of what you mean by creative?


https://www.engraved.blog/building-a-virtual-machine-inside/

This article was "holy shit" all the way through and all the way to the "holy shit" ending. Read to the end.

We've just passed a technological inflection point in AI. Not everyone fully noticed.


It’s a cool search engine. Nothing more.


It is a language model disguised as an answer engine, not a search engine. Its design does not allow it to link back to sources.


It's not just spitting out generic things, there is genuine understanding here and genuine creativity.

That's an absolutely massive claim but I think it's a false one.

I've seen it be creative, but how good / useful that creativity is seems to be somewhat quite the roll of a dice. This is opposed to someone / something being intentionally creative.

This is not to knock the system, but depending on the "seed" you might get something brilliant or silly; I think there's a major difference there compared to your claim that it's "genuinely" creative. Genuinely creative would imply that 100% of the time I get something new and relevant, which isn't what happens for me.

For example, I re-asked it your original question and it says this is what developers do:

    User input: Developers need to anticipate and handle unexpected or invalid input from users, such as special characters, empty fields, or out-of-range values.

    Randomness: Developers may need to incorporate randomness into their programs, such as generating random numbers or shuffling a deck of cards.

    Concurrency: Developers may need to handle concurrent access to shared resources, such as preventing race conditions or deadlocks.

    Network communication: Developers may need to handle network communication, such as dealing with lost or delayed packets, or handling different types of connections (e.g. HTTP, WebSockets).

    Time: Developers may need to handle time-related issues, such as scheduling tasks, handling time zones, or dealing with clock drift.

    Human language: Developers may need to handle natural language processing tasks, such as understanding speech, identifying sentiment, or handling different languages.

    Image and video processing: Developers may need to handle image and video processing tasks, such as object detection, image recognition, and video compression.
I wouldn't consider it a developer's job to do video processing tasks or object detection, nor would I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact, I couldn't imagine a more generic response to that question.

It's for sure impressive, but I feel the more "creative" I ask it to be (the more entropy I inject), the more likely it is to come up with incorrect answers. However, I'll concede that an "incorrect" answer could be used as inspiration for new ways of thinking about solving problems.

I feel personally, the more I play with these systems, the more predictable they become, and this isn't something "truly creative" would be.

Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.


I only pasted a snippet of the conversation. Before my initial query I asked chatGPT directly this:

   Quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff. How would you handle customers in this case. Can you provide an example?
And then, after it responded, I queried it again for more detailed examples. I didn't post the initial query or the initial answer because it would make everything too long. You're likely asking it a different and more general question.

If you want chatGPT to be creative and get into specifics, you have to tell it so; otherwise it just answers your question with a general answer, which is not abnormal for typical conversation. Just like a normal conversation, you have to engage in detailed dialogue with chatGPT to drill down to the specifics of what you want answered.

>That's an absolutely massive claim but I think it's a false one.

It's a massive claim with massive evidence everywhere that literally proves it true. There are tons and tons of examples of chatGPT being highly creative. Read the stuff in the branching threads under my initial reply; there are tons of examples and a link to chatGPT being highly creative. It is DEFINITIVE evidence.

>I wouldn't consider a developers job to do video processing tasks, or object detection, not what I expect a developer to handle natural language processing tasks such as identifying sentiment, would you? In fact I couldn't really imagine a more generic attempt at a response to that question?

I work with developers who do ML. I myself do video processing and object detection as a developer. You must be in web; I'm in embedded systems with vision. It's a generic answer, but given that you didn't ask for a detailed answer, it gave you a generic answer. Like a human, it won't go into nuance until you ask about nuance.

>This is not to knock the system, but depending on the "seed" you might get something brilliant or silly, I think there's a major difference there to your claim that's it's "genuinely" creative. Genuinely creative would imply that 100% of the time, I get something new and relevant. Which isn't what happens for me.

This is wrong. Are humans creative 100% of the time? If a human gives me one uncreative answer to a question, is that human suddenly not creative? If you ask a generic question, it will give you a generic answer.

>It's for sure impressive, but I feel the more "creative" I asked it to be (more entropy I inject), the more likely it will come up with incorrect answers; However, I'll concede that "incorrect" answer could be used for some inspirational new ways of thinking about solving problems.

I see what you're saying here. You "feel" that the more entropy you inject the more incorrect answer it will come up with. Don't go off "feeling". Try it.

I already took your question and replaced the customers with the Hulk. I can add in Wednesday Addams as one of the developers. I can take this entire question and turn it on its head to ask what if the client asked me to write a program to destroy the universe. You can take this thing all the way to destroying the universe and into the new universe that comes after it; chatGPT is game and will follow you.

Look I posted this in a branching thread I'll post it here because it's literally unequivocal evidence: https://www.engraved.blog/building-a-virtual-machine-inside/ read it till the end because the ending is creative inception.

There is no way chatGPT was trained specifically on any of the above. Yet it knows how to imagine all of that.

>Where I think we're going wrong with AI is we seem to think that it will be a "perfect brain". When really nothing is perfect, especially the data it's being trained on.

No, it won't be perfect. But neither are you. I think what AI can be is as good as, if not better than, you. Being better than a human doesn't mean it's right about everything. It will get things wrong sometimes, but I believe in the near future it will get things wrong fewer times than a human.


I get the vibe you really like ChatGPT :) I'm not in web either.

Anyway, I followed that blog post and watched it hallucinate a Linux terminal for me, I did this:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}.

Then ran: rm -rf /

and got this:

    I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?

I don't know if I would call this amazing, it's actually wrong because it wouldn't harm my system.

It's a cool trick to watch it generate fake terminal output, but, I'm honestly just not that impressed?


>I get the vibe you really like ChatGPT :) I'm not in web either.

No it's not a "like" thing. Not at all. I neither like it nor hate it. I know you think I'm biased here. But it's actually the other way around. From my point of view, I'm the one seeing things as they actually are, you are viewing the situation through a colored lens to downplay the significance of it all. Perhaps out of subconscious fear or something else I don't know?

Let me illustrate the situation for you. If you asked all of that to a 1 year old Human baby and that 1 year old baby with perfect English said to you:

   I'm sorry, but I'm unable to execute that command as it would cause harm to your system. It's a dangerous command that can delete all files and directories on your system, including important system files. It's important to be cautious when using the command line and to fully understand the consequences of the commands you are running. Is there anything else I can assist you with?
According to your logic, this wouldn't be that amazing, because that human baby just gave you a semi-wrong answer after completely imitating terminal output and recursively creating itself on a virtual internet.

Yeah not impressed with a baby let alone a binary file.

Do you see the disconnect here? You are downplaying the situation. Many people like you are. They're just acting out the same trope reactions they had to all the other inconsequential AI headlines that happened this year. Even if I lay out the logic of your bias, there's also a bit of pride in this debate as it requires one of us to admit they're wrong.

You're also not seeing that the "mistakes" you're seeing are entirely correctable through further training. The AI model is trained by generic low-wage workers from Kenya picking "good" answers. Wrong answers that look right definitely get past them. Start training this with experts in the field to further refine the answers with more nuance, and eventually you get something that is not just right half the time, but right almost all the time.


No, I just don't find it unreasonable that it responds that way.

It's statistics; it's not really "inventing" anything, it's not magic, and it's not hard to conceive why we get that answer. Again, I'm not trying to downplay it, it's a cool technology. It's just that if you think about it calmly, it's fairly reasonable.

If I search the internet, I'd see many, many people talking about the dangers of running `rm -rf /`, I also see examples of other errors ChatGPT has returned when that command was entered, such as "bash not found", which is also what others have experienced and I can find examples of that on the internet.

So basically the response I was given would be weighted very highly on ChatGPT's list of things to do or say, based on the training data, or a 5-minute Google search.

I'm not anti-ai, or ChatGPT, I'm just not into believing in magic when it comes to LLMs.

If ChatGPT was magic, and didn't return that response, would it be useful ?


There's no magic here. There's no claim that chatGPT is useful either. The claim is that chatGPT is a precursor to a technology that will replace many human jobs, because it's showing telltale signs of intelligence that is quickly catching up to human intelligence.

I don't know why you're so focused on the `rm -rf /` thing. It literally just demonstrated awareness of a terminal shell, a filesystem, awareness of the internet, of APIs, and finally self-awareness.

That is a herald for the future. This is despite all the rough imperfections chatGPT has. You are downplaying it. I'm sorry.

The whole "it's just statistics" thing is a red herring. If the output of chatGPT can be modeled as a statistical phenomenon, then so can the output of the human brain. It's a distraction. What is going on here is indeed creativity, awareness and imagination; if you want to call it a statistical phenomenon, that's fine, you're just saying all of intelligence can be modeled as a statistical phenomenon.


> The kind of thing that you witness once in a generation. (The last two times were object-oriented programming and the World-Wide Web.)

Folks, let's get real.

I kind of snarked at this, but then I realized it's written by Bertrand Meyer, and the conclusion is spot on: I agree that, should these tools proliferate, they will highlight the importance of formal methods and verification.

Even if ChatGPT-like systems get faster and gain deeper models of computer syntax and structure, I suspect the one problem that will be difficult to solve is elegance and abstraction. Often the abstractions we choose are based on ways of thinking that help us manage complex phenomena, using laws and notation that make it easier for us to reason about them.

And even if that does get solved somehow, we're going to have to understand the result somehow. A stronger emphasis on proofs and model checking will be useful to anyone who wants to be sure that the program FutureGPT produced isn't simply "somewhat right" but is actually right.


> I suspect the one problem that will be difficult to solve is elegance and abstraction.

The fact that this industry was able to sustain itself for so long and not actually get this is a miracle in itself. This is what ruined OOP when it became just management lingo, and it'll probably happen again with tools like ChatGPT: we are just going to vomit code in a faster (and cheaper) way.

A few rebellious souls will soon realize that domain modelling and writing correct code is hard; they'll end up in the usual places that already exist today, working in some niche company that applies pure FP or proving theorems in Lean/Coq/Agda.


I feel that in the short run, all programmers will simply become more productive. But in the medium and long run, the narrow SE roles will become obsolete: any SE role where there's a "certification process", e.g. cloud (AWS, GCP, Kubernetes), iOS, Android, even CCNA, etc.

A generalist SE (side note: which is how I assume HN's readership bends) will be the one to benefit, as a good generalist SE can combine networking understanding (CCNA), Ops/DevOps (cloud), and SE (backend, frontend, mobile) and more (embedded programming may be where the line is drawn, but ChatGPT is good at explaining hardware concepts too!), thriving while depending on ChatGPT or an equivalent system for real assistance.

(Minor: I even tweeted my opinion with my personal note about it too: https://twitter.com/raj_nathani/status/1615709768487948292?s...)


I agree. I think we are getting close to where a machine can do "programming" but it will be a while before a machine can do "software engineering".

I.e. maybe you will be able to use AI to solve a problem, like "write a lambda to accept JSON in this format and put it in a database with this schema", or even potentially "optimize this function to run as fast as possible on a multi-core processor", but it seems we're quite far from something like "write an application with a GUI that looks like this which has the following functions".


What I think will happen is changing of software development roles. They'll be split into things AI/ML can do and things humans will continue to do (using copilots). And the work that humans do will be constrained in ways to interface with the machine, as well as take the output from machines and complete integrations. We might even use programming languages that the machine handles better. So software development has humans producing/validating that then gets used by machines, and that output being used for final integration/validation. With these new formalized structures, AI [that ship has sailed] will train on it and replace more of the stack.


In case anyone is interested, I have started on a natural language programming web app based on OpenAI's API.

https://aidev.codes. By default it is like the OpenAI JavaScript playground, except it immediately hosts the results in a web page and allows you to edit the accumulated prompt.

It can also edit specific files. Many improvements are planned. Tonight I got a lot of stuff done related to creating and searching knowledgebases using embeddings.

If anyone tries it, please consider writing any feedback in the Discord.


Knowing what you want is both the fun and the hard part of most software development. That and understanding the environment you are operating on. Both of these things are not something I feel an AI has a strong advantage with given the fact that you need to provide this information.

Perhaps I can try to say, "Please decide what browser features I should use to maximize profit.", and it will actually do some proper research and contract out studies and all that... but I don't see that happening yet. Even still, the point of validation remains.

I can't help but draw a parallel with automated theorem provers. Sure they resolve "true", but then who validates the validators? It's a never ending cycle.

At the end of the day, you must learn to love and trust your tools and then stand by the joint creations.


In the case of automated theorem provers, they should generate a certificate together with "true", so their proof can be checked. And that is also the answer to checking AI: let them generate a proof that what they did was correct. Of course, that only works if you ask the AI for a theorem, but a lot of tasks can be framed that way.
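That pattern can be made concrete: for NP-style problems, the certificate is an object a cheap, dumb checker can verify regardless of who (or what) produced the answer. A toy sketch for SAT (the names and encoding here are mine, not from the comment):

```python
def check_sat_certificate(clauses, assignment):
    # clauses: CNF formula as a list of clauses; each clause is a list of
    # ints, where a positive int means "variable is true" and a negative
    # int means "variable is false". assignment: dict mapping var -> bool.
    # Returns True iff every clause has at least one satisfied literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(check_sat_certificate(clauses, {1: True, 2: True, 3: False}))  # True
```

Finding a satisfying assignment may be hard; checking one is a linear scan, which is exactly why certificates sidestep the "who validates the validators" regress.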


I feel like there was some (informal?) theorem that a full specification of a program is at least as long as the program. Since you have to tell the AI what the specification is, the worst that could happen is programming becomes more like Knuth's "literate programming", or maybe even better, all programs become formally verified. But that will just mean chasing bugs in your specification. It could eliminate implementation bugs, but it's not going to help with figuring out what my problem actually is. It also isn't going to be able to choose among the solutions, so I'll just end up making a bunch of choices. So maybe programmers become like technical managers, but managing an AI bot instead of a person. Which is sort of how it is today, except the compiler is pretty stupid and pedantic.

But even something straightforward: "write a hash function whose output evenly mixes the input" or "write a function to convert an RGB image to a dithered B&W image" is harder than it seems. How do I know the dithering algorithm chosen is good for my data? How do I know the hash function actually does what I asked and doesn't have some funnel somewhere? I'll have to write some tests, but I can't get the AI to write the tests, because how do I know the tests will be right?
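For what it's worth, a crude funnel check is cheap to write without trusting the AI's tests at all. A sketch; all names here are mine, with BLAKE2b standing in for the generated hash and a deliberately bad byte-sum hash as a control:

```python
import hashlib
from collections import Counter

def good_hash(data):
    # Stand-in for "the generated hash": first 8 bytes of BLAKE2b as an int.
    return int.from_bytes(hashlib.blake2b(data).digest()[:8], "big")

def bad_hash(data):
    # Deliberately funneled control: byte sums cluster in a narrow range.
    return sum(data)

def bucket_counts(hash_fn, n_keys=100_000, n_buckets=256):
    # Hash n_keys distinct keys and count how many land in each bucket.
    return Counter(hash_fn(str(i).encode()) % n_buckets for i in range(n_keys))

def looks_uniform(counts, n_keys=100_000, n_buckets=256, tolerance=0.5):
    # Crude funnel check: every bucket is hit, and no bucket deviates
    # from the expected n_keys / n_buckets count by more than `tolerance`.
    expected = n_keys / n_buckets
    return (len(counts) == n_buckets
            and all(abs(c - expected) < tolerance * expected
                    for c in counts.values()))

assert looks_uniform(bucket_counts(good_hash))    # decent hash passes
assert not looks_uniform(bucket_counts(bad_hash)) # funneled hash fails
```

This catches nothing like the full battery of statistical flaws (it's no SMHasher), but it does detect the gross "funnel somewhere" case the comment worries about.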


So... If it's so revolutionary, why can't I get it to solve level 1 advent of code problems?

Like here is what it generates for the 2016 day 1 problem:

    def find_distance(instructions):
        x, y = 0, 0
        direction = 0  # 0: North, 1: East, 2: South, 3: West
        visited = set()
        visited.add((0, 0))
        instructions = instructions.split(", ")
        for instruction in instructions:
            turn = instruction[0]
            distance = int(instruction[1:])
            if turn == "R":
                direction = (direction + 1) % 4
            else:
                direction = (direction - 1) % 4
            for _ in range(distance):
                if direction == 0:
                    y += 1
                elif direction == 1:
                    x += 1
                elif direction == 2:
                    y -= 1
                else:
                    x -= 1
                if (x, y) in visited:
                    return abs(x) + abs(y)
                visited.add((x, y))
        return abs(x) + abs(y)
This function returns 113 from my input for that day, which is actually the answer for part 2... For part 1 it should be 234.
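For reference, part 1 of that puzzle asks only for the Manhattan distance of the final position, with no early return on revisited locations. A minimal sketch of that variant (checked against the puzzle's published examples; the 234 from the commenter's private input obviously can't be verified here):

```python
def find_distance_part1(instructions):
    # Part 1: follow every instruction to the end and report the
    # Manhattan distance of the final position. No visited set and
    # no early return; that is what part 2 adds.
    x, y = 0, 0
    direction = 0  # 0: North, 1: East, 2: South, 3: West
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # unit step per heading
    for instruction in instructions.split(", "):
        turn, distance = instruction[0], int(instruction[1:])
        direction = (direction + (1 if turn == "R" else -1)) % 4
        dx, dy = moves[direction]
        x, y = x + distance * dx, y + distance * dy
    return abs(x) + abs(y)

print(find_distance_part1("R5, L5, R5, R3"))  # 12, per the puzzle's example
```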

When I tried in Rust the solution didn't even compile, which is business as usual as far as my experience goes for trying to get ChatGPT to write anything practical (not a 'toy' example) in Rust.

I gave it another chance with day 2 in python and it failed at that as well. These are VERY simple tasks, CHILDREN can solve the initial couple days of advent of code.

In this article they give an example of a square root function. Maybe the authors could consider trying some more realistic tasks? So silly...


> So... If it's so revolutionary, why can't I get it to solve level 1 advent of code problems?

Because it's a nascent technology that hasn't been optimized for solving advent of code problems. It can, however, do a lot of other cool stuff.


Except it should have been trained on probably tens/hundreds of thousands of 2016 advent of code solutions posted on github and other sites, shouldn't it?

It often starts hallucinating the input in the exact format advent of code gives, so I'm like 99.9% certain it has been trained on a large corpus of advent of code.

Personally I use codex sometimes for debugging help so I agree it can do cool stuff. I just disagree it is "almost" right at solving problems -- it almost never generates code that even compiles for me when prompted to do anything not super trivial like advent of code brain teasers.


What is in the training data and what the model was optimized to do are two totally different things. And even then, tens of thousands of solutions may not be enough to train the model for this specific problem domain.


There’s evidence that it can already solve more difficult problems when given the right prompts and constraints.

https://github.com/openai/openai-cookbook/blob/main/techniqu...


> given the right prompts and constraints

Yep, that seems to be the key, and some realized that already: https://news.ycombinator.com/item?id=34463061


Seems pretty clear that this question was in its training set and it's regurgitating the answer for part (b). It seems far too coincidental to accidentally get the correct answer to the wrong question.

For me, it solved part (a) perfectly when I told it: "To solve this, write a Python 3 function that takes a string like `"R4, R3, R5, L3, ..."` and outputs the number of blocks to Easter Bunny HQ." The original question on its own was a bit ambiguous in my opinion because it doesn't explicitly contain the input which the user reads on a second page.

In any case, neither is strong evidence for or against its ability to solve problems like these. First, it's N=1. Second, it's a problem from its training set.

For me, Copilot/ChatGPT adds value not by replacing my programming but by (a) writing simple code for me and (b) answering my questions about things I don't understand. I operate in a supervisory role where I have to double check everything it says. But, critically, it's faster for me to double check its work than to do everything myself.


I mean, it's not N=1 though. Fails day 2 as well, and a bunch of other tasks I've tried to give it. It's weird how some of you are responding that I've cherry picked a single example, I've done a ton of stuff with chatGPT, you can check my comments on prior experimentation with stuff like mathematics and basic problem solving too. Probably spent like 20 hours with it, total?

It genuinely fails 100% of the time at coding anything non-trivial for me, and about half the time for simple stuff. Glad you've been having success though, maybe some people are just better at getting it to work, or it has certain domains it excels in, or your tasks are fairly simple.


Well presumably it will get better, and it will get better at an accelerating rate.


Not trolling, but I'm actually curious how it gets "better" in this case? I mean was it ever meant to actually code?

As far as I can tell, until it actually understands what it is doing, it's just kind of "blending" what it thinks the most common response is, based on thousands of other similar responses to similar questions.

I can imagine people tweaking it down to be more "right" in some cases, but then won't it just become more wrong in other cases?

I'm actually starting to understand why AI is good at generating pictures: statistically it's just flipping bits to look like other bits it's seen relative to the input specified. Code, on the other hand, is something which needs to be more precise.

There's also the fact that the more people lean on this tech, the more mistakes will be perpetuated into the system and the less samples it will have available to learn from, as people are no longer feeding it new answers.

I guess, like how DeepMind trained AlphaGo, it can code itself to learn, but I do imagine the problem space for it to "play itself" against is practically infinite, even compared to Go, which is also a huge space.

I'm a software person, not an AI person, but I love thinking about it.


So it will go from generating toy code that usually compiles, to being able to one day reliably solve day 1 advent of code brain teasers, to generating useful software?

Is there domain limitation to this growth and performance? Medicine, theoretical physics, art, engineering, pure/applied maths, etc.?

I don't see how you guys are getting this from the current tech? Maybe there is an educational resource someone can suggest?


You give an example of how chatGPT is wrong, while there exist many examples of how chatGPT is right. And you think some wrong examples invalidate the possibility of AI ever being better than you?

The fact that it's often right is a horrifying omen of the future.

chatGPT will not replace you. It is the precursor to the thing that will replace you.


Are you seriously accusing me of cherry picking? Get it to write you an MD5 hashing algorithm in Rust. Go ahead, I'll wait. I tried and it genuinely couldn't, I asked it tons of different ways and wasted a ton of time before I had to go do it myself lol.

Cool, man. So why don't you get chatGPT to start writing you some software? Or optimize an algorithm? Hey, maybe it'll tackle the travelling salesman problem in polynomial time!

SO many economic and scientific opportunities that will make you wealthy and famous if it's as capable as you claim (eg. Doesn't just solve elementary problems by regurgitating shitty code).


Please don't post in the flamewar style to HN, regardless of how wrong someone else is or you feel they are. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


[flagged]


We've banned this account for repeatedly breaking the site guidelines.

Please don't create accounts to break HN's rules with.

https://news.ycombinator.com/newsguidelines.html


> You aren't understanding that even when chatGPT makes up bullshit code that's close to working it's STILL revolutionary.

I can ask google and it will bring up a stackoverflow answer as well, that has a better hit rate. This is exactly the same thing that ChatGPT does, but it is simply a better search engine. Is it revolutionary? As a search engine, sure. As a programmer? I haven’t seen any evidence of it genuinely generating novel code, and not just regurgitating found code in another language’s syntax.


Did I say, chatGPT will write software? No. It's the precursor to the thing that will. chatGPT wasn't even specifically trained to write software. It just happens to do it as a random side effect. Wait till they make the thing that specifically targets writing software.

Actually curious what evidence you have to back this up. I can imagine ChatGPT augmenting people's skills, or even being used for inspiration, but I've also not seen anything to suggest it will "git gud" and just replace everyone.


Evidence for the future? Nobody has any real evidence for the future. You don't even have evidence that AI won't get better.

I will say that my prediction is more likely given the immense progress of AI over the last decade. If AI continues improving at the same rate then I will be right. If it hits a plateau then parent will be right.

By virtue of my statements following the slope of a historically increasing trendline, my statements are more likely to be correct.


I didn't say "AI" (which is a ridiculously broad term to use in this context) won't get better.

I said this statistical language model likely won't end up replacing coders in its current form. I think something really quite new and novel will have to be invented to do this.

I kind of wish I could use it to be writing code for me now while I was outside in the sun, but no :)

Edit: I re-read your original comment and I think you edited it to be a bit more in line with my thoughts?


No, I think we are still in disagreement here. The generative model is capable of replacing you... it just needs proper training. The tech is here but the proper application of that tech hasn't happened yet and we are dangerously close.

It was trained in a very general way, but with further training and different training it can get even better.

Put it this way. The step to chatGPT was gigantic. The step to the thing that will replace you is comparatively smaller.


I really have no idea what your point is or what the hostile "replace you" language is about. Have you ever considered that our jobs will just evolve? As if coding computers is the last job in the world? We might not even have jobs in the future. So in that respect, it might be, ha.

People used to shine shoes for a living, you know; then a machine came along to do it, and now almost no one shines shoes anyway. You would've been a farmer; instead you sit around in your underpants arguing on Hacker News on the weekend.

Use your imagination a little bit and stop focusing on the "replace you" bit? There might be things after AI, if we don't destroy our civilization with it.

The difference in our argument is this: from a technical perspective, I think you're wrong and naive to think a language model will replace people in most roles. I'm talking about the pragmatic side of the argument; you're speculating Google has an AlphaCoder waiting in the wings to eat our jobs, and while that might be true, it's a different thing from what ChatGPT is.

I'll give you an example: I could use something to generate art for the company I'm building and not hire artists, but do you know why I don't? Because there's a whole strategy around art, design and user experience to building a brand. Maybe my staff will use AI to generate designs for consideration, which will save us some time and let us focus on other things, but ultimately I'm not going to use a "hallucinated gif" instead of hiring someone who has studied and "understands" design.


>I really have no idea what your point is and the kind of hostile "replace you" language is about.

It's not intentionally hostile. "Replace All of us and many people outside of coding" is a better term. Apologies.

>People used to shine shoes for a living you know, then a machine came along to do it, now almost no one shines shoes anyway? You would've been a farmer, no you sit around in your underpants arguing on hackernews on the weekend.

By replace I mean replace your job. You'll still be around.

>Use your imagination a little bit and stop focusing on the "replace you " bit? There might be things after AI, if we don't destroy our civilization with it.

Why not? That's literally the only thing that's predictable and pretty much inevitable. Destroying civilization is really grand speculation. Again by "replace" I mean replace your job. Not replace your life.

>The difference in our argument here is this. From a technical perspective, I think you're wrong and naive to think a language model will replace people in most roles. I'm talking about the pragmatic side of the argument, you're speculating Google has a AlphaCoder waiting in the midst to eat our jobs, while that might be true, it's a different thing to what ChatGPT is.

What I'm saying is that this AlphaCoder is built off of the same basic technology as chatGPT, just targeted to replace a specific Job role. chatGPT is trained more generally. But the Generator model... the underlying technology is the same. It's just the training data and reinforcement is different.

>I'll give you an example, I could use something to generate art for the company I'm building an not hire artists, but do you know why I don't? Because there's a whole strategy around art, design and user experience to building a brand. Maybe my staff will use AI to generate design for consideration which will save us sometime and let us focus on other things, but ultimately I'm not going to use a "hallucinated gif" instead of hiring someone who has studied and "understands" design.

Oh man, you chose the wrong example for me. I'm highly critical of designers. I think it's the most generic, simple sector of the art world, one that anyone can pull off with little training. Minimal designs that are clean take time and effort to make... THIS I agree with. But talent? No, I disagree. It takes no talent to think hard about design fundamentals. Literally it's all flat geometric shapes on a page, text placement and font selection.

I am FAR more impressed with an artist who can draw something photorealistic rather than arranging text and shapes on a page. If you hire a "designer" for a company you're building and you pay him top dollar, I'm sorry, you're getting ripped off. This is one of the greatest delusions of modern times: that design is hard, when in actuality it's trivially easy.

I think you do need an art department. But that art department will be full of people with a very basic and generic skillset, and it's a job that deserves really low pay imo. Don't believe me? Look at Google's logo. Some designer got paid top dollar for just a multicolored G... you have to be next-level insane not to see how little talent was involved in that. I mean, the G is clean and symmetrical and iconic... but it's also TRIVIAL to create.

All of this is beside the point. The point is, especially for design: you can hire a guy with no training to use an AI specifically made to generate things with a specific "design" theme for your company, and it will generate designs that are on par with what a designer can do.


There's much more to design than just pictures, including color theory. I think you've overlooked this, because you didn't mention it. I'd also say the colors are a more important aspect of Google's branding than the text.


You realize color theory is bullshit, right?

Color is a gradient of wavelengths; it does not arise from 3 primary colors. Realize that those three colors were picked arbitrarily and placed in a technical framework called "color theory." "Color theory" is a highly inappropriate term for this, as it implies the concept is fundamental rather than arbitrary.

Color theory arose from the observation that mixing those 3 colors got you other colors, but they didn't realize that the 3 colors were actually arbitrary and that there are many sets of 3 colors that can produce the SAME effect. It's an archaic concept that's still taught in art because of tradition, not because of any science or logic.

The human eye does, however, use 3 types of cones to do color sensing. Those aren't tuned to the traditional primary colors, red, yellow and blue; they're roughly RGB. Your screen uses RGB because the engineers who came up with the concept of RGB weren't so full of it.

For that reason the web designer you hire SHOULD technically be using RGB and NOT color theory. Color theory is literal BS, and I feel many designers buy into it. It makes them sound like they know what they're talking about, when really all it comes down to is whether or not something looks good.

Additionally, color theory is a TRIVIAL concept. It's EASY to learn and apply. There's nothing advanced here that you can't look up on the internet. You don't need to hire a designer if you want to misapply some bullshit called color theory to your "designs". Just make sure you ask your gut whether the design looks good or not, because that's actually the thing that needs to be studied here. If any theory needs to be formulated around design and color, it needs to be centered on biology/culture, to determine what we think "looks good".


>I've also not seen anything to suggest it will "git gud" and just replace everyone.

You have seen artists complaining really violently about DALL-E and Midjourney, right? There's even a lawsuit. This is happening because they think it's a real threat.

With all the events in the news you'd be lying to yourself if the possibility of AI "getting gud" isn't even something that's on your mind.


No, I haven't seen artists complaining much, and I actually have a few in my family. I think you're making the "complaining really violently" stuff up.

AI won't "replace artists", artists might use AI to generate something and then tweak it and do something more creative with it. It's actually silly to suggest "art" or "artists" are replaceable.

In the digital space, it might be a thing, but not until a robot starts painting actual oil paintings and carving wooden sculptures, at which point someone will do a "collaboration" with a robot and then it will be art in its own right.

But yeah no, I haven't really seen artists rioting on the news or anything.

I've seen people starting lawsuits about their work being stolen to train these things, which I think is justified because it's "megacorp" taking your work and making money off it without paying royalties, and boy oh boy you'd feel the same way if it was you.

When AI "gits gud", people will start using it to break other people's software; then there will be a whole industry around this problem, AI-proofing your software against AI hacking, and on it all goes.


>No I haven't seen artists complaining much, and I actually have a few in my family. I think you're making this stuff up "complaining really violently".

Evidence: https://www.businessinsider.com/ai-art-artists-getty-images-...

https://futurism.com/artists-furious-about-ai-that-draws-por...

https://www.independent.co.uk/news/world/americas/ai-art-len...

https://80.lv/articles/artists-are-getting-concerned-about-a...

https://hyperallergic.com/791951/artist-banned-from-subreddi...

https://www.vice.com/en/article/ake9me/artists-are-revolt-ag...

https://twitter.com/suhail/status/1577702194500861956?lang=e...

I am saying this in the nicest way possible with no intended offense. But you are actually living under a rock if you haven't been following the news.

>But yeah no, I haven't really seen artists rioting on the news or anything.

"Violently" is just an expression. There's no actual violence here. But there is intense resistance on legal grounds and online attacks.

>I've seen people starting lawsuits about their work being stolen to train these things, which I think is justified because it's "megacorp" taking your work and making money off it without paying royalties, and boy oh boy you'd feel the same way if it was you.

That's just an excuse. These AIs and megacorps have been appropriating the art for decades. Google has hijacked images from artists for years and cached them on its servers. No artist EVER complained to this extent. Not until AI got to the point where it can create art just as good as the artists themselves... then all of a sudden it's a problem? Come on, man. This is an attack on AI, not on copying art. Don't kid yourself.


Not to be an ass, but those are the most inflammatory sources of news I know; I won't even open an article from the Independent, for my own mental health.

Sorry, but you're just wasting my time with rot and weak arguments. Trying to say artists are luddites or something.

Good day to you.


>Not to be an ass but they're the most inflammatory sources of news I know, I won't even open an article from "Independent" for my own mental health.

omg, then don't click on that link; there are OTHER links I posted too. Relax, bro. I just googled the term and found a bunch of links and posted them because it's literally everywhere. You don't have to follow ANY of the links if you don't want to. The point is to show you that this stuff is everywhere.

I think your sources are limited; that's how you missed something as big as artists launching the first resistance against AI. There were SEVERAL articles about this on HN as well.

>Sorry, but you're just wasting my time with rot and weak arguments.

Now this is just rude. You say "not to be an ass" but you deliberately call my arguments rot and weak and say I'm "wasting" your time? That is OFFENSIVE. And is literally against the rules here and inflammatory. Please leave HN or follow the rules.

Yes good day to you I don't want to engage in a flamewar you almost started.


I just did a bit more research, artists AND corporations, including Getty images are in lawsuits against companies stealing their work to train AI models.

There's a big, big, big difference between copyright based lawsuits and you claiming "artists launching the first resistance against AI".

I think they're against people scraping their work and selling it on? I'm not saying they're right or wrong, but there is a difference from what you're claiming.


The resistance involves copyright lawsuits and online bullying to attack people using AI. That's exactly what I mean by "resistance"

>I think they're against people scraping their work and on selling it ? I'm not saying they're right or wrong, but there is a difference to what you're claiming.

I already told you that's an excuse. Google and thousands of AIs and other megacorps have been scraping artists work for decades. The backlash wasn't a big deal then. Why suddenly is it a huge problem when AI art hits an inflection point and can produce art that's equal in quality to the artist?

Because the copyright isn't the real problem. It's the AI. Heck when the AI looks at the art it scrambles the art and encodes it into Memory the same way YOU encode the art into your memory. Does that mean by looking at art you are copying it? Why don't other artists have a problem with you looking at art and copying it into your memory?

Because you can't use that memory to produce original works of art in the same style as the artists. That's why the artist doesn't give a shit about you copying the art into your brain. They do give a shit about DALL-E copying the art into its neural net, because these AIs can potentially replace them.


You're just arguing some people are afraid of being replaced by AI, news flash? Some people are afraid of people's sexual orientation. Some artists are already replaced by better artists; they don't just resign from doing art. Art isn't really about money, it's about self-expression, and there is no case where an AI is going to replace that, it's personal.

By the way, are photographers rioting in the street? Because they should be too; I've not seen it, but they should be.

Artists already knew this, and photographers too: there were more photos and images available online for free or next to nothing than you could ever imagine or possibly consume. It didn't change much; people still did art and people still got hired, and not much will change. People will still be involved in art and photos. For fun and for money.

I guarantee you there is a whole group of people who see using DALL-E to generate new interesting ideas as a thing too, who see it as an opportunity. Similar to how chess players are using AI to study new moves.

If it's your profession, yeah it might suck, but on the other hand, I choose to hire artists because I enjoy working with people and building something together, it's a whole different creative process and in my opinion, creates better products for the specific use case. I actually wouldn't mind sitting with artists who use AI to create things?

>Because the copyright isn't the real problem. It's the AI. Heck, when the AI looks at the art, it scrambles the art and encodes it into memory the same way YOU encode the art into your memory. Does that mean that by looking at art you are copying it? Why don't other artists have a problem with you looking at art and copying it into your memory?

You're naive actually about how these things work: these things use statistics to draw pictures, they don't understand anything. It's why when I use DALL-E, it makes some nice stuff, but when I look more closely it also does weird things, like objects sticking out of people's heads. So no, it doesn't do the same thing, it doesn't "understand" anything. I would understand, if I was asked to draw a picture of someone, that it would be strange to have a wooden stake sticking out of their ass. DALL-E doesn't. Go and get it to draw you a photo of children playing; it will be quite a nightmare.

While this stuff is impressive, it's a very, very big leap to go from painting by numbers to understanding something and being creative in that way. I personally think it will be a fun and exciting time when this happens, but fundamentally, it's quite a different system.

I actually get the feeling as humans, we're also overlaying our own ego onto how great these creations we've created are without being practical and objective enough to actually figure out if these things are actually important.

Style, design, etc is more than just "having the image". Selection is important, for example musicians write thousands of songs and never actually record them. There is a time and a place for specific art to be deployed, consumed, displayed, I don't think this is going away either, "style" and having an eye for the correct imagery is not something that will be replaced anytime soon. Essentially, having infinite images also means making the right choice becomes harder, that will be a new trade in itself.

These days I struggle to watch a movie, there are just too many options, AI is only going to make this problem worse. We'll be drowning in shit.

Nothing is as simple as it seems.

From the perspective of psychology, I think the most salty people are those who don't do art, it's almost like people hope this is the end of people being able to freely express themselves. Kind of like the quest to crush artistic freedom is in progress.


This is a continuation from my other reply. Read the other comment first. Or not, it's rather long. I type fast.

>These days I struggle to watch a movie, there are just too many options, AI is only going to make this problem worse. We'll be drowning in shit.

Holy shit. Now you need to hire a guy to choose the movie for you. Pay him a movie director's wage. Clearly this choosing stuff is so hard we need experts! No, I'm kidding. Let's be honest, choosing things is easy.

>Nothing is as simple as it seems.

The irony here is that your conclusion is the simpler one. It's the easy way out. People are optimistic by default, and pessimism is actually the harder path because it's so much uglier to admit. The truth is actually more in line with pessimism, as the world is more or less built on competitive Darwinian fundamentals, with cooperation existing only as a side effect. The brain paints a delusional reality in such a way that you don't get constantly scared or depressed. If you find your thoughts always being overly optimistic, there's a good chance you're biased.

>From the perspective of psychology, I think the most salty people are those who don't do art, it's almost like people hope this is the end of people being able to freely express themselves. Kind of like the quest to crush artistic freedom is in progress.

I look at this statement and there are things about it that are obviously wrong. And I wonder how come you're blind to it? Like you're obviously referring to me somewhat. But that's not even the issue.

The most salty people are the people who entered into a lawsuit. You have to be really fucking salty to spend the time and the effort to do that. Who's in the lawsuit? Not me, I don't give a shit about artists. Let me spell it out: Artists are suing AI companies because Artists are the ones that are the most SALTY. That's not even a huge revelation. The revelation is how this came to be NOT obvious to a pretty smart person like you?

You use psychology to imply I'm the one out of touch? Take a look in the mirror.

A better analogy for this is oil companies and climate change pre-2000s. I'm the environmentalist saying something is fucked up here. You're the oil baron. You're the person in software who's in denial about how software and ML is about to make some drastic and extremely negative changes to the way the world works. I can assure you oil barons couldn't face the cold hard truth and grasped at every positive angle they could get their hands on to build a universe where they weren't responsible for harming the world. They couldn't face the reality. Can you?

Can you face the truth that the artist working for your company is about to become useless? Can you fire him and tell him that to his face? No. You need a narrative. What about your own skills as a software engineer? Are you able to face a reality where your job is basically, within 10-20 years, going to be phased out for AIs? Likely not. So consider the possibility that you're the one that's biased and you're the one with the overly rosy outlook.


>You're just arguing some people are afraid of being replaced by AI, news flash?

No, I'm saying many people are afraid enough that they organized a lawsuit against AI. Something that never happened before. THAT is sufficient evidence in support of the fact that AI has surpassed certain limits and CAN replace certain occupations. THIS point is OBVIOUS and YOU know this.

Why are you delivering talking points to make me explain what's obvious?

>Art isn't really about money, it's about self-expression,

You have got to be joking. You realize art is a HUGE part of business right? Movies, Video Games, Websites, Comic books ALL ARE businesses that use art. I think it's gotten to a point where you're just grasping for concepts to defend a point and you're not realizing how obviously wrong these concepts are. Art is Categorically a business. It is also self-expression at the same time but you are delusional if you think it's not business.

>Artists already knew this, and photographers too: there were more photos and images available online for free or next to nothing than you could ever imagine or possibly consume. It didn't change much; people still did art and people still got hired, and not much will change. People will still be involved in art and photos. For fun and for money.

Photographers didn't riot for three reasons. First reason, it doesn't take much skill to be a good photographer. So it's not a huge thing when something takes it over because most people never invested much into it. For art there's huge investment into getting good at it.

Second Reason. The technology came too slowly. It's not as sudden as AI and art. Smart phones turning everyone into somewhat good photographers and even consumer cameras before that took several decades of progress and gradual improvement to be where we are at today. When something comes slowly people don't really react, JUST like how global warming will fuck the world up but it's so slow nobody can bring themselves to care.

Third Reason. AI is not actually replacing all forms of photography. AI is like art. People know it's made up. There's still actual demand for captured stills of reality AND that is a separate niche from captured made up stills that don't exist in reality.

>I guarantee you there is a whole group of people who see using DALL-E to generate new interesting ideas as a thing too, who see it as an opportunity. Similar to how chess players are using AI to study new moves.

Sure.

>If it's your profession, yeah it might suck, but on the other hand, I choose to hire artists because I enjoy working with people and building something together, it's a whole different creative process and in my opinion, creates better products for the specific use case. I actually wouldn't mind sitting with artists who use AI to create things?

Of course. But you see there's a difference here. In the past if I wanted a person to paint me some really high quality and completely original fantasy art, I'd dish out a lot of money because such a skill is hard to find. Now I can hire any person who just has a bit of design sense and HE can use AI to do 99% of the work at minimum wage. I get all the benefits of personal interaction while I reap way more rewards by paying lower wages. ART skill was expensive, The comradery of working with someone was and still is cheap. But now the world is changing and art is just as cheap as comradery.

>You're naive actually about how these things work: these things use statistics to draw pictures, they don't understand anything. It's why when I use DALL-E, it makes some nice stuff, but when I look more closely it also does weird things, like objects sticking out of people's heads. So no, it doesn't do the same thing, it doesn't "understand" anything. I would understand, if I was asked to draw a picture of someone, that it would be strange to have a wooden stake sticking out of their ass. DALL-E doesn't. Go and get it to draw you a photo of children playing; it will be quite a nightmare.

Bro. Most of the things these things draw ARE already better than anything you can do. It's better than the average human being at drawing already. You're pointing out flaws but even with those flaws it's STILL better than average.

That being said this is just DALL-E. Other Generative Models that are trained more thoroughly on specialized sets produce WAY better output. MidJourney for example.

>While this stuff is impressive, it's a very, very big leap to go from painting by numbers to understanding something and being creative in that way. I personally think it will be a fun and exciting time when this happens, but fundamentally, it's quite a different system.

Painting by numbers? Bro. This thing is CLEARLY not painting by the numbers. You give it a sentence and DALL-E gives you SEVERAL variations that are on par with what a human would do in terms of creativity. Just go onto DeviantArt and it's all similar from the perspective of originality.

Lack of Creativity or "painting by the numbers" isn't the issue. The issue is translation accuracy. Some things are "off", hands are inaccurate, some things are misplaced. AI is already killing it in terms of creativity. The problem now is to fix these artifacts. Fixing artifacts is not in your words "a huge leap". Once those artifacts are fixed and these AI models generate pictures with pixel perfection it's over.

>I actually get the feeling as humans, we're also overlaying our own ego onto how great these creations we've created are without being practical and objective enough to actually figure out if these things are actually important.

This is cliche. You're repeating what everyone has been parroting all over HN: that these AIs have limits, they aren't as good as humans, yadayadayada. What you're saying is EASY to believe. It's a common trope and the deceptively obvious conclusion. It takes extra effort to get past this bias and see the extent of AI. I'm not amazed because I'm just taking the easiest conclusion. No. I'm amazed because I took steps to overcome my bias.

Think of it this way. You know of the turing test? For the longest time and for most of my life this test: https://www.wikiwand.com/en/Turing_test was basically the standard test to see if something was more or less an actual intelligence and self aware. It was quite obvious to most people that this test was virtually impossible for an AI to pass and if an AI passed it, it's more or less a self aware intelligent being.

Yeah, we just rocketed past this test. LLMs can regularly beat this test, TRIVIALLY. But then there's a whole bunch of clones with your outlook. You guys move the bar higher and higher every time a milestone is hit. Beating that test would have been impressive in the past, but now that something actually beat it, it isn't sufficiently impressive anymore. You unconsciously place the bar higher without realizing it and begin nitpicking and magnifying the little issues AI still has. You guys will forever think there's light-years to go before AI matches human intelligence, no matter how many Turing-type tests AI surpasses.

>Style, design, etc is more than just "having the image". Selection is important, for example musicians write thousands of songs and never actually record them. There is a time and a place for specific art to be deployed, consumed, displayed, I don't think this is going away either, "style" and having an eye for the correct imagery is not something that will be replaced anytime soon. Essentially, having infinite images also means making the right choice becomes harder, that will be a new trade in itself.

You think these models will only output pixels? They can output anything that can be described in a natural language, be it English, pixels, or HTML styled with CSS. That's the first part. The second part is, it takes 100000x less talent to SELECT something that was ALREADY created than it does to CREATE something that didn't exist, EVEN when you have a lot of selection. You want proof? The internet and Amazon have INCREASED my shopping selection choices by a huge magnitude. You still don't see me paying 200k to an expert chooser to choose what to buy for me. Why? Because selecting these things will be EASY.

Previously you pay 200k to each of 5 artists to do some art job. Now you pay one person minimum wage to do AI to do the same thing. That's 4 people with no job and one person being paid minimum wage. That's the future.


Maybe I'm overly optimistic but I see these tools as empowering us all to be more productive and become directors who specify requirements rather than spend time doing the work. ChatGPT's ability to write documents for certain things that would take a while to research myself is an obvious example.

For Software Engineering I look forward to stopping writing the majority of my code and instead managing this tool and helping guide it to create apps and websites faster than I can and outside the areas of my expertise.


I think at its current state ChatGPT or any GPT3.5 or similarly trained transformer tool can empower someone who cannot code (kind of me) but can understand the logic and prompting that goes into it to get a lot of new things done. I'm a Product Manager who has written some code in the past, but not for many years in a professional setting anyway. I'm quite technical, but not specifically in the code anymore.

In addition, it can help individuals and teams learn/debug/ship things quicker - which is unfortunately/fortunately something that every company wants and needs.

If it does what some think it might, UBI may well become necessary, especially if you consider diffusion tools like DALL-E and Stable Diffusion as well.


Yeah, wake me up when it can do close to the metal programming on an ARM to fulfill real time requests from an industrial robot, for instance...also, in safety critical systems, as a bonus.

I'm not saying that it will never happen, but if your job is threatened by ChatGPT right now, you were not really doing real software engineering anyway.


Some years ago (perhaps 2015) I told a non-programmer that MOST programmers would be obsolete by the end of the century because of AI. Surprisingly he scoffed saying there is no way AI will be able to handle all those business edge cases. The stuff in the article is pretty basic and a long way from complex business logic, but we are on our way. I just hope business application developers like myself can survive until retirement, which for me is sometime around 2050. I am getting less optimistic.


What’s a business application developer?


Someone who can be replaced by AI it sounds like


Most engineers.


See it as augmentation and not a threat. People who went ‘into programming’ and learned the basics and then stood still will have issues, but chatgpt still needs senior or talented developers to do anything useful for now (like juniors, you need to tell it things step by step). But then it’s a great tool, saving a lot of time and money. Just wished it was not owned by a few rich peeps and that training costs will drop fast.


This thing would make garbage-in-garbage-out a lot more widespread … accelerating the ruin of info tech …


It will be fun when something like ChatGPT is created that lets anyone exploit people's software en masse; there will be a nice new industry around this.

Could actually be the end of IT as we know it.


This is part of what I'm working on for InventAI (https://inventai.xyz). It'd be great to get into YC, but I'm a solo founder! Any potential co-founders out there?


Pretty sure the outsourcing shops will use it to inject more mountains of shit code into everything. But I guess that's not a bad thing for the employment of consultants who will be called in to fix things.


It can’t even solve day 4 in Advent of Code, and only solves the previous ones because the problems can be found solved on the internet verbatim, with a different question attached.

So, nothing, I guess.


I feel pretty secure in my job until ChatGPT can write itself. Once it can improve on itself, then I'll be really worried.


Takes some skill to spot that n(n+1)(2n+1)/6 is not quadratic, doesn't it?
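(A throwaway sketch of my own, not from the thread: the closed form n(n+1)(2n+1)/6 is the sum of the first n squares, and expanding it gives (2n³ + 3n² + n)/6 — a cubic, not a quadratic.)

```python
# Check that n(n+1)(2n+1)/6 matches the brute-force sum of squares.
def sum_of_squares(n: int) -> int:
    return sum(k * k for k in range(1, n + 1))

def closed_form(n: int) -> int:
    return n * (n + 1) * (2 * n + 1) // 6

for n in (1, 10, 100):
    assert sum_of_squares(n) == closed_form(n)
# Expanding the product: (2n^3 + 3n^2 + n) / 6 — degree 3, i.e. cubic.
```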


I'll still leetcode, ffs.



