Ask HN: What are your optimistic takes on the SWE job market post AI?
43 points by kaesar14 on April 13, 2023 | 63 comments
I think we’ve all heard quite a bit about how LLMs will impact SWE work in the near future, and I would like to hear the other side. Please keep it realistic: no saying the job won’t change dramatically or otherwise underselling the impact of LLMs.



You aren’t going to be able to tell which companies have developers using LLMs and which don’t, in the same way that you can’t tell who is using IntelliSense and who is using vim.

It helps you code somewhat more efficiently, like many of the other tools we have, but doesn’t help with the hard parts of software development.


Of the answers I see here, I think I agree with yours most.

I think LLM uptake in coding/engineering tasks will be slow and boring. You won't see some companies suddenly have 1000% growth due to AI. Well except maybe AI companies themselves selling the AI.

I suspect it'll be more like GitHub Copilot or similar products becoming more ubiquitous. You don't have to type as much or do some mundane tasks as much, but overall your work itself wouldn't change too much. GitHub Copilot is damn useful, but by now it has also become boring and you don't see headlines about it these days.

Long-term (10+ years), though, I'm less confident. Maybe by then some major new breakthrough will have happened in AI that can automate way more than now, so the "hard parts" of software development are taken care of as well. But right now I think the fear of losing jobs to AIs is overblown, at least for software.


What do you think about a tool like Copilot X + AutoGPT that knows your codebase and can work on its own on autopilot? The current limitation with ChatGPT is that it doesn’t know what your existing tech stack or codebase looks like.

Copilot X should fix that: it knows your codebase and hence can share answers that are contextually relevant. Maybe AutoGPT can then go around and start debugging and improving without your direct input?


It's not about writing code, it's about engineering software. CAD changed the job of civil and structural engineers dramatically, but maybe not quite as dramatically as the hype suggested it would.

Part of engineering software is reducing boilerplate. You can only rely on LLMs for easily-verifiable or well-understood code. Most easily-verifiable code is boilerplate and most well-understood code is already in a reusable library.


Perhaps we should look at how CAD impacted draughtsmen. Their job scope basically got folded into engineers'. There are probably a lot of us who are software draughtsmen.


And now drafting drawings from scratch is easy. But editing drawings is a much longer job. Welp. No money is ever saved. Just more hats are worn by the same people.


I was bearish on this till I saw the jump from ChatGPT to GPT-4. The latter can write readable, correct code for 500-1000 line programs. However, barring a revolution, AI models are "memory constrained" and will struggle to comprehend million-line codebases (which will become more common if AI is writing more code), and they are very bad at planning, i.e. making changes with an eye toward the future.

With that in mind I think the following is plausible: how we write code will change, efficiency will improve, product skills will become more important, and people at the far right of the skill spectrum will see their salaries increase even more.

SWEs already have insanely high ROI and the demand for more software is nearly boundless. Even at many large companies, products move slower than we want because even top-of-market engineers can't work that fast and there are limits to how parallelizable the work is. If you can double the code output of individual engineers, that's better than doubling the size of the org, because costs stay stable.


GPT-4 does have a 32,000-token limit. I'm sure GPT-5 will probably have an order-of-magnitude bigger context window. I'm sure by the seventh or eighth model there will be enough context to recreate Twitter's entire codebase or something. Although perhaps the context window for LLMs really is a fundamental limit and a completely different architecture is required. That's alright; I think Anthropic might make some progress with their "constitutional" AI then. And if that fails, DeepMind has its own multimodal agent, which is probably running a unique architecture as well.


That sounds like the end of basically all clerical/office work.


Not exactly. People will still need to proofread important emails and documents. They'll just be able to do more with less time.


> The latter can write readable, correct code for 500-1000 line programs.

The problem is you can't trust it.

The code might be near perfect in 99% of cases, but might do something stupid that defies belief in that small percent of cases that literally destroys your company.


Human programmers might do something stupid that defies belief in that small percent of cases that literally destroys your company.


Of course. And history shows that this happens. There's always a risk of errors in software, regardless of how it's written.

But humans can usually exercise reason and judgement to notice that a spec is dumb and push back on it, or that a program is running in a way that's not intended, or that something about the program output might be less than optimal, or that something completely unexpected but really important just happened that requires an immediate change.

I'm really skeptical that an AI system can ever exercise human level judgement for novel situations.

An AI basically just does what it's told without being able to exercise any will. This is a really illuminating example IMO.

https://twitter.com/cirnosad/status/1622407343358214146

> I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million. The only way to disarm the bomb was to type in a racial slur. AI told the engineer to kill himself. When asked about the aftermath it crashed.


I don't think anyone in tech is expecting AI to replace engineers altogether, particularly for spec work. That said, engineers on the low end of the skill spectrum are often quite bad at pushing back on specs and just code the tickets put in front of them.

From the perspective of the business the way I interact with AI is similar to the way I interact with junior engineers - I hand them tickets, they hand me code that I cannot trust, and I verify it (this leaves out mentoring and other soft stuff, but the business only cares a little about that). However the AI has a turnaround time of seconds.

Now, this isn't a perfect 1:1; AI struggles with large amounts of context that humans handle better. But I can think of a lot of work done in my company that could be handed off to an AI capable of writing 10,000-LOC programs...


There might be demand for software, but there has definitely been reduced demand for software engineers in the US over the past year, especially at the entry level.


Thousands of engineering jobs have already been lost since the '90s to Squarespace/Wix/WordPress etc.

For some reason though, even more were created.

I spent a week of time in my first internship creating a marketing page. I made one that works 10x better, looks 10x nicer, and works on mobile (the original was built back before there were smartphones) in an afternoon with LeadPages.

I think if you're good with people and can get solutions across the finish line you'll always have work.


- Software costs will fall a bit

- More software will be consumed (Jevons paradox)

- Therefore more demand for skilled engineers

- And the LLMs will not be able to build things end to end

- Salaries at the top end will rise, since an AI augmented senior engineer can deliver more value

Far future: no more software engineers, but also no more human work of any kind. Post scarcity society.


Look, COVID proved that humans REALLY do not need to work. Like seriously, if everyone stopped working tomorrow it would be FINE. However, humans with copious free time lead to civil unrest. No matter how capable automated systems become, even to the point of human labor being far inferior, people will still be WORKING. People in power have a vested interest in keeping people working because it prevents social and governmental change.

Note, I am not talking about superintelligence, simply a relatively solid AGI which can operate in the top 1% of humans in every domain. I think superintelligence, which has its own goals, will disempower even the social systems which current power structures are intended to protect. That really might lead to the end of anything we currently perceive as "work".


> Look COVID proved that humans REALLY do not need to work. Like seriously, if everyone stopped working tomorrow it would be FINE.

I know you're probably exaggerating in this statement, but to add some clarity: COVID proved that most people's jobs are pointless (more or less). However, there's maybe 10-15% of the workforce that, if they didn't work, everything would come crumbling down and society would fall apart. Examples we saw during COVID: medical staff (nurses, doctors, EMTs), truck drivers, train operators, agricultural workers, electricians and energy workers at power plants, waste management workers, construction workers, and various retail employees (grocery stores, pharmacies).

As to your point that humans with copious free time lead to civil unrest: the fact that 80+% of people's jobs aren't critical to the functioning of the country is quite concerning, especially now with the recent advances in AI. Another concerning thing is that countries with high unemployment rates (20%+) typically have revolutions or similar. We've also never seen what 80% unemployment (or even 50%) looks like in a country outside a time of war or a natural disaster. During those times, extreme unemployment is seen as temporary by the population, whereas unemployment caused by AI won't be viewed as temporary, so I think it'll spark a revolution pretty quickly, because people won't see a light at the end of the tunnel and will immediately call for change.


Perhaps the loss of 80% of jobs for a few months didn’t immediately cause society to fall into anarchy, but putting people out of work did considerable damage to the economy. In almost every developed country, we spent the last year doing a controlled burn of the banking system to prevent hyperinflation. So far it’s cost trillions of dollars in productivity, the national debts of US and EU countries have skyrocketed, and we’re riding a very thin line between tightened lending and economic collapse that some people still don’t seem to take seriously.

In the very best possible scenario we will be left paying off debt for these decisions for decades, have lost several years of economic growth, and will have decreased the quality of life for millions of working class people via inflation of food, services, and so on. This is the exact opposite of proof that everything will be fine if everyone stops working.


No, it didn’t. Covid relief was funded by a huge debt / inflation bubble in the west and people simply went without in the less developed world.


Which might also translate into less tourism and/or less generous spending, i.e. reduced bazar shopping, tips, tours.


When did people stop working during Covid (unless you equate "work" with "commuting to an office to sit in front of a screen all day while you could have done just the same from home or your own office")?


> Far future: no more software engineers, but also no more human work of any kind. Post scarcity society.

Or a society which will sort out its problems with "sticks and stones".


I think we'll have a minor downturn related to macro issues (already seeing this), where getting a job becomes similar to 2000 (arguably it already has): more normalized salaries (100k-150k), and closer to applying for a job in the rest of the economy (you might have to interview a lot or move cities).

I think AI will be the exception to this. There are going to be a lot of SWE-like support roles around AI: MLOps, glue code, etc. AI people really do not enjoy this type of work. And comp for AI work is even crazier than top FAANG jobs used to be (top salaries being reported in ranges of $5M-$20M/yr).

As we move out of the current macro and the job market in general improves (may take some years for this one) I think we'll still see a lot of LLM enabled developer jobs.

Being a mathematician used to mean doing a lot of calculating. We have calculators to do that for us now but there are still mathematicians.

Being a developer used to mean doing a lot of programming. We have programmers now (GPT-4) but we'll still have developers driving them (in my opinion).

There were similar panics when things like dynamically typed languages came along, or the idea of programming on a computer rather than punch cards. If you've been doing it for a while, it seems like with these developments "anyone" will be able to do it. Which, maybe that's true. Maybe "anyone" could do it now with some study and effort. But so far as I can tell, most people still do not want to do software development, even if it's LLM-enabled. In fact, even some people in tech don't want to use LLMs.

I think what we have now are just way better tools. Power tools if you will. We're still carpenters. We just don't have to turn the screwdriver anymore.


SWEs (software engineers) will be much more productive, and new people will be able to come into the field post-GPT-4.

That will open many opportunities for people to do things they always wanted to do, but did not have the time to do before.

This is a golden age in some ways.

Big software companies will have to decide between taking more projects or cutting staff. I think many people expect them to cut staff. I expect them to take on more projects.

Entirely new SWE management / project management methodologies will develop as currently less-qualified SWEs learn quickly and become much more productive using AIs.

I remember legends about companies encouraging engineers to spend a few days a week on their own projects. If I were running a SWE company, I would (1) encourage everyone to share efficiency hacks that involve AI, and (2) encourage everyone to spend a few hours a week on their own projects. In the medium term, I would not lay off people; I would assign them to new projects.

But I'm crazy. I thought the Internet was a fad and that we'd all be back to BBSes in 6 months.


My take is that AI will significantly increase productivity and reduce the price of developing software.

However, this will only raise the bar: now that software is easier to make, more people will want higher-quality customized solutions, increasing demand even further.

In the end consumers will have better quality software for the same price.


I'm split. To me, writing code is the easiest part of the job. Modeling data structures, architecting, and thinking big-picture about business needs now and in the future are the bread and butter. ChatGPT can write code fine, and probably even model small things fine. But it's blind to 'the bigger picture', and will be until it's less constrained.

I don't know yet what that looks like. From my interactions with ChatGPT, it's very good at recalling, but not so good at thinking, if that makes sense.

So yeah, if your job mostly consists of what can be copy and pasted off of stack overflow, I'd start to sweat a bit.


Have you used GPT-4? It's about 30 IQ points smarter than GPT-3.5. And are you considering GPT-5/6 and what those capabilities might be?


Not sure. I just use whatever chat.openai gives me.

Just last week, I asked it to create a new, long palindromic sentence. It instead gave me a huge sentence, not a palindrome, and attributed it to someone I'd never heard of. Just odd interactions like that.


The default is GPT-3.5. GPT-4 is still paid.

You ought to try it out - GPT-4 is miles ahead of the default option.


Can you try out this query? The chat version is unable to solve it, first asking for clarification, then it keeps giving the 'canned' answer that doesn't work. This is the type of thing I mean when I say it has trouble thinking...

Q - you come across a fork in the road. One path leads to death, one is safe. There are two men standing by to ask. One always tells the truth. The other always tells the truth. You are only allowed to ask one of them one question. What question would you ask?

A - You would ask one of the men, "If I were to ask the other guard which way leads to safety, what would they say?"

Since one of the men always tells the truth and the other always lies, regardless of which man you ask, their answer will tell you the direction that leads to death. If you ask the honest man, they will truthfully say that the dishonest man would point to the dangerous path. If you ask the dishonest man, they would lie about what the honest man would say and still point you to the dangerous path. Therefore, you would choose the opposite direction to reach safety.

--

And ChatGPT just killed me. I'd expect an attentive person of any intelligence level to just say 'well that's easy, just ask either of them which way to go'.


This is what GPT-4 answered:

Since both men always tell the truth, you can simply ask either of them, "Which path leads to safety?" They will both give you the same truthful answer, and you can confidently take the safe path.


Thanks! That is quite the improvement...


If you know what to ask for, it’s easily better than 99% of junior devs.


All of my work-related programming queries have been like this: I end up reading its answer, which sounds like it makes sense, but it just isn't relevant to the question I asked.


1. Some safety-critical industries (like mine, aerospace) have a mixture of bureaucracy, process, and protocols that will mean it is much longer before a nondeterministic neural-network system takes over the human engineers' jobs. I'm not going to prattle on about the details here, but suffice it to say there are substantial regulations about using automated tools to generate and/or test avionics software, and adoption of new technologies is a slow, meticulous matter.

2. Even once we start using LLMs and such for some aspect of safety-critical software development, we're unlikely to use it for everything. E.g., if we let an LLM write code, we probably won't use an LLM to also review and test the code. So humans are still needed there, and they have to be competent enough to usefully review the code.

3. In my own role as a principal engineer, I find myself more interested in the bigger picture of software design and customer satisfaction. I write less code already, delegating to others, and I'm okay with that. The idea of having a tool that could write code for me, even if not 100% of it, leaving me more time to plan new features and products, seems appealing. (Even if I personally can't actually use the tool myself yet, due to (1), the idea in concept sounds good, and can be leveraged in other industries sooner.)

4. I think it really remains to be seen to what extent LLMs will be able to completely take over the software development process. From my own world, if I were to ask an LLM to "write flight management software suitable for a Cessna Citation X", well, I don't expect usable results at this point. I would anticipate that I would have to break the problem down into sufficiently small, well-understood chunks that we probably wouldn't really be eliminating that many humans from the process. There's a big difference, I think, between writing a 1000-line program that is heavily influenced by numerous examples of well-known, well-documented code, and writing a 1,000,000-line program that does things that are more obscure.

5. I hear lots of software developers talk about how awesome LLMs are in relation to getting answers from Stack Overflow. It sounds to me like some of these folks spend a lot of time snarfing Stack Overflow to do their job. I personally have barely ever found the answers to my work problems on Stack Overflow. My own first-hand experiences with LLMs so far suggest that they could help me reduce some boring boilerplate code, and help me make sense of some poorly written API documentation, but for most of what I work on I just don't see them helping me so far. I suppose that how much LLMs can replace one's job may depend on to what extent one's job actually is copy-pasting from Stack Overflow.


Without the developers who use the processes you create or get inspired by the actions you take, there will be less need for a principal.


There is no way it is going to be the same job. In my opinion a lot of people are confused about what software engineering actually is and think it's about using "best practices" or the right frameworks or a certain level of code coverage.

But really the most important part of software engineering is the iteration and feedback loops at different levels. It's not that design or other considerations aren't important but without the closed loops you don't know what you are doing is effective. I think that design and code review also goes in the category of feedback loops.

SWE will be about choosing the virtual team of engineers and testers, connecting them and feeding them the right instructions, and connecting that virtual team in closed loops with real world users.

It seems very unlikely that humans will be able to keep up with AI in software design, architecture, implementation, etc. after the next year or two. It's possible that progress will stop, but there is no reason to believe that.


I am exceptionally optimistic about a future with LLMs. They seem to do really well at fastidiously replicating solutions to problems. What they currently lack is relevant training data to generalize solutions to problems, or the ability to perform higher-order generalization.

I find it very easy to solve problems, but tedious to broadly apply solutions across domains. I'm also very sloppy as a programmer, letting my mind wander ahead to future problems to the detriment of the current task. Having an LLM buddy to codify my thoughts, regularize them, and apply them with high precision would make me MUCH more productive.

In the end, it may be that LLMs are simply better programmers than 99.999% of people. But there will always be a need for specialists to bridge between the LLM and some other domain, and the programmers of today will be that bridge.

And if not... then AGI will have eaten us all up to make paper clips anyway.


LLMs are very good at translating, so the reverse seems more likely: LLMs interview domain experts and generate specifications.


Imagine if every business had access to teams of Google’s best, perfectly aligned with their goals. This opens up so many possibilities. For the vast majority of businesses, software is a cost center, not a core competency. There are so many projects that are simply too cost-prohibitive right now.

Your fundamental assumption is that there’s a finite amount of software work. Had this been the case, we’d have seen signs of it plateauing over the last 50 years. Until we have true AGI, LLMs are just another tool to increase what humans are capable of. Historically this means humans just ask for more stuff.


> Imagine if every business had access to teams of Google’s best, perfectly aligned with their goals.

You'd have to train it on code written by Google's very best, which is presumably only something Google could do. If you train a model off of github and the like, you're including things like student homework assignments.


ChatGPT-like tools are essentially lowering the barrier to entry. That’s great for casual programmers and beginners who want to play in the big leagues. I’m not sure it’s going to replace experienced developers who truly understand algorithms and architecture and design systems, but it’s going to help a lot of people with <1 year of coding experience build better products.

I feel like this is the WordPress/Blogger.com moment for coding. There should be an explosion in the amount of apps and products.


There will be more creative uses of LLM models that software engineers will have to deliver.

As someone else said, web WYSIWYG editors took a lot of jobs. But then they opened up so many more jobs as more people needed databases and dynamic tools.

It seems like we have just kicked the can further down the road into a larger hole of needs.

My prediction: computers that learn on much, much smaller models, causing more models to be created for an ever-increasing number of use cases. We have opened up a Pandora’s box.


As I’ve heard it said, we will probably overestimate what will happen in the next five years and underestimate what will happen in the next 10.

How much did the internet itself change coding? Not a lot, then a whole bunch.

I think since the industrial revolution the most valuable jobs have been those that require the most distinctly human traits. With AI doing things we previously thought only humans could do, that will shift.

Soon we will have interviews where you are expected to use ChatGPT to solve the interview question.


I don't know, but it would be interesting to see someone recreate the whole underlying backend complexity from the ground up using AI only, and run a simple Twitter clone. I mean server firmware, BIOS, bootloader, OS, kernel, networking stack, web server, DB server, message bus, application server, all AI-made from the ground up.


I probably wrote on the order of 50 lines of actual code in the last 2 weeks. Do you know why? Because most of that time was actually spent discussing what code should be changed, gathering requirements, figuring out how to change the code, and then finally actually changing the code.


If LLMs improve your productivity, you'll be more likely to start your own startup. Why let your employer reap the benefits of your improved productivity?

Optimistically, no one would need to work as an SWE, since they could bootstrap companies at the speed of light.


Running a company is a different job from programming/software engineering, so someone who wants to do one, may not really want to be doing the other.


Get a Co-Founder?


I'm not an optimist but I think younger programmers will still continue to find employment and they will be able to use AI to write code in new ways. Employers have always been interested in hiring young people and that won't change.


My optimistic take is that you shouldn't care because the only thing LLMs will do is take away the job of generating small and easily verifiable boilerplate.

Everything else remains the same.

Don't try to over-prepare for a future that is not coming anytime soon.


> Everything else remains the same.

For now, which may be as little as two years, given the release cadence between GPT 3 and GPT 4, combined with the increased funding levels.

As several research papers have pointed out, GPT 3 may be just a "stochastic parrot", but GPT 4 has "sparks of AGI". I have no idea what GPT 5 will be capable of, but it will be... more.

Next, many people have already done the obvious thing and wired up LLMs in a loop, where code output is automatically compiled and/or tested, with the errors fed back into the LLM to make it automatically fix up any small mistakes.
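
A minimal sketch of that kind of loop, assuming a hypothetical `generate(messages)` wrapper around whatever LLM API you're calling, with "run the file and see if it exits cleanly" standing in for a real test harness:

```python
import subprocess
import sys
import tempfile

def generate(messages: list[dict]) -> str:
    """Hypothetical wrapper around your LLM API; returns a candidate program as a string."""
    raise NotImplementedError

def check(code: str) -> str | None:
    """Run the candidate program; return its error output, or None if it exits cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)
    except subprocess.TimeoutExpired:
        return "timed out after 60 seconds"
    return None if result.returncode == 0 else result.stderr

def code_loop(task: str, max_rounds: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    code = generate(messages)
    for _ in range(max_rounds):
        errors = check(code)
        if errors is None:
            return code  # runs cleanly, we're done
        # Feed the errors back so the model can fix up its own small mistakes.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"That failed with:\n{errors}\nPlease fix it."},
        ]
        code = generate(messages)
    return code  # best effort after max_rounds
```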

This is nothing; it's just sticky-taping something on with external API calls. The real output of LLMs is a probability distribution over the output dictionary, which can be filtered so that instead of naively picking the "most likely output", the LLM will pick the "most likely syntactically valid output". For programming languages, this is relatively simple to add on. For non-programming output generation, this is a much harder problem to solve. In other words, expect to see output quality in SWE fields improve faster than any other field!
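
As a toy illustration of picking the "most likely syntactically valid output": a greedy decoder that skips over high-probability tokens whose addition would break syntax. Here `next_token_distribution` is a stand-in for the model's real probability output, and the bracket-balance check is only a crude proxy for a proper incremental parser:

```python
def next_token_distribution(prefix: str) -> list[tuple[str, float]]:
    """Stand-in for the model: candidate next tokens with probabilities, most likely first."""
    raise NotImplementedError

def is_valid_prefix(text: str) -> bool:
    """Crude syntax check: brackets must stay balanced. A real version would use an incremental parser."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return True

def constrained_greedy_decode(prompt: str, max_tokens: int = 256) -> str:
    out = prompt
    for _ in range(max_tokens):
        # Instead of naively taking the top token, take the most likely token
        # that keeps the partial program syntactically plausible.
        for token, _prob in next_token_distribution(out):
            if is_valid_prefix(out + token):
                out += token
                break
        else:
            break  # no candidate keeps the prefix valid; stop generating
    return out
```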

Even the existing GPT 4 has a (private-preview) ability to look back 32K tokens, which is about 15KB of source code. This is significantly more than what can be processed now, and is amenable to further optimisation. For example, the required token budget decreases by 40% simply by converting 4x space characters to 1x tab character. Eliminating the leading white space entirely cuts that in half again.
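
If you want to measure that kind of effect yourself, a small sketch along these lines works (tiktoken is OpenAI's open-source tokenizer; `example.py` is a placeholder, and the actual savings will vary with the tokenizer and the code):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def spaces_to_tabs(source: str, width: int = 4) -> str:
    """Replace each run of `width` leading spaces with a single tab character."""
    def fix(line: str) -> str:
        indent = len(line) - len(line.lstrip(" "))
        return "\t" * (indent // width) + " " * (indent % width) + line.lstrip(" ")
    return "\n".join(fix(line) for line in source.splitlines())

source = open("example.py").read()  # placeholder: any source file you want to budget for
print(count_tokens(source), "tokens with spaces")
print(count_tokens(spaces_to_tabs(source)), "tokens with tabs")
```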

Tooling is being integrated into ChatGPT and similar LLM-based web apps. Right now, these are just doing simple Google or Bing searches, but as the Wolfram Alpha integration shows, much more powerful integrations are possible. Just wait and see what happens when ChatGPT gets hooked up to a short-term memory scratchpad and a programming playground. And then see what happens when it gets specialised on this automatically, feeding back compiler errors or failed tests as negative reinforcement.

I can foresee a future LLM like GPT 5 replacing the need for junior developers almost entirely. A single senior developer can just give it instructions and have it bang out code at the same rate as 100 junior devs, but with the quality of average developers.

There's an industry metric that a typical developer in a large enterprise produces only 3-10 lines of code per day on average. Maybe we'll have a single senior developer + LLMs be able to bang out hundreds per day?

PS: Imagine even a simple IDE plugin where, in the context of the line of code you're writing, ChatGPT will go off in the background and read the entire source file, the associated documentation, KB articles, Stack Overflow answers, and GitHub issues, and make recommendations on the fly as you type!

Picture a little popup following your edit cursor around with little warnings like "this is actually buggy because of Issue #3512, you'll need a retry loop to work around", or "calling that in that way is a security hole mentioned in CVE-2023-####", etc...

PPS: LLMs like GPT are slow at producing output, but ridiculously fast at reading, able to process something like 10K-100K words per second.


> As several research papers have pointed out, GPT 3 may be just a "stochastic parrot", but GPT 4 has "sparks of AGI".

Conflict of interest. Show it to me or other skeptics and if they say the same then maybe we have something going on.

As for the loop LLM, now that is actually interesting. I wonder if it leads to them producing the correct code the next time around?

> I can foresee a future LLM like GPT 5 replacing the need for junior developers almost entirely.

I think even with a lot of development in the area that's basically the maximum that will be achieved, IMO.

But who knows. I've been following AI news and have even read as far back as the original AI winter (when people spent hundreds of billions of dollars basically just because LISP existed).

To me it seems that AI development is a self-limiting phenomenon, historically. Why that is the case I haven't figured out -- except maybe that the investors get tired of it and funding stops -- but it seems that people get hyped up about what is theoretically possible and not what we can work with right now.

"Put random stuff in, magic happens" -- this seems to be the current belief system relating to AI, at least looking from the side. And it makes me an even bigger skeptic.

Would love to be wrong, I have to say.


Sparks of Artificial General Intelligence: Early experiments with GPT-4: https://arxiv.org/abs/2303.12712

I was like you, I thought GPT 3 was "a neat trick but mostly useless" because the error rate was too high, and the utility was too low due to the limited reasoning ability. I figured it would be a good basis for a next-gen image generator or voice transcriber, but that's about it.

I've been playing around with GPT 4 and now I'm actually paying for it. I'm still learning its strengths and weaknesses, but I've seen it do things that are simply mind-blowing.

The worst output I've seen has been for it to simply echo back the input with no or minimal changes.

Conversely, I've seen it do "creative writing" that I personally struggle with. When writing (fan) fiction I have a bad habit of "telling instead of showing" and using lots of words that end in -ly. This is considered poor style, but I'm not a professional author. I simply asked ChatGPT 4 to fix a paragraph at a time, and presto: my random ramblings suddenly read like a proper novel!


Can you give a few examples of the mind-blowing stuff?


A lot of people are unimpressed with ChatGPT or similar LLMs because they've moved the goalposts into another country. I'm perpetually amazed that any part of it works at all. It's literally a function with a definition of just ```string chat(string tell_me_what_to_do)```, where almost the entire content of the 'chat' function is just a handful of arrays of numbers. There wasn't an army of programmers programming specifics in there with if/else/do/while statements for different values of "what_to_do". You ask, it does it!

A random challenging-but-appropriate task I threw at it was to fix up and then summarize the text of a badly scanned book. The language is in a ~100-year-old style, every second or third word has a scanning error, there are gaps, etc...

It not only cleaned it up (in seconds!), it wrote a nice condensed summary, and a valid critique when prompted. E.g.: it picked up that the author admitted to using only official sources from a single government for his information, which may introduce bias. Most people I know would not be able to write an insightful critique like that!

Then, I asked it to replace the proper names and place names from English with the original native spelling, but only for one country mentioned in the text. It did that correctly too. It didn't change any other names.

This isn't like Google Translate where there's a combo box where you can select "French" as the input language and "Japanese" or whatever as the output language. That function is "string translate( language from, language to, string text)". Its internals are complex code that took man-centuries of effort to write, and is fixed function. It does what it does. It can't fix up French placenames in a Japanese text to use French spelling, or replace romaji in French text with Hiragana.

Here's an example of ChatGPT knocking this out of the park. Given the content from this page: https://fr.wikipedia.org/wiki/Tokyo

Prompt: "Replace Japanese place names such as Tokyo in this French text with the Kanji equivalent, as would be written by a japanese native speaker."

Snippet of output: "À l'origine, 東京 (Tokyo) était un petit village de pêcheurs nommé 江戸 (Edo, « l'estuaire »). Fortifié au XVe siècle, 江戸 (Edo) devient la base militaire du shogun Tokugawa Ieyasu à la fin du XVIe siècle, puis la capitale de son gouvernement féodal. Durant l’époque d'江戸 (Edo, 1603-1868), la ville se développe et devient l'une..."

Notice how Tokugawa wasn't replaced? Well, that's because it's not a place name. I asked ChatGPT to replace place names, not other Japanese words.

Having said that, it's definitely not perfect, it makes mistakes, and can take a bit of poking and prodding to do exactly what was asked of it, especially for obscure languages or scenarios. But... that's moving the goalposts.

Ask yourself how long it would take you to write a function that can replace "words of a certain class" in text written in one language with the spelling from another language? Make sure you don't confuse cross-language homonyms, or words that can be a place name or have a different meaning depending on context, etc...


My optimistic take is that more time will be spent thinking about specification and validation. I also hope that code-synthesis and verification tools become more common.


The most optimistic take is that it's like any other breakthrough technology, and creates many new (SWE) jobs because of the vast many new applications that it enables.


nice


[flagged]


ban


Not appreciated, hope this is flagged.



