
I work for Google, and I just got done with my work day. I was just writing I guess what you'd call "AI generated code."

But the code completion engine is basically just good at finishing the lines I'm writing. If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.

So basically, it's a helpful productivity tool but it's not doing any engineering at all. It's probably about as good, maybe slightly worse, than Copilot. (I haven't used it recently though.)




I also worked at Google (until last Friday). Agree with what you said. My thoughts are:

1. This quote is clearly meant to exaggerate reality, and they are likely including things like fully automated CLs/PRs, which have been around for a decade, as "AI generated".

2. I stated before that if a team of 8 using something like Copilot is as productive as a team of 10, it's fair to say "AI replaced 2 engineers", in my opinion. More importantly, tech leaders would be making this claim if it were true. Copilot and its clones have been around long enough now for the evidence to be in, and no one is stating "we've replaced X% of our workforce with AI" - therefore my claim is (by 'denying the consequent') that using Copilot does not materially accelerate development.


> no one is stating "we've replaced X% of our workforce with AI"

Even if that's been happening, I don't think it would be politically savvy to admit it.

In today's social climate claiming to replace humans with AI would attract the wrong kind of attention from politicians (during an election year) and from the public in general.

This would be even more unwise to admit for a company like Google, which is an "AI producer". They may save such language for closed-door meetings with potential customers during sales pitches, though.


> and from the public in general

Don't think the public will be that concerned about people in Google's salary bracket losing their jobs.


It’s a disservice to the public to assume they aren’t capable of understanding why AI job losses might be concerning even if they aren’t directly impacted. Most people aren’t so committed to class warfare that they will root for the apocalypse as long as it stomps a rich guy first.


You mean poor person. As long as it stomps a poor person. The rich don’t have a habit of getting stomped. They direct other poor people to stomp their contemporaries. The poor don’t have a chance.


I don't think a lot of people realize how few people are "rich" in the sense of not being impacted by the labor market, or how virtually all of them are retirees. CFOs aren't looking forward to a massive shift in the labor market for accountants any more than CPAs. Warren Buffett has a "job," he writes those letters for BH and oversees the firm's investments at a high level... and most of the people who live off of investments have children in the workforce. Even most people whose children live off of their investments have kids in the (nonprofit) workforce.


Software engineers and grocery store workers are in different income brackets, but in the same class (labor/proletarian). It is managers, executives, and investors that are in the capitalist class. Class is determined by your relationship to production.


Software engineer salaries and stock compensation can be enough to shift alignment somewhat, especially after many years of capital accumulation.


If you make the majority of your earnings from passive income, or you do not need to work to live, you are more part of the leisure class.


Two things: capitalists do in fact work; and if you have a sizeable portfolio, you may not need to work and may earn plenty of passive income, yet still work because you add more value at the margin working than fiddling with stock allocations or angel investing or whatnot (vs index funds etc.).


It's easy to get a capitalist to come out of retirement. Most of the time you just have to ask them to take a look at your business. Before you know it they accept a board position and shortly thereafter they are running point as President.


For an illustrative example, you can watch Succession.


I’ve switched from manager to IC and vice-versa a few times at FAANG. Didn’t strike me as moving between the capitalist and proletariat classes, lol!


The public might be concerned, though, that if they are being replaced, many in other positions at other companies will soon be replaced as well.


That’s not how the mind works. People cheered when Elon fired 80% of the Twitter staff. No one cares when people with high paying jobs suffer.


The people who cheered about the firing of 80% of the Twitter staff largely believed (rightly or wrongly) that they were being adversely affected by them. While Google may be seen with more wariness in tech circles, I don't think the average person believes that Google is actively harming them (again, rightly or wrongly).


These aren't the same types of events. In Twitter's case, it was a one-off act, caused by one-off circumstances. With Google, it'd be more of a precursor to a new trend that might soon take root and impact me or those I care about.


I think twitter is an outlier because people hated the employees already for various reasons.

For example they thought that twitter had a bloated workforce because of videos like this (https://www.youtube.com/watch?v=buF4hB5_rFs).

And a lot of people heavily disagreed with how they handled moderation. Take things like the Hunter Biden laptop suppression, or, in the funny category, people getting banned for saying "learn to code" (https://reason.com/2019/03/11/learn-to-code-twitter-harassme...).

Take a random company without controversies and you will find less vitriol about its employees getting fired.


No one cares about the impact of supermarket self-checkout on employees, until their own employer does something similar.


I care as a consumer who hates standing in long lines. My former bank branch had thirteen teller stations and two tellers. This wasn't on a bad day. This was for years.


People in Google salary brackets get jobs at Google-1 salary brackets, pushing junior people at Google-1 to Google-2, all the way down to IT departments at non-tech firms. This impacts everybody who's in the industry or capable of switching.


Why would the general public care about Google employees? Google is, however, a major SaaS provider, and people might start to worry that their employer will soon buy a subscription to whatever Google used to automate those jobs.


The bank tellers didn't go away: they just became higher paid and higher skilled when cash management was no longer the job.


>> Even if that's been happening, I don't think it would be politically savvy to admit it.

When I was working in RPA (robotic process automation) about 7 years ago, we were explicitly told not to say "You can reduce your team size by having us develop an automation that handles what they're doing!"

Even back then we were told to talk about how RPA (and by proxy AI) empowers your team to focus on the really important things. Automation just reduces the friction to getting things done. Instead of doing 4 hours of mindless data input or moving folders from one place to the other, automation gives you back those four hours so your team can do something sufficiently more important and focus on the bigger picture stuff.

Some teams loved the idea. Other leaders were skeptical and never adopted it. I spent the majority of those three years trying to sell them on the idea that automation was good, and very little time actually coding. It's interesting seeing the paradigm shift and seeing this stuff everywhere now.


> Even back then we were told to talk about how RPA (and by proxy AI) empowers your team to focus on the really important things.

As a non-politically savvy person ;-) I have a feeling that this is a similarly dangerous message, since what prevents many teams from focusing on really important things is often far too long meetings with managers and similar "important" stakeholders.


The reason you don't lead with headcount reduction is two-fold.

1. Almost every business has growing workload. That means reassigning good employees and not hiring new headcount, not firing existing headcount. Unipurpose, low-value offshore teams are the only ones who get cut (e.g. doing "{this} for every one of {these}" work).

2. Most operational automation is impossible to build well without deep process expertise from the SME currently performing it. If you fire that person immediately after automating their task, what do you think the next SME tells you, when you need their help?

Successfully scaling an operational automation program therefore relies on additional headcount avoidance (aka improving the volume:employee ratio) and value measurement (FTE-equivalent time savings) to justify/measure it.


> I don't think it would be politically savvy to admit it.

Would it be? Do they care?

Sam Altman's been talking about how GenAI could break capitalism (maybe not the exact quote, but something similar), and these companies have been pushing out GenAI products that could obviously and easily be used to fake photographic or video evidence of things that have occurred in the real world. Elon's obsessed with making an AI that's trained to be a 20-year-old male edgelord from the sewer pits of the internet.

Compared to those things, "we've replaced X% of our workforce with AI" is absolutely anodyne.


100%.

Altman tells anyone who will listen that monopolies are the only path to success in business. He has a lot riding on making sure everyone is addicted to AI and that he's the one selling the shovels.

Google isn’t far off.

Most capitalists have this fantasy that they can reduce their labour expenses with AI and continue stock buy-backs and ever-increasing executive payouts.

What sucks is that they rely on class divisions so that people don’t feel bad when the “overpaid” software developers get replaced. Problem is that software developers are also part of the proletariat and creating these artificial class divisions is breaking up the ability to organize.

It’s not AI replacing jobs, it’s capital holders. AI is just the smoke and mirrors.


Sam's company is not a multi-trillion dollar behemoth that employs hundreds of thousands and has a practical (near-)monopoly on huge swaths of the digital economy.


> I don't think it would be politically savvy to admit it.

Depends on who you ask.

If Trump wins and Elon Musk actually gets a new job, they would be bragging about replacing humans with AI all day long. And corporates are going to love it.

Not sure about what voters think though. But the fact that most of these companies are in California, New York etc means that it barely matters.


Yup, just like full self driving and ending the war in Ukraine in 24 hours.


I find the boast about ending the war to be reasonably likely -- if it is clear the US is switching sides in the conflict, a negotiated capitulation could happen pretty quickly.

In a similar vein, solving world hunger is closer today than it's ever been. The previous best hope was global thermonuclear war, but honestly that would leave enough survivors as to be mostly ineffective, and much more likely to have the opposite result. Severe climate change has a better shot at fully eliminating [human] hunger.


Corporates will soon have to realise the hard reality that when masses of humans have been replaced there won't be masses of humans with salaries to buy said corporate's goods anymore.


AI is socialism, and it's unstoppable. People are trying to stop progress and go back to the old days. Nothing about the universe permits this.

A new economy is forming and there is nothing that can stop it without causing major, unintended fallout.


>> they would be bragging about replacing humans with AI all day long.

Has either bragged about this at all?

The only thing I've heard floated is Musk running a "government efficiency commission", which I just assumed meant he would be looking for ways to gut a lot of the never-ending, never-dying government programs. I've never heard him say the commission's goal was to replace people with AI.

https://www.newsnationnow.com/politics/2024-election/trump-m...

The former president said such an audit would be to combat waste and fraud and suggested it could save trillions for the economy.

As the first order of business, Trump said that this commission will develop an action plan to eliminate fraud and improper payments within six months.


Trump and Musk will get bored quickly if elected. Once in office your power is checked.


[flagged]


That would be the way someone with no real awareness of the philosophies and realities of the two parties in the US would see it. And to be fair, that's a good description of a large chunk of the American electorate.

But you can't have a guy who literally used to relieve himself into a golden toilet take over your party and be anything but the party of big business and billionaires.


>Despite widespread rumors, there is no verified evidence that Trump actually owns a gold toilet.

https://royaltoiletry.com/does-trump-have-a-gold-toilet-unpa...


Fair enough.

Still a guy who operated multiple luxury hotel and golf course properties that would laugh a working man out the front door if he asked for an affordable room.


> no one is stating "we've replaced X% of our workforce with AI"

That's only worth doing if you're trying to cut costs though. If the company has unmet ambitions there's no reason to shrink the headcount from 10 to 8 and have the same amount of output when you can keep 10 people and have the output of 12 by leveraging AI.


Almost all the big tech companies have had layoffs over the past several years. I think it’s safe to say cost cutting is very much part of their goal.


But the specific roles being laid off are arbitrary, and the overall headcount-reduction goal is driven by macroeconomic factors (I'm being generous there), not by new efficiencies.

Note the difference between "cost cutting" (do less, to lower cost) and "efficiency" (do the same, but at lower cost).


The goal of these cost cutting initiatives is not an absolute reduction in cost, but a relative one. They needed to show an improvement in operating margin, i.e. the % of revenue spent on engineers.

If your engineers become 20% more efficient then your margins are better and your problem is solved. (Indeed if you have tech that can make any engineer 20% more efficient then you are back in the game of hiring as many as you can find, as long as each added engineer brings in enough additional revenue.)


Thanks, that is how I read the announcement. The powers that be decided that there must be some quota to be fulfilled, and magically that quota was fulfilled.

AI engineers will not yet get a Nobel prize for putting everyone out of work.


"we've replaced X% of our workforce with AI"

Most likely what is actually happening is that the X% of workforce you would lay off is being put to other projects and Google in general can take on X% more projects for the same labor $$. So there is no real reason to make that particular "replaced" statement.


Google has to sell its AI somehow. The problem is that businesses will see this and want to cut headcount because they go, "Well I guess AI can do it for freeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!". Nope, no way is it writing code for free.


> including things like fully automated CLs/PRs, which have been around for a decade

I haven't seen this yet so I'm intrigued. Is this a commercial product, or internal tooling?


I’m assuming this refers to things analogous to dependabot on GitHub where maybe it automatically updates a library version reference and runs the tests and creates a PR if everything seems good, or similarly for fixing style issues or other stuff that is pretty trivial and has good test coverage.

When you maintain an open source project on GitHub you will occasionally get some open source automated bot that submits a PR to do things like this without you even asking, and I’m sure there’s plenty more you can sign up for or implement yourself.

I wouldn’t really call it AI, but it is automated. I agree with the parent comment that a journalist trying to push an angle would probably lump it in as AI in order to make the number seem larger.


It's common at most mega-corps like Google. For example, if a utility function in an internal library is deprecated and replaced with a different function that has the same functionality, a team might write a script which generates hundreds or thousands of PRs to make the migration to the new function.

You don't want a single PR that does that, because that would affect thousands of projects, and if something goes wrong with a single one, the whole PR needs to be rolled back.


I also work at Google and I agree with the general sentiment that AI completion is not doing engineering per se, simply because writing code is just a small part of engineering.

However in my experience the system is much more powerful than you described. Maybe this is because I'm mostly writing C++ for which there is a much bigger training corpus than JavaScript.

One thing the system is already pretty good at is writing entire short functions from a comment. The trick is not to write:

  function getAc...
But instead:

  // This function smargls the bleurgh
  // by flooming the trux.
  function getAc...
This way the completion goes much farther and the quality improves a lot. Essentially, use comments as the prompt to generate large chunks of code, instead of giving minimum context to the system, which limits it to single line completion.
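The same trick works outside Google with generic completion engines. As a made-up TypeScript illustration (the function is invented, and the body shown is the kind of thing a model plausibly completes, not guaranteed output): typing just `function deb...` gets you a name suggestion, but leading with a descriptive comment often gets you the whole body:

    // Returns a debounced version of `fn` that waits `delayMs` after
    // the last call before invoking it with the latest arguments.
    function debounce<A extends unknown[]>(
      fn: (...args: A) => void,
      delayMs: number,
    ): (...args: A) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), delayMs);
      };
    }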


This type of not having to think about the implementation is what worries me the most, especially in a language that we've by now well established can't be written safely by humans (including per Google's own research into Android vulnerabilities, if I'm not mistaken), at least at the current level of LLMs.

Time will tell whether it outputs worse, equal, or better quality than skilled humans, but I'd be very wary of anything it suggests beyond obvious boilerplate (like all the symbols needed in a for loop) or naming things (function name and comment autocompletes like the person above you described)


> worries me the most

It isn't something I worry about at all. If it doesn't work and starts creating bugs and horrible code, the best places will adjust to that and it won't be used or will be used more judiciously.

I'll still review code like I always do and prevent bad code from making it into our repo. I don't see why it's my problem to worry about. Why is it yours?


Because I do security audits

Functional bugs in edge cases are annoying enough, and I seem to run into these regularly as a user, but there's yet another class of people creating edge cases for their own purposes. The nonchalant "if it doesn't work"... I don't know whether that confirms my suspicion that not all developers are even aware of (as a first step, let alone control for) the risks.


And especially if it generates bugs in ways different from humans - human review might be less effective at catching it...


It generates bugs in pretty similar ways. It’s based on human-written code, after all.

Edge cases will usually be the ones to get through. Most developers don’t correctly write tests that exercise the limits of each input (or indeed have time to both unit test every function that way, and integration test to be sure the bigger stories are correctly working). Nothing about ai assist changes any of this.

(If anybody starts doing significant fully unsupervised “ai” coding they would likely pay the price in extreme instability so I’m assuming here that humans still basically read/skim PRs the same as they always have)


Except that no one trusts Barney down the hall who has Stack Overflow open 24/7. People naturally trust AI implicitly.


It's worrying, yes, but we've had stackoverflow copy-paste coding for over a decade now already, which has exactly the same effects.

This isn't a new concern. Thoughtless software development started a long time ago.


As a security consultant, I think I'm aware of security risks all the time, also when I'm developing code just as a hobby in my spare time. I can't say that I've come across a lot of Stack Overflow code that was unsafe. It has happened (like unsafe SVG file upload handling advice), and I know of analyses that find it in spades, but I personally correct the few cases that I see (I have enough Stack Overflow rep to downvote, comment, or even edit without the user's approval, though I'm not sure I've ever needed that). The unsafe snippets found in studies may be in less-popular answers that people don't come across as often; otherwise we should be seeing more of them, both personally and in customers' code.

So that's not to say there is nothing to be concerned about on Stack Overflow, just that the risk seems manageable and understood. You also nearly always have to fit the code to your own situation anyway. With the custom solutions from generative models, none of this is established yet, and you're not forced to customise (or even look at) the code further if it made a plausible-looking suggestion.

Perhaps this way of coding ends up introducing fewer bugs. Time will tell, but we all know how many wrong answers these things generate in text as well as what they were trained on, giving grounds for worry—while also gathering experience, of course. I'm not saying to not use it at all. It's a balance and something to be aware of

I also can't say that I find it to be thoughtless when I look for answers on stackoverflow. Perhaps as a beginning coder, you might copy bigger bits? Or without knowing what it does? That's not my current experience, though


This is a good idea even outside of Google, with tools like copilot and such.

Often when I don't know exactly what function / sequence of functions I need to achieve a particular outcome, I put in a comment describing what I want to do, and Copilot does the rest. I then remove the comment once I make sure that the generated code actually works.

I find it a lot less flow-breaking than stackoverflow or even asking an LLM.

It doesn't work all of the time, and sometimes you do have to Google still, but for the cases it does work for, it's pretty nice.


Why remove the comment that summarises the intent for humans? The compiler will ignore your comment anyway, so it's only there for the next human who comes along and will help them understand the code


Because the code, when written, is usually obvious enough.

Something like:

  query = query.orderBy(field: "username", Ordering.DESC)
Doesn't need an explanation, but when working in a language I don't know well, I might not remember whether I'm supposed to call orderBy on the query or on the ORM module and pass query as the argument, whether the kwarg is called "field" or "column", whether it wants a string or something like `User.name` as the column expression, how to specify the ordering and so on.


Like he says, the "comment" describes what he wants to do. That's not what humans are interested in. The human already knows "what he wants to do" when they read the code. It's the things like "why did he want to do this in the first place?" that is lacking in the code, and what information is available to add in a comment for the sake of humans.

Remember, LLMs are just compilers for programming languages that just so happen to have a lot of similarities with natural language. The code is not the comment. You still need to comment your code for humans.


> Like he says, the "comment" describes what he wants to do. That's not what humans are interested in.

When I'm maintaining other people's code, or my own after enough time has gone by, I'm very interested in that sort of comment. It gives me a chance to see if the code as written does what the comment says it was intended to do. It's not valuable for most of the code in a project, but is incredibly valuable for certain key parts.

You're right that comments about why things were done the way they were are the most valuable ones, but this kind of comment is in second place in my book.


Or for something that needs, say, a quick mathematical lemma or a worked example. A comment on the "what" is fantastic there.


It's often unnecessarily verbose. If you read a comment and glance at the code that follows, you'll understand what it is supposed to do. But the comment you're giving as an instruction to an LLM usually contains information which will then be duplicated in the generated code.


I see. It might still be better to have a verbose comment than no comment at all, as well as a marker of "this was generated", so (by the age of the code) you have some idea of what quality the LLM was in that year and whether to proofread it once more or not.


External comments are API-usage comments. LLM prompts are also implementation proposals.

Implementation comments belong inside the implementation, so they should be moved there if not deleted.


Next human will put the code in a prompt and ask what it does. Chinese Whispers.


I tried making a meme some months ago with exactly this idea, but for emails. One person tells an LLM "answer that I'm fine with either option" and sends a 5 KB email; the recipient then gets the automatic summary function to tell them (in a good case) "they're happy either way" or (in a bad case) "they don't give a damn". It didn't really work, too complex for the meme format as far as my abilities went, but yeah, the bad-translator effect is something I'm very much expecting from people who use an LLM without disclosing it.


If someone is going to use an LLM to send me an email, I'd much rather them just send me the prompt directly. For the LLM message to be useful the prompt would have included all the context and details anyway, I don't need an LLM to make it longer and sound more "professional" or polite.


That is actually exactly my unstated point / the awareness I was hoping to achieve by trying to make that meme :D


Not necessarily. Your prompt could include instructions to gather information from your emails and address book to tell your friend about all the relevant contacts you know in the shoe industry.


Well that sounds reasonable enough. My only request is that you send me the prompt and let me decide if I want to comply...informed consent!



Wow, I love good, original programming jokes like these, even just the ideas of the jokes. I used to browse r/ProgrammerHumor frequently, but it is too repetitive -- mostly recycled memes and rarely anything new.

This is one that I really liked: https://www.reddit.com/r/ProgrammerHumor/comments/l5gg3t/thi...


(No need to Orientalize to defamiliarize, especially when a huge fraction of the audience is Chinese, so Orientalizing doesn't defamiliarize. Game of Whispers or Telephone works fine.)


Do the Chinese call it English Whispers?


Chinese-Americans, at least, call it a game of Telephone, like everyone else in the English-speaking world except for the actual English.

We call it “Telephone” because “Chinese Whispers” not only sounds racist, it is also super confusing. You need a lot of cultural context to understand the particular way in which Chinese whispers would be different from any other set of whispers.


I happened to re-read this, and to be clear, I'm not Chinese-American. the "we" there means "everyone else in the English-speaking world except for the actual English."


It’s all Greek to them.


Pardon my French.


I can guarantee you there is more publicly accessible javascript in the world than C++.

Copilot will autocomplete entire functions as well, sometimes without comments or even after just typing "f". It uses your previous edits as context and can assume what you're implementing pretty well.


I can guarantee you that the author was referencing code within Google. That is, their tooling is trained off internal code bases. I imagine C++ dwarfs JavaScript there.


Google does not write much publicly available JavaScript. They wrote their own special flavor. (Same for any huge legacy operation.)


Can we get some more info on what you're referring to?


They're probably talking about Closure Compiler type annotations [0], which never really took off outside Google, but (imo) were pretty great in the days before TypeScript. (Disclosure: Googler)

0. https://github.com/google/closure-compiler/wiki/Annotating-J...


I find writing code to be almost relaxing, plus that's really a tiny fraction of dev work. Not too excited about potential productivity gains based purely on authoring snippets. I'd find it much more interesting to boost maintainability, robustness and other quality metrics (not the quality of the AI output, but the actual quality of the code base).


I frequently use copilot and also find that writing comments like you do, to describe what I expect each function/class/etc to do gives superb results, and usually eliminates most of the actual coding work. Obviously it adds significant specification work but that’s not usually a bad thing.


I don't work at Google, but I do something similar with my code: write comments, generate the code, and then have the AI tooling create test cases.

AI coding assistants are generally really good at ramping up a base level of tests, which you can then direct to add more specific scenarios.


Has anyone made a coding assistant which can do this based off audio which I’m saying out loud while I’m typing (interview/pairing style), so instead of typing the comment I can just say it?


I had some success using this for basic input, but never took it very far. It's meant to be customizable for that sort of thing though: https://talon.wiki/quickstart/getting_started/ (Edit: just the voice input part)


Comment Driven Programming might be interesting, as an offshoot of Documentation Driven Programming


That's pretty nice. Does it write modern C++, as I guess is expected?


Yes it does. Internally Google uses C++20 (https://google.github.io/styleguide/cppguide.html#C++_Versio...) and the model picks the style from training, I suppose.


So this is basically the google CEO saying "a quarter of our terminal inputs is written by a glorified tab completion"?


Yes. Most AI hype is this bad. They have to justify the valuations.


"tab completion good enough to write 25% of code" feels like a pretty good hit rate to me! Especially when you consider that a good chink of the other 75% is going to be the complex, detailed stuff where you probably want someone thinking about it fairly carefully.


The problem being that the time spent fixing the bugs in that 25% outweighs the time saved. Now that tools like Copilot are being widely used, studies are showing that they do not in fact boost productivity. All claims to the contrary seem to be either anecdotal or marketing fluff.

https://www.techspot.com/news/104945-ai-coding-assistants-do...


The AI tab completion is >100000% better than the coding assistants: it just saves you typing and doesn't introduce new bugs you need to fix, the way writing buggy, shitty code from a text description does.


As far as I know, LLMs are a genuine boost for junior developers, but still not close to what senior/principal engineers get up to.


I have around 7 YOE, and I have found LLMs useful for very specific questions about syntax whenever I am working in a new language. For example, I needed to write some typescript recently and asked it how can I make a type that does X.

It is not as good with questions about API documentation for popular java libraries though and it will just hallucinate APIs/method names.

If I ask it a generic question like "how can I create a class in Java to invoke this API and store the data in this database" it is pretty useless. I'm sure I could spend more time giving it a better prompt but at that point I can just write the code myself.

Overall they are a better search engine for stackoverflow, but the LLMs are not really helping me code 30% faster or whatever the latest claim is.


It'd be interesting to know how much of Google's code is written by junior engineers. I can't imagine 25% of the code is from juniors, at which point Google's CEO is either exaggerating what he considers LLM-generated code or more than just juniors are using it.

I agree with your take though, it does seem helpful to juniors but not beyond that (yet), and this OP stat seems dubious unless juniors are doing a big portion of the work.


"rm re[TAB]" to remove a file called something like "report-accounting-Q1_2024.docx" is really helpful, especially when it adds quotes as required, but not exciting enough to get me out of bed any earlier in the morning.

I feel it's a bit like the old "measuring developer productivity in LoC" metric.

As I hinted at in another comment, in Java if you had a "private String name;" then the following:

    /**
     * Returns the name.
     * @return The name.
     */
    public String getName() {
        return this.name;
    }
and the matching setter, are easy enough to generate automatically and you don't need an LLM for it. If AI can do that part of coding a bit better, sure, it's helpful in a way, but I'm not worried about my job just yet (or rather, I'm more worried about the state of the economy and other factors).


For me it's really goddamn satisfying having good autocomplete, especially when you are just writing boilerplate lines of code to get the code into a state where you actually get to work on the fun stuff (the harder problems).


Also if your code gets sent to someone else's cloud?


I don't care. The vast majority of code written in the private space is garbage and not unique. Products are usually not won because of the code.

Would I send the source of a trading algo or ChatGPT to a third party? Probably not, but those are the outliers. The code for your xyz SaaS does not matter.

I am probably an outlier in that I don't really care what corpus an LLM trains off of. If it's available in the public space, go for it.


Have you ever had your code repository hosted by Github, Bitbucket, Gitlab or similar?

If so, all your code is sent to cloud.


Answer: yes, some code. But other code I and my company like to keep private.


Where exactly is the repo hosted if there is one?


It's common for companies to have something like self-hosted GitHub Enterprise or self-hosted GitLab hidden behind the company's VPN.


But where is the box where it's hosted? Is it in-house?


There are alternatives out there for self-hosted git. I have a Gitea instance running on a mini PC at home for my own projects.


Do you have backups of that as well? If something were to happen to your mini pc would you lose your code?


Great question, yeah I do. Right now it backs up to a separate NAS on my home network. Every once in a while I'll copy the most important directories onto a microSD card backup, but it's usually going to be at least a few weeks out of date.


Own servers.


Do they manage their own servers? I wonder what proportion of companies would have in house servers managed by themselves.


They are colocated in a data center and you need physical keys to access the rack.


Internally hosted gitlab instances are a thing.


They are, but frequently the boxes where they are hosted are in AWS or similar. Or do frequently companies have actual in house servers for this purpose?


Not in house, but in a "segmented" part of the cloud that comes with service level agreements and access control and restrictions on which countries the data can be hosted in and compliance procedures etc. etc.

An extreme example of this would be the AWS GovCloud for government/military applications.


25% is a great win if you are prone to RSI. And for quicker feedback. But in terms of the overarching programming goal? Churning out code is a small part of it.

Code is often a liability.


It would be funny if they had a metric for how much code is completed by CTRL+V


Yes, isn't that the essential idea of industrialization and automation?


I think the critique here is that the AI currently deployed at Google hasn't meaningfully automated this user's life, because most IDEs already solved "very good autocomplete" more than a decade ago.


LLM autocomplete is on an entirely different level. It's not comparable to traditional autocomplete and mostly does not even compete with traditional autocomplete. LLM autocomplete will sometimes write entire blocks of code for you, with surprising skill. I often wonder how it knew what I wanted. It also generates some wrong code from time to time, but that's well worth it.


> LLM autocomplete is on an entirely different level.

Which is how they've surpassed 25% in new code, as compared to the 10% (made up number, but clearly non-zero) in the past. But incremental improvement, is all.


glorified, EXPENSIVE tab completion.


I assume you're referring to the compute/energy used to run the completion?


to train the model


Yeah, but he wants people to hear "reduce headcount by 25% if you buy our shit!"


How do you know that? You are creating this false sense of expectations and hype yourself.

I am going to argue the contrary. If AI increases productivity 2x, it opens up just as many new use cases that previously didn't seem worth doing for their cost. So overall there will just be more work.


> I am going to argue the contrary. If AI increases productivity 2x, it opens up just as many new use cases that previously didn't seem worth doing for their cost. So overall there will just be more work.

This is the entire history of the computing industry. We’ve been automating our work away for decades and it just creates more demand.


Yeah, this is only side projects, but I've been spending pretty much all of my free time now on side projects, largely because I feel much faster building them with LLMs and it has a compounding motivational effect. I also see so many use cases and work left to do, even with AI, the possibilities almost overwhelm me.

Well I do freelancing as well besides my usual day to day work, and that's also where direct benefits apply, and I'm getting more and more work, overwhelmingly so.


[flagged]


I wouldn't call it genius tab completion. Unfortunately, more than half of the time that the "genius" produces the code, I'm wasting my time reviewing code that is incorrect.


I'm sorry but I don't understand how people say LLMs are simply "tab completion".

They allow me to do much more than that thanks to all the knowledge they contain.

For instance, yesterday I wanted to write a tool that transfers any large file that is still being appended to to multiple remote hosts, with a fast throughput.

By asking Claude for help I obtained exactly what I want in under two hours.

I'm no C/C++ expert yet I have now a functional program using libtorrent and libfuse.

By using libfuse my program creates a continuously growing list of virtual files (chunks of the big file).

A torrent is created to transfer the chunks to remote hosts.

Each chunk is added to the torrent as it appears on the file system thanks to the BEP46 mutable torrent feature in libtorrent.

On each receiving host, the program rebuilds the large file by appending new chunks as soon as they are downloaded through the torrent.

Now I can transfer a 25GB file (and growing) to 15 hosts as it is being written to.

Before LLM this would have taken me at least four days as I did not know those libraries.

LLMs aren't just parrots or tab completers, they actually contain a lot of useful knowledge and they're very good at explaining it clearly.
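If it helps to picture the plumbing without libfuse or libtorrent, here's a heavily simplified TypeScript sketch of just the chunk-and-reassemble idea (the names and the 64 MiB chunk size are made up; in the real tool the chunks are virtual FUSE files and travel over the mutable torrent):

    import * as fs from "fs";
    import * as path from "path";

    const CHUNK_SIZE = 64 * 1024 * 1024; // made-up chunk size

    // Sender side: carve any newly completed chunks out of a file that is
    // still being appended to. Returns the index to resume from next poll.
    function writeNewChunks(srcPath: string, chunkDir: string, next: number): number {
      const size = fs.statSync(srcPath).size;
      const fd = fs.openSync(srcPath, "r");
      while ((next + 1) * CHUNK_SIZE <= size) {
        const buf = Buffer.alloc(CHUNK_SIZE);
        fs.readSync(fd, buf, 0, CHUNK_SIZE, next * CHUNK_SIZE);
        fs.writeFileSync(path.join(chunkDir, `chunk-${next}`), buf);
        next++;
      }
      fs.closeSync(fd);
      return next;
    }

    // Receiver side: append chunks strictly in order as they finish
    // downloading, rebuilding the growing file.
    function appendChunk(destPath: string, chunkPath: string): void {
      fs.appendFileSync(destPath, fs.readFileSync(chunkPath));
    }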


> By asking Claude for help I obtained exactly what I want in under two hours.

Did you use it in your editor or via the chat interface in the browser? Because they are two different approaches, and the one in the editor is mostly a (pretty awesome) tab completion.

When I tell an LLM to "create a script which does ..." I won't be doing this in the editor, even if copilot does have the chat interface. I'll be doing this in the browser because there I have a proper chat topic to which I can get back later, or review it.


I did not use copilot or cursor. I used the Claude interface. I'm planning to setup a proper editor tool such as Cursor as I believe they got much better lately. Last time I tried was 2023 and it was kind of a pain in the butt.


I tried Cursor this month but even though it is much better than copilot, it also tries to do too much. And both of them fail regularly at generating proper autocompletions, which makes Cursor a bigger annoyance because it messes up your code quite often, which copilot doesn't do. Cursor is too aggressive.

But using copilot as a better autocomplete is really helpful and well worth the subscription. Just while typing as well as giving it more precise instructions via comments.

It's like a little helper in the editor, while the ChatGPT/Claude in the browser are more like "thinking machines" which can generate really usable code.


good to know, thanks


That's fine for your quick hack that is probably a reimplementation of an existing program you can't find.

But it's not a production-quality implementation of a new need.


I am of the strong opinion most problems were solved 20-40 years ago and that most code written today is reimplementation using different languages.

That I have shipped production code written with LLMs, in languages I never studied, and approved by seasoned SWEs is evidence that an acceleration is happening.


It's a knowledge base that can explain the knowledge it returns when you ask; how is that not useful in a professional environment for production code?

I mean if you assume all devs are script kiddies who simply copy paste what they find on google (or ChatGPT without asking for explanations) then yeah it's never gonna be useful in a prod setting.

Also you're very wrong to believe every technical need or combination of libraries has already been implemented in open source before.


True, but hey, even if it's not production code, it may be an ad-hoc thing that never gets pushed to production, or it may be code reviewed by C++ experts and improved to production quality. At the very least, someone saved four days with it, and could use the time for something else, maybe something they are expert at. Isn't that still good?


Most of the time, saving time is just an illusion. When that code needs to be changed, people will spend more than 4 days debugging and understanding it. The mental model behind it belonged to the AI; it can make sense or not at all. You'll figure it out after 4 days.


The code is 2 files of 80 lines each and is very clear. There's no way any software developer needs 4 days to understand what it does.

Moreover Claude can explain the functions used very clearly (if you're too lazy to jump to definition in your editor)

LLMs are becoming actually useful to developers new to a language. Just as Google was 20 years ago.


People talk about completely different things. The article was about Google using LLMs to generate code, not people making 80 lines with them at home. There is a huge difference. I don't see any problem with the latter, but with the former there are many problems.


That sounds like a great idea, are you going to open source that?


I think I will. I don't have time to maintain additional software for other people right now, but I'm definitely planning on open sourcing it when I get time.


Yeah, I see your point.

However, I think that you might open source the thing with a disclaimer of no maintenance. Whoever is willing to maintain it can just fork it and move along.


> thanks to all the knowledge they contain

This is what's problematic with modern "AI". Most people inexperienced with it, like the parent commenter, will uncritically assume these LLMs possess "knowledge". This I find the most dangerous and prevalent assumption. Most people are oblivious to how bad LLMs are.


I know exactly how bad the output they give is, because I ask for output that I can understand, debug and improve.

People misusing tools don't make tools useless or bad. Especially since LLMs designers never claimed the compressed information inside models is spotless or 100% accurate, or based on logical reasoning.

Any serious engineer with a modicum of knowledge about neural networks knows what can or can't be done with the output.


Do people find these AI auto complete things helpful? I was trying the XCode one and it kept suggesting API calls that don't exist. I spent more time fixing its errors than I would have spent typing the correct API call.


I really really dislike the ones that get in your way. Like I start typing something and it injects random stuff (yes in the auto-complete colors). I have a similar feeling to when you hear your voice back in a phone: completely disabling your thought process.

In IntelliJ thankfully you can disable that part of the AI, and keep the part that you trigger when you want something from it.


> I have a similar feeling to when you hear your voice back in a phone: completely disabling your thought process.

This is a fantastic description of how it disturbs my coding practice which I hadn't been able to put into words. It's like someone is constantly interrupting you with small suggestions whether you want them or not.


This is it. I have a picture in my mind, and then it puts 10 lines of code in front of me and my brain can't ignore it. When I'm done reviewing that, it's already tainted my idea.


I find the simpler engines work better.

I want the end of the line completed with focus on context from the working code base, and I don't want an entire 5 line function completed with incomplete requirements.

It is really impressive when it implements a 5 line function correctly, but it's like hitting the lottery.


I particularly like the part where it suggests changes to pasted code.

When I copy and paste code, very often it needs some small changes (like changing all xs to ys and at the same time widths to heights).

It's very good at this, and does the right thing the vast majority of the time.

It's also good with test code. Test code is supposed to be explicit, and not very abstracted (so someone only mildly familiar with a codebase that's looking at a failing test can at least figure the cause). This means it's full of boilerplate, and a smart code generator can help fill that in.
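For a made-up illustration of the kind of explicit, boilerplate-heavy test code this works well for (jest-style; `parsePrice` is invented for the example and defined inline to keep the sketch self-contained):

    // A trivial function under test, invented for this sketch.
    function parsePrice(input: string): { cents: number } {
      const value = Number(input);
      if (!Number.isFinite(value) || value < 0) throw new Error(`bad price: ${input}`);
      return { cents: Math.round(value * 100) };
    }

    describe("parsePrice", () => {
      it("parses a whole-dollar amount", () => {
        expect(parsePrice("42")).toEqual({ cents: 4200 });
      });

      // Once the first case exists, the remaining near-duplicates are
      // exactly what a completion engine is good at filling in.
      it("parses a decimal amount", () => {
        expect(parsePrice("4.20")).toEqual({ cents: 420 });
      });

      it("rejects a negative amount", () => {
        expect(() => parsePrice("-1")).toThrow();
      });
    });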


Visual Studio "intellisense" has always been pretty good for me. Seemed to make good guesses about my intentions without doing anything wild. It seemed to use ad hoc rules and patterns, but it worked and then got out of the way.

Then it got worse a couple of years ago when they tried some early-stage AI approach. I turned it off. I expect that next time I update VS it'll have got substantially worse and it will have removed the option for me to disable it.


Agreed, the old Visual Basic, Visual C++, Borland Delphi, Visual C# experiences were how I dove into the deep end of several languages back in the late 90's/early 2000's. Things were VERY discoverable at that point. Obviously a deeper understanding of a language is necessary for doing real work, but noodling around just trying to get a feel for what can be done, is a great way to get started.


I like Cursor, it seems very good at keeping its autocomplete within my code base. If I use its chat feature and ask it to generate new code that doesn’t work super well. But it’ll almost always autocomplete the right function name as I’m typing, and then infer the correct parameters to pass in if they’re variables and if the function is in my codebase rather than a library. It’s also unsurprisingly really good at pattern recognition, so if you’re adding to an enum or something it’ll autocomplete that sensibly too.

I think it’d be more useful if it was clipboard aware though. Sometimes I’ll copy a type, then add a param of that type to a function, and it won’t have the clipboard context to suggest the param I’m trying to add.


I really like Cursor but the more I use it the more frustrated I get when it ends up in a tight loop of wanting to do something that I do not want to do. There doesn’t seem to be a good way to say “do not do this thing or things like it for the next 5 minutes”.


It probably depends on the tool you use and on the programming language. I use Supermaven autocomplete when writing Typescript and it’s working great, it often feels like it’s reading my mind, suggesting what I would write next myself.


I mostly use one-line completes and they are pretty good. Also I really like when Copilot generates boilerplate like

    if err != nil {
      return fmt.Errorf("Cannot open settings: %w", err)
    }


I use the one at G and it's definitely helpful. It's not revolutionary, but it makes writing code less of a headache when I kinda know what that method is called but not quite.


I often delete large chunks of it unread if it doesn't do what I expected. It's much like copy and paste; deleting code doesn't take long.


So your test is "seems to work"?


No, what I meant is that, much like when copying code, I only keep the generated source code if it's written the way I would write it.

(By "unread" I meant that I don't look very closely before deleting if it looks weird.)

And then write tests. Or perhaps I wrote the test first.


Oh, if the AI doesn't do what you expected, got it.


Right now my opinion is that they're 60% unhelpful, so I largely agree with you. Sometimes I'll find the AI came up with a somewhat better way of doing something, but the vast majority of the time it does something wrong or does something that appears right, but it's actually wrong and I can only spot it with a somewhat decent code review.


I suspect that if you work on trivial stuff that has been asked on Stack Overflow countless times, they work very nicely.


This is what I've been noticing. For C++ and Swift, it makes pretty unhelpful suggestions. For Python, its suggestions are fine.

Swift is especially frustrating because it will hallucinate the method name and/or the argument names (since you often have to specify the argument names when calling a method).


Ah I've had it hallucinate non-existing methods in python rather often.

Or when I say I need to do something, it invents a library that conveniently happens to just do that thing and writes code to import and use it. Except there's no such library of course.


No, not at all.

"classic" intellisense is reliable, so why introduce random source in the process?


I use Codeium in NeoVim and yes I find it very helpful. Of course, is not 100% error free, but even when it has errors most of the time it is easier for me to fix them than to write it from scratch.


Often yes. There were times when writing unit tests was just me naming the test case, with 99% of the test code auto-generated based on the existing code and the name.


Looks like the model is not trained well. In my experience, after making a few projects (2 seems to be enough), even the old Xcode managed to give good suggestions in much more than 50% of cases.


It is useful in our use case.

Realtime tab completion is good at some really mundane things within the current file.

You still need a chat model, like Claude 3.5 to do more explorational things.


I was evaluating it for a month and caught myself regularly switching to an IDE with non-AI intellisense because I wanted code that actually works.


No, not at all. It’s just the hype. It doesn’t replace engineering.


The one Xcode has is particularly bad, unfortunately.


Copilot is very good.


This is my experience as well. LLMs are great to boost productivity, especially in the hands of senior engineers who have a deep understanding of what they're doing because they know what questions to ask, they know when it's safe to use AI-generated code and they know what issues to look for.

In the hands of a junior, AI can create a false sense of confidence and it acts as a technical debt and security flaw multiplier.

We should bring back the title "Software engineer" instead of "Software developer." Many people from other engineering professions look down on software engineers as "Not real engineers" but that's because they have the same perspective on coding as typical management types have. They think all code is equal, it's unavoidable spaghetti. They think software design and architecture doesn't matter.

The problems a software engineer faces when building a software system are the same kinds of problems that a mechanical or electrical engineer faces when building any engine or system. It's about weighing up trade-offs and making a large number of nuanced technical decisions to ultimately meet operational requirements in the most efficient, cost-effective way possible.


In my day to day, this still remains the main way I interact with AI coding tools.

I regularly describe it as "The best snippet tool I've ever used (because it plays horseshoes)".


Horseshoes? As in “close enough”?


Or, as in, “Ouch, man! You hit my foot!”


As long as hand grenades arent introduced, I could live with that.


Honestly, I don't think "close only count in horseshoes, hand grenades, and production code" will ever catch on...


This is why I frame it as a "snippets" plugin, rather than a Code generation tool.

I would be very confused if someone told me that they uncritically used the generated code from a snippet program with no manual input or understanding, and I feel the same with Copilot. At best, it suggests an auto-complete that I read and interpret before accepting.

The closest I come to "code generation" is during test writing, where occasionally I will let the description generate some setup, but only in tests where there are a broad number of examples to follow, and I am still going to end up re-writing a decent chunk of it based on personal example. I would not "let it write the test suite for me" and then trust the green, and I suspect that would easily fail code review (though it would be an interesting experiment...).

Obviously your comment was a good goof and well made, but it does speak to a little bit of the disconnect between what is being touted as an "AI coding tool" and how I, a person who makes React Native apps to pay my rent, actually use the dang thing (i.e., "a pretty good snippets plugin"). Is my code "AI generated"? I wouldn't call it that, but who can say definitively? We're in a fun new semantic world now.


I'm working on a CRM with a flexible data model, and ChatGPT has written most of the code. I don't use the IDE integrations because I find them too "low level" - I work with GPT more in a sort of "pair programming" session: I give it high level, focused tasks with bits of low level detail if necessary; I paste code back and forth; and I let it develop new features or do refactorings.

This workflow is not perfect but I am definitely building out all the core features way faster than if I wrote the code myself, and the code is in quite a good state. Quite often I do some bits of cleanup, refactorings, making sure typings are complete myself, then update ChatGPT with what the code now looks like.

I think what people miss is there are dozens of different ways to apply AI to your day-to-day as a software engineer. It also helps with thinking things through, architecture, describing best practices.


I share your sentiment. I've written three apps where I've used language models extensively (a different one for each: ChatGPT, Mixtral and Llama-70B), and while I agree that they were immensely helpful in terms of velocity, there are a bunch of caveats:

- it only works well when you write code from scratch, context length is too short to be really helpful for working on existing codebase.

- the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively. If you trust the output and had to debug it later it would be a painfully slow process.

Also, I didn't really notice a significant difference in code quality; even the best model (GPT-4) writes code that doesn't work, and I find it much more efficient to use open models on Groq due to the really fast inference. Watching ChatGPT slowly typing is really annoying (I didn't test o1 and I have no interest in doing so because of its very low throughput).


> context length is too short to be really helpful for working on existing codebase.

This is kind of true, my approach is I spend a fairly large amount of time copy-pasting code from relevant modules back and forth into ChatGPT so it has enough context to make the correct changes. Most changes I need to make don't need more than 2-3 modules though.

> the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively.

I think this really depends on what you're building. Making a CRM is a very well-trodden path, so I think that helps? But even when it came to asking ChatGPT to design and implement a flexible data model, it did a very good job. Most of the code it's written has worked well. I'd say maybe 60-70% of the code it writes I don't have to touch at all.

The slow typing is definitely a hindrance! Sometimes when it's a big change I lose focus and alt-tab away, like I used to do when building large C++ codebases or waiting for big test suites to run. So that aspect saps productivity. Conversely though I don't want to use a faster model that might give me inferior results.


> approach is to spend a fairly large amount of time copy-pasting code from relevant modules back and forth into ChatGPT

It can work, but what a terrible developer experience.

> I'd say maybe 60-70% of the code it writes I don't have to touch at all

I used to write web apps, so the ratio was even higher I'd say (maybe 80-90% of the code didn't need any modification), but the app itself wouldn't work at all if I didn't make those 10% of changes. And you really need to read 100% of the code because you won't know upfront where those 10% will be.

> The slow typing is definitely a hindrance! Sometimes when it's a big change I lose focus and alt-tab away, like I used to do when building large C++ codebases or waiting for big test suites to run.

Yeah, exactly. It's xkcd 303 but with “AI processing the response” instead of “compiling”. Having an instant response was a game changer for me in terms of focus, and hence productivity.

> I don't want to use a faster model that might give me inferior results

As I said earlier, I didn't really feel the difference in quality so the switch was without drawbacks.


> I'd say maybe 60-70% of the code it writes I don't have to touch at all.

...yet. Bugs can take time to surface.


And this is equally true whether the code was entirely written by a human or not.


... except that with "not", you get "the output code is pretty much always broken in some way".


> Also, I didn't really notice a significant difference in code quality, even the best model (GPT-4) writes code that doesn't work,

Interesting; personally, I have noticed a difference, mostly in how well the models pick up small details and context. Although I do have to agree that the open Llama models are generally fairly serviceable.

Recently I have tended to lean towards Claude 3.5 Sonnet, as it seems slightly better, although that does differ per language as well.

As far as them being slow, I haven't really noticed a difference. I use them mostly through the API with Open WebUI, and the answers come quickly enough.


I use o1 for research rather than coding. If I have a complex question that requires combining multiple ideas or references and checking the result, it's usually pretty good at that.

Sometimes that results in code, but it's the research and cross-referencing that's actually useful with it.


It's interesting to see these LLM tools turning developers into no-code customers. Where tools like visual site builders allowed those without coding experience to build a webpage, LLMs are letting those with coding experience skip the step of coding.

There's not even anything wrong with that; don't take my comment the wrong way. It is an interesting question what happens at scale, though. We could easily find ourselves in a spot where very few people know how to code, and most of those producing code don't actually know how it works and couldn't find or fix a bug if they needed to. It also means LLMs would be stuck with today's code for a training set until they can invent their own coding paradigms and languages, at which point we're all left in the dust trusting them to work right.


> I paste code back and forth

There is this tool, Aider. It takes your prompt, adds code files (sometimes not all of your code files, but the ones it figures are relevant), prepares one long prompt, sends it to an LLM, receives the response, and makes a git commit based on the response. If you'd rather review git commits, it can save you the back-and-forth copy-pasting. https://aider.chat/


Note that the default mode will automatically change and commit the code, which I found counter-intuitive. I prefer using the architect mode, where it first tells you what it is going to do, so you can iterate on it before making changes.


This is exactly how I’ve used Copilot for over a year now. It’s really helpful! Especially with repetitive code. Certainly worth what my employer pays for it.

The general public has a very different idea of that, though, and I frequently meet people who are very surprised that the entire profession hasn’t been automated yet, based on headlines like this.


Just because you are using it like that doesn't mean it can't be used for the whole stack on its own, and the public, including laymen such as the Nvidia CEO and Sam, think that yes, we (I'm a dev) will be replaced. Plan accordingly, my friend.


> Just because you are using it like that doesn't mean it can't be used for the whole stack

Well no, but we have no evidence it can be used for the whole stack, whatever that means.


Even last year's GPT-4 could make a whole iPhone app from scratch for someone who doesn't know how to code. You can find videos online. I think you are applying the ostrich method, which is understandable. We need to adapt.


Complexity increases over time. I can create new features in minutes for my new self-hosted projects; equivalent work in my enterprise codebase takes days...


The new Gemini has a context window of millions of tokens. Think big and project out 1-2 years.


> I think you are applying the ostrich method, which is understandable

Asking for evidence is not being an ostrich.


The ostrich method is avoiding the existing evidence of full-stack LLM programming that is available and searchable online.


Making a simple app isn't evidence that it will replace people, any more than a 90%-good self-driving car is evidence that we'll get a 100%-good self-driving car.


Which industry would you pivot to? The only industry that is desperate for workers right now is the defense industry. But manufacturing shells for Ukraine and Israel does not seem appealing.


I was a hacker before the entire stack I work in was common or released, and I’ll be one when all our tools change again in the future. I have family who programmed with punch cards.

But I doubt the predictions from men whose net worth depends on the hype they foment.


It's not tools. It's intelligent agents capable of human output.


The "laymen" was ironic, of course.


A few years ago we called that IntelliSense, right?

I remember many years ago, as a Java developer, NetBeans could do such things as complete `psvm` to "public static void main() {...}". Or, if you had a field "private String name;", you could press some key combination and it would generate the getter and setter for you, complete with javadoc, which was mandatory at that place because apparently you need "Returns the name.\n @return The name." on a method called getName(), in case you wondered what it was for.


I think most people define "IntelliSense" as "IDE suggestions based on static analysis results". Sometimes it blends in a bit of heuristics or usage statistics as an added feature, depending on the tool. The suggestions are mostly deterministic, based on the actual AST of your code, and never hallucinate. They may not always be helpful, but they can never be wrong.

On the other hand, LLMs are completely different: they are based on machine learning, and everything is statistical. The output depends on training data and context. They are more useful but make a ton of mistakes.


Yes, Copilot and other LLM coding tools are just a (much) better version of IntelliSense.


Much worse imo.


That could be, too. I don't use LLMs, so I'm just giving them the benefit of the doubt based on other commenters here.


Most JetBrains IDEs come with those snippets, and if you're using IDEA, 50%+ of the code will be generated by the IDE.


That's what I thought. In recent weeks, most of the code I’ve written has been AI-generated. But it was mostly JSDoc comments, type checking (I'm writing JavaScript), abstracting code if I see that I'm repeating myself a little too often, etc.

All things that I would consider tedious housekeeping, but nothing that needs serious reasoning.

It's basically a glorified LSP.


I know you're not saying anything revolutionary, but this is the best succinct yet fair description of these tools that I've seen. They're not worthless, but they're not job-destroying.


You're right, it's not revolutionary at all. But I'm glad you liked my summary!


Before I go and rip out and replace my development workflow, is it notably better than autocomplete suggestions from CoC in Neovim (with, say, rust-analyzer)? I'm generally pretty impressed by how quickly it gives me the right function call or whatever, or it's one of the top few.


It's more than choosing the right function call; it goes further than that. If your code has patterns, it recognises and suggests them.

For instance, one I find very useful is that we have this pattern of checking the result of a function call, logging the error, and returning, or whatever. So now, every time you have `result = foo()`, it will auto-suggest `if (!result) log_error...` with a generally very good error message.
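
Roughly what that looks like in practice (a made-up, TypeScript-ish sketch; the function names are just stand-ins for whatever your codebase uses):

    // Stand-in declarations so the sketch is self-contained; real code would import these.
    declare function fetchUserProfile(id: string): Promise<{ name: string } | null>;
    declare function logError(message: string): void;

    async function handleRequest(userId: string): Promise<void> {
      const result = await fetchUserProfile(userId); // I type this line...
      if (!result) {                                 // ...and the tool suggests this whole block,
        logError(`fetchUserProfile failed for user ${userId}`); // error message included.
        return;
      }
      // carry on with result.name, etc.
    }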

Very basic, but damn convenient. The more patterns you use, the more helpful it becomes.


Does it make you 25% more productive?


Between the fraction of my time I spend actually writing code, and how much of the typing time I’m using to think anyway, I dunno how much of an increase in my overall productivity could realistically be achieved by something that just helped me type the code in faster. Probably not 25% no matter how fast it made that part. 5% is maybe possible, for something that made that part like 2-3x faster, but much more than that and it’d run up against a wall and stop speeding things up.


I imagine that those who cherished the written word thought similar thoughts when the printing press was invented, when the typewriter was invented, and before Excel took over bookkeeping.

My productivity isn't enhanced by much. It's only 1%... 2%... 5%... but that's globally, for each employee.

Have you ever dabbled with, mucked around in, a command line? Autocomplete functions there save millions of man-hour-typing-units per year. Something to think about.

A single employee, in a single task, for a single location may not equal much gained productivity, but companies now think on much larger scales than a single office location.


This is a fallacy, because there is no way to add up 1% savings across 100 employees into an extra full-time employee.

Work gets scheduled on short time frames. 5% savings isn't enough to change the schedule for any one person. At most, it gives me time to grab an extra coffee. I can't string together "foregone extra coffees" into "more tasks/days in the schedule".


This. I had the same conversation years ago with someone who said "imagine if Windows booted 30s faster, all the productivity gains across the world!" And I said the same thing you did: people turn their computer on and then make a cup of tea.

Now making a kettle faster? That might actually be something.


If 25% of code was AI-written, wouldn't it be a 33[.333...]% increase in productivity?
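
(Taking the 25% figure at face value and assuming a line of code equals a line of effort: if AI writes 25 of every 100 lines, engineers type the other 75, and 100/75 ≈ 1.33, i.e. roughly a third more output per human-written line.)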


It is not a direct correlation. I might write 80% of the lines of code in a week, then spend the next 6 months on the remaining 20%. If the AI was mostly helpful in that first week, the overall productivity gain would be very low.


Who spends 100% of their time actually typing code?

It’s probably closer to 10% than 100%, especially at big companies.

One thing I would love to see is reports of benefits from various tools coming with one’s typing ability in WPM. I’d also like to see that on posts where people express a preference for “a quick call” or stopping by your desk rather than posting what they want in chat. I have some hypotheses I’d like to test out.


Not if there was also an 8.333̅% increase in slacking off.

Wait, no. That should be based on how much slacking off Google employees do ordinarily, an unknown quantity.


You can just check Memegen traffic to figure that one out.


This is a great anecdote. SOTA models will not provide “engineering” per se, but they will easily double the productivity of a product manager who is exploring new product ideas or technologies. They are much more than intelligent auto-complete. I have done more with side projects in the last year than I did in the preceding decade.


One of my friends put it best: I just did a month's worth of experimentation in two hours.


I find this hard to believe. Can someone give me an example of something that takes months that AI can correctly do in hours?


Not hours, but days instead of months: porting a roughly 30k-line legacy LiveScript project to TypeScript. Most of the work is in tweaking a prompt for Claude (using Aider) so the porting process is done correctly.


Thankfully, it seems like AI is best at automating the most tedious and arguably most useless endeavor in software engineering: rewriting perfectly good code in whatever the language du jour is.


Again, what AI is good at shows the revealed preferences of the training data, so it does make sense that it would excel at pointless rewrites.


Legacy code in a dynamically typed language is never good.


Use Undermind to gather a literature review of a field adjacent to the one you’re working in but with a wealth of information that you don’t yet know.

Use OpenAI to convert a few thousand lines of code from a language you're familiar with to one you’re not, as all the state-of-the-art tools in the field above use that language. Debug all the issues that arise from the impedance mismatch between the languages. Recreate the results from the seminal paper in the field to verify that the code works, and run it on your own problem. Write a stream-of-consciousness post without spell-checking, then throw it into GPT and ask it to fix it.


Sounds to me like you're tooting your own horn.


I can totally see it.

It is actually a testament to the fact that parts of Google's code are ... kinda formulaic to some degree. Prior to the LLM takeover, we already heard praise for how Google's code search works wonders in helping its engineers write code; LLMs just brought that experience to the next level.


We've had excellent code completion in editors for decades, long before this current AI hype cycle. So I guess by that definition, we've all been writing AI-assisted code for a very long time.


I'd say so, and it's a bit misleading to leave that out. Code generation is almost as old as computing. So far, most of it happened to be deterministic.


Yeah, but it didn't cost trillions or need its own nuclear power plant. No one disputes that LLMs/AI are cool and can be helpful, but at what cost? Where is the ROI?


So more or less on par with continue.dev using a local starcoder2:3b model.


I wondered if this is the real context, i.e. they are just referring to code completion as AI-generated code. But the article seems like it is referring to more than that?


Sounds like JetBrains' new local AI autocomplete. If it's anything like that, it's honestly my ideal application of generative deep learning.


Stuff that works well with AI seems to correlate pretty well with high-churn changes. I've had good luck using AI to port large numbers of features from version A to version B, or to get code with a lot of dependencies under mocked unit tests.

It's easy to see that adding up quickly to represent large percentages of the codebase by line, but it's not feature development or solving hard problems.


Same things I use it for as well - crap like "update this class to use JDK21" or "re-implement this client to use AWS SDKv2" or whatever.

And it works maybe... 80% of the way and I spend all my time fixing the remaining 20%. Anecdotally I don't "feel" like this really accelerates me or reduces the time it would take me to do the change if I just implemented the translation manually.


Amazon is publicly claiming that they have saved hundreds of millions on JVM upgrades using AI, so while it feels trivial (because before, that work would have ended up in the "just don't do it" pile), it's a relevant use case.


I wonder how this works with IP rights in the USA. Like, is `function getAc` eligible for copyright protection, but `tionHandler()` isn't? After all, [1]

[1] https://www.reuters.com/legal/ai-generated-art-cannot-receiv...


Thank you for this comment. So the code written in this manner isn't really "created by AI"; AI is just a nice additional feature of an editor.

I wonder if the enormous hype around AI is a good or a bad thing; it's obviously both, but will the good win out over the bad, or will the disappointment eventually be so overwhelming as to extinguish any enthusiasm?


How do you square this comment with the one right below it[1], which explicitly confirms the statement that Google is using GenAI via Gemini to write code? Lots of mixed signals coming from the Googlers here.

1: https://news.ycombinator.com/item?id=41992028


This is pretty much what I've found with Copilot as well. It's like a slightly smarter autocomplete in most cases. Copilot tends toward being a little eager sometimes, but it's easy enough to just ignore the suggestions when it starts going down a weird path.


This autocomplete seems about on par with GitHub Copilot. Do you also get options for prompting it on specific chunks of code and performing specific actions, such as writing docs or editing existing code? All things that come standard with GitHub Copilot now.


I'm confused; I've been doing similar tab completion for function names in Eclipse since about 2003...


We have this at our company too. I guess it’s useful, but it doesn’t really save a whole lot of time.


Which editor is Google's AI code completion integrated with? VS Code?


Yeah


It's also useful for writing unit tests, comments, and descriptions, so if you count all of that as code, together with boilerplate stuff, then yeah, it could add up to 25%.


> If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.

I really mean no offense, but your example doesn't sound much different from what old IDEs (say, NetBeans) used to do 15 years ago.

I could design a Swing UI and it would generate the code, and if I wanted to override a method it would generate decent boilerplate (a getter, like in your example) along with the usual comments and a definitely correct parameter list (with correct types).

Is this "AI code" thing something that only appears new because at some point we abandoned IDEs with very strong IntelliSense (etc.)?


This video is a pretty good one on how it works in practice: https://storage.googleapis.com/gweb-research2023-media/media...


"Our overhyped Autocomplete Implementation (A.I.) is completing 25% of our lines of code so well that we need to fund nuclear reactors to power the server farms."


My first reaction to the title was, "That explains why things are broken," but this explanation makes so much sense. Thanks for clarifying.

But yeah, I wish the new version of Chrome worked better. ¯\_(ツ)_/¯


Kerry said hi



