Hacker News | marcyb5st's comments

I am more afraid that AI will actually deliver what CEOs are touting. People who are now working will become unemployable and will have to pivot to something else, overcrowding those other sectors and driving wages down.

If that comes to pass, you will work the same or more for less money than you do now.

Basically a jump back to a true plutocracy, since only a few people will siphon off the wealth generated by AI, and that wealth will give them substantial temporal power.


This is basically just a standard, clichéd doomer prediction about any new development.

I mean, I just don't see any evidence of that happening. TBF I'm a SWE, so I can only speak to that segment, but it's literally worse than useless for working with anything software-related that's non-trivial...

I see that sentiment here all the time and I don't understand what you must be doing; our projects are far from non trivial and we get a lot of benefit from it in the SWE teams. Our software infra was always (for almost 30 years) built to work well with outsourcing teams, so maybe that is it, but I cannot understand how you can get quite such bad results.

> our projects are far from non trivial

Well, there you go; as they say, it's okay for trivial stuff.


Butting in here, as I share monkaiju's sentiment: I'm working on a legacy (I can't emphasize this enough) Java 8 app that's doing all sorts of weird things with class loaders and dynamic entities which, among other things, is keeping it on Java 8. It has over ten years of development cruft all over it, and code coverage of maybe 30-40%, depending on when you measure it in the 6+ years I've been working on it.

This shit was legacy when I was a wee new hire.

GitHub Copilot has been great at getting that code coverage up marginally but ass otherwise. I could write you a litany of my grievances with it, but the main one is how it keeps inventing methods when writing feature code. For example, in a given context, it might suggest `customer.getDeliveryAddress()` when it should be `customer.getOrderInfo().getDeliveryInfo().getDeliveryAddress()`. It's basically a dice roll whether it will remember this the next time I need a delivery address (but perhaps no surprise there). I noticed that if I needed a different address in the interim (like a billing address), it's more likely to get confused between the delivery address and the billing address. Sometimes it would even think the address is in the request arguments (so it would suggest something like `req.getParam('deliveryAddress')`), and this happens even when the request is properly typed!

I can't believe I'm saying this but IntelliSense is loads better at completing my code for me as I don't have to backtrack what it generated to correct it. I could type `CustomerAddress deliveryAddress = customer` let it hang there for a while and in a couple of seconds it would suggest to `.getOrderInfo()` and then `.getDeliveryInfo()` until we get to `.getDeliveryAddress()`. And it would get the right suggestions if I name the variable `billingAddress` too.

"Of course you have to provide it with the correct context / just use a larger context window." If I knew the exact context Copilot would need to generate working code, that would eliminate more than half of what I need an AI copilot in this project for. Also, if I have to add more than three or four class files as context for a given prompt, that's not really more convenient than figuring it out by myself.

Our AI guy recently suggested a tool that would take in the whole repository as context. Kind of like Sourcebot (maybe it was Sourcebot?), but the exact name escapes me at the moment. Because it failed: either there were still too many tokens to process or, more likely, the project was too complex for it. The thing with this project is that although it's a monorepo, it still relies on a whole fleet of external services and libraries to do some things. Some of these services we have the source code for, but most not, so even in the best case "hunting for files to add to the context window" just becomes "hunting for repos to add to the context window". Scaling!

As an aside, I tried to greenfield some apps with LLMs. I asked Codex to develop a minimal single-page app for a simple internal lookup tool. I emphasized minimalism and code clarity in my prompt. I told it not to use external libraries and rely on standard web APIs.

What it spewed forth is the most polished single-page internal tool I have ever seen. It is, frankly, impressive. But it only managed that because it basically spat out the most common Bootstrap classes, recreated the W3Schools AJAX tutorial, and put it all in one HTML file. I have no words and I don't know whether to scream. It would be interesting to see how token costs evolve over time for a 100% vibe-coded project.


Copilot is notoriously bad. Have you tried (paid plans) codex, Claude or even Gemini on your legacy project? That's the bare minimum before debating the usefulness of AI tools.

> Copilot is notoriously bad.

"notoriously bad" is news to me. I find no indication from online sources that would warrant the label "notoriously bad".

https://arxiv.org/html/2409.19922v1#S6 from 2024 concludes it has the highest success rate in easy and medium coding problems (with no clear winner for hard) and that it produces "slightly better runtime performance overall".

https://research.aimultiple.com/ai-coding-benchmark/ from 2025 has Copilot in a three-way tie for third above Gemini.

> Have you tried (paid plans) codex, Claude or even Gemini on your legacy project?

This is usually the part of the pitch where you tell me why I should even bother, especially as one would require me to fork over cash upfront. Why will they succeed where Copilot has failed? I'm not asking anyone to do my homework for me on a legacy codebase that, in this conversation, only I can access---that's outright unfair. I'm just asking for a heuristic, a sign, that the grass might indeed be greener on that side. How could they (probably) improve my life? And no, "so that you pass the bare minimum to debate the usefulness of AI tools" is not the reason because, frankly, the less of these discussions I have, the better.


I'm saying this to help you. Whether you give it a shot makes no difference to me. This topic is being discussed endlessly every day on all major platforms, and for the past year or so the consensus has been strongly against using Copilot.

If you want to see whether your project and your work can benefit from AI, you must use Codex, Claude Code, or Gemini (which wasn't a contender until recently).


> This topic is being discussed endlessly every day on all major platforms, and for the past year or so the consensus has been strongly against using Copilot.

So it would be easy to link me to something that shows this consensus, right? It would help me see what the "consensus" has to say about the known limitations of Copilot too. It would help me see the "why" that you seem allergic to even hint at.

Look, I'm trying not to be close-minded about LLMs, which is why I'm taking time out of my Sunday to see what I might be missing. Hence my comment that I don't want to invest time/money in yet another LLM just for the "privilege" of debating the merits of LLMs in software engineering. If I'm to invest time/money in another coding LLM, I need a signal, a reason, why it might be better than Copilot at helping me do my job. Either tell me where Copilot is lacking or where your "contenders" have the upper hand. Why is it a "must" to use Codex/Claude/Gemini, other than trust-me-bro?


What products and what models have you tried?

I couldn't tell you, because I've kept it at arm's length, but over the last year our most enthusiastic "AI guy" (as well as another AI user on the team) has churned through quite a few, usually saying something like "$NEW_MODEL is much better!" before littering garbage PRs all over the project.

I am a solutions engineer, mostly on the traditional ML side of things, but with good knowledge of K8s/GKE. The most fun I had last year was helping a customer serve their models at scale. They thought it was cost-prohibitive (500k inferences/second and a hard requirement of 7ms at p99), so they were basically serving from a cache that was lossy (the combinatorial explosion of features meant that full coverage would have needed exabytes of RAM) and prone to staleness. We focused on the serving first. After their data scientists trained a new PyTorch model (a small one, 50k parameters more or less), we compiled it to ONNX (since the model is small, CPU inference is actually faster), grafted the preprocessing layers onto the model so that you never leave the ONNX C++ runtime (to avoid Python), and deployed it to GKE. An 8-core node using AMD Genoa CPUs managed 25k inferences per second. After a bit of fiddling with NUMA affinity, GKE DNS replication, Triton LRU caches, and a few other things, we managed to hit 30k inferences per second. Scaled up to their full traffic, it would cost them a few thousand per month, which is less than their original cache approach.
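For scale, the capacity math works out roughly like this. This is just a sketch: the throughput numbers come from the comment above, but the per-node monthly price is an assumed placeholder, not an actual GCP quote.

```python
import math

# Back-of-envelope capacity and cost estimate for the serving setup above.
target_qps = 500_000      # required inferences per second (from the comment)
per_node_qps = 30_000     # measured on one 8-core AMD Genoa node after tuning
node_month_usd = 250      # ASSUMED all-in monthly cost of one such node

nodes_needed = math.ceil(target_qps / per_node_qps)
monthly_cost_usd = nodes_needed * node_month_usd

print(nodes_needed, monthly_cost_usd)  # 17 nodes, $4,250/month
```

Under that assumed node price, the fleet lands in the "few thousand per month" range the comment describes, which is why CPU inference on a small ONNX model can beat an exabyte-scale cache.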

Now they are working on continuous learning so that they can roll out new models (it is a very adversarial line of business and the models go stale in O(hours)). For that part I only helped them design the thing, no hands-on work. It was a super fun engagement, TBH.


Are they paying you as well as your comment makes it sound? That was a ton of lingo and I'm used to lingo!

Yeah :) happy on the money front. I didn't mention this earlier, but I'm a Googler and my role is to make sure that big customers are as happy as possible on GCP. And Google still pays well (talking with my SWE friends, my total comp lands around the middle of the pack, perhaps with slightly smaller stock grants). I was a SWE (also at Google) before, so maybe they didn't change my comp much in the new job family. I don't know, as those things are mysterious.

Also, not all projects are this fun. Sometimes it's solving the same problem over and over, or working with customers that aren't tech-savvy, where there is a bunch of politicking and "fluffy" stuff.


It is not the same, and your metaphor is bad. Furniture generates revenue only when sold; loans generate revenue just by being held, since interest accrues on them and must be repaid.

Hence the shop sells its inventory as fast as it can, while banks hold safe loans as long as they can, unless they believe the loans aren't safe anymore or that they can make more money with something else.


You understand how present value works, right? Holding a loan on the balance sheet generates a stream of income that extends into the future and has to be discounted and have credit valuation adjustments (CVA) applied, which is very expensive because you then have to hedge the rate and credit risk on top of funding the actual loans themselves. Selling a loan has two benefits over this. Firstly, it generates actual cash now, which you can book right away and which is real, so it doesn't need CVA or discounting. Secondly, it gives you a mark-to-market price for any piece of the loan, or similar loans, which you are still holding on your book.

I'm not speaking theoretically here - I have been forced to sell loans for capital reasons because they were too expensive to fund even though they were all current, 100% money good and massively overcollateralized.
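The discounting point can be sketched with made-up numbers. This is a toy DCF that ignores the CVA and hedging costs mentioned above; every figure (principal, coupon, both discount rates) is an illustrative assumption. It shows how the value of holding a loan flips around par depending on whether your funding/discount rate sits below or above the coupon.

```python
# Hypothetical 5-year bullet loan: $1M principal, 6% annual coupon.
principal = 1_000_000
coupon = 0.06 * principal   # $60,000 paid at the end of each year
years = 5

def present_value(discount_rate):
    # Discount each future cashflow back to today; the principal
    # is repaid in full at maturity alongside the last coupon.
    flows = [coupon] * years
    flows[-1] += principal
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(flows))

pv_cheap_funding = present_value(0.05)  # funding below the coupon
pv_dear_funding = present_value(0.08)   # funding above the coupon

print(round(pv_cheap_funding), round(pv_dear_funding))
```

With cheap funding the hold value comes out above par (~$1.043M here), so holding looks attractive; with expensive funding it drops below par (~$920k), and selling at or near par today beats holding the discounted stream, which is exactly the "too expensive to fund" situation described above.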


Or they want to diversify to reduce exposure; after all, a loan inherently carries risk. That doesn't mean (though it could) that it's a fire sale or an all-eggs-in-one-basket situation.

Invest in stuff that people will need regardless of the bubble popping, like medicine, food, internet access, energy, and so on. Stay away from luxury/travel stuff.

Also, during a crash there is the so-called "flight to quality", where people cash out of risky assets and invest in stable ones that can weather the storm. So try to invest in assets rated A or above (https://en.wikipedia.org/wiki/S%26P_Global_Ratings). The chart is for countries, but analysts grade companies as well, in case you want to stay away from treasuries/national bonds.

Also diversify geographically. US will likely take the biggest hit if the bubble pops, so perhaps European markets that lagged behind in adopting the technology are safer (IMHO).

Personally, I am preparing by moving money from growth assets to stable ones a bit at a time. To diversify even further, I am using ETFs that, in addition to the above:

1) pay dividends (whether these are distributed or reinvested doesn't really matter), and 2) are denominated in, or hedged to, safer currencies (CHF especially, but also EUR).

You definitely get smaller returns, but the name of the game is to maintain what you have, not to make heaps of money.

Finally, I am not a financial advisor, so do your own valuations/risk assessment analysis.


AUAU ETF crashed 11% today... Ask me how I know that :(

I would argue that parts of the economy should (hopefully) remain healthy. I mean, AI bubble or not, people need medicine, food, internet access, energy, and so on. Invest in that.

Also (not a financial advisor), when a crash occurs there is a so-called "flight to quality", where people move the money they made by cashing out of risky assets into stable ones (A-rated or above). So look for companies that have solid financials and can weather the storm.

Finally, diversify not only across industries but also geographically: EU, Swiss, Asian markets. I personally stay away from emerging-market stuff, as I don't have enough knowledge to make informed decisions (I don't even consider emerging-market ETFs, which really ought to be run by subject-matter experts).


Well, you can go to another bank: they pay off your existing mortgage and you open a new one with them at new rates/conditions.

You can do that anywhere, as long as you have collateral or a guarantor.


You really said 12 USD/kWh? Time to put solar panels/batteries up over there. Even if you resell to the grid at 1/10th of that, you recoup the investment in O(months), not O(years).
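Taking that framing at face value, the payback arithmetic sketches out like this. Every number below (system size, output, installed cost) is an illustrative assumption, not a real tariff or quote.

```python
# Toy solar payback calculation at the quoted grid price.
grid_price_usd_kwh = 12.0                      # the price quoted above
sell_price_usd_kwh = grid_price_usd_kwh / 10   # resell at 1/10th: $1.20/kWh
daily_production_kwh = 20       # ASSUMED output of a small rooftop system
system_cost_usd = 10_000        # ASSUMED installed cost, panels + battery

daily_revenue_usd = daily_production_kwh * sell_price_usd_kwh  # $24/day
payback_days = system_cost_usd / daily_revenue_usd

print(round(payback_days))  # ~417 days
```

Under these assumptions the payback is roughly 14 months rather than literal O(months), but still far from the decade-scale paybacks typical at ordinary grid prices; offsetting usage billed at the full $12/kWh instead of reselling would shrink it to weeks.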


Yeah, it's a bit of a convoluted system. They'll take your peak day during a billing period and charge you $12/kWh for your usage during the peak hours of that day.

So you can easily add $100-200 to your bill for one day of higher usage.

https://www.myhorrynews.com/news/horry-electric-co-op-to-cha...


Similar to my thoughts. If we are still scrambling to find stuff the average Joe finds useful, the hundreds of billions poured into this gold rush will have been wasted (IMHO).


Nadella's vibe lately (here and in his 2025 retrospective) seems to be "AI can be amazing and transformative and life-changing, and it's up to end users to figure out how to make that happen and they're not doing it and it's not our fault."

It's not even a solution in search of a problem, it's a tool in search of a reason to use it as a solution to a problem on such a scale that it justifies the billions of dollars of money we've poured into it while driving up the cost of fresh water, electricity, RAM, storage, data centre space, and so on.


This reminds me of the early 1980s, when home PCs were still very new and the main use cases vendors promoted were managing household accounts and recipes. These use cases were extremely unimpressive to most ordinary people. It took a long time for PCs to become ubiquitous in homes, not until gaming and the web became common.


The web was an academic project funded by modest research grants, requiring nowhere near the level of capital and electricity AI requires. The output of that research emphasized open source and decentralized implementation, which is antithetical to corporate AI models that are predicated on vendor lock-in.

Consumer adoption also happened organically over time, catalyzed mostly by email and instant messaging, which were huge technological leaps over fax and snail mail. IBM and DEC didn't have to jam "Internet" buttons all over their operating systems to juice usage (although AOL certainly contributed to filling landfills with their free trial disks).


Well, LLMs are mainly aiming to "improve" what we can already do. They're not really opening up new use cases the way the personal computer, the smartphone, or the Internet did.


Thank you for putting this so succinctly, this is what I'm observing as well.

Feels like this combination (usually) creates a race to the bottom instead of expansion of new ideas.

LLMs kind of feel somewhere in the middle


Ideally, zillions of consumers have been languishing for years and when the time is right they're all collectively chomping at the bit when a new highly-affordable technology comes along that they just can't get enough of.

This isn't one of those times.


People said the same thing 30 years ago about the internet.

I’m spending $400/mo on AI subscriptions at this point. Probably the best money I spend.


And the people who bought a lot of shovels during the gold rush thought they were making the smartest money move.


some of them did make it big, and towns and building are named after them

but lots of folks were broke as hell and miserable


or dead


Dude, I'm getting a shovel factory for practically nothing. I'm easily realizing 5x value on that investment.

I'd say for an estate that I am the executor of, it probably saved me $50k in legal fees and other expenses, because it helped me analyze a novel problem, organize it, and ask the right questions of counsel.

In another scenario, I needed a mobile app to do something very specific for a few weeks. I specced out a very narrowly useful iPhone application, built it out on the train from DC to NYC, and had it working to my satisfaction the next day. Is it production code ready for primetime? Absolutely not. But I got the capability to do what I needed super quickly, something my own skill level is no longer up to accomplishing!

IMO, these things let you make power tools, but your ability to get value is capped by your ability to ask the right questions. In the enterprise, they are going to kill lots of stupid legacy software that doesn't add a lot of value but adds a lot of cost.


I'd wonder how much that scales up, though, for the benefit of the companies that are each investing hundreds of billions and hope to see a net return. How many developers like you (presumably fewer of you, seeing as each is more productive), or enterprises paying fees (along with slimmed-down legacy costs paid to someone else), does it take to get up into the 12-digit range?


No idea, and not my problem. I'm surprised I've been downvoted so much in these comments. I'm not saying OpenAI et al. are good companies, a good financial scenario, or a good investment.

The technology is amazingly powerful. Full stop.

The constraint that drives cost is technical: semiconductor prices. Semiconductors are manufactured commodities; those costs will drop over time. The Sun workstation I bought for $40k in 1999 would get smoked by a $40 Raspberry Pi.

Even if everyone put their pencils down and stopped working on this stuff, you’d get a lot of value from the open source(-ish) models available today.

Worst case scenario, LLMs are like Excel. Little computer programs will be available to anyone to do what they need done. Excel alone changed the world in many ways.


The owner of the metaphorical shovel factory is the company you pay for access to a model. You have a steady supply of shovels.


that $400/month is essentially an introductory price, subsidized in an attempt to grab market share

that $400 will go up by at least a factor of 10 once the bubble pops

would you be prepared to pay $4000/month?


Nah, I'll move much of it locally when it becomes cost justified to do so.

I doubt that the exponential cost explosion day is coming. When the bubble pops, the bankruptcies of many of the players will push the costs down. US policy has provided a powerful incentive for Chinese players to do what Google has done and have a lower cost delivery model anyway.


it's not exponential, it's linear

> the bankruptcies of many of the players will push the costs down

the running costs don't disappear because people go broke


Your words bro. 10x isn’t linear.

The cost iceberg with this stuff isn’t electricity, it’s the capital.

Other than Google and Facebook, the big hype players can’t produce the growth required to support the valuations. That’s why the OpenAI people started fishing for .gov backstops.

The play is get the government to pay and switch out whatever Nvidia stuff they have now with something more efficient in a few years.


It's the dot com bubble all over again. When you are losing money on every transaction you can't make it up on volume.


My take is that if we are still scrambling to find something objectively useful (as recognized by the median person) then we really are in AI bubble territory.

When non techie friends/family bring up AI there are two major topics: 1) the amount of slop is off the charts and 2) said slop is getting harder to recognize which is scary. Sometimes they mention a bit of help in daily tasks at work, but nothing major.


My non-tech friends/family use AI to ask for silly stuff (they could google it), or just to ask silly questions and see how it reacts. We have a relative who's not that famous but maybe known in a niche, and they spent like a whole weekend sending screenshots from GPT where they asked whether this person was known, who this person was, etc.

They don't find AI useful, just a toy. Is it their fault? Maybe.


> They don't find AI useful, just a toy. Is it their fault? Maybe.

Idk, I'm a software dev and, to be honest, outside of work this is also what I use ChatGPT for; it's really funny to see its reactions to various prompts.

