arrowleaf's comments

My feeling has been that 'serious' software engineers aren't particularly well suited to using these tools. Most either have no interest in managing people or are drawn to the deterministic nature of computing. There's a whole psychology you have to learn when managing people, and in my experience a lot of those skills transfer to wrangling AI agents.

You can't be too prescriptive or verbose when interacting with them. You have to work with them a bit to start understanding how they think, and go from there to determine what information or context to provide. Same for understanding their programming styles: they will typically do what they're told, but sometimes they go off on a tangent.

You need to know how to communicate your expectations, especially around testing, interaction with existing systems, performance standards, and technology choices; the list goes on.
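For example, the kind of standing expectations I mean might look something like this (a made-up sketch, not from any real project), dropped into whatever context the agent sees on every task:

    # Hypothetical example of encoding expectations for a coding agent as a
    # standing system prompt; the project details are invented for illustration.
    AGENT_EXPECTATIONS = """
    You are working in an existing Python service. Expectations:
    - Every behavior change ships with a unit test; run the suite before saying you're done.
    - Do not add new dependencies without asking first.
    - Call out anything that adds a network round trip to a hot path.
    - Match the existing code style; do not reformat files you aren't changing.
    """.strip()

    # This string would be passed as the system/context message to whatever
    # agent tooling is in use; the mechanics vary by tool.
    print(AGENT_EXPECTATIONS)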


All our best-performing devs/engineers are using the tools the most.

I think this is something a lot of people are telling themselves though, sure.


Best performing by what metric? There aren't meaningful ways to measure engineer "performance" that make them comparable, as far as I know.

Your org doesn't track engineering impact?

What about git stats?

I can tell you the guys that are consistently pushing code AND having the biggest impact are using LLM tools.


Are we measuring productivity by lines of code again? This was treated as unserious for decades.

Why ignore where I mention engineering impact??? Come on, be real here

I would be surprised if AI prices reflect their current cost to provide the service, even inference costs. With so much money flowing into AI, the goal isn't to make money; it's to grow faster than the competition.

I remain confident that most AI labs are not selling API access for less than it costs to serve the models.

If that's so common, then what's your theory as to why Anthropic isn't price-competitive with GPT-5.2?


I think it’s more instructive to look at providers like AWS than to compare with other AI labs. What’s the incentive for AWS to silently subsidise somebody else’s model when you run it on their infrastructure?

AWS are quite happy to give service away for free in vast quantities, but they do it by issuing credits, not by selling below cost.

I think it’s a fairly safe bet AWS aren’t losing money on every token they sell.


From this article:

> For the purposes of this post, I’ll use the figures from the 100,000 “maximum”–Claude Sonnet and Opus 4.5 both have context windows of 200,000 tokens, and I run up against them regularly–to generate pessimistic estimates. So, ~390 Wh/MTok input, ~1950 Wh/MTok output.

Expensive commercial energy would be 30¢ per kWh in the US, so the energy cost implied by these figures would be about 12¢/MTok input and 60¢/MTok output. Anthropic's API price for Opus 4.5 is $5/MTok input and $25/MTok output, roughly 40 times higher than these figures.

The direct energy cost of inference is still covered even if you assume that Claude Max/etc plans are offering a tenfold subsidy over the API cost.
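To make the arithmetic explicit, here's a quick sketch (the Wh/MTok figures come from the quote above; the 30¢/kWh rate and the Opus 4.5 list prices are my assumptions):

    # Back-of-envelope margin check using the figures quoted above.
    energy_wh_per_mtok = {"input": 390, "output": 1950}   # pessimistic estimates
    api_usd_per_mtok = {"input": 5.00, "output": 25.00}   # Opus 4.5 list prices
    usd_per_kwh = 0.30                                     # expensive US commercial rate

    for kind, wh in energy_wh_per_mtok.items():
        energy_usd = wh / 1000 * usd_per_kwh               # energy cost per MTok
        ratio = api_usd_per_mtok[kind] / energy_usd        # API price vs energy cost
        print(f"{kind}: ~${energy_usd:.2f}/MTok energy vs "
              f"${api_usd_per_mtok[kind]:.2f}/MTok API (~{ratio:.0f}x)")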


Thank you for some good intel. That's very interesting. But I wonder how this affects supply pricing to other customers. Not that you haven't shown the direct power costs are covered, but the more indirect ones remain an open question for me.

> I would be surprised if AI prices reflect their current cost to provide the service, even inference costs.

This has been covered a lot. You can find quotes from one of the companies saying that they'd be profitable if not for training costs. In other words, inference is a net positive.

You have to keep in mind that the average customer doesn't use much inference. Most customers on the $20/month plans never come close to using all of their token allowance.


The more religious people I know are some of the best critical thinkers. Especially those types who enroll their kids in the 'classical' education model. With the decline of religion in the USA, I don't think this is a very coherent scapegoat.

Religion isn't the only factor, nor did I claim it was.

But it's the only one I've seen convince PhDs to believe self-contradictory "scriptures" and cherry-picked "evidence", accept appeals to authority, parrot useless platitudes, indoctrinate their kids, dismiss injustices, dismiss other people over even the most trivial differences in doctrine, and consistently vote against their own interests.


This SQL Studio which was seemingly released to the public yesterday? Or are you talking about MS's SQL Server Management Studio? The MS one is a beast.


Management Studio is a monster. I used it for years, and every so often someone would show me a feature I was totally unaware of that blew my mind.

Visual Studio also had "Database Projects", which were amazing; I've not seen anything else like them. I think everyone moved over to using EF or Fluent Migrations, but I loved the Database Projects.


Database projects are still there, I also love them.


Ah, I guess not then. I revised my comment. Maybe it was DBeaver, after all.


You're taking a huge risk with the naming here; I would expect to hear from a Microsoft lawyer any minute (due to MS's flagship 'SQL Server Management Studio').

e: Don't let this dishearten you; I'd only consider a name change so it reads more as your own brand. When I saw 'SQL Studio', I assumed MS had created an online version of their product. This looks like a well-done passion project.


Trademarks are complicated, but they probably won't let anyone claim SQL Studio


That doesn't matter if you run out of money before the end of the case.


true


Not to mention that when you Google "SQL Studio", all you see are MS SSMS results.


> all they had to do to label a cow “free range” or “grass fed” was change the finishing stage to a lower density configuration instead of those abominable feed lots you see along highways.

And this is exactly what people have wanted, and are willing to pay a premium for.


Interesting. All the Flock cameras around me are stationed around the entrances to Lowe's parking lots.


> All the Flock cameras around me are stationed around the entrances to Lowe's parking lots.

Most of the ones in my neighborhood are pointed at parks, playgrounds, and the big transit center. Which makes no sense to me since there's a ton of government buildings around that you'd think would be under Flock surveillance for "safety."


All of the ones I've noticed have been pointed directly towards streets, mostly for license plate recognition, but it's notable that they can record whatever objects a typical real-world AI image model could. In my area we have Flock, Shotspotter, Stingray devices, and free Ring camera programs from law enforcement departments.

Our Lowe's has the mobile parking-lot camera/light units. I wasn't aware whether these were Flock, but I wouldn't be surprised if they were, had access, or had plans to buy in.


Lowe's and Home Depot both seem to be hubs for their cameras. I only know of one in my rural area and it's at the Lowe's entrance.


Home Depot and Lowe's Share Data From Hundreds of AI Cameras [Flock] With Cops - https://news.ycombinator.com/item?id=44819750 - August 2025


You missed the touch of sarcasm. It's a joke; recent AWS announcements have been heavily AI-focused.


I don't really see how this is a productive comment on the article. Most of big tech is focused on AI, and those announcements typically get traction in the news. AWS specifically has plenty of non-AI announcements: https://aws.amazon.com/new/

The parent comment made a low-quality joke that lacked substance.


I think that joke reflects pretty well the feeling of many people (me included) who miss the AWS of ten years ago and its ability to amaze us with solutions to practical problems, instead of marketing claims on PowerPoint slides.


Kevin was CTO / head of product engineering at Windsurf; Anshul was a founding engineer.


If they can fund a fork, they can continue business as usual until the need arises


A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third-party code starts expecting the upstream version instead of yours, etc.


Nobody said the fork cannot diverge from the original project.

