Hacker News: 3uler's comments

Do you not value your time? Paying 100 bucks for a Claude Max subscription is well worth it.


Opportunity cost: would you rather pay 100 bucks to make more money, or for your FOSS projects?

The same can be said of your time, but here we're talking about scale benefits due to LLMs (i.e. lots of SaaS products dying due to lots of "full-featured FOSS projects").


AI is pretty good at following existing patterns in a codebase. It is pretty bad with a blank slate… so if you have a well-structured codebase with strong patterns, it does a pretty good job of the grunt work.


The read-only hesitation seems overcautious. If you’re genuinely using it read-only, what’s the failure mode? The tool crashes or returns bad data: the same risks as the AWS CLI or console.

The “middleware layer” concern doesn’t hold up. This is just a better interface for exploring AWS resources, same as k9s is for Kubernetes. If you trust k9s (which clearly works, given how widely it’s used), the same logic applies here.

If you’re enforcing infrastructure changes through IaC, having a visual way to explore your AWS resources makes sense. The AWS console is clunky for this.


> what’s the failure mode?

The tool misrepresents what is in AWS, and you make a decision based on the bad info.

FWIW I agree with you that it doesn’t seem that bad, but this is what came to mind when I read the GP’s comment.


Fair. The best use might be to double-check in the official UI before making any big decisions, and otherwise treat it as a general monitor.


I mean, sure… but to me that’s about as likely as the official UI misrepresenting the info.


The fakes vs. mocks distinction here feels like a terminology debate masking violent agreement. What you’re describing as a “fake” is just a well-designed mock. The problem isn’t mocks as a concept, it’s mocking at the wrong layer. The rule: mock what you own, at the boundaries you control. The chaos you describe comes from mocking infrastructure directly. Verifying “deleteUserById was called exactly once with these params” is testing the implementation, not the behavior. Your HashMap-backed fake tests the right thing: is the user gone after the operation? Who cares how. The issue is finding the correct layers at which to validate behavior, not the implementation detail of mocks vs. fakes… complaining about that is like blaming the hammer for the hole smashed in the wall.
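A minimal sketch of the two styles, in Python. The names here (deactivate_user, InMemoryUserRepo, delete_user_by_id) are hypothetical stand-ins for the deleteUserById / HashMap-backed-fake example above, not any real codebase:

```python
from unittest.mock import Mock

def deactivate_user(repo, user_id):
    # Code under test; it happens to call delete_user_by_id.
    repo.delete_user_by_id(user_id)

# Mock-style test: verifies an implementation detail. It breaks
# if deactivate_user switches to, say, a bulk delete, even though
# the observable behavior would be identical.
def test_with_mock():
    repo = Mock()
    deactivate_user(repo, user_id=42)
    repo.delete_user_by_id.assert_called_once_with(42)

# Fake: a dict-backed repo implementing the same interface.
class InMemoryUserRepo:
    def __init__(self, users):
        self.users = dict(users)

    def delete_user_by_id(self, user_id):
        self.users.pop(user_id, None)

# Fake-style test: asserts on behavior -- is the user gone?
def test_with_fake():
    repo = InMemoryUserRepo({42: "alice"})
    deactivate_user(repo, user_id=42)
    assert 42 not in repo.users
```

The fake test survives any refactor that preserves the "user is gone afterwards" behavior; the mock test is coupled to one particular call sequence.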


Google had a good 10-year run where the ads were genuinely useful, until the needs of the public markets required, and a lack of competition allowed, them to enshittify the experience into its current state.

I hope the same fate does not await ChatGPT, but in the meantime I expect it to be a pretty good experience.


The comments, it seems, are being taken in bad faith.


It's a legitimate concern.


I mean the sort of user you are describing sounds like they’d struggle with PC gaming in general.


I've always maintained that microservices are more for scaling people than scaling systems.


But then they’ve not reviewed it themselves?


The lady doth protest too much. People see every AI limitation crystal clear, but have zero self-awareness of their own fallibility.

