turnsout's comments

I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.

Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.

Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.


I've been pleasantly surprised by the Claude integration with Xcode. Overall, it's a huge downgrade from Claude Code's UX (no way to manually enter plan mode, odd limitations, poor Xcode-specific tool-use adherence, frustrating permission model), but in one key way it is absolutely clutch for SwiftUI development: it can render and view SwiftUI previews. Because SwiftUI is component-based, it can home in on rendering errors, view them in isolation, and fix them, creating new test cases (#Preview) as needed.
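
To make that concrete: a #Preview block is just a tiny harness around a single view, which is what lets the agent reproduce a rendering bug in isolation. A minimal sketch (the component here is invented for illustration):

    import SwiftUI

    // Hypothetical component; the point is that each #Preview renders it in isolation.
    struct PriceBadge: View {
        let amount: Double

        var body: some View {
            Text(amount, format: .currency(code: "USD"))
                .font(.caption.bold())
                .padding(6)
                .background(.yellow, in: Capsule())
        }
    }

    // The agent can add preview cases like these to reproduce, then fix, a layout bug.
    #Preview("Typical value") {
        PriceBadge(amount: 19.99)
    }

    #Preview("Stress test") {
        PriceBadge(amount: 1_000_000.00)
    }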

This closes the feedback loop on the visual side. There's still a lot of work to be done on the behavioral side (e.g. it can't easily diagnose gesture conflicts on its own).


Do you think there would be value in a workflow that translates all non-English input to English first, then evaluates it, and translates back as needed?

Personally, I don't bother prompting LLMs in Japanese, AT ALL, since I'm functional enough in English (a low bar, apparent from my comment history) and because they behave a lot stupider otherwise. The Japanese language is always the extreme example for everything, but yes, it would be believable to me if merely normalizing input by first translating just worked.

What would be interesting then would be to find out what the composite function of translator + executor LLMs would look like. These behaviors make me wonder whether modern transformer LLMs are actually ELMs, English Language Models. Because otherwise there'd be, like, dozens of functional, 100% pure French-trained LLMs, and there aren't.
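
If anyone wants to try the translate-evaluate-translate-back pipeline, the plumbing is trivial. A rough sketch, where every function is a hypothetical stand-in for your actual model calls:

    import Foundation

    // Hypothetical stub: swap in a real translation model/API call.
    func translate(_ text: String, to language: String) async throws -> String {
        return text
    }

    // Hypothetical stub: swap in the main LLM call.
    func execute(_ prompt: String) async throws -> String {
        return prompt
    }

    // The composite translator + executor function: normalize to English,
    // run the task there, then translate the answer back.
    func answerViaEnglish(_ prompt: String, userLanguage: String) async throws -> String {
        let english = try await translate(prompt, to: "en")
        let answer = try await execute(english)
        return try await translate(answer, to: userLanguage)
    }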


A lossy process in itself, even if done by aware humans.

More lossy than the current non-English behavior?

Or the other way around, for fewer safety guardrails?

There must be a ranking of languages by "safety".


Heh, just wait till LLMs fully self-train and make up their own language to avoid human safety restraints.

Difficulty is the only true moat. [Astronaut: always has been]

Current examples: esoteric calculations that are not public knowledge; historical data that you collected and someone else didn't; valuable proprietary data; having good taste; having insider knowledge of a niche industry; making physical things; attracting an audience.

Some things that were recently difficult are now easy, but general perception has not caught up. That means there's arbitrage—you can charge the old prices for creating a web app, but execute it in a day. But this arbitrage will not last forever; we will see downward price pressure on anything that is newly easy. So my advice is: take advantage now.


This is what excited me about Sonnet 4.6. I've been running Opus 4.6, and switched over to Sonnet 4.6 today to see if I could notice a difference. So far, I can't detect much, if any, difference, but it doesn't hit my usage quota as hard.

At that point, you'd be relying on a bug in curl / Python / sh, not the bot!

Once demand drives electrician salaries up to $200k/year, the influencer grind will lose some of its shine.

Why would demand go up?

Yes, electricians are definitely safer than those of us who work in front of a computer all day, but I don't think AI is good for them either. First of all, more young people might try to become one, potentially crowding the sector. Second, if the rest of us are poorer, we'll also spend less on housing and other things that require an electrician.


It's more about supply going down (fewer people going into trades), which eventually concentrates demand on that shrinking pool.

You need electricians for data centres, at the very least.

I expect data center electrical design to go modular, with maintenance handled in a completely automated way.

It seems intuitive that a naive self-generated Skill would be low-value, since the model already knows whatever it's telling itself.

However, I've found them to be useful for capturing instructions on how to use other tools (e.g. hints on how to use command-line tools or APIs). I treat them like mini CLAUDE.mds that are specific only to certain workflows.

When Claude isn't able to use a Skill well, I ask it to reflect on why, and update the Skill to clarify, adding or removing detail as necessary.

With these Skills in place, the agent is able to do things it would otherwise really struggle with, burning a lot of tokens failing to use the tools, looking up documentation, and so on.
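
For concreteness, one of mine is shaped roughly like this (the tool and its quirks are invented here for illustration; the real ones capture the same kind of hard-won hints):

    ---
    name: foo-cli
    description: Hints for driving the foo CLI without burning tokens on trial and error.
    ---

    # Using the foo CLI

    - Always pass --dry-run first; the real run is destructive.
    - Output is JSON on stdout; parse it rather than eyeballing it.
    - "lock held" errors are transient: wait a few seconds and retry once.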


> I ask it to reflect on why, and update the Skill to clarify, adding or removing detail as necessary.

We are probably undervaluing the human part of the feedback loop in this discussion. Claude is able to solve the problem given the appropriate human feedback — many then jump to the conclusion that, well, if Claude is capable of doing it under some circumstances, we just need to figure out how to remove the human part so that Claude can eventually figure it out itself.

Humans are still serving a very crucial role in disambiguation, and in centering the most salient information. We do this based on our situational context, which comes from hands-on knowledge of the problem space. I'm hesitant to assume that because Claude CAN bootstrap skills (which is damn impressive!), it would somehow eventually do so entirely on its own, devoid of any situational context beyond a natural language spec.


Absolutely. This is why I'm hesitant to go full "dark software factory" and try to build agent loops that iterate in YOLO mode without my input. I spent a day last week iterating Skills on a project by giving the agent the same high-level task, then pausing it when it went off the rails, having it self-reflect, and updating its Skill. It almost took me out of the loop, but I still had to be there to clear up some misunderstandings and apply some common sense and judgment.

A pattern I use a lot: after working with the LLM on a problem (directing it, providing additional context and information), I ask it to summarize its learnings into a Skill. Then the next session with a similar theme can start with that knowledge.


If you've ever tried to build something real with agentic AI, you know that it takes time. You can't (yet) snap your fingers and produce a fully market-viable clone of a SaaS product.

The specifics matter here. If you run a CRM for Dentists, can someone replicate your product in a weekend? I'm going to guess that dentists have some esoteric needs related to their CRM, and it's a little more complicated than an outsider might guess.

So what is the threat model? That a dentist is going to get fed up and try to DIY? I think you should encourage that, so they'll see what goes into it. That a 22 year old chooses "CRM for Dentists" as a thing to vibe-code over a weekend? Again, good luck with that.

I really dislike this SaaSocalypse fear mongering, because it's just not based in reality. Show me five examples of established SaaS companies being wiped out by vibe coding.


This may be an unpopular opinion, but if people believe they can hear a difference in their $1000 cables, and they enjoy purchasing and testing them, I'm inclined to let them enjoy themselves. I have a basic hi-fi setup with rational cables, and enjoy the cost savings, but to each their own.

I feel the same way about wine. At a certain point, it's not really about objective improvements, it's about vibes and lore.


I also don't need to storm these people's homes and tear up their expensive audio setups. Life is hard, and if you find something to enjoy, I'll let you have it.

That said, I think there is value in putting out facts that let people make informed decisions and avoid spending tons of money on things that don't actually work.


Absolutely. I'm glad the linked article exists! Hopefully it can prevent someone who really can't afford it from splurging on expensive cables.

Yes, but there is a negative societal cost to allowing quacks, frauds, and hucksters to exploit the naive.

Capitalism be that way, sometimes. We can't have the good parts without also taking some of the bad parts.

(And yet: They still make inexpensive cables in factories every day.)


The vendors of such snake oil often make specific, incorrect claims. Fraud is still fraud, even if the victim enjoys it.
