Hacker News | kuil009's comments

This feels a lot like the old RPA hype cycle to me — more sales narrative than structural change.

Most companies are not going to replace stable SaaS with a pile of AI-generated internal tools. They don’t want the maintenance or the risk.

If there’s a real B2B game changer, it’s Microsoft.

The day Excel gets a serious, domain-aware AI that can actually model workflows, clean data, and automate logic properly, half of these “build vs buy” debates disappear. People will just solve problems where they already work.

Excel has always been the real business platform. AI will just double down on that, not kill SaaS.


Thanks for this. It put into words a lot of the discomfort I’ve had with the current AI economics.


The positioning makes sense, but I’m still somewhat skeptical.

Targeting power, cooling, and TCO limits for inference is real, especially in air-cooled data centers.

But the benchmarks shown are narrow, and it’s unclear how well this generalizes across models and mixed production workloads. GPUs are inefficient here, but their flexibility still matters.


As someone who enjoyed 'Dilbert' long ago, I offer my condolences, along with appreciation for the work itself.


Rather than treating SRS as a learning tool for facts, I find it far more valuable as a system for recording and periodically revisiting past judgments, especially to reflect on whether a decision made in context was actually a good one.


I too would appreciate learning more about your implementation. This seems akin to Andy Matuschak's concept of "spaced everything".

I presume you use FSRS. What do your card prompts look like? And how do you go about performing, evaluating, and scoring your review of each card?


This seems really interesting to me as I don’t often work in domains that require me to know a lot of facts, but I still feel like SRS could be useful. I just don’t quite know how to use it. Could you give me an example of what you mean here? What kind of decisions do you find meaningful to periodically reflect on?


Thank you for your service


I use LLMs mainly as a mirror for my own thinking, not as a source of authority.

When I explain my ideas to the model during development, I often notice flaws or confusion in my own words, and that is where I learn the most. The author describes people who rely on AI for arguments or research, letting the model's smooth but statistical language replace their own thinking. Language is inherently uncertain; LLMs simply surface that uncertainty statistically. Once you understand this, an LLM stops being a "confidence engine" and becomes a tool for testing and refining your thoughts.

A key point is that, however hard we try, we cannot help reacting to what the AI says. Since neither AI nor humans are perfect, we should take AI responses critically and remain skeptical, just as we would with a stranger.



눈물나게 감동적이었습니다.


Translation: “It was so moving that I cried.”


OMG

