Hacker News | MATTEHWHOU's comments

My current workflow: I describe the component in plain English with specific constraints ("a data table with sortable columns, sticky header, and virtual scrolling for 10k+ rows"), let the LLM generate the first pass, then manually fix the edge cases it always misses.

The key insight I've found: LLMs are great at generating the 80% scaffolding but terrible at the 20% that makes UI actually feel good — animation timing, scroll behavior, focus management, accessibility edge cases.

So I've stopped asking them for "production-ready" components and instead ask for "the boring structural parts" so I can focus on the interaction details that users actually notice.
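To make the "boring structural parts" split concrete: the windowing math for virtual scrolling is the kind of thing I still check by hand. A minimal fixed-row-height sketch (the function name, `RowWindow` type, and overscan default are mine, not from any particular library):

```typescript
interface RowWindow {
  start: number;   // first row index to render
  end: number;     // one past the last row index to render
  offsetY: number; // pixel offset to translate the rendered slice by
}

// Compute which rows of a fixed-row-height list should be mounted,
// padding the visible range with `overscan` rows on each side so fast
// scrolling doesn't flash blank space.
function visibleRows(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 5,
): RowWindow {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end, offsetY: start * rowHeight };
}
```

The sticky header then just lives outside the translated slice; the part LLMs tend to fumble is keeping focus and scroll position stable when `start` changes.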


There's a version of this argument I agree with and one I don't.

Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.

Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).

The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.


> if you use AI as a draft generator [...] you're spending your cognitive budget on the high-value parts (ideas, structure, voice)

I don't follow. If you have the ideas and a structure to hand to an AI, you already have a working draft. Just start revising that. What would the AI add, other than becoming the replacement for thinking described in your negative example?


This is one of those projects that sounds impossible until you realize CUDA is basically C++ with some extensions and a runtime library.

The hard part isn't the language translation — it's matching NVIDIA's highly optimized libraries (cuBLAS, cuDNN, etc.). If BarraCUDA can hit even 80% of the performance on common ML workloads, that's a game changer for anyone who bought AMD hardware.

Curious about the PTX translation layer specifically. That's where most previous attempts (like ZLUDA) hit a wall.
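For anyone who hasn't written CUDA, "C++ with some extensions" is almost literal. This toy example (illustrative only, nothing to do with BarraCUDA) is standard C++ except for `__global__`, the thread-index built-ins, the `<<<...>>>` launch syntax, and the `cuda*` runtime calls — which is roughly the surface a translator has to remap:

```cuda
#include <cstdio>

// __global__ marks a function that runs on the GPU, one thread per element.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    // Runtime-library part: unified memory visible to both CPU and GPU.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    // Language-extension part: launch with enough 256-thread blocks.
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("%f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Mapping this kind of code (e.g. to HIP) is the tractable part; matching cuBLAS-level kernel performance is not.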


I've been A/B testing the big three (GPT-5, Claude Opus 4, Gemini 3.1) on a real codebase migration this week.

Quick take: Gemini 3.1 Pro's long context is genuinely better now — I fed it a 200k token codebase and it could reference files from the beginning without losing track. That was a real problem in 3.0.

For pure code generation though, Claude still edges it out on following complex multi-step instructions. Gemini tends to take shortcuts when the task has more than ~5 constraints.

The exciting thing is how close they all are. Competition is working exactly as it should.


The interesting thing about llms.txt isn't the file format — it's the incentive shift.

With robots.txt, you were telling crawlers to go away. With llms.txt, you're inviting them in and curating what they see. That's a fundamentally different relationship.

I've been experimenting with this on a few projects and the biggest lesson: your llms.txt should NOT be a sitemap. It should be the answer to "if an AI could only read 5 pages on my site, which 5 would make it actually useful to end users?"

The projects where I got this right saw noticeably better AI-generated answers about our tools. The ones where I just dumped every doc link? No difference from not having it at all.
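For reference, the llms.txt proposal's own conventions are an H1 project name, a blockquote summary, and curated H2 link sections. A sketch of the "5 pages" approach, with a made-up project and placeholder URLs:

```markdown
# Acme CLI

> Acme is a command-line tool for syncing design tokens to code.
> The five pages below answer nearly every question about it.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install, auth, first sync
- [Configuration](https://example.com/docs/config): every supported option
- [CLI reference](https://example.com/docs/cli): commands and flags
- [Troubleshooting](https://example.com/docs/errors): common failure modes
- [FAQ](https://example.com/docs/faq): licensing and limits
```

The discipline is in what you leave out, not the format.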

