
On the other side, as someone doing a lot of work in the GenAI space, I'm simultaneously amazed that I can run Flux [dev] on my laptop and use local LLMs for a variety of tasks, while also wishing that I had more RAM and more processing power, despite having a top-of-the-line M3 Max MBP.

But it is wild that two years ago running any sort of useful GenAI stuff on a MBP was more or less a theoretical curiosity, and already today you can easily run models that would have exceeded SotA two years ago.

Somewhat ironically, I got into the "AI" space a complete skeptic, but figured it would be fun to play with nonetheless. After two years of daily work with these models, I'm increasingly convinced they are going to be genuinely disruptive. No AGI, but they will certainly reduce a lot of labor and enable things that weren't really feasible before. Best of all, it's clear a lot of this work will be doable from a laptop!



I would love to hear more about what exactly you think will be disruptive. I don’t know the LLM world very well.



