
There are no moats in deep learning, everything changes so fast.

They have the next iteration of GPT, which Sutskever helped to finalize. OpenAI has lost its future unless it finds new people of the same caliber.




> They have the next iteration of GPT, which Sutskever helped to finalize

How do you know that they have the next GPT?

How do you know what Sutskever contributed? (There was talk that the most valuable contributions came from lesser-known researchers, not from him.)


sha256:e33135417f7f5b8f4a1c98c28cf26330bea4cc6b120765f59f5d518ea0ce80e5


What is this supposed to mean?
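Presumably a hash commitment: post the SHA-256 digest of a prediction now, then reveal the plaintext later to prove the claim predates events. A minimal sketch in Python, assuming that's the intent (the prediction text here is a placeholder, not the actual preimage):

    import hashlib

    # Commit: hash the prediction now and post only the digest.
    prediction = b"placeholder prediction text"  # hypothetical, not the real preimage
    digest = hashlib.sha256(prediction).hexdigest()
    print(f"sha256:{digest}")

    # Reveal: later, publish the plaintext; anyone can recompute
    # the hash and check it matches the digest posted earlier.
    assert hashlib.sha256(prediction).hexdigest() == digest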


Isn't access to massive datasets and computation the moat? If you and your very talented friends wanted to build something like GPT-4, could you?

It's going to get orders of magnitude less expensive, but for now, the capital requirements feel like a pretty deep moat.


How do you know massive datasets are required? Just because that’s how current LLMs are trained doesn’t mean it’s necessarily the only solution.


Even then, the resources needed to discover an alternative to brute-forcing a large model are a huge barrier.

I think academia and startups are currently better suited to optimizing TinyML and edge-AI hardware, compilers, frameworks, etc.




