
I used to work at a university where my professor had been in automatic speech recognition for a long time, but he basically gave up on that line of research about 10 years ago because he figured that universities simply cannot compete budget-wise with the big industry players.

I suppose the same will be true for most ML-related areas of research sooner or later, at least as far as applied ML is concerned.

Already, a substantial amount of research innovation in NLP and CV has been coming from big companies in recent years.

Of course there is a discussion to be had about what that means for society at large. At this point, a lot of said companies do publish their results at conferences etc. But what if at some point they decide to be as "open" as OpenAI (ie., not)?




Well, universities can’t compete in things like car production or rocket manufacturing but find ways to contribute nonetheless. Researchers always have struggled, and always will, to get resources relative to BigCorp - AI is just joining the party. Daimler and Lockheed are no more open than Facebook is, AFAIK. There is still plenty to do and to analyze: verifiable AI, more efficient models, knowledge transfer, 1000 brains, human interpretability, etc.


I think the academic side will start shifting towards research on efficiency and speed while companies will continue to push the cutting edge.

In the NLP space there's been a lot of work recently around reducing model sizes, since they've started to reach the point where model weights sometimes don't fit in the memory of most GPUs.
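
To make "reducing model sizes" concrete, here is a minimal sketch of one representative technique: post-training dynamic quantization of a Transformer's linear layers in PyTorch. The model and its dimensions are illustrative stand-ins, not taken from any particular paper.

    # Minimal sketch (illustrative): shrink a Transformer-style model by
    # quantizing its nn.Linear weights to int8 after training.
    import io
    import torch
    import torch.nn as nn

    # Stand-in model: a small Transformer encoder with made-up dimensions.
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=6
    )

    def checkpoint_mb(m: nn.Module) -> float:
        """Serialized size of the module's state_dict, in megabytes."""
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)
        return buf.tell() / 1e6

    # Post-training dynamic quantization: Linear weights are stored as int8,
    # activations stay in float and are quantized on the fly at runtime.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    print(f"fp32 checkpoint: {checkpoint_mb(model):.1f} MB")
    print(f"int8 checkpoint: {checkpoint_mb(quantized):.1f} MB")

Distillation and pruning go further, but even this one call cuts the weight storage of the quantized layers roughly 4x, which is the kind of gap that matters once weights stop fitting on a single GPU.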

There are also projects like MarianNMT which completely abandon Python and write heavily optimized models in fast languages that can run quickly and accurately even without GPUs. I think we'll see a lot more of this, though of course there's a pretty big barrier in the sheer rarity of being good at both deep learning research and writing optimized low-level code.


It would be a bit ironic for universities to compete on efficiency and speed, given those are two things companies already optimize for. Not impossible, of course; theory, and the encouragement to work a bit more abstractly, could lead to providing exactly that.

As for writing low-level code, I thought that was something usually handled by the compiler, or that even the advanced high-performance-for-high-price work mostly came down to tweaking the compiler after analyzing its output. Not my direct space, so I speak with no authority.


> I think the academic side will start shifting towards research on efficiency and speed

Constraints are the mother of creativity.


Julia is not hard for a Python programmer to pick up, and it can be very fast.


Academia is not necessarily the pinnacle of achievement in a field; it is the pinnacle of published achievement. There is always a dual track of proprietary knowledge and knowledge available to the commons. Since the latter is most beneficial to society, we have awards for people when they publish their research instead of hiding it for maximum profit.

I don’t see anything new here; these institutions that encourage people to share are old, so it must be a problem that has been recognized for a while.


Even historically, how much cutting-edge research for commercial tech has come from universities? I'd say government-backed labs, the military, and private corporations have always had a greater impact.


"But what if at some point they decide to be as "open" as OpenAI (ie., not)?"

Aside from some of the academics and the "gain and share knowledge for knowledge's sake" types they hire, why would they care?

For the record, I don't like the idea of scientific research becoming proprietary. At all. But is there anyone credulous enough to think these organizations would willingly risk their bottom line for principles like "openness" and not just play PR games to make themselves appear open and concerned?

In other words "Don't LOOK evil but do evil when no one's looking".

The Frances Haugen case already shows how damaging such openness can be.


I hope a positive outcome of this will be that universities direct more of their research effort toward efficiency of network architectures and/or understandability.


Unfortunately it will be hard to investigate properties of large, powerful neural networks without access to their trained weights. And industrial labs that spend millions of dollars training them will not be keen to share.

If academics want to do research on expensive cutting-edge tech, they will have to join industrial labs or pool together resources, similar to particle physics or drug discovery research today.



