Hacker News
On the Opportunities and Risks of Foundation Models (arxiv.org)
20 points by satorii 26 days ago | 6 comments



I'd suggest using the original article title. [Edit, it's been updated]

Still have to read the article. It's great to see people exploring this. Ever since the first "language models are unsupervised multitask learners" type papers, I wish there had been more emphasis that the various behaviors these models exhibit are essentially a side effect of learning some self-supervision task. A model is trained to, e.g., predict the next word given the previous words, and we're happy to discover it can be repurposed as a chatbot. Then people find the chatbot has undesirable behaviors and talk about fairness and governance and all that. But the basic point is that the model was never trained to do any of that; it's just a word predictor. Why did anyone think it would be OK to just let it run wild on some other task?

All that to say, a big problem in AI/ML is models getting used for things they have no business being used for, and then people being at best underwhelmed, or at worst harmed or offended, by the results. The first step should be asking why this model is suitable for the prediction I'm asking of it, and I think closer scrutiny of what these "foundation models" actually do is a good direction.
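The point above can be sketched with a toy example (my own illustration, not from the paper): a model trained purely on next-word prediction, then naively "repurposed" as a chatbot by sampling from it in a loop. Nothing in the training objective ever mentions being helpful, safe, or truthful.

```python
# Toy illustration: a bigram "language model" trained only to predict the
# next word, then repurposed as a "chatbot" by generating in a loop.
from collections import defaultdict, Counter

corpus = ("the model predicts the next word . "
          "the chatbot repeats what the model predicts .").split()

# "Training": count next-word frequencies (self-supervised next-word prediction).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """The only task the model was actually trained for."""
    counts = bigrams.get(word)
    if not counts:
        return "."
    return counts.most_common(1)[0][0]

def chatbot(prompt, max_words=8):
    """The 'repurposing': repeatedly predict the next word. The training
    objective says nothing about the downstream use, so any chatbot-like
    behavior is a side effect of word prediction."""
    out = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(out[-1])
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

print(chatbot("the"))
```

Swap the bigram table for a trained transformer and the structure is the same: the "chatbot" is just the prediction loop wrapped around a word predictor.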


(Title changed now. Submitted title was "What is this new AI term, foundation models".)


To answer the question the original poster apparently had, here are the first two sentences of the abstract:

> AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.


I'm curious about the format/formatting of this paper. There are a few visual roadmaps to the various sections and subsections throughout the paper, complete with drawings/iconography (clip art?). I haven't seen anything like this before in an academic paper. Is it something that's becoming popular in certain research communities?


Interesting observation! Not sure whether it's confined to certain research communities or part of a broader trend.

But CLIP could be a good plug-in for today's writing/design workflows, something like a CLIP-powered Unsplash.


It's a 212 page scientific report, not a traditional article.

Could a vision system not be the AI foundation model? Car vision, specifically Tesla's state of the art, is not mentioned. I see a bit of NLP bias.

