Hacker News
The Testimony Before the US Congress of Clem Delangue, CEO of Hugging Face
8 points by AnhTho_FR on June 27, 2023 | 6 comments
AI innovation, especially for popular AI systems today such as ChatGPT, has been heavily influenced by open research, from the foundational work on the transformer architecture to open releases of some of the most popular language models today. Making AI more open and accessible, including not only machine learning models but also the datasets used for training and the research breakthroughs, cultivates safe innovation. Broadening access to artifacts such as models and training datasets allows researchers and users to better understand systems, conduct audits, mitigate risks, and find high-value applications.

The tension between fully opening and fully closing an AI system involves risks on either end: fully closed systems are often inaccessible to researchers, auditors, and democratic institutions, and can therefore obscure necessary information or conceal illegal and harmful data, while a fully open system with broader access can attract malicious actors. All systems, regardless of access, can be misused and require risk-mitigation measures. Our approach to ethical openness acknowledges these tensions and combines institutional policies, such as documentation; technical safeguards, such as gating access to artifacts; and community safeguards, such as community moderation. We hold ourselves accountable for prioritizing and documenting our ethical work throughout all stages of AI research and development.

Open systems foster democratic governance, and increased access, especially for researchers, can help solve critical security concerns by enabling and empowering safety research. For example, the popular research on watermarking large language models by University of Maryland researchers was conducted using OPT, an open-source language model developed and released by Meta. Watermarking is an increasingly popular safeguard for AI detection, and openness enables such safety research through access. Open research helps us understand these techniques' robustness, and accessible tooling, which we worked on with the University of Maryland researchers, can encourage other researchers to test and improve safety techniques.

Open systems can also be more compliant with AI regulation than their closed counterparts: a recent Stanford University study assessed foundation model compliance with the EU AI Act and found that while many model providers, such as AI21 Labs, Aleph Alpha, and Anthropic, score less than 25%, Hugging Face's BigScience was the only model provider to score above 75%. Another organization centered on openness, EleutherAI, scored highest on disclosure requirements.
Openness bolsters transparency and enables external scrutiny.
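For context on the watermarking research mentioned above: the University of Maryland approach works by pseudorandomly partitioning the vocabulary into a "green" and a "red" list based on the preceding token, biasing generation toward green tokens, and then detecting the watermark by counting how many tokens fall on their green lists. The sketch below is a toy illustration of that counting idea only; the function names, the SHA-256 seeding, and the hard green-list sampling are my own simplifications, not the researchers' actual implementation (which biases logits softly and uses a statistical z-test).

```python
import hashlib
import random


def green_list(prev_token: int, vocab_size: int, fraction: float = 0.5) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the previous token.

    Toy scheme: hash the previous token id to seed an RNG, then sample a fixed
    fraction of the vocabulary. Both generator and detector can recompute this.
    """
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))


def detect(tokens, vocab_size: int, fraction: float = 0.5) -> float:
    """Score a token sequence: the fraction of tokens that land on the green list
    seeded by their predecessor. Unwatermarked text scores around `fraction`;
    text generated with a green-list bias scores much higher.
    """
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(cur in green_list(prev, vocab_size, fraction) for prev, cur in pairs)
    return hits / max(len(pairs), 1)
```

In the real technique, generation adds a small constant to the logits of green-list tokens rather than sampling exclusively from the green list, which preserves text quality while still shifting the detector's count far above chance.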

The AI field is currently dominated by a few high-resource organizations that give limited or no open access to novel AI systems, including those based on open research. To encourage competition and increase AI economic opportunity, we should enable many people to contribute to increasing the breadth of AI progress across useful applications, not just allow a select few organizations to improve the depth of more capable models.

Full testimony: https://twitter.com/ClementDelangue/status/1673349227445878788




I say this is important. However, given that the big companies have invested millions into this endeavour, providing access to the public for free seems a bit discriminatory and unfair. I would caution against getting people used to getting stuff for free. This happened with the newspapers back when I was at the NYT, and we had to put up a paywall to protect our investments.


Big companies made millions by exploiting workers, only to invest those profits into 'endeavors' aimed at replacing those very individuals with AI systems.

Nothing extraordinary to see here.


I have to disagree. Making people work for money wouldn’t be exploitation in my book.

But YMMV. :)


I agree it's a point-of-view thing...

Making people toil in the fields while claiming the right of the first night with their wives is not exploitation; it's right up there with distributing the fun coupons big companies have an endless supply of, even when going bankrupt.


What you describe is different. I can't say I understand the point about being with their wives: perhaps we come from different cultures, you and I.

But please don't misunderstand my point about not exploiting people. Some of these, what you call 'exploited people', are sometimes the root of evil in our society. Having them work for little money is not an issue in my opinion.

But I take your point. Exploitation has existed since the beginning of time. It is all a matter of where one draws the line. :) Again, thanks for this lively and fun discussion!


Yes, I agree with vbh21. We should pay all humans on earth only 1 cent per month. Only vbh21 and I should get $1 billion per month.



