Hacker News | samj's comments

This journalist didn't: "I’m a dues paying member and I didn’t receive it, and it’s highly possible that others deleted the email without opening it."

OSI’s Continually Changing Election Story https://fossforce.com/2025/02/osis-changing-election-story/

It wouldn't matter anyway, as they announced the deadline as 17 February, which would mean midnight local time by default (and should mean midnight AoE, Anywhere on Earth, especially for an organisation purporting to be concerned with openness).
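For what it's worth, "Anywhere on Earth" (AoE) is simply the fixed offset UTC-12, so the generous reading of a bare "17 February" deadline can be sketched in a few lines of Python (the 2025 year is an assumption based on the election being discussed):

```python
from datetime import datetime, timezone, timedelta

# "Anywhere on Earth" (AoE) is the fixed offset UTC-12: the last place
# on the planet where a given calendar date is still in effect.
AOE = timezone(timedelta(hours=-12), "AoE")

# A bare "17 February" deadline, read generously as end of day AoE.
# (The 2025 year is an assumption from context, not stated in the comment.)
deadline = datetime(2025, 2, 18, 0, 0, tzinfo=AOE)

print(deadline.astimezone(timezone.utc))  # 2025-02-18 12:00:00+00:00
```

In other words, anyone on Earth still inside 17 February locally would be within an AoE deadline, which is why conference submission systems conventionally use it.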


Bradley M Kuhn has since announced[1] the platform[2] Luke would have been running on (with a blog post for updates[3]):

Shared Platform for OSI Reform

Item 1. Repeal the Open Source AI Definition (OSAID)

Item 2. Adopt a process for formal review of previously approved licenses

Item 3. Remove “code of silence” from Board Member Agreement

Item 4. Directors should be allowed to use FOSS for Board activities

Which of these points are the OSI's current leadership so committed to blocking that they would make Luke's candidacy their hill to die on?

1. https://mas.to/@bkuhn@floss.social 2. https://codeberg.org/OSI-Reform-Platform/platform 3. https://ebb.org/bkuhn/blog/2025/02/28/osi-board-election.htm...


We’re finally starting to see some precision in language from the company: they now consider the Meta LLaMA AI models themselves NOT to be Open Source but rather Open Weight. Thank you for admitting on the public record what the Open Source community has been saying all along! The data is the source for AI and it’s not provided, though Harvard’s move is a step towards rectifying that.


Yeah, but the source isn't theirs, right? The first versions were trained on books and the internet and such, so there's no way they can publish it. I don't know how right or wrong I am here, but I'm glad it's open enough that you can self-host all of it and nothing proprietary is needed.


This is another issue, and it's why I'm not holding my breath for the data and will be satisfied with a rebranding to the more appropriate Open Weight moniker (caveat: the license).


"Nobody disagrees about the principles [behind the OSAID], where we see disagreement is implementation"

The principles are the four essential freedoms of free software, and the one job the OSI had in implementation was to work out what had to be open to protect them.

Hint: it’s the data.


“Some of the tension we've seen is in people that haven't made that mind shift to the wider picture and are just trying to apply what they are very familiar with, that they're experts at, and that made it more difficult.”

Like many of the objectors, I’ve been in Open Source since before there was Open Source or the OSI, and am now doing a master’s specialising in ML while coding an AI OS. This is condescending BS from talking heads who don’t even claim to have AI expertise.

Meanwhile their chosen “co-design” process is literally the “Do Your Own Research (DYOR)” of technical standards:

“We believe that everyone is an expert based on their own lived experience, and that we all have unique and brilliant contributions to bring to a design process.”

What a clown show. How do we get off this train?


Karsten correctly called the consensus on that thread, including my own view as one of the folks named in the article: that accepting less-than-open "public" datasets like the dumps of the Internet made available by the Common Crawl Foundation may have been an acceptable compromise to cast a wider net in recognition of AI industry norms. I no longer believe that to be the case, accepting Open Source Definition (OSD) author Bruce Perens' view that the data IS the source, on the basis that the data is what you need to modify in order to freely change the output of the system.

The OSI's position that ANY data is acceptable has shifted the Overton window of Open Source. Categorising data into open, public, obtainable, and unshareable non-public, only to then accept all four categories, is a form of doublespeak: it appears to maintain openness while accommodating even the most restrictive data. We don't negotiate with terrorists.

Indeed, there are two dimensions to "Open" AI systems which "can be freely used, modified, and shared by anyone for any purpose" (per The Open Definition): openness, which is already well-covered by the Open Source Definition, and completeness, which is covered implicitly — after all, AI systems are software — but which would need to be specified in approved frameworks that could be self-applied like the MOF (were a new Class 0 to be created requiring open data licenses rather than "any license or unlicensed" like Class I).

In other words, the Open Source Definition (OSD) covers openness but not completeness (at least not explicitly, which is arguably a bug the community may want to fix in a future version so it covers both). The MOF covers completeness but not openness. The OSI's proposed OSAID covers neither, so any vendor using it to open-wash closed systems as Open Source AI rightly deserves ridicule as it is patently ridiculous.
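To make the two-dimensional point concrete, here is a toy sketch of openness and completeness as independent axes (the three-component split and the example labels are my own illustrative assumptions, not any official OSI or MOF scheme):

```python
# Toy model of the two independent dimensions discussed above:
# "completeness" (are all components released?) and "openness"
# (are the released components under open licenses?).
# The component names and examples are illustrative assumptions only.

COMPONENTS = ("code", "weights", "data")

def classify(released_openly: set) -> str:
    """Label a system by which components it releases under open terms."""
    if released_openly == set(COMPONENTS):
        return "open and complete"          # both dimensions satisfied
    if not released_openly:
        return "closed"                     # neither dimension satisfied
    missing = ", ".join(sorted(set(COMPONENTS) - released_openly))
    return "open but incomplete (missing %s)" % missing

# Hypothetical examples:
print(classify({"code", "weights", "data"}))  # open and complete
print(classify({"weights"}))                  # a typical "open weight" model
print(classify(set()))                        # fully closed
```

The point of the sketch is that a definition checking only licenses (openness) or only the component inventory (completeness) can each be gamed; a meaningful Open Source AI bar would have to require both.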


That's why the Open Source analyst firm RedMonk now "do not believe the term open source can or should be extended into the AI world." https://redmonk.com/sogrady/2024/10/22/from-open-source-to-a...

I don't necessarily agree and suggest the Open Source Definition could be extended to cover data in general (media, databases, and yes, models) with a single sentence, but the lowest risk option is to not touch something that has worked well for a quarter century.

The community is starting to regroup and discuss possible next steps over at https://discuss.opensourcedefinition.org


Yes, and Open Source started out with a much smaller set of software that has since grown exponentially thanks to the meaningful Open Source Definition.

We risk denying AI that same opportunity to grow in an open direction, and by our own hand. Massive own goal.


> Yes, and Open Source started out with a much smaller set of software that has since grown exponentially thanks to the meaningful Open Source Definition.

I thought it was thanks to a lot of software developers’ uncompensated labor. Silly me.


The OSI apparently doesn't have the mandate from its members to even work on this, let alone approve it.

The community is starting to regroup at https://discuss.opensourcedefinition.org because the OSI's own forums are now heavily censored.

I encourage you to join the discussion about the future of Open Source, the first option being to keep everything as is.


For reference, this is the OSI Forum mentioned: https://discuss.opensource.org

Didn't personally know they even had one. ;)


Heh... HN has always been full of massive proponents of the OSI, with people staunchly claiming any software under a license that isn't OSI-approved isn't 'real open source'.

Now we're seeing that maybe putting all that trust and responsibility in one entity wasn't such a great idea.


We still have the FSF and free software, both predating "open source" and the OSI.


OSD is widely accepted in the community and I don't expect that to change regardless of what happens with AI definitions.

Plus we still have FSF's definition and DFSG.


OSI must defend the open source trademark. Otherwise the community loses everything.

The legal system in the US doesn't leave them any option but to act.


They don’t have a US trademark on “open source”. Their trademarks are on “open source initiative” and “open source initiative approved license”.


Hahaha… very open. Yeah, no one saw this coming.

