Open Source Security Foundation (openssf.org)
233 points by Garbage on March 27, 2021 | 72 comments



Honk! I represent Google on the OpenSSF, and help lead our Google Open Source Security Team. We've kicked off several projects inside the OpenSSF, and contribute to several other related efforts.

Here's a non-exhaustive list:

- Security Scorecards (https://github.com/ossf/scorecard): auto-generated security checks for OSS
- Criticality Score (https://github.com/ossf/criticality_score): auto-generated criticality score for OSS
- Package Feeds (https://github.com/ossf/package-feeds): watches package registries for updates
- Malware analysis tools
- SLSA (https://github.com/slsa-framework/slsa): proposal for a supply chain integrity framework
- Sigstore/Cosign (https://sigstore.dev/): code signing made easy!

We are also investing and exploring different efforts for improving security of critical OSS projects, and making it sustainable! If any of these projects sound interesting, come join us in the OpenSSF Working Groups!

*edited formatting


tbh, the Criticality Score has done a lot to make me mistrust the quality of pretty much everything associated with it (cf. https://news.ycombinator.com/item?id=25381397).

And then there's https://security.googleblog.com/2021/02/know-prevent-fix-fra... which effectively calls for the end of open-source contributors staying pseudonymous.

Google does a lot of good for open-source security, but these recent things are a terrible look.


> And then there's https://security.googleblog.com/2021/02/know-prevent-fix-fra... which effectively calls for the end of open-source contributors staying pseudonymous.

I think that will do more harm than good. First of all, a lot of critical software is security- and encryption-related, and I would guess that a higher proportion of contributors in that area are sensitive about protecting their identity than in the general developer population. So you would lose out on some contributions that you would otherwise have gotten.

Second, a major threat in this area arises from nation states. However, due to their experience with physical espionage, nation states are already pretty good at establishing fake identities for people (for example, it would be no problem for them to supply a fake (or even real) passport, birth certificate, etc.) or at turning people who are already working in critical areas. Thus getting rid of anonymity would not even be a speed bump for the Five Eyes, Russia, China, North Korea, etc.

So I don't think there would be much benefit, but there would be a lot of cost.


I think a known pseudonymous identity is better than an unknown fake identity. Real names lower people's guard, but most people have no way to actually verify them.


Whether or not there is an impenetrable fake identity is a different question from "now that identity Foo is untrustworthy, what other things do I need to inspect?"


For that, you don’t need any type of real world identity. If you require that people sign their commits/reviews with a PGP key and base their reputation on that, that still gives you much the same benefit, while at the same time allowing people to be anonymous.
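As a rough sketch of what key-based reputation could look like in practice, here is a minimal example that counts verifiably good signatures per key across recent history (assumes git on PATH and GPG-signed commits; the repo path and history depth are placeholders):

    import subprocess
    from collections import Counter

    def signing_keys(repo_path=".", n=200):
        # %G? = signature status (G = good), %GK = key used to sign the commit
        out = subprocess.run(
            ["git", "-C", repo_path, "log", f"-{n}", "--pretty=%G? %GK"],
            capture_output=True, text=True, check=True,
        ).stdout
        keys = Counter()
        for line in out.splitlines():
            status, _, key = line.partition(" ")
            if status == "G":  # only count signatures that verify cleanly
                keys[key] += 1
        return keys

    print(signing_keys())

A project could publish per-key tallies like this, so reviewers can see how much history stands behind a pseudonymous signer without ever learning a real name.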


To be clear, I agree.

The point I was trying to make was that stable identity, whether real or pseudonymous, has value in a security context.


From the second link:

> It is conceivable that contributors, unlike owners and maintainers, could be anonymous, but only if their code has passed multiple reviews by trusted parties. It is also conceivable that we could have “verified” identities, in which a trusted entity knows the real identity, but for privacy reasons the public does not. This would enable decisions about independence as well as prosecution for illegal behavior.


Who gets to decide who this "trusted entity" is?

For example, I don't want anyone to know my real name. I'm not up to any mischief (criminal or otherwise); I just want the separation of identities. There isn't a single entity on Earth that I'd feel safe entrusting with this knowledge if I could avoid it.


It sounds like, unless someone is an owner or maintainer of a critical open-source project, the blog post isn't necessarily calling for that person's deanonymization. For projects that are both critical and owned/maintained by anonymous entities, I think it's reasonable for an organization to think twice before taking a dependency on such projects, given the sort of anonymous attacks mentioned in the article.

Disclaimer: opinions are my own, not my employer's (Google)


> I think it's reasonable for an organization to think twice before taking a dependency on such projects, given the sort of anonymous attacks mentioned in the article.

I'd argue that "thinking twice" should be the standard bar for all open source dependencies, not a special scrutiny reserved for anonymous or pseudonymous developers.

(Though, to be fair, I doubt Google would ever use any of my code. I know your cryptographers; they don't need me to contribute lol.)


Does someone at Google want to be the "trusted entity"?


> which effectively calls for the end of open-source contributors staying pseudonymous

Among other things, attacking pseudonymity is an effective means of excluding trans people, who would be forced to identify by their legal name (a.k.a. their dead name).

Google needs to correct course on this if they're to be trusted at all.


> Among other things, attacking pseudonymity is an effective means for ensuring the exclusion of trans people

Like you said, trans people are far from the only ones affected. Here is a more extensive list:

https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Re...


I was very aware of how it hurts LGBT folks, but that link really drives home how bad these ideas are for everyone else too.


I'm fairly certain that in most first-world countries one can change one's legal name.


So I've talked to a number of people at various companies about open security work in various areas of detection and response, which is something I don't see really represented in the existing OpenSSF working groups. Is there somewhere I can discuss ideas about this?

I see tons of opportunity for guidance to OSS devs that, when implemented, would have a massive positive impact on detection and response.

I don't have the experience with such foundations, or the time, to really form a working group, but I'd certainly be interested in discussing this with others.


What attacks would these measures stop? OpenSSL had all of them and was still a disaster.


> Honk!

We finally found out who runs GooseInfosec!


I work at Microsoft and lead one of the OpenSSF working groups (https://github.com/ossf/wg-identifying-security-threats). We're always looking for folks to join the conversation and contribute to any working groups. There is a public calendar for those meetings, and there is a recording of our last town hall at https://openssf.org under the Community menu at the top.

I'm also looking to hire a software/security engineer to join our team at Microsoft, to improve security tooling and analysis around open source. This work will align/contribute to OpenSSF projects. If you like having one foot in software development and the other in security, please take a look: https://careers.microsoft.com/us/en/job/1009857


Could the position be negotiated to be 100 percent remote?


Here is some feedback for Microsoft regarding security: the Azure Sphere security story would be more interesting if C weren't the only option for actually developing on it.


The founding members are GitHub, Google, IBM, JPMorgan Chase, Microsoft, NCC Group, OWASP Foundation and Red Hat, among others.

So: Microsoft x 2, Google, IBM x 2, a for-profit security company, a non-profit security group, and a bank. I don't see any of the BSDs listed, or any other big open source projects.

I'm starting to worry that this, and some of its goals, are aimed more at future legislation than at helping open source projects with security.


If you go to the full members page[1], you will notice that the major Linux distributions are represented: SuSE, Canonical, and Red Hat. It's certainly a corporate interest group, but it would be incorrect to say that large open source projects are not represented.

Edit: it should also go without saying that Google et al. are already responsible for a significant portion of the open source ecosystem. For better or worse.

FD: My employer is a (much, much smaller) member of the OpenSSF.

[1]: https://openssf.org/about/members/


Red Hat is IBM, and I don't see anything but the "corporate" Linux.


Yep; like I said, the OpenSSF seems to be explicitly branded as a "cross-industry" initiative. That seems to be consistent with their partnership (and subsumption?) under the Linux Foundation.


Yeah, this is starting to sound a bit like the US Chamber of Commerce. https://en.wikipedia.org/wiki/United_States_Chamber_of_Comme...

The name makes it sound like a US government organisation; in reality it is a lobbying group that works to curb consumer rights.


We definitely need to do something like this. Trying to set up vulnerability scanners in CI for my open source project, I realized that open source DevSecOps is a shitshow: hard-to-use tools, lots of focus on GUI tools that are harder to script, and proprietary tools and platforms.

To be honest, it feels like vested interests are keeping it that way: professionals want to keep the tools manual so they can charge by the hour, and tool vendors obviously have no interest in open source tools.


The professionals I know, and I myself, work solely with open source tools and open source frameworks... There is certainly a libre open source push in infosec.

And the community of practitioners gives back in all dimensions: code, knowledge, talks, support. Heck, even governments and institutions give free insights; MITRE, NIST, and CIS, for instance.

With a few exceptions such as Nessus and Qualys.


Any chance you could share what open source tools and frameworks you work with? I would be grateful for that.


CIS, https://www.cisecurity.org/

CIS audit, https://www.auditscripts.com

MITRE ATT&CK, https://attack.mitre.org/

NIST, https://www.nist.gov/cyberframework

CISA, https://www.cisa.gov/cybersecurity

OWASP top 10, https://owasp.org/www-project-top-ten/

Cloud Security Alliance, https://cloudsecurityalliance.org/

Higher-level standards: ISO 27001

IEC 62443

Tools:

AD: BloodHound / SharpHound

PingCastle

Web:

OWASP ZAP

Burp Suite

Basically, download the Kali Linux distro.


Conferences:

Black Hat

DEF CON

HOPE


Great list, thanks.


Thanks.


Have you tried using OWASP ZAP? We're focusing on automation, and feedback is gratefully received. I'm also on the OpenSSF Security Tools Working Group, and making security tools easier to use is definitely one of our priorities.


First of all, great product; I found a lot of XSS when using the GUI. I also managed to script an unauthenticated scan with zap-cli without too much work. I don't like to complain about a specific open source project, so please take this as feedback.

I gave up on trying to do an authenticated scan. Docs and forum answers always say "First get it to work in the GUI, then try running it on the command line." Well, that's not helping me very much, because "getting it to work in the GUI" is not reproducible and shareable in the way a code/script with clear steps is. Secondly, getting authenticated scans to work when your login form is protected by a CSRF token is very much not trivial (I don't think I got this to work in any tool). But if your forms are not protected, you have a vulnerability.

My feeling is that the right kind of tool is really a library, so that one can script the login process, which may be quite complex with 2FA.
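For illustration, a minimal sketch of that library-style approach using Python's requests and BeautifulSoup; the URL, form field names, and token name are all hypothetical and will differ per application:

    import requests
    from bs4 import BeautifulSoup

    BASE = "https://app.example.com"  # hypothetical target

    session = requests.Session()

    # Fetch the login page and pull the CSRF token out of the form.
    login_page = session.get(f"{BASE}/login")
    soup = BeautifulSoup(login_page.text, "html.parser")
    token = soup.find("input", {"name": "csrf_token"})["value"]  # field name is an assumption

    # Submit credentials along with the token; the session object keeps the auth cookie.
    resp = session.post(f"{BASE}/login", data={
        "username": "scanner",
        "password": "example-password",
        "csrf_token": token,
    })
    resp.raise_for_status()

    # The authenticated cookies can then be handed to a proxy-based scanner.
    print(session.cookies.get_dict())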


Regarding 2FA: for TOTP there is a Linux command-line tool called oathtool. For SMS, set up Twilio. For U2F you can emulate a device with an ECDSA library.

For CSRF you'll want browser automation, like headless Chrome. Alternatively, you can load the page and extract the token from the DOM in a normal scraper.
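To make the TOTP part concrete: the same codes oathtool emits can be generated in-process. A minimal sketch with the pyotp library (the base32 secret is a made-up placeholder):

    import pyotp

    # Base32 shared secret captured at 2FA enrollment; this value is a placeholder.
    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")

    code = totp.now()  # current 6-digit code, valid for the usual 30-second window
    print(code)

    # A scripted login flow would submit `code` wherever the 2FA form expects it.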

> To be honest it feels like vested interests are keeping it that way: professionals want to keep the tools manual so they can charge by the hour

As a security professional, get out of here with that nonsense. You've run into a challenging problem and concluded there must be some conspiracy. What we do is highly technical and often customer-specific (e.g. automating 2FA because of some weird requirement, rather than the customer disabling it for the test account). There is no market in automating a lot of this work, packaging it in a nodejs library for you to use, and writing docs.


My "challenging problem" is that I use CSRF tokens, which is a minimal requirement of any non-toy project.


I haven't tried it in Zap, but there is definitely a Burp add-on that takes care of this for you. Probably several.

Zap is pretty scriptable as well, so there are likely solutions for it also. What have you tried?
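For reference, ZAP exposes an HTTP API with an official Python client (zapv2), so unauthenticated scans at least are straightforward to script. A rough sketch, assuming a ZAP daemon is already listening on localhost:8080 (the API key and target URL are placeholders):

    import time
    from zapv2 import ZAPv2

    zap = ZAPv2(apikey="changeme", proxies={
        "http": "http://127.0.0.1:8080",
        "https": "http://127.0.0.1:8080",
    })

    target = "https://app.example.com"  # placeholder
    zap.urlopen(target)  # seed the site tree

    # Spider, then actively scan, polling until each finishes.
    scan_id = zap.spider.scan(target)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    scan_id = zap.ascan.scan(target)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=target):
        print(alert["risk"], alert["alert"])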


Sadly, the commercial tools are also kind of a shitshow. We've got one (a very expensive one) that emits risk reports with a "high" rating where the message is clearly a programmer saying "not implemented yet."

Oh, and it also crashes with null pointer errors.


One of the (expensive) tools we used would always flag the word "key", regardless of context. Have an array of postal suffixes that includes "St", "Rd", "Key"? That gets flagged as a high-severity cryptographic vulnerability. Same for "Press any key to continue", or any variable name that contains the word "key". These obvious false positives erode developers' trust in the product, and their PMs use them as proof that security scans serve little purpose other than causing missed deadlines. The vendor refused to correct any of them, claiming it's better to be safe than sorry, but offered us the ability to turn off checking for hardcoded cryptographic keys as a workaround, which of course wouldn't catch actual cases of hardcoded keys, which sadly do happen.
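To illustrate the underlying problem: a context-free keyword rule cannot tell these cases apart, while even a crude contextual heuristic can. A sketch (not any particular vendor's logic):

    import re

    lines = [
        'suffixes = ["St", "Rd", "Key"]',           # postal suffix, not a secret
        'print("Press any key to continue")',       # UI text
        'AWS_SECRET_KEY = "AKIAIOSFODNN7EXAMPLE"',  # the case actually worth flagging
    ]

    naive = re.compile(r"key", re.IGNORECASE)
    # Heuristic: a "key"-ish name being assigned a longish opaque string literal.
    contextual = re.compile(r"\w*key\w*\s*=\s*[\"'][A-Za-z0-9+/=]{16,}[\"']", re.IGNORECASE)

    for line in lines:
        print(bool(naive.search(line)), bool(contextual.search(line)), line)

The naive rule flags all three lines; the contextual one flags only the hardcoded secret.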


That certainly sounds like Tenable's (Nessus) Security Center.

That piece of software is some of the worst, most verbose, bug-ridden garbage I've been required to work with.


Have you tried Snyk? I've found it to be the simplest to use, and it's completely free for public repos.


I get a lot of false hits and plainly incorrect PRs opened. My project is a monorepo (lerna/npm); I'm not sure if I have it set up incorrectly or if Snyk doesn't play well with monorepos.


Fund communities around products like Security Onion. Champion directing at least part of the budget allocated for support to the direct support of active open source contributors.

That's the only way to break the consultancy chain: stop paying the wolves to guard the henhouse.


Lookie here, another foundation with working groups, directors, and a bureaucratic apparatus! Years of fruitful work!

"Open" source is being drowned in bureaucracy by talkers.


Please join us, and help us do things.

Seriously, our biggest challenge is that we need more folks like you, who will roll up your sleeves and help us get something tangible done "right now". Attend a working group and tell us that we're not moving fast enough, or that we're not working on the most important things, or that your idea is better than ours and we should do your thing instead.

Give it a shot, you might be surprised. Or we might be. Either way, one of us is getting better.


> our biggest challenge is that we need more folks like you, who will roll up your sleeves and help us get something tangible done "right now"

The prospect of working for free on an initiative where all the good outcomes will be used to promote Google and other corporate entities is not something that encourages people to become involved.

Especially hard working people who produce something useful.

<self interest> Maybe fund a grant program accessible without tons of paperwork? And fund people who would be happy to create / continue creating / improve something useful and security-related, but are unable to justify doing it as a hobby? </self interest>


I know you mean well, but I've tried so very, very hard to work with Linux Foundation-driven projects like this. The lesson I learned was that, as an independent developer, you simply have no place in an organization like this.


I'm sorry you had a bad experience with LF projects. We absolutely need independent developers in the OpenSSF - if the only voices that are heard are the "enterprise" voices, we're going to come up with solutions that work primarily for enterprises.

If you're willing to try again, I'd be happy to have a chat with you about where you could contribute/collaborate with us. You're also welcome to join any of the working groups (mine meets next on Wednesday, 3/31); calendar: https://calendar.google.com/calendar?cid=czYzdm9lZmhwNWk5cGZ...


From my perspective: try sounding less like a uniform beige bureaucratic committee-and-meeting machine, and make it much clearer why some underappreciated OSS project maintainer might see a worthwhile benefit in what you're doing.

Attending meetings with working groups and advisory committees filled with people from Microsoft/Google/Red Hat/JPMorgan is not the sort of thing that fills most OSS devs I've ever met with joy...


Can you give some examples?

All Linux Foundation projects separate business/funding governance (you pay to get a vote on how the money is spent) from technical governance (you do the work, you get a seat at the table). If you show up and do the work, you should be fine.


I have a suggestion for ground rules as far as tooling is concerned.

1. If it requires a GUI, at any stage at all, it's broken.

2. Unless it's controllable through scripting, it's unfit for purpose.

3. If it doesn't support real-time observability and control via an API, it's useless.

This space doesn't need any more "products". We need a good suite of composable tools, not more C-suite bingo sheets.


So I admit this is based on outside optics only, but everything I've seen signals "big corporate group thing", similar to many standards groups and other committees. That's fun and games if participating is your job: you get an opportunity to talk about what you are doing on company time, with peers who are also there on company time and thus don't mind spending hours participating in meetings (during work time, because it's work for everyone else), reading mailing lists, ... But if you aren't paid for it, you either need effectively unlimited free time or need to already be someone highly respected to participate at a meaningful level.

If that's not the goal, you need to invest in a) ensuring that you actually provide a useful environment for outside participants, and b) communicating this, e.g. by having a public presence that clearly explains it and doesn't look like it's targeted at impressing middle management. Transparency into what is going on right now is key to this.

"Linux Foundation" doesn't scream "community". See also other comments here, specifically about Google's input, which reads a lot like "we've thought up some things we're now going to impose on everyone, just open-washed a bit". (Again, this is purely about outside image, not an attack on any individual involved as having done anything wrong; I'm just trying to communicate the outside impression you are likely dealing with.)


I'm having trouble finding a calendar (after very shallow searching). Perhaps you could add a link someplace prominent?


Sorry, should have posted the link directly: https://calendar.google.com/calendar?cid=czYzdm9lZmhwNWk5cGZ...


If I want to publish a security bug, why do I need a foundation to do that? I'd act as an individual and be just fine. These foundations are all highly infiltrated by corporate and intelligence interests. This is especially true for the Linux Foundation, which sponsors the OpenSSF. There is no reason to trust them.


Nothing here suggests that you should need them to publish a security bug.


Put it a different way:

Open source is now in every single walk of life and is only going to become more critical to everyone online, their banking, their healthcare, and so on.

As such, most people would want that software to be governed. Preferably with transparency, a right to reply and be heard, and their needs taken into account.

It might not have to be bureaucratic, long-winded, or inefficient, but it will have to be government.

And it's going to need to be a government of international needs and concerns.

The only good point is that we might get to shape it for the next decade while it all starts up.

www.oss4gov.org/manifesto


Yeah, I gotta admit I read about Governing Bodies, Working Groups, and Technical Advisory Committees, and instantly pigeonholed this as "for academic tenure track or Big Corp evangelists and subject matter experts only".

Looks way too much like a bunch of suits and the occasional skunkworks greybeard deciding they want to tell open source project maintainers and contributors what to do.

Maybe they'll do some good, but it doesn't sound like a thing I'd ever want to be involved in (you know, unless IBM or Microsoft or whoever offer me a Distinguished Engineer role where I can choose to dabble here at my leisure on their great big firehose of dimes...)


Another giant corporate bureaucracy packed to the brim with directors and codes of conduct. This stuff is the death of the open source community spirit.


How does this initiative compare to other foundations that have similar responsibilities (judging from the website and pinned github repos), like say, OWASP [1] or the CSA [2]?

Will this foundation focus on reducing patch times and on offering services like integrated bug trackers, direct contacts, and open policies about what happens to submitted critical bugs and exploit PoCs?

I'm asking because, when thinking of "enterprise open source", the WebKit bug tracker comes to mind... where literally nobody knows what's going on. All bugs are private once submitted, and no external contributor can see what's happening until, literally years later, somebody starts to actually read them; and yes, that's also the case for critical remote exploit bug reports.

Personal Opinion:

I personally cannot vouch for Microsoft's policies, as they cease-and-desisted me and threatened to sue me in the past for disclosing an RCE/privilege escalation report that was NT-related. They also caused an illegal police raid (in a legal case where they didn't even charge me with anything, so my lawyer couldn't find out anything except about the illegal police raid reports). Well, and I still haven't gotten my hardware back after waiting more than 5 years.

So yeah, I guess every company that's part of this initiative should look at its own shitty policies and previous legal actions before claiming it actually wants contributions. Almost always, reverse engineers get threatened by lawyers once they report critical remote-exploit-level bugs. Quite literally; I'm not making this up, and it's well known around the netsec/infosec scenes.

As long as "hackers" are painted as the bad guys for reverse engineering and trying to submit patches and fixes, and even have to be anxious about being sued by the very company they're trying to help, nothing will change.

[1] https://owasp.org/

[2] https://cloudsecurityalliance.org/


It looks like OWASP is an "Associate Member" of this foundation.


Are there plans for donating funds and engineering talent to open source projects that may not be equipped to handle staying abreast of the latest security practices?


There are active discussions around this, especially in the Securing Critical Projects working group (https://github.com/ossf/wg-securing-critical-projects). These resources will always be scarce relative to the number of open source projects that could benefit, so there's a large focus on developer best practices, improved tooling, and "secure by default" configurations. These are described in the working group README pages (https://github.com/ossf) in more detail.

There are a few ways that OpenSSF and member organizations are already funding direct security work for open source projects, and I'm hoping this expands significantly in the near term.


There are active discussions about eventually doing that, yes. It was hard to kick that off during a pandemic, so the decision was made to create the foundation first so that it could be discussed and eventually worked out. In the meantime there are informal activities to try to directly help some projects right away, while we work out something more expansive.

That said, there is no way to directly help every one of the millions of Open Source projects, so there is a big interest in doing things that help many projects at the same time.



If the general state of security in the tech industry is to improve, it will come through open source collaborative projects, where software is finished and shippable when engineers are satisfied.

It will not come through commercial software, where profit pressures ensure that there will always be business decision makers who innovate by spending ever less on security until finally it blows up in their face.

The security of proprietary software will improve on average only insofar as it is constrained to improve by open source dependencies.


An interesting idea for open source security to tackle:

What I don't get is why everyone still rolls their own infrastructure scripts.

Where is an openly vetted monorepo of Terraform or SDK-based code? The same goes for Kubernetes, Helm, Ansible...

Very few web tech problems are so Byzantine that they need humans to write bespoke config.


Probably because needs, and therefore tooling, are constantly evolving. IME the subtle differences add up too. Trying to maintain conformance to convention is itself something of a rat race.


My experience over the past year mirrors this. I'm regularly surprised that there aren't more generally accepted, widely promoted solutions to lots of the minutiae that comes with DevOps/cloud config. CloudFormation and Terraform are both pretty bare-bones: they give you tools to describe any cloud resource and the way resources relate to each other. That's great! But I'd rather not be left in charge of defining how cloud resources can securely communicate; I'd rather include an AWS/HashiCorp-supported module that predefines configuration for adding a Redis cache to something, or for letting only certain resources connect to a database.

Terraform modules were a nice start, but I've pretty much never seen a module I would trust using. It may just be my own bad luck, but the vast majority of the TF modules I've looked at (a) have a single maintainer, (b) have 10 stars or fewer, and (c) haven't been updated in 6 months. The combination of those three does not leave me feeling confident about including a library. I wish there were an easier way to tread the paths of those who have done the work.


I can offer one: DebOps[1]. It's a git monorepo that contains a set of Ansible roles and playbooks that can be used to manage Debian-based servers, VMs, or containers. It's designed so that almost every configuration option can be overridden from the Ansible inventory: you are not expected to modify the monorepo itself, so you can get updates over time while still customizing the result to your needs. Playbooks and roles are developed in the open; all the secrets and inventory configuration stay private.

[1]: https://debops.org/


I don't know how related this is to the OP, but I do agree, and I think a big part of the answer is just how young all these tools are. I wouldn't be surprised if 5 or 10 years from now, infrastructure and software deployments are "solved" to the same degree as IDEs or web browsers are today.



