The right thing for the wrong reasons: FLOSS doesn't imply security (seirdy.one)
37 points by Seirdy on Feb 17, 2022 | 49 comments



It does imply trust however. I go out of my way to read source code and I'm a lot more comfortable using code I've read compared to opaque binaries nobody really knows a thing about. Free software is not immune to vulnerabilities but it is quite resistant to people doing shady stuff just because they think they can get away with it. Now with reproducible builds it's gonna be even more trustworthy.


Yep. The pushback against FLOSS software could be seen as a delay tactic to allow the continued distribution of software with known flaws (even in commercial environments with source-available agreements, as other commenters have alluded to: if you don't have both the public code and a reproducible build of it that you can verify independently, then you may have been provided with human trust, but you do not have software trust).

I'd expect that once software trust reaches a sufficiently high-quality baseline, we'll begin to see tricksters retreat to (or bolster) network-level traffic gathering and interference. Not all tricksters are hugely sophisticated -- but many of them do get themselves into increasingly complicated situations, often with significant access to financial resources and leverage -- so that movement would probably be accompanied by a lot of braying for attention and FUD-style tactics (they'll need ways to cover their tracks -- something that, in the era of search engines, is often purely about producing distracting noise and search results).


Have you noticed a pushback against FLOSS? I'd be interested if that's the case.

I went out of my way in this post to make it clear that I'm vehemently in support of FLOSS for a solid list of reasons; it's just that "security" is far lower on that list than some readers would think. There's a reason I only decided to post this months after two of my previous posts in support of FLOSS gained traction ;).

Unfortunately, "FLOSS doesn't imply security, but it's certainly helpful. Just set your expectations straight and remember that security isn't a checklist but an emergent property that stems from a variety of factors uncovered through detailed analysis" is a bit too long of a title so I had to make one that looked like I was picking a side before making it clear that I wasn't. Titles aren't good at capturing nuanced views.


It's OK, this was me jumping on my usual soapbox, and not a direct response to your article (I should do better at staying on-thread-topic, in general).

In the Venn diagram of source-code-related security properties, the facts that proprietary code can be secure and that FLOSS software can be insecure aren't controversial to me, so I think I'll tend to be aligned with your core arguments.

The pushback that I notice (or perceive? maybe they're different?) is that most large tech companies - regardless of background - seem stubbornly opposed to offering their products and services as FLOSS through-and-through, despite what I think are fairly apparent, technically sound, morally conscious and defensible arguments that the code for the products everyone relies on in life could and should be FLOSS.

But: I'll go away and read your post in a bit more depth before adding any further thoughts.


Roughly speaking: yes, you make fair points that source code isn't required for a number of different security research approaches (and, as you indicate, many research practitioners essentially isolate the software they're investigating and then attempt to see what it does at a binary level and/or at runtime).

Although I suspect that I'm missing other things to add to the conversation, I'd argue that availability of source code -- at least in the Zoom and Intel ME cases -- would reduce the overall time-and-monetary-cost of identifying suspected flaws. And also of nullifying invalid insecurity claims! So that's another argument for FLOSS: let's try to dissuade vendors from (appearing to?) waste our researchers' and defenders' valuable time.


Besides, closed source also doesn't imply security, so at least with FLOSS we have the source code in case we have to investigate.


Seirdy's article is mostly focused on bug-finding in the binary domain, by fuzzing, memory analysis, decompiling and other techniques. He makes the entirely correct observation that having source to audit is only one part of thorough debugging, because many exploits are only manifest at runtime in the context of specific hardware, operating systems, and build chains.

Seirdy does not denigrate source auditing as some interpretations of his words here seem to suggest. This feels like quite a mature article, and there are implications he touches on but doesn't fully explore, like Thompson's "Trusting Trust" rabbit hole of the malicious compiler, and the fact that security by obscurity has some serious clout if you can compile for non-standard hardware. The Non Specific Agency may have a zero-day for your Debian package, but it won't irk the program on your FPGA-emulated Fairchild F8 microprocessor.


(am author) I'm actually going to touch on that Ken Thompson article in a follow-up about how sandboxing can improve not just security, but user freedom/control too (if implemented a certain way).

It'll go over how software itself should be considered untrusted (citing Thompson); however, even if it is "trusted", it still might consume untrusted content (often by reading data from arbitrary files or the network). Part of what it means to use free software IMO is having less dependence on a vendor, and reducing trust is one of multiple things that can work in that direction.


I didn't like a few of the implications the author makes.

> One of the biggest parts of the Free and Open Source Software definitions is the freedom to study a program and modify it; in other words, access to editable source code.

You don't have to have FLOSS-compatible open source license to run security audits on the code. For instance: Microsoft allowed government entities to check Windows security-related source code for many years. Just having access to the code is enough for audits, regardless of the license.

> One such reason is that source code is necessary to have any degree of transparency into how a piece of software operates, and is therefore necessary to determine if it is at all secure or trustworthy. Although security through obscurity is certainly not a robust measure...

If code is not open sourced it doesn't mean security through obscurity is employed. It simply means there's no public access to the code. This is a very common misconception.


> Just having access to the code is enough for audits, regardless of the license.

No. Who guarantees that the code you see is the one that was compiled into the binary you are running?


You can use Reproducible Builds to compile the source code and get the exact same binary:

https://reproducible-builds.org/
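A minimal sketch of the verification step (hypothetical file paths, and it assumes you've already rebuilt the package yourself under the documented build environment with the same pinned toolchain and flags); the whole point is that the check reduces to a bit-for-bit comparison:

    import hashlib

    def sha256(path, bufsize=1 << 20):
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: the distributed binary and your own rebuild.
    vendor = sha256("package_1.0_amd64.deb")
    rebuilt = sha256("rebuild/package_1.0_amd64.deb")

    print("vendor :", vendor)
    print("rebuilt:", rebuilt)
    print("bit-for-bit identical" if vendor == rebuilt else "MISMATCH: investigate")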


Okay, even if you are assured that the source code you see is the origin of the binary you run, still no one can guarantee there are no security concerns. The reason is simple: software is written by humans; at best it was created with good will and due diligence. But no matter how careful the developers and the auditors (if there were any) were, there are surely security-related bugs. Making it worse, there have been and will continue to be lots of deliberately planted backdoors. Given the complexity of the software and the cost of auditing, even if you have all the original source code, most likely you are unable to reveal planted backdoors. And if you indeed find something, how can you tell whether it was a bug or a backdoor?


This part of the subthread was about ensuring the binary you run was built from the source code you have. Of course there could be bugs in that source code and/or bugs in the resulting binary, so you have to audit both of them and fix anything suspicious.


Apologies for my ignorance here, but what is the current "market penetration" for reproducible builds? As in, if you combine all sources of FLOSS binaries, what percentage of total binaries use this system?

Looking at the site you linked, am I correct that this is still early days in actual implementation?


There are lots of open source distros involved in the Reproducible Builds project, and many of them are running CI to identify potential sources of non-determinism, catch regressions, etc. For example, currently 83.4% of Debian unstable amd64 is reproducible on the CI system (which builds twice in two different build environments and compares the builds). The variations tested for unstable are more than those for bookworm/bullseye though, so the numbers aren't easily comparable. Also, for Debian at least, there is some work on reproducing existing binaries from the archive using the Debian snapshot service, but I'm not sure where that is at.

https://reproducible-builds.org/who/projects/ https://reproducible-builds.org/citests/ https://tests.reproducible-builds.org/debian/reproducible.ht... https://tests.reproducible-builds.org/debian/index_variation...

Personally I find the Bootstrappable Builds project way more important and interesting. They are aiming to go from less than 1000 bytes of audited machine code (not assembly) plus all the necessary source code all the way to a full Linux distro. They are impressively far along already. They use multiple techniques for bootstrapping higher levels, including writing new implementations of languages written in other languages, using old versions of languages that were written in other languages etc. Some details here:

https://bootstrappable.org/ https://github.com/fosslinux/live-bootstrap/blob/master/part... https://github.com/oriansj/talk-notes/blob/master/live-boots...

Edit: some talks about bootstrappable:

https://github.com/oriansj/talk-notes/blob/master/talks.org


With open source you can do this, but with closed source like Microsoft's, that is not automatically given.


In the context of “government agencies”, they can just order Microsoft to make it possible, if they care.

In the case of Windows for Warships[0], one of the arguments (if they absolutely had to use Windows and nothing else) might be something like “Dear Mr. Gates, if you don’t empower us to do our job, we can’t guarantee that North Korea won’t retarget our nukes at your face. Sincerely, the Royal Navy.”

[0] https://en.wikipedia.org/wiki/Submarine_Command_System


You could have some sort of reproducible-builds escrow organisation; e.g. you release your code to them, they do a build and publicly sign hashes of the build they produced.
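The user-side check could be as simple as the sketch below (a hypothetical manifest format of "<sha256>  <filename>" lines; verifying the escrow's signature on the manifest itself, e.g. with gpg, is assumed to have happened already):

    import hashlib
    import sys

    def sha256(path):
        """SHA-256 hex digest of a file, read in 1 MiB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def load_manifest(path):
        """Parse '<hexdigest>  <filename>' lines into a {filename: digest} dict."""
        entries = {}
        with open(path) as f:
            for line in f:
                if line.strip():
                    digest, name = line.split(None, 1)
                    entries[name.strip()] = digest.lower()
        return entries

    manifest = load_manifest("escrow-hashes.txt")  # hypothetical signed manifest
    binary = "vendor-app-1.2.3.bin"                # hypothetical vendor download

    expected = manifest.get(binary)
    if expected is None:
        sys.exit(f"{binary} is not listed by the escrow")
    print("match" if sha256(binary) == expected else "MISMATCH: do not trust this binary")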


There are engineering solutions to this, as another comment mentioned.

But, you don’t necessarily need an engineered solution to it, either. Trust can be established through non-engineered means.


Trust can be established otherwise, but with closed-source software it is extremely hard to notice that the trust has been violated.


If you're really that paranoid that you believe such conspiracy theories, then simply compile and run the source code they provide, and run that instead of the binary they also provide.

But you have to admit, it's a pretty implausible conspiracy theory, since it would be so straightforward for you to do that. Because they're literally providing you the source code WITHOUT the backdoors, which you believe they don't want you to compile and run.


How can I compile Microsoft's code? And there are plenty of systems so complex that if you compile them the wrong way, you get different behaviour or more bugs.

And it may also be human error, not a conspiracy theory. If you do not have the source code and the recipe to build it yourself, then you cannot guarantee that the inspected source code (which was given to you either on disk or as a data blob over the internet) matches the binary you are currently running on your Windows machine.

Hell, perhaps they shipped an update which broke a security measure, and the update was shipped after you inspected the source.


> For instance: Microsoft allowed government entities to check Windows security-related source code for many years. Just having access to the code is enough for audits, regardless of the license.

You don’t need a FLOSS license to audit the code. You just need the clout that comes with being a government able to launch meaningful antitrust action. Easy-peasy!


Another thing people forget is that reverse engineering is readily accessible to a determined actor. Even if Microsoft hadn't released source code to schools and governments, researchers could just reverse-engineer Windows with decompilation tools like Ghidra.


Yep, proprietary software can even be source available. This is not even uncommon with interpreted languages. It’s just easier to enforce compliance by withholding it.


"Imply" is a big word that requires logical proof. Even formal verification may fall short of implying security, unless it's from the transistor level on up.

The issue, rather, is that non-FOSS implies the existence of significant hindrances in the area of security. It also implies a dependency on a single vendor, and on their responsiveness to incidents.


Yeah, I'm not sold on the "FLOSS is more secure" idea either. Good on you for trying to write down some arguments against it. But unfortunately, comparing the security of closed-source software to the security of open-source software is too difficult. You'd basically have to take two competing programs, one open, one closed, and spend many human-hours trying to hack both, and then publish your results. I believe that in such a test, open-source would perhaps have MORE security bugs. That's just what my gut tells me.

Instead, people simply look at the number of security holes patched, and see that FLOSS projects report and fix many more holes. So surely FLOSS is more secure, right? Or, they rely on folk wisdom like "With enough eyes all bugs are shallow" which aren't any more proven in the real world than "FLOSS is more secure".


The most concrete counterexample I can think of is the Windows XP leak -- when the source code was leaked, Microsoft seemed really annoyed, because security flaws had been carried forward (in the name of backwards compatibility) all the way up to Windows 10.


Sure, they SAID that they were annoyed because of potential security issues. But you know what really annoys companies? Their secret sauce being easily viewable to competitors, and the extra workload their legal department will have tracking down people who host the leak and sending them DMCAs.


But they supplied source code to other companies: https://web.archive.org/web/20081216125724/http://www.micros... (this is suspected to be how the leak was started in the first place).


Would you give me your Social Security Number if I asked for it? Why not? Would you give it to your bank when opening a bank account? Why?

When you join a company, you have to sign an NDA that covers not only the company's trade secrets, but any trade secrets of that company's partners that you are exposed to. So Microsoft is somewhat free to share their source with select partners whom they trust to keep it secure and to only share it with employees who need it.

I don't think Joe's Software LLC would have been able to get the Windows source code, even with an NDA.


"You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code." -Ken Thompson

"Given enough eyeballs, all bugs are shallow." -Eric S Raymond pretending to quote Linus Torvalds by mis-attributing his own wishful fallacy as "Linus's Law"

Then there's Theo de Raadt's salty quote about ESR's ridiculous "many eyes" argument that Raymond deceptively calls "Linus's Law":

https://groups.google.com/g/fa.openbsd.tech/c/gypClO4qTgM/m/...

"Oh right, let's hear some of that "many eyes" crap again. My favorite part of the "many eyes" argument is how few bugs were found by the two eyes of Eric (the originator of the statement). All the many eyes are apparently attached to a lot of hands that type lots of words about many eyes, and never actually audit code." -Theo de Raadt on ESR's "Linus's Law"

Actually, that fallacious "many eyes" argument was "formulated" by Eric S Raymond (to whom Theo was referring as "the originator of the statement"), which ESR misleadingly named "Linus's Law" in "honor" of Linus Torvalds, who never even made that claim -- which is ironic, because being an invalid fallacy, it actually dishonors Linus.

https://en.wikipedia.org/wiki/Linus%27s_law

>Validity

>In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate. While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".

>The persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum. Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain. In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking". Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

>Empirical support of the validity of Linus's law was obtained by comparing popular and unpopular projects of the same organization. Popular projects are projects with the top 5% of GitHub stars (7,481 stars or more). Bug identification was measured using the corrective commit probability, the ratio of commits determined to be related to fixing bugs. The analysis showed that popular projects had a higher ratio of bug fixes (e.g., Google's popular projects had a 27% higher bug fix rate than Google's less popular projects). Since it is unlikely that Google lowered its code quality standards in more popular projects, this is an indication of increased bug detection efficiency in popular projects.
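As a side note on the metric itself: corrective commit probability is just the share of commits classified as bug fixes. A crude keyword heuristic over git log output (purely illustrative, nothing like the study's actual classifier) might look like:

    import re
    import subprocess

    FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|regression)\b", re.IGNORECASE)

    def corrective_commit_probability(repo="."):
        """Ratio of commits whose subject line looks like a bug fix."""
        subjects = subprocess.run(
            ["git", "-C", repo, "log", "--pretty=%s"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        if not subjects:
            return 0.0
        fixes = sum(1 for s in subjects if FIX_PATTERN.search(s))
        return fixes / len(subjects)

    print(f"CCP ~ {corrective_commit_probability():.2%}")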

The little experience Raymond DOES have auditing code has been a total fiasco and embarrassing failure, since his understanding of the code was incompetent and deeply tainted by his preconceived political ideology and conspiracy theories about climate change, which was his only motivation for auditing the code in the first place. His sole quest was to deceptively discredit the scientists who warned about climate change. The code he found and highlighted was actually COMMENTED OUT, and he never addressed the fact that the scientists were vindicated.

http://rationalwiki.org/wiki/Eric_S._Raymond

>During the Climategate fiasco, Raymond's ability to read other peoples' source code (or at least his honesty about it) was called into question when he was caught quote-mining analysis software written by the CRU researchers, presenting a commented-out section of source code used for analyzing counterfactuals as evidence of deliberate data manipulation. When confronted with the fact that scientists as a general rule are scrupulously honest, Raymond claimed it was a case of an "error cascade," a concept that makes sense in computer science and other places where all data goes through a single potential failure point, but in areas where outside data and multiple lines of evidence are used for verification, doesn't entirely make sense. (He was curiously silent when all the researchers involved were exonerated of scientific misconduct.)


I would say that the law about enough eyeballs and shallow bugs, call it what you will, works remarkably well. It doesn't imply that bugs get fixed, only that they are shallow. Notice the vagueness of this statement.

When some eyeballs find excuses not to fix the bugs, or find a more entertaining problem to bikeshed, other enterprising eyeballs are busy creating 0-day exploits for the bugs they find shallow.


I cited multiple pieces of FLOSS that employ and benefit from fuzzing, making an example of a Linux bug, found through fuzzing rather than source analysis, that allowed sandbox escapes.

Humans suck at finding subtle issues purely by reading source code. Running software is a more effective means to detect bugs. Once a bug has been detected, source code is really helpful to isolate and patch it.
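To make the point concrete, here's a toy mutation fuzzer in Python (nothing like syzkaller or libFuzzer, and the stdlib JSON parser is only a stand-in target), just to show the shape of the approach: mutate seed inputs, run the code, and treat any unexpected exception as a finding:

    import json
    import random

    SEED_INPUTS = [b'{"a": 1}', b'[1, 2, 3]', b'"hello"']  # hypothetical corpus

    def mutate(data: bytes) -> bytes:
        """Flip, insert, or delete a few random bytes."""
        buf = bytearray(data)
        for _ in range(random.randint(1, 4)):
            op = random.choice(("flip", "insert", "delete"))
            if op == "flip" and buf:
                i = random.randrange(len(buf))
                buf[i] ^= 1 << random.randrange(8)
            elif op == "insert":
                buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
            elif op == "delete" and buf:
                del buf[random.randrange(len(buf))]
        return bytes(buf)

    def target(data: bytes) -> None:
        """The code under test; expected rejections of bad input are not bugs."""
        try:
            json.loads(data)
        except (ValueError, UnicodeDecodeError):
            pass

    for i in range(100_000):
        sample = mutate(random.choice(SEED_INPUTS))
        try:
            target(sample)
        except Exception as exc:  # anything else escaping is a finding
            print(f"crash on iteration {i}: {exc!r}, input={sample!r}")
            break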


Does it work remarkably well? Are bugs significantly less common in open source software? I do not really see any indication of this in practice.


It works well. It doesn't imply the bugs will be fixed. Sometimes it implies an exploit will be made. Sometimes the list of open bugs grows embarrassingly big, and that's it. Shallow.

But, nevertheless, transparent it is.


>It doesn't imply that bugs get fixed, only that they are shallow.

No, actually, it intentionally does imply that the bugs will get fixed: that's the whole point. Where did you get that idea -- have you even bothered to cast your own eyes on the original source of the quote, or any of the discussion around Heartbleed and other bugs, or is that just your "feeling"? If you go back and read the original source of the quote in context, you will see that it explicitly states that the key point is that bugs will get rapidly FIXED, in no uncertain terms:

http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral...

>[...] Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

>8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. [...]

>But the key point is that both parts of the process (finding and fixing) tend to happen rapidly. [...]

>In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be an issue in the Linux world. One effect of a ``release early and often'' policy is to minimize such duplication by propagating fed-back fixes quickly [JH]. [...]

Notwithstanding the fact that ESR himself incompetently and dishonestly claimed to have found bugs in climate analysis software and didn't bother to fix them, because his actual point was to falsely discredit the scientists (whose exoneration he still has not acknowledged), not to altruistically advance science, but to mendaciously deny science. Is that what you mean by "bikeshedding"?

Did the finding and fixing of the Heartbleed bug happen rapidly because it was open source? No, definitely not! Open source is not magical pixie dust.

https://www.datamation.com/open-source/does-heartbleed-dispr...

>Does Heartbleed Disprove ‘Open Source is Safer’?

>[...] Or, as Eric Raymond famously said, “given enough eyeballs, all bugs are shallow.” Yet, somehow, Heartbleed appears to have existed for over two years before being discovered. It may even have been used by American security agencies in their surveillance of the public. [...]

>Implicit in the description is not only the idea that peer review can substitute for software testing, but also that no special effort is needed to detect bugs. Simply by going about their business as developers, FOSS project members are likely to notice bugs so that they can be repaired.

>This claim has not gone unchallenged. It is a statement of belief, not the conclusion of a scientific study, a rationalization of the fact that peer review in FOSS has always been easier than software testing. Moreover, in Facts and Fallacies about Software Engineering, Robert L. Glass claims that no correlation exists between the number of bugs reported and the number of reviewers.

>Yet despite the claim’s weaknesses, it remains one of FOSS’s major assertions of superiority. Heartbleed seems an exception that at least challenges the widely believed rule, or maybe even overturns it completely. [...]

>A more useful analysis has been offered by Theo de Raadt, the founder of OpenBSD and OpenSSH. De Raadt notes that malloc, a memory allocation library, was long ago patched to prevent Heartbleed-type exploitations. However, at the same time, OpenSSL added “a wrapper around malloc & free so that the library will cache memory on its own, and not free it to the protective malloc” — all in the name of improving performance on some systems.

>In other words, the potential for a bug was detected and patched, but was by-passed by an engineering decision that favored efficiency over security. Perhaps, too, the wrapper was never examined closely because it was assumed to be trivial and to add nothing new. It had become an established part of the code that nobody was likely to modify. But, whatever the case, de Raadt concludes scathingly, “OpenSSL is not developed by a responsible team.”

>Assuming that de Raadt is right, then one take-away for FOSS is that all the eyes in the world cannot be counted on to catch basic design problems.

>Taken together, Segglemann’s and de Raadt’s comments also suggest that assuming no special effort is needed to discover bugs is a mistake. Perhaps more attention needs to be paid to formal reviews and software testing than FOSS traditionally has managed. The fact that FOSS development often involves remote cooperation does not mean that log-in test or in-person testing sessions could not be added to many project’s development cycle.

>What Heartbleed proves is that FOSS needs to examine the unexamined assumption it has held for years. Greg DeKoenigsberg, a vice president at Eucalyptus Systems, summed up the situation neatly on Facebook: “we don’t put enough eyes in the right places, because we assume [bug-detection] will just happen because of open source pixie dust — and now we’re paying the price for it.” [...]

https://www.esecurityplanet.com/applications/why-all-linux-s...

>Why All Linux (Security) Bugs Aren’t Shallow

>Zemlin quoted the oft-repeated Linus’ law, which states that given enough eyes all bugs are shallow. That “law” essentially promises that many eyes provide a measure of quality and control and security to open source code. So if Linus’ law is true, Zemlin asked, why are damaging security issues being found now in open source code?

>“In these cases the eyeballs weren’t really looking,” Zemlin said.

https://www.zdnet.com/article/did-open-source-matter-for-hea...

>Did open source matter for Heartbleed?

>Open source does not provide a meaningful inherent security benefit for OpenSSL and it may actually discourage some important testing techniques. Also, panhandling is not a good business model for important software like OpenSSL.

>Let's stipulate that OpenSSL has a good reputation, perhaps even that it deserves that reputation (although this is not the first highly-critical vulnerability in OpenSSL). I would argue that the reputation is based largely on wishful thinking and open source mythology.

>Before the word "mythology" gets me into too much trouble, I ought to say, as Nixon might have put it, "we're all open source activists now." For some purposes, open source is a good thing, or a necessary thing, or both. I agree, at least in part, with those who say that cryptography code needs to be open source, because it requires a high level of trust.

>Ultimately, the logic of that last statement presumes that there are people analyzing the open source code of OpenSSL in order to confirm that it is deserving of trust. This is the "many eyeballs" effect described in The Cathedral and the Bazaar, by Eric Raymond, one of the early gospels in the theology of open source. The idea is that if enough people have access to source code then someone will notice the bugs.


It's one of those things where what is said works in the way it has been worded, not in the way its author thought it worked.

I mean, at face value, the bugs have a better chance of being fixed because more people can see them. And the open source license, technically, empowers every user.

Yes, even today, I can fix whatever I please and disregard whatever everyone says about "being a part of the community" and "decision process". And I can distribute the fixes on my own.

In this sense, the "law" is working; it's just as obvious as the fact that it's way easier to run a company in a country with fewer regulations and simpler tax law than in one with more regulations and more complex tax law. However, there are also other confounding factors which may negate this maxim.

For example, the bystander effect is pretty big. Sure there's someone else fixing this. Or there's the fatigue of pull requests being enqueued for years. Or there is, on one hand, a stigma that you have to have a galaxy brain to work on X, while at the same time people making X may make mistakes but no one feels it's right to question them, because they are "galaxy brains" by definition.

There's also the factor of eyeballs being tired, not really interested, not looking carefully. And yet, once known, things do get fixed rather rapidly. In open source, I haven't yet seen an equivalent of the AWS status page.

So, yeah, it doesn't guarantee you things. But when we're talking about all else being equivalent except for the license, an open-source X will be better, on average, than a closed-source X. Most areas of work benefit from transparency.


All that, and you never mentioned Log4J


FLOSS doesn't imply security, it allows verifying it. Closed-source software doesn't allow verifying it, so it's safest to assume it's insecure.


> Closed-source software doesn't allow verifying it, so it's safest to assume it's insecure.

The bulk of the article was a response to this claim: it covered how closed-source software can have its security analyzed through black-box techniques. I recommend giving it a read before dismissing it.


Lack of FOSS implies lack of security. By default software is insecure and has malfeatures which need turning off.


> By default software [...] has malfeatures which need turning off.

The problem rather is that such software is not simply considered malware by the public.


Not at all, any more than FOSS implies security.


[flagged]


> I’d consider the proprietary Google Chrome or Microsoft Edge more secure than Pale Moon or most webkit2gtk-based browsers

Now consider that Chrome had, for a long time, a third-party -> first-party bypass that was widely exploited (by a large ad network) and known (by everyone, which includes Google). This destroys the security model, but I'm guessing the ad network loved it. This is what you get for trusting a project created and maintained by an ad company.


I'm expecting your exploit to be something like "automatically signs you into Google search". I don't like that sort of thing (in fact I use Firefox, mostly for moral reasons) but I disagree strongly with your general argument.

The average person already has their personal information known by data brokers. All using an insecure niche product does is add more attackers with access to that information. In addition, technically-legal attackers constrain themselves a bit so as not to violate the law too much, whereas Russian gangsters have less incentive to restrain themselves.


The spirit of your rebuttal is couched in the belief that a known oppressor is better than a party that could be an oppressor, because you know what to expect. I categorically disagree, but that is okay.


No, it's that when you have a known oppressor, adding a new one gets you nothing. You're not replacing American data brokers with criminals if you simply use a weird browser.


This user copy-pasted my comment from lobste.rs without context and added their own unrelated link to the end.

Original comment: https://lobste.rs/s/8ajhgl/right_thing_for_wrong_reasons_flo...



