Ok, this one’s baffling, especially learning at the end that FLOSS is bad because it’s meant to de-emphasize the “libre” part.
All OSS software is inherently without cost, that seems unquestioned here. So free only ever means one thing to non-laypeople, in this context. So isn’t FOSS already the neutral middle ground between OSS and FS??
Regardless, I’m struggling to conceive of how a piece of software could be OS but not F. I guess if it’s, like, surveillance software known to be used by governments…? Maybe OSS that is paradoxically restrictively licensed, threatening any forks or unauthorized compilations with legal action? That seems like a terribly naive proposition, but I’m sure it’s been floated by at least one MBA…
In other words: you can argue all day about the justifications for OS’ing your S being more related to removing cost barriers or to sharing control, but in the end, you clearly have to do both. Making “F[L]OSS” redundant at best, confusing at worst!
Surely I'm missing something, because I know this has been litigated for many thousands of hours both pre- and post-Eternal September. But right now it just comes across as baseless pedantry.
Initially there was no Open Source or OSS. There was Free Software. This started a long time ago with things like GNU and later Linux, BSD etc. Richard Stallman codified what it meant for software to be Free. Specifically, it had nothing to do with cost. The "free" was free as in freedom, not free as in beer. Unfortunately English uses one word for both senses. Romance languages like Spanish still retain both: gratis means free as in beer and libre means free as in freedom.
Open Source was a later "rebranding" of free software by some people who fell out with Stallman and wanted to put more emphasis on the practical advantages than on the ethical ones. Stallman wasn't happy because he felt (and still feels) that the most important thing is that each person should be able to do computing freely.
Anyway, long story short, free software has nothing to do with cost. Horrible acronyms like "FLOSS" are attempts to make everyone happy.
> All OSS software is inherently without cost, that seems unquestioned here.
I don't think so. You could charge money for FOSS (like charging for a built binary while the source stays FOSS) and it would still be as much FOSS as any other FOSS out there. It isn't very common to do so, but there isn't anything inherently wrong or incorrect about charging for FOSS.
How could you charge for a binary if people can just compile it on their own...? Honor system? I guess you could make it inconvenient to compile, but then is that really OSS?
It's interesting to reflect on what you're saying: you'd pay someone who forces you to, but not otherwise. So if someone built a house for you you'd only pay them if they threatened to come and burn it down, or they kept some way to remotely lock you out of it?
Hmmm. Well it’s software/IP, so as always I think we need to stay away from “would you download a car” talk.
With that in mind, the proposition is basically just the honor system. Which maybe works a little sometimes, among professionals? I paid for SublimeText to support them, for example. But WinRar is a very compelling counter example.
It feels like publishing a pdf of a book but with a big red “don’t click this until you’ve Venmo’d me $5!” above it. Regardless of what I individually would do, that’s just kinda… goofy?
> How could you charge for a binary if people can just compile it on their own...?
You don't make the binary publicly available; you put it behind a paywall.
Some examples:
- Ardour - Lets you pull down the source and compile locally for free, or you pay to use their compiled binaries. Author/creator of Ardour hangs around on HN, maybe they could share their experience if they see this.
- Radium - Another DAW like Ardour, does it the same way.
- Fritzing - Designer for PCBs, same approach, pay for the binaries if you'd like, but free to compile from source if you can
I'm sure there are many more examples out there, but these are the ones I thought about when I wrote my previous comment.
It's interesting what the "Decreasing Technologies" graph implies by omission. Presumably, technologies not included in that graph have held steady over the same time period. Without a corresponding data view for "Increasing Technologies", there isn't really any data on current trends.
That's a fairly inappropriate comparison in the level of expertise needed to design a system: LVL beams are specified by structural engineers, not by framers. Framers generally just install them where they're needed, following a structural design they have no say in, because load calculations aren't a framer's job.
HVAC installers are not merely system assembly specialists; in nearly all cases they're system design specialists as well. Or at the very least they outsource HVAC system design to experts who are familiar with required air flow, static pressure, air changes, condensation formation and evacuation, and yes, whether the system is appropriate for cold climates. Cold-climate heat pumps are notably different from temperate- or hot-climate ones: the coils are larger to capture more heat from the air when outside temperatures are very low, they use different refrigerant systems, and they carry far more insulation to keep the cold from affecting operation. Some even incorporate resistive heating elements, or a gas furnace for cases where efficiency would drop below an acceptable threshold.
One of the biggest and most important parts of an HVAC system design is sizing for climate and dwelling. It should be extremely suspicious to any installer that their designed system efficiency is so much higher than the real-world system's performance.
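As a rough illustration of the kind of sanity check sizing involves, here's a back-of-envelope sketch with made-up numbers (a single UA heat-loss coefficient and a hypothetical capacity curve, nothing from any real install):

```typescript
// Back-of-envelope sizing check: does the heat pump still cover the
// house's heat loss at the local design temperature?
// All numbers below are hypothetical, for illustration only.

const indoorTempF = 70;          // target indoor temperature
const designOutdoorTempF = 5;    // local winter design temperature
const houseUA = 400;             // overall heat-loss coefficient, BTU/h per degree F (made up)

// Heat loss grows roughly linearly with the indoor/outdoor temperature difference.
const designHeatLoadBtuH = houseUA * (indoorTempF - designOutdoorTempF); // 26,000 BTU/h

// A standard heat pump's output falls as it gets colder outside.
// Hypothetical capacity curve: nominal 36,000 BTU/h at 47F, only ~21,000 at 5F.
const capacityAt5F = 21_000;

if (capacityAt5F < designHeatLoadBtuH) {
  console.log("Undersized at design temp: needs a cold-climate unit or auxiliary heat.");
} else {
  console.log("Covers the design load without backup heat.");
}
```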
I'd be shocked if professional HVAC installers couldn't spit out several reasons why the system might be performing so poorly just by reading this blog post. Notably the absurd assertion that the installers were professionals despite wildly overpromising and underdelivering. Some contractors acting professionally doesn't make them professionals.
As others point out in this thread, implementing a system that meets the stated design goals roughly on target is what a professional does. I've seen some absurd lambasting of PV solar installs as an example of empty promises. Again, those are tell-tale signs of deficiencies, not an indictment of the underlying technology.
Which is the most irresponsible part of this senator's post. Given their post and prominence, the least they could do before publishing something like this is check the basic assumption behind it: that the installer did a fine job.
As someone who recently had a heat pump installed, I think your expectations for minimum-wage workers are rather unrealistic. We asked for one, they estimated the cost and installed it, and we paid the bill.
It was their second installation, ever. Red state with lots of gas installations and all that.
The onus of understanding that it wouldn't work below a certain temperature, and would lose efficiency as the temperature dropped, was on me.
This letter feels as though it is overlooking a large point of contention.
> The scientists who wrote this horrible code most probably had no training in software engineering, and no funding to hire software engineers
Shouldn't the argument be that, for research that relies on models implemented in code, funding should be allocated to experts who can assist in creating those models (software engineers)?
The conclusions from the first critical code review cited:
> All papers based on this code should be retracted immediately. Imperial's modelling efforts should be reset with a new team that isn't under Professor Ferguson, and which has a commitment to replicable results with published code from day one.
> On a personal level, I'd go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don't have these people, and the results speak for themselves.
That's the most biased opinion imaginable, from an astroturf group. There's no information on who they are, but the article currently on their front page is by Toby Young, who is a Spectator/Quillette guy and can be found backing almost every stupid idea in British politics. https://www.spectator.co.uk/article/This-lockdown-may-kill-m...
Yeah, really, why don't we hand all modelling duties over to companies that optimize for more money instead of leaving it with the only actor trying to optimize towards actual public health?
If you're willing to dismiss all companies as optimizing for more money, it seems only fair to say that academics optimize for prestige and publication in good journals.
Whom would you like our society to rely on to generate quality work?
I don't think there is a satisfactory answer to this question. Public research becomes more and more of an industry every year with the publish-or-perish game, while a solely private solution is obviously open to very biased conclusions.
There is no smart solution to a stupid problem. But the truth is that _as an institution_ the NHS is the only actor whose mission is to optimize towards public health.
In most fields, our society relies on private industry to generate quality work, even when the work is very important and doing it wrong might kill people. I'm not an anarchist, I do recognize there are reasons that the government should provide some things. But the idea that private industry uniformly produces bad results because they don't care about anything but profit just seems silly to me. Producing good results is profitable!
> it seems only fair to say that academics optimize for prestige and publication in good journals.
On one hand you have lots of people arguing that the legal duty of a company is only to make money for its shareholders. When large companies fail at that goal, it's bailout time.
On the other hand you have peer-reviewed journals where authors are incentivized to find accurate results, and researchers will cite articles on the basis of their veracity (or, if incorrect, as punching bags). Of course that's a fallible process and just as vulnerable to cronyism, but when researchers are caught cooking the books they're discredited, not rewarded.
Authors aren't actually incentivized to find accurate results, but rather publishable results, which typically means novel. Researchers also cite articles based on their impact, not their veracity. There are plenty of instances of retracted results continuing to be cited as if they are still accurate.
There are issues with both industrial and academic research, but I do think that industrial research is more transparent in its motivations.
Everyone working for a living optimizes their work towards making more money. That doesn't change for people funded with public money. It's pretty widely accepted that the people in charge of public funding (politicians) sometimes act outside of the public's interest for self-gain.
I make no claim as to which achieves better results for the public because it's such a complicated problem, but I think it's rather naive to just assume publicly funded incentives are more aligned with social health than private incentives.
That last part is mindblowing given the fact that the health insurance industry in the US is trying to argue it shouldn’t have to pay for COVID-19 treatments because it’s part of a pandemic and not part of normal medical treatments.
That was a shitty code review. Seeding issues like the ones cited don't affect the results of a Monte Carlo simulation, and there are tests in the repo, just not automated ones.
The section you quoted shows the reason for the review's sloppiness. The reviewer set out to find a way to justify their own beliefs instead of to actually read the code.
Damn straight. The only sense in which it was not a shitty code review, is that it didn't actually review any code. Looking into the linked tickets is a time consuming faff, but people should actually do it before taking the blog post at face value.
I agree. Moreover, the cited bug ("predictions varied by around 80,000 deaths after 80 days") doesn't really seem to affect the overall policy implications: no lockdown means an exceptional number (400 to 480k) of deaths.
Unless the entire simulation is bogus, it comes off as nitpicking.
Doing a Monte Carlo simulation means you adjust the seeds to get different runs. It doesn't mean your program can read uninitialised memory or reuse variables that weren't reset to zero and still be correct.
Where are people getting this idea that you can just average away the results of out-of-bounds reads and race conditions?
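To make the distinction concrete, here's a toy sketch (a hand-rolled PRNG and a pi estimator, nothing to do with the Imperial code itself):

```typescript
// Toy Monte Carlo estimate of pi. Different seeds give different runs,
// and the runs scatter around the true value: that variation is intended.
// (Tiny linear congruential generator, for illustration only.)
function makeRng(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (1664525 * state + 1013904223) >>> 0;
    return state / 2 ** 32; // uniform in [0, 1)
  };
}

function estimatePi(seed: number, samples = 1_000_000): number {
  const rand = makeRng(seed);
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    const x = rand();
    const y = rand();
    if (x * x + y * y < 1) inside++;
  }
  return (4 * inside) / samples;
}

// Varying the seed is the legitimate source of run-to-run differences.
for (const seed of [1, 2, 3]) {
  console.log(`seed ${seed}: pi ~ ${estimatePi(seed)}`);
}

// Reading uninitialised memory or carrying stale state over from a previous
// run would be a bug, not another "seed": it isn't controlled, reproducible,
// or guaranteed to average out.
```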
What does reading uninitialized memory or reusing variables that weren't set to zero have to do with seeding issues? Read my comment again and reply to its content instead of making up a comment that you would like to reply to.
Back in 2008-2009 my then PI tried this: it worked for a while, but then it proved to be untenable. Not because there wasn't enough money (there was) but because the university did not like the idea of bringing someone on board just to help researchers develop software, and eventually there were so many roadblocks that it was impossible to go on.
I don't honestly know how much I can disclose (FTR, I was not the one doing that job - I was working as a postdoc there at the time), but I'll try to summarize it briefly:
- PI got a couple of EU funded grants
- PI wanted to build a program / programs out of some ideas he had before moving to the institution he was currently employed at
- PI hired a software developer (actually a software engineer) to do this job
- Statistician drafted algorithms, developer created the software
- I used the software for my Ph.D. thesis, got hired at the lab of the PI as a postdoc, new requirements arose due to the way I used it
- Developer made a preliminary new version of the software following discussions with me
- University made it very hard to keep developer on the team, due to bureaucracy and kind of hostility towards this kind of employment
- At some point (I can't recall the details exactly, but it was something that spanned almost one year), that form of keeping the developer on board was no longer possible
- PI offers an alternative contract, but it is financially wasteful to the developer (not the fault of the PI, but the way certain things work in my country)
- Developer leaves the project
Also the university, to my knowledge, complained that the developer cost a lot (IIRC, the project was paid at market rate, so in line with other, non academic software projects).
I can't comment on the quality of the software we used (sadly it was never open sourced) as it was in Java and I only have a passing understanding of the language, but the approach of having a dedicated developer IMO worked (and also netted quite a number of publications over that period).
For this to happen there would need to be a nationally accepted PE certification for 'software engineers' on par with other engineering disciplines. It's unreasonable to expect non-experts, and experts from other fields, to perform credentialing on a case by case basis.
I included the "nationally accepted" bit since I'd prefer to see some incremental improvement over current models, but I don't have confidence that an international standard would be accepted in all relevant contexts. It could still be administered at the state level.
I'm also ignoring the requirement that candidates must start with a degree from an ABET-accredited institution, which seems to feature prominently in proposals from IEEE and others. Ideally I think there should be some way around that, but alternatives I'm familiar with aren't great either (e.g. FINRA).
Being an expert means you know your tools well. If you can't code well and coding is a critical part of your toolset, then you're not an expert yet. You don't get a free pass because you allegedly are strong in other parts of your craft, especially if the part of your craft you're weak in can be this problematic.
It seems like for production it just compiles the files that need it and puts everything into a /dist folder, but it does not actually bundle them. The author advises relying on HTTP/2 to serve the files.
Exactly. Zwitterion relies on ES modules in production and development, so bundling is not strictly necessary. See the README for information on performance implications.
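For anyone unfamiliar with the approach, a minimal sketch of what unbundled ES modules look like in practice (the file names are hypothetical, not Zwitterion's actual layout):

```typescript
// src/math.ts -- compiled to dist/math.js, but kept as its own module
export function add(a: number, b: number): number {
  return a + b;
}
```

```typescript
// src/main.ts -- compiled to dist/main.js; the import survives as-is
import { add } from './math.js';

// The browser resolves this import itself at runtime; no bundler stitches
// the files together. index.html loads the entry point natively:
//   <script type="module" src="/dist/main.js"></script>
// Each import becomes its own request, which is why HTTP/2 multiplexing
// matters for performance here.
console.log(add(2, 3));
```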
What confuses me is their reluctance to attempt to use other third party distributors (itch.io, GOG, etc.) when Steam clearly isn't working in their favor.
Although the author did mention this in the article. I still wonder how much some of these indie games would benefit from partnering with a marketing consultant. Listing a game on steam is basically free marketing for the game. The author mentions only 1% of his sales are from direct sales. I wonder how much having an experienced marketing pro could improve that percentage.
https://www.gnu.org/philosophy/floss-and-foss.en.html