Stop Building on Corporate-Controlled Languages (deckc.hair)
199 points by nandalism on Jan 18, 2023 | 313 comments



My response to the plea in this article is simply "No thanks." If Go gets that bad, I'd be happy to use an ungoogled fork of it, or migrate to another toolchain or language, or whatever needs to happen. But until then, I'm not going to preemptively switch ecosystems and banish technically good options from my tool belt because I have fears about what could happen.

I want a production-quality toolchain and runtime. That's hard. Go has a high quality library of cryptographic functions. Hard. Go has a fast usermode thread scheduler with preemption on many platforms. Hard. Go has an incredibly low latency GC that doesn't need much tuning for most workloads. Very hard. Some of those hard problems are self-inflicted by the choices made in the language, but the fact that they are solved so well is what makes it valuable. A mediocre implementation of Go would have less value. A mediocre implementation of a random language with a tiny ecosystem by comparison would be an even harder sell.
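
To make the scheduler point concrete, here's a toy sketch (stdlib only): spawning a hundred thousand concurrent goroutines is completely routine, where a hundred thousand OS threads would not be.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        results := make(chan int, 100000)
        for i := 0; i < 100000; i++ {
            wg.Add(1)
            go func(n int) { // cheap usermode threads, not OS threads
                defer wg.Done()
                results <- n
            }(i)
        }
        wg.Wait()
        close(results)
        sum := 0
        for n := range results {
            sum += n
        }
        fmt.Println(sum) // 4999950000
    }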

I am by no means trying to say that Nim is low quality or not interesting for anyone. In fact, I actually think Nim is cool. It is not the only other smaller programming language I like. I have a strong affection for Zig as well. There are unique properties that make these languages desirable. Zig comptime is really cool for example. I like stuff like this.

But: I also think Go is a great piece of production-quality software. I know many people hate the language now, especially now that the honeymoon has thoroughly ended. But I still like it. I feel highly productive in it, the ecosystem is good, and at the end of the day, I know I can make reliable and fast software in it.

Corporate control is a shame, but the truth is that corporate control is not the problem at all. The problem is funding. Because anyone can fork Go, but can you pay maintainers? Can you run the CI, the website and playground, host the CDN with the downloads? Etc. Sometimes the answer is yes, especially with how much GitHub subsidizes a lot of those things, but in general the answer is no.

"Corporate control" is not the problem itself. Governance is just an outwardly visible consequence. The true control comes from maintainership and stewardship. Because if nobody is stepping up to the plate to take that role, then whoever is doing it today effectively has control over the project.


> I'm not going to preemptively switch ecosystems and banish technically good options from my tool belt because I have fears about what could happen.

This is exactly where I am at. I use Windows/C#/.NET without any shame for absolutely everything.

The more developers who give me grief on some principled basis that "<megacorp> bad, so everything related is bad", the more I double down on my position. These non-technical arguments read as desperation and incompetence from my perspective in 2023.


Long time .NET developer here - I think criticism of Microsoft as being worse than other companies, particularly at that time, is valid. For the longest time, choosing .NET also required it to run on Windows Server which, besides the cost implications, isn't great at scale. They also used it to try to push SQL Server. .NET Core solved some of this but some skepticism is still justified.


In the late 90's I used to sit next to a team of about 20 VB developers. When MS decided to make VB obsolete, those folks had to re-learn software development and were getting intern like opportunities because suddenly all that VB knowledge and experience was worth exactly zero. Hopefully y'all fare better when they decide to nuke C# and .NET.


If they had to relearn their craft because they changed tools, they weren't very skilled to start with. You've drawn the wrong conclusion here.

And yes, I hopped off the .NET train after 15 years of using C# and took a job writing Python and some JavaScript. I had done JavaScript before, using jQuery. Now I had to learn React. And they used MongoDB as the primary data platform. I had spent the last decade working with SQL.

And it was fine, because my skill isn't tied to a single tool.


This is really naive about how much of effective programming is about knowing the frameworks, libraries, corner cases, idiom, etc of each language.

Sure, you can write C in any language, as the saying goes. But to be a professional, and to be paid as a professional, requires a lot more.


> all that VB knowledge and experience was worth exactly zero

Except that didn't happen at all. Visual Basic still exists.

From Microsoft (2020):

> One of the major benefits of using Visual Basic is that the language has been stable for a very long time.

(Source: Microsoft, on Visual Basic support in .NET 5: https://devblogs.microsoft.com/vbteam/visual-basic-support-p...)

As of January 2023, the latest version of Visual Studio (2022) still offers to scaffold me a new Visual Basic project!

And VB exists in other forms, too. The other day, I was coding VBA macros for an Excel spreadsheet for my business. Worked great and saved me a bunch of time.


VB6 does not equal VB.Net.

I personally know developers who switched careers after Microsoft transitioned to .Net. I worked with them around 2001.

They didn't understand OOP concepts and were not interested in having to relearn everything. It felt totally unfamiliar and strange to them. Microsoft tried to make the syntax somewhat familiar, but at the end of the day .Net is an object-oriented programming language and VB6, try as it might, was not. And that was too steep of a hill to climb for them.


It did not in version 1.0, but most of the differences were fixed in later versions.

As if COM as used in VB6 isn't OOP.


Seriously? Good riddance to the "developers" that couldn't adapt. The job market is littered with garbage developers, and it's exactly the type of developer that can't adapt to a new language or style of programming that isn't really meant to be a developer. Hopefully they found their niche in the economy. - Ex-VB6 programmer turned VBA programmer turned VB.NET programmer turned C# programmer turned Python programmer turned JavaScript programmer (and all the non-pro projects and gigs in between: C++, Go, Elixir, Java, Kotlin, etc.).


VB.NET has only superficial similarities to VB 6 and before. Microsoft discontinued the original VB in favor of VB.NET, but the engine lives on in VBA and Microsoft has used VB6 to build various internal things (I think some of the UI to Windows Defender is VB6).

But for most "Microsoft shops", VB6 was a dead end, and that left a lot of VB developers out in the cold. They had to reskill with the .NET stuff (significant, especially in the early 2000s), or starve.


> They had to reskill...

This always baffles me. It doesn't surprise me, but it baffles me. Why are so many professional developers having to "reskill" to adapt to a new language? We're not necessarily talking about new domains (that would be different, there's a lot I don't know about server administration and automation, for instance, given my background primarily with embedded and desktop systems). But a language? And a procedural/imperative language? VB, C, C++, Pascal, Delphi, C#, Java, JavaScript, Ada, Python, Rust, Go, Fortran, etc. are all, fundamentally, procedural/imperative languages with varying degrees of OO, functional, and metaprogramming capabilities distinguishing between them. Some are bigger jumps (I'd point to Rust as the biggest in that group) than others, but what "reskilling" does a programmer or developer need when switching between languages in roughly the same language family? It's not like they had to jump to Haskell or Prolog or something that was majorly different.


VB6 is an entirely different beast. A professional VB6 developer who didn't have formal training was very well insulated from what their code actually did - that's not to say that they weren't skilled, or that there weren't extremely talented VB6 developers. It's just that many developers who fell into it from scripting, or who invested all of their time and learning into the pre-.NET ecosystem, had to choose to relearn a significant amount of technical content. Even though the languages were syntactically similar, jumping from VB6 to VB.Net meant learning an entirely new framework and object model, and having to deal with some issues that were hidden from them before.

In many cases it was as different as using BASIC vs Pascal or even C to write CLI tools (a hurdle I jumped in high school in the mid-90s, and despite doing a lot of stuff with PEEK and POKE in BASIC, understanding pointers took me longer than I care to admit :P)


Did you use VB6? It was less a language and more of a wysiwyg windows desktop GUI application builder. If that's what you learned in school and it was your first professional job, you would have a very difficult road ahead of you learning something like C. Maybe 3 out of the 20 were successful moving on. Some VB apps had almost no code at all that wasn't auto generated by the platform and bound to UI elements to handle button clicks and so on.


Just like using VB.NET with Windows Forms; and version 2 brought back all the Me stuff and some of the semantics that were initially left out.

I not only used VB 6, I used version 3 when VBX was the only option for 3rd party components.


I did, yes. But not a lot.

> Some VB apps had almost no code at all that wasn't auto generated by the platform and bound to UI elements to handle button clicks and so on.

Then they weren't programmers, so I could see them having to "reskill" (actually become programmers). But if they transferred to another wysiwyg toolkit then there is still no reskilling, just relearning (as I mentioned in my cousin comment to this one). Knowledge isn't skill, it's just knowledge. If your knowledge ends at "I know how to use this particular toolkit" and your skill is "I can do it quickly, make it functional, and make it pretty" then the skill should be transferable while the specific knowledge may not be and you may have to learn new knowledge. But the skill remains.


I sat in on a meetup of professional Microsoft admins once. It was... different. They seemed mystified by the concept of variables in PowerShell. Like, the idea that a variable can hold different things at different times, and the behavior of a script can change depending on what's in the variables... that astonished them.

A lot of professionals who work on Microsoft platforms are not like us. They're not even like Mark Russinovich or Raymond Chen, real smart people associated with Microsoft systems. They're filing clerks or middle managers or sales people for traditional companies. White collar working stiffs who aren't programmers by trade, they learned a bit of programming to automate their real job or add value because computing and the internet were becoming critical to their business, and they sort of fell into a developer role from something else. In Microsoft developer persona terms, they're Morts. And there are a lot of them, hidden in companies and organizations around the world. This was probably even more true in the 90s than today. And it was especially true of VB, which was explicitly designed for the Morts of the world.

And a Mort would be absolutely flummoxed by the changes between VB and VB.NET. VB is VB, drag some controls around, double click and add code. Its syntax and semantics are borrowed from QBasic. VB.NET is basically C# with a more Pascal-like syntax. You now have to worry about, like, classes and encapsulation and stuff. Any significant VB6 code will require considerable changes to run in VB.NET. If VB is what you're used to, VB.NET seems like intolerable change for change's sake. People toward the Elvis and Einstein side of the developer spectrum don't mind the differences; they're only as bad as, say, those between Java and JavaScript, but to someone for whom programming is a job, not a calling, those differences can be death by a thousand papercuts.

Microsoft actually caught a lot of heat for this back in the day, and people started calling VB.NET "Visual Fred" because it was so different from classic VB -- and difficult for classic VB programmers to get accustomed to.


> They seemed mystified by the concept of variables in PowerShell.

Then they hadn't been writing batch files before PowerShell existed, either. They probably weren't programmers. Windows admin at least used to be very UI-/wizard-driven.


That just shows their Mort-ness in their own domain though. In the Unix world, a sysadmin may not be a software engineer, but the ability to cruft up a program in bash, perl, or python to automate some task would be table stakes for the job. On Windows, not so much.

These admins learned just enough to be competent at the job that was in front of them. Many (not all) VB and VBA programmers are the same way.


> This always baffles me. It doesn't surprise me, but it baffles me. Why are so many professional developers having to "reskill" to adapt to a new language?

stretch your thinking a little, it's really not baffling.

you have people who teach themselves to code as teens or younger, then major in computer science, then take jobs as professional developers; by that point they've been exposed to a variety of programming paradigms, and the nature of computer science is natively appealing to them.

In some large companies they might find themselves working alongside people who were English majors or baristas and recently took a coding bootcamp to earn a better living writing some VB. Yes, it's going to be harder for the latter group to drop what they know and pick up languages with more complex CS constructs and semantics. It's entirely reasonable for a not-so-experienced VB programmer to consider themselves a professional developer, even if it's a profession they just took up, or even if they've been at it awhile but all their knowledge is within the VB ecosystem.


I 100% second the baffling nature of this. Maybe not baffling, but telling. If you can't relate your programming experience to new languages and paradigms, if you aren't curious and driven enough to learn the differences and likenesses between them, then you aren't a real developer. No shame in that, not everyone is cut out for it.


Different languages have different libraries with different functions that take different arguments and have different data structures. The tools you used with the old language might be different or required modification to work with the new language.

Maybe libraries they had written to solve problems in their domain have to be recreated.

It seems like switching from Delphi to Java or Go and changing the entire ecosystem would take a while to get back to the same productivity one had with the toolset one had been using for a decade?


> Different languages have different libraries with different functions that take different arguments and have different data structures. The tools you used with the old language might be different or required modification to work with the new language.

Those are all knowledge differences, not skill differences. Programming is a combination of knowledge and skill, the skill part transfers between languages and environments very well. The knowledge, not always. But learning a new (to you) suite of libraries is not "reskilling". You're not developing a new skill by learning Java's API versus C#'s API.


> VB.NET has only superficial similarities to VB 6 and before.

There are differences, but the similarities are more than superficial. Some of my first apps were in 6, and then I transitioned to VB.NET in the early 2000s. At the very least, it felt pretty familiar and intuitive.

Even VBA is pretty similar to the VB.NET apps I developed in terms of OOP, syntax, autocompletion in the editor, event handlers, and so on.

Again, there are definitely differences, but there was no "[relearning] software development and getting intern like opportunities" like the comment I was replying to said, and my knowledge and experience was definitely not "worth exactly zero." It was just an evolution.


> I think some of the UI to Windows Defender is VB6

... What?! Amazing they still have running tooling to build it, and I don't even want to know what bad surprises hide in the redistributable runtime.


The VB developers I knew used the exact same super high quality IDE to start learning C# and a lot of the concepts either transferred or they would have had to learn them anyway to hop to VB.NET so.. this all just feels like normal forward progress to me.


I'm not totally sure what that means. VB and the change to .NET are kind of a one-off event. A "VB developer" could mean anything from someone writing VB/ASP in a text editor with minimal syntax highlighting, to someone using a drag-and-drop editor and never writing code, to someone who was really just writing fancy Excel macros. If you want to suggest some of the people working in a GUI only had trouble making the transition, I suppose it's possible, but .NET offered VB.net and Visual Studio still had drag-and-drop editors. So why would they need to be interns? Because people started organizing code differently in folders and linking DLLs?


At the height of VB6 usage, there was no VB.net or Visual Studio. There was the VB6 windows binary that was a GUI form builder and IDE. ASP was brand new and for web development, so there was not much overlap in the dev community. I don't think it was even possible to write VB code in a simple text editor and compile that without the VB IDE. Cold Fusion and Power Builder were similarly corporate controlled and suffered the same fate.

A C programmer during the same time period saw almost no changes to their language or job opportunities.


C# and .NET are MIT-licensed code bases.

Not to mention with a community that built an alternative implementation so strong that MS ended up buying the company that emerged from it. And this was before .Net Core was fully open sourced.

I’m not worried about C# or .Net getting nuked.


I was there in those days too. VB was a very beginner-friendly language, more akin to what we call today "low-code" than other languages. Those developers didn't have to "re" learn software development, as you think, instead they probably never knew software development well enough to use languages like VB.NET (which is what all of the VB devs I worked with ended up on). "C# and .NET" is equal to any language you can think of in terms of complexity, so C# devs are going to be just fine.


It was technically possible for someone to write a VB compiler and enough of a runtime to integrate VB code with Visual Fred and the rest of the .Net ecosystem. That this apparently never occurred to anyone says interesting things about that development culture, but the big lesson is to not rely on a single source for a language implementation or tooling in general. Go not only has an open-source primary implementation, it has, like, a few thousand righteously angry hackers in its culture who would take Google abandoning it as a personal affront and would work on a replacement Go implementation in a pure example of spite-driven development.


If you know C#, you aren't going to have to "relearn programming", and C#/.Net isn't going the way of VB anytime soon. Moreover, there's so much code in the wild written on this language/platform that there's not going to be any great loss of employability.


C# is open source


vb6 does not exist anymore. It is the reason that I refuse to use Microsoft products anymore. I learned other languages, Just not Microsoft languages.


Maybe we should be investing more time into creating meta-platforms so that we don't need millions of dollars to bootstrap production-ready languages.


I don't think it's a platform problem... It's a money problem.

I've got a toy language I've built at home for my own reasons. Lots of developers do.

The difference (especially in these modern times) between my toy and Rob Pike's toy is that Rob is paid by Google to develop it, hammer out the details, work out the bugs, and support it. As is the whole team Google dedicated to it.

Some languages get supported by volunteer armies (Lua comes to mind, last I checked), but TypeScript, Go, and Rust are supported / maintained by corporate money and it's hard to compete with that because a corporation can throw money at a language that isn't done yet while it's harder to get volunteers excited about a broken toolset.


I constantly hear that "we can't do it without corporations" and yet there are so many counter examples.

Most of the corporations were built on free languages and ecosystems which existed before they did.


There's been a shift away from vague distrust of corporations (think Slashdot) to full-on embrace. The fallout from that is we can't imagine platforms that don't require tens of millions of dollars to bootstrap, and we cling to the ones we have because they're so expensive.

Too big to fail is not the hacker ideology I grew up with. Maybe that's gone now.


There's very little in the open source space that competes with, for example, npm + TypeScript + VSCode for usability (where "usability" here runs the gamut from package / dependency management to code comprehension / documentation / refactoring to debugging).

That kind of polished integration often costs money. In contrast, in the open source world we limped along with gcc + gdb for decades until Clang was able to force the issue (which was funded by Apple, Google, Microsoft, ARM, Sony, Intel, and AMD).


That strongly depends on your definition of "usability". If it means appealing to the lowest common denominator, that's probably fair. Open source software tends to lean technical, because it's made by technical people for themselves. If it means being useful to the people who use it, then I don't think your argument holds water. NeoVim and Emacs at least fairly compete with VSCode, npm is not at all special in the world of package management (Julia's Pkg and Elixir's Hex come to mind as OSS superior offerings), and the world of programming languages is chock full of better languages.

To directly counter your "npm + TypeScript + VSCode", I counter with "Hex + Elixir + NeoVim".


> If it means being useful to the people who use it, then I don't think your argument holds water

I recently worked on a project involving a couple Docker containers, one of which wraps a Postgres DB and the other extends it.

Some of the features of the extension are broken because the Postgres maintainers changed something key in their Docker image (they disabled the DB listening on localhost) and the scripts the extension generates assume localhost is accessible.

In the (volunteer, free) open source ecosystem, nobody is paid to have that be their problem to solve and so problems like that, to my observation, rarely get solved because these two groups are unrelated to each other and it's nobody's job to make sure "the whole offering" is easy to use.

(And as an ardent and frequent user of Emacs, I simply must see the config file someone claims to have put together that competes with VSCode for managing TypeScript projects. That's a hard claim to swallow without evidence; I've tried, and it's extremely non-trivial. Emacs just pulled in eglot for LSP support... Not that one couldn't self-install it earlier, but every time you have to mutate a system to one-off your own solution you increase the maintenance burden of that system.)

... I'll have to carve out some time to look at Elixir. In my world, of course, "Doesn't have a tool to compile to JavaScript that has made it to v1.0 yet" is a major strike against it, but my problem domain is specific. ;)


> Some of the features of the extension are broken because the Postgres maintainers changed something key in their Docker image…

I’m not trying to set the wrong tone here but… how hard is it to modify a docker image that is used as a fundamental part of a production system? Isn’t there someone whose job it is to wrangle these things?


Not hard; once we knew why it broke it was easy to fix. Took half a day to figure out why it was broken though.

Key thing is: these two teams maintaining two separate Docker images aren't part of the same org, so there's no incentive to keep either image from breaking the other. If they were commercial entities driven by maximizing userbase or revenue, there would be reason for them to minimize breaking each other (especially if they were the same company; then someone's in charge and can tell one team or the other "Fix it or you're fired").


I found Dahl’s approach to packages with Deno very interesting. It decouples package management from the runtime’s tool chain. So a package manager isn’t needed. (Instead a proxy can be used as a cache similar to how an artifact registry would work.)


I'm not sure what you're hearing is "we can't do it without corporations" so much as it is "the ones sponsored by corporations are better."

I mean, GIMP exists, but any one of a half-dozen commercial products built by fewer people in less time are easier to use and have better feature lists. It's not that it's not possible for GIMP to be best-in-class, it's that it isn't best-in-class. (And sure, dear reader, GIMP works perfectly well for you. Great!)

Given more time, I'd love to explore the idea that "most of the corporations were built on free languages and ecosystems" given how many of the companies we're talking about today seem to have some roots in C, developed under corporate control. But definitions are everything in a statement like that, so it probably would take a lot of time to sift through and ultimately convince nobody either way.


Google's Go team is not that large AFAIK.

It's probably my love affair with Lisp speaking, but I really think there is space for PLs to implement 80% of production-level features with 20% effort and improve slowly as they turn out to be blockers. There's also a branding thing: users who are put off by lack of corporate funding won't be working on the ground floor of building a PL anyway. That's fine.

LLVM/JVM/WASM give leverage to both indie and corporate langs. More of those types of tools are great.


The difference between your toy and Robert, Rob, and Ken’s toy is that yours was built for your own reasons and theirs was built for the purposes of software engineering at scale. Whatever your thoughts about Go suiting that purpose, that was the intent, and why they were supported by Google in doing it.


Exactly. This is something I believe will happen in a few decades (progress in this area is very slow).

The JVM showed that a runtime can host a multitude of languages and compile to multiplatform bytecode that requires only a "small" runtime to run on anything (the core of the JVM is pretty simple; the complexity is in the stuff at the edges, like the stdlib) and can be ported easily to any architecture. WASM is the new incarnation of the idea, and if the people behind its specification stop worrying about politics and start getting stuff done, I am almost sure it will become the new "universal" runtime.

Usually, new programming languages have a huge mountain to climb in terms of the libraries that are required for everything these days (encryption; data formats like YAML, JSON, XML, TOML and even the old ones still widely used in niches, like ASN.1; HTTP; database drivers; logging; monitoring; graphics; mathematics; the list goes on and on). We don't really want to rewrite all the stuff that exists, but interop between languages is very hard and sometimes very inefficient unless it's based on a common set of primitives, like Java bytecode (WASM still needs a lot of features, like GC and interface types, to be a real alternative).

Zig and other languages that have very good and easy interop with C work around this problem. But I think that the future will be a universal runtime, maybe WASM, maybe something else that doesn't exist yet (or that's not yet known to most of us).


> ...easy interop with C work around this problem...

Think this is a good point. Languages like Vlang, Dlang, Nim, etc. use their interop with C to their advantage. They are not starting out in the wilderness; they can build off of the already strong C ecosystem, then add the unique advantages that their userbases want or prefer.
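
For anyone who hasn't used that kind of interop, Go's cgo gives the flavor of it; a rough sketch (assumes cgo is enabled and a C toolchain is installed):

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // Call straight into libm; no hand-written bindings needed.
        fmt.Println(C.sqrt(C.double(2)))
    }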


JVM has moved beyond bytecode now. See my sibling comment. Truffle takes the JVM to the next level - you don't have to compile your language to bytecode anymore, you write an interpreter for the source code (or a binary form if you have one already) and then the JVM turns that interpreter into a JIT compiler, giving it many other features and tools along the way. You aren't constrained by what Java bytecode can express.


The JVM showed nothing new in that regard; the idea predates the JVM by a couple of decades, and Gosling has said it was his past experience with such platforms that made him go with a bytecode-based format.


I've had very good luck with Racket ( https://racket-lang.org/ ) in that regard. It's a Lisp geared towards the implementation of languages and their interoperation. There's even a strongly typed Haskell-like language implemented in it with Turnstile ( https://www.ccs.neu.edu/home/stchang/popl2017/ - the paper: https://www.ccs.neu.edu/home/stchang/pubs/ckg-popl2017.pdf ), and the even more powerful Turnstile+ ( http://lambda-the-ultimate.org/node/5587 - the paper: https://www.ccs.neu.edu/home/stchang/pubs/cbtb-popl2020.pdf ), which implements dependent types on top of that language as well. Which is just plain cool...


That's what GraalVM is. Created by a corporation of course, but it's open source.

GraalVM has Truffle, which is an API for creating languages. If you use it, you get a lot of features for "free". All you have to do is write an AST interpreter. I wrote about it a long time ago here:

https://blog.plan99.net/graal-truffle-134d8f28fb69

Misc things you get:

• Advanced JIT compiler with support for static and dynamic typing, multiple CPU archs.

• Advanced multi-threaded portable runtime.

• Several cutting edge garbage collectors (you don't have to use them).

• Native standalone single binary for your language's interpreter/jit.

• Interop with other languages! (call into js, python, ruby, java, kotlin etc).

• Can also call into WebAssembly and native code compiled with LLVM, your JIT will optimize across the boundary.

• Can implement your language stdlib in any other language (see interop) meaning programs written in your language run on every major OS out of the box.

• Profiling support.

• Debugger support (chrome devtools).

• Sandboxing.

• Heap snapshotting.

• Code coverage support.

• Some Language Server Protocol support.

Probably more I forgot about already. It's really a pretty amazing generalization of the JVM and drops the cost of creating competitive new languages through the floor. IDE integration is the weakest part, but as you're writing your AST interp in Java, making a converter to the IntelliJ PSI AST API isn't much more work so an IntelliJ plugin would be easy. And IntelliJ is itself open source so you can fork it to make a dedicated language IDE if you want.


Like RPC and IPC and SWIG? We have that.


The same goes for huge open source projects like Kubernetes. The amount of money to run the infrastructure for CI/CD and CDN is in the millions per year. Someone has to pay for that and the big companies are the ones that fit the bill. Google mostly pays for it but others have started contributing also. For example, Amazon recently announced that they are funding part of the infrastructure also.


How much does AWS make from just EKS? Tossing some change to support k8s ecosystem seem like a pretty good investment.


*foot the bill



No. Foot is right, meaning "pay". "Fit the bill" is also used, but not in this context.


Yes. _Fit_ is right, specifically in this context. Re-read the comment you're talking about. In fact, either works fine. I'm taking issue with the pedantry we're engaging in here (while admittedly adding to it).


The context was clearly about payments. So it's foot. It's not particularly ambiguous even.

I don't correct people to be pedantic or argumentative. I love being corrected and learning from others, but I suppose not everyone does.


Yes, it is "foot the bill" in regards to paying for it. My mistake but I am glad everyone had some fun learning!


One small note, but Nim's comptime support is on par with Zig's. Using comptime and templates is preferred over macros when possible.

Here's an example (1) of a compile-time object-to-key-value layer I did without macros. It's a basic wrapper that splits an object's fields into separate key-value pairs for storage. The problem is that the storage system only has 16-bit ints for keys, which makes collisions a real possibility. So this wrapper checks at compile time that there aren't duplicate keys.

1: https://github.com/EmbeddedNim/nephyr/blob/main/src/nephyr/e...


As I said I love golang. I invested a lot of my own time into learning it and its ecosystem. The time developers spend learning a language and its libraries should not be discounted.

Maybe we overestimate how much corporate backing is required to make a language a success. After all we had successful languages and ecosystems long before any corporations became interested in funding such things.

You mention golang's crypto. Is it true that all that code is native to the go project or are they mostly just wrapping other open source libraries which have been created by 3rd parties? Any open source language could do the same without massive investment (many have).


Go's crypto libraries are all native, and a good example of why corporate backing is very useful. It's unlikely that you would get someone like agl to develop such excellent cryptography libraries in the first place without being sponsored to do so. Yes, there are some talented developers who will donate their skills, but I would wager that the majority of the most experienced developers are going to be busy with work, and large contributions are only going to happen as part of that work.


go's crypto is all original to go (other than possibly some assembly implementations ported from other projects), and nearly uniformly excellent.

I would take the go TLS implementation over, say, openssl (which is what many/most other languages end up using) any day of the week.
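
To illustrate "all original": everything below is stdlib, with no OpenSSL anywhere in the chain (a trivial sketch):

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // Key generation, signing, and verification, all pure Go.
        pub, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        msg := []byte("hello, world")
        sig := ed25519.Sign(priv, msg)
        fmt.Println(ed25519.Verify(pub, msg, sig)) // true
    }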


Name a popular language without corporate backing.

C was corporate, AT&T.

C++ was corporate, AT&T.

Java was corporate, Sun.

Maybe Perl wasn't corporate. Had a great run but faded.

Python? Maybe, but Guido van Rossum worked at Google and Dropbox for many years.

Ruby? Is popular because Rails, corporate.

JavaScript? Mozilla.


Lisp and OCaml are about as close as I can get. I would argue Haskell (several of GHC's core team worked at MS, though). I would also argue Python was popular before Guido worked at Google. Those four all came out of academia, though, so they had backing, just not corporate backing.


OCaml - INRIA and Jane Street, Facebook, Microsoft research

Lisp - IBM, Xerox, MIT, Apple

Python - CWI, NIST, CNRI, afterwards Guido joined Google in 2000


PHP? You could argue Facebook but they mostly went and built their own thing that at best was inspiration of later PHP versions.

As far as I know currently the only real paid full time contributors are from the relatively newly formed PHP foundation.


Very much Python.


> Maybe we overestimate how much corporate backing is required to make a language a success

It depends on what "success" means, but I can tell you for a fact that even for marginal languages, in the grand scheme of things, it is very, very expensive to run infrastructure that is well oiled and that modern programmers can rely on, unless you are careful. You may not have many of the same luxuries others have. The demands are generally high, even if users of smaller languages are more forgiving.

I used to help run Haskell.org. I think people would hardly call it a top 5 language or anything with a gazillion programmers. We didn't have the luxury of designing our systems around infinite free GitHub bandwidth by exploiting Git repositories or anything like that, back when they were designed; so we had to stick with what we had, which was "A server running a daemon we wrote that stored files on the filesystem." Hardly "corporate" in any sense. It still used many dozens of terabytes a month on bandwidth for the package system. The only reason we were able to handle that is because CDNs like Fastly can eat the cost for us, for free, and because we got major free tiers from hosting providers (RIP Rackspace) to provide the servers. It can easily run into thousands of dollars a month for things like this, and that's before you get into assholes who try to ruin things by making it even harder. Oh, and I'm not even counting the actual money spent on the engineers, in terms of hourly wages, spent on this. I worked for "free."

The rise of integrated CI systems in particular, in most software projects, has had a tremendously positive effect, but one of the externalities associated with this is that they demand tremendous resources from upstream systems like this.

> After all we had successful languages and ecosystems long before any corporations became interested in funding such things.

People really need to understand that "the passage of time" is a real thing and has many consequences. It turns out the world is not the same as it was 20 years ago or 40 years ago. No amount of denialism will change that. The CI system demands are a good example of this.

> Is it true that all that code is native to the go project

Yes, the cryptography libraries for Go are written by experts on the Go team. Anyone can write a cryptography library and even have it work. Not everyone can write a library high quality enough to ship to billions of people as the default in a language with a good API, rigorous quality control, and active security review. That is what Go offers, it's not just a simple piece of code.

> Any open source language could do the same without massive investment (many have).

Sorry, but you're wrong, and it frankly indicates to me you have no experience in this, unless you simply believe that people's time and engineering effort isn't valuable or worth money. Can I ask what programming languages you have helped design and run the infrastructure and developed community libraries for? Because I dislike Go as a language for many reasons but you can't get away from this. The native crypto stacks for e.g. Haskell took years to reach relative maturity, same with Rust. Those projects weren't marginal, many people believe them to be very important, and people worked actively on them. Unless you simply believe "multiple talented people working for years on something of critical importance" isn't equivalent to "a massive investment", in which case I simply don't know what to tell you. It's a hard project. There is no way around it.


Thank you, that was very informative.

> Any open source language could do the same without massive investment (many have).

What I meant by this was wrapping other open source libraries like libsodium etc. rather than implementing crypto libraries from scratch. I, wrongly, assumed golang might also be doing that.

It is true I have no experience implementing cryptographic libraries.


Based on the article, the person appears to also be talking about privacy, security, and freedom: corporate tools can unexpectedly and stealthily phone home, and their direction is too much under corporate control. This is less of an issue with truly (or more nearly) free and open source tools like Nim, Vlang, Free Pascal, Odin, Dlang, etc. They don't have the same kinds or levels of corporate gotchas attached to them. They are more "tools of the people". In various languages, corporations are more akin to users and donors than to dictators forcing their way, despite what the majority of users may want.


Corporate control is very much a problem. A big one.


You're speaking about issues with corporate control in Go like it's some sort of theoretical thing. It's not. It's been here for ~5 years.

Those "high quality library of cryptographic functions" you mentioned?

Well, thanks to one Google employee, it is impossible in those "high quality" cryptographic functions to, say, disable in any way specific algorithms...except by modifying Go's source and recompiling it.
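
To make that concrete, here's roughly all the control the stdlib gives you (a sketch against the crypto/tls API as it stands in early 2023):

    package main

    import "crypto/tls"

    func main() {
        cfg := &tls.Config{
            // This list applies to TLS 1.2 and below ONLY; the
            // TLS 1.3 suites are hard-coded and cannot be removed.
            CipherSuites: []uint16{
                tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
            },
            // The one lever over TLS 1.3 algorithms is the
            // sledgehammer of turning TLS 1.3 off entirely:
            // MaxVersion: tls.VersionTLS12,
        }
        _ = cfg
    }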

This is (and I wish I were making this up) because said employee has decreed that TLS/1.3 and all the algorithms in it are the best thing since sliced bread (guess who was on the committee working on the TLS 1.3 spec!) and it couldn't possibly be insecure. Because, you know, history is not rife with examples of cryptographic functions (and perhaps more frequently, their implementations) being discovered to be vulnerable (in some cases, purposefully so thanks to NSA subterfuge.)

Further, they stated that even if it were discovered to be insecure? Well, Google will just either fix it or disable the insecure part, and release an update, and if you can't apply that update in a timely fashion, you have "bigger problems":

> Finally, about (removed) third point: if a security issue were to be found, we would publish a small security patch for all supported releases, like any other security vulnerability. A deployment that can't apply those timely has a bigger problem and won't be saved by TLS cipher suite configurability.

Because when you wake up in the morning to a TLS/1.3 vulnerability reported in The Reg and go "oh shit", it's clearly not faster to push a configuration file change or change a bit of your own code to immediately disable the affected algorithm...than it is to wait for the mitigation to make its way through Golang's release process, get picked up by the distro you use and wind its way through that release process...all the while you're vulnerable, and have been for weeks or months because Google bribes people to keep their mouths shut via the bug bounty program and/or 'responsible disclosure' policies.

Because production environments always have the ability to push out new versions of a core package.

Because it's never easier to push out a configuration or code change, both technically and administratively (ie change control.)

Because it's not often easier to validate and if necessary roll back a configuration or code change versus a package change.

Because clearly every organization that uses Golang as part of its infrastructure has the technical know-how and infrastructure to grab Go source code, modify it to disable a crypto algorithm, compile it, and package/distribute it.

Because there's never a delay between when a vulnerability is discovered by blackhats, discovered by whitehats, reported to the project, mitigation makes it to release, the distro's package maintainer gets around to creating/modifying/applying the patch to whatever version they're actually still distributing, testing that / getting it signed off on, and pushed to update channels.

So, there you have it. One ivory-tower asshole who has probably rarely worked on an infrastructure team, who does not seem aware of, much less care, about the potential consequences caused by his decision...for people who are not part of the same organization that maintains the programming language.

I'm sure that if you work at Google and a problem with TLS/1.3 or Golang's implementation of it are discovered and reported to Google, you can count on that fix being made and deployed across Google's infrastructure almost instantaneously.

The rest of us, it seems, can get fucked and be vulnerable for days or longer.


Replace "Go" with "Internet Explorer", and see how that reads:

"My response to the plea in this article is simply "No thanks." If Internet Explorer gets that bad, I'd be happy to use "works best on internet explorer" banners, or migrate to another browser, or whatever needs to happen. But until then, I'm not going to preemptively switch ecosystems and banish technically good options from my tool belt because I have fears about what could happen. I want production quality websites and animations. That's hard. Internet Explorer has a high quality library of windows-specific functions. Hard. Internet Explorer has a fast browser which leverages windows-specific optimisations great. Hard. Internet Explorer has an incredibly liberal interpretation of web standards, making other browsers break 99% of the time, affecting most of my target users. Very hard. Some of those hard solutions are self-inflicted by the choices made in the browser, but the fact that they are so ubiquitous and integrated with the microsoft ecosystem so well is what makes it valuable. If the web standards are missing the many lovely MS-specific extensions of Internet Explorer, a more standards compliant browser would have less value. A standards compliant browser with a tiny user base by comparison would be even harder of a sell.

I am by no means trying to say that Netscape is low quality or not interesting for anyone. In fact, I actually think Netscape is cool. It is not the only other smaller browser I like. I have a strong affection for Opera as well. There are unique properties that make these browsers desirable. Netscape addons are really cool for example. I like stuff like this.

But: I also think Internet Explorer is a great piece of production-quality software. I know many people hate the browser now, especially now that the honeymoon has thoroughly ended. But I still like it. I feel highly creative in it, the ecosystem is good, and at the end of the day, I know I can make reliable, flashy websites in it.

Corporate control is a shame, but the truth is that corporate control is not the problem at all. The problem is funding. Because anyone can try to deviate from web standards and implement windows-specific extension hacks in other browsers, but can you pay maintainers? Can you run the CI, the website and playground, host the CDN with the downloads? Etc. Sometimes the answer is yes, especially with how much SourceForge supports a lot of those things, but in general the answer is no.

"Corporate control" is not the problem itself. Governance is just an outwardly visible consequence. The true control comes from maintainership and stewardship. Because if nobody is stepping up to the plate to take that role of introducing microsoft hacks in browsers, then whoever is doing it today effectively has control over the internet."


I don't think I understand the ask here. The author's claim is that they're concerned about corporate ownership of languages, but the concrete issue they cite is that the system "phones home." Well, so does Python every time I pull a pip package in. So does every package manager.

Is there an implied "I don't trust the phone-home features of package management systems supported by corporations" that doesn't apply to non-corporate-supported development ecosystems? Why is that?


Ironically, a small programming language is much more vulnerable to telemetry-code injection by its maintainers than a large one like Go, where multiple non-Google-affiliated members of the community are actively following each commit made to the compiler source code. As long as you build your compiler from source (and have always done so, per Reflections on Trusting Trust) then you benefit from those eagle-eyed auditors for free. That said, it's important that the language have good governance with representation from firms other than the original creator.


Pip is separate from python itself. With a given language can I download packages with curl and install them myself? I think I can trust curl.

The problem is not only that the tool connects to the networks, but who is behind the tool. Google is a company whose business is collecting all the information on people it can. I don't think those in control of python/pip have the same incentives.


there's nothing preventing you from downloading go code manually without the package manager and using it that way


Or even easier--pointing your Go tool at a package proxy of your choice.


yup; you can easily run your own or just set GOPROXY=direct to use go modules without a proxy at all.
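
e.g. (the proxy URL below is just a placeholder):

    go env -w GOPROXY=direct                     # skip proxy.golang.org, fetch from origin VCS
    go env -w GOSUMDB=off                        # also skip the public checksum database
    go env -w GOPROXY=https://proxy.example.com  # or point at a proxy you host yourself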


I think this is a fairly misguided rant, and ignores the real priorities (and risks) that I have as a developer - both personally and professionally.

I'm happy to use languages funded by corporations - the incentives for them are clear, they fund development and work on the tooling and spec for their own use-case - they garner additional support, momentum, and goodwill by releasing the language under an open definition (and sometimes also release an open version of the tooling around the language).

I don't really know what the author wants from a package management tool. At least personally - I fully expect it to communicate with the package host provider, and for them to track information about what I'm downloading... It's a network based tool that fetches remote resources someone else is hosting (usually for free).

Decent package managers will also support self-hosted repositories, and allow configuration for 3rd party repositories.

I also don't find this sort of tracking particularly malicious... not any more than I would find it both reasonable and sane for a store to be tracking how many customers they get a day, and to pay attention to what their hot-selling items are.

Further - GCC is absolutely an example of a corporate-provided language being adopted and tooling developed outside that corporation's control. Not sure what the author is smoking... but C was developed under corporate control at Bell Labs, part of AT&T. And there's still a wild amount of closed source tooling around C that comes out of Microsoft, and is absolutely high quality (and not cheap).

Mono is an example of this for C# - Corporate language, open implementation. Javascript was developed by Netscape (another corporation) and now has dozens of different runtimes. Some open, some less open.

----

Basically - What the fuck is the author talking about?


He admits C was corp sponsored though.

"Admittedly C also came from a corporation but it came free with every unix install and soon after I started using it, Richard Stallman et al. gave us GCC, a free C compiler."

But at the time AT&T couldn't spy on its CC users though so the risk was lower I guess.


> But at the time AT&T couldn't spy on its CC users though so the risk was lower I guess.

Sure they could... What do you think they were doing when they were selling System III & System V?

They compiled customer lists, and sales contacts, contract agreements and everything else you'd expect from an org selling commercial software. Was it "automated" in the same way that tracking use from a package manager is? No. Was it tracking? Fuck yes.

Unix wasn't free until the FSF began the GNU project...

Hell, even the BSD variants of unix were sold - they weren't free software. Are there free implementations that exist today? Yes. They exist because the item being sold and maintained by the corporation is NOT the spec for the language/os (the recipe) it's their implementation of it.


UNIX was "free" though, because it was available in source tapes, which none of the competition was, and due to the restrictions on AT&T research use, they were only allowed to charge for a symbolic price of tape replication and sending costs.

The lawsuit and the request to take the Lions' commentary out of print came after AT&T was again allowed to profit from its research.


> particularly malicious... not any more than I would find it both reasonable and sane for a store to be tracking how many customers they get a day, and to pay attention to what their hot-selling items are.

This is substantially less than what code running on your machine can do, which is basically unlimited in its spying capabilities. Yes, this is a problem in itself that needs to be fixed.

Otherwise you appear to willfully misunderstand. These may not be your priorities, but taking offense and framing them as "crazy" does a disservice.


> Otherwise you appear to willfully misunderstand. These may not be your priorities, but taking offense and framing them as "crazy" does a disservice.

So make your own Go compiler & package manager and stop complaining? The language spec is open.

Or if putting your money where your mouth is feels too hard...

Use an implementation of a corporate sponsored language that has free and open tooling around it... ex: GNU C/G++

Or fuck it - use the GCCGO compiler... https://go.dev/doc/install/gccgo

Because it turns out if this really is your priority - there are active ways to handle it because, despite being corporate sponsored, the language definition is still open...


Already do. Still feeling like a personal attack to discuss this idea?


> Otherwise you appear to willfully misunderstand. These may not be your priorities, but taking offense and framing them as "crazy" does a disservice.

Show me any non-corporate sponsored language that is not a toy, and is seeing serious traction. (hint - I enjoy Nim, but it's not a valid answer).

Unlike the original post implies - most of the languages in his list are corporate sponsored...

D was created at Digital Mars

Free Pascal is... an open implementation of Pascal, which was designed by Niklaus Wirth at ETH Zurich; most variants are dead, and Borland poured a fuck load of money into Turbo Pascal. So both gov + corporate sponsors

Nim... is arguably not corporate sponsored, but it comes right out of Free Pascal (which was), and it has trivial use at large (I still enjoy the language - I'm not about to suggest we use it at work).

Same as Zig - which is probably the closest to being a true open project in the list (IMO) but which still builds heavily on c++ tooling. (Honestly - I'm most excited about Zig, it's nice and is very close to self-hosting)

Steel Bank comes right out of Carnegie Mellon (and if you don't think colleges are corporations... boy, I've got news for you)

Vale... well, I actually don't really know anything about this. Honestly, it's one of the first times I've seen it referenced at all; I'll have to look it up some time.

---

So again - WTF is the author talking about?

If he just wants a free compiler (free as in freedom, not as in beer), they exist for basically any large language out there.

If he wants to control language direction and goals... well - he's welcome to write his own language but otherwise I find no compelling difference between a guiding committee/creators on what he considers a "good" language, and the company making decisions for "corporate controlled" languages. If anything - at least I can usually predict what the corporate controlled languages will do, even if it's not always what I'd like...


Just to turn your own example against you for a moment: Mono is dead. If you were using it for your cross-platform WinForms desktop app, as my team was, you're now stuck with no updates and no migration path to MS' new offering, MAUI (though it still doesn't run on Linux, and nor did the several UI frameworks in between). At least Mono is still getting patches. They even recently did a stable build after a long hiatus.

I believe the cause of Mono's death was a combination of MS buying out the company where many Mono devs worked (Xamarin), and the huge rearchitecting MS did for .NET Core. The latter being something that on paper sounds great for openness and cross-platform support—but in practice, it hasn't turned out that way. The .NET team is mostly MS and they still play favourites with Windows (and increasingly Android, which is fair) and Visual Studio.


Eh - I don't really know that dead is the right term, and I think it mostly served its purpose (it's actually still getting fairly regular commits, but I agree that it's no longer keeping pace with .NET Core)

I would also draw a pretty clear distinction between C# the language, and something like WinForms.

And that's really the whole point - implementations differ in functionality exposed and features worked on (hell - just the compatibility issues and differences between clang/GCC/MSVC are a great example). The corporate implementation is usually the most featureful because it has the most resources poured into it.

That's not a problem. That's the community being able to take advantage of those resources. If/when the company stops being a useful partner, they're free to ditch them and fork (and this is historically how things like Linux/OpenBSD/Gnu tooling exist...)

So again - if you have a real problem with using the corporate release, use the open versions. Contribute to them.

But I don't find it a compelling argument to say that just because a language is sponsored by a company we should avoid it. Doubly so if the corporate version is licensed well.


Mono isn't dead per se, as it is what still powers Xamarin and now Blazor WASM.


> I'm not so sure about the openness of D-lang so I've left it out

D is the most open language you will find. It is Boost licensed, which has the fewest restrictions of any license you'll find:

https://www.boost.org/users/license.html

The compiler is 100% Boost licensed.

Nobody pays me a dime for D.


(The one and only WalterBright! What an honour.)

Terribly sorry about that. I've moved you into the list. It's been a long time now but I really did like D and its meta-programming features.


Thanks for the quick correction!


They should, but that's another issue.


Open source projects are underfinanced and their maintainers are overwhelmed. I would rather read proposed solutions to that, because that seems like the more important problem.


The economics of independently led open source is still a problem with no solutions in sight.

The economics for corporate-controlled open source are quite clear. It's a cost saver for corporations to open source solutions to common problems. This gets others to buy in, which spreads maintenance costs and ensures that no one is seriously winning in the domain of the project - essentially de-risking the cost/benefit calculus in that domain.

Another problem, which the author didn't discuss, is that corporate controlled software is often designed by a committee, or designed by those with political capital within the corporation. There is not usually a strong selection mechanism for good system designers, or good open source leaders. Go is a happy and rare exception. Projects coming out of corporations are often not as well designed as projects which rose to prominence organically through differential amplification by the community.


I’d also like to throw another counterexample out. C# and TypeScript from Microsoft are both excellent languages and are led by Anders Hejlsberg.


The C# language team are doing a great job. Most of what I hate about C#/.NET—such as the lack of a UI framework that runs on Linux, as I just bemoaned in another comment—was either inherited from the earliest versions or got forced through by management at MS.


Anders Hejlsberg hasn't had anything to do with C# for a few years now.

His focus is TypeScript.

Mads Torgersen has been leading C# design since version 7, more or less.


Is that the same Anders Hejlsberg who developed Turbo Pascal and Delphi for Borland, another company which Microsoft successfully crushed in the '80s and '90s? (Hint: it is.)

We could have had C# 30 years before we did if it weren't for Microsoft.


C# came out in 2000; Microsoft didn't even exist in 1970.


The economics of all this is where it's at, and so important to understand.

Look at it through a lens of scarcity. What Google sells that is 'scarce' is not the Go runtime - it's ads and other things.

But an independent open source project doesn't really have a source of 'scarcity' that it can sell.

https://journal.dedasys.com/2007/02/03/in-thrall-to-scarcity...


Since the 70s, Washington has largely abandoned public investment in favor of the private sector. Zero surprise that this dynamic emerged in the software industry.

Of course, in an alternate timeline where the government successfully funds public/"open source" software, this post would be titled "Stop Building on Government-Controlled Languages".


Even worse, neoliberal policies have taken things developed with public dollars and given them away to private firms to close off and take ownership of.

Look at what happened to NSFNet and how it's no longer a network that's open but now a part of a large, billion/trillion dollar oligopoly with no obligation to serve anyone who won't make them a profit.


Everywhere in the "West". 1975 is the critical year.


1971 was when Washington really abandoned the Westphalian, internationalist, UN-arbitrated world order in favor of seeking unipolar world domination, by abandoning the gold-backed USD in favor of a violence-backed USD/"petrodollar".


Thank you for the info.


> Open source projects are underfinanced

The freemium format. It floats the WordPress plugin ecosystem, which is around $1bn in size - likely larger now, since that statistic is 1-2 years old.

You give away the base software, and most of it, free. You provide paid addons for niche, corporate, and business use cases, etc. Everyone is happy - the userbase gets their software free, advanced & business users get advanced features that could not have made it into the main software, in addition to paid support. And, most important for everyone, the project & its organization can sustain itself and keep maintaining its software.


How do you accomplish that for a programming language? Make the language open source but sell a proprietary debugger? And how does that solve the problem of corporations controlling languages, given that you would presumably have a corporation controlling at least the proprietary components?


> How do you accomplish that for a programming language?

That's a more difficult case of course. But I believe enterprise support and custom enterprise features/addons could help in the case of languages.

> corporation controlling at least the proprietary components?

WP plugins are still GPL. But the users can buy them, download them from the original source, install them on their site, and update them from their site with one click. Coupled with premium support, this works well. It's a bigger risk and hassle to 'pirate' the plugins from 3rd parties rather than just getting everything directly from the source (though such 'piracy' does have an impact).

In any case, one organization controlling at least the proprietary, custom enterprise components is much better than the alternative.


Look at Laravel: an open source framework that creates SaaS opportunities, conference opportunities, sponsorship opportunities.

Look at Tailwind, an open source lib with SaaS opportunities that netted the creator millions of dollars.


Think PyCharm.


So become a corporation?


I think whatever format works would be OK, right? A cooperative, or a foundation with an attached company (a la the WordPress Foundation + Automattic, etc.) - whatever works would be OK.


I typoed corporation. My point was that you would become the very thing you were trying to avoid.


There is no problem in organization itself. What matters is who controls an organization and how it works. A corporation is just an organizational entity. It can be run in many ways. It does not have to be organized and run like a privately owned capitalist corporation.


The difference isn't organization, it's whether the goals of the project need to be yoked to a profit motive. If they are, invariably you're going to eventually put the cart before the horse.


I am just going to link to a comment that I agree with - because I don't agree that open source projects are underfinanced.

https://news.ycombinator.com/item?id=33985785


Terrible arguments.

Golang and Android Studio are both open source, and you could compile them yourself. Telemetry can be turned off in Android Studio, and you can use stuff like flatpaks to isolate the software from your system, and completely turn off networking permissions if you don't trust the settings.
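
For what it's worth, a rough sketch of that last point, assuming the tool in question is packaged as a Flatpak (the app ID below is a placeholder, not a real package):

    # Deny network access for this one app; every other permission stays as shipped
    flatpak override --user --unshare=network org.example.DevTool

    # Double-check the effective permissions (flag available in recent Flatpak versions)
    flatpak info --show-permissions org.example.DevTool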


Also, you end up with very similar questions for non-corporate languages like Nim, because you don't really know who created the software, what their motives might be, and whether the binaries you are receiving are really what the source code says they are.

Reproducible builds help, but once you are going down this path of verifying instead of trusting, then it doesn't really matter who built the software.


I have seen this perspective a lot in government and adjacent entities. For them, commercial software and corporate open source has a clear financial motive. If they can't identify why a project exists and continues to receive support, they see a security risk, either via direct compromise or project abandonment and the associated supply chain rot.


and they aren’t wrong to do so.

Browser plug-in author gets bored and sells out customer base is a well-trodden story.

Takeovers of well known packages are another.

Most of these ecosystems do not offer proper sandboxing for the things we take from them, so it’s easy for things to grow an appendage that abuses our prior assumptions.

Apache’s Java ecosystem is full of consultant abandonware and tripwires.


> Browser plug-in author gets bored and sells out customer base is a well-trodden story.

Not even just "gets bored" - it's a plain matter of incentive. Salaries are a strong incentive against undermining your employer's work. Now, of course, there are people who are incentivized because they are altruistic and good at heart, but that's very difficult to measure. It's much easier to demonstrate aligned interests monetarily.


But that concern exists on the other side just as much! Companies get compromised, companies abandon projects. And companies, even though they make money from a project, decide that they could make more money by having it ship them data on the side.


True, but I wouldn't say "just as much." An ongoing financial relationship usually means that the purchaser can expect reasonable advance notice if a project is getting discontinued.

And in more sensitive agencies, if the government purchases software, they generally have the name and contact information of who they can arrest if the purchased item becomes or is found to be malware.



I agree, but I do think that you have to consider the ecosystem of the language if the major corporation steps away or dials back - maybe think of them more like a 'dependency'. Go could live on through its community alone; but how well will Twitter's projects hold up without their involvement? I imagine they're not very lean for Twitter to keep spending time on.

I think this argument is better applied to frameworks and JavaScript libs. Be cautious about who you trust, especially in this new phase of everyone pivoting around AI/BI applications.


This is the real threat. So much “open source” technology is just free community editions of closed source software. If any of these companies decide to change course for any reason, you’re screwed. Who’s going to pick it up? You? No, you were coasting along on someone else’s work; you don’t have the expertise or slack to support it. But someone will, right? Nah. They’ll just shrug and pivot, as will you.

Maybe I’m just bitter, but I’m on my third “open source” text editor because of machinations inside megacorps suddenly deciding to shut down perfectly fine projects.


> Maybe I’m just bitter, but I’m on my third “open source” text editor because of machinations inside megacorps suddenly deciding to shut down perfectly fine projects.

Advice for choosing a text editor:

• Avoid Electron.

• Avoid custom-looking GUI toolkits.

• Avoid IDEs, unless they're highly-modular with a rock-solid, simple base.

• Avoid complicated build toolchains.

I'm using Geany at the moment, which… almost meets the third criterion, and does meet the fourth. If Geany support stops, I'd vaguely be able to make changes to it; the code's written in C, licensed GPLv2+, and it's well-documented. (If GTK+ 2 or Scintilla support stops, of course, no dice; I don't understand that area of my software stack well enough.)

Of course, there are things I want it to do that it doesn't; Geany isn't perfect for my workflow. Better than trying to write my own, though!


The same holds true for any software you didn't build yourself. Emacs devs get collective brain fog and start using VS Code tomorrow? I can't support that, and the same goes for Vim or just about any other editor out there.


You literally picked two of the longest-lived and most successful open source projects that are not effectively corporate owned.

I really don’t think this is the critique you think it is.


VB had a lifespan of more than a decade and was seen as a pretty safe bet until it wasn't.


"Terrible arguments. [...] and completely turn off networking permissions"

So it basically goes back to what the OP argued, right?


No. OP argued that you should not use the software because it might phone home, and the author doesn't think the tools should phone home for any reason. Yet, with flatpak you can specifically turn off networking permissions just for that app, and thus remove the concern.

Also, similar concerns would exist for non-corporate software, so the corporateness is irrelevant to whether to use a free tool which you have full ability to inspect and compile yourself.


I don't know why you'd run a flatpak you don't trust with all that permissions micromanaging. That's the kind of thing you do when reversing malware, not when using everyday tools. The idea that that kind of flatpak even exists already portrays a very sad state of software.

Corporateness is also not entirely irrelevant, since the incentives are different, and 'corporate' is highly correlated with proprietary, spyware-ridden software. Also, the author talks about corporate languages (and then throws in the tools too), which have their own different set of problems.


The author specifically calls out Google/Go, Apple/Swift and Microsoft/C#, around tools that do compilation and package management spying on you. It's worth noting that this potential exists in both corporate-controlled and non-corporate-controlled languages, but ... have there actually been any incidents of Go/Swift/C# doing anything sketchy here? IMO this argument needs specifics, because non-corporate projects can do the exact same thing.

The other concern is:

> Using java for free software was the first misstep. We were warned against it but ignored those warnings. Much later the oracle/google battle showed how precarious it is to build on languages controlled by corporations.

There's certainly an argument there, but it also happens with non-corporate open source. To give an example:

- Scala (programming language) was released in 2004, as a non-corporate open-source project

- Play (web app framework for Scala and Java) was released in 2007, as a non-corporate open-source project

- Akka (actor system lib for Scala and Java) was released in 2010, as a non-corporate open-source project

- Key people in these projects formed Typesafe (now Lightbend) in 2011, a company that provides premium support and tools around open-source projects, largely centred around Scala/Play/Akka

- A few months ago, in 2022, Lightbend changed the Akka licence, made it proprietary ("Business Source Licence") and very expensive at large scale

Software that starts out as more "pure", non-corporate open-source can still turn the tables on you and charge large licensing fees later. But at least if it's open source from the start, it can be forked, e.g. for Akka, there's this Apache fork that was started after Akka changed its licence: https://github.com/apache/incubator-pekko . This is the key open source protection, and it's true for both corporate and non-corporate projects - if the maintainers start doing things people disagree with, anyone can just fork it.


I don't see the benefit of eschewing a proprietary runtime and then running your program on a proprietary OS owned by the same company.


Are you referring here to Apple/Swift/iOS/OSX, and Microsoft/C#/Windows? If so, good point: if you're writing iOS mobile apps, or OSX/Windows desktop apps, you're pretty tied to Apple/Microsoft regardless of the language you choose.

Not so applicable to Go - people mostly use it for writing servers running on Linux.


[deleted]


I work for Google now on an open source project, and have worked on open source language projects for them before.

This is just not how things happen. At all.

There's a lot of fear in your post about things that _could_ happen, but haven't actually happened in Google's history, or aren't unique to corporate sponsorship.

> 1. I wouldn't be able to refuse if they told me to add tracking, analytics, AI "learning", or lock-in mechanisms.

Google just doesn't do this with OSS projects. I've never seen on mine, or heard about it in others, or heard news of problems. I did work near a CLI that sent some usage analytics back (so they could see crash reports, etc) and some people complained, and it was removed. It was never nefarious and the idea that Google is trying to make better ad profiles or somesuch via OSS projects is frankly ludicrous.

> 2. If at any point they thought that Vale competed with Golang, they would shut it down. It doesn't matter that they're completely different (Vale doesn't even have a GC); if any person in the chain of command even had the _perception_ that it did, Vale would die.

> 3. If at any point they wanted the headcount for other things, they would shut it down.

How is Google stopping funding an OSS project any worse than Google never funding it in the first place? They can't "shut down" the project any more after paying you to work on it than before paying you to work on it. If you want to work on it for free, just do so.

Yes, you _might_ have to fork the official repo if it got moved under Google's GitHub orgs, but contrary to your fears, Google has a good track record of handing off OSS projects they don't want to maintain. There's a whole OSS office to help with this and an official process for transferring ownership and copyrights.

You're also allowed to work on non-Google OSS projects for 20% if it fits in the spirit and goals of 20% time, so if Vale is even vaguely in-line with Google work and goals, you could have just... worked on it. Did you actually talk to anyone at Google about this?


[deleted]


> The alternative is to keep it open-source

I can't imagine that there would be a reason to close it to work on it at Google. That would be a very unusual situation.

> This isn't just a side project for me.

Then you probably made the right decision to work on it full time, but only as an alternative to working on it part time. If Google would have paid you to work on it full time, there really isn't any risk. If they ever tell you to work on something else, you can still quit.


> Google just doesn't do this with OSS projects.

So, Android being an open source project, how does one turn off these nefarious repeated connectivity checks (phoning home)?

Or how does one modify `/etc/hosts` like we do on Linux?

Why do we have to go all the way to root the device or use a different OS to achieve these?

- https://android.googlesource.com/platform/frameworks/base/+/...


Android has a lot of open source parts to it, but nearly all users are running a commercial version and it's not managed as an open source project, so I don't think it counts.

Developer tools are different.


My comment was in response to "Google just doesn't do this with OSS projects."

Also the source code link I provided was to vanilla Android, not to a commercial version running on someone's phone.


Can you guarantee this in four dimensions? i.e. after new management takes over?


Yes: new management can't un-open-source something. If it's out there, it's out there. They could close future work, but not existing work.


If they employ the core team and own the intellectual property rights, they could

1. Make the next version proprietary, or partially proprietary

2. Keep updating the closed version; no more core team for the open version

3. After some time the open version is too outdated, too far behind the proprietary version

4. Ambiguity in licensing also deteriorates interest

For examples, see Qt, MySQL, Java


It's true that the health of an open source fork isn't guaranteed when there's a split, but OpenJDK looks healthy? I haven't been following Java much.


Yes, and could damage the product/brand in the process. MySQL, OpenOffice, and Oracle come to mind. It is survivable, but often not a win in the long term for the original author. Seems like a personal decision however.


Question about Vale's positioning: where does it sit relative to Zig and Nim? At a brief glance Vale seems closer to C than C++, which seems to put it in some of the same use cases as Zig, but perhaps a bit higher level. Is that an accurate assessment?


Zig is great for low-level use cases like embedded, especially where performance is a high priority and memory safety doesn't offer as much benefit as in other domains [0].

Nim's also geared towards systems programming, though it does so with an RC foundation. Its features are well-designed and complement that pretty well.

Vale's more geared towards the higher-level cases (games, apps, servers) where memory safety, performance, and developer productivity are all priorities. It's meant to be more of a "software engineering" language, focusing on keeping things loose and decoupled in the large, while offering tools like regions [1] (not done yet) to eliminate overhead everywhere it can.

[0] https://verdagon.dev/blog/when-to-use-memory-safe-part-1

[1] https://verdagon.dev/blog/zero-cost-memory-safety-regions-ov...


Rust will probably encompass most of these use cases eventually. Features like e.g. RC with efficient cycle collection as found in Nim, or generational regions as found in Vale, will simply be implemented as add-on crates in Rust, complementing the existing borrow checker.


There are some design decisions that mean Rust can't work with generational references and regions, unfortunately. They're incompatible with Rust's form of borrow checking.

In Vale, each struct has a generation in theory, but they are often merged with their parent struct. It requires some pretty interesting logic which can't be implemented in Rust.

Regions require truly immutable references, which Rust doesn't have. Their shared references are unfortunately foiled by the RefCell escape hatch.
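
To illustrate that RefCell point with a quick sketch: a shared & reference in Rust doesn't actually guarantee the referent never changes, which is what regions would need.

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(0);
        let shared: &RefCell<i32> = &cell; // an ordinary shared (&) reference
        *shared.borrow_mut() += 1;         // ...which can still mutate the contents
        assert_eq!(*cell.borrow(), 1);     // the "immutable" alias observed a change
    }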

Additionally, the rest of Vale's future design prioritizes developer productivity more than Rust has, in my opinion:

- A coroutine-like mechanism instead of Rust's async/await function coloring, and structured concurrency which doesn't involve the Sync/Send "data coloring" problem.

- Vale's borrowing is done on the region level and on an opt-in basis, so users can decide to only use borrow checking where it makes sense.

- Linear typing ("Higher RAII") which allows for different static checks which aren't quite as infectious as aliasability-xor-mutability.

I love Rust, but it has some quirks that make it much more suited to low-level development than these higher-level use cases. I don't see how Rust can fix them, but there are some smart people working on it and I hope they figure out a way, because the world needs a fast and safe language that's easy to learn and focuses on productivity, even if it's not Vale ;)


I don't disagree that Rust's developer productivity story is still far from mature, especially wrt. async and its ecosystem (and the Rust design team agrees that this is a concern), but if you're going to argue for Vale as a "future design", we should similarly assess Rust's long-term potential. The language is improving quite fast and adding major new features (consider GATs and improvements in const evaluation) despite being quite stable and reliable in other respects. Very different from most "corporate"-focused languages.

(Don't get me wrong, I'm quite fine with also focusing on hobbyist, proof-of-concept languages for potential improvements to developer productivity. But then I'll probably want a Pony.)


That's a fair perspective. I don't have the same faith in Rust's dominance for high-level use, as these problems seem inherent to its kind of borrow checking, but it's possible they can compensate with improvements like the ones you mentioned.


I'm not sure that's likely. If anything, Rust gets more complicated with GADTs and such. While GADTs make sense, Rust's fundamental philosophy leans toward verbosity and low-level semantics, in my opinion.

Don't get me wrong, it encourages very memory-efficient code, but it's also tedious to write. Take RC, for example: you can already do efficient RC in Rust, it's just not as pleasant or easy as in Nim. Rust's Rc still requires a fair bit of detail about using and borrowing the memory, passing references to non-Rc functions, etc.

Also Nim could just compile to Rust as a backend. ;)


Good for you, but I don't quite get the reasoning. I wouldn't expect any guarantee that Google will fund work on an experimental open source project indefinitely, or to pay for work they no longer care about. But why not accept funding on a temporary basis? When they get bored with the experiment and funding dries up, or they want to go in a direction you don't like, the source is out there and you could keep going on your own.


I find myself only replying to people who disagree with me. That is sad.

I've added Vale to my blog post. It looks very interesting.

Congratulations on taking a stand for what you believe in.


First time I've seriously looked at Vale. Looks pretty great. One thing that I didn't see in the guide - probably belonging toward the end (maybe in "Patterns") - is some discussion of error handling.


I tend to agree with this type of stance against corporate ownership.

Simply because ownership is power. Power in the hands of individuals is harmless as it is mostly Brownian motion; on the other hand, power in the hands of huge corporations (or governments) can turn nasty very quickly.

This is the reason I stopped using VSCode even if the tool is very good.

Ownership is too important, with heavy long-term political consequences to be ignored or traded for short-term convenience.

Basically, anything running "in the cloud" or slowly converging to run there should be a red flag.

Without being paranoid or a luddite, you can do everything with mostly local-tech or at least diversified enough to avoid giving too much power to a single actor.


> This is the reason I stopped using VSCode even if the tool is very good.

So you’re not going to use the tools that you think are the best and most productive and instead depend on the kindness of strangers volunteering in their free time?

And before you cite “Linux”, look at who the top contributors are - all corporations.

Yes, and how deep down are you willing to go doing everything on your own, and does it give you a competitive advantage - i.e., "does it make the beer taste better"?


Richard Stallman once said he'd rather use free software that was slightly less good than use non-free software.

If you think this is insane or don't understand the viewpoint, I refer you to the fable of the wolf and the dog. https://fablesofaesop.com/the-dog-and-the-wolf.html


I would rather refer you to the reality of the starving developer.

I don’t know about you. But I can’t seem to kick my addiction to food and shelter.

The best method I know, based on my experience and talents, to adequately feed my addiction is exchanging my labor doing software development for money.


> I don’t know about you. But I can’t seem to kick my addiction to food and shelter.

Which is why it's all the more important for those of us who can afford to be principled to do so.


>before you cite “Linux”, look at who the top contributors are - all corporations.

That is true but no single corporation has control of Linux. Linux (the kernel) has managed to harness corporations to fund developers so they can work full time on it without giving said corporations "editorial control" over Linux.

Individual corporations that pay Linux developers have some influence over the type of things that are worked on (by paying people that specialise in some parts of the kernel over others) but they don't get to decide what actually goes into the mainline kernel.

Unfortunately the same isn't true of most languages where there is often a single corporation behind it.


I use Sublime Text, a paid tool, over VSCode, and I am glad to pay for it as I use it something like 8-10 hours every day. The licence is $80, can last for years, and the tool is very high quality.

The answer is not always open source, even if I think that the contribution of open source to the overall ecosystem is both positive and invaluable, as it clearly empowers individuals.

The dangerous trend is concentration of everything. We don't have to use every service from Google or Apple, etc.

I only use Google Maps from Google, nothing else. I use iOS from Apple, but I barely use my phone these days.

We need more diversity to have less corporate control, increase healthy competition and mitigate the political risks.


There have been great FLOSS dev tools for decades—it's one of the few areas with "an embarrassment of riches." Other areas so-so, but dev tools are top notch and ubiquitous.

Definitely one area we don't have to compromise principles.

It's also important how it's distributed; if it's in a main distribution, at least a pair of eyes or two have looked at the source.


The dev tools that are not developed or supported by a for-profit corporation are neither "top notch" nor "great".


Yeah, I don't want to use the Microsoft language (C#)

Certainly not the Oracle Language (Java)

I like the Google Language (Go) and it's BSD so maybe safe-ish?

I'm not in that ecosystem, but I'd use the Apple language (Swift) if I had to.

Maybe the JetBrains language (Kotlin) is okay?

I hear lots of buzz about the Mozilla language (Rust), maybe they have a good history of open source stewardship.

The Guido language is pretty friendly (Python) ;-)

But I'm pretty done with the Larry language (Perl)


Kotlin is not just okay, it's pretty great IMHO. Elegant and expressive, feels familiar. The only reasons I use it only when I need to are the JVM, and the fact that googling anything Kotlin-related brings up Android results. Just answering your question^^


Completely agree.

Outside of Android app development, AWS uses Kotlin for its backend systems: http://web.archive.org/web/20200706214913/https://talkingkot...


Rust isn't a Mozilla language. It started there, but it has its own foundation and isn't under Mozilla's stewardship.


> remember when programming languages were free?

This is actually the golden age of free programming languages.

Previously, if you wanted a high quality compiler, you had to either get it from the vendor of your operating system or license one for a lot of money. Now, you have access to multiple high-quality compilers.

In addition, even if you had a free compiler, you would have to pay for floppy disks or a CD-ROM. Now you can download it for free.

There is also a lot more information for learning about new programming languages thanks to the Internet.

Finally, you also have access to libraries/frameworks like LLVM that make it much easier to build your own compiled languages. For example, Rust probably would have taken much longer to get where it has, if it was not able to leverage the LLVM infrastructure early on for things such as optimizations and cross-platform compilation.

Now with regard to corporate control/backing of languages, if you look at computing history, you will see that most successful programming languages have been backed by some corporation (there are exceptions, though).

Fortran was backed by IBM. C and C++ were backed by AT&T and later Microsoft (especially for Windows). Pascal was backed by Borland. Java was backed by Sun. Rust was backed by Mozilla.

The question is not so much whether a corporation backs a language, but rather how it goes about building a community around the language. IMO, Mozilla did a very good job of building a very broad community around Rust, so it became more than just a Mozilla language.


Way before Borland existed, Pascal was created at ETH Zurich and enjoyed its early rush of popularity from free (minus media & shipping charges) distributions for CDC hardware from the University of Minnesota and for the P-machine on many platforms from UCSD.


UCSD Pascal was famous for being slow and crashing. Turbo Pascal was faster than any other compiled language available, and it kept getting faster.

Had Borland not gotten greedy, Delphi would still be widely used.


A compiler was one thing - a debugger that would let you step through the code was a serious expense.

All the tooling we take for granted nowadays was insanely expensive.


> remember when programming languages were free?

Cheap, maybe. Turbo Pascal was $49.99. Linux was at version 0.01, and very few people had access to hardware that could run GCC.

I thought it would be a rant against corporate-controlled languages because you can't rely on them long term. Instead it's that the IDE or package manager phones home.

Well so does apt or macports?

Edit: actually Turbo Pascal predates Linux 0.02 by 8 years.


There's a hell of a lot of fuzzy thinking in this article. Exactly what "ecosystem" tainted by Google's articles of incorporation pollutes the author's machine? I know it is the modern style, but I preferred discourse when words had meanings. "Corporation" is not a word that means the thing the author is obliquely suggesting. Their precious GCC is maintained and distributed by a Massachusetts corporation doing business under the fictitious name "Free Software Foundation, Inc."


I also miss the days when words actually meant something. Back when "spyware" actually meant "steals your data, without you knowing" not "collects useful info to help the developer".


I never liked these arguments... We can't live without some trust; it is simply impossible, the same way we can't avoid all risks in life. Putting our heads in the sand doesn't solve anything.

As for the concrete languages mentioned, Java is probably the safest bet out of the managed languages: not only does it have a proper specification (both the language and the JVM), it can be carried forward by any one of multiple companies single-handedly; it is that critical a piece of infrastructure. Also, even from an incentives point of view it doesn't make sense to put backdoors in or whatever, as they themselves use it very heavily, so each big company is in effect "checking" the others.


I feel this way about TypeScript.

That said, I see its value. We use it at my company.

TypeScript is open-source but created and (I think) pseudo-owned by Microsoft, which has had terrible ethics over the years, including the 3 E's [1]

[1]

""Embrace, extend, and extinguish" (EEE),[1] also known as "embrace, extend, and exterminate",[2] is a phrase that the U.S. Department of Justice found[3] that was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences in order to strongly disadvantage its competitors."

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


These Microsoft examples are 20 years old. The company changed leadership to a team that embraces open source years ago, and I think they've done a pretty good job demonstrating this embrace. They have adopted open source Java, they further open sourced .NET, they've embraced Linux containers in Azure and WSL on Windows, etc. It might be about time to reconsider this 'hate Microsoft' perspective. Full disclosure: I work at Microsoft, and these things I listed are a big part of why I moved there this past year.


"they've embraced Linux containers in Azure and WSL on windows"

I am open to the possibility that Microsoft changed, but this example is a classic "embrace, extend, and extinguish" tactic, by my understanding.

Linux is strong with developers and certain tech, but by incorporating Linux, Microsoft gives devs many of the good sides of Linux, but with all the nice proprietary Windows extensions. So they stay on Windows.

And that means fewer devs switching fully to Linux to struggle with drivers and co., meaning fewer solutions there, so even more devs stay on Windows and just use the Linux goodies.

The effect: extinguishing the remaining Linux users on the desktop.

But of course, they offset that very effectively by making me fight their system to not show me advertisements, track me, or update at a very inconvenient time, for example. Which is why I still love Linux: aside from bugs, it does exactly what I want, when I want. I am in control. With Windows I feel like I am renting something, where the contract and services can (and sometimes will) change at any moment.


> And that means fewer devs switching fully to Linux to struggle with drivers and co., meaning fewer solutions there, so even more devs stay on Windows and just use the Linux goodies.

Isn’t the fact that Linux is still more of a headache an argument for using a product from a company that has a profit motive to provide a good user experience?


Most of the hassle comes from picking a random computer full of parts whose OEMs explicitly and only support Windows, and playing "does this work with Linux?", wherein the answer ranges from yes, to yes with kernel version n+, to yes with an out-of-kernel driver, to yes with some manual configuration, to hell no.

If you dealt with an OEM that ships a computer with Linux, they would iron out these issues for you. If you choose to be your own OEM, you must do so. Most people complaining about Linux hardware support have decided that good support means working without issue on whatever they throw at it, including the laptop they bought for $200 seven years ago from Walmart, and that any difficulty in installing or operating it is an indication that volunteers haven't donated enough infinite free labor on the off chance that someone wants to install Linux on one of the 7 units of that model still in existence.

A more realistic expectation is that there is a good range of products supporting Linux, not that absolutely every machine be supported. Good support has been available for years, including devices that ship with Linux installed.


> Most people complaining about Linux hardware support have decided that good support means working without issue on whatever they throw at it, including the laptop they bought for $200 seven years ago from Walmart, and that any difficulty in installing or operating it is an indication that volunteers haven't donated enough infinite free labor

But you know who has thrown labor at getting that $200 laptop to work? Microsoft and Google (Chrome OS).

It’s amazing how many more people are willing to work for money than to work for free.


Which illustrates my point about being willing to pay for a Linux-specific OEM. Complainers almost always opted in to being OEMs and then complained about the work they opted in to. Linux isn't free Windows for every computer in the world.


If developers move from Linux to Windows + WSL as a desktop, in my experience, it's because Linux as a desktop doesn't offer a great experience for everyone. In some ways, the companies trying to sell Linux did a bad job of getting it done well.


Well yes, there are many reasons why Linux has problems and people go out of their way to avoid them, but my point was that Microsoft did not embrace Linux out of a newfound love of open source, but to eat its market share.

I mean, Linux was never significant on the desktop, but it had and still has significant market share among developers. In university I was basically taught how to use Linux and despise Windows. Microsoft obviously does not want that.

edit: but according to this chart, Linux is actually still gaining market share

https://gs.statcounter.com/os-market-share/desktop/worldwide...


Yeah, well, pretty insignificant market share. Having worked many years in this market - Linux, Linux desktops, etc. - I can tell you: Microsoft isn't looking at the Linux desktop market share, but at the Mac OSX market share. Windows + WSL is a real contender to Mac OSX as a development desktop.


Ok, that's a good point, that the real target is OSX.

Still, I don't think they are happy about SteamOS, for example. I mean, the absolute numbers are still very low, but if gamers start to see Linux as an alternative, Windows might have a problem. And there is still a significant portion of Linux-only developers; not all of them are FOSS fanatics - many are pragmatic but still don't like the walled garden of OSX.


They embraced it to fight Apple, and nowadays it makes more sense to be compatible with Linux than with pure POSIX; even the surviving UNIXes have some form of Linux compatibility layer.


Do you know any Linux users that have switched from the Linux desktop to Windows because of WSL? I think it's just leading to Windows devs embracing Linux.


I know it keeps me more on Windows when I do not have to switch to the Linux partition to do something in particular, and I see new devs not making the switch to Linux at all when they can get the job done on Windows.


No, but I know quite a few that used to buy Mac laptops and now buy Windows ones.

That is the target market, developers that want a POSIX CLI experience and don't care about GNU/Linux to start with.


> changed leadership to a team that embraces open source years ago

Before or after they were shaking down Android OEMs over FAT? Microsoft didn't change, they're just operating in a market where they can't get away with as much.


> Before or after they were shaking down Android OEMs over FAT?

Looks like that was 2010: https://en.wikipedia.org/wiki/File_Allocation_Table#Challeng...

Nadella started in 2014.


What about forcing Windows upgrades? What about endemic telemetry with no (or constantly shifting) off switches? What about dark patterns to all but force folks to create an online account? The first big tech co to jump in bed with the NSA?

https://news.ycombinator.com/item?id=31727293

The truth is the culture at MS hasn't changed much even as the world around it has.


None of that is about their relationship to open source.

There are still lots of reasons to dislike Microsoft, but their interaction with open source has dramatically improved in the last 10-20 years.


To clarify, my point is that we know where the priorities of MS lie... and, as demonstrated, we know from experience that the needs of MS come first. Extrapolating to their open source telemetry is then a no-brainer.


After. That was also closer to a decade ago now than not. (It was settled in October 2015. Microsoft released an Open Patent Agreement with Android manufacturers a few years after that and dropped licensing fees at that time.)


When Microsoft committed a whole host of criminal acts, including funding a criminal pump-and-dump scheme targeting competitors with fraudulent lawsuits, Satya Nadella had already been among MS leadership for years. If you were part of the MS leadership team in the 1990s and 2000s, you were at least tacitly OK with profiting from crime. The only difference between then and now is deciding that open source can profitably be used and can't realistically be crushed.

It's like the mob boss deciding there is money to be made working together so he stops trying to have you whacked. It's certainly a better position to be in but not one that engenders trust nor should it.


Microsoft is certainly embracing open source. It looks like Mono is still with us after the open sourcing of .NET, though it's now sponsored by Microsoft. They bought GitHub and kept it running. Are they still doing the "extend with useful features not found elsewhere" part? I don't follow their stack closely enough to know.


Visual Studio Code's Free version is crippled by, among other things, not being able to use certain language servers.


How about the part where certain plugins and extensions which work with Azure and some other Microsoft assets are not permitted in the forks of VSCode?


I'm not sure I believe this. IME, the developer experience of .NET on Linux and Mac platforms is definitely subpar compared to Windows. I've tried to get started on F# several times and have always run into bugs and incomplete/inadequate documentation.


F# will always be a niche language compared to C#. The VSCode extension for F# is bad, while the C# extension for VSCode is just as good as Visual Studio for C#. I'm not apologizing for the .NET team, but if you want to be productive with F# then you really should stick to Visual Studio on Windows.


F# has been working on much better language servers for VS Code (building on top of the good parts of C#'s work and the larger Roslyn compiler infrastructure ecosystem), but also F# is much more a "community" project than much of the rest of .NET and a lot of it is "at the pace of open source" rather than "at the pace of corporate initiatives", for both good and bad.


I exclusively do .NET on macOS and I consider the experience to be mostly superior.


Have you tried JetBrains Rider? I've found it pretty great on Mac and Linux.


TypeScript is the least concerning one. If Microsoft somehow does something so outrageous that you can't stomach using TypeScript anymore, just compile it to JS permanently and call it a day.


When I describe Haxe to people, the inevitable question is 'Why not TypeScript?'

This article pretty much expresses the answer I give.


Alternatively: "Stop complaining no one eats their vegetables and make a vegetable that people want to eat."


People don't know what they want to eat until aggressive marketing convinces them to have a taste.


Yeah? I guess Big Garlic Bread is out there somewhere twisting their mustaches and watching the dollars roll in.


I guess moralizing could be seen as an extremely ineffective form of marketing, if it's done at people rather than problems. That's an interesting angle, thank you.


Why do some software engineers work and think this way? There are a ton of things in society built on corporate-controlled and even proprietary languages and software tools, from buildings to bridges and more.

There are quite a few open-source projects and packages that I would love to be corporate controlled so that I could get some help and support that I would gladly pay for.


There is salt on my kitchen counter, but I don't put much thought into exactly how it got there. Some transactions over the internet ("go ahead, download our cool stuff") come with strings attached; some (very few) don't. The aggregate effect of for-profit entities embedding themselves across hardware they don't own but, in one fashion or another, start controlling, is harmful. On whose behalf are my router and my PC churning computations and transferring bytes, anyway? Compilers and computer languages should be like salt: now you have it, use it, no strings attached. I'm so glad that spices can't be licensed.


I was unpleasantly surprised when I installed Dart last weekend and received a warning about it reporting back to Google Analytics by default(?!), so I can definitely sympathize with the concern.


I would expect no less from the king of data mining. I'll use Google's developer tools, but the first thing on my mind when I install them is "Where's the opt-out for the analytics?"


I don’t see the issue with using corporate-backed languages. Java (the language, the core libraries, the compiler, etc.) is all GPLv2. Go is BSD licensed; C#/.NET is some mix of MIT and a few others.

So what’s the problem? There doesn’t appear to be any risk here.


There's some risk. A language and its tooling can be GPL or BSD-licensed, but if it's corporate-controlled, the following things can happen:

- The corp still doesn't need to take direction, nor input, nor patches from the community.

- The corp can steer the language and frameworks wherever they want. Sure, you may be able to then fork it, but is that fork going to gain any traction? Probably not.

And it definitely will not work on that corp's Official Operating System platform.


> The corp still doesn't need to take direction, nor input, nor patches from the community.

> The corp can steer the language and frameworks wherever they want. Sure, you may be able to then fork it, but is that fork going to gain any traction? Probably not.

These both apply to not-corporate projects as well.


But if it's a non-corporate project, and the lead/owner/BDFL decides to go in an evil (or just stupid) direction, and I fork it, my fork has a better chance of gaining some traction. It's not perfect, but it's somewhat better.

(Or so I suspect - I haven't ever been in that position.)


I would assume this applies to corporate projects as well.

Perfect example: there are 2 Plex alternatives I know of that sort of fit this mold. One is called Emby, which started as open source and turned into a closed-source commercial product. The other is Jellyfin, which is (from what I can tell) an open source fork of Emby from before it went closed. Both seem to have picked up some amount of traction.


Touché.


The languages may not make money for the corporation sponsoring them, but they can be used to implement corporate strategy for competitive advantages and you would never even know. The risk you're taking using a language like this is that the sponsor does something with it that is in their best interest instead of yours without your knowledge.

When I started in software development in the late 90's, avoiding proprietary software was a best practice. SQL - great, open, use freely. TSQL and PL/SQL - avoid because of MS and Oracle.


That was my first thought - Java has been corporate controlled since at least 1999 when I started using it and... so far none of the Bad Things that were predicted have happened to it.


Did you read the article? The author is concerned about spyware.


That just seems like a weak concern though. The code for the Java compiler is open source. The code for Maven is open source. So how is Nim safer from spyware?

If I’m concerned about spyware, then my language and compiler are low down on my list. My OS, my phone, and my network are far higher on the list.


This appears to be extreme aversion to risk. From a cost/benefit perspective, it doesn't make sense.


I would highly recommend Odin to anyone who has looked at Zig or Nim (as the author did here). Also not corporate controlled, but used in production at several corporations.

https://odin-lang.org/


My problem with Odin is the same as with Zig. I think that manual memory management is a bad choice for most applications and only really has a place in very low-level or performance-critical code.

To me, the most promising newer languages all attempt to make automatic memory management faster, more predictable, and more scalable. That’s what I think is so compelling about Rust. Koka is a promising upcoming language as well, and there are many others in this space.


Outside of Rc and Arc, which part of Rust memory management is automatic? The whole pitch for Rust is making manual memory management safer by using more sophisticated compile-time checks. It certainly doesn't make automatic memory management faster or more scalable. (Rc and Arc are both regressions from tracing GC in that regard.)


The whole ownership and borrowing system is a type of automatic memory management. It just happens at compile time instead of runtime like most automatic memory management systems.

Manual memory management is possible in Rust, but is not typically something developers need to interact with. There is typically no need to manually free objects when you are done with them. You just let the system destroy the objects automatically when they go out of scope.
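
A minimal sketch of what I mean - nothing in this program ever explicitly frees the string:

    fn main() {
        let s = String::from("hello"); // heap allocation happens here
        println!("{}", s);
    } // `s` goes out of scope; the compiler inserts the drop call, no free() in sight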


> There is typically no need to manually free objects when you are done with them. You just let the system destroy the objects automatically when they go out of scope.

By deciding where to scope the variable with the destructor/drop function, you've already made a manual decision about memory management. The compiler implicitly inserting a call to the destructor does not automate the decision of when/where to allocate or free memory - it's just syntax sugar over the decision you already made. This is just as true of Rust in 2023 as it was of C++ 40 years ago.
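
For instance, choosing to drop a value early is exactly such a manual decision (a small sketch):

    fn main() {
        let big = vec![0u8; 1_000_000]; // allocate a 1 MB buffer
        // ... use `big` ...
        drop(big); // a deliberate choice to free here rather than at the end of scope
        // ... more work, with the buffer already gone ...
    }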

With true automatic memory management like tracing GC or reference counting, you have no idea where or when the memory will be freed as you write the code, and the answer will usually be different over different invocations of the same code.

> The ownership and borrowing system is a type of automatic memory management

No, it isn't. The borrowing system is completely orthogonal to memory management. You can write a function that takes a borrow, do all sorts of things with that borrow, including forwarding it along to other functions further down the call chain, and the memory backing that borrow could be statically allocated, dynamically allocated with the default Rust allocator, or allocated by some custom solution like a slab or pool allocator. The code reads the same regardless of the memory management scheme, because you make the decision on how you will allocate (and eventually free) the memory before you ever create a borrow. The borrow checker can help keep you from making use-after-free errors, but it doesn't dictate when, where, or how memory is freed. That's still up to the programmer.
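
A quick sketch of what I mean - the borrow is agnostic to how the backing memory was allocated:

    // This function reads the same no matter how the memory behind `msg` was allocated.
    fn shout(msg: &str) -> String {
        msg.to_uppercase()
    }

    fn main() {
        let static_msg: &'static str = "hi";   // lives in static memory
        let heap_msg = String::from("hi");     // heap, via the default allocator
        let boxed_msg: Box<str> = "hi".into(); // a different heap strategy entirely
        shout(static_msg);
        shout(&heap_msg);
        shout(&boxed_msg);
    }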


Even in languages with a garbage collector you make decisions that impact the lifetime of an object. That is not what makes the memory management manual.

For instance, you can still have a memory leak in Java if you maintain a reference to an object that you never intend to use again. It’s still up to the programmer to prevent this.

> With true automatic memory management like tracing GC or reference counting, you have no idea where the or when the memory will be freed as you write the code, and the answer will usually be different over different invocations of the same code.

This just isn’t true, and it would be anarchy. You know that the object will be freed sometime after the last reference to it dies. This is the whole reason to use automatic memory management.

Reference counting approaches typically go even further and free the memory immediately when the last reference is eliminated.

And this is basically what Rust does with ownership. When the object goes out of scope and ownership is not transferred, the memory is freed. It’s just that it doesn’t always need the reference counting part. But you can rest assured that it will be freed.

With manual memory management, the programmer must manually free the object, and there is no automatic method to do this once there are no references. That is what makes it manual. The programmer might forget to free the memory or they might free memory too soon and try to use an object after it’s already been freed. This is not the case in Rust.
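
A small sketch of the reference counting case, using Rust's own Rc:

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(vec![1, 2, 3]);
        let b = Rc::clone(&a); // refcount: 2
        drop(a);               // refcount: 1 - nothing freed yet
        println!("{:?}", b);   // still alive through `b`
    }                          // refcount hits 0 here; the Vec is freed immediately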


> And this is basically what Rust does with ownership. When the object goes out of scope and ownership is not transferred, the memory is freed. It’s just that it doesn’t always need the reference counting part. But you can rest assured that it will be freed.

That's manual memory management. Like I said, the fact that you didn't explicitly write the call to free doesn’t make it automatic - that's just syntax sugar. Explicitly transferring ownership involves the programmer manually telling the compiler "keep this alive." If you're the one doing the bookkeeping, that's manual. Rust helps you with the bookkeeping, but it doesn't eliminate it altogether.

As opposed to a tracing GC, where you don't need to decide on the scope of the value itself (just the scope of the reference), leave lifetime annotations, use std::move or the like to explicitly transfer ownership, or write out destructor/drop procedures to tell the compiler exactly how to free memory. The runtime does all the bookkeeping on its own.

> The programmer might forget to free the memory or free memory too soon and use an object after it’s already been freed. This is not the case in Rust.

Yes, it's possible to run into pathological cases in any algorithm for automatic memory management where memory isn't freed - that's the nature of Turing completeness. No scheme for automatic memory management claims to be 100% foolproof. That doesn’t mean that tracing GC is actually manual memory management.

> This just isn’t true, and it would be anarchy. You know that the object will be freed sometime after the last reference to it dies. This is the whole reason to use automatic memory management.

That's a surprisingly literal interpretation of what I said. What I mean by "you have no idea" is that if you read a function that uses reference counting or tracing GC, you cannot say for sure whether there are going to be deallocations when that function is called, even for a pure function where you know the particular input values to that function. That's because the decision to deallocate depends on the whole program state, including references to the same value that may be held by some third-party library that you linked in. As opposed to Rust, where (unless you are using Rc or Arc) you could annotate that function with comments about where things will be deallocated, and if you understand the semantics of the language you will be right every time. Not because you are a genius who can divine the machinations of the compiler/optimizer running through algorithms to automatically insert frees, but because you actually made those decisions yourself, whether implicitly or explicitly, by leveraging the semantics of the language. Like I said, this is not a new concept. You could write C++ code without any instance of new, delete, malloc, or free all the way back in the 80s.
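To illustrate that claim with a sketch (mine, not the commenter's): in plain ownership code every deallocation site is decidable by reading the function, while with Rc it depends on counts you cannot see locally.

    use std::rc::Rc;

    // Every deallocation point here is knowable from this function alone.
    fn plain(data: Vec<u8>) {
        let copy = data.clone();
        println!("{} {}", data.len(), copy.len());
    } // `copy` and then `data` are freed here, on every single call

    // Here it is not: freeing depends on global state (the refcount).
    fn counted(data: Rc<Vec<u8>>) {
        println!("{}", data.len());
    } // the count is decremented here; the Vec is freed only if this
      // happened to be the last clone alive anywhere in the program

    fn main() {
        plain(vec![1, 2, 3]);
        counted(Rc::new(vec![4, 5, 6]));
    }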


This argument is pointless because we have different definitions of automatic.

Can we at least agree that Rust’s model prevents the following errors that manual memory management is prone to?

- Using an object that has already been freed.

- Failing to free an object that can no longer be accessed.

To me, preventing those problems is the main benefit of automatic memory management. C++ will happily let you make either one of these mistakes, and that’s how Rust is different.
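For what it's worth, here is a sketch of the first mistake written in Rust; it is rejected at compile time (error E0597), which is exactly the point:

    fn main() {
        let r;
        {
            let s = String::from("short-lived");
            r = &s; // error[E0597]: `s` does not live long enough
        }           // `s` is freed here...
        println!("{}", r); // ...so this use-after-free never compiles
    }

The second mistake is covered by the scope-based freeing discussed above: in safe Rust there is no free() call to forget.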

I agree that Rust imposes a cost in the code to accomplish this compared to garbage collection. It’s up to the individual programmer to decide whether they think it is worth the benefits.


> By deciding where to scope the variable with the destructor/drop function, you've already made a manual decision about memory management.

That's not really what's meant by manual memory management. MMM almost always means things like malloc/free, arenas, buffers with pointers, etc. I would not call stack allocation "manual", and IMO Rust's system is closer to stack allocation than it is to either manual or automatic memory management.

In terms of DX, bugs, and ergonomics, you only really have two schemes: ones which require some kind of free() call and thus permit use-after-free bugs (manual), and ones that make such bugs impossible (ignoring weakref and friends). Rust, having no free() call, is not manual. By the same token, "automatic memory management" is not an exact antonym of "manual".


Is it really used in production at several corporations, or is it more that several corporations happen to use a popular product that was built with Odin?

Besides the JangaFX people, who else is using this language?


What does Odin do better than Zig? To me the languages seem very similar except that Zig has its very heavy use of comptime.


Zig's comptime idea is definitely an edge to Zig. It's an elegant feature that solves a lot of problems. However, using it for parametric polymorphism is very clunky. Not that it's a bad way to think about PP, but right now the ergonomics are just not there yet.

Odin has a sophisticated PP system, which supports something like dependent types last I checked.

Odin's context system is excellent. It is similar to implicits in Scala, but Odin's is by far the best implementation of the idea that I have come across. If Go wanted to make context part of the language, they would do well to understand Odin's context.

Odin doesn't do OOP. Zig does. I don't know why a C replacement would try to.

As far as leadership, Andrew (Zig) is the better open-source leader. He's better at engaging with the community, attracting people to his cause etc. But as far as a language designer, GingerBill (Odin) really "gets" it. If you listen to him talk about language design, it's very clear he's given it much more consideration than pretty much anyone.


> Odin doesn't do OOP. Zig does. I don't know why a C replacement would try to.

What sort of OOP does Zig do, exactly?


The kind with objects that have methods. Look at the examples. They don't have a class hierarchy, but they have something very similar to Go where a data type (object) "owns" a set of procedures (methods).

https://ziglang.org/learn/samples/

`try int_queue.enqueue(25);`

Go has a pretty sane approach to OOP, with methods forming the basis for interfaces (one of Go's most powerful concepts) very nicely. But Go is not really a C replacement. I know Rob Pike said they "started with C", but realistically Go replaced Java in its niche. It's not a C replacement in the way that Zig and Odin are.


I don't really see this as OOP. It's just syntax sugar.


Odin looks super interesting. Thank you!


Install those languages in a VM if you're concerned about things they phone home about. Not sure the metrics being sent are the biggest problem unless you're super paranoid about intellectual property theft.


> Not sure if the metrics being sent is the biggest problem unless you're super paranoid about intellectual property theft.

Exactly, there seems to be a high level of paranoia with some developers about any software that "phones home".

I can see possible fears about stuff being sent in crash reports if you are working on highly sensitive software or are really worried about IP theft, but other than that I don't think telemetry in products like VS Code is sending the contents of your hard drive to Microsoft. They use that limited information to improve the tooling, which benefits everyone.

Maybe I'm just getting old and tired, but I leave it on and am not overly concerned with it. I used to be a lot more critical of things like this, but everything is so interconnected these days it's not worth the battle. Besides, I'm sure every megacorporation already knows my entire life story by this point with all the really invasive tracking that goes on on the web and on my phone.


I liked the article, while disagreeing with a few parts of it.

I agree with the main premise, having independence from large corporations, in the same way I like individual countries to be autonomous and not beholden to other countries (and one world order/government agencies like World Economic Forum can go to hell…)

I would add Python, Common Lisp, and several Scheme implementations to the list at the bottom of the article.

Nim looks like a very nice language but as a niche language it probably lacks broad classes of libraries that I would like to have available.


Yes, and instead build on languages and frameworks with no guiding principles or cohesive strategy, like the clusterfuck of the front-end ecosystem and Node.


"I had some problems building Zig from source, so moved on to Nim (which I also failed to build from source as it set my laptop on fire). I gave up at this stage and installed the pre-compiled binaries for Nim. So, just by luck I ended up choosing Nim for a deeper look."

GCC will build from source on NetBSD, even on low resource computers. It has been quite reliable for me over the years.


> C also came from a corporation but it came free with every unix install and soon after I started using it, Richard Stallman et al. gave us GCC, a free C compiler.

Until Sun decided to create user and developer SKUs for UNIX and everyone else in the UNIX space followed along. Only thereafter did GCC start to get really used; until then it was largely ignored.

This also ignores that WG14 participation isn't free beer, and that for GCC/clang to be compliant with ISO someone has to actually buy the standard documents; the free drafts and the final versions aren't 1:1 the same.

> GCC, in turn, allowed the development of new languages like Perl and python. No corporations in sight! We got a lot done with these languages. It's not like we are incapable of creating ecosystems without corporate "help". We have proved that, with countless projects in the past.

How Perl and Python came to be didn't have anything to do with GCC, and both were at one time or another sponsored by corporations.


As someone who uses committee/consortium developed languages, I would like to remind the author how insane such languages can be


I would say "Stop building on immature languages", which is exactly what Nim, Crystal, etc. are...


   Nim … relies heavily on exception handling as opposed to golang's explicit rejection of exceptions
If exceptions aren’t your cup of tea, look into using stew/results and questionable instead:

https://github.com/status-im/nim-stew/blob/master/stew/resul...

https://github.com/status-im/questionable#readme

Re: std/db_sqlite, you’re probably better off using sqlite3_abi:

https://github.com/arnetheduck/nim-sqlite3-abi#readme


I thought this was _just_ a rant, but apparently it was a rant (maybe a bit misguided) AND an argument for what's also my favorite programming language: Nim.

As long as you're fine with whitespace-based syntax, Nim will bend to anything at least as well as C; add on top of that compilation to JS (not WASM, just plain old JavaScript), and you _could_ develop everything with Nim.

Alas, I don't do that either. I still love Golang, and the extensive support and familiarity of JS on the web keeps it as my main language. Someday I'll need a program again that'd be as easy in Python as in Nim, and whose development wouldn't benefit from the effortless concurrency of Go.


The post fails to connect the dots to explain why I would want to "stop using corporate-controlled languages". The author mentions that they stopped using them because they "phone home about all kinds of things". But that implies that I dislike any and all "phoning home" (which in the vast majority of instances is simply anonymous statistics to help you, the user, find the most popular packages) enough to shun it. But I also don't think that the author's intention was to convince, and this is more of a "I'm better than the rest of you and here's why" piece.


> But that implies that I dislike any and all "phoning home" (which in the vast majority of instances is simply anonymous statistics to help you, the user, find the most popular packages) enough to shun it.

I totally agree with this. These "phone home" scenarios aren't rummaging through your file system or reporting back what websites you are visiting.

It almost seems like there is an underlying level of paranoia, or people are working on extremely sensitive stuff that they are concerned will be reported back or caught up in some poorly anonymized telemetry reports. I personally leave telemetry on and am not concerned with it. If it helps improve the tools I am using on a daily basis, go for it.

This is different than the analytics on the web which know everything from what food I like to what music I listen to and a lot more. We're talking performance metrics and stability issues, not what I may want to purchase today.


Ok, show me a "truly open" language that feels as nice to write, is as well balanced, and is similar technically to Swift and I'll consider it. As far as I know no such thing exists.

This is why there's pressure from within the Swift community to improve cross-platform support and the number of use cases Swift is viable for rather than rallying around some other more open language, and I'd assume the same is true for the communities surrounding any corporate-founded language. People flock to these languages because they're filling needs that other languages don't.


"GCC, in turn, allowed the development of new languages like Perl and python"

wut?


hard to implement languages in C without a C compiler


>Unfortunately this is no longer true as compilers and editors now have integrated package managers. We expect them to communicate on the network and we don't really know everything they might be communicating.

I really agree!

Elixir is my main workhorse language; I love it, and the Phoenix platform as well. I also really enjoy using Nim. It's _fast_ and looks great, easy to write too.

Nim is one killer web framework away from the main stage. It will capture a lot of attention once it has a batteries included web framework. Think Rails, not Sinatra.


"I refuse to install android-studio since I am sure it will be phoning home about all sorts of things."

I wonder what the OP uses to build Android apps. Perhaps simply Gradle?

BTW, I use Go occasionally for building API stuff at work, and Nim for personal projects. Since my main responsibility is building Android apps, I don't see myself leaving Android Studio, unfortunately.


> I'm focusing on compiled/statically-typed languages here so will be skipping over Common Lisp (a venerable language we should all seriously consider)

<3 CL is compiled and SBCL gets us many type warnings and errors, and Coalton can get you as much compile-time type checking as you wish (Haskell-like on top of CL).

https://github.com/coalton-lang/coalton/


postulate: languages that provide the most utility are more likely to be controlled by corporate interests, because corporations that depend on those languages have a stake in their future and will invest in it. this includes web standards, OpenGL, and basically anything with a governing board composed of multiple large entities.


Another alternative to Go is V. Differences between Go and V:

Syntax

https://github.com/vlang/v/wiki/V-for-Go-developers

Features

https://vlang.io/compare#go


the article seems to be mainly about Go, but IIRC Amazon now employs most of the Rust steering committee, so much so that there was an ex-contributor complaining about it a while ago.

Java used to be backed by Sun Microsystems, but now it's really open source. Nowadays (iirc) Oracle and Red Hat are the main contributors.


https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....

I disagree that Java is an open language. Experience says otherwise.



You can point me at that... and Google still got sued. As long as Oracle is involved as much as they are, Java is a dangerous language in a legal sense.


This is a good idea. Corporations' primary interests don't align with the needs of the public.


Genuine question: How did android turn out to be such a privacy nightmare while still being an open-source project?

"I am sure it will be phoning home about all sorts of things." is 100% true in case of Android.


Because (to my knowledge at least) there are no hardware vendors flashing unmodified Android builds into their phones.


The logic of this article.

1) Forget how members of hacker community earn a living.

2) Tell hacker community about your favorite programming language, tell them to use it and not CorpoLang™.

3) Forget how members of hacker community earn a living.


“C come from corporate.”

The author does not bother to understand the history of Bell Labs and why that organization was probably the truest to the spirit of "free", even though it was nominally part of AT&T.


>Python Syntax: Nim syntax is vaguely like python, indenting and colons for block structure

I am an ops automation person quarter-time, and I use Python for all my work.

I wish Python did not make whitespace significant.

I won't pretend I was going to learn Nim tomorrow, but knowing it is whitespace-aware takes my interest to zero.

---

Now on-topic, having RTFA, the complaint is "I don't want a corporation having analytics about my usage".

>Something I read recently and my own experiences with the golang package proxy reminded me how much I trust the golang tools on my machine, and yet how little I should trust them.

Why not? Why not trust them? What pictures of your vacation, what personal source code are they leaking?


Is there any evidence or context that I'm missing for his claim that the golang command-line tools are "Google spyware"?


I'm confused. When I go get ... is it "phoning home" to Google? I could wireshark it, but since we're all here...


It accesses the module proxy proxy.golang.org, which is run by Google. If you want to opt out of this module mirror, you can turn it off by setting GOPROXY=direct. (Note that the checksum database at sum.golang.org is a separate default with its own opt-out: GOSUMDB=off, or GOPRIVATE for specific module paths.)

The proxy has a clear privacy policy: https://proxy.golang.org/privacy It collects anonymized usage metrics like other package registries (rubygems.org, nuget.org, crates.io) do.


Okay, thanks. Seems harmless enough and has an opt out.


Go => Google => spyware is a bit of a stretch. Come back once you've grown into your big-person pants.


The only reason software didn't have telemetry before was that software engineering was in a less mature state. Being against software that has telemetry just shows you would like to use software built with substandard engineering processes.


This post is, as other commenters have alluded to, a case example of why you should upvote based on the article's content, not its title.

Great title, very poorly-articulated article.

tl;dr: It wants you to use Nim for $reasons.


I’m gonna build on what people pay me to.


Who controls JavaScript?



Also, even if phoning home is a concern: thanks to GDPR, Google has to say what they save. Even if they save more, they need to ask for permission and offer a way to delete the data if I don't want that.

Of course Google could ignore the law, but then they would have to pay massive fines.


Take off the tinfoil hat.


Come off it. This isn't 2001 anymore. Corpos dominate tech in every facet. Data harvesting has never been more egregious and rampant. This data can and has been used (read weaponized) in ways nobody could have predicted. It's important we cover our asses and wash our hands when we make decisions about long term projects. To dismiss these concerns and opinions as crazy is disingenuous or at worst borderline malicious.


Jeez, I hope this person didn't write this blog on a device made by a corporation! And I certainly hope it's not hosted by any corporation in any way!


Remember when programming languages were free?

When your compiler/editor didn't call back to some corporation every time you compiled code?

When our package managers weren't linked to data aggregators watching our every move?

When we used free tools to build free software.


I have to say that you must be pretty young. You don't seem to remember the days when you used to pay for compilers.


> Remember when programming languages were free?

Yes, now. I also remember when they were not free.

> When your compiler/editor didn't call back to some corporation every time you compiled code?

Yes, now. Just do not use VS Code.

> When our package managers weren't linked to data aggregators watching our every move?

Yes, now.

> When we used free tools to build free software.

Yes, now. Just do not use VS Code or GitHub.


Hmmm... back in the day there were IBM PL/I, Microsoft BASIC, Borland's Turbo Pascal, ...

Post 1990 or so the FSF came out with gcc, gcl, etc. Since then there are free languages like Python and PHP. However, many open source projects are "corporate dominated" for better or worse. LLVM would not be the quality framework it is if Apple hadn't invested in it. Linux got SMP scalability thanks to IBM. No Google, No V8, No node.js. A language like Nim might have no corporate sponsor now but if it catches on it may very well get one.


We have more options for languages and editors than ever before that directly compete against commercial options. And a decent amount of the time, they're fundamentally better and widely used commercially.

I certainly do remember the main options being: buy commercial software, download inferior freeware, or go through a huge effort setting up emacs to be a low-quality version of one of the first two.

Oh, and now it's ok/expected you don't have to use windows for everything, because that was the case for a while.

IMO, it's never been easier to use excellent software while avoiding corporate bullshit, so I'm glad we're not back then anymore.

Edit: FWIW I'm early 30's programming for about 15-20 years of those (arguably!)


When I was a kid, there were no free compilers. You had to pay for tools like Turbo Pascal. You either got lucky and found a decent free C compiler to download off a BBS, or you pirated one.


When the original 128KB Macintosh came out, I started programming on it in assembler because I couldn't afford any of the compilers. Being able to work at that level affected the trajectory for about the first half of my career, and I think for the better. Growing up in a world with free compilers (and complete-enough scripting languages too) seems very different.


> When your compiler/editor didn't call back to some corporation

If you use compilers/IDEs from megacorps when there are so many great alternatives, you only have yourself to blame.


When was this? Year ~2000 through ~2010?


If you're building for a proprietary OS, then yes the tools sometimes aren't free - e.g. iOS development. However we can 100% use free tools to build free software running on a free OS, with the simplest example being C code compiled by GCC running on Linux.


> When our package managers weren't linked to data aggregators watching our every move?

I haven't heard of this before (other than TFA's bare question-raising). Do you have an example?


Same here. Doesn't even sound like a problem, it's not like you have to make an identifying account to access it.

I'd imagine having the data lets you detect/fix issues faster and flag malicious packages much easier.


Go's toolchain, out of the box, fetches modules through Google's proxy, which aggregates and analyzes usage metrics whenever modules are downloaded.


Which package management system doesn't do this?


Off the top of my head: CPAN and Debian's, since both are distributed via mirrors not controlled by the main project. It is possible that some mirrors save statistics, but there is no aggregation of it. But yes, most package manager servers track download statistics, and as long as they throw away the IP addresses I do not see any harm.


True, I quickly checked

https://rubygems.org/

https://www.nuget.org/packages

https://crates.io/

... they all show download statistics of their packages.


Which also serves a useful purpose for devs: it's an easy way to avoid typosquatting, by making sure you're not looking at a package with a similar name to the one you want but with only 2,000 downloads instead of 2,000,000.


As long as they only store aggregates I cannot say that I see that issue.


Didn't read the article, but based on the headline I agree.

Currently working for <small but influential company A> doing real-time surveillance on an enormous amount of data, using a proprietary language owned by <a competitor, company B>.

Technically, there's no conflict of interest because we, as a regulatory body, are a separate business entity from company A - but honestly, what interest does company B have in addressing our concerns with their software? They aren't incentivized to fuck us over, but they have no incentive to help us out either.

We are running into so many problems. And I know for sure that we could, with about a million dollars' worth of billable hours in total, create a product vastly superior to company B's product. But progress marches on and we're already bought in. So we just deal with the problems.



