Because in 2021 developer tools are fundamentally not profitable. JetBrains is the exception, not the norm. Developer tools are loss leaders for large corporations to acquire developer goodwill and mindshare. Previously we sold developer tools/PaaS. Programmers do not like to pay for their tooling. You have to sell to management, and when your target is management, the metrics are completely different from what developers want.
This is why no-code and low-code are so much more successful than developer tooling startups in terms of revenue and profitability. The people using them are the same people who are running the business. The value proposition is clear. "Better developer experience" alone is not sufficient for selling to the enterprise. And programmer productivity cannot be quantified in lines of code written. This is a hard point to explain and get across on HN because this forum is inherently dismissive of the value of RPA and RAD software.
Over-monetizing their dev tooling was a significant contributor to Microsoft's loss of dev mind-share over the last decade. Free software took over the world because any kid in their bedroom could download Linux/GCC/MySQL for free.
Want to work in .NET/MSVC? You just ran into barriers (gimped "express" versions, no free documentation, etc.). Yes, this has changed now, but it's been a long time coming.
"Kids in their bedrooms" had cracked versions of MSVC.
Even at my school, I don't remember a single student paying for any software. Teachers were obviously aware of that, and while they didn't openly encourage piracy, when they demonstrated something using high-priced software, "go ahead and get the cracked version if you don't already have it" was almost implied.
We also worked with Linux and free software a lot, that's just to say that price never was an issue, because we didn't pay.
It is debatable, but I think publishers were happy with that situation: students wouldn't pay in any case, so they might as well learn on tools that their future employers will have to pay for.
Things have changed. Most notably, the primary way of monetization is online services now. Software is often given away for free and piracy is either pointless (because free software) or much harder (because a large part of it is online).
As for charging for dev tooling, yes, it is hard, because charging for software is hard. That is, unless you have an online service component. That's a reason why "collaborative" tools are so popular now.
At my school, about 10 years ago, any student who was either a CS major or taking any CS class got free access to full versions of pretty much every MS product except enterprise versions. That’s full versions of Windows, .Net, SQL Server, Office, you name it. There was no need for anyone to pirate anything.
You have to jump through a bunch of hoops, if you even know what's there. You have to make sure the right thing is registered in the right way, and if something's broken in the link between your university's login setup and MS's setup, then good luck getting someone to fix it as a student. It's actually a huge difference in practice from what you get with open source (or piracy): download the thing and run it, that's it.
It's a pointless argument. You had access to all the software with a regular student license; it was obviously never meant for any production workload and never meant to be covered by any kind of support.
Since piracy was/is the norm, you might as well provide a free version.
Adobe CC apps used to be so easy to pirate, but recently Adobe has been going hard on anti-piracy detections (I imagine it is indeed getting abused by some businesses somewhere). I was under the assumption that a majority of their mindshare was from high school kids learning on cracked adobe then asking for licensed creative cloud apps at work.
This times 100. The open source alternatives are ridiculously good. Figma and Blender are the ones I use and we've made some extensions for specific jobs painlessly.
The free stack is even catching up where paid software used to excel - modern, usable UI. We're happy to drop a few hundred on donations.
I would never pay for dev tooling as a SaaS because it would put me completely at the mercy of the provider.
I’d be happy to, and do, pay for some dev tools that I get to own for life. The Jetbrains model is really a great match I think for both the company and the customer.
For me it was my dad's friend who worked as an enterprise Microsoft dev, and would give me his old MSDN CDs when he was finished using them. So I got Windows NT, Visual Studio, and complete API documentation -- even Windows 95 when it came out.
Not sure what you mean by “no free documentation” but even when I worked there we just used MSDN’s free online website like everyone else. VS did cost $$ but not sure it was overpriced, I’m still less productive on Linux than I was with the real VS (code is nice but not the same). I’d pay for a Linux version in a heartbeat.
The real reason Linux took over is Windows just didn’t make a good server. Even if you looked past the bloated footprint, licensing was a nightmare just to figure out what you needed to buy let alone the licensing costs themselves.
Even good dev tools can’t make up for a bad platform.
Interesting. Is that a reason behind it? I can see why the 2000s might seem like "last decade" (it might still feel like the 2010s), but the 90s are another decade behind...
It's probably not a joke. Many people report things like this when talking about time. Our brains are just not that great at accurately thinking about long periods of time.
Also, at that time you could buy 'the Win32 bible' for about 60 dollars, which had pretty much every call you wanted in print form. Another 50 bucks got you the CD with the docs. Only when you went to the 'I want MS in a box' MSDN did you pay more. MS dominated that market in the 90s because they had tooling that was wildly cheaper than most of their competitors on other machines. Sun/IBM/Apple easily priced their docs into the 20k+ market. I bought many of these docs for these different archs at the time; MS was by far the cheapest of them. Borland and Watcom had two different setups, with and without docs, and you paid accordingly (usually 100-150). Also, once the internet came around, MS put its docs up on the web pretty quickly - they were about equivalent to the CDs. I would say they did it around 96/97, and I have not paid for MS docs since. I paid a few times for MSDN when I needed a lab of machines, and CALs were dumb expensive.
There is an incredible UserVoice thread that's been going for years asking for a Linux version of Visual Studio. It's such a microcosm of Microsoft problems.
- Naming confusion between three completely different products: Visual Studio, Visual Studio for Mac, and Visual Studio Code.
- Complete failure to comprehend how old VS's codebase is. "Dotnet Core runs on Linux, right?"
- Most people use about 10% of the functionality it offers. Unfortunately, no two users use the same 10% of its features.
I think the cost of the tooling did matter but I also agree about licensing costs for their server products.
The first time I got to work with Linux for money (as opposed to for fun at home) was on a project where the MS licensing would cost a lot more than the hardware it would run on.
They let me do it with Redhat, PHP and PostgreSQL. They bought the box from Dell and that was it. No licensing to track or pay. The freedom was just amazing and made business sense.
You can now deploy .NET Core apps on Linux. Even SQL Server can run on Linux now. Plus, if you want to, you can debug locally on Linux from your Windows machine using WSL (as simple as changing a drop-down to WSL, and it'll install everything required on your WSL instance).
On the negative side, the ASP.NET Core team is obsessed with dependency injection and async, which results in tons of shitty boilerplate. So you either strike off the beaten path and get nice, clean, terse code but constantly fight the tooling, or accept their dogmatic style and end up with ridiculously bloated code.
I haven't worked with ASP.NET Core for a while now, but what exactly do you refer to with "ridiculously bloated code"?
async-await doesn't add much extra code, aside from the occasional await keyword?
Yes, it's mostly the DI that does that; the async/await just gives you ridiculous call stacks and untraceable errors, usually for zero performance gain.
We should enter a world where people pay for open source. OSS has been fine, but the projects never had the right resources for marketing and UX. There should be a paywall for access to repos, even if you could theoretically find the code everywhere else. But hey, want the code updated automatically? $5 per month. (I'm paying 1% of revenue for OSS, but as long as it's not everyone doing it, it doesn't make the developers rich enough to hire UX designers and marketers, and that is what we need.)
> but as long as it’s not everyone, it doesn’t make the developers filthy rich
You asked for suggestions, it is right there. Ask people to pay for access to the repos, or let them download for free from alternate sources. It will make big businesses pay because they require a chain of custody, while the amateur can still download for free, and it conserves all other advantages of OSS.
I do have a feeling that your comment was adversarial. I constantly regret being generous when people talk to me like a thief anyway, just because I give, but not enough (how much is enough for people like you?). Also, you are possibly trying to solve the problem of freeloaders with adversarial comments, not seeing that I'm trying to solve your same issue of freeloaders by finding an incentive which both preserves all the advantages of OSS and still collects money at scale from larger businesses.
* The customers making serious money off improved developer tools - the kind of customers who'd pay $10,000 per seat for a great developer tool - are big businesses.
* Big businesses basically won't pay for things they can get for free. Oh, they might occasionally sponsor a conference for PR purposes - or even pay for developers implementing features they want - but nobody's paying $10,000 a seat for Eclipse out of the goodness of their heart.
* You might think I'm saying "Well then, closed source tools all the way!" but the tools in other engineering sectors that do manage to extract that much money from companies (SolidWorks, Altium...) have a bunch of problems as well - mostly around user lock-in efforts blocking anyone from making compatible tooling.
How about FOSS having a commercial licence and a body that collects payments from enterprises and pays out the open source devs? Basically a more focused and more opinionated/vocal Patreon.
> Over-monetizing their dev tooling was a significant contributor to Microsoft's loss of dev mind-share over the last decade.
I disagree. I think the biggest contributor was the loss of Windows as the platform to reach paying customers.
All throughout the 90’s people paid massive amounts of money for Visual Studio and MSDN subscriptions. There was also a huge ecosystem for such things as VBX or OCX controls to simplify development.
Developers easily paid the price for Visual Studio because that was the way to reach paying customers. In addition, Microsoft was very aggressive in giving free copies of Visual Studio to students. My university had a program where you could get access to all of Microsoft’s Operating Systems and Development tools for free through your .edu email account.
Then the web and mobile came, and Windows as the place to reach paying customers faded away.
I think you nailed it. The web & phones took over, and the world underneath changed. Servers and mobile tended to go UNIX-based for obvious reasons, so it's not hard to see why people moved toward developing on UNIX-based platforms. Had those revolutions not happened and if we were still on desktops, we might very well still have been dealing with expensive tooling licenses for Windows (and expensive Windows licenses too).
There's clearly a bit of a chicken and egg problem here. I would say that the web revolution gained so much momentum because so many developers could easily get the best tools to learn and experiment with for free with zero barriers.
It's possible to imagine a world where few ever thought to push HTML/CSS/JS to their breaking point to get desktop-like functionality from a website, because developing for the Windows desktop was free and a nice experience.
> I would say that the web revolution gained so much momentum because so many developers could easily get the best tools to learn and experiment with for free with zero barriers.
IMO (based on my limited view; I haven't studied the history) it happened because the market for smart mobile devices enlarged (one could argue Apple basically created the future & the market with iPhones, which Blackberry didn't manage to), Firebug and then Chrome made it easier to develop for the web, and developers found that these tools let them make cross-platform apps from a single codebase, with a much larger audience to go with it too.
Which is why I didn't see it as being an issue of price-point: I don't recall any better alternatives to it, whatever the price. Even if you were willing to shell out $1000+ for software tools, what alternative tools could you have bought to make it nearly as easy to make apps that ran on practically every major platform out there? Your only practical option for language was JavaScript (or if you were really desperate, Java, or I guess Flash), and Firebug and Chrome had (and many say Chrome still has) the best dev tools for JS. I don't see how being more willing to spend money on dev tools would've changed this on the client side.
On the server side, Windows had more to offer (and perhaps ASP.NET was a decent alternative), and prices probably mattered a ton there. But that's not primarily about dev tools, that's about the Linux being free and Windows Server very much not.
Yes. Maybe if MS had strategically invested in excellent, free tools, Windows Phone would have had the best apps and won the most customers.
"Developers, developers, developers" was right direction, insufficient magnitude.
Note that Android chose Java, not for its performance on constrained devices, but for... developers.
I really can't tell how plausible it is that Windows Phone could have won through a better long-term strategy of free dev tools.
As someone who did cross platform development for iPhone, Android and Windows Phone way back when, Windows Phone did actually have the superior dev experience by far (talking about WinPhone 7+ here). It wasn't free, but neither was iPhone development.
They didn't have any market share, though, so there wasn't much money to be made making apps for them. I suspect they failed because they launched 2-3 years after Android and the iPhone, so the other platforms had accumulated the network effects of an existing user base and app ecosystem that Microsoft couldn't catch up to. And they tried hard; IIRC, Microsoft offered to build a Snapchat client for Snap Inc. and to pay them to be allowed to do so, but were denied.
No, Windows Phone became irrelevant because there's no space in the market. Indeed, there's barely space for more than one...
Apple managed to claw out its position through pure first-mover advantage and its choice to build on top of a proprietary API. Had they gone with some HTML/JS morass, it would have been easily bogged down and ground to dust on the interoperability battlefield.
Microsoft could have wrestled Android out of Google's hands, though... I mean, why not spin their own Microsoft-oriented build around AOSP?
> and its choice to build on top of a proprietary API. Had they gone with some HTML/JS morass, it would have been easily bogged down and ground to dust on the interoperability battlefield.
Actually, initially Apple expected people to deploy apps as webapps and provide links on the home screen. There was no iOS SDK. Only after a lot of loud complaining by devs did Apple release any tools or an SDK for native iOS apps.
I think it was Joel of StackOverflow who wrote that that was not the case at all. His statement was that MS would have loved to give away Visual Studio, however they wanted to make sure there was a viable 3rd party market for developer tools. They didn't care what you developed with just as long as you were targeting Windows. Visual Studio pricing was designed to create a price floor that any third party dev tool company like Borland could rely on.
MS dev tools were freely accessible in the same nudge-nudge-wink-wink way that Adobe tools were. I don't think I ever attended a LAN party where people weren't trading cracked versions of the best MS had to offer. And MS never lifted a finger to stop this process. Hobbyists and students had access to everything MS had to offer and often wanted to keep using them in a professional setting.
Java was in the right place at the right time, with a hype cycle that aligned with the availability of the internet and search engines.
If you were graduating and deciding between MFC and Java, you just had to look at the price of the bookshelf of books required to get anywhere with the former to have some second thoughts.
Also, I remember trying to download their software onto my modest home PC as a teen, and just the space it took up on my drive and how long it took to download everything scared me away. I wanted space left over for games.
Similarly, I see people leaving JetBrains every day. VS Code is getting better and better, and JetBrains tools are becoming more and more niche. While JetBrains may always be ahead, the gap is narrowing.
I doubt it - VS code is nowhere near a proper IDE like VS or IDEA - the only place where it's comparable is JS/TS because of how close those two teams are.
It does work when you need something lightweight, but it often craps out on large projects; ironically, it slows down way more than the IDEs. For example, I'm writing a Flutter/Dart project ATM, and the VS Code plugin becomes unusable after a while of working on the project, even on a relatively small ~30kloc project. IDEA works just fine and has much better tools.
"VS code is nowhere near a proper IDE like VS or IDEA"
This statement kind of misses the point: it's not that 'VS Code is an IDE', rather it's that the value of an IDE may not be as much as what some think it is.
I would have reached the same conclusion a decade ago, but with sufficient plugins and configuration... I have come to prefer VS Code, along with many others, and have little reason to go back to using an IDE.
That's not to deny the IDE use case - for specific kinds of projects it's fine, but there's no doubt VS Code, a non-IDE, is picking up in a ton of areas where traditionally an IDE would have been the first choice.
Even for basic stuff like symbol rename in a project it's behind good IDEs - search and replace is tedious when a tool can make it more specific without effort.
And like I said, it often craps out even on autocomplete with larger code bases for me (I've had this happen in Dart and C#).
It's the perfect IDE/editor for JS/Typescript - the rest is subpar.
> For example I'm writing a Flutter/Dart project ATM - the VS code plugin becomes unusable after a while of working on the project even on a relatively small ~30kloc project.
I use VSCode on a project that's about 3x that right now (not including things like node modules or config) and it's never failed.
Are you sure the problem lies with VSCode rather than the Flutter/Dart plugin?
Oh it's absolutely the dart/flutter plugin. Just as it was the C# plugin when I attempted to use it as a VS replacement, and it was down to Rust plugin when I tried that.
So in the end the only decent experience I had with it was Typescript and JS.
Compare that to IntelliJ or Visual Studio. If I was mostly doing web frontend or Node, I would 100% use VSCode. Otherwise, I just find it too unreliable and I don't want to waste productivity energy on it. (I wish it were on IntelliJ's level; with VSCode's remoting possibilities, I would pay more for that than I pay for IntelliJ.)
I doubt it - it feels like it's more of an issue with language server implementations, Typescript is excellent even on large projects (better than IntelliJ and VS IMO)
Indeed, I just moved from VSCode to PyCharm a few weeks ago, as VSCode's error checker/syntax highlighter is vastly inferior. Looking to do the same for JS at some point, but for now VSCode is still the best I've found for it.
Every time I change developer tools, it's because the one I've been using has become bloated with add-ons as part of the install. When the installer has options for choosing packages, they aren't granular enough to be worthwhile. So the IDE becomes a hindrance, using memory, CPU, and screen real estate intrusively. JetBrains is there now, but it still has better language support for my needs than anything else more lightweight. I use VS Code for everything unless I absolutely need to fire up JetBrains.
> So the IDE becomes a hindrance, using memory, CPU, and screen real estate intrusively.
The screen space used is highly configurable, but if it wasn't, I'd agree. I don't care about memory or CPU at all.
Maybe it's different for other languages, I mostly do PHP + JS/HTML/CSS, but PHPStorm vs vscode (+ plugins for both) is not even close, and PHPStorm makes me so much more productive and takes so much annoyance out of my day that investing in a beefier machine seems like a small price to pay. I've easily made that money back once a month just because I get more done.
I like vscode, but I pretty much exclusively use it as a text editor and notepad.
PhpStorm gives you so much that it feels like too much, because it takes forever to load.
Developing with PHP should feel fast, and I feel like PhpStorm slows me down. I like Sublime; it opens the last project in 0.5 seconds. For me, ctrl-p and start typing is usually quicker than scrolling down a tree structure.
I want to use PhpStorm daily again, but it gives me Zend Eclipse flashbacks. I wish they offered a light version.
You might be having a different usage pattern than me. I typically work on only a few projects in a day, and I have those open in parallel and switch between windows.
It's a shared process, so they're not each taking up resources (I wish that was changeable, since they've had and continue to have some annoying bugs around it).
And VS code with C++ is downright horrid, almost nothing works properly using the official C++ extensions. Yes it autocompletes and it sometimes manages to find the right files when you switch header/source, but that's about it.
I still use it at work, though, because we don't have CLion there and that code base does not use CMake - but only because it is just slightly better than plain Vim with some plugins. Compared to CLion + CMake, it's just one small step beyond a glorified text editor.
> Programmers do not like to pay for their tooling.
Because it is too much work to convince the company that spending $500 on JRebel would stop me from going on Hacker News for 5 (which turns into 15) minutes while the thing compiles (my last company). I also have no real stake in whether the product ships in one month or two, so I am not paying for it myself.
To pay for tooling, productivity needs to be a priority. I have never worked anywhere where productivity was discussed.
Ultimately, the tools I try out are the ones that don't make me involve other people in it, whether to inform or to get approval. This is extremely important at the beginning - particularly when I don't know the tool beforehand, so testing it is a bet.
Thus, extrapolating from my experience, I'd consider the main problem with paid tooling is that it usually requires getting other people in the loop. Even if it's sold in a way where you could use your personal paid license at work, that fact is very often unclear from the license text - unclear enough that you probably don't want the risk of procurement/legal disagreeing with your assessment.
(If we're talking SaaS/anything with on-line components, there's no way I'm touching this on my own - I have enough headache with exports control around remote work, I'm not going to risk anything I do be considered technology transfer.)
I feel I'm not atypical in this - I suspect developers in general have the opposite relationship with software licensing to that of their employer. For a company, OSS is random and unpredictable, while commercial licensing is safe, because there's a contract and a way to sue someone. For a developer, OSS is easy to understand, zero risk, no need to involve other people - while commercial licensing is completely arbitrary, every piece of software has a different license, and thus it's strongly preferred to involve management/procurement/legal, because you don't want the liability on your shoulders.
I cannot possibly agree more with that last paragraph, you’ve summarised that so well.
A discussion I was party to recently was debating whether to use an open source product or one of several competing commercial products for a new development. The chief complaint raised about the open source product by the team members on the business side was "oh, it's open source, that means there'll be no support, right?". Comparatively, my teammate and I had the reverse reaction: we know for a fact the open source alternative is at least 90% as good as the alternatives, if not better. And we don't have to go through the rigmarole of purchasing, approvals, waiting for someone to sign contracts, and finance sorting things out; finding out you need another key because another person will be working on it as well; discovering the support is invariably shitty; and then being stuck with it when it turns out to be poor software and the business side won't budge - because budging means doing everything again (which will take weeks at best), so they'll inevitably just re-sign the contract for another year.
Some companies will care. Some won't. I mentioned something like this to mine and they upgraded the build server in response.
For Jetbrains products, if you enjoy them you can use your personal license commercially. Your company just can't pay for it or reimburse you for it. This is the route I go because I use their products for personal projects too. For me it's a no brainer at $150/year for their all-products option (...for the 3rd year, 1st is $250, 2nd is $200).
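To make the "no brainer" arithmetic explicit, here is a quick sketch using the prices quoted above (the prices are the commenter's figures; the per-day breakdown is my own back-of-the-envelope math):

```python
# JetBrains "All Products" personal subscription prices quoted above:
# $250 the first year, $200 the second, $150 from the third year on.
prices = [250, 200, 150]

# Per-day cost once the continuity discount bottoms out.
steady_state = prices[-1] / 365
print(f"${steady_state:.2f}/day")  # → $0.41/day, i.e. "less than fifty cents a day"

# Average daily cost over the first three years.
three_year_avg = sum(prices) / (3 * 365)
print(f"${three_year_avg:.2f}/day averaged over 3 years")  # → $0.55/day
```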
I'm toying with the idea of selling developer tools only as personal licenses that can be used commercially. Like a driver's "license": companies must hire "licensed" developers to drive the software. (Also licensed plumbers, electricians, accountants, surgeons, etc.; though, true, all of those are skill/knowledge credentials, not simply purchasable, so include an exam.)
Companies need to hire and pay more for "licensed" developers.
It gives power to developers - and then I don't have to sell to managers or deal with uninterested clerks.
> It gives power to developers
It gives power to senior, rich developers, at the cost of newcomers. It's a toxic zero-sum play, which contributes to, e.g., shitty 10x overpriced health-care.
Same here. It costs me less than fifty cents a day to be able to use all the JetBrains IDEs and developer tools and get all their updates, both for my personal projects and for work.
I can only speak for myself, but to me, "no brainer" is an understatement.
Don't forget there are a couple hundred countries besides the US. I make a relatively decent living by our standards, but nowhere near $100k/year. $150-250 per year is quite a bit of money for us.
It should be possible for a small devtools company to price differently based on locale, but I expect the support costs would then dominate the revenue from the lower-priced customers.
I think the core point missed in this thread is: developers write software for a living. They do not want to pay for something they would love someone to pay them to write.
You see this in other professions. A car mechanic doesn't want to take their car to someone else. A doctor tries to avoid general checkups with other doctors. A realtor will sell their own house. A grass cutter doesn't hire someone to cut his lawn. The person who makes lawnmowers doesn't buy one; he makes one, even if it takes longer.
I would love for someone to pay me to write an IDE. I've been in that situation a couple of times in the past (SQLWindows, Visual Basic) and it was a lot of fun.
However, my current job is developing a voice response system for restaurant drive-thrus. I don't have time to write my own IDE right now!
So I farm that out to JetBrains and get to use all of their awesome work for less than fifty cents a day.
If that's too much, their free versions are very good too.
I may also take exception to this:
> A doctor tries to avoid general checkups with other doctors.
Wouldn't the opposite be true? As far as I know, every psychiatrist has a psychiatrist, every counselor has a counselor (or should), and I would guess that every doctor has a doctor.
A doctor is more likely than the rest of us to have particular insight into their own health, but I don't think they try to do it all alone.
Of course I'm only speaking for myself. If anyone prefers to write all their own tools, more power to them!
> Then there is this saying that if a lawyer represents himself in a court, he has a fool for a customer
Also making the reverse assertion true too: that client has a fool for a lawyer :-)
Of course, many unsuspecting non-lawyer clients also have fools for lawyers; it's hard to tell whether or not your lawyer is any good (unless his name is Saul Goodman)
$250 (the initial all-you-can-gobble) is a day's wages or less for anybody making more than $62500 (post-tax) per annum, which at least in the US is not a lot at all.
That $250 isn't (generally) tax-deductible. At the moment, I'm making ~€80k as a SWE in Belgium (reasonably good pay but not the highest), and my net income is closer to €150/day. At the top of my career, I expect that I could increase that to at most €200/day because taxation is brutal.
This is not to say that I don't pay for tools; I subscribe to the full Jetbrains set among other things. It's just that that wasn't as simple of a decision as you present it.
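For a rough sense of how gross salary turns into a net day rate, here is a hypothetical sketch; the effective tax rate and working-day count are my own illustrative assumptions, not the commenter's actual figures:

```python
# Hypothetical back-of-the-envelope: €80k gross in a high-tax country.
gross = 80_000
effective_tax_rate = 0.55   # assumed all-in rate (income tax + social contributions)
working_days = 230          # assumed working days per year

net_per_day = gross * (1 - effective_tax_rate) / working_days
print(f"~€{net_per_day:.0f} net per working day")  # → ~€157 under these assumptions

# A $250 subscription (roughly €230) paid from net income is then about
# 1.5 working days' net pay, rather than "a day's wages or less".
tool_cost_eur = 230
print(f"{tool_cost_eur / net_per_day:.1f} working days")  # → 1.5 working days
```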
Is it not in Belgium? In Germany you can count this as 'Werbungskosten' which you can deduct from your income. Anything you use for your job or for professional development counts as this type of expense. Don't other countries have similar concepts in their tax systems?
On my team of three, my teammate and I had to create a presentation for our project leader that he could share with our director making a case for buying ReSharper licenses for the two of us. Then of course our director declined to spend the money. It was tedious and infuriating.
I've spent many, many hundreds of dollars for development software out of my own pocket, because, just like with high-quality hand tools, good software tools pay for themselves over time. Even given what I make in the manufacturing world, these expenses don't account for a lot of my take-home pay, and I make half (or less!) of what hot-shots in tech make. I'm not worried about being reviewed on my productivity, I just want to get the work done faster and better for myself.
I think you make an interesting point. The employer maybe thinks like this: if those dev tools are so great, then let the programmers pay for them. If it increases productivity like they say, and the price is small compared to their salary, wouldn't they want to pay for it themselves?
One alternative approach I could think of is that the employer gives each programmer a dev-tools budget per year. That would encourage programmers to experiment with better tools and find the best ones.
Depends on the company and the manager. If someone on my team wants something at or under $500 then no problem unless they have a habit of buying tools they never use. It's my job to make sure my boss doesn't raise a fuss about it and not the developers.
I don't think you have this right. The problem is not paying for developer tools, the problem is investing your time in proprietary solutions that might go away any time. For a long and resilient career developers have to stick to tools and technologies that are not dependent on ephemeral corporate support.
I'm fine to pay for tools (and I do). But I hate the idea of becoming dependent on proprietary tools. Imagine leaving your job and going to the next and because they don't pay for a tool you've become critically dependent on half your skills are useless.
And it's not just for myself; I think it's harmful that it creates barriers within teams and organisations. All the investment in infrastructure and knowledge connected to the tooling can only be shared with the people licensed to use it. If your team processes depend on it, then nobody outside the team can even properly work on the software.
So we end up in a catch-22 where I will say we can pay for software as long as it is still perfectly practical to develop our code without it. But if you extrapolate from that, it means nothing we ever pay for can have a very high value proposition, and ergo we can't justify paying for it.
That is my feeling as well; I never wanted to become dependent on tools. Right now I use WebStorm and I'm happy paying for it. But I don't really have a dependency on it; I could easily switch to VSCode, but I think I am happier with WebStorm.
The issue is that it took me a long time to learn how to use WebStorm effectively. That is now a good reason for me to stay with it.
That's why I pay for my individual full JetBrains package. First year was $250, next year $199, and the next $149. Quite an investment the first year, OK. But now I own my tools, and it turns out I also use them a lot for personal projects, so I'm happy with that as well. I would not work for a place that doesn't let me bring it or offer it to me (because I can afford to say no).
AFAIK this is why Microsoft let Windows and Excel be pirated for personal use up to a certain level.
Proprietary tools are worse for developers/companies that develop with many languages and/or other toolings (unless the tool doesn't interfere with the process itself).
> Because in 2021 developer tools are fundamentally not profitable.
Interesting...
But monetization might not be the root of this, since we have very sophisticated tools delivered as open source. The question then would be why the "dev community" is not interested in building tools like those mentioned in the article?
My guess is that tools like Reflexion Models don't ring any bells for junior/mid-level developers. They don't know exactly what to optimize when it comes to long-term maintenance. That's why we have so many parsers, linters, etc. and now typed languages (again!) and not Reflexion Models.
The other day I was looking for something similar to Reflexion Models: a tool where I could describe high-level modules and their files (like a package), describe the dependency hierarchy between them, and check whether any calls between modules break that hierarchy. For instance: getting an alert if there is a call from a controller directly to a repository (DDD). It's a common problem for big teams with junior developers that could be solved by automation.
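As a sketch of what such a check could look like for Python imports (the layer names and the one-layer rule here are illustrative assumptions, not a real project's layout; tools like import-linter for Python or ArchUnit for Java implement this idea properly):

```python
import ast
from pathlib import Path

# Hypothetical layering, lower number = lower layer. A module may only
# import from its own layer or the layer directly beneath it.
LAYERS = {"repositories": 0, "services": 1, "controllers": 2}

def layer_violations(src_root):
    """Return 'file imports module' strings for imports that break the hierarchy."""
    violations = []
    for path in Path(src_root).rglob("*.py"):
        my_layer = LAYERS.get(path.relative_to(src_root).parts[0])
        if my_layer is None:
            continue  # file is outside the declared modules
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported = [node.module]
            else:
                continue
            for name in imported:
                other = LAYERS.get(name.split(".")[0])
                # Flag upward imports and layer-skipping imports, e.g. a
                # controller importing a repository directly.
                if other is not None and (other > my_layer or my_layer - other > 1):
                    violations.append(f"{path.name} imports {name}")
    return violations
```

Run something like this in CI against the source tree and fail the build when the list is non-empty.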
Senior developers with 20 years of experience do not "believe" in debuggers, IDEs or writing tests. Same things are reinvented every 3 months with a new name written in a custom font. We are in many ways still in the bronze age of software development.
No, I know how to use them, except that in my case I never need to look at the debugger, because I rarely need to know the state of the program at a given time.
I need to know how the program ended up in that state. I can't reproduce this from a core dump or via a debugger.
That’s definitely not what a debugger is. You can’t add prints to a coredump, and the run -> add prints -> compile loop sucks a lot, especially if the bug is hard or slow to trigger.
I think the main promise of an IDE is being integrated. But maybe it could just be done with libraries.
Debugger might be a nice interface for prints in kernel.
It is also not usable with distributed systems where prints are also not all that useful and you need some kind of central logging.
But developing a desktop application without a debugger is like writing code with a line editor. It can be done, but it is not very productive (I can still remember editing BASIC programs one line at a time).
> Here is an idea, what about being paid in the exact amount that one is willing to pay for their tooling.
A better idea: what if all companies had to pay for all the software they expect to make money using, right down to the metal? No leeching off linux for you. Also no gcc, llvm, postgresql, mysql, python, ruby...
When people try introducing proprietary software into the FOSS ecosystem, I find it equally "incredible" how little they acknowledge that most of their piddly 10k lines of "magic" depends on the tens of millions of lines of code underneath being written by people who decided to take the other path.
It sounds like you're better off out of FOSS, but I can guarantee your job is propped up by its existence, almost no matter what you do.
My dear, that was exactly how things were when I started programming.
Plenty of successful companies used to live from selling the whole stack.
In fact GCC only picked up steam the day Sun decided to start charging for the Solaris C compiler.
Thankfully the GPL-hate crowd, by pushing MIT/BSD licenses, has just brought us the future that will be the reinvention of the public domain and the shareware of the '80s.
Computer systems have gotten much more complicated. The closest you can get to the "full stack" ideal is probably Microsoft, and even they've given up on maintaining a web browser. Full-stack worked for a while, but nowadays you'd need to be a goliath. You can't even pay for all the components you use, since many parts of today's critical infrastructure are community projects with no corporate backing.
Those of us on Apple, Microsoft, Sony, Nintendo, IBM, Unisys, SAP, Oracle,... platforms mostly do pay for the whole stack, even if it means paying to various vendors.
But so many proprietary tools are horrible! Some years ago I worked in the embedded space, and after my experience with Nordic's small devices and Analog Devices' IDE for Blackfin, I dread using any sort of proprietary tech.
Those were pretty expensive products (we are talking $5k-$10k price range) and they were working way worse than just a single emacs window with command-line uploader.
The amount of time I wasted fighting those tools was huge. And the worst thing: this was really time wasted. If I spend a week setting up OpenOCD, I can likely apply this knowledge to my next project. This was not the case with the proprietary tools.
Paid is not always better, and we should not feel obliged to pay people just because they are in the same industry.
You are using the wrong analogy. You are implying that people are using proprietary products but not paying for them - pirating or stealing them instead.
This is very much not the case. True, people are not paying for tools, but they are not using them either.
It is like a person went to fast food place and got a food poisoning. From now on, they are avoiding all fast food places and cook their own food instead.
You could argue they are missing out or wasting their time, but there is nothing immoral or unethical there. Even if I am a commercial cook, I don't have a moral obligation to go to a restaurant. And if I have working open-source tools, there is nothing wrong with saying "let ADI keep their overpriced IDEs, I will stick with Linux, gcc and gdb".
Yes, if I am a cook and I am voluntary giving out food for free you can expect to eat gratis. (Perhaps the place has recently opened and they are attracting new customers?)
If I am a cook and I didn't invite you, then this is theft and you are breaking the law.
You are welcome to place any RFPs you want, but you might not get too many proposals for them if you won't pay enough. You are still welcome to do so; there is nothing illegal or immoral in doing this.
----
I think we have too many similes in this thread, so I am going to say it directly:
When choosing a software tool, there are many factors: monetary cost, time spent getting the license, time taken to get started, time doing the primary task (like writing application code), time reading documentation, time spent fixing bugs, experience learned, how easy to transfer knowledge for other people, and so on.
A lot of times we can ignore "monetary cost" because work will pay; but even then proprietary software does not always win. It often has much better "time to get started", but it is often worse in other aspects. We should choose the best tool for the job, and let the market decide.
For example, in the embedded world, proprietary software is common, expensive, and, for some reason, very bad. If I had to start a project on Blackfin, and my company were ready to pay for whatever ADI SDK licenses I needed, I'd still choose Linux/gcc/gdb if I could. This is not because "I don't want to pay for tooling" -- my org would pay anyway. No, this is because the ADI SDK is horrible and ignores all modern software development practices. And the ADI IDE is slow, crashes, and does not support threaded debugging.
On a more minor front, I have an expense budget, and I am sure my work would be happy to buy a Beyond Compare license for me, but I keep using open-source kdiff3. I am sure BC is nice and I wish them all the best; but I like to be able to teach other developers on my team, and they are not likely to have a BC license.
> You are welcome to place any RFPs you want, but you might not get too many proposals for them if you won't pay enough. You are still welcome to do so, there is nothing illegal or immoral in doing this.
Ah, so you don't want to play by the same rules, thought so.
It’s so weird watching tech people drool over 3D-printing hardware, Raspberry Pi stuff, or woodworking tools.
I suspect there’s overlap between the “buy your own software tools” crowd and this group, but I’m fairly sure there are at least some experiencing a degree of cognitive dissonance.
Thanks! But honestly I used most of them and expected some life changers instead. Borland’s VCL was the one indeed, but now it’s gone.
The issue with paying for all of that is that as a developer, I simply don’t need it at that price. If the world of FOSS suddenly became expensive, we’d just rewrite a few tools from scratch, because it’s not so hard. It only takes a few man-years to implement a decent scripting language with a UI toolkit, and then you have a world of developers who, like you, do not understand what’s in those 10GB monsters that cost $$$$/year that cannot be done with FreeLang+FreeUI+FreePackageManager+FreeEtcEtc. FOSS and Enterprise are not competitors; these are natural parts of developer life and of enterprise life.
I agree, and then they set up a price. Piracy is especially bad when you’re pirating from a particular person.
But not all developers have that “I made it and I will charge them the maximum equilibrium out of it” mindset. That’s where “free” part of FOSS began. Do you think that it’s unfair to use things which someone published for everyone to use?
Depends, only when accepting to be paid exactly the same way.
What I don't agree is the expectation to be paid, while refusing to pay other developers for their work.
Even for the gratis stuff I use: back in the early FOSS days I bought CDs and magazines that distributed the software, the few times I went to FOSDEM I paid for the full volunteer package, and nowadays I buy books written by community members and make occasional donations to stuff I use regularly.
So it is just not empty words about what is right.
Developers write developer tools. Developers want to be paid to create the tools, not to pay someone else. If they can't find someone to pay them, they will use a FOSS product and at times contribute back.
Why would you expect a toolmaker to buy someone else's tools?
> Programmers do not like to pay for their tooling.
I can't count how many times I see someone using Sublime Text professionally who hasn't paid for it. If anyone should have empathy for a developer, it's a developer, likewise with someone who can afford it. Once I put this together, I realized developer tools are for the most part a business dead end.
Companies are made up of people. The only people who might understand the value that they can provide are the programmers, but the programmers are not familiar with the tools because they don't want to pay for them in their personal/hobby projects.
Such an important point, that. I would just add that the lack of enthusiasm for paid tools is, I think, because of the lack of significant developments in the space itself. The fact that we can still use CLIs and 40-year-old tools like Emacs without losing much of the benefit of the modern tooling ecosystem says everything one would need to know. And I don't mean it in a degrading way, but there needs to be a paradigm shift from the HCI side. You can't just throw in a debugger, syntax highlighter, and linter and expect people to pay for it, when I can do all of that and more on Emacs with probably a weekend's worth of tinkering.
I remember going to a sales pitch for proprietary mobile-app automated-testing software. I was the developer looking at the product. It had crap documentation, it was hard to use, and our mobile app was not designed to be testable. But internal efforts to improve the speed at which QA happened were slow. Even though this software was crap, management bought it because it was a problem they could throw money at to try to buy their way out of not listening to developers. We wanted to make the app testable and use free, open-source tools that would actually work.
I went to work at a place that made a code obfuscator, thinking I’d get to work on it. By the time I arrived they were essentially EOLing it (providing support, spending nothing on sales and no new development), so they assigned me to something completely different. JetBrains and Atlassian really are the exceptions.
> Programmers do not like to pay for their tooling.
Eventually they’ll have to. Everything is trending toward SaaS offerings, and eventually we’ll be paying way, way more. The amount of money people spend on SaaS CI blows my mind, and if that’s any indication, the move to massively overpriced SaaS tooling is inevitable.
I'm not sure I buy this argument. The one group of people who certainly can create good alternatives that don't rely on that lock-in effect are programmers, after all. It's their livelihoods and happiness we're talking about, not to mention often their own pockets feeding the SaaS beast in the case of small businesses and freelancers, so they have a lot of incentive to do exactly that. And there are millions of them, many of whom already contribute freely to projects they find interesting anyway.
The pricing of SaaS CI or any kind of advanced server infrastructure is engineering salaries saved, not raw AWS bills. It's completely B2B software and you could do very, very well with just 1000 corps as your customers.
SaaS CI is valuable because it's not seen as something a programmer could easily make by themselves, unlike the vast majority of SaaS offerings, a lot of which are more marketing than code (not that that's a bad thing, but don't ask the average programmer how highly they value marketing).
Maybe I'm missing something, but CI looks like the kind of thing that's trivially implemented by a few bash scripts. It won't be as pretty or as reliable as a battle-tested service, but generally it'll work. I did that in the past and it worked pretty well for our uses.
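A minimal sketch of the idea (the function name and step list are made up for illustration; a real setup would also handle checkout, notifications, and log persistence):

```shell
#!/bin/sh
# Run each CI step in order; stop and report on the first failure.
ci_run() {
    for step in "$@"; do
        if sh -c "$step" >/dev/null 2>&1; then
            echo "ok:   $step"
        else
            echo "FAIL: $step"
            return 1
        fi
    done
}

# Typical usage, assuming the repo has the corresponding make targets:
# ci_run "git pull --ff-only" "make build" "make test"
```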
The most valuable part of CI where I work right now is that it runs on a reference setup, with persisted builds and logs attached to each merge request.
We are all supposed to run tests, coverage, and benchmarking locally. The point of CI is to guarantee nobody cuts corners and to keep verifiable proof of good builds.
We could fulfil the same requirements with a hand-made setup on a box somewhere uploading the resulting artefacts, but I am not sure we would be happier in any specific aspect. Even cost-wise, setup and maintenance could be non-trivial compared to the amounts we're paying now.
A few bash scripts work up to some size. For many projects, demands increase at some point: you get longer test runs, which eventually require more machines, and different independent build steps (maybe you don't want to rebuild the core libraries, which are rarely touched, for each commit to other parts?). At some point it breaks apart and you need something "proper", especially if you then also want to integrate test results with code review and track them from a single planning tool.
All the chefs I know go out to eat way more than anyone else I know (don't know any bakers, though). Graphic designers I know, buy art and graphic designs (don't know any jewelers, either). Musician friends buy tons of records and tickets to live shows, friends who brew, buy more beer than friends who don't brew.
You hit the nail on the head. The chefs might go out of their way to buy their direct product (consumer software, like games, or hardware, like iPads) but would not buy a shit ton of kitchen pots beyond the basics.
I have a family member who's a young musician, and he says musicians don't really care about audiophile grade speakers or headphones really.
I imagine people who like to make things but don't like to spend money are attracted to software in particular because it is one of the few things that one can make with very little investment. Many of us may have found different loves if other hobbies weren't so expensive. This, perhaps, biases the type of person you find in the profession.
The disconnect I don't think is about protecting their turf.
I think it's mostly about 1) not having an instinct for how powerful the tools are and 2) not being able to internalize the value of leverage vs. cost.
$1000/year 'feels' expensive to a developer. That's 'a lot of money'.
But if it makes you >1% more productive, it's worth it.
Some of these tools are not just productivity enhancements but multipliers.
But that takes a different kind of intuition that very few people have naturally, you kind of have to be in a position to make those decisions.
Those that have the instinct often are not programming.
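The back-of-envelope math behind that claim, with purely illustrative numbers:

```python
# All numbers here are assumptions for illustration, not from the thread.
yearly_cost_of_developer = 150_000   # employer's fully loaded cost, USD
productivity_gain = 0.01             # the tool makes you 1% more productive
tool_price = 1_000                   # yearly license

value_created = yearly_cost_of_developer * productivity_gain
print(value_created)               # 1500.0
print(value_created > tool_price)  # True: the tool pays for itself
```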
I’ve paid for various tools over the years: ultimately, I’ve never really found that the commercial tools are significantly better for the problems I have. I’ve generally found that Emacs and/or Unix shell utilities make it really easy to make a tool that solves exactly the problem you have while commercial tools end up having frustrating limitations or are really hard to learn because they solve the general case instead of the 70% case that you actually have.
> But if it makes you >1% more productive, it's worth it.
For an individual, it depends on whether your compensation is tied closely to productivity or multiplication. Being 1% more productive is not going to get your average person anything in a corporate environment.
"What's in it for me? Right now? In the short term?" is not a very civic posture.
The 'bigger the corp' the 'bigger the productivity gains' available to the team and customers. A 1% gain would probably be worthwhile though it's probably the purview of a director or CTO, that said, developers are part of the process.
Making little productivity gains here and there is literally how the world moves forward.
It's why we are wiping our rears with toilet paper and not seashells.
One of the commenters made the point that 'such tools are not always very useful' which is entirely reasonable, but to the extent they are, the math works.
In an adjacent area - we use Jira, which we all loathe, but it's considerably better than nothing for example, it does the job and it's worth 20x the meagre cost to us. Such is an example of productivity leverage. (And of course, it's not always the case that it works out so well)
You get paid next year a rate based on how productive you are today.
Even when companies don't give proper raises, this still ends up applying when you quit and go to another company because you've spent x% more time actually coding instead of waiting or busy work, and you can command a higher salary.
> You get paid next year a rate based on how productive you are today.
Not particularly. While highly paid developers like to pretend the industry is some platonic ideal meritocracy, no one has any real ability to measure individual productivity, or to predict it with any precision in hiring. So, no, that's not a particularly good approximation of reality, certainly not one where a 1% improvement would even be noticeable in the noise of assessment.
Realistically, you might be justified in saying that my pay 5 years from now will be affected by the aggregate productivity in the past of developers generally and the broad, easily observed factors that conventional industry wisdom assesses as individual predictors of productivity within those broad aggregates. But, with that in mind, investing time in promoting online the idea that things I already have on my resume are predictors of productivity, while unlikely to have noticeable real impact, is probably a better cost/benefit trade-off than paying for dev tools, even if my employer was going to let me use personally purchased tools, which they aren't.
Outside of corporate work, in the personal development time I spend, which does have impacts on my actual productivity and my ability to have indicia of productivity that the market values, and where I do have the choice of tools, assessing and paying for dev tools is more friction than benefit. I did it more when the quality gap between paid tools and free tools was much higher and (because the internet as an easy discovery/distribution medium was not what it is today) the friction between the two was closer, even though at the time the financial cost relative to my means was much greater.
An interesting paradox of the programmer job market is that spending time mastering developer tools and reaping the corresponding productivity improvements at one job has zero value when confronted by a whiteboard coding interview for another job.
But they may have value once you get past the interview.
Don't forget another interesting paradox - spending time mastering tools and technologies tangentially (if at all) related to your current job has low value now, but is also what makes it possible for you to take that another job a year later and double your salary.
This is a negative approach - and the opposite of what you would want on your team.
The narrow view that some 'productivity gain' will result in 'less work' is almost entirely not true, unless you're doing commodity work, like answering calls etc..
An Engineer who can do the work 10% faster because of better tooling, isn't putting himself or peers in jeopardy, rather, moving up the stack a nudge to work on slightly more important problems.
Any company that counts technology as a pillar of competitive advantage needs to adapt responsibly and make process improvements especially as they become normative.
People that work against this are literally a drag on the company, like the kind of unions that forbid other groups from 'moving the hammer because it's not in their job spec'.
Consider that helping the company be more productive is part of your job, and that you're not going to lose work as a result of it, probably the opposite :).
Because you can’t make every tool you’ll need to program effectively. Unlike bread, which only takes hours to make, good tooling will take months to years if done yourself.
The disconnect is not odd for me. Developers know you will essentially pay for a sub-optimal experience. Proprietary software vendors almost always treat their users as dirt. The piece of software would almost always be better if it were open source. And developers know this.
I think this is a temporary phenomenon. We're still in some ways reeling from 2000 and 2008, where free tools came to dominate (because no one could afford the paid ones).
If you get paid to program, then your employer can pay for tooling (assuming salaried work, if you're a contractor then you need to buy your own stuff).
I think there is some bias in that younger developers are the ones with the time to share their opinions over social media, but also have the least amount of money / influence with which to buy tooling. Certainly, when I started programming, I didn't have any budget for tools. I was lucky to even have a computer. Thus, I had no choice but to use free tools. People hold on to that experience long past the point where the monetary cost of tooling is meaningful.
Influence is the biggest problem, which is why I mention the junior vs. senior divide. When you want to buy software at work, you aren't typically given a budget and a credit card with which to buy the tooling. You have to justify every case. No junior developer wants to be the squeaky wheel that's always begging for cash for tooling, so they do without. Or, if they do decide to buy tools, they're met with the "alternatives analysis" phase of approval -- you invested time learning this tool that you now want to buy, go do that three or four more times while still completing your normal work, to make sure we don't waste any money. (You can see why writing your own stuff from scratch is so popular -- no approval process. You just do it.)
Tools are also special in that they come with a time cost and a dollar cost. I have never seen a product that will save me time without any effort on my part, but I would love to buy such a thing if it existed. Instead, you have to pay the fee, and then learn the software. (So the actual cost is your hourly rate * the time to learn, plus the actual dollar cost. At least free projects, or writing it yourself, don't have the actual dollar cost. The training cost leaves the most sour taste in my mouth. I've wasted countless hours reading, excuse the term, dogshit documentation for something I paid a lot of money for. Every time this happens I think to myself that writing a version of this software that actually works would have taken longer, but at least I'd be enjoying myself. Never forget that your mental health has value.)
Finally, software vendors are doing a terrible job, in general, of pricing their product for the developer market. The biggest problem that I run into is per-user pricing. It gets costly very quickly, either in money or in toil. The monetary cost scales with the number of people that use it, and if you want to avoid the anguish of deciding who is a first class citizen that gets access and who is a second class citizen who has to beg someone with access to do something for them, you have to buy an account for every user. Even for things that one person may use once a year, you take a lot of autonomy and the potential for ownership away if they can't use the tool. But, they charge you like every user is using it for 8 hours a day, 40 hours a week.
Personally, that has been a recurring problem for me. The tools that take me longest to get approved are minor productivity enhancers with per-user fees. Tools that have a per-organization or usage based cost are easy. (Examples: CircleCI, Sentry.) Tools that have a per-user fee are hardest. (Examples: Github, Codecov, Cypress Dashboard, Retool.) I work on a cloud service that had a per-user charge, and it went exactly as I expected -- few people would sign up, and when they did, they shared accounts to keep the cost down. We changed it to a usage-based fee, and a month in the people that actually sign up and start using the service increased dramatically. No longer do customers hit a paywall where their 0 -> some usage costs an amount of money that requires an in-depth approval process with their higher ups. No longer do customers hit a wall where some team members have to be made second-class citizens that can't use the software. People can gradually go from a $0 bill to a $0.01 bill, and invite everyone on their team to contribute, without having to come up with some way to share accounts. It's really great, and everyone that sells per-user licenses should think long and hard about how many people they are flat-out turning away.
Anyway, my point is that developers aren't selfishly collecting money without spending it on software. I'm sure they'd love to, but there are a vast number of complications standing in their way. Remove the complications to collect their money.
Hey there, I'm from Codecov and I just wanted to say, this point is really smart. I had never thought about the challenge of per-user pricing precisely this way.
Codecov was the first per-user service that I bought successfully. I like the feature that lets you pick who to apply your license to. In our case, our office manager manages the billing, but she doesn't need to view coverage reports. So that saves us like $12, which is appreciated :)
But isn't there also the opposite effect: if they must pay for usage, then they start minimizing their usage and actively looking for free tools to do the same?
Also, as mentioned by others, it takes time to learn to use a tool, by using it. But now you would think twice about whether to use it and pay for usage just to learn it.
Actual proof that non-capitalism can work. Free-software writers don't need a profit motive. They mainly do it for the love of creation itself. They are an implementation of the ideal un-alienated individual that Marx dreamed all humans could hopefully become.
Celebrate developers' refusal to commoditize their means of production. I hope it stays firmly in the GNU ecosystem.
While I support the principle, what actually happens is that the companies which make money use your work to generate that money for free. And your desire to build something useful 'for the love of creation' means people who could have got paid to support their families can't because you've undercut them in the market with your zero pricing, using the privilege which allowed you to spend significant time on building something valuable for the hell of it. Then when you try and charge for something this is viewed as somehow immoral, thus pressuring you to provide your work for free and making the multinationals even happier.
> Free software writers don't need a profit motive.
Yes until someone takes their free software and packages it as a cloud service. Then all you hear is "I want to get paid for the software I shared for free."
There's a big difference between paying for the required tools for a technology and the bonus tools. Java's basically impossible without an intelligent IDE, and C++ isn't far behind. But Ruby/JS/Python? You're just fine with VIM+Tree (if that).
Then you can upgrade with Sublime (or the like), and things like Visual Assist (by Whole Tomato) which made a huuuge difference in my first job.
I disagree that developers don't like to pay for their tooling. I think the issue is that fewer people will use the tools if they are not free, which means that languages bound to expensive tools (e.g. Delphi) are going to lose the network effects other languages get.
Thus even if the language itself is inferior, that will be made up for by the much larger number of people who can create libraries and tools and answer stackoverflow questions.
Note that there is a free IDE for all the languages Jetbrains makes IDEs for.
Developers are a tough market (I am one). They're also a pain to manage, expensive, and still human(ish, if you're lucky) :p, with all the ego and mistakes that come with the territory. No-code for simple wiring/boilerplating, once good enough and widely adopted enough, will kick our asses. But then again, on the opposite side, in many cases business types bring mainly money to the table, leaving the expensive nerds to do the rest, and to have fun with their RG machines in the process.
> No-code for simple wiring/boilerplating once good enough/widely adopted enough, will kick our asses.
By "our asses" I assume you mean programmers'? I don't buy it. People think coding is hard because they think thinking is hard. No-code doesn't remove the latter part, it'll only make things easier for programmers.
If anything it makes getting into programming easier, but that means more programmers not fewer.
Do not forget the possibility that no-code systems will result in a worse outcome but still technically get the job done, with a lot less sum total thinking required.
Most software is plumbing, not competitive advantage, so it doesn't really matter how good it is if it technically works.
If you doubt that, I point you to the world of enterprise software, where terrible software flourishes and thrives.
I will not be at all surprised if no-code systems eat the bottom of the software market, replacing many software developer jobs.
'No Code' generally has a different use case. For every 'software project' there are probably 5x as many 'simpler projects' that require a basic front/backend but not much material CS knowledge, and that's where no-code shines.
'No Code' should be a euphemism for 'Didn't Need Code In The First Place'.
In other words, there's a legit level of abstraction there with its own world of tooling etc.
Edit: I will double down and say that within 5-7 years, the number of 'no code' projects will be greater than the number of 'code' projects [1]. WordPress is the original foray into this: people want to make simple sites, they don't want to have to deal with tons of tech. Shopify is a kind of ultra 'no code' solution - arguably it's a service rather than a 'no code' solution, but it exemplifies how much entities want to solve problems without getting lost in the weeds.
> Programmers do not like to pay for their tooling.
Programmers definitely don't want to pay for tooling they haven't used yet. Of course, once they have found a free way to use a tool, they aren't likely to go back and pay for what they already have.
One of the biggest things that turns programmers away from tooling is UI/UX. Every programmer is particular about the way they interface with their tooling. The less familiar a tool is, the steeper its learning curve. The more simplified a tool is, the less powerful its abstractions are.
UI/UX norms for tooling need to be rethought. Trying to make a tool appeal to a wider audience of programmers is extremely difficult because every tool has baked-in assumptions about how it will be used.
Ubiquitously popular tooling tends to be extremely configurable and extremely malleable. The more control the user has over a tool, the more useful it will be to them, and the more interested they will be in using it.
There's a reason the shell hasn't died yet. Even with a long list of gotchas and decades of cruft, shells allow users to personalize them to an attractive extreme. The same goes for Vim/Emacs.
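To make that personalization concrete, here's a minimal sketch of the kind of config people accrete; every name and alias here is invented for the example, which is rather the point:

```shell
# Hypothetical ~/.bashrc fragment -- every name is personal preference.

# A function rather than an alias so it also works in scripts.
mkcd() {
    mkdir -p "$1" && cd "$1"
}

# Search only tracked files, honoring the repo's ignore rules.
gg() {
    git grep -n "$1"
}

# Show the current git branch (empty outside a repo) in the prompt.
parse_branch() {
    git rev-parse --abbrev-ref HEAD 2>/dev/null
}
PS1='\w ($(parse_branch)) \$ '
```

Multiply that by twenty years of accumulated habits and you can see why nobody wants to give it up.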
> Ubiquitously popular tooling tends to be extremely configurable and extremely malleable. The more control the user has over a tool, the more useful it will be to them, and the more interested they will be in using it.
I disagree; if the workflow and UI offered by the platform are good, I don't need to change it. Case in point: Xcode. It's very opinionated about how people work with it, but it works.
I don't feel Xcode is a good example of developer tooling that's ubiquitously popular because of developer preference. If you want to develop for iOS, it's a requirement.
Depends on what you call tooling. Aren't all the SaaS services needed for modern dev part of tooling? Most companies pay for that stuff. Could be Digital Ocean, GitHub, Auth0, Chargify... To me, those are all dev tools because they do stuff I don't have to do manually, just like an IDE is a dev tool that's more powerful than notepad. They're not standalone services because they don't do anything without me configuring them. And they're really only useful for devs, not anyone else.
Digital Ocean is, to me, a great dev toolbox and keeps me from wasting time setting up machines, IPs, etc. We gladly pay for it. Even tools like Notion are really dev tools just deployed as SaaS.
I think dev tooling has just moved into SaaS and probably gotten more profitable, if anything.
Yes, developers are in the way. A selling point of SQL was management could use it directly for analysis; and spreadsheets were a kind of no-code solution. Yet developers were still needed.
Wikipedia's description of RPA suggests something somewhere has gone terribly wrong:
> RPA systems develop the action list by watching the user perform that task in the application's graphical user interface (GUI), and then perform the automation by repeating those tasks directly in the GUI.
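As a toy sketch of that record/replay idea (real RPA tools drive an actual GUI; here the "GUI" is just a dict of form fields, and all the field names are made up), the recorded action list looks something like this:

```python
# Toy sketch of RPA record/replay. The "GUI" is a dict so the shape of a
# recorded action list is visible without any display dependencies.

def record_demo():
    """A recorded session: each step is (action, target, value)."""
    return [
        ("click", "invoice_form", None),
        ("type",  "customer",     "ACME Corp"),
        ("type",  "amount",       "1250.00"),
        ("click", "submit",       None),
    ]

def replay(actions, gui):
    """Replay the action list against the fake GUI, step by step."""
    for action, target, value in actions:
        if action == "type":
            gui[target] = value                      # fill a form field
        elif action == "click":
            gui.setdefault("clicks", []).append(target)
    return gui

gui_state = replay(record_demo(), {})
print(gui_state["customer"])  # -> ACME Corp
```

The fragility is obvious from the sketch: the replay is blind imitation of surface actions, with no model of *why* the user clicked what they clicked.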
There are plenty of profitable developer tools out there. I usually encounter 2 issues when evaluating a dev tool:
1) Complexity & bad UI/UX. It's often not clear what I'm buying, or all the features are hidden "under the hood". Many devs don't understand that user experience matters, even if the users are also devs.
2) Trust. This ties into the first point: I need a clear picture of what the thing you're selling me does. I don't trust someone else generating code for me, I don't trust magic solutions, and I don't trust tools that look like they were built in the 90s.
I think there are plenty of opportunities for more developer tools but unfortunately it takes a unique kind of dev to build a product + engineer a marketable solution.
>Jetbrains is the exception, not the norm. Developer tools are loss leaders for large corporation to acquire developer goodwill and mindshare. Previously we sold developer tools/PaaS. Programmers do not like to pay for their tooling.
It's really funny to me that this is the top comment, because one of the reasons I use Eclipse instead of JetBrains is the tool mentioned directly in the article, Mylyn. Any time I mention it, programmers always respond with, "Huh? What's that?" I choose Eclipse over JetBrains not because JetBrains costs money, but because JetBrains is not as feature-rich. Eclipse is more capable than IntelliJ IDEA at any price.
More capable, perhaps, but I'll take software I can use over Eclipse any day. I repeatedly get pushed towards Eclipse for a number of different things, and every year I give it a shot because I know things can change over time... and every year I walk away deeply frustrated and spend another year throwing anything that requires working with Eclipse-based tools straight into my mental trash bin.
I dunno; there's a lot of SaaS that are in daily use by developers; build tooling, version control, code reviews, chat / collaboration tools, etc. Editors not so much anymore, but I've got an intellij license which I use intensely.
I mean I get where you're coming from; for solo developers and small clubs, free tools get you very far. But at larger enterprises you'll want to pay for the extra services and less management.
> Programmers do not like to pay for their tooling
Turbo Pascal and Think C enjoyed some success in the 1980s and 1990s.
Then again it looks like those environments were simpler, more responsive, and more enjoyable to use compared to the clunky modern IDEs of the 2020s.
(And of course there were fewer free alternatives, and modern apps have to deal with horribly complex APIs, networking, concurrency, distributed systems, etc..)
> This is why no-code and low code are so much more successful than developers tooling startups in terms of revenue and profitability
These are developer tools though, just marketed differently. "No-code" and "low code" are simply fancy terms for high-level languages, and the developers in this case are people with little or no experience in traditional programming languages.
Yeah, I was working on a diagram software [1] that looked similar to VS Code and could be extended easily. Applied to Y Combinator but they didn't think it was worth it, so…
[1] This was the Angular+Electron version (ported to React afterwards) with the Visio logo: https://imgur.com/a/UDkitDm
Even the hosting and download function of app stores is valuable. It replaces auto update frameworks embedded in the app and earlier download sites were all super suspicious and full of ads with fake download buttons.
Even Sourceforge started out good, then sold to someone who put adware in the downloads - although it's back to good now.
> Once AGI is realized, no human developer will ever be paid again.
Well, we'd be in the boat with teachers, reporters, authors, attorneys, bankers, management, some artists... and a lot more of professions which were done in by AI without a G earlier.
Here's to hope it all happens fast and goes straight to UBI worldwide.
I've paid thousands over the years for various versions of Visual Studio. While there are free alternatives, I don't mind paying because of the value that beautiful piece of software offers me.
The kinds of tools like those in the article suffer from being outside of the "main loop of coding".
What is the main loop? Well, as it's defined since the rise of interactive computing, it's typing in the editor, compiling, then testing and debugging.
Thus we optimize for:
1. Typing code fast (or typing less code)
2. Compiling code fast (by doing less compilation)
3. Minimizing testing and debugging time (by narrowing our definition of testing to that which is easy to automate).
The main loop of coding is not software development. It does not study the domain problem, define new benchmarks for success, untangle communication issues, or refine principles of development. It is a path to coder flow state, feature factoryism and shipping the org chart. These things sort of resemble development - a happy coder delivering features within a definite architecture does serve someone at some point - but also manifest dysfunction. Development failing is just shrugged off as bad programming or bad management.
Tools like the Whyline or Reflexion Models are not in the main loop. They are an invitation to look carefully at whatever horror has been wrought and address it at a deep level. But that really needs a researching mindset, not a shipping one.
In practice, the available tools very slowly move in the direction of the suggestions of research. There are improvements, but they need projects that put together the pieces and make it a goal from the language design(the main interface) upwards.
You are not paying attention to the issue at hand here:
the data here is the issue, because it drives the logic.
The moment your code is driven primarily by data you need to start building tools that infer decisions from that data.
Some automatic inference engine tightly integrated with your code is just going to give you the location of where a bunch of cells from the matrix are multiplied.
As many in this discussion have noted, JetBrains tools not only don't collect dust, they are widely used, and the company is profitable. The company is privately held. I hear that they are constantly turning down investments. This proves 1) there is money in tools; 2) the VCs who routinely say "there is no money in tools" are insufficiently imaginative, which is not surprising if you've spent any time with VCs.
Why is JetBrains so successful where others have failed? A few thoughts:
- Intellij was released in 2001. Eclipse was a close competitor for a while, which made no sense to me. Intellij just worked, and it was intuitive. I found Eclipse to be uglier, slower, flakier, crashier, and far less intuitive. Haven't heard of it in years.
- It was always the case that at least some version of JetBrains tools was zero cost. I got hooked on it early on. I have been using emacs far longer, and yes, while a sufficiently dedicated developer can make emacs behave like an IDE, it really isn't one. IntelliJ just worked out of the box. In the preceding two decades, there were high-end development tools available (Lisp machines and their all-encompassing environments, Lucid for C++, Purify for C/C++ memory problems, Atria, Rational. Trivia: did you know that Pure Software, maker of Purify, was founded by Reed Hastings, the guy who went on to found Netflix?) The tools were expensive, and I don't think there were free versions. These companies all went out of business or were acquired.
- JetBrains is incredibly focused. Having been through several VC-funded technology companies, I can easily see how VC brain farts can destroy a company's focus. (This dynamic isn't entirely the VCs' fault. Founders stretch the truth to get funded, and then VCs insist on the fairy tales they've been told.)
- Jetbrains has expanded to other languages, and other kinds of tools (besides just refactoring Java code). The number of languages, technologies, and frameworks that they support is just mind-boggling.
- Consistent focus on improving developers' lives, through steady improvements in functionality, performance, and quality, over 20 years at this point. They started far ahead and have widened the gap.
OPs tools look useful. I suspect they would attract a much wider audience if made available through Jetbrains products.
I always have this weird sensation when people talk about JetBrains... I feel like in some circles, usage of JetBrains tools is ubiquitous, while in others it is unheard of. I have been a professional software developer for 15 years and have never seen the tools in use anywhere I've worked. But I read things on the internet where people say "well, every Ruby developer uses JetBrains" and don't even question it... weird how circles like this form.
It's similarly shocking to encounter entire companies where nobody uses a debugger. Millions in revenue, dozens of programmers, and the level of debugging is still sprinkling printf statements and desperately perusing source code.
My hot take: Schools don't touch the practicalities of professional programming enough. Homework is mostly small-time enough that you can keep the entire codebase in your head, and solve issues with printing variables. At least my CS master's degree was very much limited to 1950s level tool use.
I found that my use of debugging peaked a few years into my software career, and then fell off rapidly. Debuggers are great for single-threaded code, less useful for multithreaded. And close to useless if your problem spans processes.
As the problems I was debugging got more complex, I found that log files became far more useful. The logging is always there (assuming you do a run with the right logging levels selected). You can get a high-level view of what's going on far more easily than in a debugger, which has you rooting around at a very low level. You don't have the phenomenon where you just miss the critical thing while in a debugger, and then you have to laboriously start over. If you miss the critical thing with logging, you turn on more of it and run it again.
If you have one of those bugs that occurs extremely rarely, just keep running the code, and turn on logging. When the problem occurs, you have the evidence leading up to it.
This assumes you have the right logging there to start with. I put a lot of development time into my logging code -- making it informative, compact, readable, carefully selecting what to log and where and at what logging level. I make sure that I get log rotation right, and then I use some simple scripts to download, consolidate, split and search the log files.
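As a minimal sketch of that kind of setup using only Python's standard library (the path, sizes, and logger name here are illustrative, not the commenter's actual code):

```python
# Minimal logging setup with rotation, using only the standard library.
import logging
from logging.handlers import RotatingFileHandler

def make_logger(path="app.log"):
    logger = logging.getLogger("app")
    logger.setLevel(logging.DEBUG)
    # Rotate at ~1 MB, keep 5 old files: app.log.1 .. app.log.5
    handler = RotatingFileHandler(path, maxBytes=1_000_000, backupCount=5)
    # Compact, grep-friendly lines: timestamp, level initial, name, message.
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname).1s %(name)s %(message)s"))
    logger.addHandler(handler)
    return logger

log = make_logger()
log.info("startup complete")
log.debug("cache warm: 312 entries")  # only kept if DEBUG is enabled
```

For the rare-bug case described above, you leave this running at a quieter level, then flip the level to DEBUG and wait for the failure to recur with its evidence intact.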
"Desperately perusing source code": Yes, you need to read and understand code in order to debug it. You obviously know that, so I'm not sure what "desperately" is meant to convey.
>You don't have the phenomenon where you just miss the critical thing while in a debugger, and then you have to laboriously start over. If you miss the critical thing with logging, you turn on more of it and run it again.
This is rather confusing to be honest. Are you saying restarting the software is hard, or that restarting it is easy? If you just missed the critical thing, just add a conditional breakpoint ("Stop here if ptr is null") and restart the software.
Restarting is easy. But you know how it is. You get to your breakpoint, go forward carefully, and then step over something you should have stepped into. Oops, start again.
Colleges generally teach computer science, not programming. They're different arts with different goals.
Bootcamps teach ... I don't know, really. I keep seeing an ad to "become a developer in 14 days," and I waver back and forth between amused and angry.
I think there's room for a happy medium, some kind of trade school for software development. A curriculum that teaches you practical skills like debugging and clean architecture, and prepares you for an actual job.
I suppose it depends on your comfort level in the stack you’re dealing with. I’ve been writing software for 40 years. I’ve been doing Rails since almost the start, and find RubyMine slow and overwrought. However, I had to do some Java work (which I had never done in anger before) and found that IntelliJ really is worlds better than Eclipse. So much so that I paid for it out of my own pocket, even though I knew the project was doomed to failure. I hear good things about Rider, but I have yet to try it, as, after all these years, I’m pretty comfortable with VS. Rails work is EASILY done with a terminal and a text editor; the others not so much.
Yes, odd. Data point: me and most of my SE friends/coworkers pay for JetBrains IDEs (I was pretty deep into emacs and vim for many years prior to switching).
I definitely haven't "switched". In fact, I have a key binding for escaping to emacs to work on the current source. I find that useful because emacs kbd macros are so much less clunky than the Jetbrains equivalent. But yes, Jetbrains products have taken over much of what I used to do in emacs.
Not OP but they're not saying they want to use emacs keybindings, they're saying they want to use emacs keyboard macros [1], which I agree really are a very handy feature having no equivalent in JetBrains.
Emacs keyboard macros have an inferior equivalent in Intellij (and, I'm assuming, their other products). But also, emacs just somehow feels faster. Maybe keystrokes are marginally faster, maybe it's my expectations, but there it is.
I know that Intellij has emacs key bindings, that's not the issue, I'm happy with the Intellij default bindings.
In JetBrains products, I am CONSTANTLY hitting Esc or Tab to deal with the helpful predictive suggestion. I find that feature sometimes useful, so I have not turned it off. But it definitely produces a different and maybe slower experience than typing in emacs. Maybe there is a mental mode switch between typing and handling the hint?
It's really stack dependent. In Java & Android, it's jetbrains IDEs, iOS is Xcode, etc. I think it comes from what that subindustry has converged on the best served dev stack for them. If jetbrains isn't the most used, then it's not the best.
I was talking about an industry stack. Plenty of open source java software devs use IntelliJ to develop their open source modules and libraries, not to mention there are free editions of their IDEs on top of that.
I think a very important point you are missing is that JetBrains was founded in Eastern Europe. Their developer costs when they were founded were probably ~20-25% (if that) of what developers cost in Silicon Valley, yet still at an amazing skill level.
I honestly don't think JetBrains would have been able to succeed if they were founded in the US.
A thing that probably made JetBrains successful was their deal with Google. By making Android Studio free, they got a large user base. It was also my first direct encounter with them.
I see that Android Studio was released in 2013. IntelliJ was pretty well established by that point. No doubt this deal helped them grow far faster than they would have otherwise.
Yes, they were known and used, especially in the Java world, before that, and had a good-quality product.
Having a company like Google pushing it into the app-dev community and offering it for free certainly gave it attention, and certainly a nice financial contribution from Google.
> Intellij was released in 2001. Eclipse was a close competitor for a while, which made no sense to me. Intellij just worked, and it was intuitive. I found Eclipse to be uglier, slower, flakier, crashier, and far less intuitive. Haven't heard of it in years.
Eclipse is still in use.
And don't let the above scare you: it is not as beginner friendly as VS Code or Netbeans but it is a walk in the park compared to learning emacs or vim from scratch.
It is just different, and if you come from a file-centric and line-centric workflow, you might as well get used to thinking about classes, functions, interfaces, etc.: i.e. don't think "I'll open this file, look for that line"; instead, jump to a class or a function, or even better, go straight to the implementation of the function you are looking at. It just works.
And you mostly don't cut, paste and update manually; you just ask Eclipse to move it and then verify it in the SCM commit view before you commit: "Moved x to y to make z possible"
The common thread I see in this is people pouring their heart and soul into a super awesome tool, and then moving on with their lives. The tool was made for one version of one language, and the world moves on, too. But then I think about the tools that I do use.
Package distributions. Good lord, I am not happy here. Still an unsolved problem in general; some languages tackle it better than others.
Testing frameworks. It's getting better. Big props to zig, for including tests for free with comptime. But in general, it's piles of code that somebody maintains for each language. Often, there are multiple tools because the language devs don't pull them into first-class components.
Debuggers. There's pretty good tools out there. They're clunky to use, but there are multiple front ends that can handle multiple languages, thanks to a common data format.
Code formatters. Props to zig, go, and rust for building these in. But for most languages, it's DIY.
Common theme here: all of this stuff is fairly generic, but each language tends to do its own thing. Tools aren't officially part of the languages, neither are they generic. Except debuggers! There's a common interface, DWARF support becomes a part of the language, and (another key point) language devs use it -- so they don't rot.
Developer tools can be magic, but unless they're generic, I expect them all to rot.
Edit: Oh, I forgot documentation generation -- like debugging, there's been some convergence around some generic tools and it's pretty easy to build support into / on top of a language. And newer languages use these tools internally to build their docs. Great!
And how could I forget compilers! LLVM might save the day for all of this stuff (except versioning and distribution... ick). Build a tool that can grok LLVM-IR, and you've solved the problem for most languages out there.
> And how could I forget compilers! LLVM might save the day for all of this stuff (except versioning and distribution... ick). Build a tool that can grok LLVM-IR, and you've solved the problem for most languages out there.
You do still have to maintain it. LLVM is just a compiler IR and reserves the right to change at any time. (same goes for other compiler IRs of course)
I suppose you could use SPIR-V or PNACL which are based on fixed versions of LLVM.
This is all rather vague. These tools collect dust because software nowadays doesn't only have to be written; it needs to be maintained and adapted. And these examples weren't abandoned because they weren't useful enough. Sure, they worked at their specific task... but none of them was a game changer or enhanced the power of the developer in general. It's not that dev tools are not used, just that most of them are written directly by the programmers who need them, for their own use, without much care for life expectancy or limitations - just to get that thing done when they need it. I don't really understand the point of the article. If an idea is good, well executed, and presented to its target audience, maybe it won't eat the world, but it will have a decent chance of not collecting dust.
In my opinion, this article is a cry from people who don't understand how to find market fit and then "sell" their idea. It's a pretty typical point of view from academics that their idea is groundbreaking and should be known by everyone. However, things don't work like that in the real world. Meritocracy doesn't make the best tool rise to the top. You also need to solve a problem for a huge number of people, and then do a bunch of PR and education.
The point the article was trying to make, not the tool descriptions. Though to be fair, the point was probably only to showcase some cool ideas to inspire people to consider dev tools more. Maybe I tried to read too deep into it.
The “Whyline” debugger is useless if you are trying to understand why your live code ended up in a certain state, unless you are ready to run your live code in a debugger.
Finding all the places where the variable changed is, on the other hand, easy without Whyline.
> These tools collect dust because software nowadays doesn't only have to be written, it needs to be maintained and adapted.
Everyone loves making and using a new, shiny, "life changing" tool. Very few people would stick around at a job (or indeed, apply for a job) to maintain that tool.
Creation is easier compared to understanding and adaptation to change.
Developers chasing their tails writing clever programs to read spaghetti code instead of not writing spaghetti code in the first place.
I get it. The problem is very attractive. I get turned on thinking about it too but I would rather programmers knew how to compose code that other humans could read instead of dreaming about using AI to read incomprehensible first drafts.
Teaching other developers how to write code is not fun, I guess. I'm guilty too. I pull up a PR, scroll down, and find a loop with some IFs and God forbid a map or some other inline atrocity sitting in a well-named method with well-named variables (they sort of get it)... and there I am trying to judge intent. I give up and just go to lunch in disgust. I don't have time to teach someone how to express intent, nor do I have time to guess the intent. It's obvious that the code is looping over some list and digging into an object and blah blah... but why!??? The important part.
When I'm long dead maybe we will require programmers to write code that other people can understand.
>The former always massively outweighs the latter.
This is not enough of an argument. You can't predict what code will be read so we try to make it all readable. I consider this to still be beneficial even if most of the code is never refactored.
Yep, you can't know in advance what code will evolve.
I see that as an argument for my position, though.
It's faster to write less-readable code. Documenting all your abstractions takes time, as does making sure all your names are ideal, as does making the abstractions in the first place (and they're often the wrong ones if you do them up front).
A business only has so much money to spend on a tool or product, and they want maximal bang for their buck.
That comes from shipping stuff fast and seeing what sticks.
Once you've seen what features are evolving and matter to users, those are the ones you start refactoring and improving, both for functionality and readability.
Obviously, do what you can to make your first pass readable. Don't spend lots of time on it until you have evidence this code will be modified often, though, because on average it isn't.
I kind of hate this conclusion, and I'm not very good at it, but that's where I've landed these days.
Sure, stand-alone developer tools might be collecting dust, but many tools are simply baked into the things we develop with.
* Extensive debugging tools are now built into browsers.
* Most popular languages have extensive linters, formatters, and code quality tools.
* Tools like Hot Loading and Live View simply become part of the language you're using. Heck, Hasura and Postgraphile are essentially just a set of tools with some very intelligent wiring.
* Profiling is basically standard on all popular languages
* Storyboard and component driven design means you can quickly translate a lot of design work to code.
I think your points are complementary to the article. It does seem like if a tool reaches a certain popularity threshold, programmers start porting it to every language and environment under the sun. And that often takes the form of building things in to larger tools.
But the academic tools this post highlights are typically standalone and haven't reached that threshold. My suspicion is that the author's point applies most to academic tools that have trouble reaching a critical mass.
1. IDE dependency. If you have to use IDE X to get the feature, then your tool will live and die by IDE X. A lot of development tools lose their user base when an IDE or editor falls out of favor.
2. Language/compiler specificity. This one is rough, because most dev tools are dependent on language tooling to do the magic. Productivity hitches are different in different languages. In a day, I may work with Python, Go, JavaScript, SQL, bash/zsh and several DSLs. If your tool doesn't work with 2-3 languages I use, I may never even notice your tool. A lot of static analysis based tools work well with C and Java and nothing else. It's kind of the Smalltalk trap: amazing tools, but you have to use Smalltalk to use them.
3. Solving stuff that really doesn't matter to the programmer. As a manager or a business owner, a 15% gain in developer productivity is great. As a programmer, I'm not even going to notice that I'm getting 15% more done. And even if I do, it may not even matter.
> The reason that Reflexion Models are obscure while Mylyn is among the most popular Eclipse plugins is quite literally because Gail C. Murphy, creator of Reflexion Models, decided to go into academia, while her student Mik Kersten, creator of Mylyn, went into industry.
> Other fields of computer science don’t seem to have such a giant rift between the accomplishments of researchers and practitioners.
I'm not sure that's actually true. I can't think of a field I've studied where research generated software from a decade ago still worked out of the box. If you write software for Java 7 that proves your concept is feasible, the existence of Java 14 doesn't disprove it, so nobody funds the research to port it. This is as true for bioinformatics as compilers.
There's also a pretty strong hurdle in simply educating the software development community. I suspect that if you did a sufficiently large poll of developers, less than 5 percent would have set a breakpoint and used a debugger in the past year. It seems even renowned experts believe debuggers are not valuable[1]. And I suspect at least half of the people who claim proficiency in git can't correctly explain the difference between the rebase and merge commands.
And that, IMO, is the mystery we need to explain. If the hypothesis is that weak but free tools are undercutting the market for advanced tools, why don't developers learn and use the tools freely available to them? Are the tools too hard to learn, not powerful enough, or something else? I suspect there's a time tradeoff going on -- learning how to use a tool takes time, so if you believe you have a slow method that will solve the problem, you might prefer it to investing in learning something new. Especially outside the research lab, where there is uncertainty that any given tool will actually solve the problem at hand, and a researcher hasn't handcrafted a tutorial to make your assigned task feasible.
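For what it's worth, the rebase/merge distinction fits in a handful of commands (the repo and branch names below are invented for the demo): merge joins two diverged histories with a merge commit; rebase replays your commits on top of the other branch, rewriting them with new hashes.

```shell
# Tiny throwaway repo to show the difference.
git init -q demo && cd demo
git config user.email "dev@example.com"   # identity so commits work anywhere
git config user.name  "Demo Dev"
git commit -q --allow-empty -m "base"
git checkout -q -b feature
git commit -q --allow-empty -m "feature work"
git checkout -q -                          # back to the default branch
git commit -q --allow-empty -m "main work"

# Merge: keeps both lines of history and adds a merge commit (a "diamond").
git merge feature -m "merge feature"

# Rebase (the alternative, from the un-merged state) would instead run:
#   git checkout feature && git rebase <default-branch>
# producing a straight line of NEW commits -- which is exactly why you
# don't rebase branches other people have already pulled.
```

After the merge above, `git log --graph --oneline` shows the diamond; after a rebase it would show a single straight line.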
Debuggers are very specific tools, and I've never really liked them.
Basically, you get a tiny view of what your code is doing, and have to slowly build a mental model of what's going on through time. I have found that prints and code reading (in a good IDE which is able to track variable references!) are often enough, and usually faster, for the kind of software I write.
Now, things are getting better: Mozilla's rr is a game changer (reverse debugging on actual, large projects), and more recently, Pernosco (built on rr but very close to science fiction) has at long last made me like debuggers again.
I use breakpoints when software supports it out of the box. Unfortunately, many third-party tools, IDE and Jupyter extensions in particular, are often of poor quality/WIP, are hard to use and can, at times, break my environment. For this reason, when a tool requires that I install additional packages, I am usually pretty reluctant to give it a try.
Curiously, even seemingly simple tools, like variable explorers, are usually implemented relatively poorly in most IDEs. I like JetBrains's implementation most, but it isn't without issues.
BTW, does anybody have recommendations for good, reliable developer tools for Python and R?
The problem really is a lack of solutions that just work with no setup required.
The only two environments I have used where things just worked were VB.NET using Visual Studio while at uni and Dart using the IDE/editor they bundled in the early days (I think they discontinued it).
Every other time I have attempted to get a debugger working for a language, I have spent hours without getting anywhere.
I use very few developer tools, mostly because I couldn't get them to stop doing things I hated, or because I couldn't get them to stop doing things I didn't need, or because I couldn't get them to stop doing things I didn't understand. There's a common theme there.
The autocompletion stuff really bothered me because I know what I want to type, my fingers are already doing it, and I had to stop them. I don't want to learn to retype just for this tool. No.
Certainly, some of them seemed like they would benefit from some kind of beginner mode, intermediate mode, and so on. I remember not having touched Visual Studio for over a decade, and then looking at it and feeling like a chimp dropped into the cockpit of an F-15.
I'm the opposite. I had mixed feelings about developer tools, but they got much better once I started to embrace them instead of trying to customize them each step of the way. Once you adopt that approach, it's hard to go back.
I am used to completely not worrying about formatting. Sometimes if I want some fast code I type it in a single line with semicolons, press Ctrl+S and VSCode automatically formats the entire file according to the "official" language formatting guide.
Same with autocompletion. I program in OOP languages, so I really like long names such as AbstractMachineDevicePool. I really like that I can write AMDP, press ctrl+space, enter and it puts AbstractMachineDevicePool into my code.
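For the curious, the abbreviation matching described above can be sketched in a few lines. The identifier list and the matching rule here are made up for illustration, and are much cruder than what a real IDE does:

```python
# Toy version of IDE "camel hump" completion: match an all-caps
# abbreviation like AMDP against CamelCase identifiers by comparing
# it to the identifier's capital letters.
import re

def camel_hump_match(abbrev: str, identifier: str) -> bool:
    # Extract the "humps" (capital letters) of the identifier.
    humps = "".join(re.findall(r"[A-Z]", identifier))
    return humps.startswith(abbrev)

# Hypothetical candidate identifiers in scope:
candidates = ["AbstractMachineDevicePool", "ApplicationModelProvider",
              "ArrayMemoryDump"]
print([c for c in candidates if camel_hump_match("AMDP", c)])
# -> ['AbstractMachineDevicePool']
```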
Ironically, and perhaps opposite to the point you're trying to make, this is part of the reason I prefer to evolve my Emacs towards being an IDE than to use an actual IDE. Bolting advanced tooling onto Emacs takes lots of ongoing work, but at the end I can be damn sure I actually understand what the tooling is doing, and why - and I can easily patch any piece of it to stop doing things I hate or don't need.
I can totally sympathize with this idea. When you pick a tool to make a specific task easier, you want it to do exactly that.
I see this all the time in conversation (with younger developers, mostly). They find it very difficult to understand how one can possibly survive without an (advanced) IDE, and most likely look at me as exactly that chimp.
I'm a very basic user of IDEs. I don't use almost any of their advanced features, nor do I memorize anything beyond the most basic shortcuts. I can't even justify the investment to learn them properly, as context may require me to change IDEs (and that context is often an accumulation of grievances).
On the other hand, I live inside the CLI, have no trouble composing basic tools to do what I want, and have been doing so for ages.
I'd much rather have a toolbox of individual tools that I can choose from and mix-and-match over time, than a fancy all-in-one powertool that would trap me with sunk costs like a bad marriage.
I understand where you're coming from, but it also sounds similar to insisting on hand saws and hammers, because chainsaws and nail guns have too much complexity.
My research is also on dev tools, and I have tried to get my research findings adopted. It is hard. Companies often want to release incremental, small features, not major UI overhauls.
Even when I've done research at companies or been funded by them to study their products, it is virtually impossible to get it adopted into the product. Again, the incentives and motivations of PMs are very different than researchers.
> Companies often want to release incremental, small features, not major UI overhauls.
Speaking as a product leader who has been in a position to decide on these, I'm not opposed to major UI overhauls, but many that have been proposed to me:
- Are motivated by a design team that wants to justify their jobs or make a mark on a product, with the UI considerations secondary.
- Have major issues and design weaknesses themselves.
- Throw out important aspects of functionality in an attempt to be "fresh and simple".
- Ignore that we had our last major overhaul 12 months ago, and that one was in response to the major overhaul 24 months ago.
Requiring evolution rather than revolution is a hedge against these problems. I'm not generally opposed to overhauls because I'm familiar with the importance of UI. For example, I'm very unhappy with Robinhood as a trading platform, but damn, switching to Fidelity is painful because their UI is so terrible.
I wish I more consistently got to work with talented designers. I put a lot of effort into getting good designers, as poor designers can wreak havoc.
Great points. I also want to add that it's extremely important to try not to disturb power users when your target audience is developers. They (us) use muscle memory to do things; they know exactly where things are and how to achieve them... move something a few pixels or bury it in a sub-menu, and you break someone's workflow, which is really annoying and requires days or weeks of re-learning.
Major UI overhauls in developer tools like IDEs would be disastrous IMO. Jetbrains does it very well IMO: small changes on every version, mostly cosmetic so the tool keeps looking modern, but very rarely anything that breaks power user workflows. But that still happens...
I remember two recently: they changed the search UI so that Cmd+O on Mac would no longer find files (you need Cmd+Shift+O to include files now), only type names. Because sometimes we copy the file name from a log or stack trace, that was annoying as hell to remember to press Shift to achieve the previous behaviour (the justification was that the search dialog became much richer and categorized now, which is fair enough).
Another one: they started pushing a new "non-modal" commit dialog... similar to how VSCode does it... I hate that, I just want my modal dialog as that makes it easier for me to separate what's actual source file and what's just a "view" of the diffs! But luckily, as is often the case, they have an option to use the previous mode which I love so much, so that was just a minor inconvenience while upgrading.
"They later wrote a Java version, jRMTool, but it’s only for an old version of Eclipse with a completely different API. The code is written in Java 1.4, and is no longer even syntactically correct. I quickly gave up trying to get it to run."
"A few years back, my advisor hired a summer intern to work on MatchMaker. He instantly ran into a barrier: it didn’t work on Java 8."
"A couple years ago, I tried to use the Java Whyline. It crashed when faced with modern Java bytecode."
He's answered his own question. Developer tools wear out quickly as their environment changes. Unless they have support that keeps up with the development environment, they die.
I completely get how hard it is to sell tools to developers, but I am not sure everyone gets the why.
I would pay $1000 tomorrow for an Emacs that doesn't flicker and is several times faster, and is easier to extend and modify -- and I'll give the money happily, convinced that I made a fantastic bargain.
I'd happily pay a monthly subscription for a compiler SaaS that makes my production binaries as small and fast as possible (and be integrated in a CI/CD system so the binary never touches my computer and just gets deployed to production).
But there are three problems.
1. The commercial entity developing the tool might fall on hard times and give up. What are the programmers relying on the tool supposed to do? Most companies never open-source their stuff once they go belly up, either. You will be stuck with a non-updated tool whose value (and thus part of your value as well) will only decline with time.
2. Nobody is interested in investing real money and professional programmers to make the open-source and free software better. You have stuff like `semgrep` and many others that are barely limping along -- and they deserve A LOT OF LOVE! We will all benefit from them! But the incentives just aren't there for the players who can make the true difference.
3. Our employers are unwilling to absorb the costs for better tooling. The minute you ask for a commercial license for something you are met with "why are all other 20 million Python programmers doing just fine without it?" and the discussion immediately falls apart.
In order to never suffer from #1 we all stick to the lowest common denominator which is both logical and sad.
To solve #2, charity funds should be given to professional teams that specifically tackle tooling.
Not sure #3 can be solved. Culture changes are extremely hard and even when they happen they are as slow as a glacier moving. On the other hand, I've paid for commercial tools in the past when I was convinced they'll increase my productivity. So sometimes we should view this as an investment.
#3: someone always pulls out the "Linus Torvalds programs using nano. Real programmers don't need better tools." line. And our culture is extremely toxic and bro-y, making it impossible to admit that maybe we're not all Torvalds.
#1 is not the issue, or rather, it's the same for any tool or software you buy; it has nothing to do with dev tooling specifically.
#2 is a separate issue entirely.
#3: If your boss has difficulty seeing why it's good to spend $100/year on something that saves you a lot of time, find a new job. But I suspect that's not usually the case, and that there's simply a disconnect. Business managers generally are not dumb; just do the math: "this tool gets X% more done or saves Y% of our time, which gets us $Z in value" is a very good return for our investors.
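To make that math concrete, here is a back-of-the-envelope version of it in Python. All the numbers are hypothetical, not figures from the thread:

```python
# Back-of-the-envelope ROI check for a paid developer tool.
# Every number below is a made-up assumption for illustration.
def tool_roi(salary_per_year: float, time_saved_pct: float,
             license_cost: float) -> float:
    """Net annual value of a tool that saves `time_saved_pct` of a
    developer's time, given their fully-loaded yearly cost."""
    value_recovered = salary_per_year * time_saved_pct
    return value_recovered - license_cost

# A $100/year license that saves even 1% of a $120k developer's time
# pays for itself many times over.
print(tool_roi(120_000, 0.01, 100))  # -> 1100.0
```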
I feel like every library, every framework, every programming language, compiler, database, VM, debugger, logger, tracer, metrics monitor, editor, source control system, artifact manager, build tool, dependency manager, deployment automation, etc. is an example of a developer tool that is not collecting dust.
My guess is that the tools which try to provide program synthesis just don't have good ROI: too costly to build and maintain in a working state, and too hit-and-miss for the user. That's why they've been collecting dust.
So let's get this straight. People on this thread work as programmers, and want to get handsomely rewarded for their work by their employer, but are, all too often, too tight-fisted to pay for tools -- made by people in the same industry as they are -- which will make them do their job better and deliver better results. What does this say about people in the profession? Shameful.
If somebody creates a product which I don't view as being worth the asking price I have no obligation to purchase it, regardless of whether I share a profession with the producer.
There's less money in developer tools than in selling to the general public.
I've seen some wonderful products, but it seems to be at odds with reasonable pricing because of the limited number of sales available.
I've seen some really interesting tools though. I remember trying Purify once, decades ago, and it was like magic. It unobtrusively inserted itself into your code and found memory leaks and faults you never realized were there. And then I asked about pricing, and it got into this complicated per-seat licensing nonsense that scared everyone away.
Same thing happened with Electric Cloud. I remember trying it out and it was very well done. It unobtrusively helped you run make. It figured out all your dependencies for real (it instrumented the filesystem accesses) and magically parallelized all your builds. And then pricing killed it all.
I think a Netflix model would have worked better. Provide a set of heterogenous dev tools for a low monthly fee, fund lots of individual tool developers, and be the tide that raises all the boats.
(Coincidentally, Purify, I believe, came from Reed Hastings, who also did Netflix.)
Taking a tool from an academic research project to an actual product is very difficult. This article points out several of the ones left by the wayside because the final leg of that journey is boring, tedious, and grueling. It’s kind of an obvious situation actually.
This is kind of what I was thinking. Most of these sound useful, but they also sound like situations where the primary end goal was a paper, not a polished, widely usable software product. The practical realities of what it takes for enterprise devs to adopt tools at a large-enough scale to make the tool sustainable are just too much for a single academic paper to overcome. In particular, these sorts of tools feel too specialized to really catch on as a FOSS community project. Developers with limited free time to 'volunteer' usually want to spend that time making "primary" contributions -- the actual systems and end-products -- rather than secondary contributions in the form of tooling.
Hence we see the tools that survive are tools that we first recognize as products, even if they're free. VS Code is a good example, I think. Or, if they're not free, they're paid for (e.g. Jetbrains), and the money makes maintaining/developing the tool sustainable.
Not to wax too poetic but it has the feel of academia trying to start a fire with a flint and not understanding that they have to properly construct the rest of the firestarting kit -- clear space, get kindling, arrange the rest of the wood, make sure it's all dry. They just see the sparks coming off the stone and keep asking "Why isn't this fire starting?"
Let me put it this way. Instead of saying, "Wow, this tool had promise, but it didn't catch on because (??) and because it didn't catch on it hasn't been maintained," the observation should be more like, "Hmm, maybe if this tool had been properly maintained it would have caught on by now."
We happily burn through tens of thousands of dollars in excess aws billing because we have cargo culted that optimization=bad, but we don’t want to spend $500 on an IntelliJ license
A coworker's code kept overrunning resource limits and either getting evicted, OOM'd, or grinding to a halt.
I gently suggested that they profile and optimise the code - and offered time to help. They insisted instead that “oh no we can’t do that! It’s just how it is” and insisted that we instead throw another machine or 2 on the cluster.
Pointing out that fixing the underlying problem would save us money on instances, and that we'd probably otherwise spend more time on cluster management because of this app than even a slight optimisation would have taken, was totally lost on my boss.
2 months after I left: coworker was messaging me asking what we could do to get the costs down.
> About 10 years later, at the Human-Computer Interaction Institute at Carnegie Mellon, Amy Ko was thinking about another problem. Debugging is like being a detective. Why didn’t the program update the cache after doing a fetch? What was a negative number doing here? Why is it so much work to answer these questions? Amy had an idea for a tool called the Whyline, where you could ask questions like “Why did ___ happen?” in an interactive debugger
Am I just confused, or does it sound like program slicing (1979)?
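For readers who haven't met the term, here's a toy sketch of what a dynamic backward slice computes. The traced program and the trace format are invented for illustration; a real slicer works from a program dependence graph rather than a hand-built log:

```python
# Toy dynamic slicing: record, for each assignment, which variables fed
# into it, then walk the trace backwards to answer "why does this
# variable have its value?" -- the Whyline-style question.
trace = []  # (line_no, target_var, source_vars)

def record(line_no, target, sources):
    trace.append((line_no, target, sources))

# Simulated execution of a tiny four-line program:
#   1: a = 2
#   2: b = 3
#   3: c = a + b
#   4: d = a * 10
a = 2;      record(1, "a", [])
b = 3;      record(2, "b", [])
c = a + b;  record(3, "c", ["a", "b"])
d = a * 10; record(4, "d", ["a"])

def backward_slice(var):
    """Lines that (transitively) influenced `var`."""
    lines, worklist = set(), [var]
    while worklist:
        v = worklist.pop()
        for line_no, target, sources in trace:
            if target == v:
                lines.add(line_no)
                worklist.extend(sources)
    return sorted(lines)

print(backward_slice("c"))  # -> [1, 2, 3]
print(backward_slice("d"))  # -> [1, 4]
```

The slice for `d` correctly excludes lines 2 and 3, since `b` and `c` never influenced it.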
On the security assessment side of tech we face similar problems that these types of awesome dev tools could help us solve.
Our clients either:
1) Have no docs (48%)
2) Have outdated/incorrect docs (48%)
3) Have correct and updated docs (2%)
Tools that help understand source code and app architecture would make application security easier: there would be fewer incorrect assumptions, and those doing security assessments would be much more effective and efficient in their work.
It’s a shame there aren’t more cross-language tools: you build your clever new tool for Java, and Python devs get it for free. As I understand it, language servers are an attempt to do some of that. I wonder to what extent that line of thinking applies to more sophisticated things like those OP is talking about. Or are the commonalities between languages only surface-level magic?
Language servers and debugger integration (via Debugger Adapter Protocol) are indeed a way to partially achieve it, but they're generalized only to the extent the protocol generalizes languages and operations on code - you get relatively uniform interface on the client (editor/IDE) side, but all the magic bits normalizing a particular language/compiler need to be implemented on the language server side.
I'm hopeful, though. I think that at least the "Reflexion Models" thing could be replicated with what LSP gives you.
I think the "WhyLine" idea is already mainstream, at least for web UI development.
I remember back in the day, if you wanted to know "why" an HTML element was rendered the way it was, you just had to change/delete random parts of the page and see what broke. After Firebug was introduced (mid 2000s?), you could just right-click to inspect an element and it would highlight the HTML element responsible.
Nowadays, I am reasonably happy with the type of questions that the chrome dev tools can answer with just a few clicks. Why did some network request fire? There's an initiator column that links to the source code. Why did this frame take so long to render? There's a flame graph where each bar links to source code. And there are plenty of extensions you can install to help with the particular level of abstraction you're debugging at (for example inspect React components instead of html elements).
I think that tools that collect dust are just not great. The article gives some internal Facebook tool as an example, but it is really a research paper. I asked some friends at Facebook, and in my (small) sample nobody had even heard of anyone using the tool, which suggests it was maybe just a PR stunt. This is not a problem for research, and it is a great paper, but there's just a lot of overselling in the field.
On the other hand, everywhere I worked, developer tools that did their job well were easy to adopt. JetBrains is a great example.
Developer tools are scaffolding to the product you build. When the code is working, there is dubious value in tracking external tooling that helped get it made.
Note that for some things this is less true. Valgrind and similar tools are an example. But those are few and far between.
More common, are projects that are essentially tied to a developer tool that goes away. Flash is a marvelous example of this kind. It was a very good toolchain to build things. Heaven help you if that is what you built up your skills for. :(
And this is ignoring projects that only build in the ide.
I used to be an academic, so I can appreciate how much hard work goes into writing software to support research.
A key problem is with this claim:
"The programmers with the WhyLine were 4 times more successful than those without, and worked twice as fast."
That tells me the whole story. We have a researcher who developed a tool, then did some controlled study (involving students?), wrote a paper, and then moved on to the next project. Nothing wrong with that from a research point of view, but you should moderate your expectations. These tools are prototypes and effectively abandonware. There is no community of users behind them. There is no community of developers behind them. That was never the intention. It's not even clear if they were ever supposed to do more than help argue some point made in a paper. These were never intended as proper tools but merely as prototypes for tools. Call it an MVP of a tool. There was nobody to take it from MVP to reality, so the thing predictably went nowhere.
From an MVP to something that a developer would use is a whole journey. Mik Kersten, who is called out in the article, did exactly that. I remember his name from when I was still an academic, 20 years ago, when he was involved with things like AspectJ, a research project that made the successful transition to the real world. He was part of a development team associated with a research group that did some awesome work. AOP survives to this day as a key part of e.g. Hibernate and Spring. That's no accident. Mik and others worked on these and other projects for years, with active support and a community of users. There were a few similar projects out there without this dedicated effort, and they've slid into obscurity. E.g. IBM's subject-oriented programming comes to mind.
This kind of effort never happened with the tools in this article. It takes more than releasing some tool and then expecting user adoption. No offense, but academics tend not to be great developers. It's one reason I left the academic world: I had more than a little impostor syndrome telling others how to develop software (my field was software engineering) when I knew I did not have a lot of bona-fide software engineering credentials, because I had not engineered a whole lot of software. I've since fixed that problem.
> I’ve argued before that this is because the difficulty of building tools depends more on the complexity of programming languages (which are extremely complicated; just see C++) than on the idea, and that, until this changes, no tool can arise without enough sales to pay the large fixed cost of building it.
Language servers helped with some of this. You can write a standard language server that does a very good job and pool the talent of the entire community to add small useful features, instead of having to write a new implementation even if you just want to create a new editor.
Just the other day I saw a comment on HN about wanting something like the Whyline, as well as a way of seeing whether a function has any possible path of being called to/from the current function. Those are very useful features, but they would require a large platform of language analysis to build on.
It would be nice if language servers had that as a supported feature, but it seems like a difficult problem to solve. The naive approach would be to store every mutation in memory along with where in the code it happened, but that might have horrific memory and performance costs.
For the ancestor/descendant checks, the naive approach would be to have a recursive "find usages" check that checks if it ever hits the target, but, again, going anything past a trivial number of recursions could have an explosion in memory usage and time cost.
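The naive ancestor/descendant check described above might look something like this over a toy call graph. The graph is a stand-in for what a real tool would extract from the code (e.g. via a language server's find-usages), and a visited set keeps cycles from causing the blow-up the parent worries about, at least for the reachability half of the problem:

```python
# Naive "can function src ever reach function dst?" over a static call
# graph. The visited set caps the work at one visit per function, so
# recursion/cycles don't explode.
# This call graph is hypothetical, invented for the example.
call_graph = {
    "main":      ["parse", "run"],
    "parse":     ["read_file"],
    "run":       ["step", "log"],
    "step":      ["run"],        # recursion: a cycle in the graph
    "log":       [],
    "read_file": [],
}

def reaches(src: str, dst: str) -> bool:
    seen, stack = set(), [src]
    while stack:
        fn = stack.pop()
        if fn == dst:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(call_graph.get(fn, []))
    return False

print(reaches("main", "log"))    # -> True
print(reaches("parse", "step"))  # -> False
```

The memory-hungry part in practice isn't this search; it's building and keeping an accurate call graph up to date as the code changes.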
Honestly, my experience is that developers are, on the whole, incredibly conservative about changing their ways. When I was still an active Java dev, I would, along with a couple of other peers, solve a lot of problems in one of our codebases with the YourKit profiler, but the use never spread - people would prefer to grind out the thing they already know and thrash around blindly to spending a day or two learning an easier way of solving a problem.
Granted, if I had a nickel for every profiling tool that claimed to make code easier, more secure, faster, etc. but didn't live up to its promises, I wouldn't need to work anymore.
It's great that you found one that worked in your particular situation, but my experience has been quite the opposite. And it's not from a lack of being forced to try.
We aren't incentivized to maintain research prototypes. On the flip side, industry seems to have very little interest in adopting our findings into products.
> We aren't incentivized to maintain research prototypes
There are still examples where research prototypes have evolved into successful products, either as open source projects or commercial companies. The researcher needs to be passionate enough about getting adoption and real-world impact in order for this to work.
> On the flip side, industry seems to have very little interest in adopting our findings into products.
IMO, developer tools need to be wayyyy better to overcome inertia. This statement might not hold in other fields, e.g., ML/AI.
Any manager worth their salt would invest in a tool for 4x productivity. Facebook and Google spend a fortune on internal tooling. And yet these tools are forgotten. That’s why I’m suspicious of the reported gains.
As someone who is thinking of releasing a dev tool, I came up with a fresh (silly?) idea I want your opinion on: make the tool free for individual devs (MIT or a similarly permissive license), but oblige non-tiny companies (10+ total employees) to donate an arbitrary sum of money annually. Even 1 cent or 1 dollar will do; the idea is that if they go through the hassle of asking their accounting department to do a money transfer, they will probably pay a little more, especially if their devs praised the tool. One question I cannot answer is how to put this in words, especially legal words. Another is whether it will really work or whether I'm just daydreaming. Even if they stick to the minimal payment of 1 cent/year, it's still sort of great, because I have an official customer and can list them on the "used by" page for an extra bit of advertising.
> to donate annually an arbitrary sum of money. Even 1 cent or dollar will do, the idea is that if they go through a hassle of asking their accounting to do a money transfer they will probably pay a little more especially if their devs praised the tool.
My experience is that procurement at mid-sized companies and larger will not like this. You either need to be a small enough expense that someone can expense it or you need to have a pricing model that makes sense to procurement. If you are the former you make it challenging for individuals to buy licenses for their full company, which is an additional problem.
Seems like a narrow view of developer tools. Isn't this the bread and butter of GitLab/Microsoft and others? I work in a dull industry and the perception isn't we need new tools to understand our abstractions. The main challenges I run into are that we need more staff to run and maintain new ways to work with our APIs. How do we keep track and find our APIs? Who owns the database? Am I allowed to connect to this database? My main job is to figure out how to make an on-rails experience so that when a developer leaves after two years new ones aren't relearning the flavor of the month pattern or tech someone wanted to try. It's a broad enough problem I'm sure it can be productized if you keep it general enough.
I am just surprised, after all this time, there is not a mainstream language+runtime that is intrinsically based on full state management. There are glimmers: commercial debuggers for Java and JavaScript that support "stepping backwards," and the magnificent Redux DevTools (which I loved for its visceral usefulness, but stopped using when it seemed it was no longer mainstream, plus Redux boilerplate turmoil, ugh).
A high level language streamlined on such an approach could be extended to be like executing within a git codebase, with a very visual experience. But the language and runtime need network effects.
I don't have a CS background, maybe this already exists and is not too exotic for full-stack work, I'm all ears if so.
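A minimal sketch of the "full state management" idea, in Python for brevity: if every state is an immutable snapshot produced by a pure reducer (the Redux model), stepping backwards is just indexing into history. Everything here is a toy; a real language/runtime would need this at every level, which is exactly the network-effects problem mentioned above:

```python
# Redux-style store: a pure reducer produces a new snapshot per action,
# and keeping every snapshot gives time travel for free.
def reducer(state: dict, action: dict) -> dict:
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "set_name":
        return {**state, "name": action["name"]}
    return state

class Store:
    def __init__(self, initial):
        self.history = [initial]   # every snapshot kept, git-log style
    def dispatch(self, action):
        self.history.append(reducer(self.history[-1], action))
    @property
    def state(self):
        return self.history[-1]
    def step_back(self, n=1):
        """'Reverse debugging' is just looking at an older snapshot."""
        return self.history[-1 - n]

store = Store({"count": 0, "name": ""})
store.dispatch({"type": "increment"})
store.dispatch({"type": "set_name", "name": "demo"})
store.dispatch({"type": "increment"})
print(store.state)         # -> {'count': 2, 'name': 'demo'}
print(store.step_back(2))  # -> {'count': 1, 'name': ''}
```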
I started a small company making software components. It was tough getting people to pay for them. A few years later, big companies started doing the same thing for free, so that was the end of that.
Back in a regular job, my employer bought Rational Rose: it was super expensive and one of the worst-designed, slowest pieces of junk I've ever used.
Now there are so many free libraries and tools around that I'm never paying for anything again.
I used to teach Alice to kids, and I remember being confused about why the Whyline existed. No one ever used it, and the teaching books did a poor job of highlighting its importance. It's a very excellent idea, and some of it is done in modern languages like C#: the compiler is the syntax checker and can show unreachable code, bad access, etc.
The tools shown in the article are not IDEs or editors but more like static analyzers. I think that an increasing number of people want to check if the finished product satisfies what they actually want, and there will be a sizable market for these if security/reliability auditing is going to be a normal thing.
Programming language interpreters are tools. How do Python, Node and Ruby survive? How about the core tooling? Are the devs well paid to work on this? Do they do it just because they love it? How do the foundations around these tools get money to support their work?
One of the few developer tools that genuinely made it from academia to production were Martin Fowler’s automated refactoring tools (first for smalltalk, then Java).
If the program had been written to manipulate LLVM bitcode instead of Java bytecode, it would have fared even worse.
It would be interesting to know the specifics of the syntactic incompatibilities in the Java 1.4 program, since Java is usually extremely conservative when it comes to syntactic changes.
These tools are mostly useless because they take time to discover and learn to use. In the meantime you could have solved the problem. Developers who waste all their time searching for a magic tool are not productive.
They're also often not integrated into whatever environment is in use. Which means the developer has to be sure to save their files, open this other tool, which has often not been updated to new standards, wait for it to load their JAR files or whatever, and then use a ten year old interface to use the "magic".