But I'm getting the feeling that a lot of people worry a lot about the runtime. They break up their applications into microservices not to solve a problem, but because they believe it's the best thing to do because $reason. But you shouldn't do that until you're actually running into an issue and have exhausted the other, simpler options.
I mean I wouldn't mind having or experiencing an actual scaling problem directly, but so far I've never seen it. I've seen a LOT more organizational and technical problems stemming from intentionally building a microservices architecture though.
But building just the one codebase is dull.
But as you can see in the comments in this thread, the coolness of microservices seems to be slowly fading. Feels a bit like the state of NoSQL in 2014.
But when you are building systems it becomes slightly more complicated, especially ones where the data science team writes their code in Python / PyTorch and the application team writes theirs in TypeScript.
It’s really annoying.
The solution, however, would be to hire a small number of really good developers and have code reviews run through them - hopefully teaching the more average developers how to code better in the process.
At least one of them will be guaranteed to be built with some technology/language (probably Scala or Elixir) that one of your engineers (who's since left) was interested in for a few weeks.
Asking questions in interviews like “how many namespaces are you managing or expecting to manage”, oftentimes teams turn out to be talking about a single microservice and bringing in k8s “just in case”
To which I’ve oft said “good luck in your endeavors”
From the resume-driven-development angle, probably. I do the same though.
If you prefer to work in a sane environment though, wise choice.
The problem is that companies with poorly designed monoliths hit complexity problems more quickly and more often. If software architecture is not a core competency, solutions architecture probably isn't either, and poorly designed microservices are a completely different beast. A distributed monolith is inherently more difficult to maintain and operate.
If you only need to understand and maintain one of them, sure.
That would be extremely rare though. You have to understand the entire system and all the interactions between all the parts. At which point the 10x complexity of excessive microservices will kill you.
Maybe if you're the CTO. I've never seen that in the case of individual contributors.
You're not likely ever going to be at the same level of detail on everything all at once, and there can still be fuzzy parts - third party partners in particular can be difficult, especially if the org is not good at making known who's the SME for what - but you absolutely can build an understanding of the architecture as a whole, and that makes you able to dive in and work on just about anything without needing much spinup time.
(Granted this assumes good technical and architectural leadership early on that has specified standards for interservice communication, intraservice code design, what layers exist and how they interoperate, etc. That said, Wild West microservices might still be less painful to work on than a Wild West monolith - I don't actually know, but I could come up with arguments either way.)
I'm thinking more from the perspective of a small team supporting a large operation, because those are the kinds of situations where I've spent a lot of my career. FAANG scale would be a novel experience for me, as I suspect it would for most.
Not to say you can't get in trouble that way, for sure. It's its own kind of tradeoff, not least in that you need more ops support than a simpler infrastructure would require, and you need someone with the architectural vision and authority to ensure things don't get out of hand. It's definitely not where I'd want to start with anything, and you need a strong team to make it work, but I can say from experience that it can work amazingly well. Like, about twenty people supporting about $2B ARR well.
(Sure if you're google sized, nobody will understand or even know the entire system. But nearly every company on earth is smaller than that.)
We have to keep in mind that not all software systems are designed by savants, and even the ones that are might be prevented by business needs from being rearchitected when they need it.
Microservices are generally less complex. They can appear more complicated because companies who choose to transition usually do so after hitting a problem. These problems are often due to poor monolith design.
Teams that design poor monoliths typically have proficiency in neither software nor solutions architecture.
I still haven't done anything with k8s, but I suspect it's the solution to this problem, even if it would take that same herculean effort to finally get everything moved to using it (we don't even run containers, we run our own application delivery system that has some similarities without the process isolation, because we've been doing so since before Docker existed).
That feels like a self-inflicted wound that will now only get worse with k8s.
What's the alternative? It's not like we're some web property with a single application; we're an ISP, we sell internet services, and we've been around long enough that we still offer traditional ISP services (many for free) such as email, hosting, VPN service, etc., as well as telecom services. These often need to communicate with each other in some way, as well as with many back-end services we've written to serve our staff.
Microservices come with problems, but honestly I can't see a large monolithic app (or even a few of them) being much better development-wise, even if it might have been easier to test and set up a development environment.
So far I would not say it is worth it for most people. I also hate how much the community just runs random shit from the internet without any strategy for security patches or digital signatures.
Fixing up the Docker images will give you some variety outside of deciphering the cut-and-paste cryptic mess in Helm charts.
Too much monolith ends with abstraction inversion, which is a Bad Thing.
Refactor that estimate by an order of magnitude.
Most are already legacy next year. (Sadly I'm in that situation too, migrating off an expensive AWS cluster to a single Jenkins machine...)
Primitive is not the right word to use here. Low-level, perhaps, but the antonym of high-level is very rarely “primitive,” depending on what you’re doing. Yours is an interesting take given the article’s context.
I’m familiar with one system, initiated in the mid-1970s, with 15 million lines of COBOL. The vast majority is cut-and-paste dreck that does god knows what.
There’s lots of bad C code, but in many ways the COBOL stuff is the equivalent of some crazy collection of interlinked Excel files.
Hiding the computer from the programmer is not automatically better in the general case. It’s better in many cases, usually in industry. That’s a subtle distinction, and fully understanding it will command a salary differential going forward, for a few reasons. Not understanding it is how you get a $500k AWS bill for Spark to process a few billion records that fit in RAM on a laptop.
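For a sense of scale, the back-of-the-envelope math (numbers picked for illustration, not from any actual bill):

```python
# Rough sizing: do "a few billion records" actually need a cluster?
# Assumed numbers for illustration: 3 billion records, 8 bytes each
# (e.g. a single int64/float64 column).
records = 3_000_000_000
bytes_per_record = 8
gib = records * bytes_per_record / 2**30
print(f"{gib:.1f} GiB")  # 22.4 GiB - fits in RAM on a well-specced laptop
```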
I once spoke to a consultant who in a past life fixed big COBOL projects. He said that one of the common technological and organisational challenges he faced was getting people to use ‘malloc’. Apparently there are COBOL programs that just SEGFAULT when the input reaches a certain size, and some developer has to change an array allocation size. COBOL developers would be hostile to the idea of dynamic memory allocation, and he’d have to explain that the rest of the world had been doing that for the last 30 years.
Some COBOL dev came in and proclaimed they could do websites just like anybody else. I asked what they did when input became too big. “Oh well. The transaction will dump a bit, and then the duty operator will find some dev to look at it and increase the buffer” was the response.
The whole security aspect of the buffer overflow was completely alien to them. The idea that a random internetizen would try to attack the site was unthinkable, let alone that they might come from another country. A botnet? Of internet-connected stuff? That happens only on TV. Besides, with 4 billion IP addresses, the chance an attacker would find us was minuscule.
Lucky for me they tried to sell that theory to the infosec team, then got shouted at for half an hour.
Fortran’s allocatable variables can’t leak memory, can’t be used after being deallocated, and can’t be deallocated twice. Bounds checking is also trivial and all decent compilers have a flag to do it automatically.
That said, it is not unpleasant to use for numerical codes, when you’re not working with people stuck in 1977 (so, no common blocks, everything in modules, decent array syntax, etc). It fits its niche quite well, with Python for everything else.
I’m curious to know if these COBOL programs actually depend on IBM mainframes because of that.
Using IEEE floats for money often has... amusing results.
And show people https://0.30000000000000004.com/
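If you want something runnable to go with that link, here's a quick sketch in Python (the same behaviour shows up in any language using IEEE doubles; decimal types or integer cents are the usual fix, and COBOL's fixed-point decimal fields sidestep it entirely):

```python
from decimal import Decimal

# Binary floats can't represent most decimal fractions exactly.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False - the classic surprise

# The same sums done in decimal behave the way an accountant expects.
print(Decimal("0.10") + Decimal("0.20"))                      # 0.30
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
```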
Also see https://stackoverflow.com/questions/30215023/how-cobol-handl...
See for example this, a software float for crappy ARM processors.
I was more so wondering if a COBOL compiler (given how low-level the language is) makes a lot of assumptions about the platform. Even assumptions about potential quirks.
And I never imagined an x86 could deal with decimal floats.
Just for your reference about x86 and BCD.
Nobody in the top 50 CS programs worldwide teaches COBOL. Most of the COBOL programs run on mainframe where it's at least a few k$ to even get a look at the documentation or hardware. So nobody bothers with it.
And most (older) programs don't follow modern conventions either. Custom application-specific databases? Check. Lost or incomplete source code? Check. No version control? Check.
And most folks running these programs are in orgs that are chronically underfunded anyways.
Or Object Star.
These can be done better than a seat-of-the-pants NoSQL database, but people also got creative in the past.
I've seen all of those databases being used in legacy government departments.
And nobody does that on mainframes or using COBOL.
Not doing COBOL but am working with an ancient application supporting business operations and this is my hell. Also, the databases are likely ancient, outdated, and poorly documented (at least according to my PTSD).
"Oh it uses an oracle db? What version? 10? FML - you know that versioncan legally drink in Canada, and the EOL for that product is going to have a bar mitzvah in a couple years, right?"
My Dad was the consultant IBM hired to fix issues in System/32/34/36 and AS/400 software for customers in Australia. He had a title related to RPG, but I don't know what it was. Growing up we had a (functioning) System/36 and later an AS/400 in the shed. He's never stopped developing, still works as a .NET developer, but hasn't been able to get through the recruiter filter for RPG and COBOL jobs, possibly because he hasn't worked with recent generations of these tools, even as he is getting hired for .NET jobs on major systems.
That's pretty neat. RPG is apparently the default high level language used on IBM midrange systems (AS/400).
I looked at a beginners tutorial here: https://www.ibm.com/support/pages/coding-rpg-iv-beginners-tu...
It seems very similar to other procedural languages, and one of the examples shows a very easy FFI interface to call out to C. They make a wrapper for printf.
But yeah, as far as mainframes go, at least here in the US, IBM - and specifically AS/400 - is pretty much the only game left in town.
The product line supports IBM mainframes and a few other platforms, so I assume it's easier to target the mainframe first and then the other platforms.
Then there's a bunch of mainframe stuff still running around on IBM clones (especially in Japan, afaik). Unisys is still up, and there are still active Burroughs Large Systems deployments (now under the ClearPath name, I think?) and a few other niche systems.
I think Unisys is still around...
They would have had to make it open decades ago. Now it's too late.
I am surprised how many government/bank systems run on 50s-70s code; it makes you wonder whether we've kept in practice at writing those kinds of things since then.
Probably not, but they are machines designed for throughput, with a multitude of intelligent peripherals that pipe data in and out of the CPU, which will be running at close to 100% speed all the time. The CPUs themselves are quite impressive (they present at Hot Chips regularly), and each cluster of 4 sockets shares almost a gigabyte of L4 cache. Everything is also constantly checked, audited, and logged for security and reliability.
I wouldn’t be surprised if these machines thoroughly smoked a rack of souped up x86s at a similar budget in transaction processing applications.
You can buy a truck anywhere and run it on almost any road. The locomotive has to be from the one locomotive company and good luck running it outside of (really expensive) tracks.
The way I’ve heard it described, it’s not the features of a mainframe that sell it now; those are commodity. Instead it’s the existing code that ties companies to mainframes: they don’t want to rewrite logic whose details they likely don’t actually understand, especially when it’s already been tuned and tweaked to work for their business. To that end, the future of mainframes is probably virtualization of hardware and improved static analysis tools to better support codebases we don’t understand because they’re decades old.
A lot of companies are running applications that rely heavily on AWS managed services. If you're tied to DynamoDB or Athena, you're in no better position than being tied to IBM mainframes.
Fast forward years later, why would they pay the mainframe tax and lock themselves to a single provider?
This bit of the article struck a chord with me: "The issue is institutional knowledge — when the people who wrote an application 20 years ago leave, the remaining people often don’t know the application with anything close to the same intimacy."
Code readability, testing practices, and the types of patterns employed (or really the types of anti-patterns avoided) are more important than the language used if you expect anyone to be able to maintain a system over time. These are like embedding artifacts of institutional knowledge in your code to guide future maintainers. The choice of language alone doesn't carry any institutional knowledge.
This is very true, but also a bit of a fallacy - in particular for inexperienced developers.
Saying it the way you did can create the impression that the choice of language is not important as long as the mentioned properties are taken care of. But it leaves out that these things are not independent of each other. The choice of language has a huge impact on readability, testing and patterns (all else being equal).
I would phrase it the other way around: because code readability, testing practices, etc. are so relevant, it is important to choose a language that makes these things as easy and productive as possible.
Then there is the whole-stack REPL development experience, for which PowerShell + .NET + COM/UWP on Windows is probably the closest we get today.
Or if you want a closer example, RAD development tools for the desktop versus what SPA still aren't able to offer on the Web.
Yes, many organizations have made the mistake of switching language in the hope of solving their problems with a messy code base, when it was their own bad practices that were at fault all along, and then consequently just repeated those bad practices in another language.
It is not the language that is the limitation, it is the programmer.
Well, in a world where "talking to resources" is most of what we want software to do, that doesn't sound great, frankly. You can see how that would cause problems with people who are trying to use a website to interface with an unemployment system written in COBOL, which is the example the article leads off with.
Can someone knowledgeable of COBOL expand on this? It sounds... not quite right, like it's missing some surrounding context. I feel like plenty of C's common security problems could be boiled down to "just reads and writes".
My take on that comment is that COBOL is very popular on mainframe systems, and those mainframe systems work around the concept of transactional batch processing.
Processing is done in batches, and the batches are controlled by a transaction manager (CICS, for example).
In that model a COBOL program is just a step in the processing and is fully controlled by the transaction manager.
So in that sense it does just read and write data as one step in a much bigger batch of steps.
As such the security is handled by the transaction manager layer.
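A very loose sketch of that control flow in Python (an analogy only, not how CICS actually works; the point is the inversion of control: the manager owns I/O, commit/rollback and security, and each program is just a step):

```python
# Loose analogy only (not real CICS): the transaction manager owns input,
# output, commit/rollback and security; each "program" is just a step that
# transforms one unit of work and never does its own I/O.

from typing import Callable, Iterable

def transaction_manager(records: Iterable[dict],
                        steps: list[Callable[[dict], dict]]) -> list[dict]:
    committed = []
    for record in records:
        try:
            for step in steps:           # steps never pull their own input
                record = step(record)
            committed.append(record)     # "commit" the unit of work
        except Exception:
            pass                         # "rollback": discard the unit of work
    return committed

# Stand-in for a COBOL program: reads a record in, writes a record out.
def apply_interest(rec: dict) -> dict:
    return {**rec, "balance_cents": rec["balance_cents"] * 101 // 100}

print(transaction_manager(
    [{"acct": "A", "balance_cents": 10_000}, {"acct": "B", "balance_cents": 25_000}],
    [apply_interest],
))
```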
Of course there will always be security flaws, anyone who says otherwise is probably trying to sell you something, but reducing attack surface is a good risk mitigation strategy if done correctly.
Free as well! Take it for a spin.
The problem is cases such as this one, where the company seems to think that people of the same age die at the same time - which doesn’t seem to have any plausible basis in fact, or to be a reasonable job requirement.
Wouldn't an actuarial table suffice?
So COBOL is in good company. ;-)
As you very rightly pointed out, no one ever goes, "The networking stack in your smartphone is written in C, a nearly 50-year-old programming language," but governments underfund their IT infrastructure and then go, "Welp, nothing we could do, COBOL is sooooo old."
The amount of mismanagement that gets deflected by pointing at COBOL, and tapping into the largely justified distaste the development community has for it, is a really convenient out for the people responsible for that mismanagement.
This matters, because with C I can connect my existing C software to a web server, which lets me tie ancient software into the modern infrastructure. I suspect that's harder to do with COBOL...
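For what it's worth, the crudest version of that bridge works for anything that compiles to an executable (GnuCOBOL, for instance, compiles COBOL to native code via C): shell out to the old program from a small web handler. A rough sketch in Python, with a hypothetical ./legacy-calc binary standing in for the ancient software:

```python
# Minimal sketch: put any legacy executable behind HTTP by shelling out to it.
# "./legacy-calc" is a hypothetical binary, not a real program.

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class LegacyBridge(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Feed the request body to the legacy program on stdin, capture stdout.
        result = subprocess.run(["./legacy-calc"], input=body,
                                capture_output=True, timeout=10)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), LegacyBridge).serve_forever()
```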
The latest COBOL revision is from 2014, and there's even Visual COBOL, https://www.microfocus.com/en-us/products/visual-cobol/featu...
What's really interesting to me is that SQL and ML were created in 1973 and 1974 respectively. And yes, those languages have evolved over time, and yes, they have some warts and telltale signs of age... but overall they still are considered very high level languages, and are still incredibly popular in their own right.
For a recent project, I chose F# (an ML descendant) and SQL, not because I had to integrate with legacy software written in those languages, but because they're still great languages.
It was always designed for business type applications.
Back in the 1960s, assembly language was used for writing missile defense systems and other systems programming [later C and Ada, etc. became prevalent].