Every time this topic comes up, it reminds me of a web app I wrote back around 2007 that was deployed to over 2000 locations. I deliberately used "boring" technologies. The entire front-end used under 100 lines of JavaScript. The backend was simply SQL Server, and the queries were written in SQL instead of some ORM. The output was just HTML. No special tooling was used, no "minification" or "tree shaking", or any such thing. Just hit the build button and "copy to deploy".
For about a decade I used to turn up to that customer annually for "maintenance", which primarily involved importing some CSVs that changed every year, and also updating the logo images and icons to match any rebrands.
In that time the system had two million users, went through 4 OS upgrades, 3 database upgrades, and the 32-bit to 64-bit transition as well. The underlying runtime had 3 or 4 major updates, depending on how you count it.
Zero outages, no problems, only the occasional performance regression that could be fixed by poking the database statistics to get things back in their groove.
The problem was...
You see, all of the above was a problem, because it didn't keep me employed. I was not the "hero" for saving the day. Entire teams of people weren't involved. There was no visibility at the senior management level. Nobody got scared, or had to throw money at it, or hire consultants to review it.
So it had to go.
It was replaced by a system that cost about 500x as much (a 9-digit sum), got rolled back for failing to meet requirements, and then got additional funding and was eventually forced upon its hapless users.
That, apparently, was doing things "properly". That got everybody involved. Everyone got their beak wet. All the way up to government ministers and their lobbyists. Multiple consultancies were engaged. Reports. Audits. Reviews.
This is why we can't have simplicity: because it doesn't scale.
Isn't the moral of the story that you didn't charge for it correctly?
If you had charged a fixed annual maintenance fee, then you would have felt very clever, having made a ton of money without having to do anything.
Administrators would have also felt good that they had you as insurance of sorts, because the way things stand you make no money and can disappear at any moment.
I think they would have still been uneasy that he was a SPOF, even if he had charged a lot more. And there's also a cognitive disconnect if a one-man deliverable costs above a certain threshold.
The SPOF fallacy is always management's favorite way to waste money.
At my current shop we had a system maintained by 1 guy, as like 10-20% of his job/time. I can guarantee you he is not paid even $500K.
This of course was deemed bad & risky, so we had to engage a set of vendors to deliver a replacement. 2 years, a dozen subcontractors, and a 7-figure annual bill later... and they still haven't replaced that 10-20% of this SPOF's time. They've literally spent $5M so far against at most $100k of this guy's salary. There are still no signs that the SPOF can give up the responsibility even next year.
No one has been fired over this. In fact the decider has been promoted.
Both of these issues are often solved by incorporation.
A one-vendor deliverable can cost whatever, and corporations can certainly agree to keep supporting something for a certain time.
Doesn't actually matter if the corporation has one member, and no coherent plan for what happens to the contracts if that person steps in front of the wrong bus. It makes the relationship legible in the way that the contracting party is comfortable with.
I've seen this at startups. Management starts off informal, and promotes people that have good visibility. In the early stage, these are people that quickly churn through feature lists and deliver solid code.
Later, drunk on its own success, management continues to promote based on visibility. In a mid-sized company, the most visible engineers are the ones that manage to ship broken code, then respond to nail-biting, business-continuity-threatening pages at 2AM.
At this point, the engineers that built the product and keep the lights on vest out, are passed over for promotion, and then leave.
In the next phase, the company's product stops delivering on its core competency, but hopefully it has monopoly leverage, so whatever.
Finally, the big company has a come to Jesus moment, and tries to course correct. This step is fraught with peril. It rarely works, and instead usually leads to a revolving door of process-fiddling / agile-promoting execs. This happens because execs that could solve the problem necessarily realize the root issue is middle and upper management; organizational antibodies pick up on this and isolate such threats to the status quo.
The only execs that succeed at this point are the ones that somehow delegate 100% to low level managers and ICs while giving their peers the impression they are micromanaging and making massive organizational realignments or something.
That's not what he said; it sounds more like anybody (reasonable) could have handled it alone, or for redundancy in a small team. Boring, standardized, battle-tested tech is the opposite of creating a walled garden, and he states (understandably) that this was the problem, for him and the product itself.
Very sad story, but seeing what lasagna-code, over- and under-engineered-at-the-same-time madnesses are put up today, it's more than believable...
Nowhere does the grandparent's post say that it was a "walled garden", or even that it was closed source. The fact that only one person was needed doesn't mean there's only one person available. OP even said he worked for a company in a reply. The rationalisation automatically assumes that the grandparent is either incompetent or lying by omission, which is very uncharitable.
Even if all those problems were true, if it was really analysed as risky, the proper thing to do is to bring in one or two more engineers, perform audits, ask for the full source if it's not available. Ask for documentation. Heck, OP said it's not minified: try to reverse engineer it, if need be. Perhaps it's not even necessary!
There's absolutely no need to bring a 9-digit-sum team to replace a working system made by one person, even if this is common practice in our industry. Not before all other rational avenues are pursued if there are problems.
What also pisses me off is that what happened on the other side might have been caused by companies like the ones I worked for. For a long time I worked for consultancies, and it was routine for sales to "translate" our feature lists into procurement rules (sorry, I don't know the term in English) and give that to companies and government so we would be the only ones able to answer.
And the worst part is that software engineers go along with this tune because they enjoy overengineering everything from scratch so much.
I didn't say it was a walled garden. But management has its own ways and quirks; I said it was possible that the situation was seen by mgmt as a walled garden.
And I answered to that already on my second paragraph.
Taking the nuclear option after merely "seeing [something] as" risky without exhausting the much-cheaper remaining options is not "somewhat understandable, if not plain reasonable". And it's not "ways and quirks": it's incompetence at best or corruption at worst.
This kind of situation might be common, but it is not understandable nor reasonable.
For better or worse there are tons of both reasonable and unreasonable factors as to why a large company would replace a part time developer's side project with something that costs 9 figures.
You don't know those reasons, the person you replied to doesn't know those reasons, and in fact the OP probably doesn't even know those reasons (they "used to turn up to that customer annually for maintenance").
Failing to understand that it can be simply and cheaply fixed by training a second person is gross incompetence. Those single-cell morons should've been fired instead.
Here's a proposal: as organizations grow, the size of a solution becomes increasingly proportional to the size of the organization rather than to the size of the problem.
This is definitely one narrative, but there are lots of reasons that could also contribute to a change. A few:
1. Your system was likely more complex than you describe here. What's generating HTML with dynamic data between the database and the client? Did your "100 lines of JS" have any dependencies?
2. Maybe your company wasn't charging for this simplicity and peace of mind correctly. Companies would pay for SaaS-style products that looked a lot like subscriptions even back in 2007, we just didn't call them that.
3. It sounds like this was run on-prem. That's expensive (and scary) for a lot of companies if supporting software is not part of their core skill set.
4. We're not solving the same problems today as 2007; much of that low-hanging fruit has been picked. I'm guessing your original system was internal facing at your clients; everyone wants to integrate into much broader client-facing workflows now.
5. If you were only doing annual updates not much was changing. That's awesome but implies a pretty static problem domain.
There are countless more motivations, and the baseline has shifted dramatically. I'm not saying the reason you present is wrong or not the primary one, but it's dangerous to attribute malicious intent when there are lots of "simpler" reasons as well.
>> This is why we can't have simplicity: because it doesn't scale.
I'm not sure your example leads to this conclusion. Simplicity is a set of abstractions. When we expand the domain broadly enough they start to leak. This is related to, but not the same thing as scaling.
I do feel like the necessary complexity of SQL maintenance and dependency patching was swept under the rug here. But then again, maybe the client completely firewalled development and operations.
I have personally seen this within organizations. Certain leaders can appreciate the simple approach, but not others, because it doesn’t increase headcount. For other leadership types it is drama and increasing headcount that drive their careers.
Also you the developer will get scant recognition for finding the simplest solution. That’s the kind of thing that doesn’t get appreciation up the management chain (usually.) It should but it doesn’t.
The perception is not: wow this will save us millions over time by allowing us to do more with less. The perception is: so this guy did this project that turned out to be simple.
It's the same problem as preventing versus curing: the latter is much more expensive but much more flashy. In most companies the owners are the only ones who would care about doing things as efficiently as possible, but those are often also the most removed from the line work. They get all their information filtered through middle managers who are competing with each other for their next promotion, so unspectacular news often doesn't make the cut for being passed upwards.
The main exception I've found so far is in making tools/systems for myself, since then it is easy to convince the owner about the benefits of simplicity and easy maintenance :)
100 lines of JavaScript and a backend consisting of stored procedures that output HTML does not sound like a ‘boring’ technology choice. It sounds very exciting - anyone approaching this codebase to do things to it will likely have lots of very interesting questions about how this system handles lots of things.
The solution might have good answers to all of those questions! It is perfectly possible to build a well-engineered system using those technologies!
But on almost every level the answers to those questions are going to be surprising.
Whereas the same thing built using webpack and a Ruby on Rails to Postgres backend will be much more legible.
I understand the annoyance of it being replaced for a more complex more expensive system, but I would also like to know: What was the reasoning provided and what did decision makers truly believe about the whole thing?
It was an open government tender process, for which neither I nor the company I worked for was eligible, despite the tender being "open". You see, a decade-long pedigree of actually having implemented the software used for this purpose did not qualify us for replacing it with a v2.0.
There are rules, you see? They have to be followed! Or else.
Or else bad things might happen, like money being wasted.
The fact that the end-result of this process was that a 9-digit sum was spent on something I spat out in my spare time in under a year -- and was used for a decade -- was of no relevance.
> what did decision makers truly believe about the whole thing?
Their concerns started only when the whole thing blew up and started making headlines. Then nothing happened to them personally, so their concerns evaporated along with the taxpayer funds they had wasted.
I think most readers, including myself, empathize with you and understand the frustration and absurdity. But you are also telling just one side of the story (yours), and I imagine that the v2.0 specs had certain requirements and features, possibly required by legislation, that needed to be followed and implemented. When you say, dismissively, "There are rules, you see? They have to be followed!", that's when I, and likely others, start to wonder if you are really providing the full story, or if you actually even understand the differences between your simple app and the updated version.
Nah, this is the full story 99.9% of the time. I worked for government and this happened all the time - nobody ever got fired for choosing a 500x more expensive IBM general solution over something that you customized for the stakeholders, with 0 issues, a million users, and 0 incidents. I personally had many such products, being on the side of the government once, and on the side of the private vendor after that.
One example - I created a Help Desk system for the public finances of an entire country using Redmine and other FOSS tools. The cost was 0, the time to implement it was a single year of not-so-focused work, and it served hundreds of thousands of people. Then IBM took over with its service desk, implementing it for years and costing infinity. They could get into the tender; I could not, since I and my team are a small company. The funny thing is that the stakeholder subteam abandoned it and returned to my solution (with 0 maintenance since I left the company).
This is typical. You need to know how government works to understand it. I understand it, but do not approve of it. I am also not frustrated about it; it's just how this world works currently, in the majority of countries as far as I know.
> Nah, this is the full story 99.9% of the time. I worked for government and this happened all the time
Same experience, also in the private sector.
> This is typical. You need to know how government works to understand it. I understand it, but do not approve of it. I am also not frustrated about it; it's just how this world works currently, in the majority of countries as far as I know.
Yep, I mean the issues with unnecessary jobs and inflated projects and budgets are not exactly news. I think it's just part of society's struggle to adapt to a post-scarcity economy while not shortening the number of working hours. It's not really surprising that it also affects software.
No this is really how stupidly it works.
Government software consulting is insane.
The licensing/certification stuff basically creates monopolies.
My spouse worked at a digital agency a decade ago that, it turned out, was basically a near-monopoly provider of certain types of software for the local government.
The thing was, none of the work was actually done by them.
It was all subbed out to 3rd party dev shops who couldn't qualify themselves for the required licensing.
Further, they subbed out all the dev offshore.
So the government was both overpaying for offshore devs, and thinking they were spending money locally because the intermediary happened to be local.
They could have gotten the same work 40-50% cheaper by just skipping the front company, or spent the same and hired the actual local devs they thought they were getting.
You're either preaching to people who agree with your perspective or talking to a well-worn HN persona for whom all managers are incompetent nincompoops and the world would be a better place if only devs had unilateral power in all areas, including those where they have no experience or even visibility. You are being quite charitable to place the majority in the former category. See follow-on comments (both current and soon-to-come) for supporting evidence.
Just to agree with the OP: I've just gone through a government tender process to buy a piece of software for my organisation. The number of people who could bid on the tender was incredibly limited. We've ended up with a 'solution' where the best and cheapest company was excluded from bidding, mainly because they struggled with our byzantine tendering process, which gives us 'best value' according to our procurement team. It's not the only broadly failed IT system we have that has gone through these processes, so it's not a one-off either.
We're currently busy throwing away solid pieces of open source software that have worked well for years in favour of enterprise garbage.
Government software contracts are never meant to succeed. They are meant to burn as much cash as possible. Everyone I know who has worked in Arlington has the same story. Huge headcounts. Billable hours. Literal coked out VPs on yachts.
This is too simplistic a view of the state of affairs.
If that were so, countries would not work at all. There is always a service that absolutely needs to work, or your government, and lots of its people, are fucked. For those projects you absolutely need to hire those that will provide the desired outcome without failure. Most services are not so crucial, and in those you can have such failures without much of a problem; it even seems "good" sometimes, as you must employ a number of people to fix service mistakes constantly.
Your only mistake, then, was that unlike a consultancy-based solution, not enough people were able to take credit for it. I know it sounds counterintuitive, but to sell an idea, it's best to make every buyer think it was theirs all along; only then will it stay in place.
In the US, especially with federal money, this would be ample justification for a congressional inquiry and a potential fraud, waste, and abuse claim.
The usual outcome of the investigation is uncovering a bunch of people just saying that they were doing what they were told to do, and no one taking the common sense approach of looking at the current vendor. It might push one or two incompetent middle managers into retirement.
That said, it may get fixed for the next round of bids. It may lead to long-term change depending on which congresspeople were involved.
The only result of this would be millions more dollars "investigating" version 1, led by the bureaucrats who made the decision to build a v.2, including paying an army of consultants to find every possible flaw and non-compliant feature, in order to justify their decisions. The horns will really come out then... v.1 did not achieve 100% accessibility according to OSHA, cookies had the potential to leak data, the JS packages underneath were not vetted and compliant, no guard against denial-of-service, the list of possibilities is endless... point being, when you force gov't officials to find a flaw in something because their job is being questioned, they essentially have unlimited resources to find that flaw and justify their own existence.
Yes, government tender has rules. And if the decision makers don't follow the rules, they can suffer all kinds of consequences, including personal bankruptcy and jail time. Obviously they wouldn't bend the rules just because it would save government money and lead to a better outcome.
> You see, all of the above was a problem, because it didn't keep me employed. I was not the "hero" for saving the day. Entire teams of people weren't involved. There was no visibility at the senior management level. Nobody got scared, or had to throw money at it, or hire consultants to review it.
There are a few other instances of that:
- an old article about Michelin (the French tire manufacturer) quoting one of their scientists: "We can make a million-hour tire... but what would we sell?"
- recently, people said their Rust code caused too much downtime for coders because it was too stable too early
flip side of the same issue:
- very often people game their work to ensure benefits: stash duties for later so you can appear busy, or overwhelmed (and claim promotion because you have so much to do)
The global system doesn't reward true optimization; it allocates people to useless tasks, at best for lower risk. But smart people doing things solid and fast could be using their talents on other problems.
Kickbacks galore, my brother works in ed-tech. He said one state rewrites their public school report card system every two years like clockwork because of that.
You know, the crazy thing is that the moderately sized private companies I've worked at (500-5000 people) have plenty of that kind of BS.
I have seen mediocre software or services from vendors sold as a panacea at my last 2 shops, skipping the POC phase and just getting purchased off the back of someone (not in IT) very, very senior being buddies with the founders. No users or technical people were asking for it... it just gets rammed down from on high until a reasonable enough niche is found to put it into PROD.
It ends up being a solution in search of a problem, with a couple years finding where to use it, a couple years finding it inadequate, and then a couple years removing it again.
It sounds like your project ended up working out well (ignoring the replacement). But one thing that would be hard for me when starting a project like this: how do you know that over time it won't grow into something terribly unmaintainable? You don't have an ORM, but then perhaps over time you re-implement most of the functionality of an ORM, and now new people need to learn that. Of course, you can start without one and bring one in when it is needed. But in my experience that's hard to actually do, because feature N + 1 needs to be implemented now and there's no time to migrate everything over to (the ORM that would have been nice to have to make feature N + 1 easy to implement).
I'm just using an ORM as an example, of course.
Anyhoo, I think there's probably some other dimensions than "boring". Seems like you used "less" tech, but I'd say in the java world Spring and Hibernate are boring, or at least "popular", in the sense you can hire devs anywhere with some experience.
By devoting time to code maintenance and refactoring in between features N and N+1 (or at least N+M). The code doesn't just magically go from 5-10 SQL queries to being completely unmaintainable without an ORM overnight. When and if it grows into that, you'll see it coming.
That doesn't work, of course, if you're not considered to be "working" unless you're hacking on a new feature right now that'll be deployable by the end of the week, but it seems like OP was allowed to develop in a sane way.
You either start with or without an ORM, depending on your assessment of whether the project is gonna need one.
If you start without one, you still have to partition your code well enough that retrofitting one doesn't cause a huge mess. Basically, keep your "raw SQL queries" in a centralised place (file or folder), rather than strewn across controllers/views/services. And you should do exactly the same if you use an ORM. Isolate the fuck out of it and make it "easily removable" or "easily replaceable".
Also keep the "effects" of your ORM or your non-ORM away from those other parts too: your controllers, views and services should be totally agnostic to whatever you're using in the data layer. When you add subtle coupling you lose the possibility of changing it, but it also makes your project less maintainable.
This is easier said than done: in dynamic languages or with structural typing like TypeScript it's very easy: it's all objects anyway, so ORM or no ORM it's the same. In stricter languages like Java it might lead to lots of intermediate data structures, which are verbose and cause problems of their own. Or the middle ground: use primitives (lists and maps) rather than classes and objects, although ORMs like Hibernate will make things difficult for you, since they're not too flexible about how they're used and their types tend to "creep" all over your project.
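To make this concrete, here is a minimal sketch of that kind of isolation (Go, with hypothetical table and type names, not anyone's actual code): all raw SQL lives in one file, and callers only ever see plain structs.

    // queries.go: every raw SQL string in the project lives in this file.
    package data

    import "database/sql"

    // Invoice is a plain struct. Nothing outside this file touches
    // database/sql types, so retrofitting an ORM later means rewriting
    // this file, not the controllers or views.
    type Invoice struct {
        ID    int
        Total float64
    }

    func InvoicesForCustomer(db *sql.DB, customerID int) ([]Invoice, error) {
        // Placeholder syntax (@p1 here) varies by driver.
        rows, err := db.Query(
            `SELECT id, total FROM invoices WHERE customer_id = @p1`,
            customerID)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var result []Invoice
        for rows.Next() {
            var inv Invoice
            if err := rows.Scan(&inv.ID, &inv.Total); err != nil {
                return nil, err
            }
            result = append(result, inv)
        }
        return result, rows.Err()
    }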
-
Most unmaintainable projects don't become unmaintainable because people "forgot to prepare". They become unmaintainable because people assumed everything is permanent, so there's no penalty to using everything everywhere. So there are "traces" of the ORM in the controllers and views, the serialisation library and serialisation code are called in models and services as a "quick hack", the authorisation library is called from everywhere because why not. You quickly lose the ability to easily reason about code.
The same applies to other areas. I could write a treatise on how game developers love sticking keyboard-reading code absolutely everywhere in the codebase.
I think you may be biased towards the reasons to replace it, given that it was your creation, and we're not hearing the whole story.
It may well be that the system was difficult to maintain _because_ it used a bespoke framework of vanilla JS and handcrafted SQL Server queries.
Or that they wanted to improve the workflow of importing CSVs, and build modern features around it, which would be a mountain of work.
Or that the company outgrew it and it was difficult to scale.
Or, you might be right, and it was politically and financially driven. But then the technology choice wouldn't have mattered, and you could've chosen a more complex stack just as well.
I appreciate the sentiment of trying to keep things simple and not jumping on the bandwagon of the latest trends, but sometimes choosing a popular framework is not a bad idea. Particularly in corporate environments where the project is not owned by a single person, churn is high, and new developers are expected to eventually take over maintenance.
One day of maintenance per year is not "difficult", that's basically the point!
I didn't use or create any JS frameworks, which is a part of why the maintenance was easy!
The customer was a government department, and their scale changed only with population. That is: slowly.
> sometimes choosing a popular framework is not a bad idea.
Ironically, the replacement product used popular but out-of-date technologies such as Enterprise JavaBeans. They overused OO paradigms to a hilarious degree, and needed something like 2000x the server capacity to host their application.
Keep in mind that the data, userbase, requirements, etc... are all identical. This is a like-for-like replacement.
They needed an entire team of people just to babysit the infrastructure, which now took a decent chunk of a data center. My app could have handled the production workload while running on my laptop.
Were there 2000 independent systems / SQL Server instances running, or just one? 2K separate deployments to manage (with 1K users each) does sound a little scary. Of course, perhaps that is not what is going on at all.
Which is actually kinda funny, because some of the "complex" technology the OP is railing against allows us today to manage thousands of databases both easily and efficiently... IF the systems are built with a more current approach. This is why I try to understand ALL of the motivations for disruptive change and not immediately assume incompetence and self-interest bordering on criminal.
Option one: write a CloudFormation / Terraform template that involves O(1) machines, and deploy 2000 identical copies.
Option two: write a template that deploys O(N = 2000) interdependent services across roughly 3-10x as many machines, and deploy one copy.
From what I can tell, you are arguing for option 2. It is strictly worse than option one. In addition to being more complex, it has a few nines less reliability, and costs 3-10x more for the hardware. The dev and CI hardware budgets are going to be 10x more because you can't test it on one machine, and it has bugs that only manifest at scale.
Source: I do this for a living, and have been on both sides of this fence. Option 1 typically has 5-6 nines (measured in chance a given customer sees a 10 second outage), option 2 never gets past 3-4 nines (measured in at least N% of customers are not seeing an outage).
The modern vs old technology debate has nothing to do with this tradeoff. If you want, you can build option 2 with EJB + CORBA on an IBM mainframe, and option 1 with rust and json on an exokernel FAAS.
I'd argue for Option 3, which is to try to understand the workloads placed on the original system and then design the new system based on this. I think having 2K independent database servers would not normally be optimal for 2M users, but it is possible.
If the old system is exceeding uptime SLAs, meeting all business needs, and coming in under the budget for such an investigation (it sounds like the total operations budget was less than 10% of one engineer's time), then why bother?
I don’t know the situation, not touching it may have been optimal. I’m suggesting that if it was going to get re-written, I would at least study the basic parameters of the problem by reviewing the workload of the current system.
This was a multi-tenant centrally hosted application. There were 2000 sites served, each with kiosk PCs and some associated special-purpose hardware.
The actual application code ran in just four virtual machines in two data centres.
No templates, no Terraform, no microservices, etc…
Just vanilla ASP.NET on IIS with SQL Server as the back end.
The efficiency stemmed from having a single consolidated schema for all tenants with a tenant ID as a prefix to every primary key.
Shared tables (reference data) simply didn’t have a prefix.
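For illustration, that convention looks something like this (a hedged sketch with made-up table names, not the actual schema): tenant-owned tables get a composite primary key starting with the tenant ID, while shared reference tables have no tenant column at all.

    // Hypothetical DDL illustrating the convention, embedded as Go
    // constants the way the app might ship its migrations.
    const schema = `
    -- Shared reference data: one copy serves all 2000 tenants.
    CREATE TABLE RoomType (
        RoomTypeId INT PRIMARY KEY,
        Label      NVARCHAR(64) NOT NULL
    );

    -- Tenant-owned data: TenantId prefixes the primary key, so a single
    -- consolidated table serves every site.
    CREATE TABLE Booking (
        TenantId   INT NOT NULL,
        BookingId  INT NOT NULL,
        RoomTypeId INT NOT NULL REFERENCES RoomType(RoomTypeId),
        PRIMARY KEY (TenantId, BookingId)
    );`

    // Every tenant-scoped query then just filters on the prefix.
    const bookingsForTenant = `
    SELECT BookingId, RoomTypeId FROM Booking WHERE TenantId = @p1;`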
The vendor product that replaced this was not multi-tenant in this sense. They deployed a database-per-tenant, and lots of application servers. Not one per tenant, but something like one per ten, so two hundred large virtual machines running twenty instances of their app.
Multiply the above for HA and non-production. The end result was something like a thousand virtual machines that took several racks of tin to host.
Management of the new system took serious automation, template disk image builds, etc…
The repetition of the reference data bloated the database from 50GB to terabytes.
It “worked” but it was very expensive, slow, and difficult to maintain. It took them several years to upgrade the database engine, for example.
That task for my version was a single after-hours change. Backup or rollback was about an hour, simply because the data volume was so much lower.
The simplicity in my solution stemmed from a type of mechanical sympathy. I tailored the app to the customer’s specific style of multi-tenant central hosting, which made it very efficient.
Of course, it is hard to say without knowing more about it, but it seems that jiggawatts' solution is closer to optimal than the second one. The 50GB database could fit on a USB drive, after all, and we know empirically that a single SQL Server database was able to handle the requests, since the old system worked.
Also, the fact that a consulting company was able to turn a part time gig for one person into a $100M+ project at the taxpayer's expense is very frustrating.
Both the old and new systems were using licensing based on processor cores, not VMs or instances.
If I remember correctly, my version had something like 8 + 8 cores in an active/passive configuration where the passive node is free. There was also a single dev/test server also with 8 cores, but that's free too.
The replacement used a few hundred cores shared by the various instances and environments. If I remember correctly, they had something like 10-20 databases per virtual machine, and then about 5 virtual machines per physical host. The cores in the physical host were licensed, not the logical layers on top. (I can't remember the exact ratios, but the approach is the point, not the numbers.)
The "modern" cloud approach of having dedicated VMs for a single thing is actually terribly inefficient, and that approach would have bloated out the above to thousands of VMs instead of "merely" a few hundred.
The correct architecture for something like this -- these days -- might be to use Kubernetes. This provides the required high availability and instancing, while efficiently bin-packing and deduplicating the storage.
Still, you can't Helm-chart your way out of an inefficient application codebase.
Again, for comparison, my version could run on a laptop and had about half a dozen components, not thousands.
Figuring out how to reward simplicity, reliability, and maintainability feels like one of the most important unsolved social/human/economic issues in the software industry.
Seems there's only incentive to simplify at small companies where the employees feel they can save their own time or increase the value of their equity by delivering value to customers more efficiently. At large companies employees work 40-hour weeks regardless of their output and they're trying to impress a performance review committee, not customers.
Stop billing hourly. Instead bill for the value you provide. How you provide it should be immaterial, and so you can do it as cheaply and efficiently as possible while still reaping heaps of money as long as it creates value.
That's how all other markets work. Billing hourly is the death of progress.
I honestly think you did the right thing--your conclusion is more a cynical take than a truthful claim. You clearly had organizational problems, and that's beyond the scope of your code.
We don't have simplicity because people are inept. I would trust you to write a small CRUD PHP database app without any security issues. However, the next goon that comes along jams in a bunch of $id = $_GET['id']; "SELECT * FROM users WHERE id = $id"; and you have a major security issue.
Frameworks exist so you don't roll your own stupidity into a bigger problem.
The new one had "hand rolled cryptography", which should make you twitch uncontrollably if you know anything about security.
The new application had, among other failings, hard-coded (unchangeable!) RSA keys used for communication channels. As in, all customers shared the same keys. I can't remember the exact specifics, but I swear at some point there was something like encrypted JSON in XML. Or was it encrypted XML in JSON? Does it matter which?
The old app that I wrote would happily take JavaScript or SQL snippets as inputs to any text field and do The Right Thing.
You don't want to know what happened to the new app when it was tested with malicious inputs.
The testing team were told "not to go too hard on it", because that would "derail the project".
I've become a fan of avoiding ORM's and API's between front end and back end for websites. Want a page that shows a dashboard of xyz? Write the right query that fetches exactly what you want, render the HTML, and return it.
Super simple, and abstractions are at a great minimum. No SQL->ORM->API->frontend, each with their own twist on how they model the world. A splash of JS (perhaps via HTMX or Alpine), and this can take you a long way.
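As a sketch of what that looks like in practice (Go's standard library here purely as an illustration, with a hypothetical widgets table; the same shape works in any server-side stack): one query, one template, HTML straight back to the browser.

    package main

    import (
        "database/sql"
        "html/template"
        "log"
        "net/http"

        _ "github.com/microsoft/go-mssqldb" // any SQL driver works here
    )

    var dashTmpl = template.Must(template.New("dash").Parse(
        `<ul>{{range .}}<li>{{.Name}}: {{.Count}}</li>{{end}}</ul>`))

    type row struct {
        Name  string
        Count int
    }

    func dashboard(db *sql.DB) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            // One query, shaped exactly like the page that needs it.
            rows, err := db.Query(
                `SELECT name, COUNT(*) FROM widgets GROUP BY name`)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer rows.Close()

            var data []row
            for rows.Next() {
                var item row
                if err := rows.Scan(&item.Name, &item.Count); err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                data = append(data, item)
            }
            // Render HTML directly; html/template escapes values for us.
            _ = dashTmpl.Execute(w, data)
        }
    }

    func main() {
        db, err := sql.Open("sqlserver", "sqlserver://localhost?database=demo")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/dashboard", dashboard(db))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }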
A few years ago I started a dashboard project that was mostly raw SQL.
I then saw the team wanting to convert it to ActiveRecord, which they started. But lots of queries had to use AREL (Rails' "low-level SQL AST abstraction"), since they weren't really possible, or were just too difficult, to do in ActiveRecord.
But AREL is so incredibly unreadable that every single AREL query often had its equivalent in plain SQL above it, as documentation, so new people could understand what the hell it was doing.
In the end some junior was unhappy with the inconsistent documentation and petitioned that every query, simple or complex, AREL or ActiveRecord, had to be documented using SQL above the AREL/AR code.
Then they discovered that documenting using Heredocs rather than "language comments" enabled SQL syntax highlighting in their editors.
After that we had both: heredocs with the cute SQL and some unreadable AREL+AR monstrosity right below it.
I still laugh about this situation when I remember it.
Presumably they just used whatever standard mechanisms their SQL driver provided (such as parameterised queries). A user inputs text in a comment box, you insert it into the database using such a mechanism, and it's safe.
And if you're using, for example, Go's templating library, then it automatically escapes everything in HTML templates unless you explicitly override this default behaviour.
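A tiny runnable illustration of both halves (example values assumed; the storage half is left as a comment because it needs a live database):

    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        // Storage side: a placeholder keeps user text as data, not SQL:
        //   db.Exec(`INSERT INTO comments (body) VALUES (@p1)`, userInput)

        // Output side: html/template escapes HTML metacharacters by default.
        userInput := `<script>alert("xss")</script>`
        t := template.Must(template.New("c").Parse(`<p>{{.}}</p>`))
        _ = t.Execute(os.Stdout, userInput)
        // Prints: <p>&lt;script&gt;alert(&#34;xss&#34;)&lt;/script&gt;</p>
    }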
Well, if it was only 100 lines of plain JS, then how would one guard against XSS attacks? I.e., submitting HTML (like script tags) and then getting that to render when others view the tainted data.
Because in this way of building sites, the user-submitted data is escaped before it reaches the browser. E.g.: https://go.dev/play/p/MmNSxU5QfAb (hit run to see the output).
The JS wouldn't need to do any escaping, because it's not trusted to handle any unescaped data. It's operating on the already-escaped html template.
They certainly weren't using Go, or, as stated, any framework. Also no mention of any type of web server; I'm not sure what magical code was creating dynamic HTML from the database. Where was the business logic? Stored procedures? No mention of more dynamic functions... no integrations... It sure sounds like a desktop-browser-only app, while the majority of the world today wants some mobile functions from almost every system.
There is a lot of missing information, which is understandable, but it also conveniently supports a very unflattering narrative while simultaneously promoting the OP's awesomeness.
I think you're reading them far too strictly. I don't think they literally meant they were using nothing beyond JUST the SQL Server and then somehow getting HTML out of that, with 100 lines of JS on top. Unless I misread, I don't see anything that implies they weren't using something like PHP or ASP, for example.
Q: "how do they use the workarounds needed to secure the more complex approaches?"
A: "those security concerns don't exist in the approach, no workaround needed. That's part of the simplicity".
It just represents a fundamental misunderstanding, but it's not their fault, they've never seen anything else. Like someone using a JWT instead of a session cookie.
Just put the queries in procedures with parameters. Only store the procedure calls in your backend, disable arbitrary queries completely in your database permissions.
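Sketched in Go against SQL Server (hypothetical procedure name; sql.Named is the standard library's way to pass named parameters): the app's database login is granted EXECUTE on the procedures and nothing else, so even a compromised backend can't run arbitrary queries.

    package data

    import (
        "database/sql"
        // driver: _ "github.com/microsoft/go-mssqldb"
    )

    // The backend never contains table-level SQL, only procedure calls.
    func createOrder(db *sql.DB, customerID int, total float64) error {
        _, err := db.Exec(
            `EXEC dbo.CreateOrder @CustomerId, @Total`,
            sql.Named("CustomerId", customerID),
            sql.Named("Total", total))
        return err
    }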
> The old app that I wrote would happily take JavaScript or SQL snippets as inputs to any text field and do The Right Thing.
I can't be the only one here who is both skeptical and a little turned off by someone who says "You can stick any user input into a database query and you'll be fine", with a condescending pat on my head.
Your comments continue to be incredibly one-sided and biased. The summary is "My work was perfect and the new system a steaming pile". Perhaps this contributed to your replacement.
Fundamentally it's mixing data and executable code such that the DBMS cannot properly distinguish between the two and can inadvertently treat data as executable code.
Parameterized queries very explicitly tell the DBMS "this is executable code, and this over here is data". Nothing anyone puts in the data will ever be mistaken for executable code by the DBMS. THIS IS SAFE.
It is only safe for the SQL server. An injection attack could still be targeting a cache (to poison it with e.g. a malicious script), the browser (to steal data via XSS/CSRF) or the user (show an error message telling them to contact malicious number).
> I can't be the only one here who is both skeptical and a little turned off by someone who says "You can stick any user input into a database query and you'll be fine", with a condescending pat on my head.
Like how Google has worked for the past 2 decades? OP said "snippets", then you gloriously paraphrased it into a completely different statement.
“was it encrypted XML in JSON? Does it matter which?”
I’m sure there were meetings where it was discussed at length and the stupidest idea prevailed, because other people’s failures are more useful than shared successes in such an environment. And probably for “security reasons.”