The other team talked big and gave demos to higher-ups and their higher-ups and their higher-ups. Web scale, baby. And with that came the galactic infrastructure and all the buzzwords needed in order to run it.
About 6 months later, they quit. They had spent their whole budget on astronaut architecture, so they didn't have any money or time left to build the apps that were supposed to run on said infrastructure.
"You are not FANG". But even if you are, JIT development.
It was merely a tongue-in-cheek way of saying: just build what's needed right now in front of you, then get it working at scale once there's an actual need to scale. This is in contrast to spending most of your time from day one thinking about scale, then building everything so that it can infinitely scale (along with the complexities that come with it), only to find out that you have 8 users and your data can fit entirely in memory.
I've met more than a few folks who consider web templating systems to be 'overkill', and have zero understanding of the risk of XSS. Same with SQL/DB escaping: "I just write the SQL and run it, using all those libraries is just a waste of time", etc.
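To make the escaping point concrete, here's a minimal sketch of the "just write the SQL and run it" habit versus a parameterized query. Python's sqlite3 stands in here for whatever driver is actually in play; the table and payload are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "' OR '1'='1"  # classic injection payload

# String concatenation: the payload rewrites the query itself,
# so the WHERE clause matches every row.
unsafe = conn.execute(
    "SELECT count(*) FROM users WHERE name = '" + user_input + "'"
).fetchone()[0]

# Parameterized: the driver treats the payload as a plain string value,
# so it matches nothing.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```

The "waste of time" library call is one extra argument; the concatenated version hands query structure to whoever controls the input.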
Many of the projects I've come into over the years were done with a "JGID" mentality. And they did "get it done"; "it" was just a steaming pile of crap when it was done. "Why is this taking weeks to do - the previous guy was so much faster?"
Had this one last year:
"This was never slow before when X worked on it, you've made changes what have you done? We need to call X back in to the project".
X just "got it done". And X was a db admin who was writing code. And X decided it would be good to have views join against other views which joined against other views, and have some queries which used those views run in triggers.
When X was on the project, and there were 40 users, it was fine. They hit 1000 users, and things were 'slow', so they upgraded to a larger EC2 instance. X left, and I came in, and several months later they hit ~30000 users (not active, just ... user account records). The system was dying with more than 5 active users, because of all the views joining other views on 30k+ records.
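A tiny, hypothetical version of the shape being described (SQLite here, but the pattern is database-agnostic): each view reads fine in isolation, and every query against the top one re-expands the whole stack underneath it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INT, total REAL);

-- Harmless on its own: one join.
CREATE VIEW v_user_orders AS
  SELECT u.id AS user_id, u.name, o.total
  FROM users u JOIN orders o ON o.user_id = u.id;

-- A view built on a view: aggregates the join above.
CREATE VIEW v_user_totals AS
  SELECT user_id, name, SUM(total) AS spent
  FROM v_user_orders GROUP BY user_id, name;

-- And one more layer. Any query here drags in everything below,
-- whether or not the caller needed it.
CREATE VIEW v_big_spenders AS
  SELECT * FROM v_user_totals WHERE spent > 100;
""")

conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
conn.execute("INSERT INTO orders VALUES (1, 1, 150), (2, 2, 20)")

rows = conn.execute("SELECT name, spent FROM v_big_spenders").fetchall()
print(rows)  # [('alice', 150.0)]
```

Fine at 40 users; at 30k records, each dashboard hit pays for the full join-and-aggregate stack, multiplied by however many views reference each other.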
Unravelling that meant deciphering all the views, all the queries, all the code that touched all of it, and rebuilding a moderate portion, without tests, known 'good' data, or anyone on the project knowing what 'right' was - they just knew when things looked 'wrong' (or slow).
BUT... 18 months earlier, it was "JGID", and it got done. I'm still a bit perplexed why a professional DB admin thought views joining against multiple other views was a good approach.
We've all got horror stories, I'm sure. It's just that the "JGID" mentality is often preached by competent experts as a good approach, but picked up by beginners as *the* approach used by experts, and that has bad consequences.
When I say JGID, I also mean not making it a steaming pile of shit :)
The TL;DR of my post is that people often don't know the difference. We've also grown a culture of people promoting "YAGNI", and it's easier for some people to dismiss basic ideas with YAGNI.
Yeah, adding 5 language translations to your project on day 1 - YAGNI.
Storing passwords in plaintext? You need something more than that.
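That "something more" doesn't have to be exotic. A minimal stdlib sketch using PBKDF2 (bcrypt or Argon2 via a third-party library are also common choices; the function names and iteration count here are just this example's picks):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # tune for your hardware; higher = slower for attackers

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key). Store both; never store the password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("hunter2")
print(verify_password("hunter2", salt, key))   # True
print(verify_password("password", salt, key))  # False
```

The per-user random salt means identical passwords hash differently, and the slow derivation makes brute-forcing a leaked table expensive. Plaintext gives you neither.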
DRY up your code... unless the two code paths look similar but aren't the same.
YAGNI... unless "it" is a database backup, a load balancer, or test coverage of your signup flow.
KISS... unless your problem is complicated enough that a simple solution only implements half of what you need.
Microservices are great... if you've got more than one team and can support the operational overhead of SDN and service discovery. Monoliths are great... until you're pushing code to them every minute and they become impossible to refactor.
If the underlying views were performant, I'd assume the query optimizer would do the right thing (at least 90% of the time).
EDIT: I guess it depends - just did more research and found this. As long as the views don't do unnecessary heavy lifting or join unnecessary tables, it should be fine.
Had a query that selected from table X joining against view X which selected from view A, and view A joins against tables A,B and C, and table B also joins against a view which uses table A and C.
This was just bad. But trying to explain to non-tech people how bad it is, when "it used to work", is difficult. It used to work when you had 50-100 records. No one ever tested what this would be like with 30k records and 50 concurrent users all executing the same nested/circular view mess simultaneously.
But the fact that it was postgres is kind of beside the point. I don't know of any mainstream DB that would handle this well.
The short term fix was to do this large query once at the end of a process and cache the results; the set of queries in question were happening on a 'dashboard' view which everyone hit all the time. It would still cause problems with concurrency, because when 80 people would go through a process and get 'done' (think timed training exercises), the queries would still all be running more or less concurrently, and still cause timeouts, but it wasn't as frequent, because people tended to be staggered a bit more as they finished.
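The short-term fix described above amounts to something like the sketch below (hypothetical names; the real system cached at the end of the process rather than on a timer, but the trade-off is the same):

```python
import time

class QueryCache:
    """Cache an expensive result for `ttl` seconds so every dashboard hit
    doesn't re-run the whole nested-view query."""

    def __init__(self, fetch, ttl: float = 60.0):
        self.fetch = fetch        # the expensive query, as a callable
        self.ttl = ttl
        self._value = None
        self._expires = 0.0       # monotonic deadline; starts expired

    def get(self):
        now = time.monotonic()
        if now >= self._expires:
            self._value = self.fetch()
            self._expires = now + self.ttl
        return self._value

calls = 0
def expensive_dashboard_query():
    # Stand-in for the view-on-view-on-view monster.
    global calls
    calls += 1
    return {"active_users": 80}

cache = QueryCache(expensive_dashboard_query, ttl=60.0)
cache.get(); cache.get(); cache.get()
print(calls)  # 1
```

Note this naive version still has the concurrency problem from the story: if 80 requests arrive while the cache is cold, they all run the query before any of them populates it. Fixing that takes a lock or single-flight mechanism, which is exactly the kind of follow-up work "JGID" tends to skip.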
I have definitely been guilty of failing to test how my schemata behave with large data sets.
Oh then there's ORMs. I've seen ActiveRecord spit out some frankly batshit insane queries that would stump a room of Einsteins. But somehow PostgreSQL picked it up, chopped it into a plan and got to work plowing through an incredibly wasteful and repetitious query.
As have I. It's the dangerous part of the "get it done" approach. And there's no perfect approach - everything is a tradeoff. How much time do you spend dealing with situations that might never happen?
Experience does give you some grounding when making those tradeoff decisions. No, we don't need the architect the application to scale up/down to handle 25k concurrent users in 5 minutes; that's unlikely going to happen. Yes, we do need to spend the extra 2 hours installing and learning a templating system to avoid common XSS pitfalls.
I'm currently working on a service written by a bunch of hipster developers who wrote raw SQL in Python. And they wrote views, lots of really difficult-to-understand views that are self-joining (on JSON fields, no less!). The performance is really unpredictable given the input sizes. It's the only time I'd call the performance of a database chaotic, because beyond certain safe input sizes I have no idea what the performance will be.
The problem, from my perspective, is that a key bit of your application logic gets hidden, and you're then bound to migrations to change it.
HOLY TAMALE! You might have my story beat there. That's one level I didn't have to deal with. My sympathies!
Sometimes "make it so" from ST-TNG.
(Fwiw searching 'JIT development' does net first page results of people talking about JIT/lean manufacturing in a software development context.)
What's needed right now, but that doesn't take a complete overhaul when it's time to start growing.
The first one seems to do the wrong thing because there is no mention of any stakeholders at all. Getting it working first is great, though.
The second one seems to do the right thing by demoing to stakeholders first. Then, it goes on to make it web scale with buzzwords (which doesn't seem good).
The best combination seems to be: 1. demoing to higher-ups and 2. JIT (I'm not familiar with the term, but I get what you mean).
Team 2 spends 6 months showing upper management architecture diagrams and flashy mockups. They blow a million dollars on cloud hosting their load balanced web servers. They've got Kafka, ElasticSearch, triple redundant SQL servers, and a 12 person team to run it all. But when asked to present the actual page they announce that nobody on the team knows HTML and they just kind of assumed somebody else would do that part.
Non-technical managers cannot tell the difference between the two teams.
The second team emphasised the importance of scale and how they were going to solve that problem. The apps to run on top seemed to slip everyone's minds because tickets were being closed and KPIs were being hit.