I have heard this a few times from different people/places, but why is it the case that at 50+ it is harder to find work? Assuming a regular retirement age, there are still many more years left in a career than a typical stint in tech employment lasts.
> but why is it the case that at 50+ it is harder to find work?
I think this starts to become visible even sooner - around 38 if you graduated at 23. The majority of the job market needs very, very few engineers with 15+ years of experience. Five to ten years of experience is the sweet spot - you will be hired easily. Everything below and beyond that is a struggle, especially beyond, since very few companies need, and are willing to pay for, those skills.
And that's how you become unemployable, with the irony that you're at more or less the peak of your technical capabilities. In later years, people start to lose the drive.
I'm very good in my niche, but businesses just want 'an answer to the question'. I can provide 'an answer to the question, while also making sure the answer-generating process is fully reproducible, data limitations are addressed and made visible, and uncertainty has been calculated and is included in the answer'.
Not every question needs that!
Most people are willing to pay for Ikea furniture, not hand-crafted artisanal pieces. Ikea is good enough.
I must say I had never thought about it that way before, but that's the reality of the tech market. Many people I talk to who are outside of tech can't comprehend this at all - they all assume the same thing: the more experience and knowledge you have, the more competitive and therefore sought after you become. Not true at all.
I think landing (and keeping) a job in tech is challenging, whether you're a recent graduate or a seasoned professional with decades of experience. While the reasons for rejection may change with age, the key factors for getting hired remain the same: competency and collaboration. Demonstrating strong skills and being easy to work with will always be valuable - focus on these, and opportunities will follow. - a forty-something developer with 20+ years of experience
Because most founders who made it in this field did so at a young age, so they are biased to view older people who didn't "make it" and are still coding as incompetent, even though that has nothing to do with reality. The reality is that being good on the tech side does not correlate that much with being good on the business side. They are almost independent factors, aside from some low baseline requirement of competence...
The baseline requirement of technical competence for extreme financial success in tech is so low that most big tech companies don't even hire rank-and-file engineers who don't meet that requirement halfway.
There was a YC CEO who, on a podcast, basically asserted that innovation was pretty much done by people under 30 years of age. I had been gearing up to apply to that company until I saw that comment.
The average age of successful founders is 40, and I believe that's for first-time founders. So I don't believe that founder bias against age is the issue.
In Austria/Germany, the problem is that the cultural expectation is that people get paid by seniority, and also based on their experience and qualifications, regardless of the actual requirements of the job. It is also assumed that no one wants to do work that is below their qualifications.
That is, a 50-year-old isn't even asked what their salary expectations are; they are simply assumed to be higher than what the company wants to pay - or rather, the company can't bring itself to pay someone like that less than what it thinks is appropriate for their age. Combine that with the perception that older people are less flexible and unwilling/unable to learn new things, and you end up with the belief that older people are expensive and useless, or overqualified.
I wonder how that could be possible? There are proportionally so few of us old-timers around to begin with, given how much smaller the industry was and how rapidly it has grown over the last 20-30 years.
If I do find something, I won't be rehired at anywhere near what I was making - that's fairly certain. So I've put the onus on myself to generate the income I'm looking for.
Genuine question: I appreciate the comments about MongoDB being much better than it was 10 years ago, but Postgres is also much better today than it was then. In what situations is Mongo better than Postgres? Why choose Mongo in 2025?
Don’t choose Mongo. It does everything and nothing well. It’s a weird bastard of a database—easily adopted, yet hard to get rid of. One day, you look in the mirror and ask yourself: why am I forking over hundreds of thousands of dollars for tens of thousands' worth of compute and storage to a company with a great business operation but a terrible engineering operation, continually weighed down by the unachievable business requirement of being everything to everyone?
I have experience using both MongoDB and PostgreSQL. While pretty much everything said here is true, there is one more aspect: scalability. When a fast-moving team builds its service, it tends not to care about scalability. And PostgreSQL has many more features that get in the way of future scalability. It's so easy to use them when your DB cluster is young and small. It's so easy to wire them into the service's DNA.
In MongoDB the situation is different. You have to deal with the bare minimum of a database. But in return, your data design is much more likely to survive horizontal scaling.
In the initial phase of your startup, choose MongoDB. It's easier to start and evolve in earlier stages. And later on, if you feel the need and have resources to scale PostgreSQL, move your data there.
They obviously didn't use vanilla Postgres, but built some custom sharding on top, which is a non-trivial task (implementation and maintenance: resharding, failover, replication, etc.).
a) MongoDB has built-in, supported, proven scalability and high-availability features. PostgreSQL does not. If it weren't for cloud offerings like AWS Aurora providing them, no company would even bother with PostgreSQL at all. It's 2025; these features are non-negotiable for most use cases.
b) MongoDB does one thing well: JSON documents. If your domain model is built around that, then nothing is faster. Seriously, nothing. You can do in-place field updates on complex structures at speeds that would cripple PostgreSQL in seconds (see the sketch after this list).
c) Nobody who is architecting systems ever thinks this way. It is never MongoDB or PostgreSQL. They specialise in different things and have different strengths. It is far more common to see both deployed.
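To make point (b) concrete, here is a minimal sketch of the kind of in-place partial update MongoDB is built around; the connection string, collection, and field names are all hypothetical, and it assumes pymongo is installed:

```python
from pymongo import MongoClient

# Hypothetical connection and collection names, purely for illustration.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Touch two nested fields and bump a counter in one atomic document write.
# The update expression names only these paths; there is no
# read-modify-write cycle in application code.
orders.update_one(
    {"_id": "order-42"},
    {
        "$set": {
            "shipping.address.city": "Berlin",
            "shipping.status": "dispatched",
        },
        "$inc": {"audit.revision": 1},
    },
)
```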
> It's 2025; these features are non-negotiable for most use cases.
Excuse me? I do enterprise apps, along with most of the developers I know. We run like 100 transactions per second and can easily survive hours of planned downtime.
It's 2025, computers are really fast. I barely need a database, but ACID makes transaction processing so much easier.
They failed every single Jepsen test, including the last one [0]
granted, the failures were pretty minor, especially compared to previous reports (like the first one [1], that was a fun read), but they still had bad defaults back then (and maybe still do)
I would not trust anything MongoDB says without independent confirmation
Reputation matters. If someone comes to market with a shoddy product, missing features, or slideware, then it's a self-created problem that people don't check the product release logs every week for the next few years waiting for them to rectify it. And even once there is an announcement, people are perfectly entitled to be sceptical that it isn't a smoke-and-mirrors feature, and not to spend hours doing their own due diligence. Again, a self-created problem.
100? I had a customer with 10k upserts, including merge logic for the upserts, while serving 100k concurrent reads. Good luck doing that with a SQL database trying to check constraints across 10 tables. This is what NoSQL databases are optimized for...
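For what it's worth, the "upsert with merge logic" pattern looks roughly like this in pymongo - the collection, fields, and the record_delivery helper are made up for illustration, as are the merge rules:

```python
from datetime import datetime, timezone

from pymongo import MongoClient

# Hypothetical collection and fields; the point is that the merge logic
# (accumulate stock, keep the newest timestamp) runs inside a single upsert.
inventory = MongoClient("mongodb://localhost:27017")["warehouse"]["inventory"]


def record_delivery(sku: str, quantity: int) -> None:
    now = datetime.now(timezone.utc)
    inventory.update_one(
        {"_id": sku},
        {
            "$inc": {"stock": quantity},          # merge: add to existing stock
            "$max": {"last_seen": now},           # merge: keep the newest timestamp
            "$setOnInsert": {"created_at": now},  # only applied when the doc is new
        },
        upsert=True,
    )
```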
There are some stand-out examples of companies scaling even MySQL to ridiculous sizes. But generally speaking, relational databases don't do a great job at synchronous/transactional replication and scalability. That's the trade-off you make for having schema checks and whatnot in place.
I guess I didn't make myself clear. The number was supposed to be trivially low. The point was that "high performance" is like the least important factor when deciding on technology in my context.
What's wild is you misrepresenting what I said, which was:
"built-in, supported, proven scalability and high availability"
PostgreSQL does not have any of this. It's only good for a single server instance, which isn't really enough in a cloud world where instances are largely ephemeral.
> scalability [...] no company would even bother with PostgreSQL at all
In my experience, you can get pretty far with PostgreSQL on a beefy server; combined with monitoring, pg_stat_statements, and application-level caching (e.g. caching the user for the given request instead of fetching that data at every layer of the request handling), it's certainly enough for most businesses/organisations out there.
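For anyone who hasn't used it, pg_stat_statements is just a view you can query once the extension is enabled. A rough sketch with psycopg2 - the DSN is a placeholder, and the column names are those of PostgreSQL 13+ (older versions use total_time/mean_time):

```python
import psycopg2

# Placeholder DSN; assumes shared_preload_libraries includes pg_stat_statements
# and CREATE EXTENSION pg_stat_statements has been run in this database.
conn = psycopg2.connect("dbname=app user=app")

with conn, conn.cursor() as cur:
    # Show the ten statements that consumed the most total execution time.
    cur.execute(
        """
        SELECT query, calls, mean_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
        """
    )
    for query, calls, mean_ms in cur.fetchall():
        print(f"{calls:>8} calls  {mean_ms:8.2f} ms avg  {query[:80]}")
```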
Mongo is a genuinely distributed and scalable DB, while Postgres is a single-server DB, so the main consideration could be whether you need to scale beyond a single server.
I've been playing with CloudNativePG recently, and adding replicas is as easy as can be: they automatically sync up and join the cluster without you having to think about it.
Way nicer than the bare-VM Ansible setup I used at my last company.
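For context, the "adding replicas" part in CloudNativePG boils down to a number in the Cluster manifest. A minimal sketch - the cluster name and storage size are arbitrary placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  instances: 3   # 1 primary + 2 replicas; bump this to add replicas
  storage:
    size: 10Gi
```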
I think there is no distributed DB available on the market with feature parity to PgSQL. Distributed systems are hard, and sacrifices need to be made.
2. Of any distributed DB which doesn't have Jepsen issues?
3. It is configurable behavior for MongoDB: it can lose data and be fast, or be slower and not lose data. There are no issues of unintentional data loss in the most recent (five-year-old) Jepsen report for MongoDB.
Distributed databases are not easy. You can't simplify everything down to "has issues". Yes, I did read most Jepsen reports in detail, and struggled to understand everything.
Your second point seems to imply that everything has issues, so using MongoDB is fine. But there are various kinds of problems. Take a look at the report for RethinkDB, for example, and compare the issues found there to the MongoDB problems.
PgSQL's only defect was an anomaly in reads which caused transaction results to appear a tiny bit later, and they even mentioned that this is allowed by the standard. No data loss of any kind.
MongoDB's defects were, let's say, somewhat more severe:
[2.4.3] "In this post, we’ll see MongoDB drop a phenomenal amount of data."
[2.6.7] "Mongo’s consistency model is broken by design: not only can “strictly consistent” reads see stale versions of documents, but they can also return garbage data from writes that never should have occurred. [...] almost all write concern levels allow data loss."
[3.6.4] "with MongoDB’s default consistency levels, CC sessions fail to provide the claimed invariants"
[4.2.6] "even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations"
Let's not pretend that Mongo is a reliable database, please. Fast? Likely. But if you value your data, don't use it.
No, the discussion started with the question "Why choose Mongo in 2025?" So old Jepsen reports are irrelevant, and the most recent one, from 2020, is only somewhat relevant.
High availability is more important than scalability for most.
On average, an AWS availability zone suffers at least one failure a year. Some are disclosed. Many are not. And so that database you are running on a single instance will die.
The question is: do you want to do something about it, or just suffer the outage?
It's sad that this was downvoted. It's literally true. MongoDB vs. vanilla Postgres is not in Postgres' favor with respect to horizontal scaling. It's the same situation with Postgres vs. MySQL.
That being said, there are plenty of free ways to shard Postgres, e.g. Citus (sketched below). It's also questionable whether many deployments need sharding at all. You can go a long way with just a replica.
Postgres also has plenty of its own strengths. For one, you can get a managed solution without being locked into MongoDB the company.
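To give a flavour of the Citus route mentioned above: distribution is declared per table via a function call. A minimal sketch with psycopg2, assuming the Citus extension is available on the coordinator; the DSN, table, and column names are hypothetical:

```python
import psycopg2

# Placeholder DSN pointing at the Citus coordinator node.
conn = psycopg2.connect("dbname=app user=app")

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS citus")
    cur.execute(
        """
        CREATE TABLE events (
            tenant_id  bigint NOT NULL,
            event_id   bigserial,
            payload    jsonb,
            -- Citus requires the distribution column in unique constraints,
            -- hence the composite primary key.
            PRIMARY KEY (tenant_id, event_id)
        )
        """
    )
    # Shard the table across worker nodes by tenant_id.
    cur.execute("SELECT create_distributed_table('events', 'tenant_id')")
```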
Yes, but updating nested fields is last-write-wins, and with Mongo you could update two fields separately and have both writes succeed; it's not equivalent.
When you write to a Postgres jsonb field, it updates the entire JSONB content, because that's how Postgres's engine works. Mongo allows you to $set two fields on the same document at the same time, for example, and have both writes win, which is very useful and removes the need for distributed locks, etc. This is just like updating specific table columns in Postgres, but Postgres doesn't allow that within a column; you'd have to lock the row for update to do this safely, which is a PITA.
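Roughly, the contrast being described looks like this - collection, table, and field names are hypothetical, using pymongo and psycopg2:

```python
import psycopg2
from pymongo import MongoClient

doc_id = "account-7"

# MongoDB: two field updates expressed in one atomic document write;
# no read-modify-write and no explicit row lock in application code.
accounts = MongoClient("mongodb://localhost:27017")["crm"]["accounts"]
accounts.update_one(
    {"_id": doc_id},
    {"$set": {"profile.name": "Ada", "profile.plan": "pro"}},
)

# PostgreSQL: jsonb_set builds a new value for the whole jsonb column,
# so the column is rewritten even though only two paths changed.
# Assumes the stored document already contains a "profile" object.
conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        UPDATE accounts
        SET doc = jsonb_set(
                      jsonb_set(doc, '{profile,name}', to_jsonb(%s::text)),
                      '{profile,plan}', to_jsonb(%s::text))
        WHERE id = %s
        """,
        ("Ada", "pro", doc_id),
    )
```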
Because you get the convenience of having a document store with a schema defined outside of the DB if you want it, along with the strong guarantees and semantics of SQL.
For example: let's say you had a CRM. You want to use foreign keys, transactions, all the classic SQL stuff to manage who can edit a post, when it was made, and other important metadata. But the hierarchical stuff representing the actual post is stored in JSON and interpreted by the backend.
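A sketch of that hybrid layout in Postgres - the table and column names are made up, it assumes a users table already exists, and the DSN is a placeholder:

```python
import psycopg2

conn = psycopg2.connect("dbname=crm user=app")  # placeholder DSN

with conn, conn.cursor() as cur:
    # Relational columns carry the metadata you want classic SQL guarantees
    # for (foreign keys, timestamps); the hierarchical content lives in a
    # jsonb column whose schema is enforced by the backend, not the database.
    cur.execute(
        """
        CREATE TABLE posts (
            id         bigserial PRIMARY KEY,
            author_id  bigint NOT NULL REFERENCES users(id),
            created_at timestamptz NOT NULL DEFAULT now(),
            body       jsonb NOT NULL
        )
        """
    )
    cur.execute(
        "INSERT INTO posts (author_id, body) VALUES (%s, %s::jsonb)",
        (42, '{"blocks": [{"type": "paragraph", "text": "hello"}]}'),
    )
```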
The number of perfectly functional messaging apps Google has gone through is crazy[1]. Each Google messaging app (GTalk, Hangouts, Meet, etc.) has been perfectly functional, but with an endless series of migrations, why would you stick around and, every several months or every year, explain to the non-technical family members how the new version of Google's messaging product works?
Enter WhatsApp, which has been pretty consistent through the years, and of course, guess which one people use.
[1]: Of course, it's crazy from a product management perspective - but from a "launch a new product to get the next promotion" perspective...
Once you've done that same "make a low-tech, simple replacement" exercise for all of sleep, context deadlines, tickers, AfterFuncs, etc. - all of which are quite commonly used - you've basically done everything these libraries do, at about the same level of complexity.
On a larger scale: Copenhagen and Vancouver both have fully-automated metro systems (i.e. driverless systems). Presumably there are many other cities with such systems around the world, and they probably all work nicely.
Fine for getting around different areas of those cities, but it's not going to take you wherever you want to go, though.
> The future for self driving cars are closed roads where only driverless cars are allowed.
Given that human-driven cars, trucks, and cyclists are already on the roads and will be for quite a while to come, and that pedestrians already cross them, you would have to build a whole new road network, with crossing points for human-driven vehicles and pedestrian traffic. That is simply infeasible, both in terms of money and in terms of space, especially in built-up areas where the space is already fully utilised.
> So basically they're trying to do a "liveness" check, probably under the assumption that videos are too hard to fake (and hopefully they compare the ID documents against the video). Honestly, that seems legitimate to me. With data leaks and generative AI, it's going to be increasingly hard to do the kind of identity verification tasks online that we take for granted.
I worked for a company that required these videos in one of the markets it served. Some countries already have decent digital ID solutions in place, but in many it's just a picture of a driving license or the like, which is easily faked or stolen. It's kind of a shame that in many countries officially identifying yourself online is either not implemented, or implemented badly enough that no one uses it, so instead we have this poor fallback of uploading pictures of private documents and videos of yourself.