
I have heard this a few times from different people/places, but why is it the case that at 50+ it is harder to find work? Assuming a regular retirement age, there are still many more years left in a career than a typical tech job lasts.


> but why is it the case that at 50+ it is harder to find work?

I think this begins to be visible even sooner - around 38 if you graduated at 23. The majority of the job market needs very few engineers with 15+ years of experience. 5 to 10 years of experience is the sweet spot - you will be hired easily. Everything below and beyond that is a struggle, especially the latter, since very few companies need, and are willing to pay for, those skills.

And that's how you become unemployable, with the irony that it happens at more or less the peak of your technical capabilities. In later years, people start to lose the drive.


I feel like I'm hitting this now. Just turned 39.

I'm very good in my niche, but businesses just want 'answer to question'. I can provide 'answer to question while also making sure the answer-generating process is fully reproducible, data limitations are addressed and made visible, uncertainty has been calculated and is included in the answer'.

Not every question needs that!

Most people are willing to pay for Ikea furniture, not hand-crafted artisanal pieces. Ikea is good enough.


I must say I had never thought about that before, but that's the reality of the tech market. Many people I talk to who are outside tech can't comprehend this at all - they all assume the same thing: the more experience and knowledge you have, the more competitive and therefore sought after you become. Not true at all.


I think landing (and keeping) a job in tech is challenging, whether you're a recent graduate or a seasoned professional with decades of experience. While the reasons for rejection may change with age, the key factors for getting hired remain the same: competency and collaboration. Demonstrating strong skills and being easy to work with will always be valuable - focus on these, and opportunities will follow. - a 40-something developer with 20+ years of experience


On the one hand, I was declined by Google multiple times but ended up getting a $10k settlement in an age discrimination class-action suit.

On the other hand, I just got hired at 55 and it wasn't difficult.


Because most founders who made it in the field did so at a young age, they are biased to view older people who didn't "make it" and are still coding as incompetent, even though that has nothing to do with reality. The reality is that being good on the tech side does not correlate that much with being good on the business side. They are almost independent factors, aside from some low baseline requirement of competence...

The baseline requirement of technical competence for extreme financial success in tech is so low that most big tech companies don't even hire rank-and-file engineers who don't meet that requirement halfway.


There was a YC CEO who, on a podcast, basically asserted that innovation is pretty much done by people under 30 years of age. I had been gearing up to apply to that company until I saw that comment.


the average age of successful founders is 40, and i believe that is for first-time founders. so i do not believe that founder bias against age is the issue.


in austria/germany the problem is that the cultural expectation is that people get paid by seniority, and also based on their experience and qualifications, regardless of the actual requirements of the job. it is also assumed that no one wants to do work that is below their qualifications.

that is, a 50 year old isn't even asked what their salary expectations are, it is simply assumed to be higher than what they want to pay, or rather, they can't bring themselves to pay someone like that less than they think is appropriate for their age. combine that with the perception that older people are less flexible and unwilling/unable to learn new stuff, and you end up with the belief that older people are expensive and useless or overqualified.


One reason is there’s literally many times fewer roles for someone with 20+ years of experience.

And as time marches on, there’s more and more competition for those roles.


I wonder how that could be possible? There are proportionally so few of us old-timers around to begin with, given how much smaller the industry was and how rapidly it has grown over the last 20-30 years.


i thought standard advice is to chop your resume to last 10 yrs


I guess it depends on how many jobs you’ve had in the last 10 years. I’d only have 2 roles if I did that :)


Speaking only for myself:

- Minor health issues accumulate and become a distraction. Especially insomnia.

- Having worked on many projects and technologies that went nowhere, my enthusiasm for the work is diminished, making me less focused.

I decided to return to the last work that I found meaningful, which was as a software developer in the U.S. civil service.

I think this was the right move, although Trump and Musk are doing their very best to make me question that.


I won't be rehired anywhere near what I was making if I do find something, that's fairly certain. So I've put the onus on myself to generate the income I'm looking for.


What do you think about making your salary your top priority?


I think at this point self-determination has eclipsed a great salary from someone else as a priority. Plus I'm fairly certain I can have it both ways.


Bias


Could be bias, could also be that we just can't fake it anymore?


Anyone up for starting a job site for 40+ devs only?

How about an angel investing firm for 40+ founders only?


Genuine question: I appreciate the comments about MongoDB being much better than it was 10 years ago, but Postgres is also much better today than it was then. In what situations is Mongo better than Postgres? Why choose Mongo in 2025?


Don’t choose Mongo. It does everything and nothing well. It’s a weird bastard of a database—easily adopted, yet hard to get rid of. One day, you look in the mirror and ask yourself: why am I forking over hundreds of thousands of dollars for tens of thousands' worth of compute and storage to a company with a great business operation but a terrible engineering operation, continually weighed down by the unachievable business requirement of being everything to everyone?


I have experience using both MongoDB and PostgreSQL. While pretty much everything said here is true, there is one more aspect: scalability. When a fast-moving team builds its service, it tends not to care about scalability. And PostgreSQL has many more features that can prevent future scalability. It's so easy to use them when your DB cluster is young and small. It's so easy to wire them into the service's DNA.

In MongoDB the situation is different. You have to deal with the bare minimum of a database. But in return, your data design has a much better chance of surviving horizontal scaling.

In the initial phase of your startup, choose MongoDB. It's easier to start with and to evolve in the early stages. Later on, if you feel the need and have the resources to scale PostgreSQL, move your data there.


Mongo is Web scale.


Instagram uses PostgreSQL and is still web-scale (unless this was satire).



I have not watched that video since 2013 (wow!) and it is still hilarious.


they obviously didn't use vanilla postgres, but built some custom sharding on top, which is a non-trivial task (implementation and maintenance: resharding, failover, replication, etc.).


Choose Mongo if you need web scale.


a) MongoDB has built-in, supported, proven scalability and high-availability features. PostgreSQL does not. If it weren't for cloud offerings like AWS Aurora providing them, no company would even bother with PostgreSQL at all. It's 2025; these features are non-negotiable for most use cases.

b) MongoDB does one thing well: JSON documents. If your domain model is built around them, nothing is faster. Seriously, nothing. You can do tuple updates on complex structures at speeds that would cripple PostgreSQL in seconds.

c) Nobody who is architecting systems ever thinks this way. It is never MongoDB or PostgreSQL. They specialise in different things and have different strengths. It is far more common to see both deployed.


> It's 2025; these features are non-negotiable for most use cases.

Excuse me? I do enterprise apps, along with most of the developers I know. We run like 100 transactions per second and can easily survive hours of planned downtime.

It's 2025, computers are really fast. I barely need a database, but ACID makes transaction processing so much easier.


MongoDB has had ACID transactions for many years. I encourage folks to at least read up on the topic they are claiming to have expertise in


They failed every single Jepsen test, including the last one [0]

granted, the failures were pretty minor, especially compared to previous reports (like the first one [1], that was a fun read), but they still had bad defaults back then (and maybe still do)

I would not trust anything MongoDB says without independent confirmation

[0] https://jepsen.io/analyses/mongodb-4.2.6

[1] https://aphyr.com/posts/284-call-me-maybe-mongodb


Reputation matters. If someone comes to market with a shoddy product or missing features/slideware, then it's a self-created problem that people don't check the product release logs every week for the next few years waiting for them to rectify it. And even once there is an announcement, people are perfectly entitled to suspect it's a smoke-and-mirrors feature and not spend hours doing their own due diligence. Again, a self-created problem.


Last I checked they still didn't even implement pagination on their blog properly


100? I had a customer with 10k upserts, including merge logic for the upserts, while serving 100k concurrent reads. Good luck doing that with a SQL database trying to check constraints across 10 tables. This is what NoSQL databases are optimized for... There are some stand-out examples of companies scaling even MySQL to ridiculous sizes. But generally speaking, relational databases don't do a great job at synchronous/transactional replication and scalability. That's the trade-off you make for having schema checks and whatnot in place.


I guess I didn't make myself clear. The number was supposed to be trivially low. The point was that "high performance" is like the least important factor when deciding on technology in my context.


A) Postgres easily scales to billions of rows without breaking a sweat. After that shard. It’s definitely negotiable.


So does a text file.

Statements like yours are meaningless when you aren't specific about the operations, schema, access patterns etc.

If you have a single server, relational use case then PostgreSQL is great. But like all technology it's not great at everything.


Then use a text file.

In all seriousness, calling Postgres' scalability "non-negotiable for most use cases" is wild.


What's wild is you misrepresenting what I said which was:

"built-in, supported, proven scalability and high availability"

PostgreSQL does not have any of this. It's only good for a single server instance which isn't really enough in a cloud world where instances are largely ephemeral.


Do you mean ephemeral clients or Postgres servers?


If multiple nodes are needed, then why MongoDB and not a Postgres compatible distributed product like CockroachDB or YugabyteDB?


Thanks for these comments, I appreciate it.

Although I would point out:

> scalability [...] no company would even bother with PostgreSQL at all

In my experience, you can get pretty far with PostgreSQL on a beefy server, and when combined with monitoring, pg_stat_statements and application-level caching (e.g. caching the user for the given request, instead of fetching that data in every layer of the request handling), it is certainly enough for most businesses/organisations out there.
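For example, a rough sketch of pulling the slowest statements out of pg_stat_statements from Go - assuming the extension is already enabled, PostgreSQL 13+ column names, and the github.com/lib/pq driver; the connection string is a placeholder:

  package main

  import (
      "database/sql"
      "fmt"
      "log"

      _ "github.com/lib/pq" // Postgres driver (assumed)
  )

  func main() {
      db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
      if err != nil {
          log.Fatal(err)
      }
      defer db.Close()

      // Top 5 statements by total execution time.
      rows, err := db.Query(`SELECT query, calls, total_exec_time
          FROM pg_stat_statements
          ORDER BY total_exec_time DESC
          LIMIT 5`)
      if err != nil {
          log.Fatal(err)
      }
      defer rows.Close()

      for rows.Next() {
          var (
              query   string
              calls   int64
              totalMs float64
          )
          if err := rows.Scan(&query, &calls, &totalMs); err != nil {
              log.Fatal(err)
          }
          fmt.Printf("%8d calls %10.1f ms  %s\n", calls, totalMs, query)
      }
  }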


Great response. All arguments are valid and fair.


Mongo is a real distributed and scalable DB, while Postgres is a single-server DB, so the main consideration could be whether you need to scale beyond a single server.



things can still be true, even when wrapped into meme videos by haters...


Postgres has replicas. Most people use those for reads and a single master for writes.


This can take you really damn far.

I've been playing with CloudNativePG recently and adding replicas is easy as can be, they automatically sync up and join the cluster without you thinking about it.

Way nicer than the bare-vm ansible setup I used at my last company.
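For reference, a minimal Cluster manifest looks roughly like this (the name and storage size are placeholders; fields as per the CloudNativePG docs):

  apiVersion: postgresql.cnpg.io/v1
  kind: Cluster
  metadata:
    name: pg-example          # placeholder name
  spec:
    instances: 3              # 1 primary + 2 replicas, kept in sync by the operator
    storage:
      size: 10Gi              # placeholder volume size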


Calling MongoDB a real database compared to PostgreSQL is hilarious.

MongoDB is basically a pile of JSON in comparison, no matter how much you distribute and scale it.


I think there is no distributed DB available on the market with feature parity with PgSQL. Distributed systems are hard, and sacrifices need to be made.


sigh

See https://jepsen.io/analyses for how MongoDB has a tradition of incorrect claims and losing your data.

Distributed databases are not easy. Just saying "it is web scale" doesn't make it so.


Are you aware:

1. That PgSQL also has issues in Jepsen tests?

2. Of any distributed DB which doesn't have Jepsen issues?

3. It is configurable behavior in MongoDB: it can lose data and work fast, or work slower and not lose data. There are no issues of unintentional data loss in the most recent (5-year-old) Jepsen report for MongoDB.


Distributed databases are not easy. You can't simplify everything down to "has issues". Yes, I did read most Jepsen reports in detail, and struggled to understand everything.

Your second point seems to imply that everything has issues, so using MongoDB is fine. But there are various kinds of problems. Take a look at the report for RethinkDB, for example, and compare the issues found there to the MongoDB problems.


> Take a look at the report for RethinkDB

RethinkDB doesn't support cross document transactions, problem solved lol


PgSQL's only defect was an anomaly in reads which caused transaction results to appear a tiny bit later, and they even mentioned that it is allowed by the standard. No data loss of any kind.

MongoDB defects were, let's say, somewhat more severe

[2.4.3] "In this post, we’ll see MongoDB drop a phenomenal amount of data."

[2.6.7] "Mongo's consistency model is broken by design: not only can "strictly consistent" reads see stale versions of documents, but they can also return garbage data from writes that never should have occurred. [...] almost all write concern levels allow data loss."

[3.6.4] "with MongoDB’s default consistency levels, CC sessions fail to provide the claimed invariants"

[4.2.6] "even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations"

let's not pretend that Mongo is a reliable database please. Fast? likely. But if you value your data, don't use it.


In an attempt to understand your motives in this discussion, I would like to ask a question:

* why are you referring to 12-year-old reports for a very early MongoDB version?


This discussion refers to the entire history of MongoDB reports, which shows a lack of care about losing data.

If you wish to have a more recent MongoDB report, Jepsen is available for hire, from what I understand.


No, the discussion started with the question "Why choose Mongo in 2025?" So the old Jepsen reports are irrelevant, and only the most recent one, from 2020, is somewhat relevant.


High availability is more important than scalability for most.

On average an AWS availability zone tends to suffer at least one failure a year. Some are disclosed. Many are not. And so that database you are running on a single instance will die.

The question is: do you want to do something about it, or just suffer the outage?


I think the major providers offer PG services with cross-zone availability through replication.


It's sad that this was downvoted. It's literally true. MongoDB vs. vanilla Postgres is not in Postgres' favor with respect to horizontal scaling. It's the same situation with Postgres vs. MySQL.

That being said there are plenty of ways to shard Postgres that are free, e.g. Citus. It's also questionable whether many need sharding. You can go a long way with simply a replica.

Postgres also has plenty of its own strengths. For one, you can get a managed solution without being locked into MongoDB the company.


Citus is owned by Microsoft.

And history has not been nice to startups like this continuing their products over the long term.

That's why, unless it is built-in and supported, it's not feasible for most to depend on it.


that's fair, but that's true of MongoDB itself too. I wouldn't count that against either of them.


MongoDB makes money selling and supporting MongoDB.

Microsoft does not make money supporting Citus.


Simple.

Postgres is hard, you have to learn SQL. SQL is hard and mean.

Mongo means we can just dump everything into a magic box and worry about it later. No tables to create.

But there is little time, we need to ship our CRUD APP NOW! No one on the team knows SQL!

I'm actually using Postgres via Supabase for my current project, but I would probably never use straight up Postgres.


If learning SQL is hard, maybe software isn't the best choice of career.

Writing code and creating good software requires a lot of mental clarity and effort; that fact is never going to change, not even with AI.


Billions upon billions in value have been created just on the premise that SQL is hard.

Firebase, and almost every NoSQL technology, is based on this.


Postgres supports JSONB natively. It literally speaks mongo line protocol and you can shove unstructured json into it.

It has supported this since 9.4: https://www.postgresql.org/docs/current/datatype-json.html
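A rough sketch of what that looks like from Go (table and column names are made up; assumes the standard database/sql package with the github.com/lib/pq driver and a placeholder connection string):

  package main

  import (
      "database/sql"
      "fmt"
      "log"

      _ "github.com/lib/pq" // Postgres driver (assumed)
  )

  func main() {
      db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
      if err != nil {
          log.Fatal(err)
      }
      defer db.Close()

      // Ordinary relational columns plus a free-form jsonb payload.
      if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS posts (
          id     serial PRIMARY KEY,
          author text   NOT NULL,
          doc    jsonb  NOT NULL)`); err != nil {
          log.Fatal(err)
      }

      if _, err := db.Exec(`INSERT INTO posts (author, doc) VALUES ($1, $2)`,
          "alice", `{"title": "hello", "tags": ["intro", "meta"]}`); err != nil {
          log.Fatal(err)
      }

      // ->> extracts a field as text; @> is jsonb containment.
      var title string
      if err := db.QueryRow(`SELECT doc->>'title' FROM posts
          WHERE doc @> '{"tags": ["meta"]}'`).Scan(&title); err != nil {
          log.Fatal(err)
      }
      fmt.Println(title) // prints "hello"
  }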


I don't necessarily agree with the above justifications, but in my experience this is basically why teams pick Mongo.

It's easier to get started with.


Now there's a truth about MongoDB, it's easy to get started with.

But why is that the top priority?


Because some devs and teams prioritise “get to prod” above literally all else.

Maintainability? Secondary. Security? Secondary. Data-integrity/correctness? Secondary.


It's hard to disagree with you on that part. PG is definitely not free to get started with and requires a bit of setup (hello, pg_hba.conf).


Yes, but updating nested fields is last-write-wins, and with Mongo you can update two fields separately and have both writes succeed; it's not equivalent.


Can you provide an example or documentation please?


When you write to a Postgres jsonb field, it updates the entire JSONB content, because that's how Postgres's engine works. Mongo allows you to $set two fields on the same document at the same time, for example, and have both writes win, which is very useful and removes the need for distributed locks etc. This is just like updating specific table columns in Postgres, but Postgres doesn't allow that within columns; you'd have to lock the row for updating to do this safely, which is a PITA.
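A rough sketch of the Mongo side with the official Go driver (database, collection and field names are made up; the URI is a placeholder):

  package main

  import (
      "context"
      "log"

      "go.mongodb.org/mongo-driver/bson"
      "go.mongodb.org/mongo-driver/mongo"
      "go.mongodb.org/mongo-driver/mongo/options"
  )

  func main() {
      ctx := context.Background()
      client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
      if err != nil {
          log.Fatal(err)
      }
      defer client.Disconnect(ctx)

      posts := client.Database("example").Collection("posts")

      // Two separate writers each $set a different nested path on the same
      // document. Both updates land, because the server applies each update
      // operator only to the paths it names, rather than rewriting the whole
      // document (unlike overwriting an entire jsonb column).
      if _, err := posts.UpdateOne(ctx,
          bson.M{"_id": "post-1"},
          bson.M{"$set": bson.M{"meta.reviewed": true}}); err != nil {
          log.Fatal(err)
      }
      if _, err := posts.UpdateOne(ctx,
          bson.M{"_id": "post-1"},
          bson.M{"$set": bson.M{"body.title": "updated title"}}); err != nil {
          log.Fatal(err)
      }
  }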


Even as a JSON document store I'd rather use postgres with a jsonb column.


Why is that? I found Postgres's JSONB a pill to work with beyond trivial SELECTs, and even those were less ergonomic than Mongo.


Because you get the convenience of having a document store with a schema defined outside of the DB if you want it, along with the strong guarantees and semantics of SQL.


For example: let's say you had a CRM. You want to use foreign keys, transactions, all the classic SQL stuff to manage who can edit a post, when it was made, and other important metadata. But the hierarchical stuff representing the actual post is stored in JSON and interpreted by the backend.


I thought this was sarcasm till the last sentence. Now I'm not sure.


The number of perfectly functional messaging apps Google has gone through is crazy[1]. Each Google messaging app (GTalk, Hangouts, Meet, etc.) is perfectly functional, but with an endless series of migrations, why would you stay around and, every few months or years, explain to the non-technical family members how the new version of Google's messaging product works?

Enter Whatsapp, which has been pretty consistent through the years, and of course guess which one people use.

[1]: Of course, it's crazy from a product management perspective - but from a "launch a new product to get the next promotion" perspective...


> Intel reserves the right to ask you to stop using the logo [...]

Serious stuff!


At the risk of appearing low-tech, a much simpler, goroutine-safe solution for dealing with "now-dependent" code:

  type NowFunc func() time.Time

  func getGreeting(nowFunc NowFunc) string {
      now := nowFunc()
      if now.Hour() < 12 {
          return "Good Morning"
      }
      return "Good day"
  }

And just pass `time.Now` in for live code, and your own inline function for simulating a time in tests.
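A sketch of the test side (reusing getGreeting from above; the fixed date is arbitrary):

  package main

  import (
      "testing"
      "time"
  )

  func TestGetGreetingMorning(t *testing.T) {
      // Pin "now" to 09:00 so the morning branch runs deterministically.
      fixed := func() time.Time {
          return time.Date(2024, 1, 1, 9, 0, 0, 0, time.UTC)
      }
      if got := getGreeting(fixed); got != "Good Morning" {
          t.Errorf("got %q, want %q", got, "Good Morning")
      }
  }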


For just calls to time.Now, sure.

Once you've done that same "make a low-tech simple replacement" for all of sleep, context deadlines, tickers, AfterFuncs, etc. (all of which are quite commonly used), you've basically done everything these libraries do, at about the same level of complexity.


On a larger scale: Copenhagen and Vancouver both have fully-automated metro systems (i.e. driverless systems). Presumably there are many other cities with such systems around the world, and they probably all work nicely.

Fine for getting around different areas of the cities, but it's not going to drive you wherever you want to go though.


> The future for self driving cars are closed roads where only driverless cars are allowed.

Given that human-driven cars, trucks and cyclists are already on the roads and will be for quite a while to come, and pedestrians already cross them, you would have to build a whole new road network, with crossing points for human-driven vehicles and pedestrian traffic. That is simply infeasible, both in terms of money and of space, especially in built-up areas where the space is already fully utilised.


Quite right, it is infeasible to close roads for self driving cars only.

Which begs the question: how are self-driving cars ever going to become mainstream?


> So basically they're trying to do a "liveness" check, probably under the assumption that videos are too hard to fake (and hopefully they compare the ID documents against the video). Honestly, that seems legitimate to me. With data leaks and generative AI, it's going to be increasingly hard to do the kind of identity verification tasks online that we take for granted.

I worked for a company that required these videos in one of the markets it served. Some countries have decent digital ID solutions already in place, but in many it's just a picture of a driving license or similar, which is easily faked or stolen. It's kind of a shame that in many countries officially identifying yourself online is either not implemented or implemented badly enough that no one uses it, so instead we have this poor fallback of uploading pictures of private documents and videos of yourself.


Yes. See GDPR (max fine 4% of global annual revenue) or the new EU Digital Services Act (max fine 6% of global annual revenue).

These are both fairly new laws; if you look at the laws they replace (which themselves may not even be that old), the fines are a huge leap up.


Not just moved back there, but used the company Wi-Fi network!


My bad. Should have rtfa. Guess he's just an idiot.

