Imagine this happening on a longer sail where help might be much further away; that’s kinda scary.
If this keeps occurring, I definitely wouldn’t be surprised if it ends with people being prepared for these attacks, and I’m afraid that won’t do the orcas any good.
I feel I should bring up that in the EU there are almost two worlds when it comes to GDPR: Germany, and the rest of the countries.
I’ve made software for the childcare industry, where data concerns are greater than in most other industries.
Nobody had any problem with AWS, or really any non-EU vendor, as long as they lived up to the GDPR and could provide the usual agreements.
Only in Germany would you run into requirements to host either in Germany (at worst) or at least within the EU (at best). Additionally, there are a lot of Germany-specific laws on top that simply don’t exist in the other EU countries, and the general population is also much more concerned about data privacy and residency than in any other EU country.
It was a world of difference, and honestly enough for me that I would not enter the German market again if it meant any compliance effort beyond what the rest of the EU market requires.
A bit more of a rant: the hosting solutions in Germany are also quite atrocious once you get to a certain scale. Lack of proper managed services, tons of instability, insane maintenance policies, poor security support (e.g. no 2FA at many providers). Once you’ve gotten used to how AWS/GCP/Azure handle things, it’s hard to go back to that world.
That EU Sovereign Cloud won’t help at all. The basic facts remain the same: Amazon is a US company, and the US government can force Amazon to hand over the data using a secret FISA order. They can force Amazon to add a backdoor to get at the data if they have to.
> I feel I should bring up that in the EU there are almost two worlds when it comes to GDPR: Germany, and the rest of the countries.
Well, Germany isn't the country that made Google Analytics illegal. Other countries do care.
> Nobody had any problem with AWS, or really any non-EU vendor, as long as they lived up to the GDPR and could provide the usual agreements.
I was in charge of the tech for a massive man-in-the-middle company in Germany, where we integrated with lots of companies to provide data to other companies. No one had an issue with AWS because they were all using it. It’s consumers who care, consumers who will file reports, and companies that will pay the fine.
I make daily/weekly/monthly goals and structure them in whatever app I use, e.g. Linear, Todoist, or Notion:
- Monthly goals are very high level and few (e.g. “Make PoC for this”, “Redesign and relaunch blog”)
- Weekly goals are more tangible and limited (e.g. “Settle on approach for calling Rust from Swift code”, or “Finish design and styling of posts”)
- Daily are very concrete (e.g. “Set up UniFFI pipeline to generate Swift bindings” or “Implement new theme across blog pages”)
Sometimes things come up that I discover during implementation, and then I typically shift a daily goal to the next day.
This has worked well so far for giving me focus; I then pick the daily goals, based on the weekly focus, from the list of the many open tasks/issues I have across my various projects.
I set up each thing I’m working on as a Project in e.g. Linear, and immediately add a priority when I add things, which allows me to easily keep an overview of many smaller or larger projects I might have going on or want to do in the future.
While I do like paper, for me that’s only for ephemeral things. I prefer to keep things digital, which lets me easily add stuff from my phone on the go when I get an idea while out and about. I also write much faster on a keyboard, and I use the various tasks as a dumping ground for info while I’m working through or researching something.
I’m definitely of the opinion that what Figma[0] (and earlier, Notion[1]) did is what I’d call “actual engineering”.
Both of these companies are very specific about their circumstances and requirements:
- Time is ticking, and downtime is guaranteed if they don’t do anything
- They are not interested in giving up the massive feature set AWS supports via RDS, especially around data recovery (anyone involved with Business Continuity planning will understand the importance of this)
- They need to be able to do it safely, incrementally, and without closing the door on reverting their implementation/rollout
- The impact on Developers should be minimal
“Engineering” at its core is about navigating the solution space given your requirements, and they did it well!
Both Figma and Notion meticulously focused on the minimal feature set they needed, in the most important order, to prevent disastrous downtime (e.g. Figma didn’t need to support all of SQL for sharding, just their most used subset).
Both companies call out (rightfully so) that they have extensive experience operating RDS at this point, and that existing solutions either didn’t give them the control they needed (Notion) or required a significant rewrite of their data structures (Figma), which was not acceptable.
I think many people also completely underestimate how important operational experience with your solution is at this scale. Switch to Citus/Vitess? You’d now find out the hard way all the “gotchas” of running those solutions at scale, and it would almost certainly have resulted in significant downtime while they acquired this knowledge.
They’d also have to spend a ton of time catching up to RDS features they were suddenly lacking, which I would wager would take much more time than the time it took implementing their solutions.
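To make the middleware idea concrete, here is a minimal sketch of application-level shard routing, the general shape of what such a layer does. This is not Figma’s or Notion’s actual code; the shard count, hash, and connection-string format are all hypothetical.

```typescript
// Minimal sketch of application-level shard routing (hypothetical names,
// not any company's real implementation). The idea: hash a shard key
// (e.g. a workspace ID) to one of N logical shards, so any query that
// filters on that key can be routed to a single database.
const NUM_SHARDS = 16;

function shardFor(shardKey: string): number {
  // Simple FNV-1a hash; a real system must pick a stable hash it can
  // never change without a resharding migration.
  let h = 2166136261;
  for (let i = 0; i < shardKey.length; i++) {
    h ^= shardKey.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) % NUM_SHARDS;
}

function connectionStringFor(shardKey: string): string {
  // Each logical shard maps to a physical database; in practice the
  // mapping is many-to-one so shards can be split later without rehashing.
  return `postgres://db-shard-${shardFor(shardKey)}.internal/app`;
}
```

The hard part these companies solved isn’t this routing function; it’s everything around it: rewriting queries, keeping the mapping changeable online, and doing all of it incrementally and reversibly.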
the right way to look at it - IMHO - is to interpret "lots of RDS experience" as a complete lack of run-your-own PostgreSQL experience. given that, it's not surprising that their cost-benefit math gives them the answer of "invest in a custom middleware" instead of "move to running our own PostgreSQL plus some sharding thing on top"
of course it's not cheap, but probably they are deep into the AWS money pit anyway (so running Citus/whatever would be similar TCO)
and it's okay, AWS is expensive for a lot of bootstrapped startups with huge infra requirements for each additional user, but Figma and Notion are virtually on the exact opposite of that spectrum
also it shows that there's no trivial solution in this space, sharding SQL DBs is very hard in general, and the extant solutions have sharp edges
> even if it meant changing priorities 5 times in 3 days
That kind of thing is exactly what an EM is there to stop.
I’ve faced similar problems a couple of times, and it takes a lot of effort to correct the ship into a healthy Product/Engineering organization, but it can be done. There are tangible downsides to switching priorities constantly, and beyond inefficiency, team morale usually also takes a hit.
Now, without an EM, whose responsibility is it to stop this? No one’s, really; it doesn’t land on anyone’s plate, so you either have to hope one of your colleagues with enough organizational leverage steps up, or do it yourself.
Honestly, the article lays out the opinions I would expect from a junior or mid-level Engineer (and it irks me quite a bit):
- Clearly not aware of the difference between “an individual” and “team dynamics”
- Unaware of the huge amount of organizational work that happens behind the scenes that someone needs to do (and I guarantee that you wouldn’t want to do that as an Engineer)
- Unaware of how much an EM shields their team as the frontline for questions, planning, prioritization, feasibility
- HR is never gonna be there to guide or mentor your Engineering career; an EM will
Even the most senior engineers need guidance, and that’s what your EM is for.
The EM typically steps in to fill voids until they find qualified people to do these things. Not every Engineer is good as a tech lead or a project manager, so it takes time to get them up to speed or to find someone else.
Very excited about Cranelift for debug builds to speed up development iteration, in particular for WASM/Frontend Rust, where iteration speed is competing with the new era of Rust tooling for JS, which sometimes lands sub-1-second builds (iteration speed in Frontend is crucial).
Sadly, it does not yet support ARM macOS, so us M1-3 users will have to wait a bit :/
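For anyone wanting to try it: at the time of writing, the Cranelift backend is nightly-only and enabled per profile in Cargo.toml (after installing it with `rustup component add rustc-codegen-cranelift-preview`):

```toml
# Nightly-only Cargo feature; opts the dev profile into Cranelift
# while release builds keep the default LLVM backend.
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
```

Release builds are unaffected, so you keep LLVM’s optimizations where they matter.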
But usually, at least, I don’t build an executable while iterating. And if I do, I try to set up the feature flags so the build is minimal for the tests I am running.
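A hypothetical Cargo.toml sketch of that feature-flag approach; the crate name and version here are purely illustrative:

```toml
# Gate heavy, rarely-tested functionality behind optional features so
# `cargo test --no-default-features` compiles as little as possible.
[dependencies]
wgpu = { version = "0.19", optional = true }  # illustrative heavy dependency

[features]
default = ["gpu"]
gpu = ["dep:wgpu"]
```

Then day-to-day iteration runs `cargo test --no-default-features`, and the heavy path only builds when explicitly requested with `--features gpu`.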
There’s been a recent trend of rewriting tools in Rust with insane speed gains:
- Rspack (webpack compatible)
- Rolldown (rollup compatible)
- Turbopack
- Oxc (linter)
- Biome (linter and more)
- Bun (written in Zig, does crazy fast bundling)
There are several parts here that are crucial to Frontend development.
For production you need:
- Minification of source code
- Bundling of modules and source code into either one JS file or split into multiple for lazy loading only the parts you need
- Transforming various unsupported high-level constructs into something older target browsers support
- Typechecking/compiling, or stripping TypeScript if that’s in use
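As an illustration of that third point, “transforming unsupported constructs” means rewriting newer syntax, like ES2020 optional chaining, into something older targets can run. The `config` value here is just an illustrative stand-in; real transpiler output also uses temporaries to avoid double evaluation.

```typescript
// An ES2020 construct and (roughly) what a transpiler emits for
// older browser targets. `config` is an illustrative example value.
interface Config { server?: { port?: number } }
const config: Config | undefined = { server: { port: 3000 } };

// Modern source: optional chaining + nullish coalescing.
const port = config?.server?.port ?? 8080;

// Roughly the downleveled equivalent for a pre-ES2020 target.
const portDownleveled =
  config == null || config.server == null || config.server.port == null
    ? 8080
    : config.server.port;
```

Doing these rewrites across thousands of modules is exactly where the Rust-based tools win big over their JS predecessors.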
Build times could easily go to 10-20 minutes with older tools.
The development loop also gets hurt; here you’d want the loop from saving your change to seeing it in the UI to be almost instant. Anything else means you’ll have to develop crutch methods to work around this (imagine moving and styling components, only to need to sit and wait during each small incremental change).