This reads like a Speaker for the Dead moment (from Orson Scott Card's Ender series): neither eulogy nor denunciation, but an honest accounting. Acknowledging the real impact without excusing the real harm.
This is neither prescient nor particularly insightful. Private equity metrics have been down for a while: fundraising down 20% since 2022, distributions down 11%, deal value and count down 60% and 35% respectively, exit value down 24% YoY.
PE is fueled by interest rates, and the entire thesis has flipped from revenue/growth to EBITDA. The shift is exposing some dogs: both portcos that can't hide fundamental business-model issues behind cheap capital and PE firms that can't manage operations and financing in the new environment. The correction is well underway.
What does “correction” look like in your opinion? PE touches a large share of the workforce, 8-9% of GDP; do these companies get sold, or just squeezed dry to recoup the investments made during ZIRP?
For portcos, you'll definitely see the focus on costs. That means restructuring/layoffs, contraction from non-key markets, and reduced growth initiatives.
PE is going to be loath to sell at a loss, though you'll see some horse-trading between firms. So that would be a last resort, though we are already seeing some write-downs, like Vista/Pluralsight last month[1].
More broadly, you'll see lower valuations and tightening in the credit markets that may contribute to a macroeconomic slowdown.
Most of this isn't exclusive to PE: interest rates and other drivers are affecting non-PE companies similarly in the form of increased borrowing costs, tighter credit conditions, and general economic uncertainty. The contrarian view may be that PE portcos are better able to navigate those waters given the focus on business fundamentals and operating maturity.
I'm at the tail end of two of these, of ~10 in my career. They are always tough, always a bit of chaos, and all different.
Planning is important; avoid committing to targets or deadlines until you have your arms wrapped around what needs to be done. The scope can be wide-ranging: product parity, contract management, internal asset development (project plans, test suites, customer training, etc.), customer change management, and team throughput.
You have few clients but large impacts. You likely want to pick the friendliest one and give them generous terms to be the "test case". Expect it will take 2x longer than your estimate.
Do as much work on parity as you can: what are the differences between v1 and v2, and how will you bridge them? If data migration is involved, you will need tooling and team training.
Inevitably you will find that customers move slower than you like and are using v1 in ways you did not expect.
Day #1 of any N-month long migration/rewrite project I've participated in:
PM: "Fill out this spreadsheet with key dates leading up to the project completion."
Me: "First, that's your job, not mine. Second, I literally just got here, I haven't even drunk my coffee yet. Hi, my name is Jiggawatts. I first heard of this software we're migrating ten minutes ago."
PM: "Yes, yes, but the customer asked me for cost estimates and timelines."
Me: "I asked for a Lamborghini packed with supermodels, but I didn't get that either. Tough break, huh?"
PM: "It's not an unreasonable request!"
Me: "Without time machines and/or a magic crystal ball, it is. Do you have a time machine?"
Etc...
We all recognise this, and it's a symptom of an underlying problem.
Really, what ought to occur is incremental progress and demonstrable deliverables. If you go off into a cave for two years and come back with something the customer doesn't like, then you've caused a business catastrophe.
I've found that businesses and customers in general prefer incremental improvement. One trick in .NET land is to use something like YARP[1], which lets you totally rewrite the app... one web page at a time.
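To sketch the idea: YARP is driven by a Routes/Clusters config, and route matching prefers the more specific path, so you can carve pages off the legacy app one at a time. A minimal sketch (the paths, ports, and names here are invented for illustration, not from any real project):

```json
{
  "ReverseProxy": {
    "Routes": {
      "rewritten-page": {
        "ClusterId": "new-app",
        "Match": { "Path": "/orders" }
      },
      "everything-else": {
        "ClusterId": "legacy-app",
        "Match": { "Path": "/{**catch-all}" }
      }
    },
    "Clusters": {
      "new-app": {
        "Destinations": { "d1": { "Address": "https://localhost:5001/" } }
      },
      "legacy-app": {
        "Destinations": { "d1": { "Address": "https://localhost:5000/" } }
      }
    }
  }
}
```

Each time a page is rewritten, you add one more specific route pointing at the new app; the catch-all keeps serving whatever hasn't been migrated yet.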
Another management trick on top of that is to not demo the last few steps. Complete the last few milestones of the project quietly, without reporting this up until the very end. I guarantee you that everyone in charge of the budget thinks they can "save money" by skipping the "last 10%", even though that results in 2x the ongoing complexity because it means the legacy components must still remain live and deployed to production.
I guarantee that the only way to prevent this is to lie to management. It is biologically impossible to insert these concepts into the brain of a non-technical manager, so don't even try.
We've gone through a round of layoffs, and I've been thinking about the same thing.
It's not that it's too easy—it's that it's too impactful.
The real answer is social safety nets. If you want to protect people, address the root problem that your life is dependent on having an employer. Proper unemployment or UBI plus universal healthcare makes losing a job annoying ("ugh now I have to find another one") vs. terrifying.
Jack up the corporate tax rate (on revenue) to pay for it—which should be a wash after reducing the load of severance, healthcare benefits, etc. that companies are paying today.
Better worker protections like the UK/Europe are mechanisms too—notice periods, guaranteed severance, etc.—but have their own chilling effects.
This has the added benefit of reducing the barriers to entry for individuals: people are more likely to leave bad jobs or pursue their own opportunities, which in turn should drive subsequent job creation.
I'm a big fan of social safety nets, but would you agree it needs to stop somewhere? I assume the people laid off here were very well compensated for their work and likely had ample opportunity to build their own reserves. A safety net should be set up in a way that it enables you to have a comfortable, but basic lifestyle. Or should society pay for the CEO's mansion's maintenance should he get fired?
Yes, social safety nets would be the usual way the impact/consequences are taken care of. At the same time, America could also lean into the idea of the employer providing everything; in that case it would make sense for the employer to also take care of the consequences.
This post sent me down a rabbit hole on etymology.
In short: words change meaning. "Addiction" as a term went from a positive 17th-century sense of "devoting oneself to another person, cause or pursuit", to being "associated with excessive alcohol use" in the early 1900s, to "linked almost exclusively to excessive patterns of substance use" in the 1980s, to the modern medical definition—not made until 2013!—of "the most severe degree of the addictive disorders, due to pervasive/excessive substance-use or behavioural compulsions/impulses".
Indeed, for most of the late 20th century, no one could agree what it referred to! "The word addiction was deliberately omitted from four consecutive editions of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders [...] because it was considered a layman’s rather than a scientific term, pejorative, stigmatizing, and too difficult to define. There were simply ‘too many meanings’ (Alexander & Schweighofer 1988); the term lacked any ‘universally agreed upon definition’ (Buchman et al. 2011); the result of using it was ‘conceptual chaos’ (Shaffer 1986, 1997)."
So it hasn't been redefined to mean anything because it was never fully defined to begin with. Only in the last decade has it truly been formalized, and yes, it includes both chemical and behavioral dependency.
> I’m saying addiction as a term referred to chemical dependency.
Addiction does not exclusively refer to chemical dependency.
> Now it is being redefined to mean “people doing anything I don’t like and don’t think they should do”.
I don't know where you got that impression; if that were the case, I would expect widespread disagreement about what is addictive.
Why is it not possible that both chemical dependency and addictive behaviours are often harmful?
> Whatever you call it there is a tremendous difference in type between alcoholism or opioid dependency and social media use.
What does difference in type mean? Are we distinguishing between alcoholism and opioid dependency, or between material conditions and its presentation in social media?
> Calling them the same word is grossly misleading if not outright lying.
Alcoholism (or to be more precise, alcohol dependence) is a physiological condition. Gambling ‘addiction’ is a psychological condition. That is what I mean by difference in type. No amount of self control will stop Delirium Tremens but some amount of self control can absolutely stop you from gambling. But since a dominant political ideology wants to deny self control altogether it groups these two different situations together under the same word.
People tend to want the best of both worlds. They want to "own their software" (i.e., pay once, use forever) but they also want the benefits of SaaS: low up-front investment, cloud services, continual evolution, network effects, etc.
Photoshop is an easy example: would you rather pay $400 up front to have version X.X forever, or $10/mo forever to always have the latest version? That's a tradeoff! Consumers have voted with their wallets on #2.
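Back-of-envelope, using the hypothetical prices above (not Adobe's actual pricing): the subscription only becomes more expensive than the perpetual license after 40 months, ignoring upgrade fees on the perpetual side.

```python
# Break-even between a one-time license and a subscription,
# using the illustrative prices from the comment above.
perpetual = 400  # one-time cost, dollars
monthly = 10     # subscription cost, dollars per month

breakeven_months = perpetual / monthly
print(breakeven_months)  # months until the subscription has cost more
```

And that's before factoring in that the perpetual buyer historically paid again for major upgrades, which pushes the break-even point out further.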
Cloud services are even harder because you start talking hardware. "Owning Photoshop" is easy because it runs on my computer. I'm maintaining my computer for me and only me. What would "owning their software" even look like for, I dunno, GitHub? Are you running your own AWS instance? Are other people running their own instances?
There are ways to build P2P software, or on-prem enterprise stuff... but no one really wants to buy it. They're ok paying $10/mo for the billions of dollars of infrastructure because there's really no other way to do it.
> Photoshop ... Consumers have voted with their wallets on #2.
Did they? Or did Adobe just stop offering #1, forcing customers into #2 whether they like it or not?
I'm partly being facetious, but it's also partly a genuine question. Is there some research you know of that shows a majority of people genuinely choosing #2, when both are on the table?
For a long time the standard model was "pay $X for the current version, then $Y to upgrade to the next version, if you want." That is pretty close to the best of both worlds, IMO, from a customer standpoint.
> Did they? Or did Adobe just stop offering #1, forcing customers into #2 whether they like it or not?
Completely agree!
I purchased CS6 around 2012 and use it to solve graphical tasks for a client even today.
I know Adobe's stock went up ~25% (or was it more?) when they announced the new subscription/robbery model, but I'm sure a lot of customers would like to do as I do and keep getting value from the old investment by "owning" rather than renting CS<x>.
Only if you HAVE to be using the latest and greatest features does the subscription math work out for the customer.
P.S.: Too bad that my Macs are no longer able to run CS6, since CS6 contains some 32-bit code and newer macOS versions only play with 64-bit apps. That's a shame (though here Windows shines by (almost?) always being backwards compatible; my client is a Windows business, so it's not a problem there).
> Did they? Or did Adobe just stop offering #1, forcing customers into #2 whether they like it or not?
I'm recalling that they did run both models in parallel, but couldn't find a reference in a quick search.
Regardless, the 25% stock price jump would indicate that from a "voting with their wallet" perspective, subscriptions were a rather unequivocal winner.
There are certainly pay-once-use-forever models out there, though my perception is that they're niche pricing models for a reason. (And still not outright ownership!)
Does that mean prospects and customers were happy about it? Maybe not. But I suspect the only answer that would have really satisfied the majority would be the unattainable "best of both worlds".
Killer marketing opportunity: when I clicked through, I was immediately looking for a live example. You've got the carousel of screenshots, but show me what the actual output looks like.
Heck, add this as an opt-in in the signup process (or better, as a later opt-in once you've shown value), then showcase 4–5 artists. What artist wouldn't want a little free publicity?
Whether it's consumer trends or technical accessibility, it seems to be more of a "wasteland" than it was 10 years ago. [Old man shakes fist at clouds.] Meaning more "small and not very good games" or a focus on simpler concepts à la Wordle.
Did developers move to mobile? Was there something about Flash that reduced the barrier to entry that we have lost? Did consumer preferences change? Is this all anecdote and the indie game scene is thriving?
I spent many years in mobile and then recently decided to try the web stuff.
I believe the web is now Wordle or bust, by which I mean you need a wordle type sharing mechanic or you will not get repeat players. Players of webgames seem overwhelmingly to jump from one to the next, and while it is easier to get that first play session than from mobile app stores it is much harder to get them back. This is not helped by monetization on the web being awful.
An exception to that is the crypto space, which doesn't seem to generate much money either and suffers from the related phenomenon of players being involved mainly to acquire things to express themselves. Some form of UGC seems a necessity moving forward.
Flash represented a very fixed target that was easy for everyone to reason about. (On mobile this is one reason iOS is much easier than Android.) Typically the game itself wouldn't even resize; it was just hosted on a page that did. It got an understandably bad name with tech people, but as an art tool with programming embedded it was actually very good, just not a secure delivery platform.
The result is a web with barriers to entry low enough to keep competition insane, while being complicated enough that delivering premium experiences on it is practically impossible. Not a winning situation.
To be fair, boats can be inexpensive to buy. But they have running costs and essential maintenance costs, and a low-cost slip likely only accommodates a very small boat and evicts you if you try to live there.
More importantly depending where your boat is, you might have to deal with extreme heat, cold, wind, flooding etc, your water and waste disposal isn't plumbed, you might be generating your own electricity etc etc. The flip side is that you get the fun of going boating, but it's a lifestyle, not a cheap flat.
Sent from my narrowboat. (Shameless plug: for sale!)
That "drop out" concept just seems entirely wrong: not only because it's erroneously constricting the applicant pool, but because it strongly favors the earlier roles in the queue.
For example, consider two roles and two applicants, with fit scores as below:
              Role 1   Role 2
            --------  --------
Applicant A    96%       95%
Applicant B    95%       50%
Ignoring the "drop out" bug, under the algorithm described the system would evaluate all candidates for Role 1, determine Applicant A is the best, then move on. At that point, Applicant B is the best candidate for Role 2... even though they're not a very good one. Overall, not a great outcome (73% avg.).
You'd think the algorithm would want to maximize outcomes across all roles: the more optimal "best fit" solution would be Applicant B in Role 1 and Applicant A in Role 2 (95% avg).
(I'm assuming the reality here is that Role 2 isn't available at time of evaluation, so there's no way to evaluate the universe without waiting, which may be sub-optimal.)
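The gap between the two strategies is easy to check by brute force. A toy sketch using only the fit scores from the table above (not the actual system's algorithm):

```python
from itertools import permutations

# Fit scores from the example above: (applicant, role) -> percent fit.
scores = {
    ("A", "Role 1"): 96, ("A", "Role 2"): 95,
    ("B", "Role 1"): 95, ("B", "Role 2"): 50,
}
applicants = ["A", "B"]
roles = ["Role 1", "Role 2"]

# Greedy: fill roles in queue order, taking the best remaining applicant.
remaining = set(applicants)
greedy = []
for role in roles:
    best = max(remaining, key=lambda a: scores[(a, role)])
    remaining.remove(best)
    greedy.append(scores[(best, role)])
greedy_avg = sum(greedy) / len(greedy)

# Optimal: try every full assignment, keep the best average fit.
optimal_avg = max(
    sum(scores[(a, r)] for a, r in zip(perm, roles)) / len(roles)
    for perm in permutations(applicants)
)

print(greedy_avg)   # 73.0 — A takes Role 1, B is stuck with Role 2
print(optimal_avg)  # 95.0 — B takes Role 1, A takes Role 2
```

Brute force only works for toy sizes, of course; for a real pool the same "maximize total fit" objective is solved efficiently with assignment algorithms (e.g. the Hungarian method).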
At first glance the algorithm seems to reward compliance ("take whatever is offered") and severely penalize any teacher who insists on a particular placement (by refusing the first placement you are knocked out of the applicant pool, maybe for a long time).