> Speaking of cutting costs, the company is still pouring multiple billions of dollars into vaporware called “the metaverse”. News flash: no one wants to wear VR goggles to spend any time in a digital heaven where the role of God is played by Mark Zuckerberg and you can do anything you can imagine, including “work” and “shop”.
You can somehow feel that he has been dying to say this for years, but couldn't while he was still working for Meta...
But yeah, I can imagine how the decisions on layoffs usually go: "what are those guys doing? Probabilistic something or other?! No idea what that's good for! And wow, look how much they get paid!"
I see your point and don't mean to be argumentative, but a couple small corrections.
First, the pivot to "meta" was just over a year ago, so it hasn't been quite years.
Second, I haven't been shy about sharing my opinion internally, though I haven't been broadcasting it either. The first thing I said in our team group chat when we'd heard this announcement was (context, I am much older than most people on the team) "I'm old enough to have read Snow Crash the week it came out and IT WAS A DYSTOPIA, why are we building it?"
Third, this opinion is indeed extremely common internally.
Fourth, I genuinely have no idea how this decision was made; it was certainly not on the basis of net cost savings. We did the math.
> Third, this opinion is indeed extremely common internally.
It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
I know that's easy to say as someone not in that situation, but not always easy to do for someone who is. I get that people don't always have a ton of choice about their employment; maybe they are afraid of losing their health insurance, maybe they are on a visa and can't easily switch jobs, or maybe they simply aren't able to find another job that works for them. Less noble, but maybe the pay is just too good, and if they stay for just a bit longer, it will be life changing. I can totally sympathize with that!
But if this opinion is "extremely common", I would expect that a good number of those people would have the ability to leave, to the point that perhaps Zuckerberg would rethink his strategy.
(I don't really buy the "change things from within" explanation; 1) that rarely works, especially in a company the size of Meta, and 2) if tried, that clearly has not worked, given Meta's continuing trajectory.)
People are leaving. In droves. There has been a huge exodus of senior talent in the last year. Zuck mentioned that in an earnings call and rather than assigning himself any responsibility said that those people were unregretted attrition and lazy parasites who were just collecting a paycheck during the pandemic. (I am paraphrasing somewhat; you can look it up if you want his exact words.)
Regarding change from within -- that's what a team dedicated to improving decision making is for.
Why would he ever consider taking responsibility for something when he can't really be held accountable? It should be illegal to take a company public with a special class of super-voting shares.
It is prohibited on some stock exchanges. You are free to buy your shares only there. You are also free to buy, on exchanges that do allow it, stock in companies that don't have two share classes. No one is forcing anyone to buy Meta.
> Zuck mentioned that in an earnings call and rather than assigning himself any responsibility said that those people were unregretted attrition and lazy parasites who were just collecting a paycheck during the pandemic.
Wow. What an effective way of encouraging current employees to leave while discouraging others from applying.
Remember: toxic work environments are reinforced from the top, and the top _only._ Toxic leadership begets toxic work environments, and it looks like Zuck is a right peach in being insufferably toxic.
I've often wondered this, especially when I quit a company out of principle, expecting that I wouldn't be the only one. But even when an entire department quits en masse, it turns out that none of us are ever as important as we thought we were. The guys on the top who make the real decisions are on the top because they are experts at driving their vision forward regardless of whatever happens underneath or around them. They might make pretenses toward being servant leaders or listening to upward feedback, but the reality is if they have convinced enough investors or board members to back them, or if they independently own a majority stake, they can and will do whatever they like. The rank and file employees do not matter.
I think most workers have accepted this reality. Workplaces are not a democracy. Even if most of the workforce disagrees with a policy that was approved from on high, there is little they can do to change it. Especially in the tech industry, where jobs are so highly-paid and sought-after, there isn't really the option of collective bargaining. And that's even assuming the kinds of snowflakes who tend to work in this industry would accept that Break The Build Joe deserves the same pay as Rewrote It In Rust Rob.
Personally I maintain my own principles around what will drive me to leave, but I have long given up hope that my decision to quit will have any impact whatsoever on the direction of the company. In the past it's barely even had an impact on the people I directly worked with. I'm just not that important, and neither is any other IC or low-level manager. I think most of us try to find a company whose direction we can tolerate just enough to continue working there for the cash. And, I suppose, there are thousands of people for whom Meta still passes that bar.
> The guys on the top who make the real decisions are on the top because they are experts at driving their vision forward regardless of whatever happens underneath or around them.
"The question is not whose values are correct but whether you can work in a situation where good work on your part will perpetuate values you don't believe in. If you leave, you'll be making the greatest possible contribution to your own well-being, and you just might help propagate your values."
Understanding The Professional Programmer, p19 - Gerald Weinberg
Case 1) FBWorker thinks it's a boondoggle, but is neutral on Facebook, and can read the political tea leaves. Your basic mercenary corporate footsoldier. So stay and get FAANGpaid.
Case 2) FBWorker drinks the koolaid (maybe because they can't keep the FAANG pay from becoming their identity). Probably a noob out of college or someone who just wants FAANG on the resume. Stay and build it. Probably not a lot of talent. And get FAANGpaid.
Case 3) FBWorker generally hates Facebook, maybe a holdover from pre-monetization FB days. Why stop your enemy from making a huge mistake? Wait, AND you get FAANGpaid?
Case 4) FBWorker who is a principled moralist and once believed they could change the world for the better.
One of the reasons Facebook is in trouble, especially if they actually think the Meta pivot is key, is that almost all their employees are Case 1 or Case 3.
Case 4 quit long ago or doesn't apply anymore.
The question is, is there even a semblance of a core of Case 2 at Facebook? Doubt it.
So Marky Z is like a dictator now. Yeah he's top dog of his realm, but motivating people to any fundamental degree isn't possible. Money only goes so far. The organization has purged all the truly motivated talent.
Haha yes. There was always a lot of drama at Facebook about “leakers”. People HATE them, they don’t understand why they have no “honor” and don’t quit if they hate Facebook so much. They call for witch hunts in comments whenever something leaks.
People seemed completely unable to connect the dots, that if you think Facebook is evil, why not leak all their stuff to the press, and get PAID to do it?
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
Not really. People fear change. Engineers at Meta almost certainly have good pay, work with people they like, have a generally comfortable life.
I think the common opinion is that nobody will want or use the metaverse. Getting paid to build something that nobody will use isn't immoral, it's amoral.
Helping an advertiser trick kids into spending their time in a pupil dilation monitor/retinal scanner so that ads can be optimized for impact based on biometric markers isn't amoral, it's immoral.
I know it's an accepted sci-fi trope and people (including advertisers) genuinely believe it's the future, but advertising doesn't work that way.
When you peel back the layers and figure out how this sausage is made, underneath it's just sociology and broad target audiences. (Age, sex, location, social status.)
I mostly agree with your point, but it's a bit more specific than that. I used to work in airfare advertising and we could target ads at people who were interested in certain locations. You searched for flights to Paris and they were $600 and didn't purchase? Well now there's a weekend deal where it's $450 and we'll bid on Instagram ads that target you. You've flown to Europe, Asia and North America? Here's some Latam flight ads. Flew to Chicago over the Christmas holiday? We'll show you some Chicago flight ads next October.
We didn't get access to the actual user lists, but the queries you could build in your ad bids could include these factors. We were also in a special "Ads For Travel" partner program, so we probably had bid options not available to the average FB ads account manager.
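Just to illustrate the shape of it, here is a toy sketch of the kind of retargeting rule described above, with entirely hypothetical field names (this is not Meta's actual ads API, which we never saw directly anyway):

    # Toy retargeting rule: bid on a user who searched a route, didn't buy,
    # and for whom a cheaper fare has since appeared. Field names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class UserSignals:
        searched_routes: dict = field(default_factory=dict)  # e.g. {"SFO-CDG": 600}
        purchased: bool = False

    def should_bid(signals: UserSignals, route: str, new_price: float) -> bool:
        """Bid if the user searched this route, didn't purchase, and the price dropped."""
        old_price = signals.searched_routes.get(route)
        return old_price is not None and not signals.purchased and new_price < old_price

    user = UserSignals(searched_routes={"SFO-CDG": 600})
    print(should_bid(user, "SFO-CDG", 450))  # True -> target this user with the $450 deal ad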
Retinal scans do not help; however, 'eye tracking', which is currently available in Quest Pro and should filter down to the consumer tier headset in a year or two, does. When Meta knows what you look at, and for how long, and in which patterns, then advertising will legitimately have some 'next level' shit to work with.
The role of the retinal scans, I expect, has more to do with identifying which human is in the headset. In the metaverse, biometrics of that sort will be like cookies that you can't clear.
People are interested in VR for gaming. Not for the metaverse. Zuck should be building a console or something, not wasting time on something no one wants.
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it
Show me a big tech company (or any MNC) that is not "building something that is bad for the world". The broader question is why do people keep working for big tech, and that has been answered many times before (lots of money, smart colleagues, interesting problems, and a difference in perception on culpability.)
>The broader question is why do people keep working for big tech
Also, if the alternative is "work for a small company", some of us have tried that before and that hasn't been very good either, just for different reasons (terrible pay, abusive management, toleration of sexual harassment, etc.).
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
Most people work for the pay. That's why you pay people: to make them work on something they don't necessarily like.
Not everyone needs to be or to act like an activist.
Some people are driven by more blatantly mercenary priorities than others, but most people prefer to perform work which aligns with their morality and gains them the respect of others; it's part of that whole "hierarchy of needs".
It is of course possible to meet one's higher-level needs in other parts of one's life, but if your career is actively opposing your values, it becomes that much harder to make up for what you are missing in the limited time remaining. If you can't make that balance work, burnout and depression are likely.
> most people prefer to perform work which aligns with their morality and gains them the respect of others; it's part of that whole "hierarchy of needs".
Your values are only worth so much, and a big check will make you forget them faster than you think. After all, that's what corruption actually is: people willing to compromise on whatever they believe or used to believe for financial benefit.
Also don't forget that people will rationalize anything. Hardly anyone goes to work every day hating it - if you stay in a place you don't really like, ultimately you will find reasons to stay and to justify your choice.
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
Facebook is a giant privacy-violating advertising company that demonstrated repeatedly that it was bad for the world long before Meta happened. And yet they attracted a huge amount of great talent who knew going in what FB was. Microsoft did the same thing in the 90s.
People have their own reasons for joining such companies. Maybe it's stability, or the quality of the team, or freedom, or it looks good on a resume, or maybe it's just the money. I'm not going to judge somebody for any of that.
If I had been working at FB I might have tried to transfer to the metaverse team. Even if I had no confidence it would work it would have been much more interesting than that dying social network. And probably less evil, because it will probably die before it gets big enough to matter.
> Facebook is a giant privacy-violating advertising company that demonstrated repeatedly that it was bad for the world long before Meta happened.
The "bad for the world" stories I've heard turned out to not have substance (Cambridge analytica and the "harmful to young girls" study). Are you aware of something else?
Humans are too easy to manipulate for "change things from within." We try to please people subconsciously and it causes us to betray ourselves. Some people have grown beyond this vulnerability, but for many of us it's permanent.
Some people work on making the metaverse more addictive, others work on improving Instagram's accessibility, others work on abstract computer science problems that only happen to be under the umbrella of the same corporation that makes Facebook.
It's not black and white; there are degrees of separation. You and I participate in an economy that somewhere, many layers down, causes human rights abuses. No ethical consumption under capitalism.
It's impossible to separate completely. Everyone has to pick and choose how many layers of separation (and from what) are needed for them to feel okay about their effect on society. For you or me that might mean we won't work at Facebook or Amazon, but your startup's angel investors aren't angels either. Neither are the companies you buy your devices from, or the banks you lend your savings to.
This isn't to say we shouldn't care, but we shouldn't pretend things are clear-cut, and we should take care throwing rocks at glass houses.
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
From Oppenheimer onwards this hasn't been too unusual - but what is unusual, I think, in Meta's case is how widespread the sentiment is among smart folks.
> It is intensely weird to me that people hold the opinion that their company is building something that is bad for the world, and yet they stay there and continue to help build it.
Hoo boy, the old "just take another job" trope. I've got news: First, in this late-stage capitalism neoliberal hell, people have to cling to their jobs for more reasons than anyone can count. Second, where are they going to go? Who is hiring that isn't also doing things that are bad for the world, at a rate that someone with a student loan, a family, a mortgage, and college-bound kids can afford?
Knock it off with the "just get another job" thing and come join the real world.
You know, the 'just get another job' attitude is annoying because of the lack of empathy, but the 'you have no right to talk about principles because I want to be upper-middle-class' attitude is annoying because even though we all rationalize, being aggressive about it to the extent that you denigrate a person for having ideals makes me feel a sort of doomed misanthropy.
"Big tech" - particularly facebook and google, I think are widely considered to be at worst actively evil and at best responsible for the race to the bottom in the internet experience and public discourse that's happened over the last 15 years. But they also appear to pay really well, have locally cool problems to work on, and have lots of work that has no relevance to destroying society, e.g. all the AI research they both do, which I don't think it's possible to overstate how important it's been for the growth of the field, in a very good way.
So it's easy to see the dilemma in a sense. It's maybe really just a concentrated example of capitalism generally. All money has blood on it
"I'm old enough to have read Snow Crash the week it came out and IT WAS A DYSTOPIA, why are we building it?"
This is such an incredible quote, I love it. And I love that you said it in your group chat. I wish more people in tech could see this forest from within the trees.
> But at least it was a dystopia where pizza delivery was an exciting business.
It wasn't an exciting business.
Pizza delivery was an exciting job (in all the worst ways a job can be exciting, or, TBH, just stressful), but the business part of it was quite boring. The corporate takeover denouement was perhaps the most boring scene in the book, and barely justified the "Chekhov's Guns" that were required to give a baddie a single "oh shit" moment with no payoff.
I am a huge fan of your work, Eric. I started out in PL and ended up in ML, working on far less impactful projects than you did :)
However, I don't believe the metaverse is a waste of time, at least for business users. It could eliminate all office space use in about 5-10 years if the hardware and software are ready, which they very likely will be. Or maybe a wall-to-wall LED screen will do the trick for remote social presence, making the headset redundant.
It can spill over from the office to personal lives like the PC did. But anyway, hope you continue to work on cool things.
Having previously worked on a data team before the pandemic whose raison d'être was "saving costs", the reality is that it didn't.
We used advanced statistical concepts and vended models for other teams/services to consume, but the truth is that either the savings were really obvious solutions the other teams just hadn't bothered with, or the models didn't work all that well (statistical models are, unfortunately, prone to many interpretations).
I've since moved on to a feature-based team where I feel like I'm more on the ground, and recently I was able to get rid of a lot of cost just by... looking at it. I remain skeptical of teams that base themselves on statistical models to reduce cost.
> You can somehow feel that he has been dying to say this for years, but couldn't while he was still working for Meta...
It's a pretty common sentiment in my experience here, de rigueur even. Expressing it in the way he does here is definitely frowned upon though -- one of the most interesting cultural traits I've noticed that Meta has fostered is an awareness and avoidance of cynicism. When people comment internally and there is even a hint of cynicism in what they say, they are frequently called out on it. Never seen such a thing ever before in my life. For me it's refreshing, but I imagine for some, depending on the topic and how negatively they feel about it, it could lead them to spiral and exit.
I've experienced this in a similar workplace before, and it did lead me to spiral and exit; to me it was absolutely exhausting keeping such a ruse up. There was something about it that felt so inauthentic: a bit toxic positivity, a bit Hide the Pain Harold.
It's like the workplace version of Instagram itself, where everybody shows their best side, is mildly ashamed of feeling anything but positive because of the collective emphasis on "good vibes only" and keeps any concerns, cynicism or suggestions that we're going in the wrong direction under wraps.
I think it's ultimately unhealthy (both psychologically for individuals and for the company) and leads to the same problems you see in autocratic nations - the leaders only ever see everything going swimmingly.
I worked at Google when memegen was created and "went viral". Before memegen there was a strain of what I would call inauthentic positivity.
I think memegen made it a lot more acceptable to be cynical, which was probably good, because the company was definitely drinking a lot of its own Kool-Aid.
Yeah, there's a ceremony where the manager puts a piece of paper with their username in a goblet and drops in a match, and then the non-managers all collectively intone "gone. gone. gone." and turn their backs. It's kind of sad, but the manager is allowed to keep their ceremonial robe as a memento, and there's usually a box of free cookies, so /shrug
> Before memegen there was a strain of what I would call inauthentic positivity
Too lazy to check, but did memegen appear before or after the whole Google+ debacle? Because from the outside Google did seem to drink a lot of its own Kool-aid back then (before and during the first stages of the Google+ launch).
In a way, the current discussions around VR remind me of those times, lots and lots of smart people refusing to say that the emperor has no clothes, for one reason or another.
Oh yeah, but funnily enough a lot of the critiques were shared via the internal version of Google+, since people actually took the time to write things and it had a pretty good UX. I don't think it's a great comparison to Meta... this is anecdotal, but Google+ felt plausible enough as a product strategy, and was contained enough in its own org, that I think most rank and file gave it breathing room and tried to be constructive (save for a few cases where they'd made some bad calls). Keep in mind it was a period of huge growth and experimentation across the company, so there was a lot to feel positive about even if you were skeptical of Google+, and it didn't feel like they'd bet the farm in the same way it might at Meta.
Memegen predates Google+. And there were lots of people loudly saying that the emperor has no clothes, but the VP in charge didn't care and many mandates forcing people to work on G+ were issued anyway.
Before. I made a Memegen meme about ten years ago comparing the sign in with G+ buttons to the classic Wikipedia Jimmy Wales begging-for-donations banners, and for many years it was one of the top 100 Memegen memes ever.
My last two jobs have had this lack of cynicism, but I would describe only one of them as toxic. We were going straight to the top and anyone who left was a traitor. The other simply considered the products to be incredibly important and life-changing.
I can live with keeping the cynicism to myself as long as the rest of the culture is ok.
I also wouldn't want to work with a person who is just negative all the time, but fair critique and moral checks should be encouraged, especially when the company holds so much power.
However well paid at the outset, such "cults" more often than not are ultimately pernicious not only to their environment but to their members as well.
I felt this way about the entire United States when I moved here from Europe. The standard stance in the UK is cynical, dry, and too-cool-for-school. Try-hards are despised. The US was different and very refreshing. Enthusiasm and optimism can be expressed without embarrassment, and having a too-frequently-cynical stance is looked down on.
I felt a step change again moving from academia to industry, but perhaps it goes a step too far. Sometimes I feel like thoughtful analysis is suppressed in favor of active thrash, because the former smells like bad-valence skepticism and the latter like approved optimism.
I found similar going from the UK to the USA - a can-do, positive attitude and enthusiasm are appreciated (this was the private sector, and 20 years ago). Having returned to the UK, I think it's got better here, at least in tech. If you take a slightly tempered, not in-your-face US attitude back to the UK, it goes down well in the right workplaces, because people genuinely like positive, enthusiastic colleagues. In the UK it can be hard, though, to get managers not to see this as a threat.

When I first returned, the American proactive attitude I think caused me to bomb an interview: I (innocently and tactfully, at least I hoped) asked why they were using technology X and whether they had considered Y, which I think they took as a threat - someone who'd be insubordinate - whereas in the US they'd more likely go "good question!" and be glad someone cared enough to ask (at least in my experience). I behaved the same the following week when interviewing at a startup and, naturally, got that job. ;) Since then I've adjusted, balancing being proactive with being careful not to rock the political boat.

American positivity has to be seen to be believed. When in the US I was both impressed by people's ability to stay positive in the face of very difficult challenges, and slightly bemused by the amount of unfair s** people would silently put up with, e.g. in customer service jobs.
On the other hand, I found US engineers were way more likely to try to exaggerate their own work and push for promotion, etc. whereas the EU had more team solidarity.
Like, working with US engineers, there was one who deliberately scheduled meetings at bad hours (time zone difference), or would "accidentally" send the invite to your pager email instead, etc., to make others look bad.
We're talking about cynicism and not criticism, correct? In my experience, cynicism is unproductive at best and anti-productive at worst. Criticism, of course, is valuable and healthy.
Cynicism is a good and healthy thing to share with colleagues over a beer, but when you're on the clock will kill morale. Arguably dumping loads of money on a vaporware moonshot is also a morale killer, but sniping at it in meetings helps no one.
Excessive cynicism in the face of actual positive change is bad, however when you have no power to change something, you resort to cynicism. If your employees feel no power to express themselves over your bad decisions, you've built a truly toxic system.
The tone comes from the top. If criticizing the CEO’s strategy is prohibited as “cynicism,” then lower level leaders throughout the company will take that as license to prohibit criticism of their own ideas. And often that type of criticism is far more productive.
If people are cynical in the face of positive change then that’s only because they have experience with previous examples of ‘positive change’ that turned out to not be positive after all.
It’s easy for bad leaders to redefine legitimate criticism as cynicism. When I worked for a company with a similar culture I saw this happen all the time.
Too much cynicism in any person or organization will lead to gridlock and/or burnout as new ideas are immediately scrapped and morale tanks. Just ask anyone who has worked for 10+ years in government. However, for a private company that kind of critical thinking is often important to make sure that all the lemmings don't run off the cliff.
It is interesting to me that cynicism is stifled at a cultural level at Meta. It is some kind of low-level cult like behavior, to stifle internal criticism. It must also breed a kind of in-group/out-group mentality, as I don't know a single person IRL who has a positive view of the company, its products, or the metaverse.
I find too much cynicism off-putting. I'm at a Big Tech adjacent (or not depending) company and one reason (of many) I don't consider Google as a potential employer is because everyone I've met there is deeply cynical about the company. I've gone places in my life I never expected I'd end up in, my own brain is wired to filter out cynicism. If I had to deal with a company culture deeply cynical about everything they work on, I'd either become irascible or horribly depressed.
People are different. Good thing we tech folks are well-compensated and are in fairly high demand.
Too much cynicism certainly sounds bad, but cynicism in and of itself shouldn't necessarily be problematic. Eschewing it entirely to always opt for optimism is inherently dishonest and does not acknowledge that sometimes having a negative response is fair and justified. And not allowing that as part of company culture is stifling.
Do you think people's cynicism about Google has made you miss out on a positive opportunity, or could their discontent have signaled actual organizational issues that would have affected you negatively?
Some of this is quibbling about definitions in my head at least. Fundamentally I like working in environments where folks are optimistic but realistic, keenly aware of how effortless failure is. Discussing both failure and success should be allowed and encouraged, but constantly looking at the negative or opining about how an individual can't change anything in the organization doesn't feel healthy to me. Most Googlers I've talked to view the company as a large, corporate politics chess game where engineers are the pawns.
> Do you think people's cynicism about Google has made you miss out on a positive opportunity, or could their discontent have signaled actual organizational issues that would have affected you negatively?
This is a really good question and I don't have a good answer for it. At this point my sample size is high enough that I'm inclined to think it's Google but I also realize my sample set has lots of correlating factors (they're more junior than me, they work in different areas than I would, etc, etc) that could lead to their cynicism that might not affect me.
I don't know, I'm not a Googler. I don't enjoy working with large groups of folks who think like this and I don't enjoy working in organizations where engineers are treated as pawns. Seems like either way, Google isn't the place for me. If you're a Googler you probably have a better understanding than I do.
I think across this thread we're conflating cynicism with skepticism. I am a utopian; I'm no believer in cynicism. It is unhelpful to shoot down everything or to refuse to even try, and a goal being unachievable doesn't necessarily mean we won't accomplish valuable stuff in its pursuit.
But that doesn't mean we shouldn't examine and criticize ideas, that we shouldn't seek to improve upon them and - perhaps, if they are irreparable - abandon them for better ideas.
Attempting to force the market into a box that is convenient for you because it enhances your power and market position, because you want to be in control of a hardware platform to achieve parity with your competitors - and refusing to acknowledge it may not be what people actually want - that is truly cynical.
Cynicism is often a protection reaction against realizing things are not as they are stated. And that can happen for any number of reasons, but usually it's because the words and actions of the company aren't lining up in very noticeable ways.
From talking privately to a few Google employees, part of what drives the cynicism is disagreement with the organization's progressive political bias. Those who are more politically conservative know that openly expressing their opinions will result in retaliation, so they just grit their teeth and stay silent in order to continue collecting their large paychecks.
> When people comment internally and there is even a hint of cynicism in what they say, they are frequently called out on it.
Correct me if I'm wrong, but it sounds like whenever people presented valid criticism, the standard approach to silence it would be to criticise the tone with a holier-than-thou attitude. Sounds like a cynical ploy to shield yourself from criticism.
I get that cynicism can be a negative thing, and can drag down people and groups. But I think there's a fine line between cynicism and (constructive) negative feedback. And I wonder if an anti-cynicism culture has the effect of silencing legitimate negative feedback as well.
On cynicism vs. criticism, I'm reminded of something Scott Alexander recounted in his review of Red Plenty, an actual historical event:
(Kantorovich) invented the technique of linear programming, a method of solving optimization problems perfectly suited to allocating resources throughout an economy. He immediately realized its potential and wrote a nice letter to Stalin politely suggesting that his current method of doing economics was wrong and he could do better - this during a time when everyone else in Russia was desperately trying to avoid having Stalin notice them because he tended to kill anyone he noticed. Luckily the letter was intercepted by a kindly mid-level official, who kept it away from Stalin and warehoused Kantorovich in a university somewhere.
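For anyone curious what that technique actually looks like, here is a minimal resource-allocation example solved with scipy's linprog - a modern solver and made-up numbers, obviously nothing like what Kantorovich had to work with:

    # Minimal linear program in the spirit of Kantorovich: allocate scarce
    # labor and material between two products to maximize output value.
    # The numbers are invented for illustration.
    from scipy.optimize import linprog

    profit = [-3, -5]            # linprog minimizes, so negate per-unit profits
    A_ub = [[1, 2],              # labor hours needed per unit of each product
            [3, 1]]              # material needed per unit of each product
    b_ub = [100, 90]             # available labor hours, available material

    res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)       # optimal production plan and the value it yields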
Cynicism can be warranted. Of course it's possible to be excessively cynical, but lack of cynicism can be downright deadly. Sometimes you are in situations where you can do little, and the best you can do is still pretty awful. And if you suppress the doubts that could make you cynical, more often than not someone else has to carry those doubts for you, to at least prevent the disasters that ARE preventable.
That's a form of thought policing. It's not cynicism if it's legitimate criticism. Ideas do not automatically deserve legitimacy; they have to earn it through reason first.
The metaverse is not real, it's not going to happen. The numbers are not there. All of those users are in vrchat, and when Facebook buys it and turns it into a hell scape of a child playground the community will yet again go elsewhere.
I would be curious how many people are willing to wear VR goggles for any amount of time. I spend easily 10-12 hours a day at my computer. I am absolutely someone who is happy working, socializing, playing, and learning all at the same desk. But I can't wear those goggles for even 2 hours. Are there people who can wear them for 12?
I haven't used the new ones, but I have an Oculus Go. I think the most important part is fitting. I believe there are companies selling accessories to make it more comfortable to wear, and I'd totally invest in that if I planned to use it more, or in a different setting.
I'm using it for porn (and it's amazing, VR porn is the most underrated thing imho, but maybe I'm just weird) and for movies (non-3d, having these slightly-3d-movies didn't really add to the experience for me). I'm someone who can't concentrate on movies on a normal screen, my attention wanders and I'll quit watching and do something else, continue later etc and it might take me three days to complete a single movie. Not so while using the Oculus Go, I'm cut off from the world around me, focused on the movie, and now I sometimes watch a movie in one sitting (though I do rarely watch movies these days, so idk how much this is worth).
I don't know if I want to spend any time "socializing" through it, but when I was sick I've definitely used it for 6-7 hours on one day to watch multiple movies, and it was fine.
The fact that VR content is gated behind major companies concerned with brand safety is a major reason to be skeptical about current VR tech ever taking off. If it were more like the early internet where any passionate and reasonably technical person could make widely available apps and content, I think VR would be much more interesting. Porn is one of the most obvious genres, but also just having a bunch of weird niche content and experimental games would be really cool.
Living in a bland Facebook controlled world overseen by Zuckerberg the God is about as enticing as filing my taxes on a daily cadence.
Watching movies seems like a different application of the tech. Doesn't that just simulate a movie screen several feet in front of you? That's probably not quite as sickness inducing as moving around a full VR environment.
Yes. I mean, if you want to, you can also have a simulated empty cinema around the screen.
I've never felt sick while using it, but I've also only played very few games on it, and those weren't action packed with lots of moving about, but more simple and relaxed.
I can manage 15 minutes or so on my aging Oculus Quest but that's about it. There's a Netflix option on there, I can relax in a virtual cinema with surround sound and a screen sized big enough to feel like a cinema screen and I've not been able to watch anything because of the vertigo.
I thought my kids would go crazy on it, perhaps I'm too old, out of touch etc. but they can do 15 mins max too. It's a novelty toy, quickly put away.
If Google Glass had really taken off and I could have AR, not VR - overlays on ordinary vision - I'd be there. Handy for work; could do virtual meetings, notifications, all sorts. But as with most things Google, it went the way of the dodo and I haven't heard of any replacement poised to take the world by storm.
The Quest has a fairly low refresh rate, which is one of the main barriers to delivering a product that doesn't make you sick.
Back when Carmack was leading the tech dev, they came out with a rule saying that they needed a working minimum of 90fps for MOST people to tolerate VR.
We've known for a long time that for actual use refresh rate (and secondary factors like jitter and tearing) is far more important than resolution, or polygons (and before it became a moot point, color depth).
But the demos and screenshots required to sell the idea aren't impressive if they don't look comparable to recent AAA games, so the wrong kind of hardware keeps getting crammed into the prototypes, and the target keeps moving so there is no opportunity for Moore's Law to solve the mismatch for you.
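The arithmetic behind that 90fps figure is brutal, by the way - the per-frame budget shrinks fast as the refresh rate climbs:

    # Per-frame rendering budget at common headset refresh rates.
    for hz in (60, 72, 90, 120):
        print(f"{hz:>3} Hz -> {1000 / hz:5.2f} ms per frame")
    # At 90 Hz that's ~11.1 ms; miss the budget and frames get dropped or
    # reprojected, which is exactly the judder that makes people feel sick.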
I see this a lot, and I'm not sure why people are so confident about it. Lighter, thinner, more durable materials are a huge upsell and focus of ongoing product development in eyeglasses, and that's to save 10 or 20 grams. The lightest VR headsets are still an order of magnitude heavier than eyeglasses. Closing that gap while also addressing the resolution and battery life issues with existing headsets is going to require breakthroughs.
I think it's inevitable in the long run, the question is just whether those breakthroughs come too late for Meta (my guess is they will). The salient point is that given enough time, hardware limitations won't be an obstacle to a metaverse-like something.
Flying cars aren't a thing not because we can't practically do it, but because it is an absolutely horrifying idea to turn the average driver into a pilot with an aircraft that they are in charge of maintaining. (Or was that your point?)
Maybe I'm in a bubble, but most people I know who wear glasses don't actually enjoy doing so and would really rather not if they can avoid it. VR would have to be demonstrably superior to any alternative to justify wearing it for that long even if it was as light as a feather.
I've worn glasses since I was a small child and that's not my experience at all. It's comparable to wearing clothes, except they cost me less and are quicker and easier to clean and maintain. I've occasionally had uncomfortable glasses with materials that my skin reacted to, but that's easy to resolve. And I've occasionally lost or damaged a pair of glasses, and that's a hassle, but not as much as losing your wallet or keys. On any given day I don't consciously think about my glasses at all.
>Maybe I'm in a bubble, but most people I know who wear glasses don't actually enjoy doing so and would really rather not if they can avoid it.
I think you're in a bubble. We have a way of avoiding glasses: contacts. But people choose glasses instead. I wear contacts, not glasses, but every time I ever talk to a glasses-wearer about why they wear glasses instead, they usually can't imagine wearing contacts and think glasses are just fine. I've never heard anyone complain about them. And just looking around me, most men wear glasses; it's usually women that wear contacts.
Contacts don't need any more maintenance, probably less. You just clean them quickly after you take them out for the night, or in the morning just before you put them in. Glasses need to be cleaned regularly. Plus you throw the contacts away after a while (daily, biweekly, or monthly usually) and use new ones.
Putting contacts in takes a few seconds if you're used to it. What's weird is walking around with heavy metal contraptions on your head and ears.
My Apple headphones weigh nothing. Very comfortable.
I don't like having to take them out, or put them in, when I change tasks to where I don't need or want them in my ears.
VR is for immersive experiences. There is a place for that. It will not be in mainstream constant use, unless "mainstream" means to plug in and be fed a false reality for somebody's profit.
I got used to them pretty quickly after some initial motion sickness issues and could wear them for quite a while. Multiple hours if I got super into a few games. I'm pretty excited for having VR replace multiple physical monitors. I'm gonna be looking for a new headset in the coming weeks to see if that's viable right now--maybe the new Meta headset. But if something is promising and can render text fairly well, I'm gonna give it a whirl and see how it goes.
Right now the tech obviously has a ton of comfort issues, from motion sickness to fatigue. But it's not hard to imagine a near future where it's a lot more comfortable to wear VR headsets and be able to do some novel things with them. I mean, the first "laptops" were quite burdensome.
I think the problem is something that we don't actually understand yet. Like, there needs to be some kind of psychologist studying it to figure out what is going on.
When I am in the goggles, it's kind of cool, but I'm completely disconnected from the real world.
It's not a group activity either; board games connect you to other people, VR connects you to the matrix. When one person in a family puts on the goggles, everyone else just leaves the room. There is nothing to see or share. (If you have a TV on, maybe it's interesting, but you are still basically watching a person move their head around and look at things that aren't there, like someone who's mentally ill; it's a strange experience.)
And the physical barrier is always there. You walk up to the zone you have defined in your real-life room, which is probably a few dozen square feet. You can touch invisibility: some bizarre barrier exists to your hands, but your eyes tell you the exact opposite - you see infinite space, yet are trapped inside a tiny grid whose barriers appear when you walk too far. Your eyes and ears tell you there's a whole world; your hands and feet understand you are still in your living room or wherever.
So when I come out of VR I have this bizarre, uneasy, queasy, unpleasant feeling for which I don't have a word. It is not like waking up from a dream. It's like shifting uncomfortably from one reality to another, one you have been to by yourself, completely alone.
Weight, they can make you sweaty, may feel uncomfortable in other ways, look dorky as hell. Probably do a bunch of bad eyestrain-related stuff that we haven't figured out yet (or some have but are keeping it quiet). Serious motion-sickness issues for a fraction of users that's too large to ignore, even with top-notch goggles.
When they're in the same size/weight/appearance ballpark as sunglasses, is when AR/VR glasses will take off. It'll be the next "smartphone revolution", no question about it. We'll wonder how we ever put up with being as tied-down as we are at a normal office workstation. The smartphone put the Internet everywhere, rather than in one place, AR/VR will put your computer everywhere. Until then... yeah, it's niche tech.
Sounds like a nightmare. And I already have frequent nightmares about my phone. The one where I need to do something urgently and the phone just gets slower and slower. It's a modern rehash of the dream where you are trying to run from something but your legs just get heavier and heavier.
I don't love it either but it seems inevitable unless it turns out to be impossible to make AR/VR glasses that are svelte enough for people to actually wear in public, and I wouldn't bet against it—I expect most of us in 1995 didn't believe we'd have many-core computers with gigabytes of ram and disk and with near-zero-latency touchscreens and all-day+ (for typical use, at least) battery life in devices with about the same volume as a cigarette packet, just ~12 years later.
Otherwise, the future of at least "consumer" computing is surely computer-as-overlay/HUD-on-reality, very likely with a side of being able to toggle totally virtual environments on and off at will ("Siri, take me to my office") which is what I meant by the computer being everywhere, rather than just having Internet access everywhere. Unless we crack some kind of really good brain/computer interface before we solve the VR-goggles-suck problem, which I doubt, but who knows.
So the plural of anecdote isn't data, but I have a Quest 2 and the limiting factor on use for me is one of two things.
1. The battery runs out.
2. I get physically exhausted. Most of the VR stuff I do is fairly energetic so it's not the VR goggles that tire me out, it's the constant swinging of arms and jumping/crouching.
I've never had an issue with motion sickness and since I'm doing it in my home any worries about how dorky they look are silly. Comfort is mostly fine, although you do have to wash off the foam bits that touch your face regularly or they'll start to smell like old gym socks. Fogging of the lenses is also an annoying and regular issue that I've never fully solved, mostly just getting used to everything being soft looking. The final minor issue is that the lenses can get warm (like hardworking cell phone level warm) so if your room is already hot they would probably get fairly uncomfortable.
Thanks to the battery issue I've never used them for more than a couple of hours at a time however. I can't comment about the comfort after 12 hours. I imagine my arms would have fallen off long before I got to 12 hours of Dragon Fist, Ragnarock, or Beat Saber however.
To stay relevant to the article, I'll comment on Horizon Worlds: my overall impression after an hour of trying it out, just to see, was "What did they spend the billions of dollars on?" It's so corporate and empty, and I have no idea where all of the money went. It looks like any old VRChat clone; there are a handful of minigames, chatrooms, and "VR experiences" which are just short looped videos. It's not like Second Life, where you could maybe build your own thing or might stumble upon some crazy weird thing at any point. It's just minimal effort everywhere you look. To hear that it is such a money pit makes me wonder if it's some kind of weird money laundering thing, or if the developers have just been watching YouTube all day for years.
There's a little bit of heaviness if the headset isn't balanced well, but that's easily fixed.
The more concerning thing is the motion sickness. Most people, at first, get nauseated after a short while, and if they don't stop using them for hours at that point, it gets worse and worse each time they use them.
However, if they stop and recover (at least a few hours, a day is better) when they first start to feel it, they'll gradually get more and more used to it.
There's also the inability to properly see things around you, like your coffee or your mouse. AR is a good fix for that, though, and Meta's new Pro glasses specifically don't have full wrap-around so that you can still see around you somewhat. It ruins immersion in games, but they aren't meant for games.
I'm a pretty big fan of VR from way back, and I've owned multiple different headsets now. I do think the "metaverse" is an eventuality, but it's not about meetings, it's about agency. Meta's current attempt at "the metaverse" is just a crappy attempt at doing better than Second Life, but without even the things that made Second Life as good as it was.
The agency to create things yourself and sell to others, and the ability to buy licensed in-universe items is pretty much essential to a functional metaverse, IMO. Meta may intend to get there eventually, but trying to sell it as "the metaverse" before that point is pointless and harmful to their goals. It's going to take a long time to get there, and I'm still hoping that a grassroots movement makes it happen first instead of a big corporation. Ready Player One was all about that scenario and what it would mean. You have to look past the cloying nostalgia to see it, of course. ;)
I use an Oculus Quest 2 to play (and LOVE it!). Weight and head/neck strain and tightness are the first issue. Nausea is the follow-up. Eye strain is the final one. I never use it for more than 30 minutes at a time.
I cannot imagine spending ANY work time in VR at this point in the technology cycle, once you add resolution, accuracy, etc. I do not understand what problem it's solving - if you want to visually interact remotely, turn on your camera. If you don't, just talk and screenshare. I do not understand what virtual reality would add to my interactions and productivity.
As someone who has pretty heavily used a Rift 2 for 5+ years now, primarily its ergonomics and comfort.
More physical activities can result in the foam around the eye piece absorbing a goodly quantity of sweat (addressable e.g. with the plastic cover that comes with the Quest 2 or aftermarket alternatives) which just feels gross and can lead to more humidity being trapped within the headset, fogging the lenses, and so on.
The weight is a bit awkward, and different straps can help distribute it better and stay comfortable for longer. The ear phones can be uncomfortable after a time as well, pressing down on the ears as they do. If they were a cupping style like high end headphones, that would help a lot.
I do find that the tethered units like the Rift are more comfortable for longer than the self contained units like the Quest, since they offload processing, power, etc and the attendant weight, to the desktop machine.
Eye strain does add up eventually, and newer headsets have better screens but I wonder if this is just a truly insurmountable problem of mounting screens mere inches from your eyes.
For me personally, my eyes got very tired very fast; an hour-long session left me feeling as though I had been staring at a screen for 10 hours straight.
I have a HTC vive original and I actually had a lot of fun with it, but I don’t use it anymore because it takes up a lot of space and the experience is still kind of clunky/low res.
For many people there is a physical discomfort side, from either the heavy device or motion sickness. I didn't have much issue with this, other than one time when I played for most of the day and the weight on my face was a bit much.
I spend about 40 minutes-1 hour/day with my Oculus 2 (just did a 40 minute Fall Out Boy Beatsaber session). But just playing. I've done 2 hour sessions before and they are pretty intense, but not in the eyes (maybe the weight becomes a bit of a strain after 1 hour?).
Well, not that I'm in favor of the idea, but probably if you're used to wearing VR goggles since childhood (the same way we are with regular screens) spending 12 hours a day with them on may be just fine.
When I am interacting with regular screens, what my eyes see and what my proprioception and inner ear perceive are perfectly synchronous. However, the lag in VR is still humanly perceptible.
It is not a "getting used to it" exposure problem; it is still very much a VR technology problem. We are just not quite there yet.
I never understood the VR hype. Sure it’s cool for games, and both AR and VR could have commercial uses, but people were talking to me like it was going to be the most significant computing revolution since smartphones and we would all be interacting with VR/AR user interfaces primarily in the near future.
Then Meta comes along and it seems like it's just a ripped-off idea that's been done multiple times, plus some buzzword tech and graphics that make Xbox player profiles look good. The idea that this was seemingly going to be some grand new flagship product for the company was laughable from the beginning.
Sure, but you trying to do that won't hurt anyone (well, aside from the bank account of the person paying you, but that's on them). Building a dystopian VR-based social network hurts a lot more people.
I would hope we have better ethics than to work on things that harm others for truckloads of money.
Then again, a lot of people still work for Meta (and Twitter, and...), so I guess ethics is pretty lacking.
People change the environment themselves by choosing that over the alternative. If you don't like the thing that other people chose that's a fair opinion to have, but it's hard to argue that harm was caused by it.
I would argue that trying to force VR on consumers by over-promising it before it has a 'killer-app' is harmful to VR as a technology, and I think that VR is a potentially very important technology. However I can't argue against the Zuck giving Carmack 20 billion dollars to build awesome and affordable toys that I love using, especially when it has the potential to scare off his stockholders and make him even more of a laughing stock. I say it is a wash.
I work in the field of Data Science and one upsetting reality has started to sink into my mind over the last year.
In a business there is the top line and the bottom line. There are a lot of Statistics/ML/Data Science jobs that are about moving that bottom line. You build something to optimize something to reduce costs.
The value provided by the bottom-line people is less visible than the value of the top-line people. The easiest way to move the bottom line is by just getting rid of people. So when the axe falls, the bottom-line people get cut, and it's hard to understand why.
It's the same thing as people say about fires. When you put out a fire you are a hero. When you prevent the fire in the first place, everybody thinks it's business as usual and nobody understands why you are needed.
> It's the same thing as people say about fires. When you put out a fire you are a hero. When you prevent the fire in the first place, everybody thinks it's business as usual and nobody understands why you are needed.
I got a dose of very cold water about this thirty years ago when I was building payware that improved developer productivity. I gave a presentation about its ROI, and afterwards, a developer walked up to me and gave me some feedback that none of the business-types had articulated:
Products are either vitamins or painkillers. People buy painkillers, because they're in pain. People postpone vitamins, because nothing is wrong and the benefits are always "later."
I didn't 100% change what I chose to build over the years, but from that time to today, I have worked on always spinning what I sell as an antidote to a customer's pain point, rather than as an investment they make to pay off eventually.
p.s. I don't know where that dev got the "vitamin/painkiller" metaphor, but it's sticky!
Ironically this quote also shows how broken the US is: it's normal to take painkillers.
It should not be.
It should be a last resort.
You should take what fixes the problem and give your body time to heal, not take painkillers and pretend nothing is wrong.
Painkillers are addictive, can have a diminishing effect over time, can have a bunch of side effects, and can make the end result much worse by leaving wounds (metaphorically) unhealed while they are still easy to heal (1).
(1): Though sometimes they can also help you heal by preventing pain-driven bad movements, like setting your foot down at a bad angle.
EDIT: Just to be clear, I mean painkillers in a "normal life" situation, not in the context of lying in a hospital bed or having an extreme health issue that can't be fixed or healed any time soon.
It’s important to draw a distinction between narcotic painkillers like opioids and safe painkillers like anti-inflammatories. In general, it’s safe for people to occasionally take certain painkillers for minor pain. In some populations, drugs like aspirin can even extend life when taken daily.
Chuckle. I wonder about the "leopards ate my face" moment, which seems particularly apt here if you consider Silicon Valley VC money to largely be expensive steroids that appear to provide big growth fast, but will likely land you in worse shape in the long run, either when you withdraw them or by leaving you addicted, trying to avoid the resulting crash.
AKA you're removing the "vitamins" path in exchange for some future "medicine".
A real fun one is rebound headaches. I spent a few months with horrifically painful headaches. Turned out it was mostly from the painkillers: the more I took, the worse the headaches got.
My other, less painful headaches that started the cycle were an actual brain issue. It just took a few years to get the correct diagnosis.
Eventually I had a cycle of one round of painkillers every other day, cycling through to a different kind each time. This mostly worked until I got excess brain fluid drained off, which actually solved the issue.
I got hospitalized once with the suspicion of having a TIA, but it turned out to be migraine with aura. I'd had those since I was a kid, but never so intense (part of my vision going poof!). I identified that the trigger is intense smells (vinegar, for example).
My head hurts when atmospheric pressure changes fast. If you solve that problem I'll stop taking painkillers, and in the meantime you could stop using appeal-to-nature fallacies.
Say you have 100 developers and you reason each should get a second monitor worth $300, because this increases productivity by 20%.
According to an accountant, you just added $30K in costs to the books, with nothing to show for it. You can't eat productivity, nor is it a line item in the books.
Who is to say that this 20% of freed up time is used productively? Or used on things that increase revenue? If so, how much revenue? And when?
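A toy back-of-the-envelope version of that asymmetry (a sketch only: the monitor price and 20% figure are the ones from the comment above, and the fully-loaded developer cost is a made-up number for illustration):

    # Toy illustration of the asymmetry above; every number here is hypothetical.
    developers = 100
    monitor_cost = 300          # dollars per developer, hits the ledger immediately
    productivity_gain = 0.20    # claimed, but never appears on the ledger

    # What the accountant sees: a concrete, immediate expense.
    booked_cost = developers * monitor_cost
    print(f"Cost on the books: ${booked_cost:,}")        # Cost on the books: $30,000

    # What the accountant does not see: freed-up time with no agreed dollar value.
    loaded_cost_per_dev = 150_000   # hypothetical fully-loaded annual cost per developer
    implied_value = developers * loaded_cost_per_dev * productivity_gain
    print(f"Implied value of freed time: ${implied_value:,.0f}")
    # $3,000,000 -- but only if you accept the assumptions, which is exactly the problem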
>Say you have 100 developers and you reason each should get a second monitor worth $300, because this increases productivity by 20%.
>According to an accountant, you just added $30K in costs to the books, with nothing to show for it. You can't eat productivity, nor is it a line item in the books.
Right, this is why developers should NOT get 2nd monitors.
Even better, the business can save another $30k by not getting the developers any monitors at all.
Yep, and I think we can apply the accounting logic to the original article.
A team that regularly saves costs for other business units is a promise of cost savings. Dropping the entire team is an immediate and concrete cost saving.
It doesn't run out of steam so much as it fits a particular scope better than others. We can bend over backwards to make it fit other scopes, like another commenter points out: FOMO is a headache, and social media could be called the aspirin.
I personally find that metaphors work best as mnemonics. Unless you're in health sciences, products are neither vitamins, nor painkillers. If this metaphor helps us remember to distinguish products that solve urgent and existing problems from products that are investments providing an ROI over time... Good enough for me.
Now about fear and greed. I agree with you. In fact, I do so because when I was in sales and marketing, I read a book that said that the four motivations that mattered were fear, greed, exclusivity, and belonging. The first two are literally what you just mentioned.
That particular simplification doesn't fit every situation any more than Maslow's Hierarchy does, but it's another surprisingly useful lens to use when looking at value propositions.
At the end of the day, all these rules of thumb and metaphors are tools: if you find one that's useful, add it to your toolbox and figure out when it works and when it doesn't. The more of these you have, the more ways you have of analyzing a situation and coming up with a rough model for how it works.
So I agree with you: Fear and greed are the big sellers in B2B.
The original Facebook was a painkiller the same way the Oxycodone you crush up on a table and insufflate is a painkiller. The metaphor works amusingly well, actually.
The iPhone was (and is) a 'status symbol' and 'fashion accessory' which happened to be way better than the clunky, expensive mobile phones with poor UIs that came before (aside from the Blackberry, which was a corporate status symbol, work/government focused, not for the average consumer).
What actually happens with vitamins is people love taking them (because they’re colorful and some of them are food preservatives) but there’s like no evidence they have health benefits.
I don't know, I was surprised to learn that my vitamin D was quite low, and I get a fair amount of sunlight each week, probably more than a lot of office workers.
Now that I take supplements though my levels have been fine. From what I have read, quite a few people fall into a similar bucket.
Honestly no, but I believe there is some research showing potential long term health benefits of avoiding low vitamin D. It's totally possible I'm wasting my money, but I'm happy with the potential benefit/cost ratio. I would gladly pay the price of some vitamins for even a small chance of avoiding significant health problems.
>It doesn't replace getting real sunlight though. Or if you're an Inuit, eating polar bear livers.
Not that I expect HN readers to likely end up in a situation where they have to decide whether to eat a polar bear liver, but that is definitely a part of the polar bear that no one should eat:
>...A polar bear’s liver contains an extremely high concentration of vitamin A. This is due to their vitamin A rich diet of fish and seals. The Eskimos have long been wary of eating the polar bear for this reason, but it’s something the early Arctic explorers found out the hard way.
>Ingesting the liver can cause vitamin A poisoning known as acute hypervitaminosis A. This results in vomiting, hair loss, bone damage and even death. So although actually capturing a polar bear may seem life threatening, it turns out that eating its liver is just as deadly.
Oh, I was thinking of animal fat in general. They have some minor genetic adaptations to get more vitamin D from it (since there's not much sunlight) and after moving away from the traditional high-fat diets now lack it as much as anyone else.
Cutting costs is always a marginal thing, because businesses tend to value growth. Oversimplification: if you have a 50% margin business, the value of one more dollar of revenue is $0.50. If you cut costs and change the margin to 55%, then you've added only $0.05 of profit on that additional dollar.
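Spelled out as a tiny sketch (same oversimplification as above, illustrative numbers only):

    # The value of one extra dollar of revenue under the old and new margins.
    margin_before = 0.50
    margin_after = 0.55          # after some cost cutting
    extra_revenue = 1.00         # "one more dollar of revenue"

    profit_before = extra_revenue * margin_before
    profit_after = extra_revenue * margin_after
    print(f"{profit_before:.2f} of profit at the old margin")              # 0.50
    print(f"{profit_after:.2f} of profit at the new margin")               # 0.55
    print(f"{profit_after - profit_before:.2f} more on that same dollar")  # 0.05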
Now, a sane person will look at the improvements to margin across the whole business and still want to make those improvements because in aggregate, they add up, BUT, you cannot improve margin forever as a strategy. Eventually, hard limits come up and the incremental gains shrink and shrink. At that point, growth dominates.
Most mature businesses need revenue growth much more than they need marginal internal gains, especially because as businesses get bigger, marginal gains tend to apply to more limited segments of the business. E.g. improving one product is marginal and applies to only the sales associated with that product.
I think the claim that data science is about moving the bottom line is right, but I think the other way of thinking about this is that Project/Consulting is probably a more relevant way for companies to buy these skills than Salary. Many companies can see the value in an incremental move in the bottom line, but most companies don't have a sufficiently large problem space to worry about paying a continuous cost to focus on this.
I've seen a lot of big companies say that they need these skills, but also believe they can't attract talent because they wouldn't be able to keep a data scientist busy.
I've been a part of this argument before. I have another, additional perspective on why growth is more important than cost-cutting in many cases. If there are costs to be cut, you can cut them today, you can cut them tomorrow, they're right there and eventually, you can hire someone/buy something to cut those costs.
But growth is a tricky thing. If you're in a land grab market and you cut your costs at the expense of growth, you may find that you lost your chance to grow, because the market is now dominated by other people.
For people with this mentality, they expect in the long term to cut costs, but only after growth has slowed for reasons out of their control, e.g. the market is stabilizing and has already chosen the #1 big gorilla, the #2 little gorilla, and the #3 through #100 small monkeys picking up scraps.
And if you cut costs in a (prospective or current) operating area from 120% of revenue to 90% of revenue, you've opened up an entire new operating area to profitably grow in.
Developing the technology to do a thing profitably that previously could not be done profitably is the stuff unicorns are made of.
Absolutely! I hope my reply didn't imply that I thought there was no value in doing things more efficiently. There clearly is, and as consumers we love marginal gains in product quality, efficiency, and price.
I'll nitpick a bit to ask, though, how many times has a new entrant to a market gotten a process/business/tool/etc from 120% operating to 90% through marginal gains? I'd wager almost never. Process improvement can be marginal or stepwise/punctuated. I think most unicorns create punctuated change in ossified industries, but, I don't think any big companies are likely to hire a data scientist and through years of grinding through the margins achieve that 30% improvement.
Put differently, the decision to focus on revenue vs. profit is a decision that typically does not include the NPV of R&D investments. Those are uncertain and have some probabilistic value, but not so much in accounting terms.
> Eventually, hard limits come up and the incremental gains shrink and shrink. At that point, growth dominates.
The trick is understanding where the hard limits are. I've noticed that upper leadership tends to be pessimistic about these hard limits (they come quickly) and engineers on these teams tend to be optimistic (there's a lot of fat/cost to cut so the hard limits are quite far down.) Now naturally, the engineers on these teams have a vested interest in being optimistic, as their team charter is based around their work. But I've seen this conflict play out in many organizational situations and I'm not sure this interplay between upper leadership and engineering about these margins is illuminating for the business.
Not to nitpick, but a 5% improvement in margin does not apply only to the additional dollar. It applies to all the revenue... so the improvement could be massive.
That's what I meant when I said the decision would be weighted toward doing that, because in aggregate it pays off; but the payoff for those one-time things is not fully retroactive. E.g. for already-sold products it does nothing. For services, it can be much better!
Yeah, I've worked in infrastructure through most of my career wherever such a distinction is available (or when it opens up), and this is a common complaint. Product folks get the most visibility and get kudos and parties for product launches. Meanwhile, the deployment infrastructure staying up is just expected, even though the engineers responsible for it are working hard to keep it up. It affects team morale (infrastructure teams go unrecognized for their hard work) and also has material effects on promotions and compensation, as it's harder to justify business impact on these teams. I know folks that left infrastructure teams because of this dynamic.
Hired into a company. First day on the job I found that the entire infrastructure team had quit. It was in a failing state.
Told them flat out that they were most likely going out of business, but I'd give it a try.
A couple of times the owner tried to ask me when feature X would be delivered. Just told them no. Managers were wise enough to understand they were one pissed-off tech guy away from failure.
3 years of endless late nights to get the company back to a good spot with a rebuilt team, new infrastructure, proper documentation, the works.
Finally left after being passed over for promotion in favor of a guy who did nothing but promised the world. (He never delivered.)
Took me a couple years to recover from that job.
I don’t work late nights anymore. If company doesn’t care to invest in infra, I look elsewhere.
If you are high in EQ and vaguely likable, that can substitute for technical skill or hard work.
I had a colleague who was like a golden retriever and lacking in all talent. Everyone loved him. He never got anything important done and always worked on superficial shiny objects.
He was basically untouchable. Being optimistic and having no talent is a huge advantage.
The hard working workhorse industrious person always griping about how broken everything is: Can’t wait to get rid of you.
Don’t do great work for morons.
The collective result of these brutal lessons for me has been to become ultra-cautious about where and who I work for, and to try to do a much better job of reading the room and analyzing people.
If you are an Asperger's person, this is super hard. I now do my best to get multiple second opinions about the situation, because I learned my personal judgement and evaluation were nearly always wrong.
It's tricky, because there's genuine uncertainty about whether you have prevented a fire, or just wasted some time and maybe added some overhead. Even people who understand a system deeply can have reasonable disagreements about whether a preventative measure is worthwhile. Executives whose only interaction with the system is feeding it money have almost no chance of figuring it out in the face of any amount of conflicting info. And of course a mixture of natural human optimism, aka blithe disregard of danger, and having their salary depend on believing there are easy things to cut, makes it quite difficult for them to believe in any particular instance of a fire prevented.
I hope it's clear that I don't mean to excuse them for giving up. It's hugely destructive both for decision makers and everyone around them. I just want to show that the problem is substantially harder than "just reward preventing fires already".
I understand the top-line bottom-line divide, but I am not fully convinced if the top-line projects are any safer. Wouldn't another reasonable business strategy be to get rid of all new projects, and only focus on operations-as-is during times of economic uncertainty?
That's exactly what weak management does. Family management is especially prone to this IME. Cut new investment, cut cost of inputs, labour, quality control.
That works as long as you have weak competitors (or a moat) and nothing terrible happens, like high defects. Essentially you're coasting on prior investment. But as soon as something changes in the market you're falling behind.
What I've often observed is that new low-cost competitors introduce features which are often reserved for high-end devices/products due to market segmentation. The dominant player refuses to adapt and hence they lose all their low-end market share, the volume of which is necessary to make the whole thing work. Meanwhile, new customers start with the low-cost ecosystem.
I've seen this happen with e.g. Agilent, or SaaS companies who charge 10x for something that costs little, like SAML/AD auth.
Imagine if NVIDIA had charged for CUDA or considered it a distraction from selling graphics cards. They wouldn't own the HPC/ML space if they had done that.
That would be an extreme action. You do still need to be working with the future in mind. Anything that looks promising to revenue growth in the nearish future should probably continue to be invested in. You may ask those teams to become more scrappy and figure out how to achieve their goals with minimal new investment, especially if the new revenue streams are still a few quarters from coming online.
A very simple question that I've had to ask is "what likely happens if we cut this group?" then "what's the 'likely' worst case if we cut this group?"
The problem with Eric's group, and most Data Science teams, is that the company continues to move along without them. There is some long-term cost, but there are likely teams where there are severe short-term ramifications if they are cut. E.g., imagine if Windows cut their servicing team (snarkiness aside).
It's a failure of the data science team's management that they didn't make themselves a front-line capability. It is easy for OR (Operations Research) to explain its business value; any DS team that only stays at the tail end of building capability is liable to be cut (or underinvested in).
For DS it might mean being more on the market research / customer requirements / subscriber churn side, instead of being on the back end of services improvement / risk reduction. Be the thing that customers are asking about, that brings new customers.
I think this is an insightful assessment. Not everyone in a company can be top line. But I also think there's a lot more opportunity in using statistics/ml/data science in the top line than most companies practice.
>But I also think there's a lot more opportunity in using statistics/ml/data science in the top line than most companies practice.
I consider myself a fairly honest Data Scientist, in the sense that I like it when I can map what I'm doing to the value it delivers. I know some other great people I've worked with who are like this as well.
This is anecdotal, but all of us have hated working with many top line people because there's some really fuzzy mapping from goal to value (since value is realized in the long term), and some of the people are champion bullshitters. I don't need to explain sales people. But marketing, corporate strategy, and even upper product management - they drove us crazy because their standard of being data driven was absolutely not consistent with how we thought about things at all. All of it was because the mapping from project to revenue was over years, not quarters. And it was all projections.
Compare this to bottom line people, where the mapping from project to cost savings is on a shorter time frame. The types of personalities this attracts are different.
Maybe the growth hacking stuff at software companies is different and you can focus on revenue growth and still connect what you are doing to that. I've never worked in that role so I don't know.
This is the real problem. Visibility is to be abhorred at the top levels because visibility brings accountability. How many Dilbert comics are there out there with the punchline being "I don't care what the real numbers are, these are what I want the numbers to be" from the PHB?
There is a large swathe of middle and upper management that gets by due to continually making sure their actual impact is never measured and they are only a "force multiplier." Not that you should do away with middle management, but there are many in middle management who could be done away with, with very marginal loss.
OMG, the 'Technology Foresight' group, the 'Process Improvement Team'. Cross-functional synergy!
We all know what the problems are and where investment is needed, but management pretends that they don't know so they can have An Initiative to discover it, but not really address it (because e.g. the problem is one they caused with previous poor management).
Yeah, that's unfortunately especially true for data science and data engineering teams in companies where ML or data are not the core business but a nice-to-have. They are usually the first ones from engineering to be axed in times of layoffs.
Even for companies that have ML and/or Data in the core business. I think few would argue Meta in this specific layoff example doesn't have data as a core business.
(And those few are probably the ones drinking the "metaverse Kool-Aid" and thinking the pivot away from data silos is already complete: some sort of VR-scape where data somehow doesn't matter or doesn't exist, and which Meta still hasn't actually convinced consumers to buy or figured out how to build. They finally figured out "legs", so pivot complete, I guess?)
Cutting costs while bringing in no revenue shows up as a Cost Center on any financial report. Revenue, though, shows up under a Revenue Center. Hence these decisions, which are sometimes illogical. Sad but true :)
This is one of the reasons why I think making the workplace democratic is a good idea. The workers have a better idea of what is important than the management.
Also cost savings has a hard, well known upper bound but revenue growth is speculative with many opportunities for pleasant fantasy. Business leaders love the idea of being the visionary who takes a big gamble and makes it work.
Facebook is an example of where that breaks down: there isn’t an easy way to grow that much larger so they would likely see greater return from cost savings than they are likely to make from VR, but after a couple decades of thinking of themselves as this incredibly innovative tech company it’s hard to accept that they’re stable as an ad company.
There's also cost savings that are numbers shuffling on a spreadsheet and then there's cost savings that are actually less money leaving the corporate accounts.
This is really sad to hear. Probabilistic programming languages are IMO one of the coolest things ever: if you have an idea about how your data could be plausibly generated given some massive amount of hidden state and inputs, and an arbitrarily complex rendering function, you just write the rendering function and it determines probability distributions over the state variables that most likely map your inputs to your output.
For instance, say you want to be able to vectorize logos, e.g. find the SVG representation of a raster image. If you wanted to link a text model of the characters that make up SVG files to their raster representation via a modern deep learning system, you'd need a heck of a lot of data and training time. But if you could instead just write a (subset of a) SVG parser and renderer as simply as you'd write it in any other programming language, but where the compiler instead creates a chain of conditional probability distributions that can be traversed with gradient descent, you can reach a highly reliable predictive model with significantly less training time and data.
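For readers who haven't used a probabilistic programming language, here is a minimal sketch of that "write the renderer, get the inference" idea. None of this is the author's actual system: NumPyro is just one example PPL, and a toy soft-edged disc stands in for an SVG renderer, so every name and setting below is an illustrative assumption.

    # Minimal inverse-rendering sketch with NumPyro (hypothetical toy example).
    # Latent state: the position and radius of a disc. Rendering function: a
    # differentiable soft-edged disc on a 32x32 grid. Inference recovers the
    # latents that most plausibly explain an observed raster image.
    import jax
    import jax.numpy as jnp
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import SVI, Trace_ELBO
    from numpyro.infer.autoguide import AutoNormal
    from numpyro.optim import Adam

    def render_disc(cx, cy, r, size=32):
        """Toy differentiable 'renderer': a soft-edged disc on a size x size grid."""
        xs, ys = jnp.meshgrid(jnp.arange(size), jnp.arange(size))
        d = jnp.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        return jax.nn.sigmoid(r - d)   # ~1 inside the disc, ~0 outside

    def model(observed_image):
        # Priors over the hidden "vector graphics" state.
        cx = numpyro.sample("cx", dist.Uniform(0.0, 32.0))
        cy = numpyro.sample("cy", dist.Uniform(0.0, 32.0))
        r = numpyro.sample("r", dist.Uniform(1.0, 16.0))
        rendered = render_disc(cx, cy, r)
        # Pixel-wise Gaussian likelihood ties the rendering to the raster input.
        numpyro.sample("pixels", dist.Normal(rendered, 0.1), obs=observed_image)

    # Fake "observed" raster, produced by the same renderer so we know the answer.
    observed = render_disc(20.0, 12.0, 6.0)

    # Gradient-based variational inference over the latent state.
    guide = AutoNormal(model)
    svi = SVI(model, guide, Adam(0.05), Trace_ELBO())
    result = svi.run(jax.random.PRNGKey(0), 2000, observed)
    print(guide.median(result.params))   # should recover roughly cx~20, cy~12, r~6

The point of the sketch is only the shape of the workflow: you write the forward renderer as ordinary code, put priors over its inputs, and the PPL turns that into something you can fit with gradients, which is the contrast the parent comment is drawing with training a large deep model from raw data.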
This is where the massive cost savings come in. You get a forward-deployed engineer who knows this stuff and can dig into the compiler for features not yet implemented, they can work magic on any domain problem. I would have loved to have seen the spinoff they mentioned. Sigh.
> But if you could instead just write a (subset of a) SVG parser and renderer as simply as you'd write it in any other programming language, but where the compiler instead creates a chain of conditional probability distributions that can be traversed with gradient descent, you can reach a highly reliable predictive model with significantly less training time and data.
It's a balance between engineer time (headcount costs) and training time/costs (infra costs.) Usually engineer time is more valuable than training costs. Embedding engineers into teams and building cost models is one of those cases where probabilistic programming makes a lot more sense than a DL approach, but most situations favor the economics of a DL approach.
I’m probably not as talented as the author, but I can’t relate to this feeling of giving up because some work won’t be used. I have been working for ten years post-PhD and every single product I’ve ever worked on has been canned, sometimes very circuitously via acquisitions. My work is trade secret so I’ve never filed a patent, written a publication, nor given a talk. I have zero outwardly observable accomplishments. My resume and LinkedIn rolodex are the only testaments that I’ve done anything at all.
And yet I don’t see myself retiring once I have enough money in a few years
At some point you may start to wonder what your legacy on this planet is. At the very least: if you've made a good use of your limited time (and the scarce resource that is your labor). (Hard mode: if you've left the planet better off than you found it?)
The last few years have pushed a lot of people's "burn out" buttons and the self-reflection of "what have I accomplished with my time?" (and "have I contributed more to good or to evil in this world?") are very easy burn out spirals to experience, so a lot of people are asking these sorts of questions now. (Including just about every day lately for months on "Ask HN", in a million different unique individual ways, if you've not yet noticed.)
You sound like you are in a very fortunate place in your life that you aren't struggling with that right now. I envy you a little. I'm also glad for you and I hope it remains that way for you.
(I've spent too much time in the last few months worried that too much of my precious labor into finished projects and net revenue generation has been spent in service to the greater evil than the greater good of the world and have been struggling to figure out what that means or what I do with that cursed feeling.)
Ultimately everything will be destroyed anyway... "legacy" is an egotistical concept, if you think you're building anything but sandcastles you're deluding yourself.
Enjoy the process, admire your castle, but never forget the tide will have its day.
> but I can’t relate to this feeling of giving up because some work won’t be used.
People are fulfilled by different things. Some people are far more interested in their work having a meaningful (to them) impact on the "outside" world than in the specifics of the work.
I don't think Eric is giving up or retiring, just taking a much needed break. We should all look up from our keyboards from time to time to see the bigger picture.
> I need a good long corporate detox before I go looking again.
> I can’t relate to this feeling of giving up because some work won’t be used
Everyone is motivated by different things. My strongest motivation and satisfaction comes from implementing technology to make drastic and lasting positive change in the work done by other people. Agile development methodology with iterative development and meaningful change every couple weeks suits me very well.
What you described as your work would not be fulfilling to me.
I don't think it necessarily has to be about giving up, but it makes a lot of sense that if you already sort of hate your employer, them deciding to throw out a bunch of valuable work you did and lay you off is a good incentive to reconsider your current industry or at least take a break.
Personally I had an entire year worth of difficult sweng work thrown out due to politics, and it's impossible for that not to negatively impact my mood (or performance reviews)!
There's a big difference knowing ahead of time, also.
If I am hired to do trade secret work I'm already understanding that it will never be "known" even if the product or something associated with it DOES work - and many companies in the world will never be known anyway, let alone their products.
To Management, you are either in a Cost Center or a Profit Center. In all advertising-supported monstrosities, only adtech and sales are profit centers. Literally everything else is a cost center. Everyone at Facebook and Google is in a Cost Center if they are not directly involved in landing advertising accounts, presenting ads, or billing for ads.
Never look for work in a Cost Center.
Come hard times, Cost Centers are cut first. Not because it is good for the business, but because cutting payroll impresses Wall Street, inflating stock valuation. To Wall Street, layoffs mean you are serious.
I've only briefly worked closely with a billing team, but my impression was always that billing is seen less as a cost center, but more as a critical "without this team we get no money" team, which seems closer to a profit center. I'm not sure how far up the management team that perspective stays true, though.
Full quote: "Everyone at Facebook and Google not directly involved in landing advertising accounts, presenting ads, or billing for ads is in a Cost Center."
or "people don’t want to lug a computer with them to the beach or on a train to while away hours" (https://qz.com/593329/choice-quotes-from-a-1985-new-york-tim...). Looking around the beach and especially the train, everyone is lugging around a portable computer (phone)
I don't think today's VR/AR is "the thing", but I do think that, in the same way people thought Palm Pilots and Windows CE devices were a small niche market for ~15 years and it wasn't until the iPhone that the masses finally understood what a pocket device was good for, VR/AR will eventually reach a version/device that is more compelling than smartphones and similarly blows up.
It might take until they get to a small dot like in "Striking Vipers", but I'm glad at least one company, if not 5, is pushing forward.
Cutting that sentence off there paints a slightly different picture from the full sentence:
> News flash: no one wants to wear VR goggles to spend any time in a digital heaven where the role of God is played by Mark Zuckerberg and you can do anything you can imagine, including “work” and “shop”.
Plenty of people want to wear VR goggles, but it's hard to see how the metaverse specifically will take off.
Plenty of people spend time on facebook which is basically one step away from "digital heaven where the role of God is played by Mark Zuckerberg and you can do anything you can imagine, including “work” and “shop”."
Sounds just as much like what critics said about the Segway, which was supposed to revolutionize pedestrian movement, and which ... well, let's just say the critics turned out to be right.
Which is to say, I don't think you can tell much about the stickiness of a technology from what its proponents say for or against it. History is full of well-hyped failures, and not a few overperforming fringe ideas.
Let’s not forget the price difference between an e-scooter and a Segway is like 50x. Even more if you consider the new and popular rental model.
Maybe they are cheaper nowadays, but I remember in the early days they would cost almost as much as a car, whereas e-scooters today are in the price range where you can buy them as toys for your kids.
A segway that competed in the price range of bikes instead of cars would be much more widespread.
The full line from the article appears to be more along the lines of (and I am paraphrasing here) "no one wants Mark Zuckerberg's cynical and unimaginative interpretation of VR" and not a dismissal of the technology itself.
>This to me sounds similar to "I think there is a world market for maybe 5 computers". That was wrong.
>or "people don’t want to lug a computer with them to the beach or on a train to while away hours"
Exactly. It's also just like how "people don't want to wear 3D glasses in a theater". That was wrong too: every single movie today is in 3D.
Oh wait... No movies are in 3D now. They tried (for a 2nd time) and failed and gave up.
VR goggles might get some popularity for gaming, but the "metaverse" thing is just dumb.
AR, however, makes a lot of sense if they can make it convenient. I would love to have some cycling glasses connected by BT to my phone, which show a simple moving map display as I'm riding in the city so I know where to turn, instead of having to stop every so often and pull out my phone to look at where I am.