'Ray, this is a religion': How Bridgewater lost two top hires (nymag.com)
202 points by saeranv on Nov 7, 2023 | 159 comments



Related ongoing thread:

Bridgewater Had Believability Issues - https://news.ycombinator.com/item?id=38181360 - Nov 2023 (2 comments)


I worked with Ferrucci at BW. The internals of the employee rating system that preceded him were hilarious. In short:

Everyone is encouraged to rate anyone else in a variety of categories, as often as possible. Every rating is public. You know who is rating you, and how they did it. Those ratings are put together to get a score in every category, and can be seen by anyone. It's your Baseball Card.

The problem is that not everyone is equally 'credible' in their rating. If I am bad at underwater basketweaving, my opinions on the matter are useless. But if I become good, suddenly my opinion is very important. You can imagine how, as one accumulates ratings, the system becomes unstable: My high credibility makes someone else have bad credibility, which changes the rankings again. How many iterations do we run before we consider the results stable? Maybe there are two groups of people that massively disagree on a topic. One will have high credibility, and the other low, and that determines the final scores. Maybe the opinion of one other random employee just changes everyone else's scores massively.

So the first thing is that the way we know an iteration is good involves whether certain key people are rated highly or not, because any result that, say, says Ray is bad at critical thinking is obviously faulty. So ultimately, winners and losers on anything contentious are determined by fiat.

So then we have someone who is highly rated, and is 100% aware of who is rating them badly. Do you really think it's safe to do that? I don't. Therefore, if you don't have very significant clout, your rating of people should be simple: Go look at that person's baseball card, and rate something very similar in that category. Anything else is asking for trouble. You are supposed to be honest... but honestly, it's better to just agree with those that are credible.

So what you get is a system where, if you are smart, you mostly agree with the bosses... but not too much, so as to show that you are an independent thinker. But you'd better disagree in places that don't matter too much.

If there's anything surprising, it's that the people involved in the initiative stayed on board as long as they did, because it's clear that the stated goals and the actual goals are completely detached from each other. It's unsurprising that it's not a great place to work, although the pay is very good.


> My high credibility makes someone else have bad credibility, which changes the rankings again. How many iterations do we run before we consider the results stable?

Tangential to your point but this is actually a solved problem and you only have to run once. You might even recognize the solution [1] [2].

[1]: https://en.wikipedia.org/wiki/PageRank
[2]: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
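A minimal sketch of that "run once" computation, using a made-up toy rating matrix (nothing Bridgewater-specific): power iteration to the dominant eigenvector, which is the same trick PageRank uses.

    import numpy as np

    # ratings[i][j] = how highly rater i rates person j (made-up numbers)
    ratings = np.array([[0., 3., 1.],
                        [2., 0., 4.],
                        [5., 1., 0.]])
    # Normalize each rater's row so their total influence sums to 1
    weights = ratings / ratings.sum(axis=1, keepdims=True)

    scores = np.ones(len(ratings)) / len(ratings)    # start everyone equal
    for _ in range(100):
        new = scores @ weights                       # credibility flows along ratings
        new /= new.sum()
        if np.abs(new - scores).max() < 1e-9:        # stop once the change is tiny
            break
        scores = new
    print(scores)   # one stable set of credibility scores

The point is that, under mild conditions, the fixed point is unique, so there's no question of "which iteration do we trust": you iterate until convergence and get the same answer regardless of where you start.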


Why only once? When we did PageRank in class, we chose a precision factor and then iterated PageRank until the delta values for the matrices were less than the precision factor.


More generally, factor graph optimization / message passing, if you don't need the constraint that it must be an eigenvector/eigenvalue operation. The number of iterations is bounded by graph width but is locally constant, if memory serves.


I'm surprised by your comment. It's basic knowledge in UX research, or any social research, that social pressure influences responses when gathering feedback. That's why most feedback surveys are anonymous. Even in politics, there is an old theory called the Spiral of Silence about how people shut down their contrarian opinions in public.

It looks like Ray's fixation on "radical transparency" ignored the basics of human nature.


The people who practice/advocate radical transparency believe they are above human nature.


Doesn't look like anything I've ever called radical transparency.


Unfortunately, no one will trust that a work survey is anonymous. They sent it to an email address that includes my name, with a unique link. And it's often possible to just infer who it is.


I like my employer very much but still do not trust that their frequent employee surveys are anonymous. But since managers get grief for low response rates, I do the surveys, but skip every question.


Well, anonymous to whom?

Do I trust that big bosses, IT, and HR can't deanonymise my answer? Never. Obviously they can and will if they are interested.

Can the rank and file do the same? Probably not.



One of my absolute favorite episodes. Right up there with the Asscrack Bandit episode.

#SixSeasonsAndAMovie!


We have Six Seasons already, and the movie is apparently in production now (although the strike has delayed things a bit and may cause downstream trouble).

[1]: https://www.imdb.com/title/tt21958386/

BTW: Man, the Asscrack Bandit was such a memorable episode.


Another aspect of this kind of system is that it favours visible over invisible jobs. The people working their asses off in the technology sewer might literally keep everything running, but because they are:

A) underfunded and understaffed

B) have no time and energy for niceties

all those in more comfy positions might be inclined to vote against them, while they probably don't vote at all because they got no time for that shit.

As you said, if you let a pigeon vote on how good your chess moves were, you will get pigeon shit on a chessboard. A costly form of bureaucratic performance art, sans the art.


I worked at a place where publicly praising folks via company wide emails was encouraged. It was a painful experience. Almost immediately mutual admiration societies started up. It was like a competition between these small groups to send these nauseating emails every Friday. Mostly it was sales, HR, and the executive team sending these emails.

Most folks just didn't participate and it faded away pretty fast.


This is a classic example of: if you give people a metric (that is meant to be a condensation of the things that truly matter) and evaluate them based on it, they will just start optimizing for and gaming that metric instead of focusing on the things that truly matter.


I don't think there ever was a time this metric was good.


Incentives are a hell of a drug.

It would be so cool if someone could apply game theory for mortals that could digest these systems and show how to make them more "fair". It may be an impossible task but it seems worthy of exploration.


> It would be so cool if someone could apply game theory for mortals that could digest these systems and show how to make them more "fair". It may be an impossible task but it seems worthy of exploration.

I consider this to be a hard, but clearly not impossible, task. What I rather consider nigh impossible is getting a good definition of what we actually want to achieve by creating these systems. That is where, in my opinion, the true difficulties lie.

Just to give some shallow examples:

Do we want the purpose to be quite fixed (say, for decades) and create a system around it? Then it will be quite hard to change the ship's direction if, because of some event, the economic environment changes a lot.

On the other hand: do you want to make the system's purpose very flexible, so that the system can react to such circumstances? You can bet that this will be gamed towards the political whims of the people inside the system.

On yet another hand: if you want a glimpse of system ideas that did "work out", look at long-existing religions, or entrepreneurial dynasties with centuries of history. It should be obvious that these examples are far from a "democratic", "participative" spirit (which perhaps should be a lesson for anybody designing such systems).


My goal would be to "maximize prosperity for all". It could be better qualified and defined, but making sure honest hard-working people don't get screwed over and cheaters don't prosper.

I think the goal should remain the goal, as the circumstances will continue to evolve. That is, the process should serve the people rather than the other way around.

Sorry, I'm just an old naive idealist and would like to cling to hope that we can move society forward in a positive direction.


> Everyone is encouraged to rate anyone else in a variety of categories, as often as possible.

You'd think someone with as much experience as Ray Dalio would be vigilant against overfitting a model, and that's exactly what this is.

Just goes to show you that even the most successful are susceptible to ego-driven blind spots.



The people who survive in that culture, are they interesting, and do they go on to do anything interesting afterwards?


I have met, or value relationships with, plenty of STEM BW alums. They tend to speak about it as a funky place; it seemed to color their views about buy-side jobs, and they suggest that you should know what you're getting into with a job there.

Seems anecdotally that it’s used as a stepping stone into leadership or used to bank savings.

Several tech leaders I know are out of BW.

An SRE I knew there was at ~$600K TC and got a retention offer at ~$900K when they left to do a passion job.


If those scores can be seen by everybody it should be possible to mine the data and reverse engineer the algorithm.

A bad actor could use this to craft a "malicious review".


Apparently there was no algorithm.


Is it true that Ray has a bias against STEM people, and prefers elite liberal arts pedigrees?


Also, in Vanity Fair there's an extract from the new book about the time the future FBI director was snooping for employees' dirty laundry at Dalio's hedge fund and conducting year-long show trials:

“Inside James Comey’s Bizarre $7M Job as a Top Hedge Fund’s In-House Inquisitor” https://www.vanityfair.com/news/2023/11/james-comey-dalio-br...

It’s every bit as strange as it sounds.


Lawyers really like external validation from authority figures, and Dalio sounds like a douche whose ego needs others to feel they want his validation.

The section with the multiple investigations of the employee who didn't bring in the bagels really brings that home lol

Match made in heaven.


> It’s every bit as strange as it sounds.

Nope. Much, much stranger.


Absolutely wild.


Very funny. If it's true that all the talk of computers and statistical modelling at Bridgewater is bullshit, and it's just a vanilla hedge fund, then it reminds me of the story from The Man Who Solved the Market: Renaissance was founded as a quant fund, but none of the algorithms actually worked, so Simons just traded on hunches until D.E. Shaw started competing with them years later.


A lot of people think of this in too much of a binary fashion. Hunches vs Quant.

It's really more of a continuum.

Or you could broadly tranche into 4 categories - 1) pure hunches, 2) pure quant ML stuff (I don't know what this signal even means, but computer say number go up), 3) hunches that you validate with data (kind of like the scientific process), and 4) the reverse - quant ML/data that you validate with logic/hunches (computer says number go up, but why would this pattern exist, who is on other side of this, what are the tail risks, etc).


> A lot of people think of this in too much of a binary fashion. Hunches vs Quant. It's really more of a continuum.

People say the same thing about fundamental analysis versus technical. It's not a binary; everyone is both, and they only vary in how much they lean to one side.


It's the same, except that your average technical/fundamental analyst isn't sitting on 10Gb private fiber global data networks.

Hunches and quants tend to work when you have that much data and compute firepower.

The setup is most akin to what insurance and social media companies run. When you have that data and speed, you'll do well. The secret isn't the IP, it's having access to the infra that can make use of it.

Jaron Lanier's "Siren Servers" idea describes it well in generalities.


I read that book too, albeit a number of years ago when it first came out, and I don’t recall “none of the algorithms actually worked” being true.


Was just thinking the same thing. I remember that they went through a long period getting their algorithms to work, so maybe that's what OP means. I also remember that they found some algorithms that worked but they didn't know why, yet they kept running them anyway.

I definitely do not remember anything about Simons making trades based on 'hunches'.


I recall Simons saying that early on in his trading career with his partners, they traded on hunches as opposed to purely statistically. He said later on they did not, it was purely algorithmic later on.


Page 59: "Even as Simons and his colleagues were uncovering Soviet secrets, Simons was nurturing one of his own. Computing power was becoming more advanced, but securities firms were slow to embrace the new technology.... Simons decided to start a company to electronically trade and research stocks, a concept with the potential to revolutionize the industry."

Page 79: "In 1978, Simons left academia to start his own investment firm focusing on currency trading."

Page 99: "In the following days, Simons emerged from his funk, more determined than ever to build a high-tech trading system guided by algorithms, or step-by-step computer instructions, rather than human judgment. Until then, Simons and Baum had relied on crude trading models, as well as their own instincts, an approach that had left Simons in crisis."

Page 101: "The regulators somehow missed the humor in Simons's misadventure. They closed out his potato positions, costing Simons and his investors millions of dollars. Soon, he and Baum had lost confidence in their system. They could see the Piggy Basket's trades and were aware when it made and lost money, but Simons and Baum weren't sure why the model was making its trading decisions. [...] In 1980, Hullender quit..."

Page 102: "With Hullender gone and the Piggy Basket malfunctioning, Simons and Baum drifted from predictive mathematical models to a more traditional trading style."

Page 105: "Their traditional trading approach was going so well that, when the boutique next door closed, Simons rented the space... Simons came to see himself as a venture capitalist as much as a trader."

[Various things happen. Renaissance makes and then loses a bunch of money. It is now 1985]

Page 114: "Simons wondered if the technology was yet available to trade using mathematical models and preset algorithms, to avoid the emotional ups and downs that come with betting on markets with only intelligence and intuition."

Page 201: "By 1990, Simons had high hopes Frey and Kepler might find success with their stock trades. He was even more enthused about his own Medallion fund and its quantitative-trading strategies... Competition was building, however, with some rivals embracing similar trading strategies. Simons's biggest competition figured to come from David Shaw."

So my timeline was a little off: Renaissance was largely a traditional fund from 1978 until AxCom in 1985, with D.E. Shaw only in 1990.


Renaissance has a very effective legal dept. I'm sure their math stuff is also great, but I was struck a bit reading an old story about the other side of one of their lawsuits/investments regarding a small town. Part of the business is definitely squeezing out an extra 5-10% from distressed exotic assets bought at discounts.

The mythology focusing on math/quant I think is a feature, for them.

Apologies for not finding the original; my $SE appears to be dominated by a recent tax deal instead.


Patrick Boyle posted an interview on this topic with the author of the article just an hour ago - https://www.youtube.com/watch?v=bb5kngSrrsw


I wonder when they actually do their work. I mean, if you're constantly evaluating your colleagues and their work, when do you find the time to get your own work done? Or are you just making up numbers for these evaluations?


Hey, those evaluations won't evaluate themselves you know! That's important work!


I've heard a lot about Bridgewater and frankly don't understand how they hire anyone. If you're smart enough to get offers at top-tier funds surely you'd rank BW last because of this (very public) insanity.

Have they just resigned themselves to the uphill battle?


I interviewed at Bridgewater a few years ago. I ended up passing on the job but it was tempting.

I love the idea of strong feedback. I've always wanted work to feel like I'm lifting weights with my buddies: We constantly critique each other. We all want to get better and any advice or critique is welcome.

In general, withholding criticism is a sign that either:

1. The person needing advice is more concerned with their ego than actually getting better. They get mad about criticism or find it hurtful.

2. The person who is withholding advice either has nefarious purposes, has a low opinion of you (they assume you fall into category 1), or simply doesn't care about helping you.

Criticism is the respectful, professional thing to do. It assumes the best in people - that they're trying their best and want to get better.

You shouldn't be an asshole by the way. If you phrase your criticism in a way where someone is likely to get defensive then it's less likely to be effective. Empathy is an important skill in teaching.

After years spent working with companies in SF - where criticism is generally avoided at all costs - Bridgewater piqued my interest.

I tried to discern if Bridgewater shared my outlook but, ultimately, I just couldn't tell. I asked extremely pointed questions - uncomfortable things I wouldn't normally ask in an interview. But I figured they wanted radical candor, right?

I asked every interviewer things like "How important is making sure people actually hear this criticism?", or "Couldn't people just use this as an excuse to be a jerk?". All the answers were wishy-washy.

Plus, they force constant feedback. More feedback is good. But constant? It felt like it'd be, at best, distracting. And at worst, like it would lead to a lot of false criticism. If you _have_ to criticize, even if you don't have an opinion, then are you really making people better?

In the end I got the impression that they value criticism for the sake of criticism. And it generally seemed like giving criticism was more highly valued than ensuring people actually heard what you were saying (communication skills and empathy weren't emphasized). They'd lost the forest for the trees.


I had a chat with someone working at Bridgewater. After meetings they basically debrief with feedback. Even internal meetings. This person seemed pretty happy with it, but it seemed like a bit of overkill? I have 3-6 meetings a day, am I really going to hear feedback for each?

And the issue I have with feedback is that there is a big difference between feedback to improve something where the person is deficient (e.g. you should learn about net present value), and feedback that is just about the person's style (e.g. you ask too many questions).

People are different. There is more than one way to skin a cat. For example, some people develop a solution through an analytical approach - defining a goal, looking at all options, coming up with some objective way to rank them, then filtering down.

Other people do it in a very "messy" way (or what some perceive as messy). They immediately come up with a solution, then refine as they go based on what they learn and what other people think (e.g. questions, opinions, etc).

Is one way better than the other? No. But if someone kept giving me feedback on my particular style, I'd start to get annoyed. If it's not wrong, then there shouldn't be feedback. You actually want people with different styles; it can prevent groupthink, and personally, I learn a lot by working with people who approach problems differently.

I get the sense Bridgewater feedback is just used to shape behaviors in the model of some ideal form (i.e. the New Soviet Man), not feedback that improves people whatever their style is. But I might be wrong.


Bridgewater's practices sound like an episode of Black Mirror; indeed, the one where anyone can rate anyone else in real time.

If you want to get continuous feedback, all you need to do is to get married.


I mellowed out when I realized crystal energy chasers and systems peddled by MBAs were equally unsubstantiated and unfalsifiable, seeking answers to the same things: fulfillment and productivity


Yeah, but the MBAs can take away your food supply.


I love the pee story here.

A long time ago I was friends with a facilities person at Apple. She told of a peculiar case where she was called in to take care of a problem in one of the conference rooms. Apparently someone had not thought a meeting had gone well. That evening they had returned to the conference room and taken a dump on the table. Message received.

Maybe someone was trying to send Ray a message with a puddle of pee :)


That's pretty awful; making some random innocent cleaning person deal with literal shit does nothing to the person you're actually mad at.


Can't disagree with that. But it is amazing the kinds of things that go on in large corporations.


Wait until you hear about "dot collector". Any Bridgewater veterans care to comment?


There was a NYT article about Bridgewater and Dalio last week: https://www.nytimes.com/2023/11/01/business/how-does-the-wor...

If these articles are right, the picture I'm coming away with is that Dalio had some good strategies that gave him an edge decades ago but now his competitors have caught up and pulled ahead. Apparently Bridgewater no longer has any quantitative edge and it's really just Dalio and a few close advisors making calls from the gut. Plus all this weird nonsense about "principles".


Same deal with Bill Gross (“the bond king”). Macro tailwind, a few good compounding successes, and riding that draft for the remainder.

They don’t retire, they just graduate to “thought leaders” because what else are you going to do with that much free time and ego. The money no longer matters, all that is left is to chase status.

https://www.google.com/search?q=bill+gross+lucky

https://www.cnbc.com/video/2013/04/03/great-investors-more-l...

https://www.reuters.com/article/us-janus-henderson-billgross...

https://www.sec.gov/news/press-release/2016-252 (ETF pricing violations mentioned in the book)

https://us.macmillan.com/books/9781250120854/thebondking

(highly recommend the book "The Bond King" for deep background on Gross' professional career and PIMCO)


Yeah that's the thing with investing. Markets change, trades get crowded, data/tech gets commoditized, and strategies stop working. If you do not have a process and evolve, you are roadkill.

So some guys who are early, big and right can attract a halo that long outlives its reality.

To build some piece of tech you need to have some sort of process, and it is probably replicable to building another piece of tech. It is entirely possible to just get lucky (right place, right time) and make a killing investing for a good run before your lack of process catches up with you.

Half the "Big Short" guys fall under the same category as Dalio & Gross.

But hey, if you make your number and then can graduate into "thought leadership", it's not such a bad gig either.


Buffett seems to do quite well for himself from what we often hear.



Not bad, that seems to be exactly what the parent post says.

Guess it's all about that smart beta :p /jk Investment rationale: "Just follow the herd, don't be a midwit."


Not all strategies stop working. See “The Super-investors of Graham-and-Doddsville”

https://gromnitsky.users.sourceforge.net/lit/buffet1984/


Dalio is a perfect example of just how much finance has evolved over the last 30-40 years. Risk parity today is a turbo vanilla strategy that every systematic shop can offer and that a master's student in financial engineering can probably build, and yet in like 1990 this was the forefront of systematic investing and got you paid. Bridgewater today is still riding on the fact that they "invented" risk parity.


Curious if there are any materials to do some autodidactic learning similar to what a financial engineering master's would get you.

There are a lot of oft recommended books whenever the topic comes up, but they always border on too theoretical or outright "pop."

I would imagine simply looking at a syllabus from something like Baruch and going through the recommended readings would be a starting point.


I mean, if you have the MFE prerequisites then sure. The prerequisite for modern financial engineering is having a good handle on probability theory, and in particular stochastic calculus.

It would also depend on the program. Baruch is more for quant strats and wants to send people to option market makers/funds or to banks to sell derivatives. Something like Carnegie Mellon's MSCF (Computational Finance) is more oriented toward sending people to Bridgewater or AQR, which run systematic strats (risk parity, factor, alternative risk premia, portable alpha, etc.).


Ugh, I had a recruiter approach me a couple of years ago to work for some company that had a "radical openness" culture or something like that. The idea was supposedly that everyone's ideas were valid and hierarchy didn't matter and that everyone should challenge anything no matter who they were in the organisation.

The way they sold it sounded a bit culty to me, and the potential for abusive behaviour seemed high. Didn't make me want to work for them.


> The idea was supposedly that everyone's ideas were valid and hierarchy didn't matter and that everyone should challenge anything no matter who they were in the organisation.

On the engineering side, ideally this is how things are done anyway.

I've never had a problem digging into the ideas presented by engineers more senior than me, and I've always encouraged junior engineers to question any ideas I present.

Everyone, independent of seniority, is blind to giant gaping flaws in their own ideas. People also don't know what they don't know. If I have a gap in my knowledge, any plans I come up with are not going to take what I don't know into consideration if I don't know that I don't know something.


Just because you are in engineering does not mean politics, culture, psychology stop applying. It's really no different than any job at all. Engineers will often think they're in some kind of meritocracy, when in reality they're just as trapped by social constructions as anyone else.


This.

I've never worked at a place that could reasonably be broad-brushed as a meritocracy. Every company employs humans, and humans bring their humanness to their jobs.

Some places are closer to a real meritocracy than others, but ones that think they are tend to be further from it than ones that don't.


Not a meritocracy, no, but when you're dealing with an instrumented system you're better able to prove whether an idea is good or bad. Compared to e.g. a teacher, an engineer can prove the impact of a piece of work much faster. Engineers in practical fields also have the advantage of decades of safety culture and strong institutions that will protect them if they report a problem that will literally kill people.

I agree that an engineer is still trapped by social concerns, but not that they're as trapped as anyone else, because anyone who can easily demonstrate that they're right, even if they aren't listened to and even if they can only demonstrate it in some specific areas, has an advantage over someone who can't.


Furthermore: the MORE they ignore that reality, the more trapped they are.


Most good places I've worked are like this naturally. Good ideas can come from anywhere; it just takes respect.

What put me off about this company was how they sold this as the central idea of their company. It made me wonder what made them think this needed to be pushed so strongly. I tend to be suspicious of overriding ideologies.


There are two abuses in this system you have to watch out for:

- being presented under-cooked ideas so you do the thinking for them.

- too many ideas (which can lead to the first) with an inability to focus

There is a feedback loop for the first where people don't dive too deeply into their idea because they know they'll get a lot of criticism.

The second can be bad because the idea or its implementation is never polished.


It's hard to get right, and merely having a culture of openness is not sufficient - it is easy to abuse - intentionally or otherwise. A common theme I see is "X criticizes/scrutinizes Y more often than he does Z because of some bias X has against Y" (could be an implicit bias). Most teams I've been in are incapable of handling such behavior (i.e. recognizing that X is the problem and not Y).


> Most teams I've been in are incapable of handling such behavior (i.e. recognizing that X is the problem and not Y).

This is true. On the best team I was ever on, the team leader was respected by everyone and he also mediated conflicts. At the same time, he was incredibly open to new ideas and points of view, and would gladly spend days thinking about what people said to him before coming to a conclusion.

On multiple occasions I made proposals that were in direct conflict with what he had proposed, and he took the time to think my ideas over and give thoughtful feedback, and also on many occasions we came to either compromise, or he changed his mind based on the technical evidence presented.

I guess what I'm saying is, if you have someone in the team that everyone looks up to, and who everyone respects both technically and personally, then that one single person can define the culture for the team.


Conflict will happen if it's a truly honest system. If there's no conflict, people are not telling the truth about things that cause conflict. Does it happen, and how does the system handle it?


When someone has made an honest mistake, exposing that is not a source of conflict. The conflict only arises when the "mistake" wasn't honest, but rather the person was being dishonest about their goals from the start.


Again, that's an ideal, and I promote that ideal and do my best to practice it. But if there is no conflict, there is certainly dishonesty. A social system that pretends there is none is a lie, and everyone knows it (but apparently can't say so). A social system that pretends there is no conflict is like a software development system that pretends there are no bugs.


> if there is no conflict, there is certainly dishonesty

Do you consider honest disagreement (whether about facts or about priorities) to be "conflict"? That is something that always happens, but secret nonalignment doesn't always have to happen.


I mean anger, people feeling threatened/unsafe, etc. It doesn't always happen, it can be reduced, but you're kidding yourself if it isn't happening. Again, like getting zero bug reports for your software - something is wrong.


Will it happen? Sure. Can you reduce it to the level where it doesn't significantly deter people from being honest about issues? IME yes, though perhaps not in every kind of organisation.


Yes, agreed, though I don't think the solution is only reducing frequency, but managing it effectively when it does happen.


> The idea was supposedly that everyone's ideas were valid and hierarchy didn't matter and that everyone should challenge anything no matter who they were in the organisation.

Treating all ideas as valid is fine if that means all ideas get evaluated. For hierarchy, you need somebody who can be the decision maker when a consensus cannot be reached; otherwise the most stubborn, opinionated people become the de facto decision makers. I experienced that situation before, and it resulted in some very bad implementations and massive tech debt, and the people who stuck us with those bad decisions got to avoid responsibility and leave other people cleaning up the mess they created.


Could it be "radical candor?" [1]

[1] https://www.radicalcandor.com/



I honestly can't remember, it's possible.

It sounded like the kind of idea that would either be awesome if done well, or sheer hell if people did what people often do with otherwise good ideas...


When I was looking for work back in 2016 there was a company who advertised their way of working as "infrared", if I remember correctly. It was all about total openness and autonomy. For example, they asked one employee to decide how many hours everyone should work per day, and they decided on something, which was then applied company-wide. This was apparently "democracy in action". Quite odd. I think they've since dropped that whole thing.


I would have tried to challenge it during the interview, so that if they hired me I'd know they were legitimately pluralistic.


Radical Openness is definitely Bridgewater


That's a completely different thing from constantly rating each other in public with more senior people's ratings being given more weight.


Valve?


> The truth is that at Bridgewater, some are more equal than others. And Ray Dalio is the most equal one of all.

Major Animal Farm vibes.


It's an almost verbatim quote from the book; of course it has "Animal Farm" "vibes".


I think this is just human nature. Dalio just systematized it and put it out in the open. We are primates.


The fact that Ferrucci didn’t immediately see through this insanity makes me question his supposed genius. Like the job was never to make some ultra-intelligent system, the job was to stroke the CEO’s ego by saying “Look, hard math proves that you’re the smartest guy in the room.”




Who is doing the trading? It seems like making money is almost an afterthought and other traders can't seem to find these Bridgewater whale trades.

Is it all a front or a Madoff situation?


The SEC investigated them a long time ago and determined that their trades were legitimate but just done in convoluted ways. There is a recent NYT piece that covers it. There's a new book about Bridgewater coming out so that's why there are so many articles about them recently.


There's no reason BW needs to be a whale. No matter how big these funds are, liquid global markets are gigantic. They are in equities, rates, fx, all sorts of derivatives thereof.

They are generally making millions and millions of smaller trades. So sure they have $100B AUM and maybe 5x levered to $500B. But across a million positions that's .. an average $500K position size.

Few if any of these funds ever have like $10B sitting in a single trade.

And even if they did, consider the result of a quick google search "Average daily trading volume of U.S. treasury securities 2000-2018. In 2018, the average total volume of treasury securities traded per day was over 547 billion U.S. dollars."


With enough money and power, a crazy asshole can get people to work towards any premise, whether they believe in it or not.


We want meritocracy, and in theory we reward merit with capital and power. But merit can change quickly - and power tends to corrupt and reduce merit - while power and capital remain.

From the article's description, CEOs like to flaunt their corruption and lack of merit these days.


I had an ex-Bridgewater colleague a couple of jobs ago, who brought "Radical Transparency" with him.

One on one he was a nice enough guy, and he was super smart. In meetings or as a manager he was a nightmare and left a dumpster fire of problems in his wake including driving off nearly his whole team.


I feel like 75% of the CEOs I've met have essentially been this high on their own supply; they've just been the tyrants of smaller kingdoms than Bridgewater.


Most of these hedge fund CEOs are psychopaths who would be completely unfit for society if it weren't for the wealth they have. A lot of people used to put up with that, but since COVID a lot has changed. That's why they hate remote work: it doesn't feed their narcissistic egos.


This may have been just a stupid delusion by Dalio but some day one of these sociopaths is gonna get their hands on the wrong AI model and end us all.

The necessary precondition, private individuals sidestepping democratic control via their unchecked, powerful organizations, is already established. Gotta hope for the whistleblowers...


I think you're seeing it wrong. When CEOs no longer need scientists and engineers, when they can speak their will into existence directly, they will immediately destroy themselves. This rating system, had the CEO gotten his way, would not have made Bridgewater thrive. That stupid ideas are often impossible to accomplish is a huge barrier to stupid leadership screwing themselves sideways.

Anyways, that's the optimistic take :P


Sandbagging can just as easily be used for good or for evil.

"How goes our plans to <do something morally reprehensible>?"

Oh, we are still having trouble with the requirements. It's all blocked up in committee. Not much progress to report.


Unfortunately I think this is sort of the optimistic take. They will destroy themselves, but also take down a lot of other systems with them, just look at Musk. These things have a very long tail end for all of the accumulated problems to turn from a ripple to a tide.


That's assuming the effects of that stay contained within the respective organization which is not something I would bet on.

Chances are a CEO type will be the first to get their hands on AGI or something close enough.


This is already happening, and has nothing to do with LLMs. Look at the other thread about banks closing customer accounts for no reason.


> His main interest, as he wrote in his best-selling autobiography-cum–self-help book, Principles: Life and Work, is to lead others toward “meaningful lives” and “meaningful relationships.”

> Staffers were given iPads and directed to rank one another on a one-to-ten scale on their performance dozens of times per day in categories derived from Dalio’s Principles

> The goal of all the data, Dalio would say, was to sort everyone at Bridgewater on a single scale.

———

Truly they’ve stumbled on the heart of human connection and meaning.


If it comes out that Dalio was secretly sleeping with five different people at the company, at the same time, let's just say that my response would be, "that tracks."


His book had many insane ideas for running a company. I assume the power of PR is why it was well received.


Or because Bridgewater is inarguably in the top 0.000001% of most successful commercial endeavors ever?

A book being "well-received" doesn't generally mean "everyone should follow all of the ideas in here."


Yeah, that's true. He's clearly been successful and the biography parts of the book were interesting.


"Dalio would often review the results in his office ... chewing on Scotch tape, as was his habit when he was concentrating..."

This article really buried the lede here


Does he have a special dispenser at his desk? Does he have one strip he chews on throughout the day, or does he get a new one after every meeting? Is it folded up, balled up?

So many questions. So, so many.


It's too weird to make up.


Maybe it is product placement (advertising). If a popular personality has an interview, they can reach out to an ad agency (or vice versa) and be asked if they want to do something weird during the interview with a brand name product that would get people talking.


Yeah the rest of it can be chalked up as narcissistic business personality, but this? This is psychotic. A billionaire chewing on scotch tape.


You can dislike billionaires without making questionable ad-hoc diagnoses disconnected from actual knowledge about the disorder. Chewing scotch tape is many things, "out of the usual" certainly being one of them, but it's nowhere close to diagnostic criteria for psychosis.


I stopped listening to Ray Dalio when he trotted out the meme about out of control crime in big cities at the beginning of one interview or another. I don't know if he's lazy, or out of touch, but I don't like it when people are fast and loose with extremely verifiable information. Like, maybe you have expert intuition about the future of inflation, that's great; but if you're going to talk about extremely verifiable information, like per-capita crime rates in American cities, then you should have your facts straight.


> if you're going to talk about extremely verifiable information, like per-capita crime rates in American cities, then you should have your facts straight.

To be fair, if his claim is "out of control crime" and you interpret it as "out of control crime rates", you're talking about different things. There's enough anecdotal evidence to suggest that in some big cities, the occurrence of certain kinds of crime (like cars being broken into) is up, yet reporting is down, leading to a lower crime rate that doesn't reflect the street-level reality accurately.

Of course, this isn't a problem limited to this domain.


> To be fair, if his claim is "out of control crime" and you interpret it as "out of control crime rates", you're talking about different things. There's enough anecdotal evidence to suggest that in some big cities, the occurrence of certain kinds of crime (like cars being broken into) is up, yet reporting is down, leading to a lower crime rate that doesn't reflect the street-level reality accurately.

That's too easy to just type, and therefore means nothing. If you think you have an argument, make a claim about what the rates are and show evidence for it. Show evidence that published rates are unusually low.

In the Internet, post-truth era, anecdotal evidence - if you even have that - is less reliable than ever; we swim in oceans of misinformation and disinformation. People post their takes, repeat others, etc. endlessly.

My anecdotal evidence is based on much time spent in cities that are depicted (by right-wing media, let's be honest) as crime-ridden, and IME they are almost wonderful; there almost couldn't be a better time to be in them. Crime is low, cities are clean, streets are busy with people and energy, and people go about their days without much of a care. Is that true in every neighborhood? No; but that's not a realistic standard.


[flagged]


I haven't heard about this, do you have a source on what crime-generating activities he was involved in?


What are you talking about? Do you know that person?


Would you say that believability is very important to you?


I think that supporting an argument with accepted facts is a good way to build trust.


"Scientology, but for hedge funds!"


Man, this feels like something out of Severance. Creating an AI model of Dalio to properly rank employees according to his tenets? What the hell?


Hm. When is Severance coming back? Such a great show.

I assume Dalio has been having a field day with ChatGPT/LLMs.


It's sad to see so many person-hours wasted spinning their wheels to please a narcissist.


It could be argued that none of what was done at Bridgewater ever created actual value anyway.


Most of the economy is people spinning their wheels to please narcissists. It's the foundation upon which modern capitalism is built.


[flagged]


> It's always funny when some rag tries to say "X declined to comment". No one is going to comment on some random crap you publish on what's a glorified blog

Strong disagree. I like when people writing articles at least give the other side a chance to comment. It makes me feel the piece is less obviously biased. Of course it can be abused by giving the other side just a few hours to comment (so basically no real chance), but it's still better than nothing.


Haha, yes, of course. It's just that it's natural to decline when some guy writing a glorified blog post anthology fills it with "declined to comment" when it's obvious everyone would decline to comment. He's not Walter Isaacson. He's just a low-level reporter at the NYT who mostly publishes GPT-4 level articles on what the current stock price of a bank is.


> No one is going to comment on some random crap you publish on what's a glorified blog.

I have no idea why the article doesn't lead with this, but it's adapted from a book:

> Dalio declined to be interviewed for the book from which this article is adapted.


Fair enough. I suppose if one were a rando NYT reporter one would think most people would be thrilled to be interviewed to comment on the book one is writing with no access to the principals.


What defines a 'glorified blog'? What research did the author do?


It's interesting. I have seen rating systems work – in the military. They're harsh to some, but in some situations you have to filter out B and C players to keep the A players alive. I do think transparent systems with 360° peer reviews are better than many others. I wonder what else is out there that might work.


We’re talking about OERs and NCOERs right? Those pencil-whipped, boilerplate evals where if you have a 300 PT score and don’t get a DUI/beat your spouse, 1/1? The process that takes 24 months to kick someone out?

I don’t think you know what you’re talking about in this regard and/or didn’t serve, candidly.


Extreme lethality and Total Force focus; promote now.


I was a Green Beret for 15 years and have worked for DOD/IC for a subsequent five. The rating system I described was used throughout SFQC and on some ODAs and at multiple schools like SOTIC and SFAUC.

NCOERs/OERs aren't great, I agree. They're almost as bad and useless as command climate surveys.


The problem is that the majority of people are bad, so trusting the majority is a bad idea.


Bad in what sense? Surely that's not correct otherwise I imagine society would be significantly worse than it is today.


I think he meant "bad" as in incompetent. In the context of the top parent, you'll have many more B and C players than A players, so trusting the majority works against the A players.


Ah that makes more sense! I was thinking in moral terms


Ha, the good old bad vs evil.

You might want to read Friedrich Nietzsche, On the Genealogy of Morality, First Treatise.


You mention rating so maybe you're talking about enlisted only, which granted I don't know how those work.

But the up-or-out system used for officers in the US military is notorious for being primarily political after like O3 and historically almost comically incapable of rejecting unqualified but politically sophisticated/connected officers.


SNCOs have up-or-out too, for sure. I agree that the promotion system across the board is FUBAR. I'm talking about other systems, typically at the unit level, that can be used to supplement or even side-load the current 'official' rating system, which is a joke.


Oh I see, that makes more sense. I don't have any particular experience or insight, just from an area where military service is routine so I've heard a lot secondhand. Everyone I know in it or retired seems to think the promotion system is completely fucked though. They either fume because it worked against them or laugh because they could work it, but no one seems to think it's effective.


Prediction markets: have a system where people can make predictions in a semi-serious fashion in any field, record them with reasoning and probability numbers, and match them against outcomes later on. At least this will prove how deeply they know a field they are interested in. Tagline: Prediction is Intelligence.
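As a minimal sketch (hypothetical names and numbers), one way the recorded probabilities could be matched against outcomes later is a standard Brier score: lower is better, and confident wrong calls are punished hardest.

    # Brier score: squared distance between stated probability and what happened
    def brier(p, happened):
        outcome = 1.0 if happened else 0.0
        return (p - outcome) ** 2

    predictions = [
        ("alice", 0.9, True),   # "it will rain on Friday" -- it did
        ("bob",   0.7, False),  # "rates will be cut by June" -- they weren't
    ]
    for who, p, happened in predictions:
        print(who, round(brier(p, happened), 2))   # alice 0.01 (good), bob 0.49 (bad)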


How do you compare the results? Most of the time you can’t know the actual probability of events being predicted even after they happen, and so it’s apples and oranges.

If you predict that it will rain on Friday, and I predict an alien invasion before Christmas, and both actually happen… Does that mean I was smarter because my event was vastly more unlikely? How would you score these two successful predictions and what possible value could those scores have? Where was intelligence applied here?


It would at least show that both of you are good in your respective fields. Also, the predictions could be mild in nature; since everything is based on hunches, we want the guy who gets the best hunches.


How does the military rating system work?


I've seen multiple. The one I like is 'pinks and blues', where you're given two of each - a pink and a blue - and you write out why you think two dudes should get a blue (good) and two a pink (bad). If you start accruing a bunch of pinks, there's a problem.

It's very simple and probably wouldn't pass muster with an HR department, but it was effective at getting rid of turds.


Can you split the votes as you want among the candidates or does each have to get at least one review? How often do you rate? Do you only rate peers, or new hires? What about superiors?

I could not find any reference to this system. As a person below said, it recalls https://en.wikipedia.org/wiki/Limited_voting


It's a form of limited voting, with both approval and disapproval votes. In a unit, maybe there are 20-40 "candidates" and you are given four votes, two up and two down. This will probably identify a couple people receiving more than one vote. Limited voting has had interesting results in things like city council elections.
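A minimal tally sketch of that kind of limited vote, with hypothetical names (not any official system):

    from collections import Counter

    # Each ballot names two people for a blue (good) and two for a pink (bad)
    ballots = [
        {"blue": ["ann", "bo"], "pink": ["cy", "dee"]},
        {"blue": ["ann", "ed"], "pink": ["cy", "bo"]},
        {"blue": ["bo", "ed"],  "pink": ["cy", "dee"]},
    ]

    blues, pinks = Counter(), Counter()
    for ballot in ballots:
        blues.update(ballot["blue"])
        pinks.update(ballot["pink"])

    print("blues:", blues.most_common())
    print("pinks:", pinks.most_common())   # cy has three pinks -- a problem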


It sounds like it's not an automated rating system and doesn't rank people. It only identifies really good people and really bad people.



