Harvard concluded that a dishonesty expert committed misconduct (chronicle.com)
264 points by Tomte 7 months ago | 157 comments



There was an interesting two-part series on the Freakonomics podcast about academic fraud, and it covered this case.

It's all incredibly depressing. I really feel for all the junior researchers who end up wasting their time, and often derailing their careers because they followed a path based on other people's academic fraud.

https://freakonomics.com/podcast-tag/academic-fraud/


It seems like dishonesty in academia is not punished severely enough. Someone who is caught may face professional embarrassment and lose some privileges, but I think there should be tougher consequences. Hell, Ranga Dias still seems to be employed. These people are wasting government (our) money that could have been used by honest scientists to further our scientific knowledge of the world. It also wastes the time of other scientists who may try to build on the fraudulent research. To me, they are essentially committing fraud in a field where truth-seeking is paramount. I think a trial and prison time need to be part of the consequences for flagrant fraud.


A hardline approach is not going to win support from the people on the front lines, who are in the best position to spot and police this. Even the most honest researcher has a published result or three that they suspect is incorrect, and they will not trust you to litigate accurately against only the worst players, because they are smart cookies and fully understand that honesty is a severe liability when there is an authority out for blood.

No, the better approach here is to just shift the incentives. Start funding replication. Once we see labs and career paths that specialize in replication / knowledge consolidation, the whole system will shift for the better. Bibliometrics and hiring committees will start to pay attention and then exploratory researchers will start to pay attention and the system will start to work a little bit better.


> Start funding replication.

I’m not convinced this fixes anything. Even when a result is genuine, it’s very easy to fail to replicate it. We all know this from software development: it’s a lot easier to say “couldn’t reproduce” about a genuine bug than it is to track down the precise context in which the bug actually manifests. So if you get rewarded for failing to replicate a result, all the fraudsters will do that. If you get funded only when you actually replicate the result, the fraudsters will pretend to replicate the result.


> Even when a result is genuine, it’s very easy to fail to replicate it.

To the extent that is true, then it is itself evidence supporting the proposition that not-yet-replicated results should be regarded as provisional.


> We all know this from software development: it’s a lot easier to say “couldn’t reproduce” about a genuine bug than it is to track down the precise context in which the bug actually manifests.

If a study claims to prove something, it should be repeatedly provable or it's a) fraud or b) not proven solidly enough.

I think replication is a key component of functional research.


Figuring out why a result is reproducible by some, but not others, is probably where the scientific discovery lies, if there is one to be had.


This makes so much sense. Research is compounding, even for failures to reproduce.


> So if you get rewarded for failing to replicate a result, all the fraudsters will do that. If you get funded only when you actually replicate the result, the fraudsters will pretend to replicate the result.

So reward either? This seems pretty obvious.


It's much easier to fail to replicate a result than to actually try to replicate it.


Yes, but the original author is incentivized to attempt to show where the replicators got it wrong, so there will still be a push to correct the bad data from a false replication failure. With that said, I am not convinced it will be a panacea.

A bigger issue is that... three sigma is really a weak signal in a high-cardinality state space, which is basically everything above the physics of small numbers of elementary particles, and it is the particle physicists who go for higher thresholds (five sigma for a discovery). This is what is feeding the replication crisis: weak signals from very poorly sampled studies.
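
To put numbers on that, here is a minimal Python sketch (using the standard two-sided Gaussian tail; the loop over test counts is my own illustration) of why a 3-sigma bar yields spurious "discoveries" once a field runs enough tests:

    import math

    def two_sided_p(z: float) -> float:
        """Two-sided Gaussian tail probability for a z-score."""
        return math.erfc(z / math.sqrt(2))

    print(two_sided_p(3))  # ~2.7e-3: the common 3-sigma bar
    print(two_sided_p(5))  # ~5.7e-7: the particle-physics discovery bar

    # Chance of at least one spurious 3-sigma hit across N independent tests:
    for n in (1, 20, 100):
        print(n, 1 - (1 - two_sided_p(3)) ** n)  # ~0.003, ~0.05, ~0.24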

The meta issue is that we as a society need to start accepting that some things will take longer and require more investment to achieve results. Do fewer studies per unit grant, and use the three-sigma ones only to justify a real experiment/study, not as an acceptance criterion for "discovery!".

I'm not even going to touch politicization, since I really have no idea what to do about it without a cure worse than the disease.


> the better approach here is to just shift the incentives.

Yes!

> Start funding replication

No :( That won't solve the problem; it'd just change shape. Funding people who produce fraudulent or incompetent papers won't stop being a problem if you ask them to produce a slightly different kind of paper. You just get fraudulent replications, fraudulent claims of failure to replicate, fraudulent rebuttals to claims of failure to replicate, and so on.

The other thing this would do is sharply shift the distribution of papers towards those that are guaranteed to replicate, either because they prove things that are trivially true or because they prove things about simulated worlds that may or may not be connected to reality. Both categories of paper are already way too popular, nobody needs more.

Unfortunately, the incentives fix for this problem is so blindingly simple it's also difficult to bring up in polite company, because lots of people have an allergic reaction to the implications. Look at places where science is being done at scale without replication crises or widespread fraud, and notice what's different about the incentives. Then ensure science is always done with those incentives.


Why can we charge other professions with fraud but not researchers? I'm sure people in other professions might worry about being wrongfully prosecuted too, but that doesn't stop us. Even doctors get criminal charges when they deliberately do something wrong. You can be pretty sure that virtually every doctor has made honest mistakes, but they don't all prevent dishonest doctors from being brought to justice.


(IANAL) Fraud can often be pursued as a civil claim, in which case there needs to be an aggrieved party with credible evidence of damages, no inhibitions about making a splash, and the time and money to sue.

I suspect educational institutions don't want to be seen as organizations that sue their own researchers. Same with paper publishers. And same for grant issuers.

Downstream researchers? Do they have the time, the money, or the ability to show direct damages whose recompense would be worth all the effort?

Students affected perhaps could, but only if the effect was very direct and they had the resources. And desire to be known as someone who sues their professor.

The damage is usually so diffuse. There is no one party with all the reasons to sue.

I have no idea what the process would be for criminal prosecution, but the diffuse impact may be an inhibitory factor there too.


There are objective standards in other professions. The thing about research is, by definition, whatever you're doing is not part of an established profession yet.

Obviously that doesn't apply in settings like drug development, where standards do exist, as defined by the best currently available treatments. But if someone is working on something like psychological studies, where replicability is the exception rather than the rule, or on exotic tech where only one experimental facility might exist, or on substances or effects that exist only under weird conditions, it's not always that easy (or that safe) to accuse them of lying. Even when you're pretty sure they are.


I have a hunch that not everything you do as a researcher is novel. A part of it is, like you mention, new by definition.

But there's also the old and established parts, like statistics, parts of the experimental setup, methodology, reporting data accurately (or at all). This is plenty enough to have objective standards for.

A lot of fraud is not in making up experimental results, but instead misreporting the data and drawing unsupported conclusions.


Exactly! There are some obvious things too like: don't copy and paste tiny bits of an electrophoresis gel and put it into another image to make it look like it was the same result. Sylvain Lesné comes to mind here. Last I checked, this jackass still has a job, too


Agreed. There are plenty of stories in physics where researchers reported some effect that was later found to be due to an error in setting up the experiment. Are they supposed to face fraud charges because of this? Researchers will just quit and go work in industry or something.

And there are plenty of fields where you have different interpretations of the same data (see the entire field of economics, also plenty in physics and other fields). Should the people who espoused ether theory be sued for fraud? It'll be a huge mess because doing research is by definition doing something unprecedented.


If your contention is fraud then good news - we already have the laws and authorities required to pursue it. Nothing new required.


Or derailing their careers because only the very successful grad students and postdocs get grants and tenure, and evidently a good chunk of those elite spots get taken by people who publish dishonest research.


No one should be putting all their eggs in one scientific basket even if the basis ISN'T fraudulent. I see this mistake over and over.


I understand where you are coming from, but if you are starting out on a PhD and decide to do research that is branching off from work done by someone like Gino you could spend a long time chasing ghosts and it will be hard for you to turn round and say "I think this is actually BS".

Even once you are over that hump you will have quite a few years going from one short-term grant to another, with your ability to get funding dependent on your previous work. If that has been stymied because you were basing it on other people's dodgy research, it could take a long time to build up the kind of record where you get to diversify and gain any kind of academic security or freedom.


A friend of mine proved (for his PhD I think) that the thing his entire department was working on was based on bullshit. They weren't happy.

That said, putting all your eggs in one basket is often necessary to get anywhere with that basket of research.


What did they disprove? Or at least, what field of study?


I forgot the details, but he studied both mathematics and AI, so something in either of those fields.


Having done a PhD and said 'I think this is BS', it's not easy but possible.


Academic research (especially a PhD) is about going deep into one particular topic. You are fully dependent upon the giants on which you stand.


Yep. "Just stand on the shoulders of TWO giants" is muuuuch easier said than done, lol.


So the author of a bestselling book on why breaking the rules can be advantageous... broke the rules?

https://www.amazon.com/Rebel-Talent-Pays-Break-Rules/dp/0062...


Irony aside… doesn’t that validate the premise?


Well not at the moment, her career is pretty well off track now.


They say to write what you know, so they did!


I think this is a great reason why one should trash all these self-improvement books by grifters with "expensive" credentials (CEO, Ivy League person, etc.) and replace them with works of 19th-century American and Russian fiction.


For excellent ongoing coverage of Francesca Gino and other misconduct cases in academia, esp. behavioral science (such as Dan Ariely), see Pete Judo. Also covers replication failures, data hacking, and much more.

[YouTube: https://www.youtube.com/@PeteJudo1 , https://www.petejudo.com/]

Also: DataColada blog [https://datacolada.org/], who made public the misconduct reported by Gino's graduate student.


Thanks for the links. I watched some of his videos where he explained how DataColada did their forensic investigation into the data manipulation.

What amazes me is how simple the fraud was (or at least the cases reported by Pete!). They basically just opened an Excel file, started from the top, and changed some random numbers until they reached the effect they aimed for!!! Really? What about those that can do more sophisticated data manipulation?
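
For the curious: as I understand the DataColada write-ups, part of the evidence reportedly came from metadata inside the .xlsx files themselves. An .xlsx is just a zip archive, and xl/calcChain.xml records the order in which Excel last calculated formula cells, which can betray rows that were moved or retyped after data entry. A rough standard-library sketch of pulling out that ordering (the filename is hypothetical):

    import re
    import zipfile

    def calc_chain_cells(xlsx_path):
        """List formula-cell references in the order stored in calcChain.xml."""
        with zipfile.ZipFile(xlsx_path) as z:
            xml = z.read("xl/calcChain.xml").decode("utf-8")
        # Each <c r="B12" .../> entry names one cell; the sequence reflects
        # Excel's recorded calculation order, not the on-sheet row order.
        # (Ignores the sheet-index attribute for brevity.)
        return re.findall(r'<c r="([A-Z]+[0-9]+)"', xml)

    # Cells whose calculation order is wildly out of line with their sheet
    # position are candidates for having been moved after data entry.
    print(calc_chain_cells("study1.xlsx"))  # hypothetical file name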


> Really? What about those that can do more sophisticated data manipulation?

They win. They are rewarded with the power to control governments and whole populations, they are feted by the media, and they accumulate an army of staunch defenders who Believe The Science. When the true believers find their way onto juries or into the courts, they are able to destroy their opponents' lives (see the recent case of Mann vs Steyn, which will certainly be very encouraging and helpful for Gino in her quest to bankrupt her critics).



The sad part about the field is that in the end nobody really cares what science turned out to be fake… because nobody actually cares about the specific results that come out of psychology research.


Yup, that's the point this article makes:

I’m so sorry for psychology’s loss, whatever it is - https://www.experimental-history.com/p/im-so-sorry-for-psych...

This whole debacle matters a lot socially: careers ruined, reputations in tatters, lawsuits flying. But strangely, it doesn't seem to matter much scientifically. That is, our understanding of psychology remains unchanged ...

That might sound like a dunk on Gino and Ariely, or like a claim about how experimental psychology is wonderfully robust. It is, unfortunately, neither. It is actually a terrifying fact that you can reveal whole swaths of a scientific field to be fraudulent and it doesn't make a difference.

---

Also interestingly the author apparently studied under Dan Gilbert at Harvard, so he's an "insider". I remember >10 years ago seeing the "happiness science" from Gilbert go around:

https://blog.ted.com/ten-years-later-dan-gilbert-on-life-aft...

Although I actually think the core insight is a good one, and a memorable one -- people are unable to predict what makes them happy. They think they will be happy if they buy a new car to show off, but if you ask them afterward, that didn't really happen.

There was another one of these "TED memes" that turned out to be widely mocked / unreplicable:

When the Revolution Came for Amy Cuddy

As a young social psychologist, she played by the rules and won big: an influential study, a viral TED talk, a prestigious job at Harvard. Then, suddenly, the rules changed.

https://www.nytimes.com/2017/10/18/magazine/when-the-revolut...


Holy crap that article was a good read


That seems like a worse thing, it says something about the field. There are always going to be individuals who do things wrong, or wrong things. But ultimately, it really comes down to how the group behaves and the goals they seek.


Fact of the matter is, psychology is at this point more pseudo than science.

The biggest factor impacting the outcome of an experiment in psych is what the researcher conducting the study wants to be true. The whole subject is broken down in this article: https://slatestarcodex.com/2014/04/28/the-control-group-is-o...


I think you're kind of mischaracterizing this article. The conclusion drawn isn't really that psychological research is wrong per se, but that the statistical and methodological tools which we _believed_ sufficed to let us formulate experiments that make decisive epistemological statements apparently do not do so. Scott Alexander isn't impugning the behavior of most of these scientists or even the endeavor of social psychological research. He is simply observing that our methods are clearly insufficient for attributing strong epistemological weight to results.


> I think you're kind of mischaracterizing this article. The conclusion drawn isn't really that psychological research is wrong per se

This is a characterization that you have just made, not one that the person you are replying to made. They were very clear: "The biggest factor impacting the outcome of an experiment in psych is what the researcher conducting the study wants to be true."

Even to say something is "more pseudo than science" almost directly contradicts your characterization, because it implies that there is both science and pseudoscience.


Psychology is just vibes, dressed up as science


Microbiology is similarly rife with fraud


At least you can more easily verify an experimental result that takes a million e coli vs one that takes a million depressed people.


How so?


Just wait until it's all in AI LLMs... fun times ahead.


I understand where you are coming from. However, this overlooks a critical stakeholder group directly affected by the outcomes of such research: patients.

For instance, controversies surrounding the PACE trial, which explored treatments for Chronic Fatigue Syndrome (ME/CFS), illustrate the profound implications of research integrity. Critics have condemned the trial as "biased and profoundly flawed," arguing that its deficiencies have led to detrimental effects on patients' lives and treatment approaches. This controversy underscores the significance of research outcomes.

For more insights into the academic dishonesty surrounding ME/CFS and its consequences for patients, I recommend this article: https://www.theguardian.com/commentisfree/2024/mar/12/chroni...

This issue highlights why it's crucial for both the scientific community and the public to demand rigor and transparency in research, especially when the well-being of vulnerable populations is at stake.


I certainly don't. I see so many scientific/philosophical issues with it, and I'm surprised anyone takes it seriously.



Researchers should tick a checkbox “I swear I did not hack the data” before submitting a paper to a peer-reviewed journal to prevent this kind of misconduct.


Isn't this just basically fraud? I'm sure it's already covered by the existing things you sign, but surprisingly that doesn't stop people who are willing to commit fraud.


I assume that in this particular case, it's a joke referring to the subject of the study, which involved similar, if even weaker, assurances.

In general, this kind of thing is oddly common. It's all over government forms. I just interviewed with a Chinese father who wanted me to spend time with his children providing exposure to English. He asked me whether I had a criminal record.

I don't, but if I did, and I chose to lie about it, random Chinese parents would never know the difference. (Though entering China might have been a challenge.) Why ask?


> I don't, but if I did, and I chose to lie about it, random Chinese parents would never know the difference. (Though entering China might have been a challenge.) Why ask?

I think you'd be surprised how many people are really bad at lying; or even bad at acting normal when they think they have something to hide, particularly when asked such a question unexpectedly. Sure, some people with a criminal record may be able to lie plausibly when put on the spot, but there are a reasonable number who would give themselves away even if denying it. No real cost, some benefit, so why not ask?


Well, I sanitized my report of the interview. He actually asked if I had a bad history. I was confused and said I didn't understand. He clarified that he was asking about things like crime.

So the element of getting suddenly put on the spot doesn't really apply. That's already about as awkward as communication gets.

More importantly, though, I don't think the point is correct to begin with. This is fine:

> I think you'd be surprised how many people are really bad at lying; or even bad at acting normal when they think they have something to hide, particularly when asked such a question unexpectedly.

But job applicants with criminal records are only going to match this description once or twice. For the rest of their lives, it's not going to be an unexpected question, and they'll have lots of practice in denying their record if that's the way they choose to go. You're just never going to catch anyone out this way.


> But job applicants with criminal records are only going to match this description once or twice. For the rest of their lives, it's not going to be an unexpected question, and they'll have lots of practice in denying their record if that's the way they choose to go. You're just never going to catch anyone out this way.

You're right, you're only going to catch people out if you're the only one asking this kind of question... but right now that's more or less true of the people in your story. First-mover advantage. :-)


Why do forms require signatures if it's easy to forge (especially if you are submitting it electronically), and evidence of the agreement is gathered from other sources if it comes up in court?

Because it boils down what could have started as a nebulous mix of halfway-unethical actions into a single fraudulent act that can be pointed to and punished later, and will provide a lot of people with the impetus to back out of the fraud once they see themselves about to commit an unambiguously illegal act.


Can't they run a background check? It's common to check a tenant out when renting a property.


Maybe, but engaging an American company to run a background check on an American would represent a significant commitment for the typical Chinese person. It's not a realistic idea.


More than hiring an American person to tutor their children?


And the checkbox should be at the top of the submission form :)


You just doxxed yourself as Dan Ariely :-)



And a question at the bottom asking “do you feel the urge to wash your hands with soap?"


It's pre-ticked.


They already do for a lot of publications. Check the 'reporting summary' in this random article https://www.nature.com/articles/s41586-024-07171-z#rightslin...


They do, but those "Reporting Summaries" are useless makework, IMO.

First, you fill them out after the data has been collected, analyzed, and written up. It's perhaps helpful as a reminder to include a few tidbits in the text (e.g., the ethics approval #), but literally no one is going to fill this form out, realize the sample size is way too small, and... abandon the manuscript.

Second, you don't actually want people to comply with the instructions. For example, it asks for "A description of any assumptions or corrections, such as tests of normality". A decent number of statisticians argue that you shouldn't be using normality tests to choose between parametric and non-parametric stats. On top of that, nobody actually writes out assumptions behind OLS in their paper either.
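
One way to see the statisticians' point: at the small sample sizes typical of these studies, normality tests have very little power, so "passing" one means almost nothing. A minimal sketch (assuming numpy and scipy are available; the sample size and trial count are arbitrary choices):

    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(0)
    n, trials = 15, 2000

    # Draw small samples from a clearly skewed (exponential) distribution and
    # count how often Shapiro-Wilk fails to flag the non-normality at p < .05.
    misses = sum(shapiro(rng.exponential(size=n)).pvalue > 0.05
                 for _ in range(trials))
    print(f"missed non-normality in {misses / trials:.0%} of samples (n={n})")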

I am deeply skeptical that this cookie-cutter stuff actually helps in any meaningful way. It feels like rigor-theatre instead.


This sounds RFC 3514 compliant.


Reportedly, a lot of people who choose to study psychology are motivated originally to figure out what's going on in their own heads (which they have a sense is not quite normal) [1]. I wonder if there's a similar dynamic with "honesty / ethics": people who lack a native impulse to be honest / ethical and are curious about people who do.

[1] See e.g., https://news.ycombinator.com/item?id=39703638 from yesterday


An alternative, and more likely imo, hypothesis is that attaining and maintaining the social (and probably $$$) capital of being a "Harvard Expert" leads to desperation and breaking the rules.


Sure, that might have been the "trigger"; but that's missing the key thing that needs to be explained. If this were dishonesty by someone doing chemistry, it's unlikely that this would have hit the front page of HN. As they say, "Dog bites man isn't a story; man bites dog is."

But a priori, you'd expect someone studying honesty* to personally care about honesty, and thus to be less likely to give in to these sorts of pressures. That's the thing that needs to be explained; the "man bites dog" aspect.

* EDIT s/honestly/honesty/g;


> As they say, "Dog bites man isn't a story; man bites dog is."

The headline is just playing on the gap between theory and practice. If someone is a murder expert, is that someone who understands murder or someone highly qualified to kill people?

Therefore, the title is about a "dishonesty expert" matching the practice, not about an "honesty expert", which could be replaced by any other field.


Reminds me of the grad student in criminology who allegedly committed a brutal multiple murder in Moscow, Idaho. He got caught, so I guess he wasn’t as smart as he thought he was.


> not about an "honesty expert", which could be replaced by any other field.

Surely not marketing and sales!


> But a priori, you'd expect someone studying honestly to personally care about honestly, and thus to be less likely to give in to these sorts of pressures.

I'm not so sure you can automatically assume this. You can definitely assume that the subject interests them.

But it's also possible that someone studying honesty/dishonesty might start seeing the subject in academic/technical terms instead of moral terms. Which may give them much less of a disincentive to be dishonest than the average person.

Which is to say that repeated studying and analysis of instances where people are dishonest may break down the gut reaction people have to being dishonest.


I didn't say one should assume it, just that many people do. "Chemistry researcher committed fraud" is simply not the same as "Dishonesty expert committed fraud".

You give an alternate explanation, but it's still an explanation; one which wouldn't be needed (and indeed wouldn't apply) to a chemistry researcher.

A variation on your explanation might be: dishonesty researchers discover just how easy it is for dishonest people to cheat the system, and how little consequence there is, and so are more tempted to be dishonest themselves.


> But a priori, you'd expect someone studying honestly to personally care about honestly

But then, why study honesty if you already know what it is?

For me, this sort of thing - an honesty expert - is in the same realm as 'ethics panels'.

These folk are there to abuse edge-case arguments (think "trolley problem") in order to provide moral cover for corporations to act dishonestly, unethically. And they will provide documentation in support. CEOs will just do what they wanted anyway, but call it "moral"™.


The researcher was studying dishonesty... So I gather that she was more interested in dishonesty?


lawyer broke law or accountant tax cheat or doctor kidney thief or locksmith cat burglar

People using their skills dishonestly is the theme of the story


computer professional caught hacking!


Probably the most important reason this story got big is because it involved Dan Ariely, a popular author and perhaps the most famous active researcher in the world.


Strong disagree. Literally every story I've read about this starts by talking about Francesca Gino.


I think it's simpler than that: there's a heck of a lot of dishonesty in academia, but the dishonest dishonesty researcher grabs the headline.


Yes, my experience with academics is that there are a lot of very dishonest people. They are political bullies who also lie in their research.

Chances of being caught are close to zero (I have contacted authors of papers whose work I was unable to replicate many times - most of the time zero reply, sometimes "yeah, it was an honest mistake, oops"), competition is super high (only a few tenured positions at the world's high-visibility institutions per year), and advisors have full control over students' futures, with the ability to force them into fraud (and later blame it on them).

Obviously, not all, blah blah - but many academic scientists are the last people that should be doing science.


I think it's more likely that psychology offers a lot of potentially marketable propositions, such as lie detection, effective lying, covert behavioral control, secret information about the movements of financial markets, etc. Therefore, a lot of snake oil is sold, and people's careers advance proportionally to the amount of snake oil they can sell.

The job has often been to come up with a marketable theory; to design experiments and write papers to imply that that theory is true without quite proving it; and to avoid the possibility, through any means, that someone will weaken or disprove the theory.


Within academia, this is known as "research is me-search".


Interesting hypothesis but my money is on the opposite direction of causality. When you're surrounded by dishonesty on a daily basis, you get de-sensitized to it. You start to see it as "normal" or as "everyone does it". And then you start to think "Oh, what's the harm if I just fudge a little bit here and there. It's not like I'm profiting off of this. Unlike all these other millionaires who do far worse"

Sure, it's easy for us to think "I would never do anything like that, no matter what others around me are doing". But as someone who has lived in multiple countries, trust me - the vast majority of the things we do are simply a reflection of what we see other people doing.


A thief has the most locks on his door.


> Reportedly, a lot of people who choose to study psychology are motivated originally to figure out what's going on in their own heads

People who go into psychology are more interested in what's going on in other people's heads and more importantly manipulating other people. It's more about controlling others than controlling oneself. It's why psychology was founded in the first place.

> people who lack a native impulse to be honest / ethical and are curious about people who do.

Ethics isn't about studying people. It's about studying principles. IE what does ethics mean. What makes an act ethical vs non-ethical. So on. You can delve into the ethics of gods, god, AI or even animals. Are ethical principles universal or not. So on and so forth.


> manipulating other people. [...] It's why psychology was founded in the first place.

What are you referring to here? Which specific founder(s) wanted to (or did) manipulate people?


The article states that this researcher falsified data. While I rather abhor dishonesty, especially being a former academic myself, one must keep in mind the following: 90% of academia is rotten in the sense that the research being done is being done because the academic path requires some kind of research. This in turn is the case because research groups and departments want to keep themselves alive, and the only way to stay alive is gain funding. Finally, and worst of all, gaining funding has been almost entirely divorced from the usefulness that research has to people.

As a counterpoint, I do support basic research and curiosity, which I fundamentally believe have value to societies, but the research being done is hardly out of curiosity. Instead, these perverse incentives have led to an environment where basic curiosity is paradoxically discouraged.

The end result is that large portions of academia are filled with useless machines doing useless work. As a result, we're getting these pathological cases. And we can think about this fraud on one more level: don't think that it will necessarily hurt academia... a fraud is necessary every once in a while so that everyone else shines just a little brighter.


>Finally, and worst of all, gaining funding has been almost entirely divorced from the usefulness that research has to people.

By "usefulness to people" do you mean, essentially, "good for society"? Because I have a hard time believing that funding entities are simply flushing money down the toilet. They must find the research useful to their ends.


There is definitely a lot of money thrown after whatever the current buzzword is - we had blockchain, now we have AI; sustainability is also a big one in some places. That doesn't mean that there aren't great things one could achieve for society in both the AI and sustainability fields, but having the correct keywords on your project proposal goes a long way, even if you quite obviously don't have a clue what those words mean.


Yes, I mean good for society. They are not flushing money down the toilet: who do you think makes funding decisions? Who do you think reads the funding proposals? Scientists. Scientists are giving money to each other to keep their efforts alive. Scientists don't do things for the good of humankind, they do it for fame and pure intellectual curiosity.

Of course, there are benefits to those outside of science: science furthers the growth of technology, but is that really beneficial to us? Yes, we are thrown a useful bone now and then, such as a vaccine here and there, but the benefits that actually make life better probably account for 1% of scientific activity these days.


I've come to think we need to ignore the current science system/establishment and start over with something¹ better.

¹ To be determined :)


For the most part, I absolutely do agree with you. I would say something like more traditional knowledge emphasizing the relationship with the earth. I would get rid of 90% of science, at least.


The Wikipedia article also has some nuggets:

"In or before 2020, a graduate student named Zoé Ziani developed concerns about the validity of results from a highly publicized paper by Gino about networking. According to Ziani, she was strongly warned by her academic advisers not to criticize Gino, and two members of her dissertation committee refused to approve her thesis unless she deleted criticism of Gino's paper from it"


It's one of those days where I have to double-check I'm not on an HN-themed The Onion.

"CEO of data privacy company Onerep.com founded dozens of people-search firms"

"Harvard concluded that a dishonesty expert committed misconduct"

And depending on the general mood I'm in, this is kind of a downer sometimes, like apparently only assholes make it in this world because you cannot compete with others if you have a conscience and they don't. Sorry for this tangent but it's just one of these days somehow.

The McDonald's outage at least was good for a quick laugh when I read the "unexpected world's health day" comment.


> like apparently only assholes make it in this world because you cannot compete with others if you have a conscience and they don't.

I think this is the key fact of life that it took me far too long to realize. They say that "cheaters never win and winners never cheat" but it's actually pretty much the opposite of that.


> you cannot compete with others if you have a conscience and they don't.

This is self evidently true, though. A conscience is a constraint on behavior. People without this constraint can accomplish more things. Of course, they may end up suffering for their actions, but it's a surefire way to short-term success, and not unlikely to lead to long-term success as well.


A constraint can lead to more power e.g. the power of abstractions in code. On a more grounded note, having a constraint (a creed or code that limits you) can be a visible signal to others that you’re worth following. With these bonds strengthened by self-imposed constraint, you can accomplish more together than you could alone but unfettered by limitation (debatable). This is a theme in many movies and stories where the main character’s restraint inspires others to great deeds and gives them the resolve to do the unthinkable (Star Wars). Of course, a person with no conscience can always feign this restraint… (Palpatine)


>This is self evidently true, though.

I think you mean "this is intuitively true" because, like so many intuitive conclusions, it is wrong. Looking at animal behavior, it's intuitively true that organisms will only win if they are selfish, and yet selfless behavior has evolved in countless species. Another intuitive truth is that a powerful species of predator that can kill prey at will would decimate the prey population and end up dying off from lack of food. Yet most ecosystems can and do support both predators and prey in equilibrium. In business, it's intuitively true that it would be stupid to start another fast-food hamburger business in the presence of McDonald's - yet Carl's Jr./Hardee's did very well doing just that. Counter-examples abound to the intuitive conclusion.
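
The predator-prey point, at least, is textbook: the Lotka-Volterra equations produce sustained cycles around an equilibrium rather than collapse. A crude Euler-integration sketch in Python (parameter values are arbitrary, purely for illustration):

    # Lotka-Volterra: dx/dt = a*x - b*x*y (prey), dy/dt = c*b*x*y - d*y (predators)
    a, b, c, d = 1.0, 0.1, 0.5, 0.5   # arbitrary illustrative rates
    x, y, dt = 10.0, 5.0, 0.001       # initial prey, predators, time step

    for step in range(200_000):
        x, y = x + (a * x - b * x * y) * dt, y + (c * b * x * y - d * y) * dt
        if step % 50_000 == 0:
            print(f"t={step * dt:6.1f}  prey={x:7.2f}  predators={y:6.2f}")
    # Both populations cycle around the equilibrium (x = d/(c*b) = 10, y = a/b = 10)
    # instead of the predators eating the prey to extinction.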

One of the best habits you can form is the habit of questioning your intuition.


congratulations - a completely self-centered analysis amidst a world of systems of systems


> This is self evidently true, though.

I absolutely disagree.

Conscience, empathy, shame, and similar emotions/instincts that encourage pro-social behavior would only have developed if they led to higher reproductive fitness for those who had them compared to those who didn't.

For instance, only a very small percentage of people are psychopaths. If we look at the average life outcomes for psychopaths, we discover that while some of them may be super-successful, we also find a lot of them as social outcasts or in prison. And if they end up in confrontations with law enforcement, they're also more likely to get killed.

I would say the reason should be obvious. People without these guardrails on behavior may get away with anti-social behavior for a time, but eventually people figure them out. This leads to all sorts of active and passive punishment, ranging from difficulty making friends or holding on to a job all the way to life in prison or execution.

SOME people may negate the downsides by acting in a super-rational way. But this only works for people who are both highly intelligent and who also have very good impulse control.

But for most people, having at least moderate levels of pro-social instincts/emotions will be just as beneficial for themselves as for the people around them.


> Conscience, empathy, shame, and similar emotions/instincts that encourage pro-social behavior would only have developed if they led to higher reproductive fitness for those who had them compared to those who didn't.

It's a classic prisoner's dilemma. In aggregate, the best option is cooperation. For an individual, the best option is betrayal in a society where everyone else follows the rules.

Betrayers can't become dominant because the worst option for both the individual and society is everyone being a betrayer. It ruins social cohesion, so there's no synergies to reap rewards from.


> For an individual, the best option is betrayal ...

Well, this applies to prisoner's dilemma games if you know exactly how many times you will "play" the game with a given other player, including the case where you will only play one time.

But for repeated prisoner's dilemma games, where you don't know which game is the last against a given player, a tit-for-tat strategy massively dominates an always-defect strategy.
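
A toy simulation makes this concrete. Below is a minimal sketch with the classic Axelrod payoffs (T=5, R=3, P=1, S=0); the 200-round match length is an arbitrary choice. A lone defector invading a tit-for-tat population earns far less per round than the natives earn with each other:

    # Payoff to me given (my move, their move); C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(opp_history):
        """Cooperate first, then copy the opponent's previous move."""
        return opp_history[-1] if opp_history else "C"

    def always_defect(opp_history):
        return "D"

    def match(a, b, rounds=200):
        """Play two strategies head to head and return their total scores."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_b), b(hist_a)  # each sees the other's past
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    native, _ = match(tit_for_tat, tit_for_tat)     # 600 points: 3 per round
    invader, _ = match(always_defect, tit_for_tat)  # 204 points: ~1 per round
    print(f"TFT among TFT: {native}, defector invading TFT: {invader}")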

Humans have evolved in environments where repeated prisoner's dilemma situations are common, and have evolved emotions such as empathy and conscience for interactions with cooperative agents.

We ALSO have evolved (to a lesser or greater degree) the ability to turn such emotions off when dealing with people who attempt the always-defect strategy (such as psychopaths).

In fact, psychopathic behavior only becomes viable in populations that have been using tit-for-tat for so long that they've "forgotten" how to punish defectors (i.e. almost all agents have switched to always-cooperate).

But once you introduce a few always-defect agents into the population, the tit-for-tat agents will soon dominate again.

At least for the brain wiring for interacting with members of our own "tribe". When dealing with people outside our "tribe", it's much more likely that we will only play them once, and in such cases, always-defect may indeed dominate.

However, there is also a tribe-level game, where the tribe can act as agents. And in tribe-vs-tribe games, depending on circumstances, both tit-for-tat and always-defect can be optimal strategies. Always-defect can, ultimately, lead to genocide, while tit-for-tat can lead to alliances, trade and intermarriage.

People who think that always-defect is always the optimal mode of behavior tend to be people who either don't consider it for their close relations, or who have very few close relations. For instance, someone who lives alone in the city and is either unemployed or tends to have purely transactional work relations.

What such people tend to forget is that there is an artificial presence in the city, called the police, that can impose a level of safety that removes the need for high-trust, pro-social relationships. In a society with police, all you have to do is know what behavior lands you in prison, and otherwise you can be a selfish bastard.

As soon as you remove the police, loners like that are fair game for "predators", and only those who have a "tribe" have any kind of protection. Such a "tribe" can be a clan in an Afghan mountain area or it can be a gang in a high-crime city. While a gang member may have little to no empathy or conscience when dealing with outsiders, maintaining good relations with other clan members can be critical (depending on what specific gang it is).


> repeated prisoner's dilemma games

I think this is why communities are so important: they mean your interactions with other people are mostly with the same people instead of mostly with strangers.

The other important aspect is that people think "we have the same values, so we are on the same team", as you will otherwise always choose to defect due to mistrust of the other party.

Police on their own are not enough; they cannot solve most crimes without help from citizens. A police officer without good informers is not good police ("The Wire", see IMDB). So police are only really effective in a high-trust, pro-social society.


I suppose I should have clarified that always defecting is optimal if you're optimizing purely selfishly for material goods. It won't make anyone feel fulfilled and loved.

> But for repeated prisoner's dilemma games, where you don't know which game is the last against a given player, a tit-for-tat strategy massively dominates an always-defect strategy.

If we're considering repeated prisoner's dilemma games, you also have to consider the gain/loss of power between the various players. The classic prisoner's dilemma consists of two equally impotent players. Over a series of games, a player could aggregate enough power/resources to alter the parameters of the game, like using those resources to offer a substantial reward for cooperating with them or confessing. Inversely, they could use that power to enforce penalties on other players for betraying them.

The question then becomes whether an always-defect player can accumulate enough resources to change the rules of the game before the other players switch to tit-for-tat.

The answer in the real world appears to be "mostly yes". Uber appears to have chosen "always defect", but they got enough power through their defections that we've become effectively powerless to penalize them for defecting. Theranos is a counter-example where they were eventually punished, though they made the mistake of playing the prisoner's dilemma against already powerful players.

Without getting too political, Trump has a history of defecting on business relationships and the man was elected president.

> At least for the brain wiring for interacting with members of our own "tribe". When dealing with people outside our "tribe", it's much more likely that we will only play them once, and in such cases, always-defect may indeed dominate.

Within the tribe, the power levels tend to be flatter, though, making it easier for an always defect player to seize enough power to not have to play the game anymore. I.e. if the player can become moderately wealthy, the resources they offer their tribe would likely outweigh their history of defection. Company towns are an example of this; virtually all the residents agreed the company was awful, and yet they stayed (for a long while).

> As soon as you remove the police, loners like that are fair game for "predators", and only those who have a "tribe" have any kind of protection. Such a "tribe" can be a clan in an Afghan mountain area or it can be a gang in a high-crime city. While a gang member may have little to no empathy or conscience when dealing with outsiders, maintaining good relations with other clan members can be critical (depending on what specific gang it is).

Sure, but that's not the prisoner's dilemma anymore, so always-defect is probably not an optimal choice.

A crux of the prisoner's dilemma is that a player can benefit by choosing to defect when the other player chooses to cooperate. If cooperating results in the best individual outcome and the best societal outcome, it's not a dilemma anymore. Cooperating is the obvious best choice.

This is part of what makes always-defect viable. Most situations do not mirror the prisoner's dilemma, and cooperating is often the best individual outcome. If you look at an always-defect player, they might be cooperative in 9/10 or 99/100 situations because only 1/10 or 1/100 were prisoner's dilemmas where they could benefit by defecting.

I don't think "always defect" is globally optimal, only for prisoner's dilemmas. Always-defect globally implies doing so even when it is clearly a sub-optimal choice (like driving your car into a brick wall just because the police said not to). The parameters of the dilemma scope it to only situations where defecting confers gain.


> I suppose I should have clarified that always defecting is optimal if you're optimizing purely selfishly for material goods.

First I disagree that "success" is to be considered only in terms of accumulating material goods. It's also not JUST about feeling fulfilled and loved.

Second, I'm not denying that people are sometimes involved in "games" where always defect is indeed optimal. My objection is to the claim that always defect is universally optimal, even if you narrow "success" down to just maximizing wealth.

And, as I stated above, even prisoner's dilemma games tend to have tit-for-tat as the dominant strategy, since most games are repeated an unknown number of times.

That being said, there is an important element of competitiveness in capitalism. When you negotiate your salary (or that of your employees), you and the counterparty have conflicting interests. If you're naïve when playing the capitalism "game", you're likely to be taken advantage of.

I don't consider this a "defect" situation though, and definitely not a prisoner's dilemma "game". Capitalism tends to be a positive-sum game. This positive sum comes from a side effect of a free market, namely that supply and demand (including for labor) acts as a kind of computation algorithm that generally produces more efficient allocation than any organization is able to plan using a top-down approach.

Imposing always-cooperate into capitalism (such as through labor union negotiated wages) may actually REDUCE this positive sum.

And if there is a game where the sum increases if at least one side defects, then it's not a prisoner's dilemma game at all.

> If you look at an always-defect player, they might be cooperative in 9/10 or 99/100 situations because only 1/10 or 1/100 were prisoner's dilemmas where they could benefit by defecting.

If 1/100 "games" are single-play prisoner's dilemma games, then we shouldn't be surprised if many choose to defect in those specific games. That actually doesn't bother me. Those people who defect in those games but are cooperative 99% of the time are still probably behaving better than what I generally expect from people.

If you expect people to NEVER defect, then you may want to adjust your expectations. Basically, such an expectation is the very essence of being naïve, and will act as a magnet for people who regularly take advantage of other people.


Genetic fitness != financial success. Moreover, ancestral environment != modern environment. Not saying you're wrong, but it's not a straightforward application of evolutionary thinking.


I was interpreting "this world" a bit more widely than just the work environment in high level Academia.

There are certainly environments where anti-social behavior can be optimal. But there are also a LOT of environments where pro-social behavior is rewarded. Including, but not restricted to, within families.

Anyway, you're right that we're not living in our ancestral environment. But I don't think we're in a world that actually rewards anti-social behavior less than much of our evolutionary history did.

Rather, I think cynical outlooks like that of the OP actually appear for reasons such as:

1) People who are themselves anti-social who either actually think that everyone else are the same, or at least try to normalize such tendencies.

2) People who, during early development, had a very naïve world view and then, when faced with reality, became so disillusioned that they went to the other extreme.

I think the vast majority of people on earth today still live in environments where some amount of compassion, conscience and empathy are useful traits (for themselves).

However, in a lot of cases, it can also be useful to have the ability to turn such emotions on or off depending on the context. Once we start to think of some other group of people (or individuals) as either a mortal enemy or a kind of prey, a lot of us can reach such an off-switch when dealing with them.


Adding a little bit of optimism: there are many decent people in all of these fields. They might not as often make the news (initially for extraordinary “achievements“ then for downfalls), but it is entirely possible to “make it in this world“ (supporting a family, taking the occasional vacation, retiring at age 65) while being honest.


For further details on the "data privacy company Onerep.com" discovery

https://news.ycombinator.com/item?id=39709089


>like apparently only assholes make it in this world because you cannot compete with others if you have a conscience and they don't. Sorry for this tangent but it's just one of these days somehow.

We are surrounded by psychopaths. We have politicians willing to sell out to foreign countries for what isn't even massive wealth [0]. We have government workers "working from home" on side gigs on government contracts [1]. And these are examples of "nice" ones that don't involve wars. It's crazy.

[0] https://www.justice.gov/usao-sdny/pr/us-senator-robert-menen...

[1] https://nationalpost.com/news/politics/auditor-general-fired...


This is true in some fields and not at all in others.


This and other factors are why n-gate’s HN article descriptions were so good. Most of the time they were basically more accurate and honest than the actual articles or most posts on this site, while also being concise. Cynical? Yeah, but… accurate.


This reminds me of Orson Welles' excellent film: F for Fake.

It explores art forgery, art experts, and the author of a book about art forgery committing a massive forgery.


This is great news for Harvard! It implies that it's taking academic honesty and rigor seriously, and deserves praise for that. Just as police departments deserve praise for firing/suspending/charging bad officers, just as state bars deserve praise for disbarring bad lawyers, we need to see more of this not less. It concerns me that such news is often used as evidence of systemic problems when this sort of news signals the exact opposite.

We should be more concerned when there are NO instances of malpractice reported by institutions. This doesn't mean there isn't malpractice, but that the institution has lost the will to enforce its own rules. Reacting to this news negatively provides a perverse incentive to institutions, and should be quelled.





I guess she is the expert.


> I guess she is the expert

Absolutely. I recommend everyone look at her publication history on the HBS website:

https://www.hbs.edu/faculty/Pages/profile.aspx?facId=271812&...

Her featured work is a book titled "Why It Pays to Break the Rules at Work and Life."

One of her most cited papers is "The Dark Side of Creativity: Original Thinkers Can Be More Dishonest".


Maybe she wasn't committing fraud, but original research.


If enough people read and internalize the fraudulent conclusions via pop science journalism etc., maybe the effect will become measurable enough to replicate the original paper :)


You become a dishonesty expert by lying for 10,000 hours. There are no workarounds, got to do the work.


Being an "expert" takes a lot of practice, and it shows.


It’s high time we accept that social sciences have nothing scientific about them, and are just a mix of political ideology and astrology.

I already said this before and I'll repeat it: media constantly portraying the social sciences and their findings as actual science is a big culprit in all this disbelief the masses are showing towards real science (the physical and natural sciences).


The question of whether the social sciences are sciences is subtle to answer. If we take Francis Bacon's method as more or less what a science is, then, in order to be a science, we need to be able to do experiments. Experiments are objective (anybody can watch them happening), communicable (you can tell anybody how to do the experiment), and repeatable (anybody can do them).

An example would be something like chemistry. An experiment could be burning hydrogen with oxygen to see if it yielded water. There is a very strong sense in which anybody could do this same exact experiment. It is repeatable by anybody, it happens in an objective space we can all observe.

Now, take something like history (perhaps one of the social sciences). It's proverbial that history repeats itself, but can you do experiments in history? Not really, because you can't do something like rerun WWII, only this time the Nazis didn't chase out Einstein.

Note, this doesn't mean that history is bogus. Yes, it takes as its object of study something on which you can't perform experiments, but nevertheless it does have other methods which are apropos to its subject matter, and it is a discipline--doing history is scholarship. It yields knowledge--just not scientific knowledge in the sense of the Baconian scientific method.

Economics is the same--we can't just re-run the Reagan era, only this time without trickle-down economics. If somebody is studying, say, the causes of the Great Depression, their subject matter isn't something which you can gain knowledge about using experiments. Doesn't mean you can't get economic knowledge in other ways.

So....what about Psychology? Specifically, somebody who studies...deception. Is deception something you gain knowledge about by doing experiments? If you take a group of people and try to deceive them, and study what happens, is that an experiment?

It's not, and for a very interesting reason: the "is-ought" distinction. Experiments can tell you the "is"--what is happening. But they can't tell you the "oughts"--what should be happening. In English, we use two different verb moods to mark the distinction, e.g. "Fred is not lying" vs "Fred should not be lying."

Now, you can perform experiments and observations to determine whether or not Fred is telling the truth. But lying? Lying has an extra ingredient: the intention to deceive. Intentions are subjective and not observable in the same way that a chemistry experiment is. Very problematic from the standpoint of Baconian science.

But what is completely outside the scope of Baconian science is determining whether a sentence like "Fred should not be lying" is true. If Fred is hiding Jews in the basement, should he lie to the stormtroopers at his front door? If he had an affair, but it's over and he wants to stay happily married to his wife, should he lie if she asks whether he had an affair?

There are no experiments--in the sense of yielding publicly observable and repeatable results--which will tell you this. Note, this doesn't mean that psychology is bogus, any more than history is bogus. It's just that psychology has as its subject matter some phenomena which are not amenable to experiment. It used to develop methods--like psychoanalysis--which were more apropos to studying its subject matter.

But these days, it seems to be embarrassed about all that, and wants to be a "respectable" science. Instead of Freud theorizing about how childhood trauma affects adult moodiness, they do things like study whether or not Prozac helps depression. Whether or not a drug can boost serotonin levels in the brain can be investigated scientifically, but in an important sense, that is changing the subject. We're not talking about humans anymore; we are talking about what chemicals do to serotonin reuptake in neurons.

So, what if you are an assistant professor, desperately trying to get some scientific results so you can get tenure? If you try to study something like "deception" scientifically--doing experiments on sophomores--you'll run up against the brute fact that your subject matter can't be studied scientifically. You can call what you are doing "experimenting", but any results you get will not be repeatable.

This is a problem if you are trying to get tenure. Psychology could have just said, "hey, we are more like economics and history than we are like chemistry, so let's develop some methods appropriate to investigating our subject matter."

But no. What they did instead was just make up results, on what looks like a phenomenal scale. And it's been going on so long that we have generations of people who got their Ph.D. from somebody who cheated their way through. And even if they wanted not to cheat--they are competing with peers who will happily cheat.

The result is sad stories like the OP. It's a vicious circle, a race to the bottom. Honesty is punished, deception is rewarded. The whole field has seriously lost its way. They need to get back to realizing that they are studying a subject matter that calls for methods other than experimentation.


I’m not sure the social sciences are any more scientifically corrupt than the “harder” sciences.


Biology has probably had the most replication scandals of the hard sciences, but if you look back over the past 10 years, there have been several tangible breakthroughs: CRISPR, mRNA vaccines, cystic fibrosis treatments, the malaria vaccine, and that’s just off the top of my head.

So let’s not throw the baby out with the bathwater on Science.

I would be interested to see what other people’s comparable lists would be for the social sciences – I don’t know enough to say that “absence of evidence is evidence of absence”.


Here's the thing: even some of the items you listed "off the top of your head" as counterexamples are entirely fraudulent.

I don't think anybody's trying to throw the baby out with the bathwater.

This is entirely serious. But most people have no idea how broad and deep the hard-science integrity problem runs (at least in biology).

Not only that, but it's HARD for people to know, and there are, with 100% certainty, efforts to keep it that way.


The harder sciences get to lay claim to things like AI and cell phones. Antidepressants and EVs.

Social sciences get to lay claim to CBT?


So? That list has nothing to do with corruption in the sciences.

BTW, social sciences get credit for the modern economy and the known history of humankind, for modern farming, for today’s government and policy, for mental health and mental therapies, for modern advertising, for our understanding of languages, and for modern corporate management, just to name a few.


Your list includes a lot of things I wouldn't be proud to claim as accomplishments.


You honestly believe that the social sciences have the same rigor as physics?


Yes, for whatever that's worth.

One example: I did a pure-chemistry undergrad degree and a psychology-adjacent graduate degree. The "hard science" degree involved basically zero applied statistics of any kind; any statistics concepts that came up were emergent from lower-level material, even in e.g. stat mech courses, and I wasn't even aware that "Design of Experiments" was a thing. The "soft science" curriculum, by contrast, was heavily focused on statistical rigor, DoE, layers upon layers of internal controls, and so forth--basically because there was no practical way to see an unambiguous effect. It certainly seemed "rigorous".

However, the soft-science stuff just has less predictive power despite the rigor. In a broad sense this relates to human perception, the capabilities of technology, model systems, semiotics/epistemology (maybe the wrong word), prediction/confirmation, and especially faith.

For example, at a macro, human scale on Earth, it is very easy to accurately predict how things work. Micro-scale physical objects require a bit more bootstrapping: you can't see most molecules or atoms, so you have to come up with a way of inferring measurements through other processes, which themselves have to be trustworthy. Etc.

At some point, of course, one has to accept some model as an axiom or a matter of faith in order to make any progress. You can't model a falling brick if you can't trust your timer or measuring tape, etc. So you stand on giants' shoulders and build a predictive model as an extension of the axiomatic one.

This starts getting really squishy when you're dealing with entire concepts that have no agreed-upon definition, whether quantitative or qualitative, and you try to make them into something onto which statistical rigor can be applied!

So with soft-science literature it is always extremely important to mentally substitute the details of a model's implementation for the shorthand expression used to describe it, which may overlap with a commonly used word.

For example, "We found that this drug candidate significantly reduced depression in mice" -> "We found that this drug candidate significantly increased the amount of time mice swim around before giving up when you chuck them into a tank of water, etc.

because the rigorous conclusion might not actually be practically meaningful if the axioms are practically unpredictive.

And even worse, going back to the "no agreed definition" thing, psychological concepts are only defined in terms of these very vague experiments! It's like bootstrapping physics if you're a disembodied nothing in a simulation.
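
To make the shorthand substitution concrete, here's a minimal sketch in Python with entirely made-up swim times. The point is that the statistics can be perfectly rigorous while the construct being measured stays squishy:

    # Hypothetical forced-swim times in seconds (how long each mouse swims
    # before giving up). The t-test below is rigorous; whether "swims longer"
    # actually means "less depressed" is the squishy, axiomatic part.
    from scipy import stats

    control = [92, 85, 110, 78, 95, 88, 101, 79]
    treated = [118, 130, 102, 125, 140, 111, 122, 133]

    t, p = stats.ttest_ind(treated, control)
    print(f"t = {t:.2f}, p = {p:.4f}")
    # A "significant" p here gets reported in the shorthand
    # "the drug candidate significantly reduced depression in mice".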

idk there's much more to say on this but I'm procrastinating at work so


Corruption and rigor are different things.


Can't really be corrupt if you have enough rigor.


Oh, lots of people have been rigorous about their corruption. Maybe they only got caught because they weren’t rigorous enough. :P https://en.wikipedia.org/wiki/List_of_scientific_misconduct_...


Physics: "Elements of what became physics were drawn primarily from the fields of astronomy, optics, and mechanics, which were methodologically united through the study of geometry. These mathematical disciplines began in antiquity with the Babylonians and with Hellenistic writers such as Archimedes and Ptolemy." Established: 200 BC[0]

Social Sciences: "The history of the social sciences began in the Age of Enlightenment after 1650" Established: 1650 AD[1]

Physics: 2224 year history

Social Sciences: 374 year history

Physics: 3 incidents of misconduct on Wikipedia article (and that includes engineering as well).

Social Sciences: 11 incidents of misconduct on Wikipedia article.

Physics: One incident per 741 years.

Social Sciences: One incident per 34 years.

[0]https://en.m.wikipedia.org/wiki/History_of_physics

[1]https://en.m.wikipedia.org/wiki/Social_science
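
For what it's worth, the arithmetic above checks out -- a minimal back-of-the-envelope sketch in Python, assuming the incident counts above and assuming 2024 as "now":

    # Years per recorded misconduct incident, from the counts above.
    disciplines = {
        # field: (year established, incidents on the Wikipedia misconduct list)
        "Physics": (-200, 3),           # ~200 BC
        "Social Sciences": (1650, 11),
    }

    NOW = 2024  # assumption for the present year
    for field, (established, incidents) in disciplines.items():
        history = NOW - established     # years since the field was established
        print(f"{field}: {history}-year history, "
              f"one incident per {history / incidents:.0f} years")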


I believe high-profile academic fraud to be worse for the world than financial fraud, even.

Financial fraud misappropriates one sum of money. Academic fraud can misappropriate massive amounts of resources when decision-makers rely on bullshit ideas.


Ethicists and especially bio-ethicists are the least ethical people on Earth. "Happiness experts" are the most miserable people on Earth. etc. etc. etc...



Hmmm... her best move might have been to write up the data-fudging she did, but in the form of a proposal for an experiment, FedEx it to herself, and put it, still in a sealed envelope, in a safe-deposit box.

What was the experiment? What happens when you attempt to deceive your RAs, colleagues, and reviewers about the results of the experiment! After all, she has a Ph.D. in deceit--what else is she supposed to be studying?

If and when somebody started to suspect, she could just retrieve the FedEx envelope, dated and sealed before the results were submitted for review, as proof that this was all just one giant experiment, congratulate everybody on doing such a good job of preserving the integrity of the field, etc., etc.

Frankly, after reading the school's report, I'm amazed that somebody with a Ph.D. in deceit-ology couldn't come up with a better line about how the data was corrupted. Somebody stole my password? Not even my wife would believe that one if she found compromising chats on my phone--and that's without the benefit of a Ph.D.



Fake it until you make it!


It reminds me of Fullmetal Alchemist's Shou Tucker.


Once they start giving TED talks and such, they are not researchers anymore. They are highly educated influencers who want to stay in the limelight. And this brings its own set of bad incentives; staying "prolific" and, eventually, dishonesty are among them. In the same field, see what is happening with Jordan Peterson.


In my experience most/all academics crave a certain type of attention. It is an occupation where you get promoted in part by how famous you are (how many citations your publications have, which conferences you attended, etc.).


Can someone link to the 1,300-page ruling?


The choice of title has an infuriating amount of wasted potential


Plot twist - he was doing his job; it was just a dishonesty experiment. /s


As an expert in dishonesty - literally a corporate espionage contractor - I would be very sad if anyone were to actually expect me to not commit misconduct...

Things in the academic world are a little different, I see. Strange.


Harvard will come out on top of all this because they can easily pivot to being a clown college.



