"I just discovered that some one, or my company/institution, on which my livelihood currently depends, is engaged in fraudulent/immoral/illegal/unethical activies."
The next question is "Now what do I do?" It ranges from report it and leave immediately, to let the leadership know you are there to help. When I was at Intel I discovered that someone within Intel was selling memory chips that had 'binned out' (failed one of the edge cases tested post production) on the grey market. I took the path of reporting it, which ended up in this employee's dismissal. My manager at the time talked with me and said that while he admired my integrity he wondered if I understood the risk I had taken. We talked about it, and the number of people that had to be working together in order for this little scheme to work, and they only fired one guy. The implication was that there were still an unknown number of people at the company who knew (or suspected) that I had interrupted their gravy train. At least one of them had to be reasonably high up the management chain. So now I likely had 'enemies' where before I was just an unknown.
You might think, "Wow, are people really thinking like that?" and my experience suggests that yes, they really do.
So consider that the consequences of being the "good guy" can be just as painful for you, in a different way, as they are for the "bad guy." It isn't an easy choice.
If it transpires that your advisor has fabricated data, your own achievements immediately become suspect, and funding agencies, journal reviewers, and future employers may treat you as collateral damage.
I mean, what might another dissertation committee member say to counter lack of support from her advisor? And how likely is it that any of them would say it?
Also, it's not obvious why Tufts is at fault.
Without such a process, all this stuff comes back in spades. With such a process, you still have a struggle but at least you have a chance.
Also it's why journalists need strong protections when reporting this stuff. etc.
Actually blowing the whistle is generally not in your best interest. Most people only do that when there really is no other option to protect themselves.
The worst part is that if you have genuine evidence on someone with actual power, you're better off keeping it in your pocket rather than reporting them. As in chess, "The threat is often more effective than its actual execution."
So unethical behavior needs to be stopped early on, before it is a firable offense. But that requires a change where ethical people can feel safe reporting and also not feel like they are going to ruin someone's career for something small (I think this is also a key component).
So how do you enable responsible reporting given these kinds of problems, even if we set aside the unethical behavior that already exists?
To add to that: you may, however, also have made some people think really highly of you, especially those who value honesty and loyalty. That could be worth a lot as well. Not always enough, but sometimes a huge win.
This seems pretty alarming.
They have built an ivory tower that has nothing to do with capitalism.
Imagine this story the other way around - PhD candidate accused by respectable researcher of making up data...the accuser would have been held out as a hero.
This has nothing to do with capitalism.
Odd how the whistleblower is finding herself in the same position as if she actually committed the fraud.
Yes, this has everything to do with capitalism. The optimal situation for science is when research institutions are surrounded by riches, i.e. capitalism, but science itself is not too constrained by money.
Capitalism does not reward honesty. It rewards ambition and greed. Today's universities faithfully reflect that.
I did confront the perpetrator privately, and his response was that the results I was seeing were from a different version of the software than the one that produced the reported results. This seemed implausible to me for reasons that are too complicated to recount here, but again, there was no way to prove it.
On the plus side, that line of research was quietly abandoned and so I may have had the intended effect. The results are still on the record, but no one really cares any more.
The professor still works there, but we have no idea whether the professor was censured in some way short of firing, or whether the investigation failed to produce the hard proof the university felt it needed to fire the professor without facing a wrongful-termination suit or other legal ramifications. If the university continued to employ the professor despite proof that they had falsified data, that would speak poorly of Tufts, but it isn't something the complainant should somehow be financially compensated for.
The article reveals no evidence that the complainant's failure to get a position afterwards was the result of any defamation or effort by either Tufts or the accused professor. Right or wrong, knowing that you have previously filed such a complaint will make most employers avoid hiring you. Why take on the slightest risk when there are other candidates who don't present it?
Maybe there is more that has not been revealed, but what is in the article doesn't seem to support a financial payout to the complainant.
Many students are still taught, in 2019, that the appropriate way to conduct a multiple linear regression analysis is to:
1. Fit univariate comparisons to all your explanatory variables of interest
2. Take the significant variables and put them in a multiple regression analysis
3. Report the p-values from the regression analysis as valid and meaningful.
This is incorrect for several reasons (type 1 error rate inflation, confusion about marginal and conditional effects etc.), but people still do it. That's before we even account for publication bias, in which null results are often "shelved" or not accepted for publication.
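The type 1 error inflation from univariate screening is easy to see in a simulation. This is my own illustrative sketch, not from the thread: twenty pure-noise predictors, screening each one at p < 0.05, then a multiple regression on the survivors. Even though every null hypothesis is true, the final model reports a "significant" effect in far more than 5% of runs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, alpha, runs = 100, 20, 0.05, 500

def slope_pvalues(X, y):
    """Two-sided p-values for each slope in OLS of y on X (with intercept)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = len(y) - Xd.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    t = beta / se
    return 2 * stats.t.sf(np.abs(t), dof)[1:]  # drop the intercept's p-value

false_positive_runs = 0
for _ in range(runs):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)  # y is pure noise: every predictor is null
    # Step 1: univariate screening at alpha
    keep = [j for j in range(p) if slope_pvalues(X[:, [j]], y)[0] < alpha]
    # Steps 2-3: multiple regression on survivors, report its p-values
    if keep and (slope_pvalues(X[:, keep], y) < alpha).any():
        false_positive_runs += 1

rate = false_positive_runs / runs
print(f"runs reporting a 'significant' effect: {rate:.2f}")  # far above the nominal 0.05
```

With 20 independent null predictors, the chance at least one survives screening is roughly 1 - 0.95^20 ≈ 0.64, and survivors usually stay "significant" in the joint fit, so the reported p-values are badly anti-conservative.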
However, I do not agree that this is the most common problem. Ten years as a professional have shown me that the real issue is that 95% of people using statistics don't have a single clue about how the techniques they use daily really work.
It's especially alarming given the booming machine learning fad.
But all that said - the amount of outright fabricated data is pretty small. Many people make mistakes or overstate the meaning of their data, but the number of people who flat-out fake their data is really small. And institutions are more likely to lose grant money if they turn a blind eye than if they self-regulate.
That is far too cautious a wording.
She reported her advisor for fabricating data, not for allegedly fabricating data. You can't report someone for allegedly doing something.
In doing so, she's alleging her adviser fabricated data.
The more you convert academic science into just another industry (i.e. constrain PIs by way of funding), the more people you'll have who behave just as they do in other industries.
Tufts University seems to have some problems. Sounds like a pretty shitty university.
You're starting from the assumption that the allegations are true. But if you're inside the field, you know the professor, you go to conferences with him... At some point, the department has to decide whether it wants to hire a troublemaker, and then actually go to bat for them with the administration, which will look at the hire as a business risk.
This isn't a troublemaker, this is a proper academic who knows the importance of protocol and scientific process.
I've gotten a lot of downvotes, and I'm not really sure why. This isn't my opinion of how things should be; it's based on what I observed in university. I knew a guy who tried suing the university for messing up his graduate education -- life did not turn out well for him. The highest paid state employees are college football coaches, because they bring in the most money. Funds that used to support professors are pouring into middle management and image consultants. Modern universities are run with all of the techniques and moral flexibility of multinational corporations, and no amount of downvotes will change that.
This only matters if the outside cares very deeply about that, which it doesn't seem to, not enough to outweigh the benefit of not having to deal with incidents in the first place. You have to consider the real opportunity cost: signalling is less effective than silence.