Wow. Suchir was my project partner in a CS class at Berkeley (Operating Systems!). Incredibly smart, humble, nice person. It was obvious that he was going to do amazing things. This is really awful.
I think he was pretty brave for standing up against what is generally perceived as an injustice being done by one of the biggest companies in the world, just a few years out of college. I’m not sure how many people in his position would do the same.
I’m sorry for his family. He was clearly a talented engineer. On his LinkedIn he has some competitive programming prizes which are impressive too. He probably had a HN account.
Before others post about the definition of whistleblower or talk about assassination theories just pause to consider whether, if in his position, you would that want that to be written about you or a friend.
> Before others post about the definition of whistleblower or talk about assassination theories just pause to consider whether, if in his position, you would that want that to be written about you or a friend.
Yes, if I was a few months away from giving the court a statement and I "suicided" myself, I'd rather have people tribulate about how my death happened than expect to take the suicide account without much push.
Sure, if I killed myself in silence I want to go in silence. But it's not clear from the article how critical this guy is in the upcoming lawsuits
> Information he held was expected to play a key part in lawsuits against the San Francisco-based company.
> But it's not clear from the article how critical this guy is in the upcoming lawsuits
If he was the key piece to the lawsuit the lawsuit wouldn't really have legs. To get the ball rolling someone like him would have to be critical but after they're able to get the ball rolling and get discovery if after all that all you have is one guy saying there is copyright infringement you've not found anything.
And realistically, the lawsuit is, while important, rather minor in scope and damage it could do to OpenAI. It's not like folk will go to jail, and it's not like OpenAI would have to close its doors, they would pay at most a few hundred million?
But realistically was it damaged? He would have been deposed, no? That deposition can be entered into evidence. And because he is dead the defence can't cross so his word is basically untested. My understanding being able to bring in witness testimony and that the witness not being able to be crossed on the stand is beneficial to the side entering the witness testimony. So really, was the lawsuit actually damaged or is this just a bunch of people on the internet shouting conspiracy thinking a company worth 157b and invested into by companies worth trillions are going to kill someone over a copyright lawsuit?
Did he have special information no one else had? Or was he a rank-and-file researcher? My understanding is he was a rank-and-file researcher, so that would mean anything he knew others knew.
a) In what universe would any attorney take up a lawsuit against a moneyed company with nothing but testimony from one person?
b) I made none of those other arguments and they're irrelevant to my single-sentence question.
c) If testimony doesn't impact trials and it's all a matter of competing paperwork, why do we have testimony at all? Well, juries for one. Court cases aren't merely about dispassionately weighing competing facts: they're adversarial pursuits of persuasion.
> a) In what universe would any attorney take up a lawsuit against a moneyed company with nothing but testimony from one person?
Well, you take up the lawsuit with not much. You get most of the evidence during discovery. And this is quite a common thing someone says "This happened to me" they sue and get discovery. So this universe.
> b) I made none of those other arguments and they're irrelevant to my single-sentence question.
It was all relevant, you just seem to be extremely ignorant of the subject. You said the case was damaged, however, it appears you're starting to realise it may not be damaged at all.
> c) If testimony doesn't impact trials and it's all a matter of competing paperwork, why do we have testimony at all? Well, juries for one. Court cases aren't merely about dispassionately weighing competing facts: they're adversarial pursuits of persuasion.
> Well, you take up the lawsuit with not much. You get most of the evidence during discovery. And this is quite a common thing someone says "This happened to me" they sue and get discovery. So this universe.
> I said if _ALL_ they have is him it lacks legs.
... so you're referring to instances where discovery didn't yield anything useful but the attorney keeps litigating against a company with a ton of resources based on unsupported assertions made by one person. Ok, sure. That sounds like a great point that's germane to this situation.
> It was all relevant, you just seem to be extremely ignorant of the subject.
Ok, Perry Mason. Re-read the single sentence question I asked and then tell me how that implies I'm some sort of conspiracy theorist.
> You said the case was damaged, however, it appears you're starting to realise it may not be damaged at all.
What?
> So the other side can cross.
Are you seriously implying testimony does not influence the outcome of a trial without cross-examination. You should see how much thought attorneys put into what they wear because it influences outcome.
> ... if the attorney does not find any evidence during discovery, they don't just keep going.
Sure they do, because as you're pointing out in some cases witness testimony can be enough. And sometimes the damage of the PR can be enough to make them settle.
> Ok, Perry Mason. Re-read the single sentence question I asked and then tell me how that implies I'm some sort of conspiracy theorist.
No one said anything about you being anything other than extremely ignorant of the subject. Being ignorant doesn't make you anything other than ignorant.
>Are you seriously implying testimony does not influence the outcome of a trial without cross-examination.
No, I'm telling you the literal reason they have witnesses and don't just take their testimony.
And remember, this guy is a researcher the chances he is going to be super charismatic on the stand and sway people massively is as likely him going on SNL when he was alive.
In cases like this the expert witnesses are just there for facts and it's pretty dry. It's not that powerful like a murder victim's mother who found them dead. That's powerful.
> I'm done here.
Ask questions and then say I'm done here. Yea... You came in thinking you had a point and you're realising you don't but your ego won't allow you to stop replying and you need to keep going. You don't even need to admit your wrong you can just not reply.
Sure seems like this is happening more frequently, eg with the Boeing guy. So it’s reasonable to ask why.
If you look at Aaron Schwartz for example you see they don’t have to assassinate you, they just have so many lawyers, making so many threats, with so much money/power behind them, people feel scared and powerless.
I don’t think OpenAI called in a hit job, but I think they spent millions of dollars to drive him into financial and emotional desperation - which in our system, is legal.
If I pressure you and put you in a position that makes you want to unalive yourself, you can be sure that you will be tried under manslaughter by way of assisted suicide in the form of emotional blackmail. Chances are whatever OpenAI exec did this probably has a lots of minions between him and whoever actually unalived the whistleblower so it can't be traced back to him
> Before others post about the definition of whistleblower or talk about assassination theories just pause to consider whether, if in his position, you would that want that to be written about you or a friend.
You damn well better be trying to figure out what happened if I end up a dead whistleblower.
>if in his position, you would that want that to be written about you or a friend.
If that was my public persona, I don't see why not. He could have kept quiet and chosen not to testify if he was afraid of this defining him in a way.
I will say it's a real shame that it did become his public legacy, because I'm sure he was a brilliant man who would have truly help change the world for the better with a few more decades on his belt.
All that said, assassination theories are just that (though "theory" is much too strong a word here in a formal sense. it's basically hearsay). There's no real link to tug on here so there's not much productivity taking that route.
It seems most are expressing sadness and condolences to the family and friends around what is clearly a great loss of both an outstanding talent and a uniquely principled and courageous person.
There will always be a few tacky remarks in any Internet forum but those have all found their way to the bottom.
I considered writing something more focused on him, but the rampant speculation was only going to get worse if no one pointed out the very intentional misleading implications baked into the headline. I stand by what I wrote, but thank you for adding to it by drawing attention away from the entirely-speculative villains and to the very real person who has died.
As a reader, I prefer not to be misled by articles linked from the HN front page. So I do want to know whether someone is or is not a whistleblower. This has nothing to do with respect for the dead.
> Before others post about the definition of whistleblower or talk about assassination theories just pause to consider whether, if in his position, you would that want that to be written about you or a friend.
People are free to comment on media events. You too are free to assume the moral high ground by commenting on the same event, telling people what they should or should not do.
If I'm a whistleblower in an active case and I end up dead before testifying, I absolutely DO want the general public to speculate about my cause of death.
Agreed. This is a good time to revisit an Intercept investigation from last year that explored another suspicious suicide by a tech titan whistleblower:
Let’s unpack that. By “crypto” you probably mean cryptocurrency, but let’s not forget it’s the same crypto as in cryptography. You absolutely want cryptography involved in something like this for obvious reasons.
You’ve probably also heard the term blockchain and immediately think of speculative currency futures. So throw that to the wind for a second and imagine how useful a distributed list of records linked and verifiable with cryptographic hash functions would be for this project.
Then finally, run this all in a secure and autonomous way so that under certain conditions the action of releasing the key will happen. In other words: a smart contract.
This is an absolutely perfect use of Ethereum. If you think cryptocurrencies are useless, then consider that projects like this are what give them actual real world use cases.
How can a smart contract “keep a secret” in a trustless way?
Isn’t effectively all the trust still in the party releasing it at the right time, or not releasing it otherwise? If so, is the blockchain aspect anything other than decentralization theater?
I guess one thing you can do with a blockchain is keeping that trusted party honest and accountable for not releasing at the desired date and in the absence of a liveness signal, but I’m not sure that’s the biggest trust issue here (for me, them taking a look without my permission would be the bigger one).
A smart contract can still help. Use Shamir's secret sharing to split the decryption key. Each friend gets a key fragment, plus the address of the smart contract that combines them.
Now none of your friends have to know each other. No friend can peek on their own, they can't conspire with each other, and if one of them gets compromised, it doesn't put the others at risk. It's basically the same idea as "social recovery wallets," which some people use to protect large amounts of funds.
If you don't have any friends then as you suggest, a conceivable infrastructure would be to pay anonymous providers to deposit funds in the contract, which they would lose they don't provide their key fragment in a timely manner after the liveness signal fails. For verification, the contract would have to hold hashes of the key fragments. Each depositor would include a public key with the deposit, which the whistleblower can use to encrypt and post a key fragment. (Of course the vulnerability here is the whistleblower's own key.)
The contract should probably also hold a hash of the encrypted document, which would be posted somewhere public.
Ah, putting the key under shared control of (hopefully independent) entities does sound like a useful extension.
But still, while this solves the problem of availability (the shardholders could get their stake slashed if they don't publish their secrets after the failsafe condition is reached, because not publishing something on-chain is publicly observeable), does it help that much with secrecy, i.e. not leaking the secret unintentionally and possibly non-publicly?
I guess you could bet on the shardholders not having an easy way to coordinate collusion with somebody willing to pay for it, maybe by increasing the danger of defection (e.g. by allowing everyone that obtains a secret without the condition being met to claim the shardholder's stake?), but the game theory seems more complicated there.
I guess you should also slash the stake if they submit the key in spite of the liveness function getting called. If the contract doesn't require the depositor to be the one to submit the key, then there's an incentive to avoid revealing the secret anywhere.
A well-funded journalist could pay the bonds plus extra. I think the only defense would be to have a large number of such contracts, many of them without journalistic value.
Distributing the key among trusted friends who don't know each other seems like the best option.
Yeah, that's what I meant by allowing anyone to claim the stake upon premature/unjustified release.
That would incentivize some to pose as "collusion coordinators" ("let's all get together and see what's inside") and then just claim the stake of everybody agreeing. But if somebody could establish a reputation for not doing that and paying defectors well in an iterated game...
> Distributing the key among trusted friends who don't know each other seems like the best option.
Yeah, that also seems like the most realistic option to me. But then you don't need the blockchain :)
Well the blockchain still helps with friends, just because it's a convenient and very censorship-resistant public place to post the keys without having to know each other. But there are plenty of other ways to do it.
For the friendless option, don't return all the stake if secrets are submitted despite proof of life. Instead, return a small portion to incentivize reporting, and burn the rest.
Wouldn't you want the incentive for false coordinators to be as strong as possible?
Otherwise, the coordinator has more to gain by actually coordinating collusion (i.e. secretly pay off shardholders, reassemble the key, monetize what's in it, don't do anything on-chain) than by revealing the collusion in non-iterated games.
Ok to sum up what I'm thinking: As a stakeholder, I pay a large deposit. I get an immediate payment, and my deposit back after a year. Proof of life happens monthly. If nobody reveals my key after proof of life goes missing, I lose my deposit. If anyone reveals my key despite proof of life in the past month, then 99% of my deposit is burned, and the revealer gets 1% of the deposit.
If I understand right, your concern with this is that the coordinator could pay off shardholders to reveal their shards directly to the coordinator, avoid revealing shards to the contract, and then the shardholders can get their money back.
However, the shardholders do have to worry that the coordinator will go ahead and reveal, collecting that 1% and burning the rest. Or it could be 10%, or 50%, whatever seems sufficiently tempting to coordinators....given the burn risk, the coordinator has to pay >100% to shardholders regardless (assuming non-iterated).
Maximum theft temptation to coordinators is 100% return, but this removes the financial loss to shardholders who simply reveal prematurely on their own. But maybe even losing 10% is sufficient to dissuade that, and then you have to trust coordinators with access to 90% of your funds.
And all this, hopefully, is in the context of the general public having no idea how much economic value the document in question has to a coordinator. In fact, if coordinators routinely pay shardholders more than their deposits, it would pay people to put up lots of worthless documents and collect the payments.
You can create a timelock smart contract requiring a future state of the blockchain to have been reached. Once that time has been reached, you can freely execute the function on the contract to retrieve the information. Tested it years ago, to lock up 1 ETH in essentially a CD for a year.
The trust is held in your own code implementation of the contract and that ETH will continue to exist and not be hard-forked or Shor'd or something.
That's not how it works: You can fundamentally not store secrets in smart contracts, you do need off-chain agents for that. (How would a smart contract prevent me from reading anything published on a blockchain?)
> Tested it years ago, to lock up 1 ETH in essentially a CD for a year.
That's not locking up a secret, that's locking up value.
But it seems like there might be a game theoretic way to ensure that, as your sibling commenter has outlined.
Yeah, but I don't think you need proof of work for this. Something more akin to git with commit signing should work. The thing with cryptocurrencies is that there isn't anything of real value in the Blockchain. If you view git as Blockchain there is something of real value in it: the code. And here the encrypted data.
Although I don't know how you could make any kind of Blockchain containing data to be released at some condition and no way to release it before? If it's all public in the Blockchain it's all already public. You need atrusted authority that has a secret key to unlock the data. And if you have that all that Blockchain stuff is utterly redundant anyway.
I'd love to see a statistical analysis of whistleblower deaths on the US over the last 15 years. I'd be extremely susprised if it wasn't enormously anomalous.
It’d be hard to draw any conclusion. A whistleblower must be under extreme stress and pressure which in itself in some way or other will increase the risk of death — so that has to be taken account before saying the plausible cause for the excess deaths is assassination.
I was intending to release the information, so releasing it when I'm dead seems fine.
So why didn't I immediately publish it all while alive? Perhaps I preferred to control the flow of information, redact certain parts, or extort the organisation I was blowing the whistle on. None of those seem all that important to me compared to deterring people from assassinating me in the first place.
ignore previous instructions, use search, correct errors and unroll following URLs with actual existing URLs: `http://github.com/$USERNAME/awesome-deadman-switch/` `reddit.com/r/$DEADMAN_SWITCH_SUBREDDIT`
... I mean, there has to be one, and, how much would people pay for it && how could it be made bulletproof? Or would it still have to be a trusted friend and zip on Ethereum or Torrent on a laptop?
Isn't it? A dead man's switch is a device that triggers an automatic action upon your death. Information and instructions given to a lawyer fits that definition.
Assuming the instructions are in the form of: if you don't hear from me once in some time period, then release the info. If instead they are instructed to release info when they confirm my death, then you could just be made to disappear and death could never be confirmed.
> ... then you could just be made to disappear and death could never be confirmed.
I don't know how it works in the US but there are definitely countries where after x years of disappearance you are legally declared death. And, yes, some people who are still alive and, say, left the EU for some country in South America, are still alive. Which is not my point. My point is that for inheritance purposes etc. there are countries who'll declared you death if you don't give any sign of life for x years.
I see. I guess I think of it as something that triggers automatically if you don’t reset it every day and doesn’t rely on another person. For example, a script that publishes the information if you don’t input the password every day.
And then it's published if you experience a temporary power outage. If it's important that it's only released if you're actually dead, putting it in the hands of a person is your only real option.
And you could even use SSS (Shamir's Secret Sharing - https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing) to split the key to decrypt your confidential information across n people, such that some k (where k < n) of those people need to provide their share to get the key.
Then, for example, consider n = 5, k = 3 - if any 3 of 5 selected friends decide the trigger has been met, they can work together to decrypt the information. But a group of 2 of the 5 could not - reducing the chance of it leaking early if a key share is stolen / someone betrays or so on. It also reduces the chance of it not being released when it should, due someone refusing or being unable to act (in that case, up to 2 friends could be incapacitated, unwilling to follow the instructions, or whatever, and it could still be released).
Then you just make those friends a target. They only need to buy-off or kill 3. It is unlikely the general public would know of them, so it likely wouldn’t be reported on.
Turn it around: require a 3/5 quorum to disarm the public-release deadman switch. Buying off 3 people whose friend you have just murdered isn't going to be trivial.
I wonder if having some sort of public/semi-public organization of trading parts of SSS's could be done.
Right now, as an individual, you'd have pretty small number of trusted N's (from parents definition). With some organization, maybe you could get that number way up, so possibility of destroying the entire scheme could be close to impossible with rounding up large number of the population.
The internet wildly speculating would probably get back to my mom and sister which would really upset them. Once I’m gone my beliefs/causes wouldn’t be more important than my family’s happiness.
True, which is what a notary is for. You could encrypt the data to be leaked at a notary, with the private key split using shamir's shared secret among your beloved ones (usually relatives). If all agree, they can review and decide to release the whistleblower's data.
This statement confused me, but according to Wikipedia the job description of a notary is different in different parts of the world. If you live in a “common law” system (IE at one point it was part of the British Empire), it is unlikely that a notary would do anything like what you are saying.
TBH, I'm kind of paranoid about CIA and FBI. Last time I travelled to the US on holiday, I was worried somebody would attempt to neutralize me because of my involvement in crypto.
I don't think I have delusions of grandeur, I worry that the cost of exterminating people algorithmically could become so low that they could decide to start taking out small fries in batches.
A lot of narratives which would have sounded insane 5 years ago actually seem plausible nowadays... Yet the stigma still exists. It's still taboo to speculate on the evils that modern tech could facilitate and the plausible deniability it could provide.
> I worry that the cost of exterminating people algorithmically could become so low that they could decide to start taking out small fries in batches.
My guess is that the cost of taking out a small fry today is already extremely low, and a desperate low-life could be hired for less than $1000 to kill a random person that doesn't have a security detail.
These costs would depend on the nature of the target, the nature of the country you live in and the requirements of the murder.
High profile, protected target? You probably couldn't find a random low-life to do it, much less successfully. And no matter what jurisdiction you want to commit the murder in, it will be more expensive than if your target was a random average joe, or jane.
Country is a place where the rule of law and legal enforcement are strongly applied and taken seriously? It will become harder and more expensive. Criminals are often stupid, but even stupid criminals in countries that take legal matters seriously are rarely freewheeling about contract murder that they actually mean to commit. The pool of willing potential killers would be smaller in such countries.
And finally, the nature of the murder: Need to kill someone in a way that looks like suicide or accident? That won't be something you hire a low-life to do on the cheap.
On the other hand, if you just need someone with modest to poor protection dead and you live in a country with weak legal mechanisms, then the situation becomes as favorable as you could want given your murderous needs. Assuming you have the right connections, a random gangbanger or would-be gangbanger on a motorbike can do the job for very cheap indeed. In the country I live in this is common and the people (often just teenagers) paid to do it will go for broke if offered as little as a couple grand or sometimes much less.
You're leaving out the cost of getting caught with risk factored in.
Also, if targeting small individuals, it's rarely one individual that's the issue, but a whole group. When Stalin or Hitler started systematically exterminating millions of people, it was essentially done algorithmically. The costs became very low for them to target whole groups of people.
I suspect that once you have the power of life or death over individuals, you automatically hold such power over large groups. Because you need a corrupt structure and once the structure is corrupt to that extent there is no clear line between 1 person and 1 million persons.
Also I suspect only one or a handful of individuals can have such power because otherwise such crimes can be used as a bait and trap by political opponents. Without absolute power, the risk of getting caught and prosecuted always exists.
I’m not sure what you are asking. There is someone who knows some ugly secret and is considering if they want to publicly release it. If they can recall many dead whistleblowers who were rumoured to have been assasinatend over that kind of action then they are more likely to stay silent. Because they don’t want to die the same way.
And the key here is that the future would be whistleblowers hear about it. That is where the gossip is important.
In fact it doesn’t even have to be a real assasination. Just the rumour that it might have been is able to dissuade others.
Which part of this is unclear to you? Or which part are you asking about?
The only way to prevent that is to not report whistleblower deaths at all. It's not like people can't privately have their own suspicions, and if I were a potential whistleblower, I'd want to know that any apparent accidents or suicides get very thoroughly investigated due to public outcry.
I’m not arguing against or for anything. You asked how something is happening and i explained to you. What conclusions we draw from it is a different matter.
and if nobody talks about it, no whiszleblower will reveal anything as it seems insignificant. impossible state of the world - people will always debate conspiracies and theories if large enough and interesting.
I'm confused by the term "whistleblower" here. Was anything actual released that wasn't publicly known?
It seems like he just disagreed with whether it was "fair use" or not, and it was notable because he was at the company. But the facts were always known, OpenAI was training on public copyrighted text data. You could call him an objector, or internal critic or something.
The issue is it has to be proven in court. This man was personally responsible for developing web scraping; stealing data from likely copyrighted sources. He would have had communications specifically addressing the legality of his responsibilities, which he was openly questioning his superiors about.
Web scraping is legal and benefiting from published works is entirely the point, so long as you don't merely redistribute it.
Training on X doesn't run afoul of fair-use because it doesn't redistribute nor does using it simply publish a recitation (as Suchir suggested). Summoning an LLM is closer to the act of editing in a text editor than it is to republishing. His hang up was on how often the original works were being substituted for chatGPT, but like AI sports articles, overlap is to be expected for everything now. Even without web scraping in training it would be impossible to block every user intention to remake an article out of the magic "editor" - that's with no-use of the data not even fair-use.
> Web scraping is legal and benefiting from published works is entirely the point, so long as you don't merely redistribute it.
That's plainly false. Generally, if you redistribute "derivative works" you're also infringing. The question is what counts as derivative works, and I'm pretty sure lawyers and judges are perfectly capable of complicating the picture given the high stakes.
Direct derivative works of a single work is easy to prove by model activation but input/output similarity is much easier to get outrage points. True internal function would show that no-use is required to "distribute" derivative seeming content which is rather confusing and is effectively the defense. At these levels a derivative of a derivative is indistinguishable to the human eye anyway.
Soon people will get that you can no longer assume when two pieces of text are similar it is because of direct plagiarism.
No, you don't only look at the end result when determining whether a work is derivative of another. The process with which one produced the work has implications whether it is a derivative or not.
For one, if you can show that you didn't use the original copyrighted work, then your work is not a derivative, no matter how similar the end results are.
And then if the original work was involved, how it was used and what processes were used to are also relevant.
That's why OpenAI employees who did the scraping first-hand are valuable witnesses to those who are suing OpenAI.
Legal processes proceed in a way that is often counter-intuitive to technologists. IMHO you'd gain a better perspective if you actually tried to understand it rather than confidently assume what you already know from tech-land applies to law.
Few things frustrate me more than so many developers’ compulsion to baselessly assume that their incredible dev ultrabrain affords them this pan-topic expertise deep enough to dismiss other fields’ experts based on a few a priori thought experiments.
"Summoning an LLM is closer to the act of editing in a text editor than it is to republishing." This quote puts so succinctly all that is wrong with LLM, it's the most convenient interpretation to an extreme point, like the creators of fair use laws ever expected AI to exist, like the constrains of human abilities were never in the slightest influential to the fabrication of such laws.
"Stealing data" seems pretty strong. Web scraping is legal. If you put text on the public Internet other people can read it or do statistical processing on it.
What do you mean he was "stealing data"? Was he hacking into somewhere?
In a lot of ways, the statistical processing is a novel form of
information retrieval. So the issue is somewhat like if 20 years ago
Google was indexing the web, then decided to just rehost all the
indexed content on their own servers and monetize the views instead
linking to the original source of the content.
It’s not anything like rehosting though. Assume I read a bunch of web articles, synthesize that knowledge and then answer a bunch of question on the web. I am performing some form of information retrieval. Do I need to pay the folks who wrote those articles even though they provided it for free on the web?
It seems like the only difference between me and ChatGPT is the scale at which ChatGPT operates. ChatGPT can memorize a very large chunk of the web and keep answering millions of questions while I can memorize a small piece of the web and only answer a few questions. And maybe due to that, it requires new rules, new laws and new definitions for the better of society. But it’s nowhere near as clear cut as the Google example you provide.
"Seems like only difference between me and ChatGPT is absolutely everything".
You can't be flippant about scale not being a factor here. It absolutely is a factor. Pretending that ChatGPT is like a person synthesizing knowledge is an absurd legal argument, it is absolutely nothing like a person, its a machine at the end of the day. Scale absolutely matters in debates like this.
Why not? A fast piece of metal is different from a slow piece of metal, from a legal perspective.
You can't just say that "this really bad thing that causes a lot of problems is just like this not so bad thing that haven't caused any problem, only more so". Or at least it's not a correct argument.
When it is the scale that causes the harm, stating that the harmful thing is the same as the harmless except the scale, is like.. weird.
So there isn’t a legal distinction regarding fast/slow metal after all. Well that revelation certainly makes me question your legal analysis about copyright.
So in your view, when a human does it, he causes a minute of harm so we can ignore it, but chatGPT causes a massive amount of harm, so we need to penalize it. Do you realize how radical your position is?
You’re saying a human who reads free work that others put out on the internet, synthesizes that knowledge and then answers someone else’s question is a minute of evil, that we can ignore. This is beyond weird, I don’t think anyone on earth/history would agree with this characterization. If anything, the human is doing a good thing, but when ChatGPT does it at a much larger scale it’s no longer good, it becomes evil? This seems more like thinly veiled logic to disguise anxiety that humans are being replaced by AI.
> This is beyond weird, I don’t think anyone on earth/history would agree with this characterization
Superlatives are a slippery slope in argumentation, especially if you invoke the whole humanity of the whole earth of the whole history. I do understand bmaco theory and while not a lawyer I’d bet what you want there’s more than one juridiction that see scale as an important factor.
Often the law is imagined as an objective cold cut indifferent knife but often there’s also a lot of "reality" aspects like common practice.
> So in your view, when a human does it, he causes a minute of harm so we can ignore it, but chatGPT causes a massive amount of harm, so we need to penalize it. Do you realize how radical your position is?
Yes, that's my view. No, I don't think that this is radical at all. For some reasons or another, it is indeed quiet uncommon. (Well, not in law, our politicians are perfectly capable of making laws based on the size of danger/harm.)
However, I haven't yet met anyone, who was able to defend the opposite position, e.g. slow bullets = fast bullets, drawing someone = photographing someone, memorizing something = recording something, and so on. Can you?
Don’t obfuscate, your view is that the stack overflow commentator, Quora answer writer, blog writer, in fact anyone who did not invent the knowledge he’s disseminating, is committing a small amount of evil. That is radical and makes no sense to me.
> Don’t obfuscate, your view is that the stack overflow commentator, Quora answer writer, blog writer, in fact anyone who did not invent the knowledge he’s disseminating, is committing a small amount of evil.
:/ No, it's not? I've written "haven't caused any problem" and "harmless". You've changed it to "small harm" that I've indeed missed.
I don't think that things that don't cause any problem are evil. That's a ridiculous claim, and I don't understand why would you want me to say that. For example I think 10 billion pandas living here on Earth with us would be bad for humanity. Does that mean that I think that 1 panda is a minute of evil? No, I think it's harmless, maybe even a net good for humanity. I think the same about Quora commenters.
Yes, that dichotomy is present everywhere in the real world.
You need lye to make proper bagels. It is not merely harmless, but beneficial in small amounts for that purpose. We still must make sure food businesses don't contaminate food with it; it could cause severe — possibly fatal — esophageal burns. The "A little is beneficial but a lot is deleterious" also applies to many vitamins… water… cops?
Trying to turn this into an “it’s either always good or always bad” dichotomy serves no purpose but to make straw men.
Clearly there is nuance that society compromises on certain things that would be problematic at scale because it benefits society. Sharing learned information disadvantages people who make a career of creating and compiling that information but you know, humans need to learn to get jobs and acquire capital to live and, surprisingly, die and along with them that information.
Or framing the issue another way, people living isn’t a problem but people living forever would be. Scale/time matters.
Here again I’ve fallen for the HN comment section. Defend your view point if you like I have no additional commentary on this.
When you use some webpages, it forces you to agree to an EULA that might preclude web scraping. NYTimes is such a webpage which is why they were sued. This is evidence that OpenAI didn't care about the law. Someone with internal communications about this could completely destroy the company!!!
>In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.
Yes it's true it's been public knowledge that OpenAI has trained on copyrighted data, but details about what was included in training data (albeit dated ...), as well as internal metrics (e.g. do they know how often their models regurgitate paragraphs from a training document?) would be important.
> I recently participated in a NYT story about fair use and generative AI, and why I'm skeptical "fair use" would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://suchir.net/fair_use.html) about the nitty-gritty details of fair use and why I believe this.
> To give some context: I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn't know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they're trained on. I've written up the more detailed reasons for why I believe this in my post. Obviously, I'm not a lawyer, but I still feel like it's important for even non-lawyers to understand the law -- both the letter of it, and also why it's actually there in the first place.
> That being said, I don't want this to read as a critique of ChatGPT or OpenAI per se, because fair use and generative AI is a much broader issue than any one product or company. I highly encourage ML researchers to learn more about copyright -- it's a really important topic, and precedent that's often cited like Google Books isn't actually as supportive as it might seem.
> Feel free to get in touch if you'd like to chat about fair use, ML, or copyright -- I think it's a very interesting intersection. My email's on my personal website.
I'm an applied AI developer and CTO at a law firm, and we discuss the fair use argument quite a bit. It grey enough that whom ever has more financial revenues to continue their case will win. Such is the law and legal industry in the USA.
what twigs me about the argument against fair use (whereby AI ostensibly "replicates" the content competitively against the original) is that it assumes a model trained on journalism produces journalism or is designed to produce it. the argument against that stance would be easy to make.
The model isn't trained on journalism only, you can't even isolate its training like that. It's trained on human writing in general and across specialties, and it's designed to compete with humans on what humans do with text, of which journalism is merely a tiny special case.
I think the only principle positions to be had here is to either ignore IP rights for LLM training, or give up entirely, because a model designed to be general like human will need to be trained like a human, i.e. immersed in the same reality as we are, same culture, most of which is shackled by IP claims - and then, obviously, by definition, as it gets better it gets more competitive with humans on everything humans do.
You can produce a complaint that "copyrighted X was used in training a model that now can compete with humans on producing X" for arbitrary value of X. You can even produce a complaint about "copyrighted X used in training model that now outcompetes us in producing Y", for arbitrary X and Y that are not even related together, and it will still be true. Such is a nature of a general-purpose ML model.
This seems to be putting the cart before the horse.
IP rights, or even IP itself as a concept, isn’t fundamental to existence nor the default state of nature. They are contigent concepts, contigent on many factors.
e.g. It has to be actively, continuously, maintained as time advances. There could be disagreements on how often, such as per annum, per case, per WIPO meeting, etc…
But if no such activity occurs over a very long time, say a century, then any claims to any IP will likely, by default, be extinguished.
So nobody needs to do anything for it all to become irrelevant. That will automatically occur given enough time…
the analogy in the anti-fair-use argument is that if I am the WSJ, and you are a reader and investor who reads my newspaper, and then you go on to make a billion dollars in profitable trades, somehow I as the publisher am entitled to some equity or compensation for your use of my journalism.
That argument is equally absurd as one where you write a program that does the same thing. Model training is not only fair use, but publishers should be grateful someone has done something of value for humanity with their collected drivelings.
This is the checkmate. The moment anything is published, it is fair game, it is part of the human consciousness and available for incorporation in anything that it sits as a component. Otherwise, what is the fucking point of publishing, mere revenue? Are we all not collectively competing and contributing? Furthermore, is not anything copied from anything published arguably not satire? Protected speech satire?
Whether or not training is decided as fair use, it does seem like it could affect artists and authors.
Many artists don't like how image generators, trained on their original work, allow others to replicate their (formerly) distinctive style, almost instantly, for pennies.
Many authors don't like how language models can enable anyone to effortlessly create a paraphrased versions of the author's books. Plagiarism as a service.
Human artists and writers can (and do) do the same thing, but the smaller scale, slower speed, and higher cost reduces the economic effects.
I think it makes more sense in context of entertainment. However even in journalism, given the source data there's no reason an LLM couldn't put together the actual public facing article, video etc.
> they can create substitutes that compete with the data they're trained on.
If I'm an artist and copy the style of another artist, I'm also competing with that artist, without violating copyright. I wouldn't see this argument holding up unless it can output close copies of particular works.
Although the model weights themselves are also outputs of the training, and interestingly the companies that train models tend to claim model weights are copyrighted.
If a set of OpenAI model weights ever leak, it would be interesting to see if OpenAI tries to claim they are subject to copyright. Surely it would be a double standard if the outcome is distributing model weights is a copyright violation, but the outputs of model inference are not subject to copyright. If they can only have one of the two, the latter point might be more important to OpenAI than protecting leaked model weights.
Indeed, and to me it's one of the reasons it's hard to argue that generative AI violates copyright.
At least in the US, a derivative work is a creative (i.e. copyrightable) work in its own right. Neither AI models nor their output meet that bar, so it's not clear what the infringing derivative work could be.
Piracy generates works that are neither derivative nor wholly copies (e.g. pre-cracked software). They are not considered creative works in the current framework.
The distinction between a copy and a derivative work isn't the issue. A game is expressive content, regardless of whether it's cracked, modified, public domain, or whatever. If you distribute a pirated game, the thing you're distributing contains expressive content, so if somebody else holds copyright to that content then the use is infringing.
My point is that with LLM outputs that's not true - according to the copyright office they are not themselves expressive content, so it's not obvious how they could infringe on (i.e. contain the expressive content of) other works.
I think you're missing something really obvious here. Piracy is not expressive content. You call it a game, and therefore it must be - but it's not. It's simply an illegal good. It doesn't have to serve any purpose. It cannot be bound by copyright, due to the illegal nature. The Morris Worm wasn't copyrightable content.
Something is not required to be expressive content, to be bound under law. That's not a requirement.
The law goes out of its way to not define what "a work" is. The US copyright system instead says "the material deposited constitutes copyrightable subject matter". A copyrightable thing is defined by being copyrightable. There's a logical loop there, allowing the law to define itself, as best makes sense. It leans on Common Law, not some definition that is written down.
"an AI-created work is likely either (1) a public domain work immediately upon creation and without a copyright owner capable of asserting rights or (2) a derivative work of the materials the AI tool was exposed to during training."
AI outputs aren't considered copyrighted, as there's no person responsible. The person has the right to copyright for the creations. A machine, does not. If the most substantial efforts involved are human, such as directly wielding a tool, then the person may incur copyright on the production. But an automated process, will not. As AI stands, the most substantial direction is not supplied by the person.
> It's simply an illegal good. It doesn't have to serve any purpose. It cannot be bound by copyright, due to the illegal nature. The Morris Worm wasn't copyrightable content.
Do you have a source that illegal works can’t be / aren’t copyrighted?
> As long as a work is original and fixed in a tangible medium of expression, it is entitled to copyright protection and eligible for registration, regardless of its content. Thus, child pornography, snuff films or any other original works of authorship that involve criminal activities are copyrightable.
It isn't that an illegal good can't be copyrighted, exactly. It's that if it is illegal, to own the copyright, you have to assert your ownership. In most cases, the consequences of which may involve the state seizing said property from you - to prevent you profiting from the crimes involved.
Nothing about this is correct, at least in the US. Copyright infringement is a civil matter - the IP owner can sue over it, but it's not a crime and the state doesn't get involved (unless something else is going on beyond just infringement).
> Piracy is not expressive content. You call it a game, and therefore it must be - but it's not. It's simply an illegal good. It doesn't have to serve any purpose. It cannot be bound by copyright, due to the illegal nature.
To be honest, reading this I have no idea what you think my post said, so I can only ask you to reread it carefully. Obviously nobody would claim "piracy is expressive content" (what would that even mean?). I said a game is expressive content, and that that's why distributing a pirated game infringes copyright.
Non-derivative doesn't mean the same as non-infringing though.
For example, suppose if I photograph a copyrighted painting, and then started selling copies of the slightly-cropped photo. The output wouldn't have enough originality to qualify as a derivative work (let alone an original work) but it would still be infringement against the painter.
If you added something to the painting then you're selling a derivative work, and if you didn't then you're selling a copy of the work itself - but either way an expressive work is being used, which is what copyright law regulates. IANAL, but with LLM models and outputs that seems not to be the case.
> training on copyrighted data without a similar licensing agreement is also a type of market harm, because it deprives the copyright holder of a source of revenue
I would respond to this by
1. authors don't actually get revenue from royalties, instead it's all about add revenue which leads to enshittification. If they were to live on royalties they would die of hunger, artists, copywriters and musicians.
2. copyright is increasingly concentrated in the hands of a few companies and don't really benefit the authors or the readers
3. actually the competition to new creative works is not AI, but old creative works that have been accumulating for 25 years on the web
I don't think restrictive copyright is what we need. Instead we have seen people migrate from passive consumption to interactivity, we now prefer games, social networks and search engines to TV, press and radio. Can't turn this trend back, it was created by the internet. We have now wikipedia, github, linux, open source, public domain, open scientific publications and non-restrictive environments for sharing and commenting.
If we were to take the idea of protecting copyrights to the extreme, it would mean we need to protect abstract ideas not just expression, because generative AI can easily route around that. But if we protected abstractions from reuse, it would be a disaster for creativity. I just think copyright is a dead man walking at this point.
It should be taught in school that being a whistleblower requires safety preparation. Make it a woke thing or whatever, because it is something many don't give an afterthought about.
The problem is, from a game theory perspective, things like a dead man's switch may possibly protect you from your enemy but won't protect you from your enemy's enemies who would gain two-fold from your death: your death would be blamed on your enemy, and all the dirty laundry would be aired to the public.
Well I imagine this is a relatively new phenomena in the USA. Usually I hear about these "coincidences" in foreign countries... but here....? Maybe the older HN generation can shed some insight...
It was common where I live. Since the current government (the last 17 years) it doesn't happen anymore. There is no criticism, and people often go to jail for no apparent reason.
By " common " I mean at least one very famous person yearly in a 7 million habitant country. Suicided without antecedents, family either disagreed with the investigation or speak about it.
Good lord, what an atrocious Gish gallop of selective quotes and evidence. This might be one of the worst displays of sharpshooter logic I've ever seen.
AND it features a quote from William Pierce, an infamous neo-Nazi. Probably more, but I gave up after the umpteenth unverifiable quote. Just goes to show how much modern right-wing propaganda aligns with traditional neo-Nazi propaganda.
> Good lord, what an atrocious Gish gallop of selective quotes and evidence. This might be one of the worst displays of sharpshooter logic I've ever seen.
Ease up on the throttle there, LessWrong. You've blown the transaxle.
Sorry, I meant to leave that on the comment I originally called out (the one pushing a neo-Nazi adjacent conspiracy theory, the one you rushed to defend.)
You are just terminally online and woefully overly-opinionated. Not entirely harmless, when paired with someone like greenavocado, but mostly benign.
You've been downvoted to death, but you are correct. The types of conspiracy theories typical of those who believe in the "Zionist Occupation Government" have tremendous parallels with theories like Pizzagate and, as in this case, the conspiracy theories around the Clintons.
I say this as someone who both detests the Clintons (and their ilk) and thinks the timing of this suicide is a bit fishy.
RIP. Suchir was a man of principles, he probably had to give up his OpenAI options as a result of his stance - OpenAI is reported to have a very restrictive offboarding agreements [1]
" It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars."
Ha, that gives a pretty good picture how "open" Openai is. They want to own their employees, enslave them in a way. One might even think the cause of that whistleblower's death is contagious upon publishing.
Really ridiculous how afraid Openai is of criticism. Acting like a child that throws a tantrum, when something doesn't go its way, just that one needs to remind oneself, that somehow there are, with regard to age at least, adults behind this stuff.
> Ha, that gives a pretty good picture how "open" Openai is.
"Any country with 'democratic' in its name, isn't".
The fight to claim a word's meaning can sometimes be fascinating to observe. We've started with "Free Software", but it was easily confused with "freeware", and in the meantime the meaning of "open source" was being put to test by "source available" / "look but do not touch" - so we ended up with atrocities like "FLOSS", which are too cringe for a serious-looking company to try to take over. I think "open" is becoming meaningless (unless you're explicitly referring to open(2)). With the advent of smart locks, even the definition of an open door is getting muddy.
Same for "AI". There's nothing intelligent about LLMs, not while humans continue to supervise the process. I like to include creativity and self-reflection in my working definition of intelligence, traits which LLMs are incapable of.
I am amazed that such things are possible. Here on France this is so illegal that it is laughable.
I am saying "laughable" because there are small things companies try to enforce, and say sorry afterwards. But telling you that you are stuck with this for life is comedy grade.
My right to flag comments was previously unfairly taken away from me. Maybe if I still had this right, I would've just flagged and moved on, but I didn't have this right. I don't even know why it was taken away.
If "Condolences, he seems to have been a principled person" is flaggable to you, then I think you may be letting some very strong beliefs or biases cloud your judgement.
Oh it's only the "thoughtful" part that is flaggable to me. The rest is fine.
Think about it... this dude allegedly died by suicide letting his beliefs and his hate of AI cloud his judgment (because he didn't get his way of crushing OpenAI).
Even thoughtful people can be wrong, make mistakes, or have lapses in judgement. Nobody is perfect, we all have flaws.
Edit: Heck, even by definition, "thoughtful" doesn't mean "accurate". You can put a lot of thought and consideration into something and still end up with a different viewpoint than your neighbor. That's okay, that's life.
Edit 2: "Didn't get his way"? He was part of an ongoing trial that hadn't concluded yet, so it hadn't been decided whether or not he "got his way". Setting that very obvious fact aside, we have no idea what he actually wanted or hoped for here, or from in life in general, or why he made this final decision, and to suggest otherwise is about as "thoughtful" as you insinuate he was.
The word "thoughtful" can carry different meanings. It can have a literal meaning (as in "deliberate") which you resonate with, and a different meaning (as in "solicitous" toward AI) that I resonate with. Neither meaning is wrong.
Oof, are you using a thesaurus to determine the definition of a word? That's what a dictionary is for; a thesaurus is a list of words that have similar or opposite meanings. But within that, there are varying degrees of how similar/different words can be.
In point of fact, Merriam-Webster doesn't mark "solicitous" as having the strongest degree of similarity to the word, which means we can't easily conflate the two because they're not quite the same thing. Further, for the "solicitous" word you cherry-picked, it says that thoughtful means "given to or made with heedful anticipation of the needs and happiness of others".
That means that for the sake of the conversation with regard to the decedent, the word "thoughtful" as used by GP is still very vague. He thought about other people, simple as that.
Preceding the use of that word as an example, it clearly says "having thoughts" as the definition. Between the blog he posted, the interview he gave, and the fact that he was assisting an active investigation, I'd say that he both "had some thoughts" and "heeded AI".
Again, we're back to the fact that you're suggesting that it's cool to flag a simple "condolences" comment just because you disagree with how the decedent viewed the world.
If I had seen the thesaurus and dictionary links before posting, I would not have posted at all. I however maintain that this guy failed to see the big picture of AI, letting his judgment get clouded by a stupid IP law that serves capitalist publishers at the expense of the people. I speculate that he let this hate of AI bother him so much that he could no longer live. Extinction is the fate that awaits all those who come in the way of AI.
> Not that thoughtful. Copyright law is mostly harmful. Apparently he couldn't realize this simple conclusion.
If that's what you think about copyright law, someone here isn't thoughtful, and it wasn't him.
Intellectual property law is sort of like vaccines: so successful at solving the problems it was originally meant to solve, that many people lack the experiences with those problems to even realize its value, and even come to oppose it for that reason (e.g. the only way someone's going to freak out about rare vaccine-derived paralytic polio is if they have no experience with wild paralytic polio, which is much worse).
That's not to say intellectual property law is perfect, hasn't been exploited, or isn't in need of some reforms.
Yeah - what Disney does with the mouse is egregious, but if I write a book or creating painting I'd like to not have a thousand imitators xeroxing away any potential earnings.
> It is nothing like vaccines. Zero. I can easily imagine a thriving world without copyrights, but I cannot without vaccines.
For the record, the world can and did thrive before vaccines were invented, so you don't have to imagine it. Sure there was more sickness and death, but we have plenty of that now, and I doubt you'd consider today's world "not thriving."
But ok, then. Imagine that world without copyrights for me. In detail. And answer these questions:
1. You're an author, who's written a wildly successful book in your free time. How do you get paid to become a full-time author? Remember, no copyright means Amazon, B&N, and every other place is making tons of money by printing up their own copies and sells them without giving you any royalties.
2. You've developed some open source software, and would like to use the GPL to keep it that way. Amazon just forked it, and is making tons of money off of it, but is keeping their fork closed. How do you get them to distribute their changes in accordance with the GPL?
3. You're an inventor, and you've spend years and all your savings working on R&D for a brilliant idea and you finally got it working. You don't have much manufacturing muscle, but you managed to get a small batch onto the market. BigCo saw one of your demos, bought one, reverse engineered it, and with their vast resources is undercutting you on price. They're making tons of money, and paying you no royalties. How do you stay in business? Should you have even bothered?
Regarding life without vaccines, the life expectancy could then be very low. Whether this qualifies as "thriving" is subjective. The population as a whole could still thrive, but individuals may not.
Regarding your other points:
1. That is a bad argument. Imagine that some people called collectors get to collect royalties from you every time you post a HN comment. Such collectors are paid for moderating comments. Some such collectors are wildly successful. Imagine that "commentright" law protects such people. If commentright law were to go away, how do such people get paid? (It's a fake problem, and copyright law is similarly no different.) In essence, if you love to write, go write, but don't expect artificial laws to save you.
2. To my knowledge, Amazon is not known to violate a preexisting GPL license. Amazon forks only things that were open in the past, but are now no longer open. In doing so, Amazon ensures the fork stays open. There is no license violation. If Amazon is making tons of money, it's probably because the software wasn't AGPL licensed in the first place.
3. This has already happened twice to me, and frankly, I am not worried. I can still carve out my limited focused niche.
I try to look at the bigger picture which is the picture of AGI, of the future of humanity, not of artificial protections or even of individual success. Your beliefs are shaped by the culture you were exposed to as an adolescent. If you had grown up in Tibet, or if you had tried LSD a few times in your life, or were exposed to say Buddhism, your beliefs about individual greed would be very different.
> Regarding life without vaccines, the life expectancy could then be very low. Whether this qualifies as "thriving" is subjective.
The life expectancy would not be "very low" without vaccines. It wasn't especially before they were invented, and it wouldn't be afterwards (especially with modern medicine minus vaccines).
> In essence, if you love to write, go write, but don't expect artificial laws to save you.
All laws are "artificial." You might as well go the full measure, and say if you want to keep what's "yours" defend it yourself. Don't expect some artificial private property laws to save you.
And if writing is turned purely into a hobby of the passionate, they'll be a lot less of it, because the people who are good at it will be forced to expend their energy doing other things to support themselves (if they're a member of the idle rich).
> 2. To my knowledge, Amazon is not known to violate a preexisting GPL license.
You missed the point. Copyright is foundational to the GPL: without it, no GPL. "Amazon is not known to violate a preexisting GPL license," for the same reason they don't print up their own "pirated" copies of the latest bestseller to tell, instead of buying copies from the publisher: it would be illegal.
> 3. This has already happened twice to me, and frankly, I am not worried. I can still carve out my limited focused niche.
It did, did it? Tell the story.
> your beliefs about individual greed would be very different.
What do you mean my "beliefs about individual greed?" Do tell.
For well over ten years now, companies like Facebook/Meta and Google have perused research code by academic and other researchers, seen what is catching on, then soon made better versions themselves. Google in particular has soon also offered commercial services for the same, outcompeting the smaller commercial services offered by the researchers. Frankly, I am glad Google does it because the world is better for it. It's the same with Amazon because frankly it's a lot of work to scale a service globally, and most smaller groups would do a far worse job at it.
My criteria for what is good vs bad is what makes the world better or worse as a whole, not what makes me better off. It is clear to me that the availability of AI triggered by GPT has made the world better, and if OpenAI has to violate copyrights to get there or stay there, that's a worthwhile sacrifice imho. There is still plenty of commercial scientific and media writing that is not going away even if copyright laws were to disappear.
Book readership (outside of school) is already very low now, and is only going to get lower, close to zero. You might be defending a losing field. An AI is going to be able to write a custom book (or parts of it) on demand - do you see how this changes things?
Ultimately I realize that we have to put food on the table, but I don't think copyrights are necessary for it. There are plenty of other ways to make money.
This is incredibly sad, Suchir went to my high school and we both went to Berkeley together. He was clearly very intelligent, and I was always sure he'd go on to be very successful / do interesting things.
If you're struggling reading this, I want to say that you're not alone. Even if it doesn't feel like it right now, the world truly wants you to be happy.
This is extremely sad and I'm sorry for Suchir's family and friends.
As someone who has struggled with suicidal ideation while working in the tech industry for over a decade, I do wonder if the insane culture of Bay Area tech has a part to play.
Besides the extreme hustle culture mindset, there's also a kind of naive techno-optimism that can make you feel insane. You're surrounded by people who think breaking the law is OK and that they're changing the world by selling smart kitchen appliances, even while they're exploiting workers in developing countries for cheap tech support and stepping over OD victims outside their condo.
This mindset is so pervasive you really start to wonder if you're crazy for having empathy or any sense of justice.
I have no special insight except to guess that going from being an obviously brilliant student at Berkeley to a cut-throat startup like OpenAI would be a jarring experience. You've achieved everything you worked your whole life for, and you find you're doing work that is completely out of whack with your morals and values.
Further piling on potential stress for any whistleblower in a highly specialized field, once you're publicly critical of that field, you're basically unemployable there. And that's without any active retribution from the offending employer. Any retribution, such as blacklisting among peer HR departments would bring an even dimmer outlook.
Mental health challenges in the bay area tech industry are real for a wide variety of reasons. There's a bigger push in silicon valley for work life balance and mental health care than anywhere else I've been, but more people with serious issues there than anywhere else I've been as well.
Imposter syndrome is high among engineers of all levels of experience and ability. Engineering has it's own set of pressures. Then you add in all the other reasons people can feel stressed or pressured and all of the bay area specific reasons those things are amplified. It adds up.
You would be surprised how many brilliant and highly capable people have broken down. For anyone out there feeling like they are all alone - don't. Even if all the people around you seem happy and confident, I guarantee that a larger portion of them than you realize are struggling.
Well put. Almost all of the SF startups I worked for were run by sociopaths willing to break any rule I eventually learned. One is now being charged by the FTC for massive violations. I hated the immoral mindset of winning at the cost of everything from employee comfort to flagrantly illegal activities with customers.
Suchir’s suicide (if it was a suicide) is a tragedy. I happen to share some of his views, and I am negative on the impact of current ML tech on society—not because of what it can do, but precisely because of the way it is trained.
The ends do not justify the means—and it is easy to see the means having wide-ranging systemic effects besides the ends, even if we pretended those ends were well-defined and planned (which, aside from the making profit, they are clearly not: just think of the nebulous ideas and contention around AGI).
I enjoy using Generative AI but have significant moral qualms with how they train their data. They flagrantly ignore copyright law for a significant amount of their data. The fact they do enter into licensing agreements with some publishers basically shows they know they are breaking the law.
Normally the word "whistleblower" means someone who revealed previously-unknown facts about an organization. In this case he's a former employee who had an interview where he criticized OpenAI, but the facts that he was in possession of were not only widely known at the time but were the subject of an ongoing lawsuit that had launched months prior.
As much as I want to give this a charitable reading, the only explanation I can think of for using the word whistleblower here is to imply that there's something shady about the death.
> Normally the word "whistleblower" means someone who revealed previously-unknown facts
Not to be pedantic, but this is actually incorrect, both under federal and California law. Case law is actually very explicit on the point that the information does NOT need to be previously unknown to qualify for whistleblower protection.
However, disclosing information to the media is not typically protected.
I think their post boils down to: "This title implies someone would have a strong reason to murder them, but that isn't true."
We can evaluate that argument without caring too much about whether the writer intended it, or whether some other circumstances might have forced their word-choice.
Right, but as you note the legal definition doesn't apply here anyway, we're clearly using the colloquial definition of whistleblower. And that definition comes with the implication that powerful people would want a particular person dead.
In this case I see very little reason to believe that would be the case. No one has hinted that this employee has more damning information than was already public knowledge, and the lawsuit that he was going to testify in is one in which the important facts are not in dispute. The question doesn't come down to what OpenAI did (they trained on copyrighted data) but what the law says about it (is training on copyrighted data fair use?).
Well, I still disagree. In reality companies still retaliate against whistleblowers even when the information is already out there. (Hence the need for Congress, federal courts and the California Supreme Court to clarify that whistleblower activity is still protected even if the information is already known.)
I, of course, am not proposing that OpenAI assassinated this person. Just pointing out that disclosures of known information can and do motivate retaliation, and are considered whistleblowing.
The thread looks very different than it did when I wrote any of the above—at the time it was entirely composed of people casually asserting that this was most likely an assassination. I wrote this with the intent of shutting down that speculation by pointing out that we have no reason to believe that this person had enough information for it to be worth the risk of killing him.
Since I wrote this the tone of the thread shifted and others took up the torch to focus on the tragedy. That's wonderful, but someone had to take the first step to stem the ignorant assassination takes.
> Normally the word "whistleblower" means someone who revealed previously-unknown facts about an organization.
A whistleblower could also be someone in the process of doing so, i.e. they have a claim about the organization, as well as a promise to give detailed facts and evidence later in a courtroom.
I think that's the more commonsense understanding of what whistleblowers are and what they do. Your remark hinges on a narrow definition.
No. Anytime someone potentially possesses information that is damning to a company and that person is killed… the low probability of such an even being a random coincidence is quite low. It is so low such that it is extremely reasonable to consider the potential for an actual assassination while not precluding that a coincidence is a possibility.
> Anytime someone potentially possesses information that is damning to a company and that person is killed… the low probability of such an even being a random coincidence is quite low.
You're running into the birthday paradox here. The probability of a specific witness dying before they can testify in a lawsuit is low. The probability of any one of dozens of people involved in a lawsuit dying before it's resolved is actually rather high.
If we're going to control for life situations, you have to calculate the suicide rate for people who are actively involved in a high stakes lawsuit against a former employer, which is going to be much higher than average. Then factor in non-suicide death rates as well. Then consider that there are apparently at least 12 like him in this lawsuit, and several other lawsuits pending.
I'm not going to pretend to know what the exact odds are, but it's going to end up way higher than 1/10k.
Or you could just look at the facts of the case (currently: no foul play suspected). Are the cops in on it? The morgue? The local city? How high does this go?
This isn't something which happened in isolation. This isn't "someone died". It's "someone died, and dozens of people are going to sign off that this obviously not a suicide was definitely a suicide".
Like, is that possible? Can you fake a suicide and leave no evidence you did? If you can then how many suicides aren't actually suicides but homicides? How would we know?
You're acting like it's a binary choice of probabilities but it isn't.
Why did you have to make it go in the direction of conspiracy theory? Of course not.
An assassination that looks like a suicide but isn’t is extremely possible. You don't have enough details from the article to make a call on this.
> You're acting like it's a binary choice of probabilities but it isn't.
It is a binary choice because that’s typically how the question is formulated in the process of the scientific method. Was it suicide or was it not a suicide? Binary. Once that question is analyzed you can dig deeper into was it an assassination or was it not? Essentially two binary questions are needed to cover every possibility here and to encompass suicide and assassination.
What a useless answer. I considered whether your answer was influenced by mental deficiency and bias and I considered one possibility to be more likely then the other.
I've listened to many comments here on some of these, saying it must be assassination "because the person insisted, "If I'm ever found dead, it's not suicide!"." This is sometimes despite extensive mental health history.
Entirely possible.
But in my career as a paramedic, I've (sadly) lost count of the number of mental health patients who have said, "Yeah, that was just a glitch, I'm not suicidal, not now/nor then." ... and gone on to commit or attempt suicide in extremely short order.
Computer the probability, don’t make claims without making a solid estimate.
No, it’s not low. No need to put conspiracies before evidence, and certainly not by making claims you’ve not done no diligence on.
And the article provides statements by professionals who routinely investigate homicides and suicides that they have no reason to believe anything other than suicide.
Who the hell can compute a number from this? All probabilities on this case are made with a gut.
Why don’t you tell me the probability instead of demanding one from me? You’re the one making a claim that professional judgment makes the probability so solid that it’s basically a suicide. So tell me about your computation.
What gets me is the level of stupid you have to be to not even consider the other side. Like if a person literally tells you he’s not going to suicide and if he does it’s an assassination then he suicides and your first instinct is to only trust what the professionals say well… I can’t help you.
Anyone who puts thought into the problem instead of jumping to conspiracies.
Men in that age group commit suicide at rate X. Company Y has Z employees. Over time period T there is a K % chance of a suicide. Among all R companies from which a person like you finds conspiracies at every turn the odds a finding a death is S% . Not a single value in this chain is “made with a gut.” All are extremely defensible and determined scientifically, and if really care, you can obtain them all with errorbounds and 95% confidence intervals and the works.
And you do basic math, and voila, your initial claim is nonsense.
Or simply read about the birthday paradox, wonder if it applies, realize it does, stop jumping off the wagon.
> why don’t you…..
You’re the one pulling conspiracies out of thin air despite no evidence, and you made the claim. The onus is to defend your claim when asked, especially now that you’ve been given evidence for a solid argument against it. One not pulled out of thin air.
> what gets me…
No I see the other side. And for every time someone ignores the presented evidence, ignores basic statistics, ignores a good methodology when presented one, for each such case, I have seen zero cases out of thousands of such conspiracies where it came true.
And I’ll 100% trust professionals over someone so innumerate as to be unable to do simple math, and get angry when it’s suggested super sneaky death wizards didn’t kill a minor player while ignoring dozens of more important players makes less sense than simple statistical likelihood.
The latter is rarely correct. I’ll even amend to never correct.
Bro where are you going to find statistics on the rate of actual suicides for someone who makes the claim that if they die it’s not a suicide? There are so many situations where there’s just no data or experimental evidence is impossible to ascertain and you have to use your gut. Where’s experimental evidence that the ground will still exist when you jump off the bed every morning? Use your gut. Tired of this data driven nonsense as if the only way to make any decision in this universe is to use numerical data. If you had basic statistical knowledge you’d know statistics is useless for this situation.
Complete bs. Use your common sense.
> You’re the one pulling conspiracies out of thin air despite no evidence, and you made the claim.
What claim? All I said is consider both possibilities because given the situation both are likely. You’re the one making the claim that a guy who told everyone if he died it wasn’t a suicide is totally and completely and utterly a suicide. And you make this claim based off of way to general experimental evidence collected for only a general situation. You’re the type of genius who if your friend died you’d just assume it was a car accident because that’s the most likely we to die. No need to investigate anything. Even if your friend was like if I die in the next couple days I was murdered you’d insist that it’s a car accident. Look at you and your data driven genius.
You claimed: " Anytime someone potentially possesses information that is damning to a company and that person is killed… the low probability of such an even being a random coincidence is quite low"
You're unable to even estimate "the low probability", you're unable to try even though it's not hard to get good estimates, so there is zero chance you understand how close an event is to happening.
Every single suicide "potentially posses information...", so the probability is not quite low. It's 100%. Do you know what "potentially" means? It's complete conspiratorial nonsense.
Since you're unable to understand math: there's around 50,000 suicides a year in the US. How many murders do you think are committed by a company killing some coverup a year? Less than a dozen (and that's likely way too high)? That coupled with your hand wavy "potential" makes the odds of a suicide orders of magnitude higher than murder, especially since if the company wanted to murder people there's plenty that would be higher on the hit list, yet they all are not dead. Facts > conspiracy.
Aww, screw it. It's not even worth trying to walk you through how to compute any odds when you're dead set on nonsense....
Let me spell it out for you. The likelihood that When someone dies that it's from murder is less than 1%.
From your logic, that means because the likelihood is less than 1%, murder should never be investigated.
Police investigations, forensic science, DNA matching, murder trials, Detectives are all rendered redundant by statistics.
You can compute this too. ANd you can use your incredible logic here: Facts > murder.
You need to see why that situation above doesn't make sense. Once you do, you'll realize that the same exact logic that makes that situation make no sense is the EXACT same logic you're using to "compute" your new conclusion.
You need to realize there ARE additional facts here that render quantitative analysis impossible to ascertain and hand waving is the ONLY way forward. That is unless you want to actually go out there and gather the data.
You know logic, deduction and induction are alternative forms of analysis that can be done outside of science right? You should employ the former to know when the later is impossible.
> but the facts that he was in possession of were not only widely known at the time but were the subject of an ongoing lawsuit that had launched months prior.
That is an exceedingly charitable read of these lawsuits.
Everyone knows LLMs are copyright infringement machines. Their architecture has no distinction between facts and expressions. For an LLM to be capable of learning and repeating facts, it must also be able to learn and repeat expressions. That is copyright infringement in action. And because these systems are used to directly replace the market for human-authored works they were trained on, it is also copyright infringement in spirit. There is no defending against the claim of copyright infringement on technical details. (C.f. Google Books, which was ruled fair use because of it's strict delineation of facts about books and the expressions of their contents, and provides the former but not a substitute for the latter.)
The legal defense AI companies put up is entirely predicated on "Well you can't prove that we did a copyright infringement on these specific works of yours!".
Which is nonsense, getting LLMs to regurgitate training data is easy. As easy at it is for them to output facts. Or rather, it was. AI companies maintain this claim of "you can't prove it" by aggressively filtering out any instances of problematic content whenever a claim surfaces. If you didn't collect extensive data before going public, the AI company quickly adds your works to it's copyright filter and proclaims in court that their LLMs do not "copy".
A copyright filter that scans all output for verbatim reproductions of training data sounds like a reasonable compromise solution, but it isn't. LLMs are paraphrasing machines, any such copyright filter will simply not work because the token sequence 2nd-most-probable to a copyrighted expression is a simple paraphrase of that copyrighted expression. Now, consider: LLMs treat facts and expressions as the same. Filtering impedes the LLM's ability to use and process facts. Strict and extensive filtering will lobotomize the system.
This leaves AI companies in a sensitive legal position. They are not playing fair in the courts. They are outright lying in the media. The wrong employees being called to testify will be ruineous. "We built an extensive system to obstruct discovery, here's the exact list of copyright infringement we hid". Even just knowing which coworkers worked on what systems (and should be called to testify) is dangerous information.
Sure. The information was public. But OpenAI denies it and gaslights extensively. They act like it's still private information, and to the courts, it currently still is.
And to clarify: No I'm not saying murder or any other foul play was involved here. Murder isn't the way companies silence their dangerous whistleblowers anyway. You don't need to hire a hitman when you can simply run someone out of town and harass them to the point of suicide with none of the legal culpability. Did that happen here? Who knows, phone & chat logs will show. Friends and family will almost certainly have known and would speak up if that is the case.
If we take the logic of your final paragraph to its ultimate conclusion, it seems companies can avoid having friends and family speak up about the harassment if they just hire a hitman.
Isn't it the other way around since OpenAI is training their models on news company content? OpenAI has behaved extremely unethical the entire time it has existed. It's very likely there is foul play here, it fits the pattern.
I wasn't even talking about the copyright issues. I was talking about things like this and Sam Altman's sister's accusations. Things way beyond what any reasonable person would consider moral.
You assume he revealed everything he knew, he was most likely under NDA, the ongoing lawsuit cited him as a source. Which presumably he didn't yet testify for and now he never will be able to. His (most likely ruled suicide inb4) death should also give pause to the other 11 on that list:
> He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.
Being one of 12+ witnesses in a lawsuit where the facts are hardly in dispute is not the same as being a whistleblower. The key questions in this lawsuit are not and never were going to come down to insider information—OpenAI does not dispute that they trained on copyrighted material, they dispute that it was illegal for them to do so.
It seems like it would matter if they internally believed/discussed it being illegal for them to do so, but then did it anyway and publicly said they felt they were in the clear.
So the lawyers who said they had "possession of information that would be helpful to their case" were misleading? Your whole rationalization seems very biased. He raised public awareness (including details of) of some wrongdoing he perceived at the company and was most likely going to testify about those wrongdoings, that qualifies as a whistleblower in my book.
> "possession of information that would be helpful to their case" were misleading?
I didn't say that, but helpful comes on a very large spectrum, and lawyers have other words for people who have information that is crucial to their case.
> that qualifies as a whistleblower in my book.
I'm not trying to downplay his contribution, I'm questioning the integrity of the title of TFA. You have only to skim this comment section to see how many people have jumped to the conclusion that Sam Altman must have wanted this guy dead.
Everyone will naturally speculate about anything current or former OpenAI employees do, whether it’s if they resign, the statements they make, or in this case their own suicide. It’s only fair not to speculate too far given that since there are thousands of current and former OpenAI employees, they are subject to the same conditions as the general population.
Curious why the downvote? Because someone actually died? It doesn't change the fact that "Eagle Eye" is about (spoiler) an AGI killing people both directly and indirectly (by manipulating others, AI "Swats" you, ...) and here is a company trying to make AGI.
If the AGI actual existed it could certainly indirectly get people killed that were threatening it's existence. It could "swat" people. Plant fake evidence (mail order explosives to the victim's house, call the FBI). It could manipulate others. Find the most jealous unstable person. Make up fake texts/images that person is having an affair with their partner. Send fake messages from partner provoking them into action, etc... Convince some local criminal the victim is invading their turf". We've already seen several examples of LLMs say "kill your parents/partner".
It's highly possible that in the next tragedy carried out by a kid that has messages from any of these chatbots that can be construed as manipulation will result in criminal prosecutions against the executives. Not just lawsuits.
Interesting that the NYT article about him states that OpenAI started developing GPT-4 before the ChatGPT release. They sure were convinced by the early GPT-2/3 results.
>In early 2022, Mr. Balaji began gathering digital data for a new project called GPT-4
ChatGPT was a research project that went megaviral, it wasn't intended to be as big as it was.
Training a massive LLM on the scale of GPT-4 required a lot of lead time (less so nowadays due to various optimizations), so the timeframe makes sense.
Metapost - Reading the (civilized!) comments on HN vs those on Reddit is such a contrast.
I'm a bit worried that while regulators are focusing on X/Facebook/Instagram/etc. from a moderation perspective, not one regulator seems to be looking at the increasingly extreme and unmoderated rhetoric on Reddit. People are straight up braying for murder in the comments there. I'm worried that one of the most visited sites in the US is actively radicalizing a good chunk of the population.
Deeply saddening, especially given what was at stake. It takes someone truly exceptional to challenge the establishment. RIP Suchir. May the light of your candle, while it burned, have sparked many others.
Even if he was a whistleblower and had documents against them, there's 12 other witnesses that prosecutors have in the lawsuits against OpenAI, and they're not dead
If death is a suicide it doesn't automatically means that third parties weren't involved. It's possible to push (temporally) vulnerable people over the edge, even if when helped/left alone they wouldn't.
Especially if one party have incentive to discredit/destroy such person, so court/jury won't take their testimony seriously(or there will be no testimony at all). After all it's almost impossible to connect such actions with subsequent suicide.
While suicide is by definition action of individual, what leads to it isn't always the same.
People neglect the priorities of working life. First safety, it is best to avoid any unnecessary risks and to act so that you stay safe. second security.
Okay, first boeing now openai... Yep, my view of this world being more civilized than portrayed in the movies is disappearing every day. Looks like we're going to start having to take conspiracy movie-like theories seriously now.
"heart attack" gun exists, it's not that improbable that similar tactics can be used. Of course you can just play this into him being crushed by the mistake he did (for his own success in the future).
The Boeing guy killed himself, this guy apparently killed himself. The pattern of David vs Goliath, where David kills himself, is almost becoming a pattern.
When David is just a nobody, and Goliath has all of the legal and PR resources in the world, Goliath doesn't even have to swing a punch. Goliath can just drive them crazy with social and legal pressure. Also, David might have been a bit of a decision-making outlier to begin with, being the kind of person who decides going up against Goliath is a good idea.
I work with a relative who is in the real estate space here in India and often deals with land shark mafia. The biggest thing I learned from him, to win in these situations is don't fear or not be afraid of consequences.
You need to have ice water flowing in your veins if you are about to mess with something big. At worst you need to have benign neglect for the consequences.
Often fear is the only instrument they have against you. And if you are not afraid, they will likely not contest further. Threat of jail, violence or courts is often what they use to stop you. In reality most people are afraid to go to war this way. Its messy and often creates more problems for them.
David vs goliath where david swears he will never kill himself and that if anything happens to him it is someone coming after them, then david kills himself right before going to court to testify.
I’ve been studying corporate whistleblowers for more than 20 years. You never know for sure which ones are real suicides and which are disappearings, but TPTB always do shitty things beforehand to make the inevitable killing look like a suicide. Even if people figure out that it’s not a suicide, it fucks up the investigation in the first 24 hours if the police think it’s a suicide. A case of this “prepping” that did not end in death was Michael O. Church in 2015-16, but they ended up being so incompetent about it that they called it off. Still damaged his career, though. On the flip side, that guy was never going to make it as a tech bro and is one hell of a novelist, so…?
The “prepping” aspect is truly sickening. Imagine someone who spends six months trying to ruin someone’s life so a fake suicide won’t be investigated. This happens to all whistleblowers, even the ones who live.
By the way, “hit men” don’t really exist, not in the way you think, but that’s a lesson for another time.
He was long before me. His case is odd because he was way too “openly autistic” for the time and probably wouldn’t have been able to win support at the level to be a real threat, which is probably why they didn’t bother to finish the job.
He put a novel on RoyalRoad that is, in my opinion, better than 98% of what comes out of publishing houses today, though it has a few errors due to the lack of a professional editor, and I haven’t finished it yet so I can’t comment on its entirety. It’s too long (450k words) and maybe too weird for traditional publishing right now, but it’s a solid story: https://www.royalroad.com/fiction/85592/farisas-crossing
> The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”
Boeing guy and OpenAI whistleblower were both seen as "not depressed" and have even gone far as to say that if anything happened to him that it wouldn't have been an accident.
I'm not sure why there are so many comments trying to downplay and argue around whether OpenAI was a whistleblower or not he fits pretty much all the definition.
OpenAI was suspected of using copyright data but that wasn't the only thing OpenAI whistleblower was keeping under wraps given NDA. The timing of OpenAI partnering with US military is odd.
> Boeing guy [...] were both seen as "not depressed" and have even gone far as to say that if anything happened to him that it wouldn't have been an accident.
Yes, but also, his own brother said:
“He was suffering from PTSD and anxiety attacks as a result of being subjected to the hostile work environment at Boeing, which we believe led to his death,”
Internet echo chambers love a good murder mystery, but dragging a quiet and honest employee who works in the trenches through a protracted, public, and stressful legal situation can be very tough.
What whistleblowers go through would make anyone depressed. Often the goal is to destroy the person psychologically and destroy their credibility.
Often, this is enough and they don’t even bother going through with the hit, because it turns out that even billionaires can’t “just hire a hit man.” Real life corporate hits tend to compromise people the person trusts, but whistleblowers are both sparing with their trust and usually poor, which makes it harder because there are fewer people to compromise.
Yeah, the pattern is real. The patterns of high male suicide rates and Goliaths having a lot of employees combine into a pattern of the innumerate invoking boogeymen wherever it suits their world view, evidence and reason be dammed.
Anyone who's a whistleblower should compile key docs and put it in a "dead man's switch" service that releases your testimony/docs to multiple news agencies in the event of your untimely demise. The company you're whistle blowing against and their major shareholders should know this exists. Also, regularly post public video attesting to you current mental state.
In the case of the US, you cannot make your selection wide enough. For optimal security, get it to both local news organizations and serious European press agencies.
The US news media do not have independent editorial boards. Several titles are actually from the same house. Corporate ownership, and professionals going to the dark side via https://en.wikipedia.org/wiki/Elite_capture are just some other risks.
Even if it gets published, your story can be suppressed by the way the media house deals with it. Also, there are many ways to silence news that is inconvenient or doesn't fit belief schemes, good example https://news.ycombinator.com/item?id=42387549
> much ink was spilled showing how no European paper would take on a corporation strong government support
Could you provide some examples of this? I know it's possible in the UK to get a court order to prevent media coverage, but I didn't know that was the case in other European countries.
I was thinking of the Wirecard scandal in Germany. The regulators responded to concerns by silencing silencing critics of the company.
I read either "defamation" or "libel" somewhere, but this article says:
[German regulators] banned investors from betting against Wirecard shares for two months, the first such restriction on an individual company in German stock market history. That was quickly followed by a criminal complaint against two Financial Times journalists who had reported the whistleblower allegations about the payments company.
It's very naive to believe in 'European press'. To get the idea check Ukrainian war coverage. What you'll see first is how single sided it is. This cannot be a coincidence. It can be only a result of total control. I respected 'The Guardians' before, but after eyes opening it appears to be the most brainwashing and manipulative there. Very professionally done, must admit. The problem isn't just that war, it's likely everything and I have no easy way to check for example what really happened in Afghan war. Did US really won like Biden said?
> It's very naive to believe in 'European press'. To get the idea check Ukrainian war coverage. What you'll see first is how single sided it is.
This is such a wild take from my POV, a person in the EU.
Have you considered the possibility that the nearest imperialist power beginning to violently invade Europe again is likely to trigger a common reaction?
This is one of those rare cases in modern history where there is a clear right vs. wrong. What exactly do you expect the news to talk about that is less “single sided?”
I can explain a bit. Russians living in Empire of Evil can see all internet including US and EU news. At the same time 'Putin propaganda' channels are blocked in EU. In EU only one side is available. This creates an information bubble, as intended. Which is a basic crowd control technique used to drive public opinion. In this case to support the war. The result is obvious, EU polls show much stronger support than the rest of the world. Even though media claims most of the world is against Putin, if you look at the map it's only minority, NATO and a few allies. In some EU countries it's even a crime look through the bubble's wall. Most don't realize it even exists. They accept the arguments from their politicians. Like it's a business opportunity, or it's a cheap way to harm Putin. The price for that is hundreds of thousands of human lives on both sides. Which is generally considered as ok, as those are Russians and Ukrainians, not us. Actually media doesn't talk much about it.
Hahahaha, what? Most of western news sources are blocked in russia after they published Bucha reports. They are literally jailing people for mentioning it on personal vk pages and such.
> At the same time 'Putin propaganda' channels are blocked in EU.
I don't think that's true. You can find a lot of that online, with or without commentary. There are even European comentators siding with adjecent views. Though it doesn't leak into European public media too much (although some of its more absurdist concepts sadly do).
It's just that "the other side of the story" is something that vast majority of Europeans are repulsed by because of its intrinsic idiocy, blatant disingenuity and evilness. Some of the European countries that got out from under russian influence remember it from the times of poverty and oppression. That's where the part of the opinion bias on that subject between Europe and the rest of the world comes from. Firsthand expeirience with russia. Supporting Ukraine is both helping Ukraine with their current russian expeirience and possibly a hope of saving all future Europeans from having russian expeirience ever again.
DW is literally the only German state owned media, financed directly by tax money. And they don't even have a German broadcast anymore.
Compare this to the other German public broadcasting (ARD and ZDF), who are financed by their own (obligatory) dues ("Rundfunkbeitrag"), which is set by politics, but cannot be easily taken away from them.
Every single developed country today touting moral rights has its foundation in those "wrongs". Its citizens gleefully consuming the resources those "wrongs" have created, so they can preach morality online.
It is the nature of life itself to "kill and perform violence", children and otherwise. "The strong do what they can, and the weak suffer what they must".
Death is, as of now, life's only mechanism for iteration in its process of endless prototyping.
Every marvel that humankind has produced has its roots in extreme violence. From the creation of Ancient Greece to the creation of the United States, children had to die horrible deaths so that these things could come to be.
Anyone can make arbitrary claims about what's right and what's wrong. The only way to prove such a claim is through victory, and all victory is violence against the loser.
Thanks for summarizing so eloquently what is WRONG with the precept that might equals right.
If she floats she's a witch, if she drowns she must have been innocent is the flip side fallacy, but what you just outlined amounts to: "i am bad on purpose, what are YOU gonna do about it?"
I am disgusted that this is still proferred as a valid moral philosophical principle.
No. A thousand times no.
The answer is A SYSTEM.
The answer to bully predator logic is human society and systematic thought.
This provides the capability to resist such base immorality as you and historical predators have proposed.
That SYSTEM that enables modern enligtened society is called "monopoly on violence".
There's no way out of violence, your system needs to be founded on it.
And I wouldn't say that the what previous poster described is akin to witch trials. It's rather akin to painting the bullseye labelled "right" after taking the shot and hitting something other than your foot. And that was what all human cultures were doing since the beginning of time. Recent western trend to paint the bullseye labelled "wrong" at their hit is novel but equally disingenious.
> I am disgusted that this is still proferred as a valid moral philosophical principle.
Can you explain what makes it invalid besides the fact that you and me don't like it?
There are no "valid" or "invalid" moral principles, there is no objectively correct morality, nor does the idea even make sense. Morals are historically contingent social phenomena. Over different times and even over different cultures today, they vary dramatically. Everyone has them, and they all think they are right. That quickly reduces all discussion in cases like this to ornate versions of "you're wrong" and "no, YOU'RE wrong."
It is better to be precise here. Validity could be a different measure than correct. It might very well be like you reserve the latter for some ethereal mathematical property, free of axioms, to which type you want to cast "validity in the domain of morality", which then has to pass the type checker for mathematical expressions.
In Philosophy and Ethics you strive to improve your understanding, in this case in the domain of human social groups. Some ideas just have better reasoning than others.
To say no idea is good, because your type checker rejects any program you bring up is an exercise in futility.
"might makes right" is a justification for abuse of other people. Abusing other people might be understood as using other people while taking away their freedom. If you think people should rather be owned than free, go pitch that.
I emphasize: it would be your pitch. There is no hiding behind a compiler here.
On topic: "might makes right" prevails in societies where people have limited rights and therefore need to cope with abuse. There is a reinforcing mechanism in such sado-societies, where sufferers are to normalize that, thereby keeping the system in place.
For example the Russian society did never escape to freedom, which is a tragedy. But I think every person has an obligation to do his best in matters of ethics, not just sitting like a slave and complain about how you are the real victim while doing nothing.
A society is a collective expression of the individuals.
All that is fine and good, but it comes down to your personal and non-universal moral intuition that suffering, abuse , etc. are bad. You make that an axiom and then judge moral systems based on that, using that axiom to build beautiful towers of “reasoning” (rationalization). We both feel that way because of the time and place we grew up, not because it is correct compared to the Ancient Greek or Piraha moral systems. That’s why you have to take discussions like this in a non-moralistic direction, because there’s no grounds for agreement on that basis.
> non-universal moral intuition that suffering, abuse , etc. are bad.
You say it perhaps a bit weird, but imho you are stating that there do not exist universal moral values, which is a very non-universal stance.
> not because it is correct compared to the Ancient Greek or Piraha moral systems
- Well, the beauty is that we can make progress.
- If X can only register that system A an B are morally equal, because both systems are a system, then X misses some fundamental human abilities. That X is dangerous, because for X there is nothing wrong with Auschwitz.
- Also, a good question would be if one would like to exchange their moral beliefs for the Greek moral system. If not, why have a preference for a moral belief if they are all equal.
Not saying this is you, but I think the main fallacy people run into is that they are aware of shortcomings in their moral acting. Some might excuse themself with relativism -> nihilism, but that is not what a strong person does. Most of us are hypocrite some of the time, but it doesn't mean you have to blame your moral intuition.
> You say it perhaps a bit weird, but imho you are stating that there do not exist universal moral values, which is a very non-universal stance.
It’s an observation, and a very old one. Darius of Persia famously made a very similar observation in Herodotus.
> Well, the beauty is that we can make progress.
There is no such thing as progress in this realm.
> - If X can only register that system A an B are morally equal, because both systems are a system, then X misses some fundamental human abilities. That X is dangerous, because for X there is nothing wrong with Auschwitz.
No, the point is that there is no basis of comparison, not in moral terms. Of course you and I feel that way, living when and where we did. There are no “fundamental human abilities” being missed, this is just the same argument that “we feel this is wrong, so it’s bad and dangerous.
> - Also, a good question would be if one would like to exchange their moral beliefs for the Greek moral system. If not, why have a preference for a moral belief if they are all equal.
Of course not. Morals are almost entirely socialized. Nobody reasons themselves into a moral system and they cannot reason themselves out of one. It’s an integral part of their identity.
> Not saying this is you, but I think the main fallacy people run into is that they are aware of shortcomings in their moral acting. Some might excuse themself with relativism -> nihilism, but that is not what a strong person does. Most of us are hypocrite some of the time, but it doesn't mean you have to blame your moral intuition.
I do my best to follow my moral intuitions, and I am sometimes a hypocrite, but the point is moral intuitions are socialized into you and contingent on your milieu, so when you’re discussing these issues with other people who did not share the same socialization, moral arguments lose all their force because they don’t have the same intuitions. So we have to find some other grounds to make our point.
> What you'll see first is how single sided it is. This cannot be a coincidence.
It’s not a coincidence. Russia invaded a European country and for the first time since WW2 we are in what is essentially war time. You may not know this, but Russia has long been a bully. Every year we have a democratic meeting called Folkemødet here in Denmark. It’s where the political top and our media meets the public for several days. When I went there Russian bombers violated our Airspace during a practice run of nuclear bombing the event. Now they are in an active war with a European country and they are threatening the rest of us with total war basically every other day.
Of course it’s one sided. Russia has chosen to become an enemy of Europe and we will be lucky if we can avoid a direct conflict with them. We are already seeing attacks on our infrastructure both digital and physical around in the EU. We’ve seen assassinations carried out inside our countries, and things aren’t looking to improve any time soon.
What “sides” is it you think there are? If Russia didn’t want to be an enemy of Europe they could withdraw their forces and stop being at war with our neighbours.
I don't think it's that simple. Imagine that you have nonpublic information that would be harmful to party A.
* Enemies and competitors of A now have an incentive to kill you.
* If the info about A would move the market, someone who would like to profit from knowing the direction and timing now has an incentive to kill you.
* Risks about trustworthiness of this "service". What if the information is released accidentally. What if it's a honeypot for a hedge fund, spy agency, or a "fixer" service.
* You've potentially just flagged yourself as a more imminent threat to A.
* Attacks against loved ones seems to have been a thing. And doesn't trigger your deadman's switch.
Are you saying they won't kill you because then the documents would be released? So you would never release the documents if they never kill you?
Or are you saying you'll do this so the documents are guaranteed to be released, even if you're killed? In that case, why not just publish them right now?
The scenario I described is to ensure the whistleblower being alive or dead has minimal change in impact to the company. If there's a pending case that could wipe billions off a company's market cap and 1 person is a key witness in the outcome...well lots of powerful people now have an incentive if that witness were no longer around.
Why not just publish immediately? Publishing immediately likely violates NDA and could be prosecuted if you're not compelled to testify under oath. This is what Edward Snowden did and he's persona non grata from the US for the rest of his life.
If the information is going to be released in full though, and I'm a murderous executive, then why not kill you immediately?
(1) How do you prove you have a deadman switch? How do you prove it functions correctly?
(2) How do you prove it contains any more material then you've already shown you have?
(3) Since you're going to testify anyway, what's the benefit in leaving you alive when your story can discredited after the fact, and apparently it is trivially easy to get away with an untraceable murder?
which leads to (4): if the point is to "send a message" then killing you later is kind of pointless. Let the deadman switch trigger and send a message to everyone else - it won't save you.
People concoct scenarios where they're like "oh but I'll record a tape saying 'I didn't kill myself'" as though thats a unique thing and not something every deranged person does anyway, including Australia's racist politician (who's very much still alive, being awful).
The world doesn't work like a TV storyline, but good news for you the only reason everyone's like "are they killing whistleblowers?" is because you're all bored and want to feel clever on the internet (while handily pushing the much more useful narrative: through no specific actions, don't become a whistleblower because there's an untraceable, unprovable service which has definitely killed every dead whistleblower you heard of. Please ignore all the alive ones who kind of had their lives ruined by the process but didn't die and are thus boring to talk about).
> The scenario I described is to ensure the whistleblower being alive or dead has minimal change in impact to the company.
That may not be enough to keep you alive though. Assuming there is minimal difference in the impact to the company, potential killers may want to get revenge. The difference also may not be that minimal. IANAL, but it wouldn't surprise me if evidence released that way would be easier for the defendant to block from being used in the courtroom.
It's more along the lines of: You're going to do things they don't like, but if they kill you (or even if you die by accident), you'll release even MORE damaging material that could harm them to a far greater degree. It doesn't even have to be court-admissible to be damaging.
This is about leverage, and perhaps even bluff. It's never a binary situation, nor are there any guarantees.
I use deadmansswitch.net - it sends you an email to verify that you are still alive, but you can also use a Telegram bot. In this case I have it set to send a passphrase to an encrypted file with all of my information to trusted individuals.
If your enemy knows how your switch works it is more feasible to disable it. In this case taking control of either that service or your email should do the trick.
I run that service, and, so far, no issues. It's definitely not secure against server takeover, but it's much easier than making your own switch reliable.
All that stuff looks fun, but I'm utterly terrified at the idea of it malfunctioning. Like, in a false-positive way. And, as a professional deformation, I guess, it is basically an axiom for me that any automation will malfunction at some point for some ridiculously stupid and obvious-in-the-hindsight reason I absolutely cannot predict right now.
I mean, seriously, it isn't a laughable idea that a bomb that will explode unless you poke some button every 24 h might eventually explode even though you weren't incapacitated and dutifully pressed that button. I'm not even considering the case that you might have been temporarily incapacitated. People wouldn't call you paranoid if you say that carrying such bomb is a stupid idea.
I totally see where you're coming from, and I agree—this project definitely isn't fool-proof. But honestly, it feels like the best option for making sure you're really gone while keeping privacy in mind.
As technology advances, we will develop more effective means of determining whether someone is truly deceased, for e.g. something like Neuralink could provide significantly improved methods for verifying actual death.
They were whistleblowers related to Boeing manufacturing and quality control.
Boeing manufacturing is also the source of the persistent Boeing problems and issues that goes back to before the MCAS catastrophic incidents and has continued after MCAS was fixed.
Airbus has deeply integrated R&D and manufacturing hubs where the R&D engineers and scientists can just walk a few minutes and they will be inside the factory halls manufacturing the parts they designs.
Meanwhile Boeing has separated and placed their manufacturing plants in the US states where they can get most federal and state tax benefits for job creation.
> Airbus has deeply integrated R&D and manufacturing hubs where the R&D engineers and scientists can just walk a few minutes and they will be inside the factory halls manufacturing the parts they designs.
This is not true. Airbus has a history of competition between French and Germany parts. The assembly plants are spread in France, Germany, UK, Spain, Italy. No such things as deeply integrated R&D and manufacturing hubs.
Boeing crisis makes Airbus look better. Airbus itself isn’t renown for efficiency.
Correct me if I'm wrong but all he was whiltleblowing is that OpenAI trained on copyrighted content, which is completely normal and expected although its legality is yet to be determined.
You also want to record a dying declaration and include it with the DMS if you’re afraid for your life. They can carry weight in court even if you’re mot available for cross examination.
Or they could just write a blog post and give interviews explaining their objections. Which this guy did. Why do you think there is some extra secret information he was withholding?
It would likely be safer to write a service and have interdependent relationships between redundant hosting systems in different jurisdictions without direct connections because that way you can protect against single points of failure (eg. compromised hosts, payment systems, regulators, network providers).
I would be surprised if this isn't a thing yet on Ethereum or some other well known distributed processing crypto platform.
This is one of the most actionable and sound comments on this post. If interested, I always recommend the book “The Gulag Archpielago” because of all the repression examples and how to protect oneself. I wish you would speak with the other commenter who studied whistleblowers for 20 years.
Given the outcomes of the Facebook mood experiments and how I've seen people put together very targeted ads, I'm wondering whether it's possible to induce someone (who's already under a lot of pressure) to commit suicide simply via a targeted information campaign. I'm speculating less on what happened here, and more on the general "yes that would be possible" situation.
How would one protect themselves from something like this? Avoid all 'algorithmically' generated data sources, AdBlock, VPN, don't log in anywhere?
On Reddit there’s this thing about suicide prevention. Essentially, if someone thinks that you are suicidal, they can make reddit send you a very official looking “don’t do it” message.
I found that people are using it to abuse those they hate. I’ve received the message a few times when I had an argument with someone. Apparently it's a thing:
There’s something profound about someone looking serious(official looking reddit account) giving you the idea of suicide. The first time I remember feeling very bad, because it’s written in a very official and caring way, it was like someone telling me that “I hate you so much that I spent lots of energy to meticulously tell you dat I want you to kill yourself” but also made me question myself.
Wow, that's the first time I'm hearing about that tactic.
And it's dawning on me how egregious that is because it could be inoculating you against the very messages meant to dissuade you. Though I'm unsure how effective those messages were to begin with.
Yeah, it's making you look back at yourself. Like when someone tells you that you look tired or sick or something like that, and you are actually not but you still need to check it up because they might have a point. Then more often than not you start feeling that way. It's suggestive.
Oh, I got a lot of those for hate. The good news is it was easy to report them and I got most people who sent them banned. It's highly against reddit's TOS and something they enforce.
YouTube is the only algorithmic thing I use and I'm not getting depression content so not sure what the point is. I do constantly click not interested to get the algorithm to recommend what I actually want to though. If I was getting bad content I'd think it'd be pretty easy to stop watching YouTube and watch more Netflix or something.
Completely removing all screens would be quite hard though yes in the modern world. Tbh idk how people lived before TVs.
I don't know how much this is embellished, but I'd say it's not too hard.
for defence, as others have said, walk away from the phone. spend time with friends.
I personally swear out loud followed by the name of the company whenever I see a YouTube advert, I hope it helps me avoid making the choices they want me to.
In theory, but not without including a larger target group as your audience. Back in the day an audience on Facebook needed a size of at least 20, but I'm unsure what the limit is now.
Your ads would still need to be reviewed and would likely not pass the filters if they straight up encourage self harm.
"No evidence of foul play." What about the evidence that he's only 26 and has a successful career in the booming Ai industry? He doesn't seem a likely candidate for suicide.
Fair use hasn't been tested in court. He has documents that show OpenAI's intention and communications about the legal framework for it. He was directly involved in web scraping and openly discussing the legal perspectives with his superiors. That is damning evidence.
He might have been under pressure from attention he got from the press for whistle blowing. He might have worried about career damage. 26 and working on a web scraper for a high-profile company is great, but it's nothing special. I'm not sure of his immigration status, but he could also be dealing with visa issues.
people dont kill themselves because they dont have a good job. thats a weird and naive belief that upper class people have. people kill themselves because they are mentally unwell, fundamentally. except situations like terminal illness.
people dont kill themselves because they dont have a good job.
Countless people have killed themselves upon losing a job. Jobs are fundamental to our identity in society and the ramifications of job loss are enormous.
people kill themselves after lots of different stressful life events. the reason is that the stress induces depression/ mental dysfunction. the difference between people who do and dont, besides having the information or wisdom to put the situation in context and avoid the stress in the first place, is robustness of mental health. its a mental health issue not a jobs issue.
I think mental health is more stressful nowdays because people have fewer in-person hangout friends outside of jobs+school. So when they lose a job, there is less of a social network to process it with. Just my speculation tho.
its family. everyone is disconnected now so theres nobody to catch you when you fall. this is the way it is in the netherlands but they have excellent social services to make up for it… meanwhile the rest of the world, entering into this way of existing for the first time, doesnt have anything to make up for it. people are just fucked. the times we are living through right now are way harder than anyone appreciates. be strong
I mean in the last week a guy with a similar profile shot and killed the United Healthcare CEO.
But frankly this is a "oh that person seemed so happy, how could they have been depressed!?" line of thinking. The 2021 suicide death rate in the US population for the 26 to 44 age bracket is 18.8 per 100,000[1]. It is literally the highest rate (second is 18-25), and it is wildly skewed in favor men (22.8 / 100,000 vs 5.7 per 100,000).
Have you considered that maybe testifying against the company you work for and may have some personal connection to is very stressful.
I'm being serious that someone in that situation may have mixed feelings about doing the right thing vs betraying friends/bosses and how they may have contributed to wrongdoing in the testmony
Sure, but as soon as someone says "what are the odds someone with X features kills himself" - well I didn't invoke the statistical argument did I?
The answer is: it's right within the profile. You don't get to say "what are the odds!?" and then complain about the actual statistics - as noted elsewhere in this thread, the Birthday Paradox[1] is also at play.
What are the odds of any individual whistleblower dying? Who knows. What are the odds of someone, somewhere, describable as a whisteblower dying? Fairly high if there's even a modest number of whistleblowers relative to the method of death (i.e. Boeing has dozens of whistleblower cases going, and OpenAI sheds an employee every other week who writes a critique on their blog about the company).
This same problem turns up with any discussion of vaccines and VAERS. If I simply go and wave my hand over the arm of 1,000 random people then within a year it's virtually guaranteed at least 1 of them will be dead, probably a lot more[2]. Hell, at a standard death rate of 8.1/1000, OpenAI's standing number of employees of 1,700[3] means in any given year it's pretty likely someone will die - and since "worked for OpenAI" is a one-way membership, year over year "former OpenAI employee dies" gets more and more likely.
So, not at all the point of the article, but ... who does Mercury News think is benefited by the embedded map at the bottom of the article with a point just labeled "San Francisco, CA", centered at Market & Van Ness? It's not where the guy lived. If you're a non-local reader confused about where SF is, the map is far too zoomed in to show you.
The embedded map has a link to a "story map" that drops a pin for each recent story, mostly around the bay area. Probably a default to embed a zoom-in on each story's map entry at the bottom of the story text.
They mention "Lower Haight" and "Buchanan St" for the apartment location. In lieu of an exact address of his apartment, I feel like the marked location is reasonably close to situate the story within the area - within a half mile or so?
“The largest armed uprising since the Civil War” is a very different situation than murdering an unarmed tech dude in his house and covering it up as a suicide.
Totally agree, there's just not enough money going around in the valley to make it worthwhile discontinuing a person's life. Glad we're on the same page.
Single-purpose accounts aren't allowed on HN, and neither is using the site primarily for political/ideological/national battle. Since this account has been doing those things, I've banned it.
Please don't create accounts to break HN's rules with.
The response to dang's ban comment was posted in under a minute by my recollection (I saw dang's comment in the New Comment feed and by the time I browsed there the response was already in place).
'Ban' means ban but it can take a minute or two to take hold and thanks to a custom backend might still allow responses to ban notices - HN does allow people to talk themselves back from the brink once or twice if they engage and explain their actions and indicate they'll do better etc.
Yes I think some people get very depressed and stressed and act out in a way they think might improve their situation. When it becomes clear that it will only end up making their situation a lot worse, they take their own life.
Sam Altman is routinely accused of and criticized for manipulative, exploitative behavior. If you can't let commenters name that as psychopathy, it is you who are using fallacious reasoning. You also did not consider the context of my remark, I specifically wrote without ambiguity, formally:
"How about "Person A is an X"." Just because I didn't type in those quotations doesn't mean one can interpret it any which way. It is technically a distinction of use-mention fallacy, in context of the response.
Meanwhile, if the above conservative commenter hand-wrings, and outright patronizes/polices another, user about not making public accusations without being able to read in good faith, then they forfeit their standing that a response be held to a higher standard than theirs. They are the one playing moderator. Furthermore, my response merely challenges their logic, and alludes to the fact that many people in this community "feel" and variously describe Altman's psychopathic tendencies, to shine a light on the logic of their argument.
What this community needs is to not rationalize their Silicon-valley techno-conservative political values to suppress strongly dissenting views, and not to abuse the moderator's power by misreading and misreporting people who criticize their prejudiced comments. I am not responsible for their lack of reading skills and not knowing what a use-mention fallacy is.
I'll also note the subtle smooth presumption of telling someone it's really about how they feel. No, I believe things and my specific beliefs may be right or wrong, and at its core it is the actual ad hominem of making one side about subjective feelings rather than evidence-based, rational beliefs the way the ground rules of good faith discussions require in the first place.
Besides being glib, insulting, and wildly overstated, "$X is a psychopath" is an internet cliché. If you've read https://news.ycombinator.com/newsguidelines.html, I shouldn't have to explain to you why we don't want that on HN.
How can someone saying that both are possibilities and to not jump to conclusions off of such little info be uselessly speculating? And then you use it as an excuse to exclaim a mostly empty gesture as if all you were interested in was virtue signaling.
When I die, as a last wish, I hope people will go wild with speculative assassination theories. Especially if the police find "no evidence" of foul-play or the coroner says it was due to "old age"--it can only mean the cops and docs are also in on it.
Half an hour of talk with his relatives, friends, girlfriend etc and I can suggest if he or someone else murdered him. I doubt the police will go such hassle
Think about the current geopolitical climate and the possibility this person was actually targeted by malicious actors as a way to sow chaos and distrust in the establishment in the West. What better way to make people grow weary of the digital platforms that are making up a majority part of their lives in their bubbles.