Considering how aggressive GitHub is with marking new accounts as spam, it's unlikely they signed up with a VPN or Tor. My money is on them being identified.
I would have guessed that their best shot at identifying the leaker would have been through their internal security team. Hoping that a technically competent individual will be uncovered by GitHub feels like a last-ditch attempt by a company that doesn't appear to have internal control over its own IP.
> Considering how aggressive GitHub is with marking new accounts as spam, it's unlikely they signed up with a VPN or Tor.
Unless something has changed, you certainly can. I signed up with a Protonmail address over a VPN with no issues (though it's been some years).
DMCA claims can go up the chain. For example, they could get the email address from GitHub, then subpoena the email provider for info to unmask the person (for example, any phone number used when signing up or logging in to the email account). Then, they could subpoena the phone company to identify the perpetrator.
Just irrational Musk hate, or is there any reason we want people to be able to freely share all code, opening up leaks that affect everyone using a site or piece of software?
Musk has nothing to do with it. There's a whole movement full of people who want to have the code for everything they run. It's called Open Source. Of course, there's the matter of consent, and someone's private code being shared is not what the movement is about, but yes, some people want everyone to be able to freely share all code.
It might also shock you to find out that there are even groups out there that want everything for free! They can be found at places like the pirate bay, or libgen.
Microsoft owns a pretty big chunk of OpenAI. They are going to make a lot more from that investment than the nickels and dimes that their search engine generates. They can afford to pay for chat.
Since the issue looks to be consent over releasing the footage, would it be the same situation if the footage came from their body cams, obtained via a FOIA request? Provided all of their cams didn't simultaneously and mysteriously malfunction, of course. Either way, I hope things go Afroman's way on this. The raid itself was complete clown shoes.
They have no claim. Irizarry v. Yehia[0], among other cases, concluded that public servants executing their duties have no right to privacy against being filmed. There was another case that reached the same conclusion because a woman recorded cops wrecking her house during a search and they then tried deleting the footage from her laptop; unfortunately, I cannot find that case again.
In practice, the larger the organization the less likely the potential legal sanctions are to dissuade them. My observation has been that once an organization (in the US anyway) grows large enough it is in a special protected status where no real penalties can come to it and there is certainly no risk of exposure to criminal charges for the decision makers.
As someone who works for large enterprises: they are absolutely terrified of legal sanctions and pay huge amounts to contractors who can mitigate the risk. And sanctions do regularly happen, they are just not advertised on the HN frontpage I guess :)
I've been following tech long enough to know that as soon as the computer can figure out which button to press it's only going to click on ads, I guarantee it.
Isn't it trivial to make a computer click on ads, though? Just run Selenium, apply the filtering rules from an adblocker, and then click on a random element that would have been blocked.
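The rule-matching half of that idea can be sketched without a browser. This is a toy matcher for a simplified subset of EasyList-style filter syntax (only `||` domain anchors, `*` wildcards, and `^` separators are handled here; real adblock rules are far richer, and actually clicking elements would still need Selenium or similar):

```python
import re

def rule_to_regex(rule: str) -> re.Pattern:
    # Simplified EasyList subset:
    #   ||host  -> anchor at a scheme + optional-subdomain boundary
    #   *       -> wildcard
    #   ^       -> separator character (or end of URL)
    if rule.startswith("||"):
        body, prefix = rule[2:], r"^[a-z]+://([^/]*\.)?"
    else:
        body, prefix = rule, ""
    pattern = (re.escape(body)
               .replace(r"\*", ".*")
               .replace(r"\^", r"([^\w.%-]|$)"))
    return re.compile(prefix + pattern)

def is_ad_url(url: str, rules: list[str]) -> bool:
    # A URL counts as an ad if any filter rule matches it.
    return any(rule_to_regex(r).search(url) for r in rules)

rules = ["||ads.example.com^", "/banner/"]
print(is_ad_url("https://ads.example.com/track.js", rules))   # True
print(is_ad_url("https://example.com/banner/img.png", rules)) # True
print(is_ad_url("https://example.com/article", rules))        # False
```

A headless browser would then only need to collect candidate elements, run their `src`/`href` through a matcher like this, and click one of the hits.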
I think their point is the opposite - it's *not* trivial to make the computer click the correct "Download Now!" button to get Minecraft versus the other 4 that lead to malware.
And then that's going to be met with MS making it "impossible" for bots to automate clicking on ads, which will have the unintended consequence of making things harder for power users.
I thought about making something similar, but what scared me away was the idea that some educational institution could use it and decide a student was being dishonest due to a false positive. I wouldn't want to bear responsibility for something like that.
And secondly, this is a never-ending race. Even if it were to be able to detect ChatGPT content with 100% accuracy today, it would just be used to assist in training another model to defeat it.