Hope the leaker used good opsec and that this goes nowhere.


Considering how aggressive GitHub is with marking new accounts as spam, it's unlikely they signed up with a VPN or Tor. My money is on them being identified.


> My money is on them being identified.

I would have guessed that their best shot at identifying the leaker would have been through their internal security team. Hoping that a technically competent individual will be uncovered by GitHub feels like a last-ditch attempt by a company that doesn't appear to have internal control over their own IP.

> Considering how aggressive GitHub is with marking new accounts as spam, it's unlikely they signed up with a VPN or Tor.

Unless something has changed, you certainly can. I signed up with a Protonmail address over a VPN with no issues (though it's been some years).


They cracked down on anonymous accounts after they launched Actions and every spammer in the world tried to run crypto miners on them.

I've tried signing up via Tor in the past, and my account was automatically flagged with no ability to create public repositories.


A freshly-imaged machine on public wifi should be more than enough to hide their identity. No reason they have to use Tor.


DMCA claims can go up the chain. For example, they could get the email address from GitHub, then subpoena the email provider for info to unmask the person (for example, any phone number used when signing up or logging in to the email account). Then, they could subpoena the phone company to identify the perpetrator.


Next we'll hear about Musk buying a coffee shop in downtown SF to get access to their security cameras.


Is this just irrational Musk hate, or is there any reason we should want people to be able to freely share all code, opening up leaks that affect everyone using a site or piece of software?


Musk has nothing to do with it. There's a whole movement full of people who want to have the code for everything they run. It's called Open Source. Of course, there's the matter of consent, and someone's private code being shared is not what the movement is about, but yes, some people want people to be able to freely share all code.

It might also shock you to find out that there are even groups out there that want everything for free! They can be found at places like The Pirate Bay or Libgen.


Of course it does.


Well it’s either that or you pay a subscription for it. Microsoft has to recoup their costs somehow.


Or it ends up being both, because Microsoft.


Microsoft owns a pretty big chunk of OpenAI. They are going to make a lot more from that investment than the nickels and dimes that their search engine generates. They can afford to pay for chat.


Aren't search ads the biggest revenue source for Google, one of the biggest tech companies around?


Compute and bandwidth are not free. Just because they can afford to give it away for free doesn't mean there's a good business case to do so.


I felt the same way when I really wanted a Super Soaker 2000 and then the neighbor kid got one before me. "MOOOOM!!!"


This is exactly my feeling about the whole letter. I don't understand why people are scared of ChatGPT and the like, when it's just a better IntelliSense.


Windows 11 is thinly veiled state-sponsored malware masquerading as a productivity tool.


Trying to stop the spread of LLaMA with DMCAs is an interesting technique. Like the internet version of trying to put out a grease fire with water.


Every day, another company re-discovers the Streisand effect.


Since the issue looks to be consent over the footage being released, would it be the same situation if the footage came from their body cams, requested via FOIA? Provided all of their cams didn't simultaneously and mysteriously malfunction, of course. Either way, I hope things go Afroman's way on this. The raid itself was complete clown shoes.


They have no claim. Irizarry v. Yehia [0], among other cases, concluded that public servants filmed in the execution of their duties have no right to privacy. Another case came to the same conclusion after a woman recorded cops wrecking her house during a search and then tried deleting the footage from her laptop; unfortunately, I cannot find that case again.

[0] https://www.forbes.com/sites/nicksibilla/2022/07/24/first-am... (includes a link to the ruling directly from the 10th circuit website)


In practice, the larger the organization, the less likely potential legal sanctions are to dissuade it. My observation has been that once an organization (in the US, anyway) grows large enough, it enters a special protected status where no real penalties can reach it and the decision makers face essentially no risk of criminal charges.

Source: front page here every single day.


As someone who works for large enterprises: they are absolutely terrified of legal sanctions and pay huge amounts to contractors who can mitigate the risk. And sanctions do regularly happen; they're just not advertised on the HN front page, I guess :)


Hopefully there will be a plug-in-your-own-API-key open source thing then. Even better.


The future is now:

    gptask() {
        # Build the JSON request body; jq handles all quoting and escaping.
        local data response message
        data=$(jq -n \
                  --arg message "$1" \
                  '{model: "gpt-3.5-turbo",
                    max_tokens: 4000,
                    messages: [{role: "user", content: $message}]}')

        # POST to the chat completions endpoint.
        response=$(curl -s https://api.openai.com/v1/chat/completions \
                        -H "Content-Type: application/json" \
                        -H "Authorization: Bearer $OAIKEY" \
                        -d "$data")

        # jq -r emits the decoded string directly, so no sed cleanup is needed.
        message=$(echo "$response" | jq -r '.choices[0].message.content')

        echo "$message"
    }

    export OAIKEY="<YOUR_KEY>"
    gptask "what is the url for hackernews"


Not sure how to make it a single session, though, so ChatGPT can understand the previous conversation.


Quick script:

    #!/usr/bin/env bash
    set -uf -o pipefail
    IFS=$'\n\t'

    # ANSI colors for the bot's lines.
    BOT='\033[33m'
    NC='\033[0m'

    # Conversation history: one JSON message object per turn.
    messages=()

    # Strip leading/trailing whitespace.
    trim() {
      local var="$*"
      var="${var#"${var%%[![:space:]]*}"}"
      var="${var%"${var##*[![:space:]]}"}"
      printf '%s' "$var"
    }

    complete() {
      local message="$1"
      local data
      # jq -R -s safely JSON-encodes the raw user input.
      messages+=("{\"role\": \"user\", \"content\": $(printf '%s' "$message" | jq -R -s '.')}")
      # Join all turns into a JSON array for the API.
      processed_messages=$(printf '%s,' "${messages[@]}")
      processed_messages="[${processed_messages%,}]"
      data=$(jq -n \
        --arg model "gpt-3.5-turbo" \
        --argjson messages "$processed_messages" \
        '{ model: $model, messages: $messages }')

      response=$(curl -s https://api.openai.com/v1/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d "$data")

      # Keep the JSON-encoded reply for the history and a decoded copy for display.
      message=$(echo "$response" | jq '.choices[0].message.content')
      printable_message=$(echo "$response" | jq -r '.choices[0].message.content')
      printable_message=$(trim "$printable_message")
      echo -e "${BOT}Bot:${NC} $printable_message"

      # Append the assistant's reply so the next turn has full context.
      messages+=("{\"role\": \"assistant\", \"content\": $message}")
    }

    # Read-eval loop: each line typed is one user turn.
    while true; do
      read -r -p $'\e[35mYou:\e[0m ' message
      complete "$message"
    done
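
Each turn, user and assistant alike, is appended to the messages array, so every request replays the whole conversation; that's what gives the bot memory of previous messages.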


Just use iteration: continuously update the prompt to include the previous messages.
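
A minimal sketch of that approach, reusing the gptask function from upthread (the ask helper and the transcript labels are just illustrative, not tested):

    history=""
    ask() {
      # Fold the new user message into the running transcript.
      history+=$'\nUser: '"$1"
      # Send the whole transcript as the prompt; gptask is defined upthread.
      local reply
      reply=$(gptask "$history")
      # Record the reply so the next call has full context.
      history+=$'\nAssistant: '"$reply"
      echo "$reply"
    }

    ask "what is the url for hackernews"
    ask "and who runs it?"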


I've been following tech long enough to know that as soon as the computer can figure out which button to press, it's only going to click on ads. I guarantee it.


Isn't it trivial to make a computer click on ads, though? Just run Selenium, apply the filtering rules from an adblocker, and then click a random element that would have been blocked.


I think their point is the opposite - it's *not* trivial to make the computer click the correct "Download Now!" button to get Minecraft versus the other 4 that lead to malware.


Isn't it? Just run it on a machine with uBlock Origin.


And then that's going to be met with MS making it "impossible" for bots to automate clicking on ads, which will have the unintended consequence of making it harder to use for power users.


That explains Google's UX-understanding AI. It can describe in words what the next step in a form is.


I thought about making something similar, but what scared me away was the idea that some educational institution could use it and decide a student was being dishonest due to a false positive. I wouldn't want to bear responsibility for something like that.

And secondly, this is a never-ending race. Even if it could detect ChatGPT content with 100% accuracy today, it would just be used to help train another model to defeat it.

