Additional details I wrangled for this rabbit hole. I don't think it's t.co doing this intentionally, but rather poor handling of 'do you have our cookies or not'. Everyone in this thread _proving things_ without taking into account the complexity of the modern web.
man curl
-b, --cookie <data|filename>
(HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line.
---
Add that option to your curl tests.
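For anyone following along, `-b` accepts either inline data (`name=value`) or a filename in the Netscape cookie-jar format that `-c` writes out. A minimal sketch of that format — the cookie name and value below are made up for illustration (real jars are tab-separated; any whitespace works for this awk sketch):

```shell
# Fields: domain, include-subdomains, path, secure-only, expiry, name, value.
cat > cookies.txt <<'EOF'
# Netscape HTTP Cookie File
.t.co	TRUE	/	TRUE	0	example_cookie	example-value
EOF

# -b accepts either inline data ("name=value") or a filename like this one:
#   curl -b cookies.txt -c cookies.txt -L https://t.co/...
# List the cookies the jar would replay:
awk '!/^#/ && NF { print $6 "=" $7 }' cookies.txt
```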
---
$ time curl -s -b -A "curl/8.2.1" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.245s
user 0m0.087s
sys 0m0.034s
---
$ time curl -s -b -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.265s
user 0m0.103s
sys 0m0.023s
---
$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum
eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2 -
real 0m1.254s
user 0m0.100s
sys 0m0.018s
---
Amazing that this poor handling of 'do you have our cookies or not' only affects newspapers and social media sites that Elon doesn't like! What a coincidence.
Alright, thanks for explaining that. Here's what I see explicitly setting the cookie jar:
$ time curl -s -b cookies.txt -c cookies.txt -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/DzIiCFp7Ti
[t.co meta refresh page src]
real 0m4.635s
user 0m0.004s
sys 0m0.008s
$ time curl -b cookies.txt -c cookies.txt -A "wget/1.23" -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m4.635s
user 0m0.011s
sys 0m0.005s
$ time curl -b cookies.txt -c cookies.txt -e ";auto" -L https://t.co/DzIiCFp7Ti
curl: (7) Failed to connect to www.threads.net port 443: Connection refused
real 0m0.129s
user 0m0.000s
sys 0m0.013s
The "Failed to connect" errors are likely threads.net blocking those user agents, but the delay still shows up in the timings, and it differs from the first user-agent attempt.
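The transcript above elides the interstitial as "[t.co meta refresh page src]". For anyone wanting to inspect where such a page points, here's a sketch of pulling the redirect target out of a meta refresh tag — the markup below is a hypothetical stand-in, and t.co's real page may differ:

```shell
# Hypothetical interstitial; placeholder target URL for illustration.
cat > interstitial.html <<'EOF'
<head><meta http-equiv="refresh" content="0;URL=https://www.threads.net/example"></head>
EOF

# Extract the redirect target from the meta refresh attribute:
grep -o 'URL=[^"]*' interstitial.html | cut -d= -f2-
```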
I can replicate this behavior fairly easily in a browser.
1. Open incognito window in Chrome
2. Visit https://t.co/4fs609qwWt -> 5s delay
3. Open a second tab in the same window -> no delay
4. Close window, start a new incognito session
5. Visit https://t.co/4fs609qwWt -> 5s delay returns
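The cookie hypothesis behind those five steps can be modeled locally with a toy "server" that is slow only for cookie-less requests. Everything here — the function name, cookie value, and messages — is invented for illustration; it's a sketch of the claimed behavior, not t.co's actual logic:

```shell
# Toy model: delay only when the request carries no cookie.
fetch() {  # fetch <cookie-jar-file>
  if [ -s "$1" ]; then
    echo "no delay (cookie replayed)"
  else
    echo "~5s delay (fresh session)"
    echo "session=hypothetical" > "$1"   # "server" hands out its cookie
  fi
}

rm -f jar.txt
fetch jar.txt   # step 2: first visit in a fresh incognito window
fetch jar.txt   # step 3: second tab, same session -> cookie present
rm -f jar.txt   # step 4: close the incognito window
fetch jar.txt   # step 5: new session -> delay returns
```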
Your humble anonymous tipster notes to their skeptical audience that browsers are capable of caching all sorts of things, even something as peculiar as an HTML page.
Here's a simpler test that I think replicates what I'm describing in the GP comment with regard to cookie handling:
Not passing a cookie to the next stage; pure GET request:
$ time curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > nocookie.html
real 0m4.916s
user 0m0.016s
sys 0m0.018s
Using `-b` to pass the cookies _(same command as above, just adding `-b`)_
$ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > withcookie.html
real 0m1.995s
user 0m0.083s
sys 0m0.026s
Look at the differences in the resulting files for 'with' and 'no' cookie. One redirect works in a timely manner; the other takes ~4-5 seconds to redirect.
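To compare the two saved bodies concretely, `cmp` works; the file contents below are placeholders, since the real `nocookie.html` and `withcookie.html` come from the curl runs above:

```shell
# Placeholder bodies standing in for the real curl output files:
printf '<meta http-equiv="refresh" content="0;URL=...">\n' > nocookie.html
printf '<html>final destination page</html>\n' > withcookie.html

# cmp -s exits non-zero when the files differ:
if cmp -s nocookie.html withcookie.html; then
  echo "identical responses"
else
  echo "responses differ"
fi
```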
You're completely missing the point, which is that the 5 second delay doesn't exist at all for most t.co links, even without cookies. The delay only exists for a few Musk-hated domains.
In your second example, `-b` consumes the next argument, so you are passing a cookie file named `-A` and then trying to GET the URL "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" followed by https://t.co/4fs609qwWt
Add that option to your curl tests.
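The mis-parse described above follows from how standard option parsing works: an option that requires an argument (like curl's `-b`) takes the next token whole, even when that token looks like another flag. A minimal sketch with a toy getopts parser — not curl's actual code, just the same parsing rule:

```shell
# Toy parser where -b and -A both require a value, as in curl.
parse() {
  OPTIND=1
  while getopts "b:A:" opt; do
    case $opt in
      b) echo "cookie-source=$OPTARG" ;;
      A) echo "user-agent=$OPTARG" ;;
    esac
  done
  shift $((OPTIND - 1))
  echo "url=$1"
}

# `-b` swallows `-A`, so the UA string ends up parsed as the URL:
parse -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0)" https://t.co/example
# -> cookie-source=-A
# -> url=Mozilla/5.0 (X11; Linux x86_64; rv:102.0)
```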