There are plenty of unmaintained sites out there that do stupid things in reaction to UA strings, and one of those stupid things is matching with a regex that expects specific substrings to exist.
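
A hypothetical sketch of that failure mode (the function name, regex, and version cutoff below are invented, but the pattern is common): the check assumes a "Chrome/<major>" token always exists in the header.

  // Fragile UA sniffing: assumes "Chrome/<major>" is always present
  // and compares against a cutoff someone hard-coded years ago.
  function isSupportedBrowser(ua: string): boolean {
    const match = ua.match(/Chrome\/(\d+)\./); // null if the token moves or vanishes
    return match !== null && Number(match[1]) >= 80;
  }

  isSupportedBrowser("Mozilla/5.0 ... Chrome/120.0.0.0 Safari/537.36"); // true
  isSupportedBrowser("SomeNewFormat/1.0"); // false: "unsupported browser" page forever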

Using a completely new string format in the same field (or removing it entirely) breaks a lot of sites that'll never be fixed.

Freezing it prevents this. And if we're freezing it and creating a new system anyway, why not go for something queryable, without all the baggage?
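
The queryable system here is presumably UA Client Hints; a rough sketch of the script-side API (navigator.userAgentData, as shipped in Chromium-based browsers; the version values in the comments are illustrative):

  // Structured, queryable alternative: ask for named fields
  // instead of regexing a legacy string.
  const uaData = (navigator as any).userAgentData;
  if (uaData) {
    console.log(uaData.brands); // e.g. [{ brand: "Chromium", version: "120" }, ...]
    console.log(uaData.mobile); // boolean
    // Higher-entropy fields require an explicit async request:
    uaData
      .getHighEntropyValues(["platform", "platformVersion"])
      .then((values: Record<string, string>) => console.log(values));
  }

The same structured fields are carried on the wire in the Sec-CH-UA request headers, so servers can query them too.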

I tried using no UA header at all for a few weeks, many years ago when "appsites" weren't as common, and even then a lot of sites mysteriously failed to load, showed odd server errors, or even banned my IP for being a bot.

I expect no UA header to be even less usable now that sites are more paranoid and app-ified, so instead I use a random one. That still confuses some sites...

A random one makes you unique and thus identifiable across sites.

I meant random as in "randomly picked from list of common UAs", not as in "randomly generated GUID".
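
A minimal sketch of that approach (the listed UA strings follow the real browsers' formats, but the versions are illustrative):

  // Pick a User-Agent at random from a short list of common, real-looking
  // strings rather than generating a unique one that would fingerprint you.
  const COMMON_UAS: string[] = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Safari/605.1.15",
  ];

  function randomCommonUA(): string {
    return COMMON_UAS[Math.floor(Math.random() * COMMON_UAS.length)];
  }

Picking from a list of common strings keeps you inside a large anonymity set instead of making your browser the only one sending that exact value.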

A random UA can get you outright blocked for looking like a bot. So can faking Googlebot and the like.

How is something without all the baggage expected to help with the sites doing the stupid things?

Users still want to see those pages. If browser X removes the UA string, the site breaks for them, and they switch to browser Y, which still sends the old user agent.

But won't removing just the baggage parts of the UA ("like Gecko", "KHTML", etc.) break those sites anyway?

The old UA will exist as it always has: frozen, unchanged. Nothing will be removed from the literal `user-agent` header value.
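
To make "frozen" concrete: Chromium's reduced UA, for instance, keeps the legacy shape but pins everything except the major version (the value below is illustrative):

  User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36

The platform token, the "AppleWebKit/537.36 (KHTML, like Gecko)" baggage, and the trailing "Safari/537.36" never change, and the minor/build/patch components are zeroed, so old regexes keep matching.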

Yep, they can still receive those crazy UAs; see https://wiki.mozilla.org/Compatibility/Go_Faster_Addon
