Your use of the phrase 'to use the British term', implying you think that's clever, relevant, and not at all contentious, is the first indicator that you don't know what you're talking about.
The X200/201, X220/230, X240/250/260, and X270/280/A275/285 are each substantially different. Between slashed groups there is limited parts interchangeability, e.g. the battery, or the X250 trackpad retrofit on the X240. In case that matters.
- To reduce the attack surface
In the event of a site with a *.gov.uk subdomain getting compromised, at least it now won't be able to steal auth cookies for internal services
- To keep test/stage as faithful a copy of prod as possible, they will have a totally separate but identical DNS setup, CDN setup, load balancing, etc. Theoretically the only difference would need to be one routing rule, rather than things that might start creating edge-case bugs with certs/cookies etc. where domains have different numbers of segments. It also allows for more certainty/confidence that something tested in a lower environment will work when promoted to prod
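The attack-surface point can be sketched as code. This is a toy model, not real browser logic: the function name and the tiny hard-coded suffix set are invented for illustration, but the rule is the one browsers apply once a suffix is on the Public Suffix List, i.e. a cookie's Domain attribute may not be a public suffix, so one compromised site under gov.uk cannot plant or read cookies scoped to all of gov.uk.

```python
# Toy model of public-suffix cookie scoping (hypothetical function,
# tiny illustrative suffix set -- real browsers consult the full
# Public Suffix List).
PUBLIC_SUFFIXES = {"gov.uk", "co.uk", "com"}

def cookie_domain_allowed(request_host: str, cookie_domain: str) -> bool:
    cookie_domain = cookie_domain.lstrip(".")
    if cookie_domain in PUBLIC_SUFFIXES:
        return False  # can't scope a cookie to a whole public suffix
    # otherwise the Domain must be the host itself or a parent of it
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

print(cookie_domain_allowed("evil.gov.uk", "gov.uk"))       # False
print(cookie_domain_allowed("evil.gov.uk", "evil.gov.uk"))  # True
```

With gov.uk on the list, a compromised evil.gov.uk can still set cookies for itself, just not for every sibling *.gov.uk service.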
Disabling cookies quite naturally disables local storage as well, since localStorage can be used to identify and track a user just as surely as a cookie can, at least if JavaScript is enabled.
I selectively enable cookies on websites that I wish to remember me as required. The vast majority of websites are perfectly capable of loading and operating without cookies / localStorage (though more recently a lot of them will keep popping up annoying cookie banners on every page load, since they can't remember I asked them not to use cookies if I don't let them set a cookie to remember that fact, ironically enough).
There are numerous sites that are not _useful_ without cookies, but even the majority of those detect that cookies are disabled and explain that they are required, and most of the rest do something broken but basically understandable, like generating an 'XSRF detected' error or redirecting one back to the login page over and over again.
Even the small minority that fail to do anything at all and just sit there showing a blank page are at least harmless.
Doing nothing useful _and_ using >100% CPU would therefore seem to entail either an unusually high level of incompetence, a wanton disregard for good practice (i.e. graceful degradation) or outright malice.
I'll choose to apply Hanlon's razor and assume it's the former until proven otherwise.
...how else is the website that is your start page supposed to remember how you have set it up?
You sound like someone who rode on bald tires for a year, finally crashed into a wall, and then sued the manufacturer because "the car didn't tell me to change them"
> Will this cause performance issues for sites that use static cookieless domains for js, images etc
> Google themselves do this with gstatic.net and ytimg.com etc
Most probably not. The point of cookieless domains is that you can use a very simple web server to serve content (no need to handle user sessions, files are pre-compressed and cached, etc.), and it lowers incoming bandwidth a lot. If you have a lot of requests (images, CSS, JS), the cookie information adds up quickly.
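To see how quickly those bytes add up, a back-of-envelope calculation (all figures are assumptions for illustration, not measurements):

```python
# Rough upstream-header cost of sending cookies on every asset request.
cookie_bytes = 800        # assumed size of session + analytics cookies
requests_per_page = 60    # assumed asset count on a heavy page (images, CSS, JS)

overhead = cookie_bytes * requests_per_page
print(overhead)  # 48000 bytes of extra upload per page view
```

On a cookieless domain that entire overhead disappears from every asset request.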
Opening video thumbnails from ytimg.com will still be cached for youtube.com as before. The only thing that changes is embedded videos on third-party websites, as those won't be able to reuse ytimg.com thumbnails cached elsewhere.
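A minimal sketch of how partitioned caching behaves, assuming a cache keyed by (top-level site, URL) rather than by URL alone (a simplified model, not actual browser code):

```python
# Simplified model of a partitioned HTTP cache: the key includes the
# top-level site, so a ytimg.com thumbnail cached while on youtube.com
# is NOT reused when example.com embeds the same video.
cache: dict[tuple[str, str], bytes] = {}

def fetch(top_level_site: str, url: str) -> bytes:
    key = (top_level_site, url)
    if key not in cache:
        cache[key] = b"...response body..."  # pretend network fetch
    return cache[key]

fetch("youtube.com", "https://i.ytimg.com/vi/x/hq.jpg")
fetch("example.com", "https://i.ytimg.com/vi/x/hq.jpg")
print(len(cache))  # 2: same URL, stored once per top-level site
```

An unpartitioned cache would key on the URL alone and hold a single shared entry, which is exactly what allowed cross-site tracking via cached resources.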
Couldn't the same thing be achieved by routing e.g. google.com/static/ to a separate simple webserver, instead of using another domain? Or use a subdomain, e.g. static.google.com.
The current way seems like needless DNS spam to me...
Even if Google used a separate highly optimised webserver for google.com/static/jquery.js, users who are logged in would be sending their auth cookies when requesting the library.
Given that generally people have slower upload than download, shaving off a few bytes from requests is worth it.
I also recall that browsers [used to (?)] limit concurrent requests per domain, which this helps work around
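A rough model of why sharding assets across extra domains helped under that limit, assuming the historical figure of about 6 concurrent HTTP/1.1 connections per host (the function and numbers are illustrative, not measured):

```python
import math

# Each "round" completes per_host_limit * hosts downloads in parallel,
# so more hosts means fewer rounds for the same number of assets.
def download_rounds(resources: int, hosts: int, per_host_limit: int = 6) -> int:
    return math.ceil(resources / (per_host_limit * hosts))

print(download_rounds(60, 1))  # 10 rounds from a single host
print(download_rounds(60, 3))  # 4 rounds when sharded across 3 hosts
```

HTTP/2 multiplexes many requests over one connection, which is a large part of why domain sharding has fallen out of favour.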