I believe the main intent is to block SERP analysers, which track result positions by keyword. Not that it would help much with bot abuse, but it will make the average SEO agency's life harder and more expensive.
Last month Google also tightened YouTube policies, which IMHO is a sign that they are not reaching specific milestones, and that would definitely be reflected in Alphabet's stock price.
Speaking as a person who has played on both offense and defense: this is a heuristic that's not used frequently enough by defenders. Clients that load a single HTML/JSON endpoint without loading the CSS or image resources associated with it are likely bots (or user agents with a fully loaded cache, but defenders control what gets cached by legit clients and how). Bot data thriftiness is a huge signal.
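To make that concrete, here is a minimal sketch of the server-side bookkeeping, assuming an Express-style Node server; the IP keying and thresholds are illustrative, not from any real anti-bot product:

    import express from "express";

    const app = express();

    // sessionId -> request counters; a real system would use a proper store
    const sessions = new Map<string, { htmlHits: number; assetHits: number }>();

    function counters(id: string) {
      let c = sessions.get(id);
      if (!c) {
        c = { htmlHits: 0, assetHits: 0 };
        sessions.set(id, c);
      }
      return c;
    }

    app.use((req, _res, next) => {
      // keying on IP only for the sketch; real systems key on cookies,
      // TLS fingerprints, etc.
      const c = counters(req.ip ?? "unknown");
      if (/\.(css|js|png|jpg|woff2?)$/.test(req.path)) {
        c.assetHits++;
      } else {
        c.htmlHits++;
      }
      // made-up threshold: several page loads, zero asset loads
      if (c.htmlHits >= 3 && c.assetHits === 0) {
        console.warn(`possible thrifty bot: ${req.ip}`);
      }
      next();
    });

    app.get("/", (_req, res) =>
      res.send('<html><link rel="stylesheet" href="/style.css"><body>hi</body></html>')
    );
    app.get("/style.css", (_req, res) => res.type("css").send("body{}"));

    app.listen(3000);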
Even legitimate users might want to disable CSS, pictures, and so on; I often do when I just want to read the document.
Blind users might also have no use for the pictures. And if the document is longer than the screen, so a picture is out of view, the user might configure the client software to lazy-load it.
As a high-load systems engineer, you'd want to offload asset serving to a CDN, which makes this detection slightly more complicated.
The easy way is to attach an image onload handler in client-side JS, but that would yield a lot of false positives.
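Roughly what that could look like in the browser; the "/asset-beacon" endpoint and probe image path are hypothetical:

    // Probe whether the client actually renders images.
    const probe = new Image();
    probe.onload = () => {
      // sendBeacon queues a small POST that survives page unloads
      navigator.sendBeacon("/asset-beacon", JSON.stringify({ loaded: true }));
    };
    // cache-buster so an already-cached image doesn't mask the request
    probe.src = "/probe.png?cb=" + Date.now();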
I personally have never seen such an approach and doubt it's useful in many cases.
Unless organization policy forces you to, you do not have to put all resources behind a CDN. As a matter of fact, getting this heuristic to work requires a non-optimal caching strategy of one or more real or decoy resources - CDN or not. "Easy" is not an option for the bot/anti-bot arms race, all the low hanging fruit is now gone when fighting a determined adversary on either end.
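A sketch of what that deliberately non-optimal caching could look like, again assuming Express; the decoy route name is made up:

    import express from "express";

    const app = express();
    app.get("/decoy.css", (_req, res) => {
      // no-store keeps the decoy out of browser and CDN caches, so a
      // missing request for it is a meaningful signal on every page view
      res.set("Cache-Control", "no-store");
      res.type("css").send("/* intentionally empty */");
    });
    app.listen(3000);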
> I personally have never seen such an approach and doubt it's useful in many cases.
It's an arms race and defenders are not keen on sharing their secret sauce, though I can't be the only one who thought of this rather basic bot characteristic; multiple abuse teams probably realized it decades ago. It works pretty well against low-resource scrapers with fake UA strings and all the right TLS handshakes. It won't work against headless browsers, which cost scrapers more in resources and bandwidth, and there are specific countermeasures for headless browsers [1], and counter-countermeasures. It's a cat-and-mouse game.
1. E.g. mouse movement, made famous as one signal evaluated by Google's reCAPTCHA v2; monitor resolution and window size/position; and canvas rendering, all of which have been gradually degraded by browser anti-fingerprinting efforts. The bot war is fought on the long tail.
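As an illustration of the canvas signal, a minimal browser-side probe could look like the sketch below; real fingerprinting stacks combine far more signals than this:

    // Render fixed text, then hash the resulting pixels. Differences in GPU,
    // fonts, and anti-aliasing make the hash vary across machines.
    async function canvasFingerprint(): Promise<string> {
      const canvas = document.createElement("canvas");
      canvas.width = 200;
      canvas.height = 50;
      const ctx = canvas.getContext("2d")!;
      ctx.textBaseline = "top";
      ctx.font = "16px Arial";
      ctx.fillText("probe-text-🦊", 2, 2); // emoji exercises font fallback paths
      const bytes = new TextEncoder().encode(canvas.toDataURL());
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }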
In the US you have an option to turn front passenger airbags off and place a baby seat there.
In Europe there's no such option.
I hope those enormous modern seats are made that way for the sake of protection, because even in a bigger car, like an Audi A6, two such seats consume all the space on the back bench.
On models designed for the US market, maybe. I have checked three cars produced specifically for Europe: a Volvo, an Audi, and a Toyota. While the majority of videos and instructions show a special off switch, those cars don't have it.
Pretty sure it's mandatory - I seem to recall a considerable educational campaign when they first arrived in Poland regarding the front passenger airbag and child car seats.
I live here and have rented vehicles across pretty much all brands, and I have never seen a car where the front passenger airbag can't be turned off. In fact, there are near-identical regulatory stickers on the passenger sun visor warning that the airbag needs to be turned off when a rear-facing car seat is used in the front passenger seat.
Pure P2P node discovery is practically impossible. You can use a DHT to some extent, but initial discovery will be done using some traditional peer tracker.
You will know your subnet if you are allowed to use STUN servers, although that also points to my original complaint about WebRTC being designed in a way that makes P2P hard for no good reason.
You knowing your subnet is not the issue. The other party needs to know it.
WebRTC needs info about where to try to connect. It doesn’t specify the signaling method, which is how you get that information and share it.
If you’re already on the same subnet, use mDNS.
If you’re on different subnets it requires some kind of way to share the subnet info.
If you’re on different networks behind NAT then scanning isn’t very helpful. You’d need to scan IPs and ports because NAT randomizes the outbound ports. That’s what STUN helps you figure out. Once you’ve got that info and you build the session description structure you can send that to the other party however you want.
Public, free STUN servers exist. And people use them for all kinds of WebRTC stuff.
Neither WebRTC nor WebSockets prevents pure P2P. It sounds like your bigger issue is with NAT. IPv6 isn’t going to fix it, though, since the address space will be too large to scan for peers.
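To make the STUN step concrete, here is roughly how a browser peer discovers its NAT-mapped address; stun.l.google.com is one of the well-known free servers:

    // Create a connection with a public STUN server and watch for the
    // server-reflexive ("srflx") candidates that carry the public IP:port.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    });
    pc.createDataChannel("probe"); // ICE needs at least one media/data section
    pc.onicecandidate = (ev) => {
      if (ev.candidate?.candidate.includes("srflx")) {
        console.log("public address candidate:", ev.candidate.candidate);
      }
    };
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));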
> You knowing your subnet is not the issue. The other party needs to know it.
You misunderstood: knowing the subnet is an option for discovering nearby peers. If you know your subnet, you can scan it in a reasonable amount of time. Otherwise you'd have to scan the whole IPv4 space (or, worse, the IPv6 space).
> WebRTC needs info about where to try to connect. It doesn’t specify the signaling method which is how you get that information and share it.
Yes.
> If you’re already on the same subnet, use mDNS.
You can't - you are in the browser.
> If you’re on different subnets it requires some kind of way to share the subnet info.
Yes, that's how connections are established with an intermediary signaling service. The P2P question is about the possibility of establishing a WebRTC connection without a signaling service that runs over something else and/or requires a centralized server.
I use WebRTC in non-browser applications primarily. For those systems you can use mDNS.
I have been working on WebRTC a lot lately. If you have ideas on how you’d solve this issue I’d take a stab at implementing them.
I’ve built a demo that lets you get two systems connected by copying and pasting the session description information. No third-party systems required. But you still need to get your clipboard data to the other party, which usually means SMS, Apple Messages, etc.
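For the curious, that copy-paste flow boils down to something like the sketch below; prompt() stands in for whatever channel (SMS, Messages) actually moves the blobs:

    // Wait until ICE gathering finishes so all candidates are bundled
    // into one copy-paste blob.
    async function waitForIce(pc: RTCPeerConnection): Promise<void> {
      if (pc.iceGatheringState === "complete") return;
      await new Promise<void>((resolve) => {
        pc.onicegatheringstatechange = () => {
          if (pc.iceGatheringState === "complete") resolve();
        };
      });
    }

    // Side A: create the offer, hand the SDP to the user to copy.
    async function sideA() {
      const pc = new RTCPeerConnection();
      const channel = pc.createDataChannel("chat");
      channel.onopen = () => channel.send("hello from A");
      await pc.setLocalDescription(await pc.createOffer());
      await waitForIce(pc);
      console.log("send this offer to B:", JSON.stringify(pc.localDescription));
      const answer = JSON.parse(prompt("paste B's answer:")!);
      await pc.setRemoteDescription(answer);
    }

    // Side B: paste A's offer, produce the answer to send back.
    async function sideB() {
      const pc = new RTCPeerConnection();
      pc.ondatachannel = (ev) =>
        (ev.channel.onmessage = (m) => console.log(m.data));
      const offer = JSON.parse(prompt("paste A's offer:")!);
      await pc.setRemoteDescription(offer);
      await pc.setLocalDescription(await pc.createAnswer());
      await waitForIce(pc);
      console.log("send this answer to A:", JSON.stringify(pc.localDescription));
    }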
Once you are off the dopamine needle, you start looking for ways to get the most relevant and compact content.
And you start to understand that the algorithms are playing against you: they are made to keep you scrolling, not to deliver the most relevant content in the shortest time.
A bit childish, but I've coded a sound button website for myself, where you can grab a sound from Reels or TikTok (or YouTube), trim it, and add it to your own collection.
It is stupid, but I enjoy poking my friends, colleagues and kids with those buttons.
Male, 39y.
It took me about two weeks to build, zero regrets, absolutely worth the time spent.