Why are websites requesting access to motion sensors on my desktop? (2019) (grantwinney.com)
162 points by joecobb on Sept 28, 2023 | 174 comments



A while back I stumbled upon Google Chrome's privacy settings and found that websites can request access to things like the serial ports on your computer. It turns out Google has thrown everything into the mix, probably because they want their Chromebook users (like children in school) to use motion sensors on convertibles, maybe to play games in the browser. Websites are just taking advantage of these features. The Chrome browser has ruined the internet.


The browser is essentially the operating system for most computing today so access to peripherals is reasonable.

My current job uses USB security keys and I assumed I'd have to configure them in the OS before the browser was aware of them. Nope! Chrome knows when the key is in the USB port and can interact with it with my approval, which is exactly right.

The leap from access to USB to access to serial is minimal, as long as the right permission checks are in place.
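For the curious, this is roughly what that flow looks like from a site's JavaScript: a minimal WebAuthn sketch, assuming a key is already registered (in a real flow the challenge comes from the server, not the client):

  // The browser owns the USB layer and shows its own consent prompt;
  // the page never talks to the key directly.
  async function signInWithSecurityKey() {
    return navigator.credentials.get({
      publicKey: {
        // Placeholder; a real site gets this challenge from its server.
        challenge: crypto.getRandomValues(new Uint8Array(32)),
        userVerification: "preferred",
      },
    });
  }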


> The browser is essentially the operating system for most computing today

You're right, and it's such a bummer. I often think about how interesting it would be if we hadn't ended up with the Chrome/Safari browser duopoly, the Windows/macOS duopoly on the desktop, and the Android/iOS duopoly on mobile. How cool would it be to see what the Amiga, Atari ST, Spectrum, OS/2, BeOS, etc. could have become with another couple of decades of development? Even Windows and macOS would probably be different if they had to compete in a healthy, diverse ecosystem.

Instead, further concentration is probably going to happen once Apple allows alternate browsers. At that point, there isn't much to stop Google's Chrome from becoming the only application platform that really matters.


If we didn't have an OS duopoly, we'd have a programming language duopoly, GUI library duopoly or something of that sort.

It's just not reasonable to expect every company to maintain more than two or three completely different versions of their apps, and most would vastly prefer to maintain just one, hence Electron and React Native.

It would be a constant incompatibility hell, and most code would be littered with #ifdefs and polyfills.

You can argue that companies producing software tools would specialize, so you would use Microsoft image editing tools on Windows, Foo's image editing tools on Amiga, Adobe's image editing tools on Mac etc, but that argument breaks down when it comes to banks, movie and music streaming companies, games etc.


I think as software matures, we will settle on free software. We more or less already have on the server side.

Then it will be up to the OS maintainers to make sure the software is compatible with their operating system, like how it works with FOSS systems already.

A man can dream anyway...


Working in the streaming media space, I can tell you what happens when there isn't a duopoly. It sucks.

Making an app means:

Android (and Android TV being more work), iOS, web, Roku, Fire TV, Tizen, Vizio, WebOS (LG), and multiple set-top box vendors, who all have horribly underpowered CPUs.

Some companies try to do cross-platform, and that sort of works, but it is janky and customers complain about the sorts of UX issues that always pop up with cross-platform apps, and for any decent functionality you end up writing per-platform shims. Some platforms (Roku) require writing an app anyway because they mandate a custom language. Other platforms (set-top boxes) are so underpowered that you can't really run anything resembling modern code on them.

It sucks. It is a huge waste of engineering effort for no real gain. Most customers don't choose a smart TV based on its OS; a large percentage choose based on what is on sale at Costco, and another demographic chooses whatever reviewers tell them is "the best".

Mobile app developers dealing with a duopoly have it easy, but even that dramatically increases the barrier to entry compared to the 90s, when you just had to write one app for Windows, and so long as you only used documented APIs, Microsoft would move heaven and earth to make sure your app kept working between major OS updates.


> Instead, further concentration is probably going to happen once Apple allows alternate browsers. At

Not if the DoJ forces Google to abandon Chrome. Which they should.

Apple and Google should lose their app store monopolies (including first party default preference), Google should lose the Chrome monopoly. These are incredibly harmful to technology and competition.

Each company has plenty of money, attached user base, and engineering headcount to continue to be wildly successful and profitable without operating in a way that damages the rest of the tech sector.


I’d probably add WebOS to that list too, even though it's currently living on in LG TVs.

The idea of WebOS seems strong enough to have lived on through Palm, HP, LG, and now also a forked version.

It was really late to the mobile OS race, but ahead of its time.

https://www.webosose.org/docs/tutorials/web-apps/developing-...

And more generally:

https://www.webosose.org/


It was so ahead of its time that I've _recently_ seen Apple and Google "invent" paradigms it was using all those years ago.


Very true.

The designer of WebOS now works on asteroid too, though.


WebOS absolutely belongs in that list.


>How cool would it be to see what the Amiga, Atari ST, Spectrum, OS/2, BeOS, etc..

But all of these systems did exist. And for whatever reasons, they did not survive in the market. So the market decided they were not what was wanted.


> So the market decided they were not what was wanted.

No, the competition decided what they wanted, by using shady-as-shit (as well as out right illegal) tactics to squash everyone else.

People _loved_ their Amigas, STs, Be boxes, etc. They loved them so much that there are still some nutjobs out there trying to keep Amiga alive! Do you think there'd be that kind of devotion for Windows 40 years later, if it died around 3.1?

No, the users didn't choose. A loose hand on monopoly law did.


How much of that was the market's decision, and how much was the illegal anticompetitive practices that got Microsoft in trouble a few decades ago? Paying manufacturers that used them while penalizing those that made other OSes available, amongst other practices. Hell, the only reason Apple is around today is that Microsoft bailed them out so they could point at Apple and claim in court that there was a competitor, and therefore they were not a monopoly, back in the 2000s.


Or the market remained irrational longer than they could remain solvent. It's an economic system, it's not omniscient.


People using invisible-hand / the-market-decided arguments gloss over the fact that modern capitalism has yet to produce truly fair markets without corruption.

If only it were as simple as letting buying power decide.


Is it really? Many people today seem to be living in a world almost purely of apps. Besides using The Google to find a piece of trivia, I hardly see anyone living in the browser to the extent that they are treating it like an OS in and of itself. If anything, the browser is seen as antiquated. The decision of browser makers to expose so many non-document APIs seems to not be closely connected to direct consumer demand for them.


How many of those apps are wrappers around a browser, though?


Kind of doesn't matter since such wrappers routinely use native code or "plugins" to allow for behavior nonstandard to browsers, although your point is totally fair.


The only viable alternative to iOS/Android apps is web apps. Apple fights it by limiting the number of features you can use in the browser on mobile phones and by not allowing alternative browsers. Google fights it by saying: okay, go with it, you will be using tech that we control anyway.

The current amount of hacks needed to make native desktop apps compatible across even the same operating system, but different versions, is kind of scary. I'm pretty sure it's a similar situation for mobile apps, too.


Huh? But there's so much diversity in the desktop space. You have Windows/Mac, but then Debian/RHEL, Free/Net/OpenBSD, SteamOS, ChromeOS, Tails, NixOS, Qubes, the Solaris family, ReactOS, and that's just the ones I've actually seen people use at conferences.

The browser space has never been more diverse, either. Most browsers use Chromium under the hood, but who cares; Chrome was WebKit, which was KHTML, when it started too. A browser's success is only somewhat related to its engine. Having a base you can build on that guarantees all current and future websites will work and be performant has allowed for crazy levels of experimentation.


> most of them use Chromium under the hood but who cares

We should all care, because people start writing apps that work on Safari and Chrome only rather than to a standard. The web wasn't meant to be controlled by two companies, the idea was using standards anybody can implement.

Use Firefox and see which sites you use regularly don't work because they are Chrome-specific.


I've been Firefox-only for more than a decade now (although, to be fair, not on iOS) and I've yet to find a site that straight up doesn't work. I've had some sites where I've had to tell them I'm using Chrome because of poor user-agent sniffing, but it's been a long time since that was necessary. Ahh, Netflix when it still used Silverlight.


The site which lists available COVID vaccination times for this region of Sweden does not work in Firefox but does work in Safari: https://www.vgregion.se/ov/hitta-vaccinationstider-vgr/vacci...

I don't know the reason for the Firefox failure.

The Adobe site https://new.express.adobe.com/tools/generate-qr-code# says it does not work with Firefox, but if I change the User-Agent it does work.

I ran into both in the last month.

Your experience for most of that decade was when Firefox was much more widely used than it is now, so it had a higher support priority.


That first one "works on my computer", and the second one also works (but is purported not to).

I've long been confused when reading how Firefox doesn't work everywhere. Now I'm even more confused, because you posted an example that doesn't work on your computer, but does on mine. Do I have some kind of Ultra Firefox or something?


Could you describe what "works" means?

On Firefox (I am using the most recent version for macOS) I see only about a paragraph of text, plus the header and sidebar.

In Safari I also see pull-down menus for "Kommun" and "Tidsperiod för bokningsbara tider", plus other input items, and a description like "Antal mottagningar: 23" plus a list of locations.


I see menus at the top, a heading, a paragraph of text, some dropdowns like "kommun" and "Tidsperiod för bokningsbara tider", checkboxes under "Visa mottagningar som har", and then what I think are a bunch of boxes for locations for appointments? When I click on one, I go to that location.

Are you missing any of that?


Got the same response with Firefox, temporarily disabled uMatrix and it no longer complained.

It's just a shitty website by a shitty corporation for shitty ends, I guess... I never have such issues with sites that benefit me when I visit them :P


How curious!

I don't have uMatrix or other ad blocker installed.

I turned off all "Enhanced Tracking Protection" and still see nothing in Firefox.

I don't know what you mean by "by a shitty corporation for shitty ends ... sites that benefit me". It's run by the regional council (https://en.wikipedia.org/wiki/V%C3%A4stra_G%C3%B6taland_Regi...), which is the political organization responsible for the area's public healthcare system. The page lists available COVID vaccination appointments, which benefits me as I was looking to get a booster shot and wanted to know where to go.


I'm sorry, I should have made clear I was referring to the Adobe site :/ To be fair, I didn't attempt to actually use it; disabling uMatrix just made the message that it doesn't work with Firefox go away.


Ahh. That was the one where it actually does work, if I change the User-Agent.


There are plenty; anecdotally, I run into them more than not.


My bank is doing some security-theater fingerprinting (instead of something actually secure, like 2FA, but that's a different story), which in the end means I can't log in to my bank account using Firefox anymore these days.


Chrome/Safari isn't a duopoly, it's the same browser (WebKit).


Not at all.

While they have shared origins, Chrome (Blink) and Safari (WebKit) have been going separate ways for quite a few years now.


WebUSB is actually a W3C open standard. For instance, the BBC micro:bit educational dev environment runs in a web browser and allows Python code to be pushed to the microcontroller straight from the browser.

https://developer.mozilla.org/en-US/docs/Web/API/WebUSB_API

Isn't that neat?! Well, it could be, as long as your browser didn't allow this to be used, probed, or even enumerated without explicit consent.


> WebUSB is actually a W3C open standard.

This is misleading at best. Here’s what the actual spec says <https://wicg.github.io/webusb/>:

> This specification was published by the Web Platform Incubator Community Group. It is not a W3C Standard nor is it on the W3C Standards Track.

It’s an experimental spec by Google (observe the affiliation of the three editors: all Google); Mozilla has adopted a negative position on it <https://mozilla.github.io/standards-positions/#webusb>; WebKit has not remarked upon it.


To my knowledge, no browser allows any usage of WebUSB without a prompt.

WebAuthN is different, since it does not provide sites low-level peripheral access – WebAuthN and CTAP have been designed for specifically this environment and go to great lengths to make fingerprinting hard.

As long as you don’t actually use an authenticator on a site to store a credential, it won’t be able to learn anything about it.
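To make the prompt-gating concrete, the WebUSB surface only has two entry points (a sketch per the spec; nothing here runs silently except the already-granted list):

  async function listAndRequestUsb() {
    // Resolves without a prompt, but only with devices this origin
    // was previously granted access to.
    const granted = await navigator.usb.getDevices();

    // Anything new always goes through a browser-drawn chooser, and the
    // call must come from a user gesture (e.g. a click handler).
    const device = await navigator.usb.requestDevice({ filters: [] });
    return { granted, device };
  }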


Not sure about this, but I think from JavaScript you can absolutely probe stuff without explicit user consent. For instance, without accessing any USB device I can try:

  // Runs without any permission prompt; feature presence alone is a signal.
  if (!navigator.usb) {
    console.log("learned that browser does not have USB capability");
  } else {
    console.log("learned that browser has USB capability");

    // Also no prompt, though it only resolves with devices this origin
    // was previously granted access to.
    navigator.usb.getDevices().then((devices) => {
      devices.forEach((device) => {
        console.log(device.productName);
        console.log(device.manufacturerName);
      });
    });
  }
(Which is useful for fingerprinting.)


Okay, so you can learn that Chrome supports WebUSB and Firefox doesn't. But you already knew that from the User-Agent header...


Hahah, so you think. But now you have additional telemetry showing that this wasn't cURL forging a Chrome (or Firefox) user-agent header.

Fingerprinting sounds sophisticated, but it's just collecting bits and pieces into something that (mostly, probabilistically) identifies you. And then tracking you, surveilling you, until you're somewhere where they can identify you.

From there: profit!


> this wasn't cURL forging a Chrome (or Firefox) user-agent header.

There must be a million different ways to establish that, though.

I get the general idea, but this particular data point seems highly correlated with just the family of browser, as GP suggests.

It's also very easy to fix – just make your non-WebUSB-supporting browser expose that object, but always behave as if the user had declined that particular prompt.
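A minimal sketch of that fix, assuming it runs before page scripts (say, from an extension content script); the stub behaves exactly like a user who always dismisses the chooser:

  const declinedUsb = {
    // No previously-authorized devices, ever.
    getDevices: () => Promise.resolve([]),
    // Same error a real dismissal of the device chooser produces.
    requestDevice: () =>
      Promise.reject(new DOMException("No device selected.", "NotFoundError")),
    addEventListener() {},
    removeEventListener() {},
  };

  if (!("usb" in navigator)) {
    Object.defineProperty(Navigator.prototype, "usb", {
      get: () => declinedUsb,
      configurable: true,
    });
  }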


That still allows websites to distinguish between WebUSB-aware clients and older browsers. The point being that it would be great if extensions like WebUSB were developed such that nothing about capabilities could be learned without the user's awareness and explicit consent.

Unfortunately, instead, new capabilities are added to browsers constantly, and the interfaces are commonly made silently available as part of a regular software upgrade. Sure, thought is given to security, and the user is prompted just before something horrible is about to happen (access to camera, mic).

But don't underestimate the shitload of "niceties" in the grab bag of APIs that in aggregate reveal more or less a supercookie identifying your browser instance.


Yes, enumerating available capabilities helps fingerprinting. And this is not good, and APIs should be designed better.

But there are easier avenues that are harder to mitigate, like hashing an image that relies on the browser's rendering of (default) fonts. Highly instance-specific, lots of entropy.
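For illustration, the technique is only a few lines (a sketch; any rendered string works, and the hash differs across OSes, GPU stacks, and installed fonts):

  async function canvasFingerprint() {
    const canvas = document.createElement("canvas");
    canvas.width = 300;
    canvas.height = 60;
    const ctx = canvas.getContext("2d");
    ctx.font = "18px sans-serif"; // resolved to a system-specific font
    ctx.fillText("how quickly daft jumping zebras vex", 10, 30);
    // Hash the rasterized pixels; no permission prompt is involved.
    const bytes = new TextEncoder().encode(canvas.toDataURL());
    const digest = await crypto.subtle.digest("SHA-256", bytes);
    return [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
  }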


> more or less a supercookie of your browser instance.

That's really not what it is though, is it?

These capabilities will be rolled out for all users of a given browser, or even for a given rendering engine, and I'd assume that your browser family is already easily fingerprintable. In other words, they are all highly correlated.

Things like installed fonts, window sizes, your clock drift etc. are a different story. These lower-correlation measurable properties are the real supercookie problem.


It lets you enumerate all the USB crap on the bus.

My desktop has 12 things on the bus. 8 are soldered onto the motherboard, and 4 are plugged in. There are at least 32 choices for each of those things, so that's 5 bits of entropy per device, or 20 bits just from the plugged-in devices, ignoring the motherboard.


But only if you allow WebUSB access, which the browser will ask you first.

If you allow Camera access you get a metric ton of bits.


The author of the sample code implies it will run without prompting.


No, but the browser reveals that it generally supports these APIs, letting the site know that there is a point in even prompting.


I mean, if you grant USB access to an untrustworthy web site, it's game over – you can probably just read the serial number of at least one of these devices over USB.


>The browser is essentially the operating system for most computing today so access to peripherals is reasonable.

Sure, but the fact that browsers became operating systems is unreasonable in the first place.


Why? Isn't the web basically the perfect fully virtualized and sandboxed environment with a highly standardized and open API and a sophisticated, accessible UI toolkit, with elaborate development tools built right in, like we always dreamed of? Isn't the web basically the perfect OS?


> Isn't the web basically the perfect OS?

I don't think so at all. Web-based applications tend to suck, and it seems to me that much of the reason is because the browser is very imperfect as an OS.


And yet you are posting this on HN, which one could argue is a Web application.

And don't forget shopping online, there are a few small web shops out there with great UX.

And you can use 20 year old websites just fine, the web has great backwards compatibility too.

Web apps don't have to suck.


> And yet you are posting this on HN, which one could argue is a Web application.

I think if HN counts as a web app, then "web app" has no meaningful definition.

> Web apps don't have to suck.

Maybe, maybe not. All I know is that the ones I've used (and have to use at work) do suck.


It is on the web, and it is interactive, i.e. not just a static blog. The average social media site is clearly a web app, right? What is HN missing? Not looking like it's from the early Web 2.0 era?


> The average social media site is clearly a web app, right?

I never considered them as such, no. Those (including HN) are just ordinary websites with some amount of interactivity.

To me, a "web app" is a thing that replicates a normal application in web form. Things like GMail, Office, etc. In other words, they aren't things that uniquely leverage the web, they're things that are using the web as a shortcut to platform independence.

But perhaps the definition has changed, and I need to be much more explicit and specific instead of using the term "web app". I could buy that, but it also means that I don't actually know what a "web app" is anymore.


This seems like an artificially restrictive definition that necessarily excludes anything that you might actually enjoy using. Doesn't it just naturally make sense that the software you enjoy using most is the software that's designed idiomatically around the platform it runs on? If "web app" is defined to mean "software not meant for the web, but shoehorned onto it" then of course all web apps will suck.


HN mostly doesn't have JavaScript. I think vote buttons and the collapsing comments are the only things that are JavaScript. Everything else is HTML and links.

I don't think it makes a functional difference, except for "more" links instead of infinite scrolling. Just as you can't tell a static blog from a dynamic blog, you can't tell static HTML from dynamically generated HTML. I would say a "web app" is where you download JavaScript, and the JavaScript builds the page.


>And yet you are posting this on HN, which one could argue is a Web application.

Which has zero to do with the virtues of the web as OS, and much more to do with catharting the pain and frustration induced by sharing the digital world with people with shockingly bad points of view by acting in kind. A game nobody wins; alas...


Yes, and most importantly: nobody owns it.

Sure, we all complain about Chrome and its outsized influence, but at the end of the day the standards are more open than not and Safari and Firefox mostly work most of the time on most of the pages. That's a stark contrast to, say, .NET vs Cocoa or Android vs Apple app stores.


>mostly work most of the time on most of the pages

well, that sounds perfectly reasonable that only some pages are not standards compliant. :facepalm:


"Comply or we will break your shit" works better in closed ecosystems. I'll take a little mess over a 30% tax and heavy-handed tempramental moderation any day of the week.


Not fully disagreeing, but the web feels heavier on RAM and other resources than native software. Also, the only programming language being JavaScript (which is just starting to sort of change with WebAssembly) is far from ideal. Some other stuff, like storage, is also comparatively recent, AFAIK.


If it were, then people wouldn’t bother writing native applications.


Seeing where the mobile world goes, I still prefer my browsers, at least I can modify the websites as I want.

Sure it's not great, but the alternatives are worse.


Not to mention the sandboxing. I'm glad a lot of the "apps" I use are just "webapps", so that I can trust them less. A user process on a desktop OS is given an insane amount of permissions by default, though this is being fixed, slowly


That's also a good point yes, the browser sandbox is the strongest that we know of.


If it’s proprietary, it can stay in the browser sandbox.


IDK, that seemed to be the vision even back in the Netscape days.


That was always the end goal.


> The browser is essentially the operating system

That's a fashionable observation; I think it's a kind of illness. The idea that you can take over anyone's computer and make it do things the user doesn't want done, and doesn't know are being done, makes some web developers' heads swim; they can turn the whole internet into a sort of distributed supercomputer for their own private use. WHATWG bears a lot of responsibility for this.

A real operating system doesn't download and execute code from unverified remote locations. Nearly every website nowadays tries to load and execute in the browser code from any number of remote locations, without the user's approval or even knowledge. By default, I only allow 1st-party JS, which I consider to be an extremely liberal policy.


> A real operating system doesn't download and execute code from unverified remote locations.

Sorry, but that is pretty much the standard way to install apps on Windows.

That browsers execute untrusted code all the time and still are secure is an advantage of web technology.


> Sorry, but that is pretty much the standard way to install apps on windows.

Maybe now, but when I was on XP and, later, Windows 7, you only had a handful of programs you would use (I have all of them on a CD, and later on an HDD): things like VLC, Notepad++, Code::Blocks, Office, and others. It required trust, but these programs did not, AFAIK, phone home every second. That's what we lost: trust in our computers and the software running on them. Now it is a hostile relationship between customers and software developers. I wasn't concerned about VLC tracking the files I opened with it, or Office scanning my documents.


> That the browsers execute untrusted code all the time and still are secure

But they aren't secure. Most of that untrusted code is doing stuff that's of no value to the user, and is positively against the express interests of many users.


> The browser is essentially the operating system for most computing today

The browser is more of a universal user interface than a universal OS.

Of course, something like ChromeOS/ChromiumOS is an OS that boots directly into a browser, but it's not a universal interface.

Maybe WebOS was a step in that direction, being a mobile OS that was all HTML and JavaScript.

Screenshots: https://www.webosose.org/docs/guides/getting-started/webos-o...

https://www.webosose.org/docs/tutorials/web-apps/developing-...


> The browser is essentially the operating system for most computing today so access to peripherals is reasonable

I suppose. Not for me, though, as I don't (and won't) use web apps or complex websites. I sorely wish there were a browser that simply didn't have that capability.


I guess I don’t know how you got from A to B there. I love the idea of kids being able to experiment with serial ports (though I’m not sure what you mean in that context, WebUSB?) in a safe, locked down programming environment.

Ideally it wouldn’t mean random web sites request motion data from you but I really don’t see this as ruining the internet.


WebSerial lets Home Assistant users flash their ESPHome devices without downloading or compiling any software. WebUSB let Google update my Stadia Controller to a normal controller after they shut down their cloud services. It also offers firmware updates for some Pixel phones.

These are all quite useful tools. I've never used WebMIDI, but it's older than the other Web* APIs. When you have a use case for them, these APIs are a lot better than figuring out a cross-platform serial port protocol (or, more realistically, writing a Windows application and letting the Linux/macOS/Android users figure it out themselves).

WebSerial/USB/Bluetooth doesn't do anything unless you permit it to. If a website used this feature, it's because you clicked "okay" when mapquest.com asked to use your serial port.
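For anyone who hasn't seen it, the flow is short (a sketch; Chromium-only, and the "ID?" command is a made-up stand-in for whatever protocol your device actually speaks):

  async function connectAndIdentify() {
    // Shows a port chooser; must be called from a user gesture.
    const port = await navigator.serial.requestPort();
    await port.open({ baudRate: 115200 });

    const writer = port.writable.getWriter();
    await writer.write(new TextEncoder().encode("ID?\n"));
    writer.releaseLock();

    // Read one chunk of the reply.
    const reader = port.readable.getReader();
    const { value } = await reader.read();
    console.log(new TextDecoder().decode(value));
    reader.releaseLock();
    await port.close();
  }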


My students were able to program Arduino devices from their Chromebooks because of this tech. That would have been inaccessible to them if they had to use a "real" OS, which the school did not provide.


A failure of the school, then.


You have to explicitly grant permission for a site to use a serial port.


And it can be rather practical. I've flashed firmware onto some devices using an online tool.


The existence of the Web Serial API is a godsend for working with many embedded devices. The ability to flash a device directly from the web instead of futzing around with a commandline tool feels like magic.

Unfortunately, Mozilla decided that this (and other related functionality) is "harmful". https://mozilla.github.io/standards-positions/#webserial

It is a shame, because the overlap between people who use Firefox as their main browser and people who tinker with microcontrollers is likely pretty large.


Serial ports are everywhere and these APIs can provide quite a lot of fingerprinting capabilities.

I understand why Mozilla is hesitant. "Why does a browser need to give access to a serial port?" is a good question. Certain web tools have definitely proven useful (especially when using an Android device to flash microcontrollers!), but if you had asked the average internet user 20 years ago whether their browser should provide websites with access to their serial ports, you'd have been laughed at.

I hope Mozilla reconsiders their positions on this, because this is just one of those reasons I keep Chrome installed. I need it very rarely, but when I do, it's often because Mozilla made a choice I disagreed with (like their decision to remove anything resembling PWAs on desktop Firefox, which is why I have a bunch of Chrome shortcuts in my application launcher now).


> "Why does a browser need to give access to a serial port"

Why does a program need to give access to a serial port?

> if you asked the average internet user 20 years ago if their browser should provide websites with access to their serial ports, you'd get laughed at

What if you included "Only if you allow it"?


Web browsers used to be about websites, not applications. That's my point. Even after Gmail popularized XMLHttpRequest, it took years for in-browser HTML applications to become a thing people would just use.

> What if you included "Only if you allow it"?

You'd probably hear something like "IE/Opera is bloated enough already; I just want my downloads to finish faster."


Fingerprinting would be my guess. It could give at least one extra bit of data about your device (has motion sensors or not).

It could also help detect user-agent string falsification: why would a device claiming its UA is legacy IE have motion sensors?
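Both checks fit in a few lines, and neither needs a permission prompt (illustrative sketch):

  // One fingerprinting bit: does the device claim motion sensors at all?
  const hasMotion = "DeviceMotionEvent" in window;
  console.log("hasMotionSensors:", hasMotion ? 1 : 0);

  // Cross-check against the claimed UA: legacy IE never shipped this API.
  if (/MSIE [6-9]\./.test(navigator.userAgent) && hasMotion) {
    console.log("UA string is probably forged");
  }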


Web client to web server: I'm not sure I can trust you with my user's privacy, so I'm going to give you as little cookies/telemetry/side channels as I can.

Web server to web client: I'm not sure your so-called "user" represents a real-life human user. Can you at least prove that you're a real-life web browser?

Web client to web server: hmm, well okay, here's my User-Agent header. Just curious why you care about who's sending the request?

Web server to web client: Still not convinced; can you give this CAPTCHA to your so-called "user"? As to your second question: bots, spam. And the ad network sponsoring me doesn't trust my tally.

Web client to web server: My human hates CAPTCHAs, and AIs are better at it anyway.

Web server to web client: Ghrmbl. Okay let me talk to Widevine/FairPlay/PlayReady DRM or your microphone/gyroscope/battery level/uptime/...

...

The arms race of surveillance enshittification.


That was my assumption as well, but the author makes a good case for it being an Akamai bot detection script designed for mobile responsive sites, and inappropriately implemented on desktop by these big sites.


That is still fingerprinting.

Bots will either all have the same (or a low diversity of) fingerprint, or if not then they will have the same randomisation techniques to mask the fingerprint.

Some of the additional browser features that access devices reveal the most trivial way to determine "this is not a bot, as bots don't have this fingerprint".

Of course that has implications, if privacy makes you look like a bot (it does), then the web has more friction the more private one is.


If you want to experience that, download Brave, open a Tor window, and try to do anything online.

Now you’re playing bot simulator 2023.


I just use an "ordinary" consumer VPN service and it's bad enough. It really seems to have ramped up in the past two years, and now it's a daily occurrence to find a site which won't even serve me a response body when I connect via VPN


Or download the Tor Browser?


Bad code being copied from Stack Overflow was my first guess as well.


Or we could apply Hanlon's razor: someone probably copy-pasted some magic code from Stack Exchange that grants them access to "everything" to solve any and all problems in perpetuity. Wham, bug fixed, another 10x day.


Hanlon’s razor in this case is a marketing manager who signed up for an analytics service and had one of their marketing dev flunkies throw the script onto the base page template for the site, without the knowledge of the actual application developers.

The application developers one day notice that there are no less than 10 different tracking pixels blocking page loads and eating page performance to death (not to mention sending tons of tracking data out to God knows where). They do a git blame and call the marketing manager into a meeting, who doesn't know what anyone's talking about because that was "years ago" and just gives permission to delete all of them.

Then she comes back a couple of days later, upset because Heap stopped reporting.


If the website uses a protection system such as Cloudflare, PerimeterX, or similar, it runs a small piece of software in your browser called a challenge (which you might have seen on Cloudflare-protected sites). This software tries to determine whether you are a bot by executing a series of functions in the browser, which is also called fingerprinting. They usually go hard on the microphone, motion sensors, camera, location, navigator (language etc.), WebGL, and similar technologies. Bots usually do not implement these functions. In my opinion, that would be the main reason for it.
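The probes themselves are unremarkable JavaScript. Something in this spirit (an illustration; not Cloudflare's or Akamai's actual code):

  function collectSignals() {
    const gl = document.createElement("canvas").getContext("webgl");
    const dbg = gl && gl.getExtension("WEBGL_debug_renderer_info");
    return {
      webdriver: navigator.webdriver === true, // set by automation tools
      languages: navigator.languages,
      hasMotion: "DeviceMotionEvent" in window,
      hasMediaDevices: !!navigator.mediaDevices,
      webglRenderer: dbg ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) : null,
    };
  }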


Ah, this probably also explains why a bunch of sites have started requesting Speech Synthesis capabilities on Firefox, which it blocks (yay) but notifies about each time (less yay). I figured it was fingerprinting or some other nonsense.


It is time to make this illegal. It is clear that we need ways to keep our privacy, and they keep inventing ever-creepier ways to bypass them. This is already a 1984-level privacy invasion.


The clocks were striking thirteen, and Winston Smith was irritated by a pop-up in the corner of his telescreen asking for permission to measure the room temperature.


At first I read that as “Winston Churchill”, which made it sound like a Doctor Who episode.


Bravo. Freedom is not taken with a booming voice but a tide of whispers.


I think that was poking fun at OP, because the idea that asking for permission is "literally 1984" is pretty funny.


Well, kind of both? It was a joke, but the “tide of whispers” sentiment is not wrong either.

It’s fun to imagine a “1984” prequel that would take place in an era when the Party is not full-blown totalitarian, and the telescreen pretends to be a useful information source that asks for permissions so it can better serve you. “1969”?


What about "2023"?


The permission pop-up doesn't actually do anything, obviously.


Why do we need a law when there is already a browser permission setting for it? Deny the access if you don't like it and let others make their own choices.


A browser permission that is OFF by default and automatically, swiftly returns to that state and reveals nothing when probed in this disabled state.

...And a law, that holds browser vendors accountable when neglecting this.


Because when the DNT setting was introduced, it was instead used as another piece of data for tracking.

And abuse in this space is rampant.


The real problem is that I don't think competitor browsers are much better on privacy OR they don't have the same security. Would be happy to be proven wrong.


The problem with policy changes like this is that they will inevitably scoop up some kid just playing around with Web APIs. If there were to be a policy change, I think we should instead require browser developers to make all sensor features (location, motion, etc.) opt-in only, which most browsers already do.


There should be a common English phrase for the kind of judgment annulment (and mercy annulment) that is commonly seen online when it comes to objections against tech invasions.

A red herring is often used to mean an idea that leads one astray.

A red sparrow (or whatever animal) should often be used to mean a response that annuls one's idea.

The kind of whataboutism and but-it-already-is-everywhere responses are the old slippery-slope concept, practiced in real time. They do nothing to discuss the facts at hand and are a void of useful conversation.

note: maybe an Aussie bird, because culturally Aussies love peace, and will trick people to get it... like Barbossa in Pirates of the Caribbean...

the Crimson Rosella has a 'cussik-cussik' sound and a series of harsh metallic screeches. Like a robot bird...

https://australian.museum/learn/animals/birds/crimson-rosell...

Red rosella - to make robotic screeches in an attempt to annul disliked ideas...

(or something)


GDPR and upcoming DMA are quite explicit about these things.

HN really loves to hate those regulations and laws.


The problem with GDPR is that it doesn't seem to have done anything other than make every freaking website show another pop-up that tells you "you gotta let us set a cookie or this site won't work".

I accept there may be more teeth in the background, but nobody sees that. That's all entwined with legal maneuverings in courtrooms. What we actually see on the Internet is "everything is exactly the same, but with a new pop-up".

I don't really have an answer, so it's not like I'm much help. I'm all for laws that protect, but if they don't do anything other than make things way more complicated, that's really just a boon to big tech, who can absorb the costs.


Even more annoying is that all those cookie popups are designed with dark UX patterns where they really want you to click the "allow all" button.


Slowly (too slowly) the tide is turning: https://noyb.eu/en/where-did-all-reject-buttons-come

DMA should reinforce this: https://ia.net/topics/unraveling-the-digital-markets-act (see "Law: Not giving consent")

If the actual enforcement wakes up.


If you make it illegal it vastly limits the potential for things we could do, that may not have been thought of yet. It stifles innovation.


It's illegal to put cameras in bathrooms. We're already stifling innovation!

It's illegal to record people without their permission, in certain jurisdictions. We're already stifling innovation!

It's illegal to pay money for certain services. We're already stifling innovation!

You're arguing against something negative that hasn't happened yet, at the cost of not doing anything to prevent something negative already happening right now.


So do you want to ban motion sensors and webcams even in cases where the user is informed and consents to their usage?


The problem is users can’t be arsed to even turn these privacy-invasive settings off.

So they’ll be triply unarsed to care enough to politically activate laws about it.

The best hope is Apple kicking Google in the nuts because they want the ad revenue instead.

God help us.


If users can’t be arsed to turn off privacy settings maybe privacy isn’t as important as we think it is, or perhaps we’re willing to trade some privacy in exchange for powerful new features.

The people beating the drum of user privacy might really just be doing it as a way to block competitors from getting an advantage. Weaponized Privacy.


> If users can’t be arsed to turn off privacy settings maybe privacy isn’t as important as we think it is

Or perhaps... users don't have the technical literacy necessary to understand what expectations of privacy aren't being met.


It's the aggregate problem. Would I cry if a penny was taken from me? Probably not.

If someone could take a penny from everyone in the world? Now they have 80 million dollars ...


Aren’t these protections generally on by default these days, so you have to opt in when some app asks for permission? (Of course, some platforms seem to handle it better than others, in large part because Apple has direct control over the apps their users are allowed to use.)


> in cases where the users is informed and consents about their usage?

I, for one, would be perfectly happy with a law that required actual informed consent. Of course, it would also have to outlaw coercing consent, because companies will immediately go for loopholes.


Congrats, you just made an argument that obtains against every single law ever written


And it’s not even 9:00 AM.


Innovation is great and all, but it's not so amazing that it should be valued above all other considerations.


This is blatantly used for device fingerprinting. Maybe this wasn't a big thing back in 2019, which is why he doesn't mention it?


Near the end, he tosses it out as a possibility. It certainly seems like the simplest explanation, doesn't it?


how is it being used for device fingerprinting? I don't understand


hasMotionSensors: 1 or 0. Add 20 more bits of info and you have a pretty decent fingerprint.
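For scale (back-of-the-envelope arithmetic, assuming the bits are independent, which in practice they only partly are): 21 bits distinguish 2^21 ≈ 2.1 million configurations, enough to make most visitors to a mid-sized site unique.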


Due to the increase in fingerprinting protection showing up in more and more browsers, advertisers are getting rather desperate to perform the ritual of establishing unique clicks.

You'll also see "enable DRM playback" on many news sites or aggregators. Both of these are user-hostile attempts from not-Google to target ads at you.


The more important question would be why does your web browser _ever_ need to be aware of your motion sensors?


Same as WebUSB, WebGPU, etc.: to allow for web-based applications that make use of motion sensors without having to develop a full-on mobile application. A good example was the Stadia flashing software, which is just a website that uses WebUSB.


An example for the sensors is AR; I've built web-based AR experiences, and the underlying toolkit usually requires some access to device sensors.
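Roughly what such a toolkit has to do (a sketch; the requestPermission gate is the iOS Safari 13+ behavior, and it must be triggered from a user gesture):

  async function enableMotion(onMotion) {
    if (typeof DeviceMotionEvent !== "undefined" &&
        typeof DeviceMotionEvent.requestPermission === "function") {
      const state = await DeviceMotionEvent.requestPermission();
      if (state !== "granted") throw new Error("motion access denied");
    }
    window.addEventListener("devicemotion", onMotion);
  }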


> without having to develop a full-on mobile application

It sounds like you have developed a full-on mobile application at that point.


Without all the hassle of a developer account (which costs money for Apple's walled garden), having to pass through reviews, updates being slow to roll out, etc.

And I'm fairly sure it's easier to write an HTML page with a couple of lines of JS using WebUSB than it is to develop a mobile app with all the boilerplate.


Because browsers are the modern operating system. Many people write applications for the browser because it is cross-platform, meaning you can target Windows, Mac, Linux (on all of those weird CPUs it supports), FreeBSD, some student class project, and so on. Anything you can think of that needs a sensor could be done in the browser.




What is it supposed to do?


It's a web version of those lightsaber games that were super popular early in the app store's life.


Sigh.

What are "those lightsaber games that were super popular early in the app store's life"?


You hold your phone like a lightsaber handle, when you swing it, it makes a light saber sound effect.

(Bonus: when you accidentally drop it while swinging, you get to buy a new iPhone!)


That's it, but damned if I can find any ten-year-old references to what was a huge thing back then.


The beer drinking ones were good, too.


Fingerprinting. Every w3c specification that adds features to the platform adds more unique data that can be used to identify you. They try to add security to the sandbox to make sure the user has to give permission to the website to use these APIs but it's always a cat-and-mouse game to make sure that there are no ways to break out of the sandbox and to educate users about what all these modals and features mean.

Websites are getting obnoxious these days. Most tracking software on them will trigger all kinds of authorization modals for various APIs. And you can bet there is a bunch it will try to use without authorization if there's a workaround.

It's all fingerprinting to sell data.


Because web developers are sometimes lazy and copy code and think it will and should work on all devices. It would take a whole `if` statement not to do it.


Or marketing/management is making them. You can only have the energy to push back against so many things.


Probably so. Still, that doesn't absolve the devs of blame here as well.


I would agree. Too often they are folding like lawn chairs. I push back against a lot of ideas, but I have my limits too. In the case of adding all of this invasive fingerprinting, it’s not really acceptable.


It's crazy that outside sites have the ability to know a system's capabilities. That info should only be available if the user makes it available to be known, and, as the baseline, all connections to one's system should only learn that there is something there capable of receiving data, with no restrictions on connections for systems that do not choose to unlock this or that info about themselves.


Annoy Akamai by making a formal request under California law for any information collected from sensors in your device.


The more I read about stuff like this, the more the idea of 'the browser' becoming the operating system starts to feel more real by the day.

If you consider the stuff Google, Microsoft, Apple, etc. do, claim to be doing, or are suspected of doing, I don't consider this to be a very positive development.


Vendors, vendors, vendors. I work on one of the sites listed in the article and it's shocking how many external vendors they have for various things. Stuff like this is always a vendor adding in extra functionality.


Note that motion sensors can, in some cases, be used as a microphone: https://www.kaspersky.com/blog/non-standard-smartphone-wiret...


At this point I'm kinda glad Samsung has already patented the "use a builtin camera to track how many people are sitting in front of the TV" tech, because otherwise I'm very sure we'd soon be seeing it in every new device with a camera.


I don't understand: how does me buying a product X on A result in me getting ads on B? How is the connection established between A and B? Sorry, I'm new to APIs and programming in general, but this is very intriguing.


How difficult would it be for a bot developer to write avoidance to such detection?


Just slightly more difficult than punching "give me JavaScript that returns fictional but plausible motion sensor data to a website that asks for it, to spoof that we are on a device that actually has a motion sensor" into ChatGPT.


From the text, it feels like the user agent should be capable of this: let the site request sensor data, randomly reject certain requests after X (random) seconds, and provide fake data in other cases.


Extremely easy. You use a headless browser. This is already used heavily for automated testing.


Half of the point of these scripts is to detect headless browsers. Most of them are fairly obvious, and even when they're hidden, it's things like what the article mentions that give them away. For example, headless browsers can't respond to permission requests, so they'll likely immediately accept or reject the request for motion data.
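One commonly cited heuristic along these lines, as a sketch (historically, some headless Chrome builds reported Notification.permission as "denied" while the Permissions API still said "prompt", an inconsistency a real prompting browser doesn't produce):

  async function looksHeadless() {
    const status = await navigator.permissions.query({ name: "notifications" });
    return Notification.permission === "denied" && status.state === "prompt";
  }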


I understand that Akamai’s new bot manager does more than just grab telemetry data.

It’s more like a CAPTCHA for browsers, i.e. a real browser should behave in ways that pre-scripted bots can’t easily replicate. The payload is auto-injected by Akamai, so the expected behaviour can be altered in a non-deterministic way.


Just record a couple of hours of phone-usage IMU data and then feed it to them with random segments spliced together, and they won't be any the wiser. Or just rock the phone in its cradle. There are companies with robots tapping the screens; adding some rocking movement is not going to be that hard.
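The replay idea is a few lines if you control the page's JavaScript (a sketch; recordedSamples is assumed to be previously captured accelerometer data, and note a careful detector can still check event.isTrusted on synthetic events):

  function replayMotion(recordedSamples, intervalMs = 16) {
    let i = 0;
    setInterval(() => {
      const s = recordedSamples[i++ % recordedSamples.length];
      window.dispatchEvent(
        new DeviceMotionEvent("devicemotion", {
          acceleration: s.acceleration,
          rotationRate: s.rotationRate,
          interval: intervalMs,
        })
      );
    }, intervalMs);
  }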


Wow, what an interesting find. I think it's a really big stretch to request sensor data just to detect whether the user is a bot.

Oh, and please add [2019] for clarity, thanks.


Why in God's name does FedEx, of all websites, need motion sensors, on any device?


That was the whole point of the article


Probably because they are a popular target for scrapers, so this is some bot-detection feature implemented wrong.


The web is broken. Motion detection should be for... detecting motion. Not for some anti-bot bullshit.


Aliveness testing for mobile bot farms that got left on for desktop?


Soon they will request your SSN to log in.



