This is awesome... I've always done `Array.from(new Array(n))`.
But your approach makes perfect sense and is so much cleaner. Thank you!
If anyone is wondering why it works, it's because everything in JavaScript is an object, so an array is just an object with a `length` property. Apparently `Array.from` only needs any object with a `length` property to work.
"If it walks like a duck, and it quacks like a duck, then it must be created by a QuackEnumeratorWalkManangerRepositoryProxyFactoryFactoryAdapterBroker." -Java
I did not know this variant, thanks. Having to fill the array with anything (even `undefined`) to iterate on it seemed such a weird concept (but then again, JS is weird).
Array.from lets you build an array from an iterable source (often used in conjunction with `new Set` to eliminate duplicates), or in this case, of a fixed length.
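To make that concrete, here's a quick sketch of both uses (the values are illustrative):

```javascript
// Array.from accepts any object with a `length` property; the optional
// second argument maps each slot (here, to its index).
const indices = Array.from({ length: 5 }, (_, i) => i);
console.log(indices); // [0, 1, 2, 3, 4]

// It also accepts any iterable, e.g. a Set, which drops duplicates.
const unique = Array.from(new Set([1, 2, 2, 3]));
console.log(unique); // [1, 2, 3]
```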
A forEach can easily return a transformed list of values.
acc = []
for el in values:
    acc.append(f(el))
return acc
This hierarchy doesn't exist; you can pretty much implement any of them in terms of the others.
To make that untrue, you need to define very strict limits on what else your language can do.
(Sure, we produce a transformed list here through the use of side effects. But note that the original example sparking the claim that forEach can't do what map can was this:
The forEach isn't returning it in your example, the line after it is returning. Try again without acc and I believe that's what was meant.
You can implement any of them in terms of the others, but only if you break convention and introduce side effects. Follow convention and try to implement (for example) reduce with map and you'll find that it's not possible.
> You can implement any of them in terms of the others, but only if you break convention and introduce side effects.
This can't be right. forEach has no effects other than side effects. A convention that says not to use side effects prevents you from using forEach at all.
But that would make claims about the place of forEach in a hierarchy into meaningless nonsense.
> The forEach isn't returning it in your example, the line after it is returning.
That depends on your point of view. C has numerous functions which accept pointer parameters and return values in those parameters. That's just the normal way to return multiple values in C.
And by that standard, the forEach itself is returning the transformed list; that's where the transformation occurs. The following `return` line is only necessary if this is a snippet within a function whose purpose is to execute a for statement; you would actually do this inline, by just writing the for statement without needing a following `return`.
I think they’re only talking about JavaScript, in which case the forEach function doesn’t return anything.
So you can do `x.map(…).forEach(…)` because map returns an array. But you can't do `x.forEach(…).map(…)`.
Your examples seem like they're for a different language.
Haskell has strict limits on side-effects, but also abstracts these functions over the thing you're "forEaching." There, a reduce can't implement a map, because a map is required to return the same type of collection you put in. A reduce can only return an arbitrarily chosen collection (say, an Array).
And without side-effects, a map can't implement a forEach.
> And without side-effects, a map can't implement a forEach.
Yes, it can, because without side effects, forEach is a NOP. That's not difficult to implement. As long as you don't do anything with the return value from map, you're there.
I think it's just being confusingly stated. GP already said that reduce could implement map. I think it's merely that reduce can just output a single result, of any type. (Which is already more powerful than map, which must output an array.)
Of course, in fact you could hack anything in any of these, since you could be doing other work in the function called. But I think the general principle is sound.
forEach is actually more powerful than map; you would not be able to use map for side effects in a static and/or lazy language. In a strict, dynamic language, it matters less. Even so, I wouldn't describe expression vs. statement as a difference in power. It's contextual.
I take it you've never read any Python? That's a forEach. Python doesn't even have a C-style for loop construct (though it does have while, which is equivalent).
> And depending on the language I'm pretty sure that return call would exit the parent scope.
Yes, that's the point, that's an illustration of terminating the forEach before you've processed each element.
I'm not very proficient in Python, no. I thought you were giving pseudo-code examples. If your argument doesn't apply to the common array methods (forEach, map, reduce) in JS and similar languages, then you missed the entire point of this comment thread.
If you have a C-style for loop, you can (and have to) do everything yourself. That means you can skip elements, process them twice, etc.
In a Python-style for-each loop, you can break out with break or return, but you have a harder time skipping or changing the order of processing. So they are weaker. And that's good.
(Your examples still process elements in the body of the loop. It's just that sometimes the body decides to do a no-op.)
Putting the no-op logic in the body of the loop, or in the function you pass to reduce is different than being able to short-cut evaluation.
You can see the difference most clearly, when trying to process (the start of) an infinite generator with reduce or a for-loop. Reduce will just hang.
Unless clicking the button furiously sends a ton of network requests (I'm on my phone so can't confirm ATM), the only thing you're DoSing is your own CPU, as I forgot to add some kind of thread-relief delay.
It responded correctly: it said "Robot, Exciting" and then wrote "subject has run script to click on the button ten times within one second", whereafter "subject has clicked on the button a thousand times".
Maybe a contender for CookieClicker? 73% achievements atm.
In this case, there's not much difference. In general, iterating over something rather than using a counter with limits makes it harder to create off-by-one or other errors due to laziness.
Or if you want to avoid off-by-one errors, why not have a function:
dotimes(n, () => button.click())
If anyone is in the habit of solving trivial constant space problems in linear space where n is expected to be over 100000, I don't know what to say to them. Are Javascript implementations pretty much guaranteed to optimise the array away?
I think it's mostly a stylistic choice, but getting in the habit pays dividends in that it makes it easy to do certain list related actions in a consistent way. forEach doesn't necessarily show this off well, but filter and map make it more obvious.
E.g.
// Get list of even squares of first 10 integers
Array.from({ length: 10 }).map((e,i)=>i*i).filter((i)=>i%2==0).forEach((i)=>console.log(i));
// Or more readable
Array.from({ length: 10 })
.map((e,i)=>i*i) // Get squares of each number
.filter((i)=>i%2==0) // filter for evens
.forEach((i)=>console.log(i)); // print
Want to filter or manipulate the values in a different way? Throw in another map or filter. Want to pass complex values between steps? You can pack them into an array or object in a map and unpack later.[1] Once you're used to doing stuff with lists, you can use some other interesting list operations like reduce(), some(), every(), etc.
It's not really better (it's subjective), it's just different and a lot of people are used to it and prefer it (and it's fairly consistent in nomenclature across some languages, which is a bonus).
This. Data and transformations on data (map, filter, reduce, etc) are trivially composable. Imperative loops are not (unless you abstract them away behind a function that accepts and returns data, in which case you'd be re-implementing map, filter, reduce with imperative loops).
Traditional for loops are probably the most common for ranges, but if you want a more functional approach you can use the Array constructor or `Array.from` with its various parameters.
Sites like this often go down when they reach the HN first page. I've naively deployed stuff on AWS free tier with no scaling or anything that's handled thousands of concurrent requests out of the box. Is the HN kiss of death that bad, or is it just that a lot of people use weird/shared hosting providers?
I've had the honour to build clickclickclick.click (frontend/backend), with the truly unique team at Studio Moniker in Amsterdam.
I was quite inexperienced - 1 year after graduating from art school - when I joined and started working on this project. The server went down 5x in a single day because of all the requests. We were using web sockets (a simple Node server with Express + Socket.IO, and a React + RxJS frontend), and that put some strain on the server. And I made the rookie mistake of storing images directly in the database instead of in an S3 bucket. Also, we chose CouchDB, which saves document revisions (I didn't know...), so by the end of the day the database took 100% of the disk space and I couldn't SSH into the machine anymore. There were something like 20M+ database writes within 24hrs of the launch because it went viral.
We spun up larger DO droplets several times because of this issue, which took a while to fix.
It was my first time setting up / working with a VPS and nginx + sockets as well, so I'm actually quite pleased with only 15-30mins downtime overall on launch day :)
I learned a lot during that time (it was made in 2016), especially with the help of HN. Projects like this are often developed by junior devs or people skilled in other areas, and some experience is useful when deploying scalable apps. Nowadays it's much easier with services like Vercel/Cloudflare/Heroku/AWS/etc. We used SFTP to deploy the site; I think I moved it to a Docker container some time after that.
I'm still using DigitalOcean to this day for my personal stuff, only now I use Dokku + Cloudflare - which works like a charm.
Check out https://studiomoniker.com if you want to see more crazy projects! (They're my previous employer FYI).
Fantastic projects there. I'm interested in "Red Follows Yellow Follows Blue Follows Red". It seems like it would be tons of fun for school kids. I can see a kit of headphones and capes being something a teacher could use to have a fun activity without too much work.
What’s frustrating is the lack of error handling, especially in an intentionally “mysterious” game like this. I spent a few minutes trying to “figure it out” before I realized, nope, it’s just broken (web socket connection failed).
It depends on what type of site it is. A dynamic site with a lot of database calls and no caching would probably crash much quicker than a static HTML page with the same amount of traffic. HN easily sends hundreds of concurrent users, if not more, so it can crash a shared-hosted dynamic site with no caching.
My website is hosted on the cheapest non-free Heroku tier ($7), and served by a single Node process. I've had a couple of posts get to #1 on HN (one stayed there for 24+ hours if I remember correctly), with zero server issues.
I do statically-render everything, so not doing that is the only reason I can think of for why so many sites might be going down when they get up on the front page. Many of these are blogs so could probably do much better, though I did see someone mention that this one uses websockets for something, so it would definitely be doing some logic on the server-side and wouldn't be able to go fully static.
I'm hosting 30+ apps (frontends / APIs / CMSs / other processes) on a single €5/mo DigitalOcean droplet. I believe Hetzner is the cheapest hosting provider, with a €2.87/mo VPS.
Self-hosting is a lot of fun, and cheaper. Dokku has been around for several years, it's an open-source Heroku clone and I would recommend it to anyone looking into deploying web apps.
I've heard of it; for me, messing with a system (even once) isn't really fun, it just gets in the way of spending time on my code. I don't have a ton of projects and Heroku has a free tier for things that don't need 100% uptime, so it's worth the extra couple bucks for me
Edit: I see DigitalOcean has it available as a 1-click install. Do you get totally automatic updates/system management and everything with that?
How do you keep things secure and up to date? That's always been my problem, and when developing an app I don't want to do system administration, so I always end up going with Heroku (or its cheaper sibling render.com lately) because I don't have time for administration, backups, etc.
When I put up tls.ulfheim.net (a small-ish static site on a t2.micro) HN and Reddit were able to bring it down by maxing out the apache workers.
Some config changes fixed it right up, but my point is that it's not just the capabilities of the instance, the default http server configs might need some tweaking too.
(Static sites should be fine, but many sites use dynamic CMS like wordpress, ghost or discourse, on very low-powered hosts, which really need a caching layer to hold up under non-trivial load)
Either a badly coded website (heavily relying on some backend without static files) or a weak host, but I'm not sure. I have a blog that survived every single HN/Reddit hug on a pretty weak VPS.
Blog written in PHP but without database interaction
Looks like the devs might want to add some error handling on the client side to handle this case. However fantastic/intriguing/addictive this site may be when it works, all I see is a button and this message, and I wonder WTF this is.
Weird. I'm using Firefox and the page says "Secure Connection Failed. An error occurred during a connection to clickclickclick.click. PR_END_OF_FILE_ERROR"
This is a piece created by the Amsterdam-based Studio Moniker. They have many more projects that play with the same ideas, see here: https://studiomoniker.com/projects
I did get a kick out of "https://donotdrawapenis.com/", which has apparently collected 25,000 reasonable penis sketches. I guess for ML training to keep penis drawings off of your website?
Sorry about that. It should be live again. Not sure for how long though... We (studiomoniker.com) are quite busy with other projects and we don't have to time to properly support this project. Hopefully we can make it more HN proof in the future. The project definitely deserves it!
Nifty. Now if only this was the required homepage on all major browsers starting up for the first time then folks at home would quickly become aware of just how well tracked they all are.
I have JS disabled by default using uBlock Origin. The first thing I did was press CTRL+U to inspect the source. Then I deduced the site wasn't malicious (or is it?) and played around with it with JS enabled. It creeped me out, as all these data points could be used to fingerprint a user with simple heuristics like mouse movements, etc.
Most users have very unique styles, or mouse cadence. Same with typing cadence, and there are even services[0] which determine whether a user is who they claim to be based on their unique typing 'DNA'. All a site has to do is embed some JS that measures your mouse-movement style, and it can reliably determine whether it's 'you' who is on the site, then target ads at you more accurately, or even sell your data out the back door for a profit.
There is no good answer to why, except to improve fingerprinting. Which is most of what this site shows, the amount of data a site, even open in the background, can use to continuously fingerprint you. Or maybe I misjudge what this is supposed to do.
I believe that you think your answers are true, but my desktop will report 12, my work laptop 16, my MacBook 16 as well, my work desktop 64 (my Surface laptop will also report 12, my iPad Pro 6... I can continue this for some time).
Which machine do you think has the best performance for your app?
Those numbers actually mean something. That’s approximately how many concurrent threads I would want to use. I’m not sure I follow the point you’re making
The answer to why: It helps with efficient allocation of worker thread pools.
I helped make a timing attack[1] as justification for adding this API, and then presented this suggested API to each browser vendor along with the timing attack. The result was that every browser has adopted my suggestion.
If this API was not present, ads could get this data in a more resource-intensive manner anyways.
The threshold of information needed to gain reliable fingerprintability is so low that we could rewind the browser development clock 20 years and still be nearly 100% identifiable. We'd gain nothing in terms of privacy, but we'd lose everything in terms of the first and only application platform that runs on every system short of a greeting card, is free to use, easy to use, not tied to an app store, not tied to a single vendor.
The original sin of the web is that the code comes from the server again every time you run it. That means you need robust sandboxing and anti-fingerprinting etc., because you're running potentially hostile code that nobody has been able to audit.
Other types of programs don't have that problem. If you get some code from Github, you can review it yourself before the first time you run it. Then every time after that, it's still the same code so you only have to do it once. And you can have someone you trust do it for you, like a Debian package maintainer.
But with nobody reviewing the code, the machine has to do it, i.e. there have to be a bunch of technical constraints on tracking and malicious behavior.
It's a terrible rubbish fire that we don't have any kind of real application platform for real applications that runs the same on every system and doesn't have a monopolist dictating terms. We should fix that. But we could fix that, and be better off than by giving up and conceding the world to surveillance dystopia.
> Other types of programs don't have that problem.
Well, they didn't. They do now, as you're expected to update everything continuously. A typical user of a PC or a smartphone has something downloading an update pretty much every day. Even a tech-savvy user can't hope to keep up with trying to track down all sneaky automatic updates and read a changelog before applying them (assuming there even is one, beyond "This update improves experience and fixes bugs" zero-information boilerplate).
At this point I'd be willing to pay for a service that would intercept all automatic updates on my devices and warn me about the ones that bring in telemetry, malware, performance degradation or other misfeatures. Unfortunately, such a service would require impossible feats of crowdsourcing to keep up with the deluge, and itself would be a huge privacy/security risk.
Aren't you just describing a package manager or an app store?
The reason mobile app stores are garbage is that the store is glued to the platform, making it high-friction to switch to another one if they do a bad job. Then they do a bad job by allowing things you don't want and prohibiting things you do want (and charging high fees etc.) and get away with it.
There is no reason for this to be centralized into a single approver. If you got 90% of your software through the Debian package manager but specifically need a newer version of Blender than they package, you could get that in particular directly from the Blender developers because you trust them not to intentionally distribute malicious code, while still relying on the package maintainers to do the work for all the other software you use.
That's possible right now on Linux. The problem is mostly that it's not possible right now on everything.
You're right, in a way. What I described would be a reality under a package manager with curated repositories, if I sourced all my software from there.
My wish came from the opposite end - I have all this software on my devices that's sourced from a lot of different places, and some of the software on my PC has built-in auto-update that's independent of the original installation method. What I want is a curation add-on - a single (at least per-device) component that would intercept all automatic updates of everything, coupled with a database (the service part) that could tell me roughly what the update contains, and flag anything problematic (telemetry, ads, feature removals, performance degradation, ...).
Your reply made me realize two things:
1. I used to hate default package sources on Debian for shipping a small selection of outdated software. I formed this impression back when I was young and naïve, and didn't question it since. But now I can see the value in having actual humans curate the software. I need to get out of the habit of adding random sources and PPAs just for the sake of having everything bleeding edge.
2. I'm really mostly pissed about this on behalf of other people. I've learned to manage my devices - mostly by being very selective about the software I run. Most people I know in the meatspace don't have the necessary experience and time, and helping everyone individually doesn't scale.
I don't make ad tech. I work on privacy tech that forces advertisers into compliance with privacy laws at my current employer.
That's a pinned tweet from May 2020, and I still stand behind it.
Here's something that will definitely blow your mind: I've unironically experimented with making an ad blocker blocker (i.e. one that forces ads to re-display after being removed), just to see if it could be done. This helped me understand what is currently possible. It was quite an interesting and educational experience which helped me improve an actual ad blocker in practice.
A good defense requires a good understanding of offense.
If you don't like the capabilities present in a browser, appeal to the browser vendor's developers in their public mailing lists.
> How is client core count contributing to this in any way?
I maintain an open source encryption/decryption library[1] for E2EE privacy request fulfillment at my current employer. We use navigator.hardwareConcurrency to distribute decryption jobs across multiple CPU cores so that you can get your E2EE files downloaded in a timely manner.
> It's not really blowing my mind that you are an disingenuous douchebag
If it can be done, it will be done. Real progress lies in the social norms that we perpetuate through the standards and policies of the applications that we use. Try to be the force that makes changes for a world that you want to live in.
I usually frame it as: if something is within reach of our technological capabilities and there's a way to make money on it, someone will eventually do it, no matter how evil that thing is.
Technology is mostly a ratchet - once something becomes possible, it usually never stops being. So if you want to stop or prevent some wrong behavior, you have to address the economics of it: make it not profitable to pursue.
In this particular case: adtech scoundrels find it profitable to know your core count. Explicit API for this information doesn't change much for them, as it's easy to reliably infer that information with a bit of clever code. That clever code, however, will tax your CPU and battery. They don't care, because they're not paying for your electricity. The solution to this problem isn't to oppose the API (that, beyond saving you battery life, has many beneficial applications). It's to make fingerprinting unprofitable.
To make fingerprinting unprofitable, you can make it harder to perform (removing this API doesn't achieve that). But that's pretty much impossible without completely lobotomizing the browser, or without technologies that don't exist yet. So the best avenue of attack is changing (widely understood) social norms to make adtech fingerprinting unprofitable. In the case of a practice as widespread as this, pretty much the only effective way is regulatory - try to get your government to make this stuff illegal.
Nice analysis, but you left out another possible solution, by Google: monitor the resource consumption of web pages and penalize wasteful ones with worse search rankings. Not sure if this is in G's interest, though.
> Why do you think you need my core count when that number is useless for any sort of performance indication?
You're clearly being disingenuous, but I will answer your question in more detail. I already answered it above, but you don't seem to have the time to review the linked open source project.
If I use every core on your device for a decryption job that uses 100% of the resources that you provide it, your decryption job will undoubtedly finish sooner if you provide more resources. It doesn't matter how fast or slow your cores are, or if your computer runs on a big.LITTLE architecture or not.
And it is up to the client to manage all this. You do not gain anything by knowing the client's thread count, which is what I have been complaining about all along.
I actually use this feature for a VR app I'm building.
I'm so sick of the privacy fetishism. You're fingerprintable, ok. You're so fingerprintable that to make you not fingerprintable, we'd have to make browsers only do static document retrieval. Oh, wait, no, not even that would really be enough. Your IP address plus the pages you visit at which times of day is enough.
The browser hasn't been a document-reading platform since CGI was invented. What is that, 30 years now? You don't like that apps are built in browsers? I don't care. Functionally nobody cares. If we couldn't, we'd be building Java apps instead, someone would have invented an open source, live, searchable, distributed system for finding apps, and then you'd still be fingerprintable.
Go sit in a cave if you don't want anybody to know who you are. The rest of us have work to do.
Let's avoid straw man arguments. There is nothing wrong with wanting to reduce the amount of information collection possible to the absolute minimum. You don't need all, or even most of that information either. Reducing information down to IP address and window size would be a significant win.
You don't need it. I do. I use a ton of the browser's APIs. Every single one of them improves the experience in some way. Several of the ones people poo-poo the most would make the apps I build fundamentally impossible.
What is so special about the browser that this one place you don't want apps to function? If I couldn't do it in the browser, then I'd have 5x more work to build my app for all the platforms I support. It wouldn't stop my app being made, it would just cost more. What possible purpose is there to artificially inflate the cost of development?
Or are you under some illusion that native apps are not tracking you?
"What is so special about the browser that this one place you don't want apps to function? "
This is where the disconnect is. I am not saying "I don't want apps to function". I am saying "app developers request way too much data, and could absolutely live without most of it". I don't care if your and my jobs are made harder. Boo-hoo. The world does not revolve around software developers. We don't do ethical things only when it's convenient.
"Or are you under some illusion that native apps are not tracking you?"
This sentiment clearly also applies to native applications. Clearly, system calls weren't designed with invasive data collection in mind. This is why I support what Apple is doing in its latest updates. Give an inch, and a greedy dev will take a mile.
"It wouldn't stop my app being made, it would just cost more."
If a system is not perfect, we're not allowed to adopt it? Good, let it cost 5x as much.
I understand what you've written to mean that you're advocating for artificially crippling software.
This whole conversation thread has been about `navigator.hardwareConcurrency`. I understand what you're saying (browser apps shouldn't have access to this field, and native apps should have the same anti-fingerprinting restrictions you want for browser apps) to mean that even native applications shouldn't be allowed to know the number of threads they can run. That would cripple lots of software.
If that's not what you're saying, then I re-iterate my question to another user from down-thread: what is your proposal? How would you suggest we make it possible to maximize software performance without leaking the user's fingerprint?
I submit that, whatever proposal you make, would either not significantly impact fingerprintability or be fundamentally useless for running software. It's not just that the systems that we have are imperfect, it is that the ideal is impossible to achieve.
Browsing habits alone identify you. Ad networks and shady app developers collude on the backend, selling their databases to each other. No browser-side security policy could prevent it. If you could force a fantasy world where all software running on your local machine was written exclusively by your own self, including a browser that could only send GET requests that you initiate from the address bar, or linked images from the same domain--100% trust in your local systems, 100% trust in no transmission of internal state--you're still completely trackable.
This is why I call it privacy fetishism. You can't create a technology solution to this problem. The very act of connecting to the network requires you put a huge degree of trust into the organizations that run the stuff you're connecting to, otherwise even benevolent actors won't be able to deliver the products and fulfill the services you actively request of them. You want privacy? You will have to stop doing business with anyone doing business with Facebook.
I'm not saying "give up on privacy and share everything in the open". I'm saying these foolhardy attempts at obscuring identity online do nothing towards their goal, can do nothing towards their goal, and only achieve making the browser a bad platform for software. All else being equal, I'd rather the browser be good at running software.
What's stopping someone from forking Firefox and making obfuscated local APIs for it? Things like core count, history length and amount of memory are all arbitrary, so there's no reason a website wouldn't work if I randomized them each time I visit a site.
And I do not get this at all. For my desktop you would get 12; I'm on an i7-8700K with a base frequency of 3.7GHz and a permanent boost of 5.1GHz. This rig runs Oculus VR/SteamVR all the time.
If I ran that site from my Surface laptop, the response would be 12 as well. But those 12 logical cores run at 1.3GHz, and if they boost at all, the cooling might allow 2 cores to reach 1.9GHz - or however Intel's boosting works out under that thermal constraint.
So, what is the useful info you get for your VR app from these values?
If I spawn 8 threads to do some kind of processing while you have 12 logical cores, I am missing out on a 50% performance boost (assuming the work is CPU bound).
If I spawn 20 threads when you only have 4 logical cores, they will just stand in each other's way (caches, context switches, etc.) and possibly eat 5x as much memory as needed on top of that.
I don't want to skirt the HN rules, but let me be clear that your replies appear as aggressive and incendiary while hinting at a lack of understanding for basic concurrency concepts. I don't know whether that's what it actually is or whether there's some miscommunication happening, but either way you're coming off as an ass. Be kind.
Your native VR games need to know how many CPU threads you have to efficiently allocate game processes. Some processes can run at a lower priority, and your OS scheduler figures out the rest.
I can assure you that most of your VR games are using this same data.
But how is my CPU count useful to you in that situation? My i7-8700k has 12 threads, my Surface book 2 pretends to have 16 threads, my iPad Pro pretends to have 16 cores, my iPhone pretends to have 12 cores.
What is the API I can use to map that to any sort of useful 3D performance? How are 3D game processes using this data to evaluate what they are supposed to be doing?
So, considering the range of thread-count responses, none of which you can count on for any sort of performance, why do you need the thread count?
Since you already answered downstream, I'll spare you the bother of doing it here...
>You don't need it. I do. blabla, it makes it easier for me blabla native apps are tracking you as well
Do you... Do you maybe not know how concurrency works?
It's not about hitting a minimum threshold of performance. It's about achieving the best possible performance for whatever system it's running on.
Maybe you think because I said VR it must mean I need to be running on massive gaming rigs. We run quite nicely on 5 year old smartphones, too. We run a pancake mode for people without VR. We run on every headset on the market, and we don't have to pay a lick of attention to what Facebook or Google or Apple thinks should and should not be in their app stores. And we can do this because of the broad range of browser APIs.
Maybe you do not know what words in the English language mean? I have a Quest and an Oculus VR and a Samsung Gear VR and a shitty cardboard daydream.
I get how concurrency works. I do not get how you get any sort of useful information from my desktop claiming 12 threads, to my phone claiming 8 threads, to my Quest claiming 6 threads, to my iPad/Samsung tablet claiming however many threads over WebXR.
I do not see how you can do anything useful with the thread count which is what I have been disputing. CPU thread count seems especially useless since it has no relation to VR performance.
I told you what usefulness I get out of it. Any one of those cores can decode a texture in T time. Do you want me to take T*N time to decode them on the render thread, or would you rather I took T*N/C off the render thread on a system with C logical cores? I don't care that T is different on different machines. I don't care that C is different. Even small values of C make up for some very large values of T on old CPUs.
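The arithmetic in that claim is simple enough to write down. Toy numbers, purely illustrative:

```javascript
// Toy version of the T*N vs T*N/C argument above. T is the per-texture decode
// time, N the number of textures, C the number of logical cores. Units are ms.
function decodeTimes(T, N, C) {
  return {
    onRenderThread: T * N,        // serial: blocks rendering the whole time
    offRenderThread: (T * N) / C, // ideal parallel: C decodes in flight at once
  };
}
```

Even with C = 2 the blocking time halves, which is the point: you don't need to know T, only that dividing by C helps.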
And that is totally fine. If you don't care you don't need to know.
That has been my issue all along. Client computation of course needs to know how many threads are available to distribute computation in an efficient way. Why would the host need to know this though?
>Why does a webpage get to know how many CPU cores I have?
The question we have been replying to? Nobody gives a shit what you can do within the client. Fingerprinting the client as the host is an issue though, you might at least appreciate that...
What is your proposal? Forget the browser for a minute. Just any way, short of airgapping, that a user could receive an app, want that app to perform maximally, but also not have information about the system being egressed?
I don't appreciate it because it's childish. You want to make this out to be a privacy issue when it's not. It adds nothing meaningful to how easily you can be fingerprinted, but removing it would detract significantly from how performant a browser-based application can be made. Without it, you're pushing developers towards having to build native apps instead, giving even MORE access to the hardware fingerprint, where they are now stuck having to pay fealty to the platform gods, while also gaining nothing in reducing fingerprinting in the browser.
I do not dispute that you can use this in your app, my issue is with the server knowing about it (as it is in the demo here) and everything the server can call back from the client. It is not about airgapping but about realizing that current browsers can exfiltrate basically anything.
Your JS app wants to compute the number of threads it can use on a client, that's great. If your JS app can work out locally that I switch browsers and which independent websites I use, that's cool. Why would you send that back to your server though?
If you do not think that current browser implementations have an issue with the amount of fingerprinting they can do, that is up to you as well. I think the amount of fingerprinting is at the very least problematic though. It is not about building native apps, it is about the amount of information that is sent back. Or I'm drunk at this point and should go to sleep, I think I'm done in any case. Sorry about being an ass.
How many concurrent operations I can run. I use it to download and decode textures and audio data. Without it, you're waiting 15 seconds between scene transitions, massively dropping frames all the while. With it, it's 1 second and there is never a hitch.
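A minimal sketch of that kind of bounded pipeline — `mapWithLimit` is a hypothetical helper, not the commenter's actual code; in a page you'd set `limit` from `navigator.hardwareConcurrency` and `fn` would fetch and decode one texture:

```javascript
// Run fn over items with at most `limit` promises in flight at once.
// Illustrative sketch of a concurrency-limited download/decode queue.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  // Each "lane" pulls the next index until the work runs out. Since JS is
  // single-threaded, the check-and-increment of `next` is race-free.
  async function lane() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const lanes = Math.max(1, Math.min(limit, items.length));
  await Promise.all(Array.from({ length: lanes }, lane));
  return results;
}
```

The scene load then becomes `mapWithLimit(textureUrls, cores, fetchAndDecode)` instead of either fully serial or fully unbounded.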
Sites like this are the exact reason why I browse without JS by default. A site has no business knowing how many previous websites I have visited, or what my CPU is. All these JS APIs ought to be able to be disabled by default.
Firefox has started nuking some of them, like the battery API, and has a setting for fingerprint resistance which returns a standard value for all of the useful-but-not-essential fields, like time zone.
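For anyone who wants that behaviour explicitly, the umbrella pref can be set in a profile's `user.js` (the pref name is real; exactly which fields it spoofs varies by Firefox version):

```js
// user.js snippet for a Firefox profile: flip on the fingerprint-resistance
// umbrella pref, after which Firefox reports standardized values (e.g. a UTC
// time zone and spoofed hardware details) to pages.
user_pref("privacy.resistFingerprinting", true);
```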
I can't find the website, but there's a site that specifically tells you everything it knows about you just by keeping the page open.
It tells you if it knows where your cursor is, what pages you've been to, what your computer is, etc.
It was intended to show that you, the user, give out a lot of data without even realizing it.
EDIT: I did a search on reddit and it looks like it actually is https://clickclickclick.click, but because the site hasn't been accessible I can't be sure.
There was a very nice attack that detected whether you were logged in to different websites by attempting to fetch crafted URLs that were available only if you were logged in.
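The classic version of that attack is only a few lines: many sites redirect a known asset to their login page when you're logged out, and an `<img>` load observably succeeds or fails depending on which response you got. A browser-only sketch with an illustrative helper name (modern SameSite cookie defaults largely close this hole):

```js
// Probe whether the user is logged in to another site (illustrative sketch).
// resourceUrl points at an image the target serves only to logged-in users,
// redirecting everyone else to an HTML login page, which fails to parse as an image.
function probeLoggedIn(resourceUrl) {
  return new Promise(resolve => {
    const img = new Image();
    img.onload = () => resolve(true);   // image loaded: probably logged in
    img.onerror = () => resolve(false); // redirect to login HTML: load error
    img.src = resourceUrl;
  });
}
```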
Great question. When I read about that I was working for an online lending startup. We briefly discussed using such a feature as a signal for credit risk underwriting.
The logic was that maybe you were logged in to Telcel or to Movistar, or to Liverpool vs Sears (Mexican retailers). That could give us signal on the probability of default.
Damn it HN! Y'all giving this place the hug of death and now I can't get past the "Subject clicked the button" level. It just won't load any other level for me. I guess I'll try tomorrow when it's off the front page.
That's really neat - fun use of VR, and entertaining without it too. It took me a while to figure out that you could click on the dancers and see it from their viewpoint though.
Love this. I sketched out a similar single-button idea for an art project back in 1998 with a programmer. I wanted human-human interaction, but this is much better.