Click (clickclickclick.click)
1100 points by st_goliath 29 days ago | 242 comments



This is great. I tried to hack the progress by automating clicks, using:

  let button = document.getElementsByClassName('button')[0]
  Array(100000).fill(undefined).forEach(() => button.click())

In response, I got the following log message:

  > Such a smart subject.


Well, that's an unconventional way to write a for loop. Maybe the website is commending you on that :-)


Can save more bits by doing:

Array.from({ length: 10000 }, () => button.click());


I think this is awesome... I've always done Array.from(new Array(n)).

But your approach makes perfect sense and is so much cleaner. Thank you!

If anyone is wondering why it works, it's because everything in JavaScript is an object. So, an array is an object with a length prop. Apparently Array.from just needs any object with a length prop to work.
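
A quick sketch of that, for anyone curious (the map callback receives (element, index), and the elements here are all undefined):

  // Any object with a numeric length property is array-like enough:
  Array.from({ length: 3 }, (_, i) => i * 2) // [0, 2, 4]

  // Indexed properties are picked up too:
  Array.from({ length: 2, 0: 'a', 1: 'b' })  // ['a', 'b']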


Something about ducks?


"If it walks like a duck, and it quacks like a duck, then it must be an object"


"If it walks like a duck, and it quacks like a duck, then it must be created by a QuackEnumeratorWalkManangerRepositoryProxyFactoryFactoryAdapterBroker." -Java


If it has feathers, then as far as I'm concerned it is a duck.

Similarly, when I pick up a feathered pen in real life I become a duck


duckduckduck.duck is still available, by the way.

Well, .duck isn't a TLD yet and it may not be publicly available...

https://icannwiki.org/.duck


Now gogo.duck and go.duck would be nice aliases for DuckDuckGo. (There is ddg.gg of course.)


I did not know this variant, thanks. Having to fill the array with anything (even `undefined`) to iterate on it seemed such a weird concept (but then again, JS is weird).


The spread operator does not have the same quirk. I use it, but it's still a bit weird though

[...new Array(1000)].map(something)


You can drop the new

[...Array(1000)]

I think that is the least bits. Would love to be proven wrong :D


It may be shorter, but I like the spread syntax best used sparingly; it's harder to grasp at a glance why it was necessary there.

That's what I like in the elegance of Array.from({ length: 1000 }): it's a hack, but it reads naturally.


"Harder to grasp" only lasts until you're mildly familiar with the idiom.


[...Array(1e3)]


new Array(1000); ?


Try iterating over it; it won't work. This creates 1000 "empty slots".
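
For example (a sketch of the hole behavior):

  Array(3).map(() => 1)      // [ <3 empty items> ] -- map skips holes
  [...Array(3)].map(() => 1) // [1, 1, 1] -- spread turns holes into undefined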


In addition to franky47's more important point, I'll add that `new Array()` and `Array()` are equivalent.


In Java it would be default-initialized, but it isn't in JS.


Never heard of a "from" loop before.


Array.from lets you build an array from an iterable source (often used in conjunction with `new Set` to eliminate duplicates), or in this case, of a fixed length.
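
For example (a sketch of both uses):

  Array.from(new Set([1, 2, 2, 3]))      // [1, 2, 3] -- dedupe an iterable
  Array.from({ length: 3 }, (_, i) => i) // [0, 1, 2] -- fixed-length array-like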



This is your brain on javascript :)

Guessing it's because one-liners are easier to write than for loops in the console.


I haven’t had an off-by-one error since I stopped using for loops.


Got any solution for naming and caching too?


I get my variable names from `openssl rand -hex 20`


It’s also because it’s good practice to use the lowest power loop possible.

A for loop can implement a reduce, a reduce can implement a map, a map can implement a forEach, but not the inverse.
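
A sketch of that descent in JS (quadratic copying aside):

  // reduce can implement map:
  const map = (arr, f) => arr.reduce((acc, el) => [...acc, f(el)], []);

  // map can implement forEach (call for effects, ignore the returned array):
  const forEach = (arr, f) => { arr.map(f); };

  map([1, 2, 3], x => x * 2) // [2, 4, 6]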


Huh? I don't follow.

  In [1]: def lred(function, iterable, initializer): 
     ...:   acc = initializer 
     ...:   def reductor(el): 
     ...:     nonlocal acc 
     ...:     acc = function(acc, el) 
     ...:   for n in map(reductor, iterable): pass 
     ...:   return acc 
     ...:                                                                             
  
  In [2]: lred(lambda a,b: 10*a + b, [2,4,6,3,1], 40)                                 
  Out[2]: 4024631
What's stopping you from running these equivalencies in either direction?


They mean that the forEach runs code per element, but cannot return a transformed list of values, you need map for that.

A map can run code per list element, but the mapping is one-to-one: exactly one output element per input element.

Reduce can do all of the above, but must return a lower dimension result from its input.

And finally, a for loop is basically omnipotent.


A forEach can easily return a transformed list of values.

  acc = []
  for el in values:
    acc.append(f(el))
  return acc
This hierarchy doesn't exist; you can pretty much implement any of them in terms of the others.

To make that untrue, you need to define very strict limits on what else your language can do.

(Sure, we produce a transformed list here through the use of side effects. But note that the original example sparking the claim that forEach can't do what map can was this:

  .forEach(() => button.click())
Side effects are clearly allowed.)


The forEach isn't returning it in your example, the line after it is returning. Try again without acc and I believe that's what was meant.

You can implement any of them in terms of the others, but only if you break convention and introduce side effects. Follow convention and try to implement (for example) reduce with map and you'll find that it's not possible.


> You can implement any of them in terms of the others, but only if you break convention and introduce side effects.

This can't be right. forEach has no effects other than side effects. A convention that says not to use side effects prevents you from using forEach at all.

But that would make claims about the place of forEach in a hierarchy into meaningless nonsense.

> The forEach isn't returning it in your example, the line after it is returning.

That depends on your point of view. C has numerous functions which accept pointer parameters and return values in those parameters. That's just the normal way to return multiple values in C.

And by that standard, the forEach itself is returning the transformed list; that's where the transformation occurs. The following `return` line is only necessary if this is a snippet within a function whose purpose is to execute a for statement; you would actually do this inline, by just writing the for statement without needing a following `return`.


I really don't think you got the original point...


What was the original point?


I think they're only talking about JavaScript, in which case the forEach function doesn't return anything. So you can do x.map(…).forEach(…) because map returns an array, but you can't do x.forEach(…).map(…). Your examples seem like they're for a different language.


Haskell has strict limits on side-effects, but also abstracts these functions over the thing you're "forEaching." There, a reduce can't implement a map, because a map is required to return the same type of collection you put in. A reduce can only return an arbitrarily chosen collection (say, an Array).

And without side-effects, a map can't implement a forEach.


> And without side-effects, a map can't implement a forEach.

Yes, it can, because without side effects, forEach is a NOP. That's not difficult to implement. As long as you don't do anything with the return value from map, you're there.



> Reduce can do all of the above, but must return a lower dimension result from its input.

What do you mean by this? You can accumulate just about anything into the resulting object, including a copy of the original array.


I think it's just being confusingly stated. GP already said that reduce could implement map. I think it's merely that reduce can just output a single result, of any type. (Which is already more powerful than map, which must output an array.)

Of course, in fact you could hack anything in any of these, since you could be doing other work in the function called. But I think the general principle is sound.


> (Which is already more powerful than map, which must output an array.)

Don't tell Common Lisp.


forEach is actually more powerful than map, you would not be able to use map for side effects in a static and/or lazy language. In a strict, dynamic language, it matters less. Even so I wouldn't describe being an expression vs statement as a difference in power. It's contextual.


> forEach is actually more powerful than map, you would not be able to use map for side effects in a static and/or lazy language.

You might notice this problem above, where it's necessary to do

  for n in map(reductor, iterable): pass
to realize the mapping.


In particular, a for-loop can break out early or skip elements.


So can reduce. You'd put that logic in the function you pass to the call to reduce.

So can map. Ditto.

So can foreach. Ditto again.

  for el in values:
    if el == 3: pass
    else:
      do_something_with( el )


No, that's just a guard condition. Parent post was referring to break, i.e. skip the rest of the iteration.


For forEach that's easy:

  for el in values:
    if el == 3: return
    do_something_with( el )
For the other two, you still can, but you'll need to handle the control flow yourself, by doing something like goto or invoking a continuation.


That's a for loop. And depending on the language I'm pretty sure that return call would exit the parent scope.


> That's a for loop.

I take it you've never read any Python? That's a forEach. Python doesn't even have a for loop construct (though it does have while, which is equivalent).

> And depending on the language I'm pretty sure that return call would exit the parent scope.

Yes, that's the point, that's an illustration of terminating the forEach before you've processed each element.


I'm not very proficient in Python, no. I thought you were giving pseudo-code examples. If your argument doesn't apply to the common array methods (forEach, map, reduce) in JS and similar languages, then you missed the entire point of this comment thread.


Python's for-loops are really for-each loops.

If you have a C-style for-loop, you can and have to do everything yourself. That means you can skip elements or process them twice, etc.

In a Python-style for-each loop, you can break out with break or return, but you have a harder time skipping or changing the order of processing. So they are weaker. And that's good.

(Your examples still process elements in the body of the loop. It's just that sometimes the body decides to do a no-op.)

Putting the no-op logic in the body of the loop, or in the function you pass to reduce is different than being able to short-cut evaluation.

You can see the difference most clearly when trying to process (the start of) an infinite generator with reduce or a for-loop. Reduce will just hang.
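
The same point in JS (a sketch):

  function* naturals() { let i = 0; while (true) yield i++; }

  // A for-of can short-circuit the infinite generator:
  const firstThree = [];
  for (const n of naturals()) {
    if (n >= 3) break;
    firstThree.push(n); // ends up as [0, 1, 2]
  }

  // reduce has no way to stop early; even getting the generator into an
  // array first ([...naturals()].reduce(...)) never terminates.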


>Guessing it's because one-liners are easier to write than for loops in the console.

Firefox has a pretty good multi-line editor now if anyone was looking for that


It's how you would do it with a language like APL or q.


You and everyone else copying it seem to have DDoSed the site, and now I don't get to play with it lol


Unless clicking the button furiously sends a ton of network requests (I'm on my phone so can't confirm ATM), the only thing you're DoSing is your own CPU, as I forgot to add some kind of thread relief delay.


It responded correctly; it said "Robot, exciting" and then wrote "subject has run script to click on the button ten times within one second", whereafter "subject has clicked on the button a thousand times".

Maybe a contender to CookieClicker? 73% achievements atm


xdotool can do this type of thing pretty easily as well:

Open a console, and prep: xdotool click --delay 50 --repeat 1000 1

Move your mouse to the location and press enter.


Shameless plug, you can do the same with this[0] albeit not in the CLI.

0: https://github.com/rmpr/atbswp


Is that actually the best way to do ranges on JS?


The other comment points out Array.from, which seems pretty nifty. Have not seen it before. I would have used:

  for (const i of Array(1000).keys()) { doSomething() }
or

  [...Array(1000)].forEach(() => doSomething())


why create the array?

for(let i = 0; i < 1000; i++) button.click()

wouldn’t be surprised if Javascript has some weird optimization under the hood!


In this case, there's not much difference. In general, iterating over something rather than using a counter with limits makes it harder to create off by one or other errors due to laziness.


I'm a fan of `Array.from({ length: 1000 })`


am i missing something with this entire subthread? why not for(let i=0; i<n; ++i){something()};?


Doesn't use enough memory perhaps.


Or if you want to avoid off-by-one errors, why not have a function:

    dotimes(n, () => button.click())
If anyone is in the habit of solving trivial constant space problems in linear space where n is expected to be over 100000, I don't know what to say to them. Are Javascript implementations pretty much guaranteed to optimise the array away?
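
For what it's worth, a minimal dotimes is a one-liner (a sketch, not a built-in):

  const dotimes = (n, f) => { for (let i = 0; i < n; i++) f(i); };

  dotimes(100000, () => button.click()); // constant space, no throwaway array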


I think it's mostly a stylistic choice, but getting in the habit pays dividends in that it makes it easy to do certain list related actions in a consistent way. forEach doesn't necessarily show this off well, but filter and map make it more obvious.

E.g.

    // Get list of even squares of first 10 integers
    Array.from({ length: 10 }).map((e,i)=>i*i).filter((i)=>i%2==0).forEach((i)=>console.log(i));
    // Or more readable
    Array.from({ length: 10 })
     .map((e,i)=>i*i) // Get squares of each number
     .filter((i)=>i%2==0) // filter for evens
     .forEach((i)=>console.log(i)); // print
Want to filter or manipulate the values in a different way? Throw in another map or filter. Want to pass complex values between steps? You can pack them into an array or object in a map and unpack later.[1] Once you're used to doing stuff with lists, you can use some other interesting list operations like reduce(), some(), every(), etc.

It's not really better (it's subjective), it's just different and a lot of people are used to it and prefer it (and it's fairly consistent in nomenclature across some languages, which is a bonus).

1: https://en.wikipedia.org/wiki/Schwartzian_transform


Because let transformed = for(let i=0; i<n; ++i){something()}; doesn't work.

For loops are just clunky. They're not values, you can't pass them around or copy'n'paste them where the transformed value is needed.


This. Data and transformations on data (map, filter, reduce, etc) are trivially composable. Imperative loops are not (unless you abstract them away behind a function that accepts and returns data, in which case you'd be re-implementing map, filter, reduce with imperative loops).


I guess they can argue they're introducing fewer variables. lol


there is no true scotsman

[...Array(n)].map(something)


this is what i’m preaching


Traditional for loops are probably the most common for ranges, but if you want a more functional approach you can use the Array constructor or Array.from with its various parameters.


The functional approach also works better as a one-liner when typed in the devtools console.


I guess you could do `[...new Array(1000)].map((_,i) => i);` to create a sequence.


Javascript has gotten so good with ES6 and beyond. But there's still no nice way to do something n times without a C style loop.


I used a `setInterval(() => btn.click(),100)` and that seemed to fool it


Sites like this often go down when they reach the HN first page. I've naively deployed stuff on AWS free tier with no scaling or anything that's handled thousands of concurrent requests out of the box. Is the HN kiss of death that bad, or is it just that a lot of people use weird/shared hosting providers?


I've had the honour to build clickclickclick.click (frontend/backend), with the truly unique team at Studio Moniker in Amsterdam.

I was quite inexperienced - 1 year after graduating from art school - when I joined and started working on this project. The server went down 5x in a single day because of all the requests. We were using web sockets (simple node server with express + socketio and a react + rxjs frontend) and that put some strain on the server. And I made the rookie mistake of storing images directly in the database instead of in an S3 bucket. Also we chose CouchDB, which saves document revisions (I didn't know..), so by the end of the day the database took up 100% of the disk space and I couldn't SSH into the server anymore. There were something like 20M+ database writes within 24hrs of the launch because it went viral.

We spun up larger DO droplets several times because of this issue, which took a while to fix.

It was my first time setting up / working with a VPS and nginx + sockets as well, so I'm actually quite pleased with only 15-30mins downtime overall on launch day :)

I learned a lot during that time (it was made in 2016), especially with the help of HN. Projects like this are often developed by junior devs or people skilled in other areas, and some experience is useful when deploying scalable apps. Nowadays it's much easier with services like Vercel/Cloudflare/Heroku/AWS/etc. We used SFTP to deploy the site. I think I moved it to a Docker container some time after that.

I'm still using DigitalOcean to this day for my personal stuff, only now I use Dokku + Cloudflare - which works like a charm.

Check out https://studiomoniker.com if you want to see more crazy projects! (They're my previous employer FYI).


Fantastic projects there. I'm interested in "Red Follows Yellow Follows Blue Follows Red". It seems like it would be tons of fun for school kids. I can see a kit of headphones and capes being something a teacher could use to have a fun activity without too much work.


Dang... I want to build a sand pen too!


re Studio website: minor spook when the arm comes from the bottom of the screen to tap on the shop link unexpectedly


I'd read this comment and still had that horror movie jump scare feeling!


What’s frustrating is the lack of error handling, especially in an intentionally “mysterious” game like this. I spent a few minutes trying to “figure it out” before I realized, nope, it’s just broken (web socket connection failed).


We're (Moniker) doing palliative care for it at the moment. The technology is quite old.

Sorry about that! There may be an initiative to update it in the near future.


Doesn't seem to recognize the latest Chromium version of IE Edge


Same here, had to switch off of Firefox


Worked for me the second time I loaded it in Firefox


It depends on what type of site it is. A dynamic site with lots of database calls and no caching would probably crash much quicker than a static HTML page under the same amount of traffic. HN easily sends hundreds of concurrent users, if not more, so it can crash a shared-hosted dynamic site with no caching.


My website is hosted on the cheapest non-free Heroku tier ($7), and served by a single Node process. I've had a couple of posts get to #1 on HN (one stayed there for 24+ hours if I remember correctly), with zero server issues.

I do statically render everything, so not doing that is the only reason I can think of for why so many sites go down when they reach the front page. Many of these are blogs, so they could probably do much better, though I did see someone mention that this one uses websockets for something, so it would be doing some logic server-side and wouldn't be able to go fully static.


You should look into Dokku!

I'm hosting 30+ apps (frontends / APIs / CMSes / other processes) on a single €5/mo DigitalOcean droplet. I believe Hetzner is the cheapest hosting provider, with a €2.87/mo VPS.

Self-hosting is a lot of fun, and cheaper. Dokku has been around for several years; it's an open-source Heroku clone and I would recommend it to anyone looking into deploying web apps.


I've heard of it; for me, messing with a system (even once) isn't really fun, it just gets in the way of spending time on my code. I don't have a ton of projects and Heroku has a free tier for things that don't need 100% uptime, so it's worth the extra couple bucks for me

Edit: I see DigitalOcean has it available as a 1-click install. Do you get totally automatic updates/system management and everything with that?


How do you keep things secure and up to date? That's always been my problem: when developing an app I don't want to do system administration, so I always end up going with Heroku (or its cheaper sibling render.com lately) because I don't have time for sysadmin work, backups, etc.


When I put up tls.ulfheim.net (a small-ish static site on a t2.micro) HN and Reddit were able to bring it down by maxing out the apache workers.

Some config changes fixed it right up, but my point is that it's not just the capabilities of the instance, the default http server configs might need some tweaking too.


Out of curiosity, what config changes did you use on apache?


Same question here. I'm not sure I would know where to start on those particular configs.


MaxRequestWorkers, ServerLimit?


IIRC it was MaxRequestWorkers to 1500, and MaxConnectionsPerChild to 0 (unlimited)
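
For anyone wanting to reproduce this, those directives live in the MPM config; a sketch for the event MPM (the file path and the ThreadsPerChild value are assumptions, tune per instance). Note that raising MaxRequestWorkers also means raising ServerLimit so that ServerLimit × ThreadsPerChild covers it:

  # /etc/apache2/mods-available/mpm_event.conf (Debian-style path, assumed)
  <IfModule mpm_event_module>
      ThreadsPerChild        25    # the default
      ServerLimit            60    # 60 * 25 = 1500 worker threads available
      MaxRequestWorkers      1500
      MaxConnectionsPerChild 0     # 0 = never recycle child processes
  </IfModule>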


Here are some analytics from an HN hug a month and a half ago: https://forum.photostructure.com/t/front-page-of-hacker-news...

(Static sites should be fine, but many sites run a dynamic CMS like WordPress, Ghost, or Discourse on very low-powered hosts, which really needs a caching layer to hold up under non-trivial load.)


Either a badly coded website (heavily relying on some backend instead of static files) or weak hosters, but I'm not sure. I have a blog that survived every single HN/Reddit hug on a pretty weak VPS.

The blog is written in PHP but has no database interaction.


It appears to be hosted on Digital Ocean.


Surely not the first appearance?


I'm on Firefox. I just got "subject clicked the button" and nothin' else is happening. Is that it?


It makes a network query, and stops at this point if it fails. And it often fails because the website is overloaded.


Looks like the devs might want to add some error handling on the client side for this case. However fantastic/intriguing/addictive this site may be when it works, all I see is a button and this message, and I wonder WTF this is.


Same here. I'm getting:

> Firefox can’t establish a connection to the server at wss://clickclickclick.click/socket.io/?EIO=3&transport=websocket&sid=... websocket.js:111

> The connection to wss://clickclickclick.click/socket.io/?EIO=3&transport=websocket&sid=... was interrupted while the page was loading.

Reading the comments, it sounds like the site is supposed to log everything that I do? Perhaps Firefox is blocking it?


Word on the street is socket.io doesn't scale


Weird. I'm using Firefox and the page says "Secure Connection Failed. An error occurred during a connection to clickclickclick.click. PR_END_OF_FILE_ERROR"


Same here. Can't load the page.


I had to disable all adblockers to get the full functionality.


No, there's a lot more. I have no problems in Firefox 86.


seeing this behavior as well.

As is the plight of a firefox user, I'm headed to chrome to see if it works there :\

edit: looks like there's a ton of audio files that may have failed to load.


This is a piece created by the Amsterdam-based Studio Moniker. They have many more projects that play with the same ideas, see here: https://studiomoniker.com/projects


I did get a kick out of "https://donotdrawapenis.com/", which has apparently collected 25,000 reasonable penis sketches. I guess for ML training to keep penis drawings off of your website?


Exactly! The full dataset can be found here:

https://github.com/studiomoniker/Quickdraw-appendix


Reminds me of Samy's website. For a fun time, try to view the page source: https://samy.pl/


I thought I was smart by doing "curl https://samy.pl/". :(


IIRC, at one point he offered a semi-bounty on anyone who could actually reveal the source, only partly in jest.


Well there goes an hour.


For some reason the page blanks out as soon as I load it and then when I open the console it loads normally and I can see the source.


> No source for you! You found easter egg #7. Close the console to return to samy.pl ;)

Hmmmmm...


The detection is pretty simple; in FF undocking the console bypasses all of it and you can debug whatever you like. :)


Oh, damn. Well, I'm glad I didn't figure that out. Doing it the "right" way was fun!


Or open the console while you're on hacker news and then click on the link.


This does not work for me in Safari.


Just open the debugger. You can see the entire sources in there.


This is great. Thought I'd got it for a second when I hit #11...


Site is down, but this is an explainer by the site's creators (with demo):

https://studiomoniker.com/projects/click-click-click

Getting to #1 on HN is never easy for the robots. :-(


Sorry about that. It should be live again. Not sure for how long though... We (studiomoniker.com) are quite busy with other projects and we don't have the time to properly support this project. Hopefully we can make it more HN-proof in the future. The project definitely deserves it!


Security's worst nightmare! But a brilliant creation from a psychological point of view. Awareness Awareness Awareness...


Nifty. Now if only this were the required homepage on all major browsers starting up for the first time; folks at home would quickly become aware of just how well-tracked they all are.


If curious, past thread:

A demonstration of browser events used to monitor online behaviour - https://news.ycombinator.com/item?id=12985644 - Nov 2016 (165 comments)


I have JS disabled by default using uBlock Origin. The first thing I did was press CTRL+U to inspect the source. Then I deduced the site wasn't malicious (or is it?) and played around with it with JS enabled. It creeped me out, as all these data points could be used to fingerprint a user using simple heuristics like mouse movements, etc.


I don't understand why someone would be paranoid about having their mouse movements tracked.


Most users have very distinctive styles, or mouse cadence. Same with typing cadence, and there are even services[0] which determine if a user is who they claim to be based on their unique typing 'DNA'. All a site has to do is embed some JS that measures your mouse-movement style and it can reliably determine whether it's 'you' on the site, and can more accurately target ads at you, or even sell your data out the back door for a profit.

[0] https://www.typingdna.com/


Why does a webpage get to know how many CPU cores I have?


https://developer.mozilla.org/en-US/docs/Web/API/NavigatorCo...

There is no good answer to why, except to improve fingerprinting, which is most of what this site shows: the amount of data a site, even open in the background, can use to continuously fingerprint you. Or maybe I misjudge what this is supposed to do.


I bet it helps for some niche apps like webgl/webgpu games.


You've been given several good answers as to why, you just refuse to believe anything anyone is telling you.


I believe that you think your answers are true, but my desktop will report 12, my work laptop will report 16, my MacBook will report 16 as well, my work desktop will report 64 (my Surface laptop will report 12 as well, my iPad Pro 6... I can continue like this for some time).

Which machine do you think has the best performance for your app?


Those numbers actually mean something. That’s approximately how many concurrent threads I would want to use. I’m not sure I follow the point you’re making


I don't care. You picking the machine is up to you. I only care about giving the best performance on whichever one you've chosen.


There does seem to be a fingerprinting angle on it, but I wrote about using CPU, battery, memory, etc. as considerations for how much JavaScript you load for your users: https://umaar.com/dev-tips/242-considerate-javascript/#load-...


So I can know how many worker threads to create before it's just a waste of thread scheduling


    window.navigator.hardwareConcurrency
[edit: well this is more of a 'how' than a 'why']


The answer to why: It helps with efficient allocation of worker thread pools.

I helped make a timing attack[1] as justification for adding this API, and then presented this suggested API to each browser vendor along with the timing attack. The result was that every browser has adopted my suggestion.

If this API was not present, ads could get this data in a more resource-intensive manner anyways.

1. https://eligrey.com/blog/cpu-core-estimation-with-javascript...
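
A sketch of the thread-pool pattern in question (worker.js and the jobs queue are illustrative names, not anything from the article):

  // Size a worker pool to the reported core count, with a fallback:
  const cores = navigator.hardwareConcurrency || 4;
  const pool = Array.from({ length: cores }, () => new Worker('worker.js'));

  // Stripe jobs across the pool (jobs is an assumed queue of messages):
  jobs.forEach((job, i) => pool[i % cores].postMessage(job));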


Ahah, I like the "give up" approach to fighting browser fingerprinting! "If ads can track us, we might as well make it efficient".


The threshold of information needed to gain reliable fingerprintability is so low that we could rewind the browser development clock 20 years and still be nearly 100% identifiable. We'd gain nothing in terms of privacy, but we'd lose everything in terms of the first and only application platform that runs on every system short of a greeting card, is free to use, easy to use, not tied to an app store, not tied to a single vendor.


The original sin of the web is that the code comes from the server again every time you run it. That means you need robust sandboxing and anti-fingerprinting etc., because you're running potentially hostile code that nobody has been able to audit.

Other types of programs don't have that problem. If you get some code from Github, you can review it yourself before the first time you run it. Then every time after that, it's still the same code so you only have to do it once. And you can have someone you trust do it for you, like a Debian package maintainer.

But with nobody reviewing the code, the machine has to do it, i.e. there have to be a bunch of technical constraints on tracking and malicious behavior.

It's a terrible rubbish fire that we don't have any kind of real application platform for real applications that runs the same on every system and doesn't have a monopolist dictating terms. We should fix that. But we could fix that, and be better off than by giving up and conceding the world to surveillance dystopia.


> Other types of programs don't have that problem.

Well, they didn't. They do now, as you're expected to update everything continuously. A typical user of a PC or a smartphone has something downloading an update pretty much every day. Even a tech-savvy user can't hope to keep up with trying to track down all sneaky automatic updates and read a changelog before applying them (assuming there even is one, beyond "This update improves experience and fixes bugs" zero-information boilerplate).

At this point I'd be willing to pay for a service that would intercept all automatic updates on my devices and warn me about the ones that bring in telemetry, malware, performance degradation or other misfeatures. Unfortunately, such a service would require impossible feats of crowdsourcing to keep up with the deluge, and itself would be a huge privacy/security risk.


Aren't you just describing a package manager or an app store?

The reason mobile app stores are garbage is that the store is glued to the platform, making it high-friction to switch to another one if they do a bad job. Then they do a bad job by allowing things you don't want and prohibiting things you do want (and charging high fees etc.) and get away with it.

There is no reason for this to be centralized into a single approver. If you got 90% of your software through the Debian package manager but specifically need a newer version of Blender than they package, you could get that in particular directly from the Blender developers because you trust them not to intentionally distribute malicious code, while still relying on the package maintainers to do the work for all the other software you use.

That's possible right now on Linux. The problem is mostly that it's not possible right now on everything.


You're right, in a way. What I described would be a reality under a package manager with curated repositories, if I sourced all my software from there.

My wish came from the opposite end - I have all this software on my devices that's sourced from a lot of different places, and some of the software on my PC has built-in auto-update that's independent of the original installation method. What I want is a curation add-on - a single (at least per-device) component that would intercept all automatic updates of everything, coupled with a database (the service part) that could tell me roughly what the update contains, and flag anything problematic (telemetry, ads, feature removals, performance degradation, ...).

Your reply made me realize two things:

1. I used to hate default package sources on Debian for shipping a small selection of outdated software. I formed this impression back when I was young and naïve, and didn't question it since. But now I can see the value in having actual humans curate the software. I need to get out of the habit of adding random sources and PPAs just for the sake of having everything bleeding edge.

2. I'm really mostly pissed about this on behalf of other people. I've learned to manage my devices - mostly by being very selective about the software I run. Most people I know in the meatspace don't have the necessary experience and time, and helping everyone individually doesn't scale.


[flagged]


I don't make ad tech. I work on privacy tech that forces advertisers into compliance with privacy laws at my current employer.

That's a pinned tweet from May 2020, and I still stand behind it.

Here's something that will definitely blow your mind: I've unironically experimented with making an ad blocker blocker (i.e. forces ads to re-display after being removed) before, just to see if it could be done. This helped me understand what is currently possible. It was quite an interesting and educational experience which helped me improve an actual ad blocker in practice.

A good defense requires a good understanding of offense.

If you don't like the capabilities present in a browser, appeal to the browser vendor's developers in their public mailing lists.


[flagged]


Whoa, crossing into personal attack like this will get you banned here. Please don't post in the flamewar style generally.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


> How is client core count contributing to this in any way?

I maintain an open source encryption/decryption library[1] for E2EE privacy request fulfillment at my current employer. We use navigator.hardwareConcurrency to distribute decryption jobs across multiple CPU cores so that you can get your E2EE files downloaded in a timely manner.

> It's not really blowing my mind that you are an disingenuous douchebag

If it can be done, it will be done. Real progress lies in the social norms that we perpetuate through the standards and policies of the applications that we use. Try to be the force that makes changes for a world that you want to live in.

1. https://github.com/transcend-io/penumbra/


[flagged]


> Sorry, but what is that even supposed to mean?

I usually frame it as: if something is within reach of our technological capabilities and there's a way to make money on it, someone will eventually do it, no matter how evil that thing is.

Technology is mostly a ratchet - once something becomes possible, it usually never stops being. So if you want to stop or prevent some wrong behavior, you have to address the economics of it: make it not profitable to pursue.

In this particular case: adtech scoundrels find it profitable to know your core count. Explicit API for this information doesn't change much for them, as it's easy to reliably infer that information with a bit of clever code. That clever code, however, will tax your CPU and battery. They don't care, because they're not paying for your electricity. The solution to this problem isn't to oppose the API (that, beyond saving you battery life, has many beneficial applications). It's to make fingerprinting unprofitable.

To make fingerprinting unprofitable, you can make it harder to perform (removing this API doesn't achieve that). But that's pretty much impossible without completely lobotomizing the browser, or without technologies that don't exist yet. So the best avenue of attack is changing (widely understood) social norms to make adtech fingerprinting unprofitable. In case of as widespread practice as this, pretty much the only effective way is regulatory - try to get your government to make this stuff illegal.


Nice analysis, but you left out another possible solution, by Google: monitor the resource consumption of web pages and penalize wasteful ones with worse search-engine positions. Not sure if this is in G's interest though.


> Why do you think you need my core count when that number is useless for any sort of performance indication?

You're clearly being disingenuous, but I will answer your question in more detail. I already answered it above, but you don't seem to have the time to review the linked open source project.

If I use every core on your device for a decryption job that uses 100% of the resources that you provide it, your decryption job will undoubtedly finish sooner if you provide more resources. It doesn't matter how fast or slow your cores are, or if your computer runs on a big.LITTLE architecture or not.


So, why do you need to know how many cores my platform will provide if you can not judge the performance of the cores anyway?


I answered that question in the comment that you are replying to.


And it is up to the client to manage all this. You do not gain anything by knowing the client's thread count. Which is what I have been complaining about all along.


What do you mean “you do not gain anything”? It seems you are confusing GHz with parallelism. The thread count matters, regardless of cpu speed


I actually use this feature for a VR app I'm building.

I'm so sick of the privacy fetishism. You're fingerprintable, ok. You're so fingerprintable that to make you not fingerprintable, we'd have to make browsers only do static document retrieval. Oh, wait, no, not even that would really be enough. Your IP address plus the pages you visit at which times of day is enough.

The browser hasn't been a document-reading platform since CGI was invented. What is that, 30 years now? You don't like that apps are built in browsers? I don't care. Functionally nobody cares. If we couldn't, we'd be building Java apps instead, someone would have invented an open source, live, searchable, distributed system for finding apps, and then you'd still be fingerprintable.

Go sit in a cave if you don't want anybody to know who you are. The rest of us have work to do.


Let's avoid straw man arguments. There is nothing wrong with wanting to reduce the amount of information collection possible to the absolute minimum. You don't need all, or even most of that information either. Reducing information down to IP address and window size would be a significant win.


You don't need it. I do. I use a ton of the browser's APIs. Every single one of them improves the experience in some way. Several of the ones that people poo-poo the most would make the apps I make fundamentally impossible.

What is so special about the browser that this one place you don't want apps to function? If I couldn't do it in the browser, then I'd have 5x more work to build my app for all the platforms I support. It wouldn't stop my app being made, it would just cost more. What possible purpose is there to artificially inflate the cost of development?

Or are you under some illusion that native apps are not tracking you?


> Every single one of them improves the experience in some way.

As would a full read-write access to the system, no doubt.


"What is so special about the browser that this one place you don't want apps to function? "

This is where the disconnect is. I am not saying "I don't want apps to function". I am saying "app developers request way too much data, and could absolutely live without most of it". I don't care if your and my jobs are made harder. Boo-hoo. The world does not revolve around software developers. We don't do ethical things only when it's convenient.

"Or are you under some illusion that native apps are not tracking you?"

This sentiment clearly also applies to native applications. Clearly, system calls weren't designed with invasive data collection in mind. This is why I support what Apple is doing in its latest updates. Give an inch, and a greedy dev will take a mile.

"It wouldn't stop my app being made, it would just cost more."

If a system is not perfect, we're not allowed to adopt it? Good, let it cost 5x as much.


I understand what you've written to mean that you're advocating for artificially crippling software.

This whole conversation thread has been about `navigator.hardwareConcurrency`. I understand what you're saying (browser apps shouldn't have access to this field, and native apps should have the same anti-fingerprinting restrictions you want for browser apps) to mean that even native applications shouldn't be allowed to know the number of threads they can run. That would cripple lots of software.

If that's not what you're saying, then I re-iterate my question to another user from down-thread: what is your proposal? How would you suggest we make it possible to maximize software performance without leaking the user's fingerprint?

I submit that, whatever proposal you make, would either not significantly impact fingerprintability or be fundamentally useless for running software. It's not just that the systems that we have are imperfect, it is that the ideal is impossible to achieve.

Browsing habits alone identify you. Ad networks and shady app developers collude on the backend, selling their databases to each other. No browser-side security policy could prevent it. If you could force a fantasy world where all software running on your local machine was written exclusively by your own self, including a browser that could only send GET requests that you initiate from the address bar, or linked images from the same domain--100% trust in your local systems, 100% trust in no transmission of internal state--you're still completely trackable.

This is why I call it privacy fetishism. You can't create a technology solution to this problem. The very act of connecting to the network requires you put a huge degree of trust into the organizations that run the stuff you're connecting to, otherwise even benevolent actors won't be able to deliver the products and fulfill the services you actively request of them. You want privacy? You will have to stop doing business with anyone doing business with Facebook.

I'm not saying "give up on privacy and share everything in the open". I'm saying these foolhardy attempts at obscuring identity online do nothing towards their goal, can do nothing towards their goal, and only succeed in making the browser a bad platform for software. All else being equal, I'd rather the browser be good at running software.


What's stopping someone from forking Firefox and making obfuscated local APIs for it? Things like core count, history length and amount of memory are all arbitrary, so there's no reason a website wouldn't work if I randomized them each time I visit a site.


"Haha! I lied to you on this form about how many fingers I have! I actually have 10! Now you can't pick me out of a crowd!"

"You're the only person here, and I can clearly see your face, Jerry. And now I need to make you a new set of gloves."


And I do not get this at all. For my desktop you would get 12; I'm on an i7-8700K with a base frequency of 3.7GHz and a permanent boost of 5.1GHz. This rig runs Oculus VR/Steam VR all the time.

If I ran that site from my Surface laptop the response would be 12 as well, but those are 1.3GHz cores, and the cooling might only sustain boosting 2 of them to 1.9GHz, or however the Intel boosting works out in this thermal constraint.

So, what is the useful info you get for your VR app from these values?


If I spawn 8 threads to do some kind of processing while you have 12 logical cores, I am missing out on a 50% performance boost (assuming the work is CPU bound).

If I spawn 20 threads when you only have 4 logical cores, they will just stand in each other's way (caches, context switches, etc.) and possibly eat 5x as much memory as needed on top of that.

I don't want to skirt the HN rules, but let me be clear that your replies appear as aggressive and incendiary while hinting at a lack of understanding for basic concurrency concepts. I don't know whether that's what it actually is or whether there's some miscommunication happening, but either way you're coming off as an ass. Be kind.


Your native VR games need to know how many CPU threads you have to efficiently allocate game processes. Some processes can run at a lower priority, and your OS scheduler figures out the rest.

I can assure you that most of your VR games are using this same data.


But how is my CPU count useful to you in that situation? My i7-8700k has 12 threads, my Surface book 2 pretends to have 16 threads, my iPad Pro pretends to have 16 cores, my iPhone pretends to have 12 cores.

What is the API I can use to map that to any sort of useful 3D performance? How are 3d game processes using this data to evaluate what they are supposed to be doing?


Never said it was for the 3D rendering. If all my app did was 3D rendering, it wouldn't be an app, it'd be a tech demo.


So, considering the range of thread counts reported, which you cannot map to any sort of performance, why do you need the thread count? Since you already answered downstream, I'll spare you the bother of doing it here...

>You don't need it. I do. blabla, it makes it easier for me blabla native apps are tracking you as well


Do you... Do you maybe not know how concurrency works?

It's not about hitting a minimum threshold of performance. It's about achieving the best possible performance for whatever system it's running on.

Maybe you think because I said VR it must mean I need to be running on massive gaming rigs. We run quite nicely on 5 year old smartphones, too. We run a pancake mode for people without VR. We run on every headset on the market, and we don't have to pay a lick of attention to what Facebook or Google or Apple thinks should and should not be in their app stores. And we can do this because of the broad range of browser APIs.


Maybe you do not know what words in the English language mean? I have a Quest and an Oculus VR and a Samsung Gear VR and a shitty cardboard daydream.

I get how concurrency works. I do not get how you get any sort of useful information from my desktop claiming 12 threads, to my phone claiming 8 threads, to my quest claiming 6 threads, to my ipad/samsung tablet claiming however many threads over webxr.

I do not see how you can do anything useful with the thread count which is what I have been disputing. CPU thread count seems especially useless since it has no relation to VR performance.


I told you what usefulness I get out of it. Any one of those cores can decode a texture in T time. Do you want me to take T*N time to decode them on the render thread, or would you rather I took T*N/C off render thread on a system with C logical cores? I don't care that T is different on different machines. I don't care that C is different. Even small values of C makes up for some very large values of T on old CPUs.


And that is totally fine. If you don't care you don't need to know.

That has been my issue all along. Client computation of course needs to know how many threads are available to distribute computation in an efficient way. Why would the host need to know this though?


Who said anything about the host?


>Why does a webpage get to know how many CPU cores I have?

The question we have been replying to? Nobody gives a shit what you can do within the client. Fingerprinting the client from the host is an issue though; you might at least appreciate that...


What is your proposal? Forget the browser for a minute. Just any way, short of airgapping, that a user could receive an app, want that app to perform maximally, but also not have information about the system being egressed?

I don't appreciate it because it's childish. You want to make this out to be a privacy issue when it's not. It adds nothing meaningful to how easily you can be fingerprinted, but removing it would detract significantly from how performant a browser-based application can be made. Without it, you're pushing developers towards having to build native apps instead, giving even MORE access to the hardware fingerprint, where they are now stuck having to pay fealty to the platform gods, while also gaining nothing in reducing fingerprinting in the browser.


I do not dispute that you can use this in your app, my issue is with the server knowing about it (as it is in the demo here) and everything the server can call back from the client. It is not about airgapping but about realizing that current browsers can exfiltrate basically anything.

Your JS app wants to compute the number of threads it can use on a client? That's great. If your JS app can calculate me switching browsers and using independent websites and their usage, that's cool. Why would you send that back to your server though?

If you do not think that current browser implementations have an issue with the amount of fingerprinting they can do, that is up to you as well. I think the amount of fingerprinting possible is at the very least troubling though. It is not about building native apps, it is about the amount of information that is sent back. Or I'm drunk at this point and should go to sleep; I think I'm done in any case. Sorry about being an ass.


How many concurrent operations I can run. I use it to download and decode textures and audio data. Without it, you're waiting 15 seconds between scene transitions, massively dropping frames all the while. With it, it's 1 second and there is never a hitch.


Reported by the browser but not always accurate. Used in browser fingerprinting btw


Yeah, I have a 12C/24T cpu, my browser appears to report 16 cores.


Sites like this are the exact reason why I browse without JS by default. A site has no business knowing how many previous websites I have visited, or what CPU I have. All these JS APIs ought to be disabled by default.


A WIRED article from 2015 talking about the benefits of browsing the web without JavaScript: https://www.wired.com/2015/11/i-turned-off-javascript-for-a-...


Firefox has started nuking some of them like battery api and has a setting for fingerprint resistance which returns a standard value for all of the useful but not essential fields like time zone.


"You visited about n sites before coming here" How the hell does that work ?


I can't find the website, but there's a site that specifically tells you everything it knows about you just by keeping the page open.

It tells you if it knows where your cursor is, what pages you've been to, what your computer is, etc.

It was intended to show that you, the user, give out a lot of data without even realizing it.

EDIT: I did a search on reddit and it looks like it actually is https://clickclickclick.click, but because the site hasn't been accessible I can't be sure.


There was a very nice attack that detected whether you were logged in to different websites, by attempting to fetch crafted URLs that would be available only if you were logged in.

Quite clever.
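
A sketch of the classic image-probe variant (the URL is purely illustrative; it has to be an endpoint that serves an image only when logged in and redirects to a non-image login page otherwise):

  const probe = new Image();
  probe.onload = () => console.log('logged in');   // image loaded: session exists
  probe.onerror = () => console.log('logged out'); // login redirect isn't an image
  probe.src = 'https://example.com/account/avatar'; // illustrative endpoint only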


but what would the issue be if some random site knew I was logged into google/amazon/facebook at the time?


- consider that the same thing applies to sites that aren’t google/amazon/facebook that you might care about

- it helps with fingerprinting

- there’s no reason to allow it


Good points, especially the first one. Thanks.


Great question. When I read about that I was working for an online lending startup. We briefly discussed using such a feature as a signal for credit-risk underwriting.

The logic was that, maybe you were logged to Telcel or to Movistar, or to Liverpool vs Sears (Mexican retailers). That could give us signal on the probability of default


This is called super logout.


Would be interested in it if anyone finds it!


    window.history.length
would be my guess.


Ugh, slightly creepy. Thanks for your answer !


Yes, this is the property that was used to measure the number of previously visited websites.

It only counts the previous sites in the current browser tab if I remember correctly.


You opened a new domain and didn't have it in an isolated container. So it could query the history length, even if it can't query the actual history.


I must be missing something.

    “Subject has clicked the button”
    “Subject has scrolled down”
    “Subject has resized the window”
    etc.
So what?


It is a demonstration of how easy it is for websites to track your behavior online.


43% of the domains using the .click top-level domain are parked:

https://icannwiki.org/.click


site is down.

downdowndown.down


The deathly HN kiss of lovelove.love


Aw, not a TLD.


Damn it HN! Y'all giving this place the hug of death and now I can't get past the "Subject clicked the button" level. It just won't load any other level for me. I guess I'll try tomorrow when it's off the front page.


If you have an ad blocker, you have to disable it for the website to work correctly.


Wow. This is literally clickbait.


I own idareyouto.click and have no use for it. If anyone wants to use it for something fun email sam at habosa dot com


I love this. I also really liked the dance: https://tonite.dance/


That's really neat - fun use of VR, and entertaining without it too. It took me a while to figure out that you could click on the dancers and see it from their viewpoint though.


Given the trick they use to prevent you from using the browser back button, I was expecting it to comment "subject is trying to escape!"


Not of course to be confused with stick: https://www.youtube.com/watch?v=K05N2jqFHc8


Love this. I sketched out a similar single-button idea for an art project back in 1998 with a programmer. I wanted human-human interaction, but this is much better.



It's crazy how many things they predicted. Love it


Seems to be failing socket requests in the background for me... likely being swamped right now. I shall have to click later.


Getting HN DDoSed. 500 Internal Server Error.


What's this supposed to do? I don't get beyond "subject clicked the button", perhaps because of the ad blockers and privacy extensions I use...

EDIT: With either Firefox or Edge, the result is always the same:

"failed: WebSocket is closed before the connection is established."


I tried to click but it never loaded


I am so interested in seeing the source code. The voiceover dialogues are so apt.


> subject disconnected from internet

Not sure why I got this message, which is not true.


Doesn't play nice with i3 though. Nor does it appear to recognize Brave.


Doesn't detect when you delete the entire DOM.

YouTube does that; if you try to manipulate the DOM it throws some error. I ended up injecting CSS instead.
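
The CSS route is a couple of lines (the selector here is illustrative):

  const style = document.createElement('style');
  style.textContent = '.annoying-overlay { display: none !important; }';
  document.head.appendChild(style);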


Password 123456789


The experience was a bit limited on mobile.



If you like that, I encourage you to try out Universal Paperclips! [1] It's actually a quite clever take on the genre. :-)

[1] https://www.decisionproblem.com/paperclips/index2.html


It's also quite short. You can try the Kittens Game for more depth [1], or if you like short, story-rich versions you can try A Dark Room [2].

[1]: https://www.bloodrizer.ru/games/kittens/ [2]: https://adarkroom.doublespeakgames.com/


Thanks! I wasted quite some time… :-D

Now there is nothing left in the universe besides:

> Paperclips: 30,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000


Thanks for the recommendation, this game has taken over my life for the last 24 hours.


This is crazy. So addictive!


The page is down :(


Site not opening


Kiss of death!


I suspect this is using GPT-3 or something


Down



