

Double-clicking on the Web - Mojah
https://ma.ttias.be/double-clicking-on-the-web/

======
jxf
HTTP already handles this just fine if you have sensitive forms: your form can
include a one-time token which your server validates. If the token has already
been used, you don't process the second request.

What we definitely shouldn't do (as the author suggests) is disable form
submissions on subsequent clicks. What if the first response fails? You'll
have to enter the entire form all over again, instead of being able to hit the
back button. Madness!

The author's proposal makes things marginally easier for developers and shifts
all of the pain to users. That's not really acceptable, at least not to me.

~~~
atomwaffel
While I agree that you should include a one-time token with all sensitive
forms, I don't think that disabling links/submit buttons on the first click is
a bad idea in general. It's an easy fix that will immediately prevent most
accidental redundant server requests, but as it's a client-side fix it's
inherently unreliable.

What's wrong though is the author's solution of disabling _all_ links and
submit buttons on the page indefinitely. Here's an alternative solution that
only disables whatever has been clicked on for half a second:

    
    
        (function() {
          function stopClickEvent (ev) {
            ev.preventDefault();
            ev.stopPropagation();
          }
          document.body.addEventListener('click', function (ev) {
            // getAttribute can return null (e.g. on plain links), so guard
            // before calling toLowerCase on it.
            var type = ev.target.getAttribute('type');
            if (ev.target.tagName === 'A' || (type && type.toLowerCase() === 'submit')) {
              setTimeout(function () {
                // Needs to happen _after_ the request goes through, hence the timeout
                ev.target.addEventListener('click', stopClickEvent);
              }, 0);
              setTimeout(function () {
                ev.target.removeEventListener('click', stopClickEvent);
              }, 500);
            }
          });
        })();
    

Unlike the author's solution, this really works for links (they don't support
the `disabled` attribute), it doesn't change the appearance of the button, it
only disables one element, and it will work for elements that have been added
to the document after the DOM was loaded. You can try it here:
[http://codepen.io/anon/pen/xGKdLX](http://codepen.io/anon/pen/xGKdLX)

~~~
danneu
I like the 500ms timeout as a quick solution.

Of course, a common source of duplicate form submissions is when the
submission request takes a long time (>1000ms) and the user gives the button
another click after a little while.

~~~
atomwaffel
I know, and you could set the timeout higher to prevent these (the button
becomes a placebo button that's still clickable but doesn't do anything), but
it gets hard to draw a line at which the request really might have failed and
the button should be re-enabled. I feel like this is a different problem that
needs to be addressed server-side, both with quicker response times and
single-use tokens.

------
captainmuon
Firefox seems to remove the second click sometimes. If I click a link, and
then, while the page is loading but the old page is still visible change my
mind and click another link, it sometimes ignores the second click. However it
doesn't seem to happen all the time.

As an aside, the distinction between double click and single click used to be
much clearer (in Windows 3.1/95 times!):

\- If it's on a white background, maybe in a well, then it is a _selectable_.
Click on it to select it, right mouse button to do something with it, and
double-click to do the default action with it (which is bold in the context
menu).

\- If it looks like a physical button (3D and raised), then you can perform an
action by clicking on it.

Links are a bit odd, as they look selectable (and in fact they are, by
clicking and dragging from outside in), but one click performs an action. But
by now, they are so ubiquitous that everybody knows that one click activates
them (well, at least one click; two clicks do no harm in most situations).

What I hate about modern flat design is that it often removes these hints
towards what kind of control something is (the "affordance" in hip UX speak).
If I click/tap on something, does it get selected, opened, or does nothing
happen because it is a label? No way to know without trying.

~~~
riquito
> Firefox seems to remove the second click sometimes. If I click a link, and
> then, while the page is loading but the old page is still visible change my
> mind and click another link, it sometimes ignores the second click. However
> it doesn't seem to happen all the time.

To get better response times (and be a good citizen and save bandwidth), if
you try to load two different pages in quick succession, the first request is
aborted. If you're too late, the content of the first request has already been
downloaded and is being processed, resulting in that page being loaded.

------
hammerdr
I don't disagree with the premise that double clicks are a broken thing on the
web. There's too much of a cognitive shift between "the web" and "everything
else" where double clicks are either a bad thing to do or a required action to
complete tasks. Like the author said, even tech-savvy people will double-click
things that they don't need to.

But, saying that idempotency is hard and thus we should build features to get
around it.. that seems wrong to me. The web is inherently a distributed system
and you're going to have concurrent requests and weird edge cases that arise
in those situations. The right way to handle that isn't to bandaid it by
disabling client-side behavior. That just hides the problem. Your server is
the place to handle it.[1]

And, it really isn't that hard either. It's just a couple extra validations on
your input before you do a write. There are places where it is more difficult
but still not overly complicated to implement.

[1] Dogmatism alert. Of course, there are exceptions to everything that's a
'best practice'. Here, I'm just talking about disabling double-click as a
general solution for the web to a concurrency problem.

------
snowwolf
Because never trust user input and race conditions.

The author says "Server-side, this is a much harder problem to solve." and he
is correct. But that doesn't mean that if the browsers did solve this problem
that you wouldn't still need to solve it server side. For the same reason you
still need to implement server side validation even though you have client
side validation (never trust user input).

Say you had an order form that allows only one order per customer (some
special offer), and on the server you check that order.count == 0 before
accepting. You might assume that because the browser prevents double clicks
on the form you are now safe. Except you aren't. It's pretty trivial to still
send that form submission multiple times simultaneously (curl, ab) and
trigger a race condition where each order.count check returns 0.
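To make that concrete, here is a sketch of the atomic check a server needs; a Map stands in for a database table with a UNIQUE constraint on the customer id (all names are illustrative):

```javascript
// One order per customer. In a real database this guarantee comes from
// a UNIQUE index; here a single-threaded Map makes the check-and-insert
// atomic by construction.
const orders = new Map();

function placeOrder(customerId, order) {
  if (orders.has(customerId)) return false; // duplicate rejected
  orders.set(customerId, order);
  return true;
}
```

Two simultaneous submissions can both pass a separate "count == 0" check, but they cannot both win a unique-constraint insert.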

~~~
dexen
Agreed, this should be solved server-side.

Indeed it is possible to implement idempotent behavior of POST forms server-
side. Even if a bit tedious to implement at first, it meshes very well with
multi-user web apps, when several people may be changing the same underlying
data simultaneously.

One possible approach is to carry in the POST both the old (original) state
and the new (user input) state, and apply it as a diff to the underlying
storage.
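A rough sketch of that diff idea (names are illustrative; a real app would compare against a database row, not a Map):

```javascript
// Compare-and-swap: apply the user's change only if the stored value
// still matches the "old" state the form was rendered with. A duplicate
// submission carries a stale old state and becomes a no-op.
const storage = new Map();

function applyDiff(key, oldValue, newValue) {
  if (storage.get(key) !== oldValue) return false; // stale or duplicate
  storage.set(key, newValue);
  return true;
}
```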

I used this approach with 100% success on WWW-based kiosks used by tradesmen,
some of whom were habitual double-clickers. Since the form was very simple --
one or two fields -- the old state was carried in the action URL. In case of
double clicks, the second submission ended up changing nothing in the backend
storage and returned the correct data.

~~~
snowwolf
The problem isn't idempotency, as that only solves the problem of multiple
sequential submits (if an order has been submitted, what happens when it is
submitted again?). The real problem is the asynchronous nature of the web,
where both submissions could arrive at exactly the same time. And this is
generally solved by using some kind of global lock (often at the database
level).

~~~
oldmanjay
coarse locking is only a possibility at fairly small scales, unless your users
like waiting. CSRF protection is the solution here as well. this entire
article could have one comment that says nothing more and it would be perfect.

~~~
snowwolf
Who said anything about coarse locking? Most databases support row level
locking (think lock user row so each user cannot submit multiple orders at the
same time).

And I think you are misunderstanding CSRF. As the name implies it is
protection against Cross Site attacks and will do nothing to help you prevent
race conditions.

------
denma
That reminds me of the way my mother is using a computer. She clicks
EVERYTHING twice. No matter if online or offline. That often causes problems.
For example if you double-click an icon in the windows taskbar to open an
application it opens twice.

~~~
KJasper
Exactly. A lot of real-world (older) users do this and I cringe a little
every time I see it. They don't even know that you can single click. This
isn't a huge issue, but it's much more common than the people here seem to
think.

~~~
smorrow
"They don't even know that you can single click."

I see a lot of younger people who are like this about some things (but not
single-clicking, obviously).

How about raking your scrollwheel through fifteen pages to get somewhere?
Analog scrolling, I call it. Because that's what it is, a skeuomorph of
scrolling through a microfilm... I would've thought the advantage of having a
computer was that you can go directly to page _n_ , to line _n_ , whatever.

In terms of efficiency, I would liken the scroll wheel to the arrow keys: it's
alright for small distances, but not more.

------
eXpl0it3r
All I can think of is that someone is using the mouse (and keyboard) quite a
bit different than I do.

I can understand that single vs double clicking can be an issue with non-
technical users, but disabling double click won't really change that issue,
because they usually don't understand that these are two different actions. Of
course one can and should try to make things easier for them, but once it
starts to affect the usability for others, it's not really worth it anymore.
These people should learn to adjust their behavior.

Now that this is out of the way, I really wonder what he meant with:

> _For techies like us, a double-click happens by accident._

What kind of techies is he talking about? I use the PC for many hours each
day and I don't just accidentally double-click stuff. Maybe it's the
environment they're working with, but I'd say I do a larger portion of
single-clicks than double-clicks per day, and if a significant percentage
were accidental double-clicks I would have more issues than just a site that
opens twice.

And last but not least I'd say a big percentage of my clicks on the web are
middle clicks anyways, i.e. open link in a new tab. Something that wasn't even
looked at, which again makes me wonder how the author is using the web.

What we can learn from this, however, is that websites should add checks to
prevent double submissions of forms, and provide proper feedback if e.g. an
AJAX submission failed or is still sending data.

------
ptx
This makes no sense. The examples he gives of "double-clicks everywhere" are
all instances of the same thing: items that are selected with single-click and
opened with double-click.

We're _not_ "trained to double-click anything", unless some particular trainer
is extremely misguided.

Double-clicking is used when there is some action available for an item _in
addition_ to the single-click action, where the double-click usually leads to
both actions taking place: we single-click to select files and double-click to
open them, single-click the title bar to activate the window and double-click
to maximize, single-click the window menu to view the menu and double-click to
close the window (which is one of the options in the menu).

Some examples of things we only single-click: buttons, scrollbars, tabs,
menus, dropdowns, text (for selection).

Single-click is everywhere. Everything that can be double-clicked can also be
single-clicked. Single-click is the default. If a single-click only leads to
an item being selected, that's a pretty good hint to try double-click if you
also wanted it to be opened.

(Incidentally, what do those who double-click everything do when they want to
select a file? Do they just open it, close it again (leaving it selected) and
accept that as normal?)

~~~
cxseven
Self-guided experience "trains" many people, and some of them infer a rule
that double clicking is for opening things. Older people especially can fail
to notice that many widgets respond to their first click and continue with the
same action on the second click, so their behavior doesn't stand much chance
of correction.

------
creshal
Are double click actions still a thing? Windows has had an option for single-
click mode since forever, and most users I know use it – mainly because
they're used to single click from the web. The remainder and most Mac/Linux
users don't even bother and navigate with the keyboard.

I can't remember the last time I had to double-click anything, except for text
selection.

~~~
johnbennyton
There's something particularly ugly about people who feign ignorance of the
way people less technical than them do things, in order to make themselves
sound more elite.

~~~
nailer
It's not being less technical: using double click simply means you did desktop
computing in the mid nineties on Windows. Anyone younger simply doesn't have
that memory.

~~~
userbinator
Or Macs... the "single click to select, double-click to apply default action"
occurs both in Windows and MacOS.

~~~
nailer
OK I'm wrong. I use a Mac every day and thought they didn't do double click.
Apparently it's a somewhat subconscious action.

~~~
DonHopkins
Wasn't that the whole point of the article? How many times did he point that
out? More than two:

>Everywhere in the Operating System, whether it's Windows or Mac OSX, the
default behaviour to navigate between directories is by double-clicking them.
We're trained to double-click anything.

>Want to open an application? Double-click the icon. Want to open an e-mail in
your mail client? Double-click the subject. Double-clicks everywhere.

>We know we should only single-click a link. We know we should only click a
form submit once. But sometimes, we double-click. Not because we do so
intentionally, but because our brains are just hardwired to double-click
everything.

>For techies like us, a double-click happens by accident. It's an automated
double-click, one we don't really think about. One we didn't mean to do.

~~~
Retra
Following a link and opening a directory are distinct enough in most people's
minds to not confuse the two. That's why hyperlinks are normally underlined,
colored, and give you a different mouse cursor.

~~~
DonHopkins
I disagree that you can make such a sweeping statement without evidence,
having worked on HyperTIES [1] [2], an early hypermedia browser and authoring
system with Ben Shneiderman, who invented and published the idea of
underlining links, and who has performed and published empirical studies
evaluating browsing strategies, single and double clicking, touch screen
tracking, and other user interaction techniques.

Hyperlinks do not necessarily have to be triggered by single clicks. In
HyperTIES, single clicking on a hyperlink (either inline text or embedded
graphical menus) would display a description of the link destination at the
bottom of the screen, and double clicking would follow the link. That gave
users an easy way to get more information on a link without losing their
context and navigating away from the page they were reading. Clicking on the
background would highlight all links on the page (which was convenient for
discovering embedded graphical links in pictures). [3] [4]

The most recent anecdotal evidence close at hand (in the sibling and
grandparent comments to yours) that it's confusing is that nailer did indeed
confuse double clicking with single clicking in his memory, not remembering
that he subconsciously double clicks on Macs all the time.

I would argue that much in the same way the Windows desktop gives users an
option to enable single-click navigation like web browsers, web browsers
should also give users an option to enable double-click link navigation like
HyperTIES, so a single click can display more information and actions related
to the link without taking you away from your current context, and a double
click navigates the link. (Of course in the real world, scripted pages and
AJAX apps probably wouldn't seamlessly support both styles of interface, but
double click navigation could be built into higher level toolkits, and
dynamically applied to normal links by a browser extension.)

In order to make a sweeping statement like "Following a link and opening a
directory are distinct enough in most people's minds to not confuse the two"
you would have to perform user testing -- you can't just make up statements
like that without any supporting evidence. Can you at least refer me to some
empirical studies that support your claim, please?

[1]
[http://www.cs.umd.edu/hcil/hyperties/](http://www.cs.umd.edu/hcil/hyperties/)

Starting in 1982, HCIL developed an early hypertext system on the IBM PC
computers. Ben Shneiderman invented the idea of having the text itself be the
link marker, a concept that came to be called embedded menus or illuminated
links. Earlier systems used typed-in codes, numbered menus or link icons.
Embedded menus were first implemented by Dan Ostroff in 1983 and then applied
and tested by Larry Koved (Koved and Shneiderman, 1986). In 1984-85 the work
was supported by a contract from the US Department of Interior in connection
with the U.S. Holocaust Memorial Museum and Education Center. Originally
called The Interactive Encyclopedia Systems (TIES), we ran into trademark
conflicts and in 1986 changed the name to HyperTIES as we moved toward
commercial licensing with Cognetics Corporation. We conducted approximately 20
empirical studies of many design variables which were reported at the
Hypertext 1987 conference and in array of journals and books. Issues such as
the use of light blue highlighting as the default color for links, the
inclusion of a history stack, easy access to a BACK button, article length,
and global string search were all studied empirically. We used Hyperties in
the widely circulated ACM-published disk Hypertext on Hypertext which
contained the full text of the 8 papers in the July 1988 Communications of the
ACM.

[...]

Today, the World Wide Web uses hypertext to link tens of millions of documents
together. The basic highlighted text link can be traced back to a key
innovation, developed in 1983, as part of TIES (The Interactive Encyclopedia
System, the research predecessor to Hyperties). The original concept was to
eliminate menus by embedding highlighted link phrases directly in the text
(Koved and Shneiderman, 1986). Earlier designs required typing codes,
selecting from menu lists, or clicking on visually distracting markers in the
text. The embedded text link idea was adopted by others and became a user
interface component of the World Wide Web (Berners-Lee, 1994).

[2]
[http://www.donhopkins.com/home/ties/LookBackAtHyperTIES.html](http://www.donhopkins.com/home/ties/LookBackAtHyperTIES.html)

Designing to facilitate browsing: A look back at the Hyperties workstation
browser

Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William
Weiland

Human-Computer Interaction Laboratory, A.V. Williams Bldg., University of
Maryland, College Park MD 20742, U.S.A.

[3]
[https://www.youtube.com/watch?v=fZi4gUjaGAM](https://www.youtube.com/watch?v=fZi4gUjaGAM)

University of Maryland Human Computer Interaction Lab HyperTIES Demo. Research
performed under the direction of Ben Shneiderman. HyperTIES hypermedia browser
developed by Ben Shneiderman, Bill Weiland, Catherine Plaisant and Don
Hopkins. Demonstrated by Don Hopkins.

[4]
[https://www.youtube.com/watch?v=hhmU2B79EDU](https://www.youtube.com/watch?v=hhmU2B79EDU)

Demo of UniPress Emacs based HyperTIES authoring tool, by Don Hopkins, at the
University of Maryland Human Computer Interaction Lab.

~~~
Retra
You can't just request informed studies without providing funding.

~~~
DonHopkins
Of course I can. I said "Can you at least refer me to some empirical studies
that support your claim, please?", and I provided links to informed studies
that I and other people published.

But I'll humor you: How much funding would you suggest that I should offer him
to look up some proof of what he said on google or wikipedia? And how much
money should I have asked him to pay me for the information I gave him for
free?

I didn't realize it was customary to pay people for supporting their
statements with evidence on Hacker News. Can you please refer me to the
section of the FAQ about that? Or do you have Hacker News confused with
Kickstarter or experiment.com?

------
danbruc
_Double-clicks everywhere._

I disagree. By far most actions are performed with a single click; double-
clicks are mostly for one very specific use case: a container control where
you want to select one of the children and perform the default action on it.

------
sebastianconcpt
Interesting problem and valid questioning, but flawed reasoning.

"because our brains are just hardwired to double-click everything." That
might be true for people who formed muscle memory in the golden days of the
mouse and heavy OS use (as opposed to heavy internet use).

An acceptable solution has to anticipate the muscle memory we are generating
in legions of mobile users: tapping.

If we apply the author's central argument to today's mobile influence, won't
those users instead have their brains wired to double-tap?

------
Animats
Browsers do not provide immediate visual feedback on some things for which
they should. When you click on a link which exits the page, something visual
should happen immediately, long before the new page loads. The browser knows
you're leaving. Dimming the page, or some other visual transition, would be a
good start. That would make a clear distinction between page-exiting actions
and ones which keep the page active.

~~~
ptx
There used to be much better feedback with the "throbber", i.e. the big
animated Mosaic/Netscape/Mozilla logo. But that seems to have gone away with
the decrease in size of the toolbar buttons.

------
ptx
Here's an example of what happens when you try to treat single- and double-
click as the same thing (which may or may not have been the intent in this
particular case).

In Android 5.0, swiping from the top brings up the notification menu and a
two-finger swipe brings up the settings shortcuts. From the shortcuts screen,
the back button brings you back to the notification menu. Another press of the
back button brings you all the way back to where you were.

So to quickly go back after bringing up the shortcuts screen, you would
naturally press the back button twice in quick succession. But that's a lot
like a double-click, and if we want to treat those as single-clicks ... it has
to ignore one press and just bring you back to the notifications screen
instead. Which is exactly what happens.

So to actually go back twice, which is a very common and natural thing in this
case, you have to go back ... then wait ... wait ... and then finally go back
again.

> If the same form submit has been registered by the browser in less than 2
> seconds, surely that must have been a mistake and would count as an
> accidental double-click?

Or maybe they're using POST requests to control some process which they get
feedback from through another channel. They could be rotating pieces in a
Tetris game – should I have to wait for 2 seconds between my two identical
commands for rotating a piece 180 degrees to the right?

------
zaroth
Backward compatibility? Some click targets _are_ meant to be clicked more than
once, and can you guarantee that none of those targets may inadvertently be
swept up in a change like this?

I'd flip the proposal around, and go for:

    
    
      <form action="/something" onlyOnce>
      ...
      </form>
    

The problem then is the hung responses, which still happen frequently enough;
the typical hack to unfreeze a request is to mash the link/button a second
time. Now, would the user have to hunt for the "Cancel" button (an
increasingly diminishing target) before being able to click Submit again?

Since you can't guarantee a client will have this behavior, you have to plan
for it anyway on the server. I think that's why features like this tend not to
be deployed. Features that might help most of the time tend to be beaten to
death by the people who keep insisting that they don't completely and entirely
solve a problem, so somehow that means it's better not to have them at all,
i.e. false hope.

~~~
hamhamed
never a good idea to camelCase html attributes :) In case you didn't know: if
your attribute is written as onlyOnce, this won't work:
$('form').attr('onlyOnce'), but this will: $('form').attr('onlyonce')

~~~
boomlinde
Mixed case in attribute names is perfectly valid HTML, and tag and attribute
names are case insensitive. If jQuery attribute selection isn't, then maybe it
should be considered a bug in jQuery.

That's not to say that it isn't a bad idea, but I don't think the inconsistent
behavior in jQuery should be considered in arriving at that conclusion.

~~~
hamhamed
no it's not valid html.. that's like saying writing gibberish is valid HTML.
Neither jQuery nor JavaScript can read mixed-case attribute names. They're
always converted to lowercase without you knowing.

~~~
boomlinde
Yes, it's valid HTML. It's not at all like saying that writing gibberish is
valid HTML.

Why would you say these things without looking it up in the w3 reference
([http://www.w3.org/TR/html-markup/documents.html#case-
insensi...](http://www.w3.org/TR/html-markup/documents.html#case-
insensitivity)) or just _trying it out_
([http://5ccf7f9c97075d1b.paste.se](http://5ccf7f9c97075d1b.paste.se))

------
nodesocket
Is it me, or why is a setTimeout needed in this example?

    
    
        $(document).ready(function () {
          $("form").submit(function () {
            setTimeout(function () {
              $('input').attr('disabled', 'disabled');
              $('a').attr('disabled', 'disabled');
            }, 50);
          });
        });
    

~~~
nailer
50 should be a constant called NEXT_TICK with a value of 0. This makes it run
on the next turn of the event loop, i.e. the default submit event fires
first, and on the next tick afterwards the form is disabled.

~~~
xrstf
You are looking for setImmediate[1].

EDIT: Oh, never knew that this method was a Node-ism only.

[1] [https://developer.mozilla.org/en-
US/docs/Web/API/Window/setI...](https://developer.mozilla.org/en-
US/docs/Web/API/Window/setImmediate)

~~~
nailer
Actually it's common enough that

    
    
        var setNextTick = function (nextTickFunction) { window.setTimeout(nextTickFunction, 0); };
    

...sounds like a damn good idea.

Hate the name 'setImmediate' though, immediate implies the current tick and
people would wonder why you're using it.

------
dutchbrit
Only one click via dock, double click is only for when you want to browse with
your keyboard so you can click once to select starting position, or for
selecting if needed. But you don't really need that in a browser, for the odd
occasion you have a file browser, it's currently easy enough to implement if
wanted.

------
buro9
I allow multiple form submits on my sites. What I don't do is allow the same
form to be submitted twice if the checksum of values it is submitting is the
same as a prior submit.

This allows for the best balance between preventing double-submission errors,
and a good user experience if a user has made an error.

I still de-dupe check on the server too (in case JS is disabled), and in a
distributed environment the server submission de-dupe check isn't guaranteed
(what if two submissions went to different servers?).

Perhaps if anything should be added to browsers, it's simply a checksum of
all of the values to be submitted and an option to prevent submission if a
prior identical checksum has been submitted. But even then... this is trivial
to do in JS.
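For what it's worth, the client-side half of that could be sketched like this (the hash and function names are illustrative, not a real API):

```javascript
// Remember the checksum of the last submitted values and refuse an
// identical re-submit. A cheap string hash is enough for de-duping.
let lastChecksum = null;

function checksum(values) {
  const s = JSON.stringify(values);
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h;
}

function shouldSubmit(values) {
  const c = checksum(values);
  if (c === lastChecksum) return false; // same values as last time
  lastChecksum = c;
  return true;
}
```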

The above only applies to POST; for GET I don't care... it's cached, and
perhaps the second request is because the first one stalled (bad mobile
network, etc).

------
pornel
Please don't disable links permanently! (as in the included jQuery snippet)

If a submission gets stuck and doesn't go through (because mobile networks,
public wifi, and "cloud" servers have more failure modes than we'd like…),
the user is stuck with permanently-disabled buttons and can't retry
submitting.

------
mungoman2
As an interesting data point, about a month ago Google Docs introduced
double-clicking as a gesture.

~~~
lmm
I hate that. I have single click everywhere, all around in my usual OS... but
not in Google Docs because there's no way to configure it.

------
voyou
I don't see the problem. Contrary to the post, double clicking a link doesn't
open it twice, and double clicking a submit button doesn't submit the form
twice; at least, not on Chrome or Firefox on Linux.

~~~
protonfish
That's true so I assume that the author is talking about ajax requests. I have
a rule to always disable buttons that trigger ajax requests until they finish.
Not only does this reduce double-submissions, it makes it clear to the user
that their click was successful and to wait for the action to complete.
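That rule can be wrapped in a small helper; this is a sketch (not a known library API) that also re-enables the button on failure so the user can retry:

```javascript
// Disable a button while an async action is in flight, ignoring extra
// clicks, and re-enable it when the action settles (success or failure).
function guardWhilePending(button, action) {
  return function (...args) {
    if (button.disabled) return Promise.resolve(); // ignore extra clicks
    button.disabled = true;
    return Promise.resolve(action(...args))
      .finally(() => { button.disabled = false; });
  };
}
```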

------
lwh
Don't go breaking the word selector.

------
return0
I don't think this is a problem at all.

------
raymondgh
I love double clicking in the new google drive file browser!

------
sjwright
Browser makers like Mozilla think it's appropriate that holding down F5 should
repeatedly pummel the server in accord with keyboard repeat rates. It's like
the browser makers are being intentionally hostile towards web servers.

E.g.
[https://bugzilla.mozilla.org/show_bug.cgi?id=224026](https://bugzilla.mozilla.org/show_bug.cgi?id=224026)
[https://bugzilla.mozilla.org/show_bug.cgi?id=873045](https://bugzilla.mozilla.org/show_bug.cgi?id=873045)

There's no excuse, but they don't fix it.

~~~
quotemstr
> There's no excuse, but they don't fix it.

Browsers exist for my convenience, not yours.

~~~
sjwright
Is a denial-of-service tool so convenient for you?

~~~
babby
That's a poor argument. Literally no one gets DoS'ed by a few guys F5'ing in
coordination. If your server is so poorly set up as to allow any small number
of IPs to impact it in any way, then you are doing it wrong.

When I was getting into nodejs a few years back I wrote a DoS script to kill
a site that was scraping content from one of my sites and passing it off as
their own. I made it just for shits n giggles in about 5 minutes and I was
surprised when it actually worked; their website just went down.

DoS is and always will be easy.

~~~
sjwright
That is a poor deflection of the underlying point. It's absurd to conclude
that an obviously undesirable behavior -- however unlikely to pose a problem
in reality -- should not even be considered let alone addressed.

It could be as simple as a modest global rate limit on repeated GET requests
to the same URL. We could start with 250 msec and see how that goes.

Or it could be as simple as limiting F5 reloads to once per keydown. Let users
work for their accidental DoS attacks. :-)

~~~
oldmanjay
what's absurd is protecting the server from a vanishingly rare accident by
changing the client. if you feel you need to be protected, put that protection
where it belongs, on the server, where it works against more likely things as
well.

~~~
sjwright
It protects the end user as much as it does the server.

------
codecamper
double clicking must die. my parents still continue to double click on almost
everything they see, all because their first computer experiences were on my
Mac SE. (Mac OS 5 or 6 or 3?)

plus... double click means double RSI, no?

------
XCSme
Why is this so high on HN? While the premise of the post sounds interesting,
the whole story says nothing new and provides no real solution.

------
smorrow
I was using that Google 3D plot viewer thing the other day and assumed right-
click-dragging would do something for me. It didn't, and it occurred to me
that the reason we have this is that Microsoft, Apple, or some other
microcomputer vendor once decided people were too dumb to know they have more
than one finger.

And now on the subject of double-clicking, it occurs to me that if the one-
finger thing hadn't been the way, everything that was ever a double-click
action could've just been a single click on the next button over. Which would
be simpler.

~~~
quesera
I think you'd be less dismissive if you were better informed about your topic.

~~~
smorrow
If we're being objective, I think we can agree that having multiple buttons is
more basic and more self-evident knowledge than being able to click a button
in multiple ways.

