Wacom tablets track every app you open (robertheaton.com)
1594 points by krithix on Feb 5, 2020 | hide | past | favorite | 449 comments



On Linux you'd be using the community-maintained drivers[0], so I assume this wouldn't be a problem (correct me if I'm wrong).

But as far as I can tell, there are no equivalent open Wacom drivers for Windows. People with more Windows knowledge than me: any thoughts on why? Is it just that someone using Windows probably doesn't care about open drivers, so the demand isn't there? Or is there something about Windows that makes substituting drivers harder?

Wacom doesn't provide their own Linux drivers, but looking at the state of drivers for GPUs and printers, I suspect somebody in the Linux world would be working on open alternatives even if they did. I'm trying to think off the top of my head what Windows-compatible hardware has third-party driver options. Maybe some printers?

[0]: https://linuxwacom.github.io/


Here is an open source driver that supports Wacom and Huion tablets: https://github.com/hawku/TabletDriver

It was made to reduce input latency and improve performance in a rhythm game called "osu!"


Now, if only someone could make Wacom drivers that make the Wacom touch functionality interface better with Photoshop and friends, rather than just handing keyboard shortcuts to them. Somehow, that's the best Wacom has managed to do, which makes those features effectively unusable in the very apps those tablets are most commonly used with...


Wow, thanks. Mentioning osu! will probably cost me a few hours after work.

Never thought of using a tablet for it, but I am so going to try that!


Thank you.


Open source drivers are rare on Windows because manufacturers almost always ship proprietary drivers that are good enough, and Windows users clearly have no issues running closed-source software.

Proprietary drivers on Linux are often crap, if they even exist at all.


Linux purposely makes proprietary drivers crap. The kernel offers no stable binary interface, so drivers break every single time Linux updates unless they are compiled as part of the kernel. This forces manufacturers to choose between open-sourcing their drivers, not providing any at all, or putting in the work to keep them up to date with every release.

It seems like forcing the all-or-nothing choice made a lot of OEMs open source their drivers or provide none, which led to the community making them.


>Linux purposely makes proprietary drivers crap.

No it doesn't; the devs decline to provide stable in-kernel APIs because they want the flexibility to modify them as they please when a better solution comes along. Maintaining support for proprietary drivers is also harder because they are black boxes, not only in terms of debugging but also in security and stability.

NVidia is basically the one major holdout these days, and its proprietary driver for Linux is very good, so it's not as if it's impossible to maintain a proprietary driver in the Linux ecosystem. The motivation here comes from Linux being huge in accelerated computing and 3d, not due to any particular love for Linux on Nvidia's part.

Indeed, the lack of a stable interface has made it cumbersome to maintain an out-of-tree driver, which is GREAT, since it means hardware vendors are more likely to open source their drivers or at least provide enough documentation for a third party to create them. This is a huge part of Linux's success: it supports the widest range of hardware of any system 'out of the box', and that hardware support is then functional on any platform Linux runs on, which in turn is practically everything under the sun.

And if this wasn't enough, it is also a boon for alternative systems which will never see official proprietary drivers due to being niche, as they can port Linux drivers, or even add Linux driver compatibility layers.


> Linux purposely makes proprietary drivers crap.

As the situation on windows shows us, the alternative is drivers that are crap for other reasons. If Linux offered a stable binary interface for drivers, we'd have proprietary drivers that "worked" but were nevertheless still crap insofar as they were essentially malware, as is the case with this wacom driver.


And yet Google is pushing hard to keep the ABI stable in the name of extending Android support, because manufacturers just won't release their SoC drivers.


Proprietary surveillance company is okay with manufacturers creating proprietary drivers to be used for surveillance. Shocking.


Rather cheeky to urge the Linux developers to change their ways so that Google can continue to profit off GPL violations.


Thanks for mentioning the elephant in the room.

Is there a reason the Linux folks don't do something about this? If I were them I wouldn't be happy seeing my licensing terms treated like a joke.


Probably because desktop users expect their hardware to continue to work for a long time and to keep up with OS updates. For mobile people have been conditioned to accept throwing away and buying a new device every 2 years.


If this was true, the BSDs would have terrible drivers.

What happens instead is Linux drivers are mostly BSD ports. Go figure.


ISTR a discussion some years back over the lagging of drivers on the BSDs relative to Linux.

Not finding that now, though there's a 2014 discussion of a driver wrapper for FreeBSD to access Linux device drivers:

https://www.phoronix.com/scan.php?page=news_item&px=MTgzMjY

Linux of course also makes use of some driver wrappers, the most well-known of which is probably NDISwrapper, supporting wireless networking cards:

http://ndiswrapper.sourceforge.net/wiki/index.php/Main_Page


A standardized ABI does not inevitably lead to malware closed drivers. That's a ridiculous assertion.

Device manufacturers don't care about Linux anyway, and wouldn't suddenly start caring if Linux announced a stable ABI.


> Device manufacturers don't care about Linux anyway, and wouldn't suddenly start caring if Linux announced a stable ABI.

From what I've seen, facilitating proprietary drivers seems like the motivation of most people lamenting the lack of a stable ABI. An example of this being the comment I responded to; "Linux purposely makes proprietary drivers crap. [...]"

Discounting proprietary drivers under the assumption that they wouldn't be written anyway, what does a stable ABI afford us? Out-of-tree FOSS drivers? In other words, drivers that aren't good enough to be accepted into the kernel?


Out-of-tree FOSS drivers are still not much affected, since most of the time you can just recompile them to work with the latest kernel.

Also, when I said Linux made proprietary drivers crap, I meant that as a good thing. It led to open source drivers where there otherwise would not have been any. Some OEMs like AMD eventually went open source on Linux while remaining proprietary on Windows.


There is also the little matter of Windows (since 7 I think?) requiring kernel drivers to be code signed, unless you want to run your system with a permanent "Development mode" text overlay, not to mention the arcane procedure required to activate that in the first place. (You can't add another cert to the trusted set, either.)

So that puts a little damper on the whole "open source" thing. Of course it is also not effective at all; Stuxnet was famously signed with a Realtek certificate.


The process to get a driver signed doesn't seem too hard for an open source project to do. The biggest hurdle is the certificate costing around $300/year as far as I can tell, so it would need to be a project with a reliable stream of donations or authors willing to pay it.

https://docs.microsoft.com/en-us/windows-hardware/drivers/in...


> The process to get a driver signed doesn't seem too hard for an open source project to do.

These hurdles are a bigger impediment than they appear.


Not too long ago I had to do some INF editing to get a driver installed on Win10. The editing did invalidate the signature, so it (silently!) refused to install, but booting with the "disable driver signature enforcement" option made it install, and it continued to load and work normally even after I booted back into normal mode. This was only a few months ago, so unless something drastic has changed since then, maybe it's not that hard to install drivers with bad (or missing, but that's really the same if you just have an arbitrary signature) signatures. I thought I'd be out of luck and have to resort to something deeper and less reliable like kernel patching (tools exist to do that, but they get marked as malware, and you have to redo it after every update...), but this was an unexpected positive surprise.


Editing the INF de-authenticates the installation of the driver, which can also be bypassed by adding to the Trusted Publisher root store, which is mutable (as Zadig/libwdi does), but the actual kernel-mode .sys binary still needs to be signed by Authenticode unless the system is in driver Developer Mode. Your method worked for installing a modified INF file, but will not work for installing a modified binary.


Sounds like my Linux experience in the late 1990s: lots of weird invocations without understanding what they do, just to keep the system barely functional. The roles sure have reversed...


As for manually forcing a particular signed binary for a specific device, the “Have disk...” or “manually select from” route still works without that developer mode nonsense.


That’s trivially easy to get around by installing your own CA cert when you install the driver.

This is arguably worse security wise but it makes the driver install process identical to the way it used to be as far as the average consumer can tell. This is why (IMO) free software is so important, to the point where I’ve begun to agree with the radicals and think it should be mandatory.


No, that's exactly not how it works. You cannot change the set of root certificates.


How does Zadig work, then?


There are two separate authentication processes for drivers on Windows: Authenticode, which is used for the kernel-mode driver (.sys) and is strictly enforced, and driver package signing (.cat/.inf installation packages), which has a mutable root storage called Trusted Publisher system store. Zadig works by adding its own certificate root to the Trusted Publisher system store and self-signing the installation packages, but the three possible installed drivers (WinUSB, libusb0, and libusbK) were all still signed by Authenticode.
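If you're curious what that looks like in practice, the Trusted Publisher store can be modified from an elevated Windows prompt with certutil; a rough sketch (the certificate filename is hypothetical):

```shell
# Add a (self-signed) certificate to the Trusted Publisher store,
# which is roughly what Zadig/libwdi automates for its driver packages.
certutil -addstore TrustedPublisher driver.cer

# List the store to confirm the certificate was added.
certutil -store TrustedPublisher

# Note: this only covers driver *package* signing (.cat/.inf); the
# kernel-mode .sys binary still needs a valid Authenticode signature.
```

These are Windows commands shown for illustration only; they require administrator rights and do not bypass the kernel-mode signing requirement.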


> Zadig works by adding its own certificate root to the Trusted Publisher system store and self-signing the installation package

Is this different than the local CA cert list? Sorry I don’t use Windows.


Yes, it is its own list for software publisher signing specifically, and is separate from the Trusted Root Certification Authorities certificate store.


Windows users should push for FOSS drivers as well, even when the proprietary ones run perfectly. Privacy and security issues aside, being forced to depend on a closed driver means that the manufacturer can make the product obsolete just by dropping support on newer Windows versions, turning the product essentially into trash and forcing the user to buy a new one.


The vast majority of Windows users don't really care about that though.

In the 90s, software modems ("winmodems", see [0]) were popular because they were cheaper than using dedicated hardware for generating and decoding the audio signals sent over the phone line. Those would break if the manufacturer didn't upgrade their driver for newer versions of Windows, since they're completely software driven.

I'd be very surprised if things have changed since then, and I bet that the majority of consumers would just pick the cheapest option at the big box store.

[0] https://en.wikipedia.org/wiki/Softmodem


I remember getting PCI based Winmodems working on my old machines back in the late 90s/early 2000s. A lot of people bought ISA modems or physical COM port models so they wouldn't have to deal with that bullshit.

Now most Linux distributions are littered with binary blobs in linux-firmware that have to be loaded for everything from Wi-Fi to Bluetooth. We've gone the total opposite direction of where we should be .. except for like .. amdgpu.


Winmodems were not popular with anybody except manufacturers, and exponentially increased the support issues with modems as an entire technology. Read: people had more problems with Winmodems than they did before.


Not true in all locales. In 2001 I bought a computer locally (Haifa, Israel) with a winmodem. I installed Red Hat on it, so obviously the winmodem would not work. I could not find a hardware modem locally. The only solution that I had was to go with an ADSL line (250k up, 750k down) which cost a fortune but whose modem would plug into the network card.

It would be a full two years before I would see any other home users on anything other than dialup. 750k down in 2001 was so impressive, you could start listening to songs on Kazaa as soon as the download started!


My bf was given a Wacom tablet for free because the Wacom Windows drivers no longer support it, but it works perfectly fine on Linux.


The Windows approach to driver certification makes this really difficult. Microsoft virtually requires every driver maintainer to pay $100-odd every couple of years for a signing certificate (ironically enough, in the name of improving driver quality). For a corporation that's nothing, but it's a pretty steep ask for the hobbyist maintainers that OSS drivers tend to rely on.


That’s true, but... that culture isn’t there for Windows users the way it is for Linux users.


>Proprietary drivers on Linux are often crap, if they even exist at all.

Not so with Nvidia GPUs. The open drivers are awful; the proprietary drivers are good.

(But it IS the case with AMD GPUs, to the point where the proprietary driver seems to perform worse[0] and everyone pretends it doesn't even exist, which is upside-down and unintuitive coming from A) Windows and B) Nvidia.)

0: https://www.phoronix.com/scan.php?page=article&item=nvidia-a...


I have to use their proprietary drivers and I beg to differ. Nvidia drivers are still crap, given all the pain you have to endure to get them running. Yes, Nvidia has managed to sneak into the Linux world through CUDA, but as far as ease of use goes, they are still nothing short of crap. Not to mention if you want to use anything other than Ubuntu.


The proprietary drivers are not "good". They don't support Wayland which has been the default on many distros for years. They also don't support prerelease or custom kernels. I had to build a custom kernel to include some patches for new hardware I just got and found out it was impossible for me to use the nvidia drivers on it. I ended up getting an AMD card because of that.


As a developer however, nvidia's closed source drivers are buggy as hell. The amount of issues and times they break the spec is astounding and a constant annoyance. AMD and Intel via the open Mesa drivers are blissful in comparison, plus the amazing debuggability.


I would argue that Nouveau is bad purely because of poor performance, and the proprietary drivers are tolerable but perform well once you have them working.

One thing I can say for Nouveau over the proprietary drivers is that they actually work without any real fuss. I've run into numerous instances where the proprietary drivers would prevent the system from booting. And I've yet to get them to work at all with any realtime kernel in Manjaro.

And then we get into the nightmare that is any laptop with an integrated Intel GPU and a dedicated Nvidia GPU...


>I would argue that Nouveau is bad purely because of poor performance

That and, if you have a G-SYNC monitor (which, in retrospect, you shouldn't, but I and several friends of mine do), it won't work at all with the Nouveau driver. :D


> Not so with Nvidia GPUs. The open drivers are awful; the proprietary drivers are good.

Good by comparison to nouveau (the open source driver) perhaps, but definitely not good compared to the open source intel/AMD drivers.


Not my experience. They work reliably and perform well. AMD drivers don't.


Ever since the Windows multi-platform tablet push, Windows has had built-in drivers for touch & pen support. For the purpose of drawing, these drivers are actively antagonistic, as they have weird touch macros that make fine detail virtually impossible. And since they are built into Windows, these drivers are also a pain to override with third-party drivers.

Add that to the complications that already arise from interfacing a 3D (touch sensitivity) precision input device with a computer, and you end up with poor official driver support and even worse community driver support.


It's not that hard to turn off anymore. But if you don't know about the setting, you'll be left wondering how anyone can work with a graphics tablet.


What is it called, and how do you turn it off? I haven't run into this issue on Surface devices, but I do have a couple of Wacom tablets too (mainly used on Linux on another machine).


Clearly this isn't a trivial or very fun job (last time I used them, the Linux drivers were buggy as hell). Who would have the motivation to do it on Windows, where you have a driver that works and users have little expectation of privacy to begin with?


The Linux drivers for Wacom tablets have never been buggy in the decade and a half I’ve used them. Other parts of the stack (Xinput, libinput, GUI libraries) have been, but the actual driver provides good data to userspace.

Even then, to me the drivers on Linux have been perpetually less buggy. On Windows I found myself needing to restart the usermode service and restart applications frequently, especially if the USB connection was unreliable. The Linux driver did not have similar issues.

For example, you’ll never have to follow this guide on Linux: https://www.deviantart.com/kiiroikat/art/How-to-Fix-Wacom-Dr...

I don’t recall having issues with the Mac drivers either.


Ok the bugs might have been isolated to the configuration GUI and not the driver strictly speaking, but the point still stands.


LinuxWacom doesn’t provide a configuration GUI that I am aware of, though. The xf86 input driver has knobs you can tweak but as far as I am aware the only official way to do it is CLI.

If you are talking about GNOME’s Wacom settings, then I can understand the confusion: under Windows this would be part of the driver package, but under Linux this bit just happens to be completely unrelated and maintained by GNOME. I realize this does not matter much to the end user but it kind of matters in the context of this discussion; the bugs aren’t inherent, they are probably mostly a result of how the software ecosystem works on Linux...


Just because it would be better doesn't mean it will be made. So long as something is "good enough", it will strangle new projects in the crib.


1. Did you reply to the wrong comment? I was only remarking on the Linux driver not being buggy.

2. If you’re referring to a Windows open source Wacom driver, one already exists, as mentioned elsewhere in the thread, though it has a pretty specific purpose in mind. https://github.com/hawku/TabletDriver


The comment you are replying to is talking about the motivation to make a driver.

I assumed your comment was also about that, I didn't expect that you were ignoring the context of the conversation and just commenting about whether the current driver was buggy or not. Sorry.


>users have little expectation of privacy to begin with?

The cynical side of me wonders how long it will be before prosecutors argue, with a straight face, that using evidence obtained from mass surveillance against people using Microsoft Windows is okay because Microsoft collects a massive amount of data so nobody should expect their files and activities to remain private; that there is or should be no expectation of privacy on such a system.

And then how long until warrant applications come in with supporting evidence that the subject of the warrant uses Linux and therefore their increased desire for privacy is prima facie evidence that they're doing something illegal.


The first part is long since true. Courts have already (wrongly) established that you have no expectation of privacy in anything you entrust to a third party agent like a bank or a cloud service. If you give the provider a key to your stuff, they can give it to law enforcement.

The latter is a tougher desecration of the constitution to sell.


Constitution schmonstitution. Just say the magic words -- "national security" -- and that problem goes away.

The NSA is already selecting people with "Linux" and "Tor" in their search histories for added scrutiny.


>The NSA is already selecting people with "Linux" [...] in their search histories for added scrutiny.

Source for this? If it's true it would be both sad and hilarious.



That site hasn't existed for some time...


This particular issue wouldn't be a problem. You'd have new problems of the tablet not working consistently or easily configurably across apps.

It is certainly on Wacom for not providing better drivers to Linux, but neither is the FOSS solution a complete one.


On 32-bit Windows, drivers merely request certification; on 64-bit Windows, they require it.


Yes, it doesn't apply to Linux. Also, on Linux the libinput and synaptics drivers support Wacom tablets, so you don't necessarily even need to install linuxwacom (in my config I do "xinput set-prop ..." in both cases to set up the pressure curve).
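For anyone wanting to try it, this is roughly what that looks like (the device name below is illustrative; the "Wacom Pressurecurve" property takes four Bezier control-point values from 0 to 100, per the xf86-input-wacom driver):

```shell
# Find your tablet's device name first.
xinput list

# Soften the pressure curve for the stylus (your device name will differ).
xinput set-prop "Wacom Intuos S Pen stylus" "Wacom Pressurecurve" 0 30 70 100
```

These commands require a running X session with the tablet attached, so treat them as a configuration sketch rather than something to copy verbatim.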


> On Linux...this wouldn't be a problem

This, perhaps not, but Linux distros track app usage, too: https://popcon.debian.org/


... if you install and configure popularity-contest, which includes an explicit opt-in process [0] and it still doesn't track usage, merely installation.

[0] http://www.linuxandubuntu.com/wp-content/uploads/2019/07/con...


It's a tiny bit more than that: popcon weekly reports which packages have been used that week based on their atime. atime, ctime and filename are reported (the times are truncated to a multiple of twelve hours).

See https://popcon.debian.org/FAQ (thanks to toastal for the link).
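As a rough sketch of the truncation described above (illustrative only, not popcon's actual code):

```python
import os

HALF_DAY = 12 * 60 * 60  # popcon truncates reported times to 12-hour multiples


def popcon_entry(path):
    """Roughly what popcon reports per file: truncated atime,
    truncated ctime, and the filename (illustrative only)."""
    st = os.stat(path)

    def truncate(t):
        # Round down to the nearest multiple of twelve hours.
        return int(t) // HALF_DAY * HALF_DAY

    return truncate(st.st_atime), truncate(st.st_ctime), os.path.basename(path)
```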


Which is still a far cry from "opts you in by default (!!!), and collects every time (!) you open the program."


Oh yes I agree completely. When I said "a tiny bit more", I meant it literally: it's a bit more, but only a very tiny bit.


Wait people still leave atime enabled?


I suppose many people use relatime, which is good enough for popcon I think.

Edit: Debian uses relatime by default. I don't know about other distributions.


One of the first things Debian does is ask consent about this, and the FAQs are clearly published: https://popcon.debian.org/FAQ. You can't say the same about most things.


And if you install by the usual method of "just keep pressing enter" you opt out of popcon. You have to make an effort to enable it.


The sad bit is that (some reasonable) telemetry data is really, really useful for software engineering. If you have a large enough program, it will of course have way more bugs than you could ever fix. Crash tracking and usage analytics are how you make a data-driven decision on what to fix and what to ignore, which is a huge improvement for software quality.

Having worked on projects that did and did not have telemetry, working without it feels absurd: it seems like you're just randomly fixing the side mirror on a car without any idea what's actually broken on it (independently of your overall testing posture).

Vendors tracking excessive information without proper disclosure destroy this information source for those developers that try to collect reasonable information (with consent, disclosure, in context, etc).


Really great article. But I wonder why the author did not cover if/what the driver publishes. If I do:

- open "Wacom Desktop Center"

- Top right (next to "Login") is "More" (click!)

- "Data Privacy Settings" (click!)

- "Participate In Wacom Experience Program" => on => off!

My setting was "On" - and I swear: whenever a program/website/installer asks, I go "No thanks". So it must be dark UI patterns with evil defaults that this super-hidden thing was "on" for me. Shame on you, Wacom!


When you install it, the first window they show you is a Terms of Service agreement with "Agree" and "Disagree" buttons - except it's not a Terms of Service agreement. It's an agreement to turn the Experience Program on.

So you have to click "Disagree" and continue the install to have it off.


In my experience they turn it back on each driver update, I also believe they mentioned it somewhere during the setup but I’m not sure.


Or you could vote with your wallet and buy a Huion instead. They are just as good, if not better, and about half the price. It's all made in China anyway.


An iPad Pro with the Apple Pencil is actually a great alternative to traditional drawing tablets, even if you use Windows.


Yeah, but a Wacom starts at $59 and an iPad Pro starts at $799, and comes without a pen.


I shouldn’t have said “Pro” as the recent non-Pro iPads under $400 have Pencil support too.


I mean sure that's better, but now Apple is tracking you instead and you got ripped off $340 for it


Apple’s tracking opt-outs don’t silently reset themselves to “share everything” on each update the way Wacom does.


Windows art apps often need hover support since they assume a Wacom, and the general Windows interface definitely does. I guess you can simulate hover by holding a hotkey or something while touching, though.


They make a top-of-the-line art pad. And you're in the Apple ecosystem, so it's not super shady. Or at least you know what you're getting.


Not for professional drawing purposes


There are many professionals using it. See YouTube.


But do they spy on you? Not knowing something does not mean it does not exist.


Dayum. Almost every designer in my team at https://draftss.com has a Wacom tablet.

I guess I'll have to send a company-wide emailer along with the above instructions. Thank you very much for your writeup.


FWIW that isn’t a unique identifier for the author, it’s for Wacom’s GA account. I didn’t see any meaningful identifier being sent. Of course the set of most opened apps and your IP are probably enough to identify you.

That said, yep, it seems lame they don’t disclose this tracking. I can understand why they’d want to know what apps their device pairs most often with, but tracking all app opens seems aggressive, but maybe it’s the only way to identify what app is open when the device is used.

(I work for an analytics company)


Tracking the currently open application software side is perfectly within scope for a drawing tablet - they often have buttons that can be bound to keyboard shortcuts, etc. It makes sense that it should know when you're focused on Photoshop vs Google Chrome.

But why are they sending this data to a server? My best guess is that this helps them focus on what software people are using. This allows them access to the popularity of graphic applications. They get to see what percentage of users use say Photoshop vs [Other program here] - so they know where to prioritize integrations and testing.

But I'm not sure how much "integrations" or work with third parties Wacom does - the drawing tablets are following an api standard after all. But maybe wacom does work directly with application devs, I don't really know.

I doubt they're doing this to try to track individual users - even if there are ways to do it. That said I really wish they approached this with a more friendly "Would you like to enable some basic Telemetry to improve Wacom products - Yes, No" instead of a very unfriendly user agreement where they try to force it.
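For a sense of what such a hit looks like on the wire, here is a hypothetical reconstruction of a Google Analytics Measurement Protocol "event", the mechanism the article describes; the category/action values are illustrative assumptions, not what Wacom actually sends:

```python
from urllib.parse import urlencode


def app_open_hit(tracking_id, client_id, app_name):
    """Build a GA Measurement Protocol 'event' payload for an app-open.

    Field choices here are assumptions for illustration; the article
    only shows that app names end up in events of this shape.
    """
    params = {
        "v": "1",             # Measurement Protocol version
        "tid": tracking_id,   # the vendor's GA property ID, not a per-user ID
        "cid": client_id,     # pseudonymous client identifier
        "t": "event",
        "ec": "Application",  # event category (assumed)
        "ea": "Open",         # event action (assumed)
        "el": app_name,       # the contentious part: which app was opened
    }
    return urlencode(params)
```

Nothing here identifies a person directly, but as noted above, the set of opened apps plus an IP address can go a long way.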


IMO the more simple explanation is they want this data to sell to data aggregators, who can in turn enrich the profile they have on you. There's a similar thing going on with smart TVs, right?


I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization. People hate web tracking, but knowing what browsers or devices are visiting your website can be enormously helpful.

Also, I don't know all the details here - I know that Vizio TVs were collecting data and explicitly kept the IP and other personal data with it. I don't know if Wacom is doing that.

Now that said - I don't like that they're handing this data to Google through Google Analytics. I also think they should be far more up front about what they collect, what they use it for, etc.


> I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization.

Maybe if it were only used for that it wouldn't be so bad. But I don't trust a company not to take another bite at the apple by selling customer data if they think they can get away with it. Matter of fact, refusing to do so is leaving money on the table and could get a CEO fired for not making the company as profitable as it could have been. Once companies have the data, they are almost certainly obligated to use it in ways to their benefit and your expense.


Not really, it depends on the revenue strategy. Shareholders would likely be more displeased at initiatives that could end up breaking user trust and harming core revenue (e.g. the sale of peripheral hardware). Blindly leveraging everything in a way that doesn't align with vision or strategy generally leads to disappointing returns. Smart leaders know this.


GDPR helps a little with this.


> I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization. People hate web tracking, but knowing what browsers or devices are visiting your website can be enormously helpful.

What happened to actually communicating with users to learn more about how they use the product?


It's a confirmation bias problem. Customers who fill out surveys and give direct feedback back to vendors are frequently lower-usage enthusiasts that aren't representative of your overall userbase.

I work on a product, and we include some telemetry. I'm also a strong privacy advocate, and I believe I've done my best within the corporate realm to ensure that the data we're collecting is extremely scoped AND useful for decision making and prioritization. In my experience, there aren't that many of me, but I implore folks to realize that as PMs and engineers, we absolutely do have a say in making sure that blanket data exfiltration and aggregation doesn't happen in our products.

Communicating with your customers proactively about what you're collecting and why is important too. And not buried in some privacy policy legalese: publish a blog post, explain what you're gathering, give examples of how it's driven decision-making for you in the past, and what you're hoping to learn in the future. It goes a long way.


As a user, even if you're up about these things and why they're important for product development, my answer is still "No."

Because I will only say "Yes" if I'm being paid or otherwise compensated specifically for that input.

I think it's important companies pay for usage testing so that they value that information and are more likely to hold it closely since it represents an investment and is perceived as competitive advantage.


That's why sane data collection is aggregate, upfront about the data it collects, avoidable, and most of all, explicitly opt-in. Alas, this requires eternal vigilance, as the pushback is neverending.


> What happened to actually communicating with users to learn more about how they use the product?

https://www.nngroup.com/articles/first-rule-of-usability-don...

"watch users as they attempt to perform tasks"


That assumes a great deal about both the resources available to do it and the ability to get responses from a representative sample of customers.

It's not challenging to see why someone might choose a one-time cost in software engineering over an ongoing cost in communication.


I really like the approach that the Steam survey uses which blends engineering and communication:

1) Pop up and ask for permission to scan the machine.

2) Show the data collected that will be sent back and give a second chance to decline.

3) Allow everyone to see the aggregate results.

Being mostly automated, it's lower friction than a manual Q&A survey. But it also feels way more respectful than trying to snoop around and then clandestinely exfiltrate the data. It's one of the few cases where I'm willing to opt in to data collection.


Great point! And being able to see the results in aggregate is also interesting. It inclines me to share, because it becomes a two-way share, even though I don't actually have use for the information.


Being able to see the aggregate data isn't just interesting but publicly helpful.

From a game developer perspective, looking at it right now tells me that (simplified):

* Most gamers have at least a GTX 1050 and 8GB of ram or higher. Perfect now we know where to aim our medium settings.

* 74% use Nvidia GPUs, 15% use AMD - now we know where to focus driver optimizations

* English, Simplified Chinese, and Russian are the top languages (where to focus translations)

* 72% play on 1080p, 14% on 1440p, etc. Tells us what resolutions to make sure our UI works on.


This. You aren't asking for vague blanket permission, nor are you asking for the user to manually fill out a survey. And you give them the opportunity to review what they're about to send.


It's not challenging, but it's wrong. If you want to explain it to them, making a binding promise of what it would be used for, and have it be opt-in, that would be another thing.

It's not hard to see how to do it right.


Morally and ethically, you're absolutely correct in every single possible way.

I was attempting to illustrate the decision-making processes that may have led to this juncture and what happened to communicating with customers. Please accept my apologies, as I have plainly failed to be clear that this was not an argument as to the moral or ethical questions concerned.

Again, you're completely right. It's not at all difficult to see the morally and ethically correct way to go about this.


Users stopped communicating. They install a thing, they uninstall it. 99% don't leave any comment at all.

(and yes, the obverse inference is also true. If you see one person complaining, there are probably 99 more who have had the same issue and have said nothing)


That's not an excuse, but a reason to pay for a focus group or study.


Like a survey? I'm not sure the information would be all that accurate.

If you want to put resources into "hey folks like to use this product with ours" you need accurate information.


Do what companies have always done: pay for user studies and focus groups.


Big companies maybe.


Slashed in pursuit of VC money and profits.


Wacom's a public company. There's a lot of things they might be chasing, but VC money seems unlikely to be high on that list.


Among this community's commenters, the prevailing conclusion appears to be that every piece of tracking is part of a conspiracy to sell you advertising.

> In section 3.1 of their privacy policy, Wacom wondered if it would be OK if they sent a few bits and bobs of data from my computer to Google Analytics, “[including] aggregate usage data, technical session information and information about [my] hardware device.”

What wasn't upfront about this? That they didn't add more details about what the session information was? Legally, why would they? The post includes an image of the section where they legally disclosed it. People not reading the privacy policy before using a product is not Wacom's legal problem.

Can you ask them to put this section on a separate screen? Sure. Will they do it? Who knows. Like anyone, I'm sure they'd rather hear that suggestion from a paying customer than from a low-priority non-customer.

How many blogs or websites disclose the use of Google Analytics in their privacy policy?

You could talk to many customers and find this is the last thing on their minds. The paranoia displayed by commenters here is amazing.

As the post concludes, if you are a (prospective) customer who does not like what they collect, there are other brands. Brands, I might add, that probably have a hidden, more intrusive way to track you, because they are smaller, have lower volume/margins, and have the incentive to build and sell your profile like other small companies not in this field.


I don’t think anyone is denying that the data is valuable, but what is usually missing in the implementations (not sure if true in this case) are 1. Transparency about what is being collected, 2. Requesting consent from the user and 3. Providing control to the user in the form of an opt-in or opt-out.

Transparency, consent, and control.

If every company addressed these three issues, we wouldn’t be having this conversation about privacy and data collection over and over and over.


I fully agree that they need to address those problems. I really like the way Valve does it with their Steam Hardware Survey.

What I'm addressing is that I feel many people see a company tracking data, and assume this data is valuable enough to sell, and that the data is for sure being sold.

My point was that the data isn't just valuable to sell (maybe), but is legitimately valuable in making a better product/service.


> from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization.

I don't think anybody is disputing that. But that its very valuable to devs does not excuse collecting it without getting the user's informed consent first.


That seems pretty unlikely for a mid sized, independently operated/capitalized Japanese company tbh.


Unless they want easy money. Show me a modern mid-sized, independently operated/capitalized corporation that doesn't want it.


Having started numerous tech companies myself, interviewed hundreds of others who have, and started a community that instructs others on how to do so, I will say that HN's perception of how easy, profitable, and common it is to have a business model focused on selling data is overblown. I'd wager the vast majority of data collectors (e.g. Google Analytics accounts) just want it for their own internal decision-making and analysis.


Maybe you're right, maybe you're wrong.

But trusting companies to do the right thing is untenable. That trust has been broken far too often, by far too many companies. The only rational position a concerned user can take is to assume that anybody collecting such sensitive information (particularly in a sneaky way) intends to monetize it or use it for purposes other than product improvement.

And even if the data really will only be used for product development, getting the user's informed consent -- and refraining from data collection without it -- is critical.

Further, using GA automatically means that the data is being used for Google's purposes as well as the application creator's.


As much as I would like to believe you, I cannot. With literally almost every company around me gathering analytics and with every average person's operating system, web browser, social media, phones, televisions, smart locks, smart fridges, graphic card drivers and graphic tablets sending this data out in bulk, you are not going to convince me that the authors of all that software are siphoning data out of us all for nothing.


> you are not going to convince me that the authors of all that software are siphoning data out of us all for nothing.

That's not what they are saying at all, and suggesting it's for nothing proves you don't understand the value of using the data for internal decision making. Simply put: the main use of this data is for that reason: internal decision making. Answering questions like:

* How are our customers using our products?

* What errors are they experiencing?

* What features are they using?

* Where are they confused?

* What features cause the most problems?

* What feature should we work on next?

These are all regular questions that are answered by collecting these types of metrics, including the one described in this post. Selling the data to third parties isn't easy. The data is generally gathered to inform product decisions, not to sell, so it isn't in a format that's easy to sell. One has to go out of their way to sell this data, and the cost to put it together in a way that's useful to sell would almost certainly exceed what they'd get from such a small number of users.

The simple fact is, everyone sends data back to their servers for collecting and parsing, including Apple, the company everyone puts on a pedestal for privacy.

Simply put: show me the evidence they are selling the data to third parties for profit. Anything less is speculation.


Can you point me towards the marketplace where I can sell data I've siphoned off? Thanks!


If you have large data-sets/streams, companies like Acxiom and Innovis are interested buyers. Are you trying to imply that brokering consumer data is a marginal or non-existent business?


> Are you trying to imply that brokering consumer data is a marginal or non-existant business?

For small/mid-size datasets (which is what we're talking about), yes, that's exactly what I'm implying. It's not actually easy for most companies to sell user data for a quick buck like is being claimed.


Why? Because they don't have enough users for this to be worth it you mean?


It's certainly the more cynical explanation.

I believe wanting it for product development is just as simple.


It would be "simpler" to say, but that doesn't invalidate the other reason.

My tablet behaves differently per application. If I typically have one app open only on one screen, I can limit the tablet's "workspace".

Context-specific buttons based on app.

And if you're doing that _and have built sufficient app infrastructure around it_, as Wacom has, to support fairly custom per-app behavior, the more realistic conclusion is that they're trying to get more info on that. Now, you can argue about opt-in on the "share experience data" privacy setting, and I would agree, absolutely.

But "more simple to say that they're just selling data for money" is a pretty reductionist argument that jumps solely to the most negative possible motivation. "What's the worst they could be doing with it? Selling it? That's probably what they're doing, not making their tool more useful."


> IMO the more simple explanation

For me Occam's Razor points towards internal analytics.


Can you even export this information with PII from Google Analytics, or only an aggregate of it?


Only aggregate


Then, if the goal was to sell, they wouldn't use Google Analytics.


> IMO the more simple explanation is they want this data to sell to data aggregators

Seems the obvious answer, yep.


It doesn't really matter what their reasons are. Having the data is a liability. The author explained one scenario where these things can go wrong. Another is if they're hacked. Or if they're purchased by a larger unethical company. Or if they accidentally keep the data in an open database on an Amazon cloud service. Or a million other scenarios.


The browser is one app that commonly puts personal identifiers in the title bar.

Pretty much every site you visit puts PII in the page title, which the browser dutifully includes in its window title.

GSuite leaks my email address:

   "Inbox - my.email@example.corp - Example Corp Mail - Mozilla Firefox"
Desktop apps are pretty much no different.

Outlook leaks my email address, and subject lines of emails or meeting information:

  "Inbox - my.email@example.corp - Outlook"

  "EMBARGOED Friday 7th ::: Corp Revenue for 2019, +25% over expectations! - Message (HTML)" 
 
  "Meeting: Pre-Announcement, Dial-In +1 555 1234 ext 1234"
Visual Studio leaks filenames, repository information:

  "page.html - corp-project-repo - Visual Studio Code" 
Pretty sure most office suite and Adobe apps will do something similar.
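If the goal really is just "which app is in the foreground", the PII is trivially strippable before anything leaves the machine. A minimal sketch (hypothetical helper names, nothing Wacom actually ships), in Python:

```python
import re

# Matches obvious email addresses like "my.email@example.corp".
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_title(title: str) -> str:
    # Replace any email address in a window title before logging it.
    return EMAIL_RE.sub("<redacted>", title)

def app_name_only(title: str) -> str:
    # Many apps format titles as "Document - App Name"; keeping only the
    # last " - "-separated segment drops document names, email subjects, etc.
    return title.rsplit(" - ", 1)[-1]

title = "Inbox - my.email@example.corp - Example Corp Mail - Mozilla Firefox"
print(redact_title(title))   # email replaced with <redacted>
print(app_name_only(title))  # "Mozilla Firefox"
```

A vendor that genuinely only cared about application names could go further still and hash the title, or drop it entirely in favor of the process name.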


Isn't the cid being sent specifically to identify the user?


Heya - I could swear that wasn't there when I originally wrote the comment, but obviously it is there. Thanks for pointing that out. With that said, it doesn't change the substance of my comment too much - as I pointed out one can get a pretty solid unique identifier many ways, not limited to what I said above, you could even call out the presence of a permanently identifying header that Chrome gives some users[0].

[0] https://news.ycombinator.com/item?id=22236106


This tracking becomes a big issue when aggregated with other data. As someone who works for an analytics company, you should be well aware of this.


I am aware.


The cid parameter (the one that's censored) is a unique identifier for the author.


Heya, replied to another comment that brought up the same thing, sorry about that - here's what I wrote there:

Heya - I could swear that wasn't there when I originally wrote the comment, but obviously it is there. Thanks for pointing that out. With that said, it doesn't change the substance of my comment too much - as I pointed out one can get a pretty solid unique identifier many ways, not limited to what I said above, you could even call out the presence of a permanently identifying header that Chrome gives some users[0].

[0] https://news.ycombinator.com/item?id=22236106


This is just one of many reasons to use StevenBlack's Hosts [1] list to block this type of behavior. While it doesn't currently block link.wacom.com, it would have prevented the subsequent requests google analytics. It works even better when paired with a PiHole [2] to protect all devices on the network.

[1] https://github.com/StevenBlack/hosts

[2] https://pi-hole.net/
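For anyone unfamiliar with the mechanism: a hosts file maps unwanted hostnames to an unroutable address so requests to them die locally. A minimal fragment along the lines of what these lists contain (with link.wacom.com added by hand, since as noted it isn't in the list yet):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
0.0.0.0 google-analytics.com
0.0.0.0 www.google-analytics.com
0.0.0.0 ssl.google-analytics.com
0.0.0.0 link.wacom.com
```

`0.0.0.0` is generally preferred over `127.0.0.1` because connections fail immediately instead of waiting for a local connection attempt to time out.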


I mean I put Pihole on all my networks but this is at best a solution to “nice malware” that doesn’t bother to hardcode addresses or perform lookups via an attacker controlled DNS server.

You can catch slightly more aggressive malware by forcing all DNS traffic to your server at the network level but you’re now playing the role of malicious network operator. I would whitelist this to only devices you own.


I don't think anyone would argue that a PiHole is a replacement for following best practices in computer and network security. I'm just pointing out that a PiHole can block Google Analytics and other common privacy violators. It's not a security tool and isn't advertised as such.


And even if you go that far, DoH lets the device use whatever DNS server it wants.


Sadly, some of these lists don't currently include google-analytics.com, since some sites would otherwise break as a result. So when using one of these hosts files it's often a good idea to double-check whether they include Google's domains first.

(Also sad to say that GA is so big that a lot of websites/app rely on it)


> some sites would otherwise break as a result

Wow, that's weird. I don't remember ever seeing a site like that. Can you point one out? I mean, GA has been blocked at my places since 2015, and I don't remember anything ever breaking, on phone or desktop.


Can't think of any specific sites, but it's happened to me a few times. It's usually because there's bad code that's waiting on the GA init function before doing anything else.

This is why some blockers like uBlock Origin stub out the Google Analytics interface.
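The stub is conceptually tiny. A sketch of the idea (not uBlock Origin's actual code): install a no-op `ga` with roughly the surface the stock snippet creates, so page scripts that call it unconditionally no longer throw when analytics.js is blocked:

```javascript
// Neutering shim: define a do-nothing `ga` if the real one never loaded.
(function (global) {
  if (typeof global.ga === "function") return; // real GA present; do nothing
  const noop = function () {
    // The real GA invokes ready callbacks passed as arguments; honor that
    // so pages waiting on GA init don't hang.
    for (const arg of arguments) {
      if (typeof arg === "function") arg();
    }
  };
  noop.q = [];         // the command queue the stock snippet creates
  noop.l = Date.now(); // the load timestamp the stock snippet records
  global.ga = noop;
})(globalThis);

// Page code like this would otherwise throw a ReferenceError:
globalThis.ga("create", "UA-XXXXX-Y", "auto");
globalThis.ga("send", "pageview");
```

The calls land in a void instead of a network request, and the badly-written init code further down the page keeps running.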


It happens. If they call older GA tracking code from other JS and don't try/catch it, it will throw an exception when GA is blocked.

When I find a site with this problem I go elsewhere.


Is there a way to create a whitelist instead of a blacklist?

In some VMs / computers, I'd like to whitelist Internet domains instead of blacklisting, for security reasons.

Edit: Seems PiHole supports whitelisting: "Manage White And Black Lists" https://pi-hole.net/


I don't know enough about webdev, but why is the Google Analytics request sent by the client? Wouldn't it be easier for the webhost to send a request to Google saying "this IP with this browser connected to me requesting this content", making it impossible to block on the client side?
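Server-side reporting does exist: Google's Measurement Protocol lets a server POST hits directly to GA, which no client-side blocker can see. Sites mostly don't bother because the standard snippet runs for free in the browser, and the browser is where screen size, language, and interaction data live. A hedged sketch of building such a hit (the IDs below are placeholders, and `build_ga_hit` is my own helper name):

```python
from urllib.parse import urlencode

def build_ga_hit(tid: str, cid: str, page: str) -> str:
    # Measurement Protocol v1 pageview hit. A server could POST this body
    # to https://www.google-analytics.com/collect itself, invisibly to
    # the visitor's browser.
    params = {
        "v": "1",         # protocol version
        "tid": tid,       # tracking ID, e.g. "UA-XXXXX-Y"
        "cid": cid,       # anonymous client ID
        "t": "pageview",  # hit type
        "dp": page,       # document path
    }
    return urlencode(params)

payload = build_ga_hit("UA-XXXXX-Y", "555", "/home")
print(payload)  # v=1&tid=UA-XXXXX-Y&cid=555&t=pageview&dp=%2Fhome
```

The trade-off: the server only knows what's in the HTTP request (IP, user agent, URL), so the richer client-side signals are lost, and the site takes on extra work that Google's snippet otherwise does for free.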


Microsoft Defender has started resetting hosts files, which is really annoying; I can't seem to disable it either. Especially annoying when developing against a local server!


Operator of lots of DNS stuff here.

Hosts files are literally the devil. They break so much shit. Hostnames sometimes change behavior (like an ad server that starts hosting a redirect script for legitimate clicks), kids who are "good with computers" set them up on relatives' computers over the holidays and leave them unmaintained, malware uses them to block antivirus updates, etc.

If you want to block ads, fine. Use a content aware proxy or browser extension.


> browser extension

Using browser extensions to block ads is much higher risk than doing DNS blocking. Most ad blockers have full access to all web pages, which essentially means they could trivially scrape your usernames/passwords for your email/banks/etc or perform actions on your behalf.

There's room for this to go bad (an ad-blocker dev turns bad, or sells the extension to a bad guy for a wad of cash, or the extension has security vulnerabilities, or the keys for publishing the extension are not properly secured), so while DNS-level blocking might not work as well, it's definitely not an obviously worse solution.

(Though FWIW, PiHole in the past had some really aggressive default lists, which stopped me from using it. I set it up again recently and it's been much better; I haven't had any broken websites besides Amazon's own sponsored product links at the top of their search results pages.)


Bravo for a really well written article. I'm interested in this kind of thing but not familiar with techniques & tools used, so it was really nice that the author included lots of detail, reasons for doing things, etc.


Burp Suite and Wireshark come default in Kali. Give it a try:

https://www.kali.org/downloads/

(Also make sure to check out Maltego, Metasploit Framework and Armitage.)


I work on UX, coming from an engineering background. It means that everyday I work close to product management and engineering.

The trend over the last 10 years has been to collect tons of data to improve the product. Some PMs and UX researchers believe they'll get a magic insight from the data, and the skeptics do it anyway because it's another data point to have. For engineering, services like GA are cheap and easy to integrate.

Nobody has bad intentions. But we are too distracted by the next product release to see the long-term consequences for society.

The reality is that some data is useful, but most of it is BS. To measure adoption and engagement you can do a pilot and then deactivate data collection. Big app errors are reported soon after a release, and you don’t need to continue collecting that for a long time.

To improve the UX you can do research with fewer data points and smaller groups. The irony: I wish I had data to prove it; my hypothesis is based on my experience. I got more actionable insights from qualitative research, self-reported metrics, or quantitative data focused on certain aspects (instead of collecting everything just in case). Sometimes having nice reports based on tons of data is more useful as an argument in corporate politics than for improving the product, but users shouldn't have to pay the consequences of your company's stupidity (I'm looking at you, MS telemetry ;-) )

There is a simple thing we can do to change this trend. Ask yourself: What is the goal of collecting the data? What product hypothesis do you want to prove? Can you get insights from a small group? If you don't know... hold off on your data-collection desires.


The one technical reason I see for doing this is to help in dealing with customer bugs and complaints. So strictly in a diagnostic capacity.


For those cases the app can collect the exceptions only (as many apps and OS do).

I worked on a desktop product with this type of data collection. Usually what happens is that after a new release you may see new errors coming up, and then they start to repeat. The data collection becomes a burden; new reports of the same error type don't give you more information.

It’s a good opportunity for a good UX, e.g point the user to the relevant support info to solve the problem.

For support cases you may be able to ask for diagnostics on demand. The app can collect them internally without sharing, and send the relevant part when an exception occurs and the user agrees to send it.


That's actually a good piece of advice, thanks for posting it.


I wonder what the comfortable medium between privacy and letting developers get feedback about how well their code works is. It seems to me like Wacom just wants to know if their drivers work, so they can focus engineering efforts around fixing the issues that are affecting their users. "Oh hey, the new beta of Photoshop breaks our drivers!" They don't make a "cloud product" and they have an obligation to make their hardware work with any software the user might want to use, so they are kind of painted into a weird corner here. If they collect data to drive their engineering, they're spyware. If they collect no data, they're a bug-ridden disaster area whose product you would never buy.

I am guessing that the answer will be "they should test everything in house and tell users to complain via email when shit is broken"... but we all know that synthetic QA is never going to be as good as "ground truth", and that 99% of users will just silently be unhappy. So I wonder what the privacy balance is here.


I understand the need for developers to know more about the hardware and software running on their client machines. For example, I believe information like the hardware survey from Valve [1] are very valuable for the whole industry.

But there's a some kind of an etiquette you need to follow, if a company wishes to collect data:

- Be straightforward. Say what information you are collecting, at what time and what for.

- Tell me in what way this information will be stored and how will it be anonymized.

- Will the data be stored forever? And is there a way for me to request the data or its deletion?

- Don't collect data per default. Make it opt-in.

- Publicize the data in a suitable way. It may be useful to others.

Wacom simply ignored all of that human decency. How can you ever trust this company again?

[1] https://store.steampowered.com/hwsurvey


Strangely enough, all your points are required under GDPR.


> I wonder what the comfortable medium between privacy and letting developers get feedback about how well their code works is.

I consider the nut of the problem to be informed consent. If you have user's informed consent to get the feedback, then there is no problem. If you don't, then the whole operation is unacceptable.

And no, mentioning it in the privacy policy or terms of use don't count as "informed consent".


This would be a real challenge for some companies. Having a clear privacy policy creates a hard dependency between it and the code. And developers are notorious for not even being able to keep comments updated along with their code changes.

It's not impossible at all, just in the current state of the industry there's a good reason we have vague agreements (also including good old-fashioned laziness, of course). It'd probably need to be developed ground up as an API with side effects, so when the code is compiled it spits out some details about how it's used.


> Having a clear privacy policy creates a hard dependency between it and the code. And developers are notorious for not even being able to keep comments updated along with their code changes.

That's just a small extra step in the QA pipeline.

Also: analytics and telemetry code doesn't just appear out of the blue. Someone makes an explicit decision to scoop error logs from users, or track clicks, or spy on system configuration. That someone is usually higher up the management or technical chain, and should know enough to recognize that sending anything collected on user's machine that is not crucial (in the most strict, technical sense) to performing the action user activated has privacy implications.


So what? Informed consent is also "a real challenge" for some medical studies, does that mean we should let doctors carry out unethical studies?

I'm actually pretty sympathetic to Wacom in this instance, more sympathetic than the blogpost author at least. But unethical actions are unethical regardless of whether acting ethically is "a real challenge" for some companies.


The deep problems of “informed consent” are apparent in medical studies/treatment. Few patients are equipped to be informed because they don’t have a med school degree.

Since users ”can’t be informed” about tracking, it doesn’t make sense to discuss whether they “should be informed”.


Doubtless there are deep problems with "informed consent", but saying they "can't be informed" is nonsense. Is your plan to not bother to inform people because they "can't be informed", and decide what's best for them without their knowledge or consent?


To the extent permitted by applicable law.


> This would be a real challenge for some companies.

Tough. If a company can't do it the right way, they shouldn't do it at all.

> in the current state of the industry there's a good reason we have vague agreements

Well, I guess that depends on your point of view. I see no good reason for this, but I have no doubt that the various companies do see a good reason by their definitions.

You're right about the current state of the industry, but the current state of the industry is a travesty.


Yeah, 'good' was the wrong word. Maybe 'understandable' but that's still is a bit too charitable.

I was mostly musing about how changing code can have legal/business as well as technical side effects, and we've seen that to some degree with mobile app permissions who just grab everything because it's seen as too much effort to do it right. So I'm curious if this is going to change for the better any time soon.


Thing that I know happens, from personal experience: you can put a giant modal alert, and write in blinking, all caps, 60pt bright-red font that you will do something unless the user presses a button, then draw a bright red arrow to the button. Users will still complain that they weren’t informed.

Users are lazy and dumb, and the most ideological users are often the laziest and/or the dumbest, because they have an agenda. They will go out of their way not to give you the benefit of the doubt (”why was the font not 80pt? Clearly, you’re trying to hide something from users on high resolution screens!”)

It never ends.


If your goal is to eliminate user complaints about this, justified or not, then just stop intrusive data collection entirely. Then you don't need to bother with obtaining consent.


Like I said, ideologues tend to be the ones who will complain no matter what you do.


The way Steam handles the occasional hardware survey requests, which are purely opt-in, seems appropriate here. “Please select which applications you use your Wacom with” with a permanent opt-out checkbox would be quite acceptable. (Steam knows what games we play but not beyond that; Wacom must ask us to specify which apps we ‘play’ since the OS can’t be more specific.)


FWIW, as a customer of Wacom's products they very much do not view themselves as

> [having] an obligation to make their hardware work with any software the user might want to use.

They update drivers for 4 or 5 years, then tell you to buy a new product if you expect it to work with current-gen software. Despite the fact that none of their tablets have gained a substantial new feature in 20 years beyond the wireless connection kit, somehow a driver for an "Intuos Pro 4" cannot be used with a functionally identical "Intuos Pro 3".


To me the comfortable medium is 100% privacy. 0% feedback. There is no middle ground because feedback and privacy should not be conflated. Users already own the device and owe no data to their vendor.


I've started turning off analytics everywhere. I turn off reporting on Firefox, Atom, everywhere. No crash report. Nothing if I can untick the box. Windows Firewall Control or LittleSnitch for all the outbound traffic as well. I don't let most windows services talk to the Internet unless it's updates or Windows Defender.

Some stuff is going to get through, but it should just be because you missed it. I'm sorry FOSS people; everyone is collecting way too much and I don't want to give Mozilla my data either. No, not even crash reports.


I have a Wacom tablet. The drivers don’t install on macOS any more. There doesn’t seem to be any technical reason for this. It’s a USB device (“essentially a mouse”) and it worked fine for several major versions. Maybe it was a 32/64 issue.

You’d think if keeping users happy was their primary goal, they might start by keeping their existing USB drivers compiled for the current macOS.

They don’t need me to email them to tell them it’s broken under current macOS. They’re the ones who told me!


> Why does a device that is essentially a mouse need a privacy policy

I mean, crash logs, but yes -- defining question for our time

drivers shouldn't connect to the internet unless that's what they're for. crash logs should be managed by a third party thing that the user can configure


While this implementation obviously has privacy issues, the anonymized aggregate data would be quite interesting, e.g. how many people use photoshop, illustrator, etc. with their wacom tablets.

The problem then seems to be more about the false positives. If you use "Half Life 3 Test Build" that is useless info for wacom because it (presumably) doesn't care about pen input. Q: If the data were filtered to just art/graphics apps using the pen, would that still be problematic?


> would that still be problematic?

Yes. When thinking about data, you need to think about orthogonal uses. Can you imagine reasons why someone might subpoena data to determine whether Photoshop was being used on my home desktop machines at a particular time? They might not care that it was Photoshop at all.


Any data collection of course has a privacy cost and should of course be opt in.

What about aggregate data limited to art apps? For example if it only sends a monthly summary: used photoshop with a wacom tablet for 15 hours this month, illustrator for 3 hours this month?


I don't have an answer for you.

I think any attempt to exfil data not required by the function of the tool should be clearly and transparently disclosed, the use of those disclosures backed by the force of law, and opt-in. This is obviously far from where we are.

Because of that, I would block it no matter what. In a better world, I would selectively allow some instrumentation and such, but as-is, there is no way to trust any of it.

So I'm the wrong person to ask.


> What about aggregate data limited to art apps?

Well with the proviso you stated that informed consent has been obtained first, then this would be fine (as would more frequent/less targeted collection). If not, then this is not fine.


If that's the kind of behavioural information they want, they can pay for it just like anyone else. That's the kind of data that should absolutely not be expected to be free.


In that fictitious scenario, would they have checked with Adobe whether they'd mind their users being spied upon? What if this information is indirectly used for trading on ADBE stock? Would that be considered OK?


Could you elaborate? Why would I as a user need permission from Adobe to tell wacom that I am using their tablet in photoshop?

> What if this information is indirectly used for trading on ADBE stock? Would that be considered OK ?

Obviously yes? What is supposed to be the issue here?


The collection of the data is almost never what’s problematic. What’s problematic is the lack of transparency about what is being collected (doing it to users in secret or burying it in a privacy policy somewhere), and the lack of a way for the user to consent or not consent to it via an opt-in or opt-out. If the company provided these, then this is kind of a non-story.


In this case, the author both found the information in the installer, and the app has an "Opt out of Experience Sharing" privacy checkbox (which I would agree should be opt-in, not opt-out), so to me that covers most of this.


> Q: If the data were filtered to just art/graphics apps using the pen, would that still be problematic?

Yes.


No wonder everything is slowing down to a crawl when every mouse driver and their dog is doing full surveillance on every move the user makes.


They have all the data from the uninformed, ambivalent or defeated already. We develop things to crush the remaining resistance. Walled garden devices, cert pinning, signed applications, DNS over HTTPS, yes they are all more secure, but not for you. If well implemented, these serve as tools to make sure the privacy policy is the only thing informing you of collection.


I'm not perfectly okay with what you are suggesting (and that's okay of course).

But essentially, coming from a 3rd-world country where censorship was the norm before the Internet came along, and seeing how TLS and DoH are giving similar states like China a headache, I have to say that I am extremely happy, but concerned.

I believe it is a regulatory problem. In essence, make collecting data punishable personally (i.e. person X signed off on the decision to collect data, person X gets jail time).

I know that's probably not even remotely possible because employees "operate on behalf of the company", but removing that shield would effectively eliminate this. The same way dumping stock at a company means the FTC/SEC/FBI will have your ass on a platter, personally.


Saw this two nights ago installing the driver on Windows 10. Read the EULA. Did not consent, closed the window. Is that good enough?

By the way, my tablet works MUCH BETTER on Ubuntu and Mint than on Windows 10. Krita and MyPaint are cross-platform, so I might just do my art on a *nix box instead.


Off topic, but would you be willing to expand on your Linux experiences with this?

I'm currently doing all of my digital drawing on an old SP3 tablet running Manjaro, via Krita. The driver support is... acceptable, I guess. Krita has more than a few annoying edges, but shows a lot of potential so I've been sticking with it.

For a long time I've been considering springing for a dedicated setup with one of Wacom's larger devices, but I've held back because I need it to have completely solid Linux support and I can't figure out how to test that in advance. I'm always curious to get more info about what issues other people have seen.

I wish I could find a physical store where I could just bring in a laptop, plug it into the actual device, and draw for maybe an hour to figure out if there are any dealbreaking problems.


"Off topic, but would you be willing to expand on your Linux experiences with this?"

I'm using a Wacom Intuos pen & touch M graphics tablet, connected to a Thinkpad 430. Over the years I was using Debian Stable, MINT, and finally Ubuntu.

The experience is great. Like I mentioned, much better than windows. I only just started to use Krita (I prefer MyPaint, however I feel I should branch out). The work I do isn't special, just stupid doodles and cartoon type of stuff. The wacom I'm using is older, I think I bought it 5 years ago or so.

I don't really have much to add besides that. I remember WAYYY back in the day having to compile the driver myself for an older Wacom (Ubuntu 6 or 7 era). It's practically plug and play now; however, I think there was some other apt-get stuff that I did once for a reason I forget (eraser wasn't working?). If you are having issues, maybe try another tablet. I think the one I have can be bought for $50 on eBay. Maybe try a 30-day-return place like Best Buy and, sorry to say, try the latest Ubuntu or Mint for compatibility (have a dedicated art machine?).


Well, that kind of thing is what the return policy is for, isn’t it?


Which tablet are you talking about just out of curiosity?


> as far as I can tell anyone with the presence of mind to decline it could do so with no adverse consequences

Makes me think one should try declining these kinds of agreements to see what happens, before accepting. As someone who also has an "anti-privacy-policy-policy," I wonder how many of these kinds of things I've agreed to when it was unnecessary.


As far as I can tell the only consequence of declining it is that it pops up the “hey please let us have all this info” dialogue whenever you reboot. I’ve been doing this on my own machine for most of a year.

Might be different with the latest update, I haven’t bothered with that.


Is there an accessible way to prevent an application drawing a specific window to the screen?

I can see an app like autohotkey could click the "no" button and automatically remove it, but could you (assuming it's not modal; which it probably is) tell Windows not to show it?


The "Wacom Desktop Center" app mostly just sits there looking for updates and bugging you to enable tracking anyway; the Mac version has a menu setting that theoretically stops it from ever running, and thus bugging you to sign up to share your analytics. I just turned it off (since I just noticed it for the first time) but don't feel like rebooting to check if it actually works. (Though I did just run the little script I keep around to restart the driver, which normally brings up the WDC, and it did not show up this time. Huzzah!)

No idea about Windows, I never use that.


One thing the article doesn't talk about is when this tracking happens.

Does it only happen if the pen is touching the tablet, or does it happen all the time even if the pen isn't touching the tablet?

Because there's a huge difference between the 2. Normally you would keep your tablet plugged into a USB port but the pen isn't actively being used.


"In some ways it feels unfair to single out Wacom." - Uh no, it is completely fair to single them out and put them in the spotlight for doing this kind of tracking.


I think the statement was meant to indicate that this kind of behavior is well-nigh ubiquitous, so the only thing really different about Wacom is that they're the one we're talking about, when they are by no means the worst offender.


I'm sure that's exactly what it means -- but it's still fair to call out individual companies that engage in this misbehavior. That others are doing it as well isn't important.


It looks like everything in tech got poisoned, smart TVs taking screenshots, web apps tracking and matching user clicks, smartphones tracking locations realtime and who knows what else, desktop apps monitoring other apps and peripherals, creepy companies building profiles on everyone, health institutions selling data of their users... I want out, I didn't get into this field, keeping myself up-to-date and super capable via top universities, to be just another cog in building a toxic monstrosity this industry is becoming just to make somebody with a limited lifespan feel powerful and rich.


"just to make somebody with a limited lifespan feel powerful and rich."

Wow, that last sentence really puts things into perspective. How can we reverse course and throw a wrench in the system? We are the makers; we should be able to wrestle back control, do it democratically, and get politicians on our side to legislate this ad industry into the ground.


First step would be to kill advertising-based models. Get them banned. Because it's the advertising industry's race to the bottom that poisons everything, and fuels the creation of surveillance infrastructure. With ad-related snooping gone, it will be much easier to rein in the remaining few players who misuse data in pursuit of optimizing their business models.

(And yes, I know ads enable a lot of free content on-line. But as countless problems like this show, it's a bad tradeoff.)


There was a time, before now, when advertising was effective and didn't track you; it was the advent of targeted advertising that really killed things off.

Can we go back to the days when an advert was just an image and a hyperlink? When advertisers paid by the pixel and by location on the website? When JavaScript went unused except in some rare and warranted cases?

I still believe the web can be a free and open market place of ideas.


There are two troubles:

1. The uplift of targeted advertising is unbelievable until you see the actual statistics. It's like slowly sipping a cup of coffee to wake up versus waking up to snort a line of crack.

2. Advertisers were abused and defrauded by adtech. Which has inspired all kinds of surveillance hellscape because the advertisers finally caught wise and have renegotiated to pay for actual performance only -- not clicks -- actually closed sales. But adtech wants paid if you do your research online, respond to an ad online, and then buy in store. And a whole lot of adtech now allows for that. Attributing an in-store purchase with no customer interaction to a prior web session by that same party.

The benefit of those two factors to the advertisers is such that we can't have a serious discussion about this shit going away without a law which assigns criminal penalties for being a beneficiary of the scheme.


Most ads that I remember when I was new to the internet were all running flash and had really annoying circus-game-like objectives such as "Can you shoot the basketball through the hoop?!?"

I hate the means by which advertising is targeted today, but I would be lying if I said the format of the ads themselves were more annoying or less useful than the past.


Ads enabling free online content is a bit of a glazier's fallacy. Yes, when ads are an option, most content producers will be funded by ads: it's simple to implement and does not require end-user consent. But as soon as the degenerate strategy is removed, new strategies become viable, such as micro-payments.


I think the potential of micro-payments was never reached because transactions are currently too large, and uni-directional. If we could process payments of $0.01, it would create a culture where you are actually willing to pay for content. Also, if you make an insightful comment, a product review, or even an upvote, you should receive some credits to spend back on consuming content. It doesn't have to be blockchain; it would be effective even if limited to one site or content provider.
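
The credit loop described above can be sketched as a toy ledger. This is only an illustration of the concept; `MicroLedger`, the prices, and the user names are all made up, and a real system would need actual payment rails:

```python
from collections import defaultdict

class MicroLedger:
    """Toy ledger: users pay fractions of a dollar to view content,
    and earn credits back for contributions (comments, reviews, upvotes)."""

    VIEW_COST = 0.01        # assumed price per article view
    CONTRIB_REWARD = 0.005  # assumed reward per useful contribution

    def __init__(self):
        self.balances = defaultdict(float)

    def deposit(self, user, amount):
        self.balances[user] += amount

    def view(self, viewer, author):
        # transfer the view cost from the reader to the content author
        if self.balances[viewer] < self.VIEW_COST:
            raise ValueError("insufficient credit")
        self.balances[viewer] -= self.VIEW_COST
        self.balances[author] += self.VIEW_COST

    def contribute(self, user):
        # site-funded reward for an insightful comment/review/upvote
        self.balances[user] += self.CONTRIB_REWARD

ledger = MicroLedger()
ledger.deposit("alice", 1.00)
ledger.view("alice", "bob")   # alice reads bob's article
ledger.contribute("alice")    # alice leaves a helpful review
print(round(ledger.balances["alice"], 3))  # 0.995
print(round(ledger.balances["bob"], 3))    # 0.01
```

The interesting property is that reading and contributing flow through the same balance, so an active commenter can partly fund their own consumption.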


What we actually need in a practical sense is Kindle Unlimited but for web content.

And furthermore it could be a plurality of those kinds of providers aggregating content.

Deploy single-sign-on schemes, and websites might participate in a plurality of programs from different vendors.

But at the end of the day, you'd pay one or two "providers" a monthly sub, they pool the funds, take their cut, and do prorata distribution of the pool based on views, eyeball time, popularity of content, lots of ways.

No need to perform microtransactions from a banking perspective. You're going to eat $20 of web content this month, and so will lots of others. And then those views can participate in the pool and get paid monthly or something.
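
The payout scheme above boils down to a one-line pro-rata split. A minimal sketch, assuming views are the only weighting signal (the function name, the provider cut, and the site figures are all invented for illustration):

```python
def distribute_pool(pool, provider_cut, views_by_site):
    """Split a monthly subscription pool among sites,
    proportional to views, after the provider takes its cut."""
    payable = pool * (1 - provider_cut)
    total_views = sum(views_by_site.values())
    return {site: payable * v / total_views
            for site, v in views_by_site.items()}

# e.g. $20/month from 1000 subscribers, provider keeps 15%
payouts = distribute_pool(
    pool=20.0 * 1000,
    provider_cut=0.15,
    views_by_site={"siteA": 700_000, "siteB": 250_000, "siteC": 50_000},
)
print(round(payouts["siteA"], 2))  # 11900.0
```

Weighting by eyeball time or popularity instead just means swapping the `views_by_site` values for a different metric.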


Nothing short of inquisition-style inspections of company servers, code, and networks is going to fix it on that front. We'll have to do some really radical invasions of company privacy before it results in more than fines-as-a-business-expense.


Not really, you just need to incentivise the behaviour. Pass a law to make it illegal to show ads based on tracking, with executive imprisonment for violations. Executives will then demand that their teams put controls in place to ensure that no one is using tracking to show ads. Violations of HIPAA carry some serious fines, and I've watched executives at all levels take it VERY seriously.


Rich executives and therefore the media tend to push back on such limits. And they tend to be more equal than the rest of us.


Wouldn't these executives and media companies run afoul of the GDPR? At least in the EU they can be fined, and maybe prosecuted if they are repeat offenders.

It would be nice if all the EU residents in one go sent a request to remove their information from all the US-based media and technology giants who, under the guise of changing the world, are really working to sell more to consumers and make themselves rich.


Awesome idea, HIPAA for protection against the advertising industry and their digital intrusion/violation/surveillance of our privacy.

Something like APPAA - Advertising Privacy Protection and Accountability Act

What would you guys call it? What language is necessary to cover all the edge cases for the deceptive and dirty-playing advertising industry?


Luckily 1/3rd-ish of those are happening over HTTP(S) over IP on PC.


The entire targeted ad model has to die.

As in, being a beneficiary of a targeted ad campaign becomes a presumption of criminal activity.

We have to make the advertisers culpable for the behavior of the companies serving up their ads.


Do you know of anyone working on such an effort?


Self-hosted apps/DApps come to mind however they are still alpha-quality. It might take some time (~5 years) to replace 90% of expected FAANG functionality.


This is just a non-starter and a pipe dream. It goes against the principles of capitalism and free enterprise.

You need to think about incentives. You have two tablets, identical in functionality and performance, but the one without the ads and surveillance features costs twenty percent more. Which one do you buy? Kindle actually did this for a while (maybe they still do...). I ended up buying the cheaper one with the ads.


Can confirm that Kindle does in fact sell devices that are ad-supported.

For a $20 one-time fee you can remove this feature (which can be done at any time after purchase). But most people won't notice the tiny option selector on the Amazon purchase page that defaults to "with Special Offers".

To make it even more confusing, they make it sound like the ad-supported version is the better option. I mean, you are choosing between the model "with special offers" and the one "without special offers". Most people who don't know any better will leave the default "with special offers" option selected.

Source: https://www.amazon.com/All-new-Kindle-now-with-a-built-in-fr...


Yes, you're right. I know people who bought Kindle with ads just to save that 20% or so.

But that's why I wrote that advertising-based business models need to be banned. Not discouraged, not badmouthed, but banned. They're anticompetitive and poisonous; when one company starts doing it, others in the whole sector are drawn to follow suit (it's e.g. why it's hard to actually sell apps on mobile or make subscriptions for publications on-line profitable; ad-supported operations create a baseline cost of zero).


So go back to the 90s? Microsoft being the reigning king? Not sure if this time period was honestly better.

Or alternatively, give users ownership of their data like the EU has taken steps to do. Advertising is here to stay and provides its own use. But there could be something that forces transparency.


How about no free stuff on the web? Charge for services and make it illegal to sell private information without the subject’s consent and remuneration.


Making it illegal won't solve much. E.g. FB doesn't sell private information, they sell the ability to target demographics. However I understand the underlying principle which you are espousing.

The genie is out of the bottle at this point. IMHO the only method forward is how do we as a society responsibly allow for coexistence such that all parties are satisfied.


>> How about no free stuff on the web? Charge for services and make it illegal to sell private information without the subject’s consent and remuneration.

> Making it illegal won't solve much. E.g. FB doesn't sell private information, they sell the ability to target demographics.

You just need to appropriately specify what's being made illegal. You're right that FB could weasel out of a law that outlawed selling private information. They couldn't weasel out of a law that outlawed monetizing personal data without user consent.


Precisely. And then extend it further. Make advertisers responsible for the behavior of the ad marketplaces, ad tech, ad venues they're paying.

Make their only defense be cooperation in prosecuting the offending party.


The way to criminalize it properly is to create a presumption of criminal activity when an advertiser benefits from ad targeting. Literally anything more than the advertiser choosing what sites to appear on should be made criminal. No other criteria allowed.


>Not sure if this time period was honestly better.

Canter and Siegel was in 1994.


Quickest thing you can do is quit your jobs en masse. At some index level of number of people * avg skill of people quitting, the industry becomes nonviable. Then you state your terms.

Sounds like a union? Also sounds like Galt's Gulch. Weird dovetail, there.

The kicker is that tech workers are in a FAR better position than the other groups that are pushing or considering a general strike. I suppose that makes the prospect more viable, but also more dangerous to the stability of the overall economy. I guess it's up to you if a shake-up now is worth stopping or forestalling the rising waters.


This. Tech workers need to organize. Unfortunately you have to convince tech workers that moral duty to society is better than their comfortable high-salary lifestyles, and good luck with that.


This is a non-starter for achieving the end result.

There's enough of a delta, in both the money paid for targeted online advertising and in the better results it yields for advertisers, for them to fund a very rapid turnover.

There's enough money in it that if they must, they'll be able and willing to buy their way past any unionization issues.


Here's a radical idea. Embrace that technology will bring everything to light and essentially destroy privacy. Instead, let's ensure the powerful people in our society also have no privacy. Let's ensure we can watch the watchers with more eyes than they watch us with.


A person with a private security army, a 40 acre compound and a personal helicopter has very little to fear from the general populace knowing things. They never have to worry about the price of insurance, or getting a job, or securing a loan vs being homeless.

This does not solve the problem, not even close.


They might have to worry about the aspiring plans of that security army. It's always something.


Doesn't help at all.

Privacy of varying levels is and has been a functional requirement for smooth working of free society.

Economic disparities make the impact of lower levels of overall societal privacy have a disproportionate negative impact to those on the lower end of the scale.


IDK, my go-to strategy for now is to support products that respect my privacy and recommend them to friends and family. My friends and family don't know this is going on or that alternatives exist, so I'm their reference for what is and isn't good.

I refuse to work for companies I don't agree with, which hurts me financially. I will never work at a FAANG company, for example, or at most of the other heavyweights that are functionally similar.

For things without a clearly superior alternative, I have a list of business opportunities. For example, smart devices are becoming popular, and they're horrendous for privacy. That means there's a market for devices that don't spy on you, and the open source options are inconvenient enough that a packaged deal is attractive, even if it could be DIY. I think there's room for a Ring competitor that is E2E encrypted, for instance, provided the app is well designed and the device is unobtrusive. Privacy-respecting services and devices are unlikely to overtake the alternatives, but merely existing puts pressure on the major players to act better.


Thanks for taking a stand against FAANG, I know my wallet is hurting for doing the same.


Working on a Kubernetes based clone of Instagram.

My plan is to deploy it and a VPN tunnel and give certain folks access to keep in touch. I'll have instructions for self-hosting and VPN key creation/sharing (WireGuard ftw).

There’s absolutely no reason to bother with cloud services. They’re nothing but big corps co-opting our problem solving.

It always comes down to gatekeeping, but no one has the guts to gatekeep a rich douche, whose money to buy security goes away as soon as we do.


My biggest challenge is communicating with non-technical family and friends. Teaching them Whatsapp was a challenge, but things like IG and FB are simple to install and log in to. I've never been able to quit them without suddenly being isolated.


I never jumped on the “using social media means I am connected” bandwagon.

Still just using email. It’s web scale, and just needs UX love.

But really, even that is overkill. Self-hosting is too easy and cheap for me to justify cloud services' privacy costs, or the general habit of externalizing every aspect of utilitarian life.

I’m not talking webscale loads. And it could be a hub for IOT. My data streams are not Google scale. But don’t take your eye off them sticks & options. Ooo shiny

Share my data with my doctor directly over local area WiFi, by making it adhere to a specific format. No data middle men needed.

For a culture constantly climbing up its own ass about austerity in economics, we sure enjoy selling ridiculously uneconomical means of communication.

It’s almost as if it’s a purpose built emotional response but who could believe so many people would fall for an emotional mass delusion?


This is an awesome project, please post this to HN if you open source it (please do!)


Create enough noisy data that the data these companies receive is so unreliable that they give up. Have a Chrome extension that runs and clicks on things randomly, or a separate app that controls the mouse and feeds bad data. If it ran on every computer, maybe Google AdWords would become useless.
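
A rough sketch of the noise idea, in Python rather than a browser extension: it only fabricates the event stream a tracker might see, and the category list and field names are invented. Actually driving clicks or mouse movement is deliberately left out:

```python
import random

# invented interest buckets a profiler might sort users into
CATEGORIES = ["sports", "cooking", "finance", "travel",
              "gaming", "gardening", "cars", "fashion"]

def decoy_events(n, seed=None):
    """Produce n fake 'interest' events so a tracker's profile
    becomes uniformly noisy instead of reflecting real behavior."""
    rng = random.Random(seed)
    return [{"category": rng.choice(CATEGORIES),
             "dwell_seconds": rng.randint(5, 120)} for _ in range(n)]

events = decoy_events(1000, seed=42)
print(len(events))  # 1000
# With this many events, every category appears at comparable rates,
# so the profile built from the stream carries no usable signal.
print(len({e["category"] for e in events}) == len(CATEGORIES))  # True
```

Even this toy version shows the principle: the defense is not hiding data but drowning the real signal in uniform noise.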


> "just to make somebody with a limited lifespan feel powerful and rich."

> How can we reverse course and throw a wrench in the system?

We start by taking the guillotines down to Sand Hill Road.


It's not their fault they're exploiting a system that allows what they are doing.

They're not good people, but if it weren't this set of people, another would take their place. The world is rife with opportunity for people of low morals.

It is society's fault that we have not explicitly codified what will not be allowed and constructed the right laws and enforcement to ensure that violators are ruinously punished.


Good grief. Many of the people in privileged positions to complain about this stuff got to those positions through Sand Hill Road.

Telling some poor person that they have to pay $10 per month to use Google or send Facebook messages isn't democracy. Don't want surveillance? Don't use the products that employ it. But some people, especially those without means, might be perfectly fine trading privacy for a "free" service. Destroying the ad industry is elitist and tone-deaf; ads are imperfect, but they have enabled people to do things that would have been impossible or unaffordable 20 years ago.

This pitchforks and guillotines talk is ridiculous. Build something better if you don’t like the way things are.


> Don’t want surveillance? Don’t use the products that employ it

That’s as simple as saying “don’t like crime? Don’t be near criminals.”

Data’s being stolen and we’re being watched whether we like it or not. Only sometimes can we easily opt out and have those decisions respected.


Do you use ad blockers, VPN, etc?


Sorry to break it to you but most of this "we" already sold out. Much like the corporate hippy boomers.


Why legislate it? Why not make a thing that doesn’t need advertising and charge people money for it? The way to “end” an industry you don’t like is democratic: using dollars as votes.

The idea of using government to crush an industry is a bit totalitarian. If “the people” agree with you, they should be happy to pay you for your product. If they don’t agree, then there isn’t anything democratic about using the government to shut down an industry you don’t like; that’s not democracy, that’s fascism.


On the bright side, as biotech advances eventually you’re gonna get rich powerful immortal bosses.


Oh yes, the best/worst case scenario all at once.


Only best. Aged meat tastes better.


hello Altered Carbon


I feel 100% the same. I also did a top uni for years; it was super stressful but I eventually managed. Now I've just started a job at a tech company similar to Wacom. I thought, okay, we all know how unethical Google, FB & friends are, but a company selling only hardware accessories, c'mon, no way it could be too evil... Well, that Wacom story is cute in comparison to what they do. Ofc they also log all apps used, like Wacom does, but they are also implementing systems to analyse whether you have health issues based on your peripheral logs, analyse your emotions and stress level via microphones & cameras, facial recognition, eye gaze, etc.

I love technology and computer science but tech is so screwed up in terms of ethics.

I wish we'd see more people coming together who care about this (like truly care, not the #Tethics of Silicon Valley) to make some open and private alternatives to all this toxicity. But it is super hard to make things change.

I'll work in that direction in my free time, but I feel so alone. HN seems the only place people care a bit about this. Around me at uni or at work, the level of ethics and care for privacy is so low, it's depressing. It's not only that "rich boss" telling his employees to exploit people's data; it's also engineers themselves being happy collaborators in this, because they make huge salaries working for those companies.


Don't forget your employer selling your paycheck records, FFS.


The Work Number's entire sales pitch is... "We'll accept the data from you for free. You can give out our number and website to anyone who calls into your HR looking for an employment reference, employment verification, etc. We'll field those calls for you."

They don't pay for the paystub data. The employers give it to them.

Although it's an invasion of privacy, to be sure, it actually does have some benefits for the employee.

In places outside San Fran, where people actually get conforming mortgages, having your data in The Work Number's database automates and cuts out the employment & income verification, so you don't have to track down records and submit them manually, and you can potentially skip multiple must-connect phone calls between the lender and employer.


I don't think we have to quit our day jobs to change the world. What we need to do is (1) change the expectations of the public via conversations, activism, migrating away from FB/Google/Microsoft products, etc. and (2) make it easier for people to prioritize privacy by building open-source apps and hardware.

Inspiring examples that I use daily include Linux, git, and Bitwarden.


The general public is too uninterested for it to be possible to convince the majority to change the way they make decisions about software and hardware.

Legislation is the only effective course.


I'm not sure how you can propose legislation without the majority approval.

A functioning democracy legislates according to the will of the people. That means you HAVE to convince the majority first.

In a broken democracy, you still need to convince the powerful (although this might actually be easier). But you still have to convince them to apply the same standard to those without power, which is likely a hard sell.


I'm convinced the parents of the US, and in turn broad bipartisan support, could get behind some sane legislation on this stuff if a massive dump like the New York Times one were deanonymized and focused on kids.


Is Wacom's driver an example of excessive monitoring?

While the author presents the graphics tablet as a glorified mouse, tablets usually offer many more features. How those features interact with various applications is important, and they have to prioritize which applications they support. The data collection that the author describes may be viewed, internally, as part of that process.

Now I am not claiming that Wacom is doing the right thing, nor am I claiming that they are doing the right thing in the wrong way. Yet it is entirely possible that they feel justified in collecting that data for product development without having ulterior motives. Their failure may simply lie in not recognizing that many people are sensitive to data collection due to real, potential, or perceived abuses by other parties.


It’s 2020. We’re starting to develop a pretty solid framework around what is respectful to users and what isn’t, how to disclose things, ask for permission, etc. (All knowledge that has been acquired the hard way - through data breaches, users getting their personal data stolen because of a lazy software company, etc)

Wacom is a $500M company. They don’t get the benefit of the doubt.


Starting? It's been known for many years, it's just that $$$ talks louder.


It's just one more example, but one that is rather unexpected. It forces me to ask why, and whether we are so far down that road that this level of monitoring/surveillance has become the new normal and an expected outcome of production cycles. However, I think all the gaming companies have much more intrusive monitoring to prevent cheating; NVidia might need to do the same for their GeForce Experience and per-game customization, not forgetting creepy antivirus software like Avast, etc.


I have been a Wacom customer for years.

I'm not a graphic artist, but I hate mouse cords and hate having to recharge mice or deal with batteries.

So a series of Wacom "puck" mice on one (over the years, several) of their digitizer tablets has been my mouse substitute at my desktop. I bought the high-end ones, on average every 3 to 4 years.

They stopped making the puck several years ago. Mine was starting to wear out, so I finally made the leap to Logitech's G703 and the Logitech G PowerPlay inductive mat. So same benefit -- the mouse is just magically always charged.

If I hadn't already switched, I would have anyway after the Wacom selling data thing...


Or the failure may be on your end, believing that "many people are sensitive to data collection" while in fact, most people don't give a dusty fuck about it and happily share everything for saving a few bucks a month.

Hackernews is NOT the people. HN represents a TINY TINY fraction of users.


That's why we should give a fuck on their behalf. In a techno-capital society, it's too much to expect normal people to have to know the technical details of all of these things.

I don't know anything about water treatment or nuclear power, but I still expect the people working in those industries to be held to extremely high standards of competence, virtue and accountability.

We should have the same standards. We don't, so instead we need to demand regulations for these monsters.


Civilized societies don't tolerate "vampires" and cannibals walking among them (or lording from on high). They eliminate them. Eventually the people will wake up.


You're saying this group of "most people" knows what's being sent, that they're exercising informed consent? Surely you aren't hanging this argument on "common sense."


The sad thing is, they don't know, and they don't care. Like at all.


They don't care. Until you can show them you know how often they're on Grindr and where their tricks live.

Or that they got a prescription filled. For Valtrex.

What would be helpful -- but that I am adamantly against -- would be tons of data drops, in communities across the nation, of local church leaders and local community leaders.


It's true, until you show them how much data is collected and who is buying and selling the information without their consent. Then a significant portion start to recognize the threat. There's a reason none of these companies mention it.


One thing I've noticed about far too many people in tech is that they all seem to believe that those not in tech are stupid.

Many of them are not stupid. On average, half of them are above average. They're just uninformed and busy with their own lives.


You are saying that most people don't? Any polls or data supporting that assertion? Bring data to the discussion.

Here’s some: https://www.pewresearch.org/fact-tank/2018/03/27/americans-c...

91% of Americans feel that they have lost control of their personal data and privacy. The logical conclusion is that at least that many understand what they have gotten themselves into, which would indicate that a majority of people are exercising informed consent: despite the vast majority of Americans feeling that way, they continue to use the gamut of products and services.


Well put, I’m ashamed at the current state of our industry.


I wanted to add on the list of privacy invading examples. It's better I make the counter claim.

The data collected has massive potential to improve medical research. Being able to validate database wide experiments on hundreds of millions of people at once is pretty incredible. There's likely to be a decade of insights to be found in this rapidly filling digital ocean of information.

In several years the clamour to get off the known web will empower a lot of security apps (not "privacy" apps, which are the opposite of what their name suggests) that are growing behind the scenes.


I recently encountered a company that uses ad-tech pioneered identification techniques to snake around patient privacy laws and clinical trial anonymity to sell re-identified data back to insurance companies.

So no, fuck that.


Seems like folks should get to choose what experiments they participate in? Medical ethics 101.


You have a valid point. I did some Deep Learning-based medical research as well as analytics on top of FHIR, and having more data is absolutely essential for e.g. finding correlations/covariate conditions/risk factors etc. However, there are massive privacy risks, some of them economically ruinous or literally life-threatening.


I'm only one voice, but for my part, I am willing to forego the advancements that may come to medicine via this route.

The societal costs of surveillance capitalism are only just starting to appear, and it's going to get so much worse before it gets better.

And it's not all bad, but there's no preserving the little bit of good without canning the tons of bad.


Poisoned tech anecdote: I recently noticed that in one of my favorite bars the coffee grinder has a screensaver.

Considering what should and shouldn't be done is much less popular than finding ways to do it.


I think one of the great signals of good ethical behavior is the person who'll refuse to build a thing that they know should not exist, even when offered a great reward if they'll do so.


s/limited lifespan feel powerful and rich/limited lifespan powerful and rich/

It's a whole attitude. They're aware of their limited lifespan and intend to buy their way into more and better lifespan (if possible), but in any event to become actually powerful and rich.

At least on a certain scale, they're not wrong. It does work.

This is not to say, however, that they're not slugs deserving of a good salting.


> It looks like everything in tech got poisoned...

> ...building a toxic monstrosity this industry is becoming just to make somebody with a limited lifespan feel powerful and rich.

That's what you get when you combine shareholder-focused capitalism, the attitude that everything's OK as long as it's legal, and the attitude that we should avoid making things illegal through regulation (so as not to stifle innovation such as this).

IMHO, all of those ideas should be rejected or greatly curtailed because of the poisonous tree that's grown from them (which bears delicious poison fruit).


you're getting downvoted just for pointing out why the problem exists.

The discussions around these issues always follow the same pattern. It reminds me of a dialogue I recently saw posted somewhere, where an Amish person and a non-Amish person talked about technology. The Amish person asked the other one, "do you think having the television on is good for you and your family?" and the other person responded, "no, but we don't want to get rid of it because it may be useful". The Amish person replied, "that's the difference between us: if something is bad for our family, we throw it out."

The discussions around tech are the exact same. We all agree the modern internet is screwed, large companies put ads into everything, we're getting screwed over, non-profit domain spaces are being sold, everyone's unhappy, and we do .... nothing. Because of 'innovation' or some other conjured up fantasy term.


> everyone's unhappy, and we do .... nothing. Because of 'innovation' or some other conjured up fantasy term.

The only reason "we" do nothing is that "we" have no agency. Other reasons are just rationalization to cope with our powerlessness: we pretend ideological debates among the people decide the fate of the country. The only vote "we" have is voting with our wallet, which only works in a truly free market.


How much of this is feature creep when teams feel obliged to keep adding more and more to justify their jobs? Let's say you work for Samsung in their TV software division. Other than incremental upgrades in screen technology coming through for you to accommodate, what else can you keep pushing through the pipeline to impress your managers? Maybe smart TV features are the low hanging fruit for these employees? I mean, I dislike that functionality, but maybe the pushback from the public isn't strong enough?


It's not to impress managers, it's to make more money. And most tech employees just don't care enough (or even have enough of a moral compass) to protest. Paycheck first, ethics second.


The real kicker, though, is the scale at which software allows that combination of ideas to affect people.


You are implying that capitalism is the problem? What does privacy look like in non-capitalist places? What was the privacy like in East Germany? Or China? Or Cuba? Capitalism isn’t the problem, it’s the solution: build things people want and the market rewards it.


> You are implying that capitalism is the problem?

Certain kinds of capitalism, yes. There's not just one kind, and the kind we have now isn't the best kind. Perhaps we should try to discover a better one?

> What does privacy look like in non-capitalist places ... like ... China ...?

China is very capitalist now, if you weren't aware.

In any case, the main problem in those countries (at least with regards to privacy) was authoritarianism, not non-capitalism.


If only there were a system somewhere between the United States and East Germany. Sheesh.


It's important to also acknowledge the crucial role culture plays in this problem, regardless of the political system it cradles.

The public is simply ignorant about surveillance technology issues. Not that long ago we used to tolerate sawdust in our bread[1], and that's food, something humans should be pretty knowledgeable about. People would revolt if this happened now, whether they live under a capitalist or communist system. A free market might accelerate the transition, but education about the issue is still the underlying factor of change.

[1]: http://www.theoldfoodie.com/2011/03/sawdust-bread.html

