On Linux you'd be using the community-maintained drivers[0], so I assume this wouldn't be a problem (correct me if I'm wrong).
But as far as I can tell, there are no equivalent Open Wacom drivers for Windows. People with more Windows knowledge than me: any thoughts on why? Is it just that someone using Windows probably doesn't care about Open drivers, so the demand isn't there? Or is there something about Windows that makes substituting drivers harder?
Wacom doesn't provide their own Linux drivers, but looking at the state of drivers for GPUs and printers, I vaguely suspect that somebody in the Linux world would be working on Open alternatives even if they did. I'm trying to think off the top of my head what Windows-compatible hardware has 3rd-party driver options. Maybe some printers?
Now, if only someone could make Wacom drivers that make the Wacom touch functionality interface better with Photoshop and friends, rather than just handing keyboard shortcuts to them. Somehow, that's the best Wacom has managed to do, which makes those features effectively unusable in the very apps those tablets are most commonly used with...
Open source drivers are rare on Windows because manufacturers almost always ship proprietary drivers that are good enough, and Windows users clearly have no issues running closed-source software.
Proprietary drivers on Linux are often crap, if they even exist at all.
Linux purposely makes proprietary drivers crap. The kernel offers no stable binary interface, so drivers break every single time Linux updates unless they are compiled as part of the kernel. This forces manufacturers to either open source their drivers, not provide any at all, or put in the work to keep them up to date with every release.
It seems like forcing that all-or-nothing choice made a lot of OEMs open source their drivers or provide none, which led to the community making them.
No, it doesn't: the devs refuse to provide stable in-kernel APIs because they want the flexibility to modify them as they please when a better solution comes along. Maintaining support for proprietary drivers is also harder because they are black boxes, not only in terms of debugging, but also in terms of security and stability.
NVidia is basically the one major holdout these days, and its proprietary driver for Linux is very good, so it's not as if it's impossible to maintain a proprietary driver in the Linux ecosystem. The motivation here comes from Linux being huge in accelerated computing and 3d, not due to any particular love for Linux on Nvidia's part.
Indeed, the lack of a stable interface has made it cumbersome to maintain an out-of-tree driver, which is GREAT, since it means hardware vendors are more likely to open source their drivers or at least provide enough documentation for them to be created by a third party. This ends up being a huge part of Linux's success: it supports the widest range of hardware of any system 'out of the box', and that hardware support is then functional on any platform Linux runs on, which in turn is practically everything under the sun.
And if that weren't enough, it is also a boon for alternative systems which will never see official proprietary drivers due to being niche, as they can port Linux drivers, or even add Linux driver compatibility layers.
As the situation on windows shows us, the alternative is drivers that are crap for other reasons. If Linux offered a stable binary interface for drivers, we'd have proprietary drivers that "worked" but were nevertheless still crap insofar as they were essentially malware, as is the case with this wacom driver.
Probably because desktop users expect their hardware to continue to work for a long time and to keep up with OS updates. For mobile people have been conditioned to accept throwing away and buying a new device every 2 years.
> Device managers don't care about Linux anyway, and wouldn't suddenly start caring if Linux announced a stable ABI.
From what I've seen, facilitating proprietary drivers seems to be the motivation of most people lamenting the lack of a stable ABI. An example of this is the comment I responded to: "Linux purposely makes proprietary drivers crap. [...]"
Discounting proprietary drivers under the assumption that they wouldn't be written anyway, what does a stable ABI afford us? Out-of-tree FOSS drivers? In other words, drivers that aren't good enough to be accepted into the kernel?
Out-of-tree FOSS drivers are still not affected much, since most of the time you can just recompile them to work with the latest kernel.
Also, when I said Linux made proprietary drivers crap, I meant that as a good thing. It led to open source drivers where there otherwise would not have been any. Some OEMs like AMD eventually went open source on Linux while remaining proprietary on Windows.
There is also the little matter of Windows (since 7 I think?) requiring kernel drivers to be code signed, unless you want to run your system with a permanent "Development mode" text overlay, not to mention the arcane procedure required to activate that in the first place. (You can't add another cert to the trusted set, either.)
So that puts a little damper on the whole "open source" thing. Of course it is also not effective at all, Stuxnet was famously signed by Realtek.
The process to get a driver signed doesn't seem too hard for an open source project to do. Biggest hurdle is the certificate costing around $300/year as far as I can tell, so it would need to be a project with a reliable stream of donations or an author/s willing to pay it.
Not too long ago I had to do some INF editing to get a driver installed on Win10, and the editing did invalidate the signature, so it (silently!) refused to install. But booting with the "disable driver signature enforcement" option made it install, and it continued to load and work normally even after I booted back into normal mode. This was only a few months ago, so unless something drastic has changed since then, maybe it's not that hard to install drivers with bad (or missing, though that's really the same thing if an arbitrary signature would do) signatures. I thought I'd be out of luck and have to resort to something deeper and less reliable like kernel patching (tools exist to do that, but they get flagged as malware, and you have to redo it after every update...), but this was a pleasant surprise.
Editing the INF de-authenticates the installation of the driver, which can also be bypassed by adding to the Trusted Publisher root store, which is mutable (as Zadig/libwdi does), but the actual kernel-mode .sys binary still needs to be signed by Authenticode unless the system is in driver Developer Mode. Your method worked for installing a modified INF file, but will not work for installing a modified binary.
Sounds like my Linux experience in the late 1990s: lots of weird invocations without understanding what they do, just to keep the system barely functional. The roles sure have reversed...
As for manually forcing a particular signed binary for a specific device, the “Have disk...” or “manually select from” route still works without that developer mode nonsense.
That’s trivially easy to get around by installing your own CA cert when you install the driver.
This is arguably worse security wise but it makes the driver install process identical to the way it used to be as far as the average consumer can tell. This is why (IMO) free software is so important, to the point where I’ve begun to agree with the radicals and think it should be mandatory.
There are two separate authentication processes for drivers on Windows: Authenticode, which is used for the kernel-mode driver (.sys) and is strictly enforced, and driver package signing (.cat/.inf installation packages), which has a mutable root store called the Trusted Publisher system store. Zadig works by adding its own root certificate to the Trusted Publisher system store and self-signing the installation packages, but the three possible installed drivers (WinUSB, libusb0, and libusbK) were all still Authenticode-signed.
Yes, it is its own list for software publisher signing specifically, and is separate from the Trusted Root Certification Authorities certificate store.
Windows users should push for FOSS drivers as well, even when the proprietary ones run perfectly. Privacy and security issues aside, being forced to depend on a closed driver means that the manufacturer can make the product obsolete just by dropping support on newer Windows versions, turning the product essentially into trash and forcing the user to buy a new one.
The vast majority of Windows users don't really care about that though.
In the 90s, software modems ("winmodems", see [0]) were popular because they were cheaper than using dedicated hardware for generating and decoding the audio signals sent over the phone line. Those would break if the manufacturer didn't upgrade their driver for newer versions of Windows, since they're completely software driven.
I'd be very surprised if things have changed since then, and I bet that the majority of consumers would just pick the cheapest option at the big box store.
I remember getting PCI based Winmodems working on my old machines back in the late 90s/early 2000s. A lot of people bought ISA modems or physical COM port models so they wouldn't have to deal with that bullshit.
Now most Linux distributions are littered with binary blobs in linux-firmware that have to be loaded for everything from Wi-Fi to Bluetooth. We've gone the total opposite direction of where we should be .. except for like .. amdgpu.
Winmodems were not popular with anybody except manufacturers, and exponentially increased the support issues with modems as an entire technology. Read: people had more problems with Winmodems than they did before.
Not true in all locales. In 2001 I bought a computer locally (Haifa, Israel) with a winmodem. I installed Red Hat on it, so obviously the winmodem would not work. I could not find a hardware modem locally. The only solution that I had was to go with an ADSL line (250k up, 750k down) which cost a fortune but whose modem would plug into the network card.
It would be a full two years before I would see any other home users on anything other than dialup. 750k down in 2001 was so impressive, you could start listening to songs on Kazaa as soon as the download started!
The windows approach to driver certification makes this really difficult. Microsoft virtually requires every driver maintainer to pay $100-odd every couple of years for a signing certificate (and ironically enough this is done in the name of improving driver quality). For a corporation that's nothing, but it's a pretty steep ask for the hobbyist maintainers that OSS drivers tend to rely on.
>Proprietary drivers on Linux are often crap, if they even exist at all.
Not so with Nvidia GPUs. The open drivers are awful; the proprietary drivers are good.
(But it IS the case with AMD GPUs, to the point where the proprietary driver seems to perform worse[0] and everyone pretends it doesn't even exist, which is upside-down and unintuitive coming from a) Windows or b) Nvidia.)
I have to use their proprietary drivers and I beg to differ. Nvidia drivers are still crap, given all the pain you have to endure to get them running. Yes, Nvidia has managed to sneak into the Linux world through CUDA, but as far as ease of use goes, they are still nothing short of crap. Not to mention if you want to use anything other than Ubuntu.
The proprietary drivers are not "good". They don't support Wayland which has been the default on many distros for years. They also don't support prerelease or custom kernels. I had to build a custom kernel to include some patches for new hardware I just got and found out it was impossible for me to use the nvidia drivers on it. I ended up getting an AMD card because of that.
As a developer however, nvidia's closed source drivers are buggy as hell. The amount of issues and times they break the spec is astounding and a constant annoyance. AMD and Intel via the open Mesa drivers are blissful in comparison, plus the amazing debuggability.
I would argue that Nouveau is bad purely because of poor performance, while the proprietary drivers are tolerable: they perform well once you have them working.
One thing I can say for Nouveau over the proprietary drivers is that they actually work without any real fuss. I've run into numerous instances where the proprietary drivers would prevent the system from booting. And I've yet to get them to work at all with any realtime kernel in Manjaro.
And then we get into the nightmare that is any laptop with an integrated Intel GPU and a dedicated Nvidia GPU...
>I would argue that Nouveau is bad purely because of poor performance
That and, if you have a G-SYNC monitor (which, in retrospect, you shouldn't, but I and several friends of mine do), it won't work at all with the Nouveau driver. :D
Ever since the Windows multi-platform tablet push, Windows has had built-in drivers for touch & pen support. For the purpose of drawing, these drivers are actively antagonistic, as they have weird touch macros that make fine detail virtually impossible. And since they are built into Windows, these drivers are also a pain to override with third-party drivers.
Add that to the complications that already arise from interfacing a 3D (touch sensitivity) precision input device with a computer and you end up with poor official driver support, and even worse community driver support
What is it called, and how do you turn it off? I haven't run into this issue on Surface devices, but I do have a couple of Wacom tablets too (mainly used on Linux on another machine).
Clearly this isn't a trivial or very fun job (last time I used them, the Linux drivers were buggy as hell). Who would have the motivation to do it on Windows, where you have a driver that works and users have little expectation of privacy to begin with?
The Linux drivers for Wacom tablets have never been buggy in the decade and a half I’ve used them. Other parts of the stack (Xinput, libinput, GUI libraries) have been, but the actual driver provides good data to userspace.
Even then, to me the drivers on Linux have been perpetually less buggy. On Windows I found myself needing to restart the usermode service and restart applications frequently, especially if the USB connection was unreliable. The Linux driver did not have similar issues.
LinuxWacom doesn’t provide a configuration GUI that I am aware of, though. The xf86 input driver has knobs you can tweak but as far as I am aware the only official way to do it is CLI.
If you are talking about GNOME’s Wacom settings, then I can understand the confusion: under Windows this would be part of the driver package, but under Linux this bit just happens to be completely unrelated and maintained by GNOME. I realize this does not matter much to the end user but it kind of matters in the context of this discussion; the bugs aren’t inherent, they are probably mostly a result of how the software ecosystem works on Linux...
1. Did you reply to the wrong comment? I was only remarking on the Linux driver not being buggy.
2. If you’re referring to a Windows open source Wacom driver, one already exists, as mentioned elsewhere in the thread, though it has a pretty specific purpose in mind. https://github.com/hawku/TabletDriver
the comment you are replying to is talking about the motivation to make a driver.
I assumed your comment was also about that, I didn't expect that you were ignoring the context of the conversation and just commenting about whether the current driver was buggy or not. Sorry.
>users have little expectation of privacy to begin with?
The cynical side of me wonders how long it will be before prosecutors argue, with a straight face, that using evidence obtained from mass surveillance against people using Microsoft Windows is okay because Microsoft collects a massive amount of data so nobody should expect their files and activities to remain private; that there is or should be no expectation of privacy on such a system.
And then how long until warrant applications come in with supporting evidence that the subject of the warrant uses Linux, and therefore their increased desire for privacy is prima facie evidence that they're doing something illegal.
The first part is long since true. Courts have already (wrongly) established that you have no expectation of privacy in anything you entrust to a third party agent like a bank or a cloud service. If you give the provider a key to your stuff, they can give it to law enforcement.
The latter is a tougher desecration of the constitution to sell.
Yes, it doesn't apply to Linux. Also, on Linux the libinput and synaptics drivers support Wacom tablets, so you don't necessarily even need to install linuxwacom (in my config I do "xinput set-prop ..." for both cases to set up the pressure curve).
... if you install and configure popularity-contest, which includes an explicit opt-in process [0], and even then it doesn't track usage, merely installation.
It's a tiny bit more than that: popcon reports weekly which packages have been used that week, based on their atime. The atime, ctime, and filename are reported (the times are truncated to a multiple of twelve hours).
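If you're curious what that truncation amounts to in practice, here is a minimal sketch (TypeScript on Node, purely illustrative; this is not popcon's actual code):

```typescript
// Illustrative only (not popcon's actual code): read a file's atime/ctime and
// truncate them to a multiple of twelve hours, the granularity described above.
import { statSync } from "fs";

const TWELVE_HOURS_MS = 12 * 60 * 60 * 1000;
const truncate = (ms: number) => Math.floor(ms / TWELVE_HOURS_MS) * TWELVE_HOURS_MS;

function reportEntry(path: string): { file: string; atime: Date; ctime: Date } {
  const st = statSync(path);
  return {
    file: path,
    atime: new Date(truncate(st.atimeMs)),
    ctime: new Date(truncate(st.ctimeMs)),
  };
}

console.log(reportEntry("/usr/bin/ls"));
```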
One of the first things Debian does is ask consent about this, and the FAQs are clearly published: https://popcon.debian.org/FAQ. You can't say the same about most things.
The sad bit is that (some reasonable) telemetry data is really, really useful for software engineering. If you have a large enough program, it will of course have way more bugs than you could ever fix. Crash tracking and usage analytics are how you make a data-driven decision on what to fix and what to ignore. This enables a data-driven approach to software quality that's a huge improvement.
Having worked on projects that did and did not have telemetry, working without it feels absurd - it seems like you're just randomly fixing the side mirror on a car without any idea of what's actually broken on it (independently of your overall testing posture).
Vendors tracking excessive information without proper disclosure destroy this information source for those developers that try to collect reasonable information (with consent, disclosure, in context, etc).
Really great article.
But I wonder why the author did not cover whether (and what) the driver publishes if you do the following:
- open "Wacom Desktop Center"
- Top right (next to "Login") is "More" (click!)
- "Data Privacy Settings" (click!)
- "Participate In Wacom Experience Program" => on => off!
My setting was "On" - and I swear: whenever a program/website/installer asks, I say "No thankx". So it must be dark UI patterns with evil defaults that left this super-hidden thing "on" for me. Shame on you, Wacom!
When you install it, the first window they show you is a Terms of Service agreement with "Agree" and "Disagree" buttons - except it's not a Terms of Service agreement. It's an agreement to turn the program on.
So you have to click "Disagree" and continue the install if you don't want it on.
Or you could vote with your wallet and buy a Huion instead. They are just as good, if not better, and about half the price. It's all made in China anyway.
Windows art apps often need hover support since they assume a Wacom, and the general Windows interface definitely does. I guess you can simulate hover by holding a hotkey or something while touching, though.
FWIW that isn’t a unique identifier for the author, it’s for Wacom’s GA account. I didn’t see any meaningful identifier being sent. Of course the set of most opened apps and your IP are probably enough to identify you.
That said, yep, it seems lame that they don’t disclose this tracking. I can understand why they’d want to know which apps their device pairs with most often, but tracking all app opens seems aggressive - though maybe it's the only way to identify which app is open when the device is used.
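For reference, sending "app opened" events to GA from a desktop app typically goes through the old (v1) Measurement Protocol, which is just an HTTP hit carrying a tracking ID, a client ID, and an event category/action/label. A rough sketch of what such a hit can look like; all field values here are my own placeholders, not anything pulled from Wacom's driver:

```typescript
// Rough sketch of a Google Analytics (Universal Analytics / Measurement Protocol v1)
// event hit. All values below are placeholders; this is NOT Wacom's actual payload.
const hit = new URLSearchParams({
  v: "1",                                      // protocol version
  tid: "UA-XXXXXXX-Y",                         // the vendor's GA property ID (placeholder)
  cid: "00000000-0000-0000-0000-000000000000", // per-install client ID (placeholder)
  t: "event",                                  // hit type
  ec: "Application",                           // event category (hypothetical)
  ea: "Open",                                  // event action (hypothetical)
  el: "Photoshop.exe",                         // event label: the foreground app (hypothetical)
});

// Anything on the machine that can make an HTTPS request can send this.
fetch("https://www.google-analytics.com/collect", { method: "POST", body: hit });
```

The hit itself is tiny and generic; the sensitive bit is the label (which application you had in the foreground) plus whatever can be inferred from your IP and client ID.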
Tracking the currently open application software side is perfectly within scope for a drawing tablet - they often have buttons that can be bound to keyboard shortcuts, etc. It makes sense that it should know when you're focused on Photoshop vs Google Chrome.
But why are they sending this data to a server? My best guess is that it helps them see what software people are using - the popularity of graphics applications. They get to see what percentage of users use, say, Photoshop vs. [other program here], so they know where to prioritize integrations and testing.
But I'm not sure how many "integrations" or how much work with third parties Wacom does - the drawing tablets follow an API standard, after all. Maybe Wacom does work directly with application devs; I don't really know.
I doubt they're doing this to try to track individual users - even if there are ways to do it. That said, I really wish they approached this with a friendlier "Would you like to enable some basic telemetry to improve Wacom products - Yes / No" instead of a very unfriendly user agreement where they try to force it.
IMO the more simple explanation is they want this data to sell to data aggregators, who can in turn enrich the profile they have on you. There's a similar thing going on with smart TVs, right?
I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization. People hate web tracking, but knowing what browsers or devices are visiting your website can be enormously helpful.
Also, I don't know all the details here - I know that Vizio TVs were collecting data and explicitly kept the IP and other personal data with it. I don't know if Wacom is doing that.
Now that said - I don't like that they're handing this data to Google through Google Analytics. I also think they should be far more up front about what they collect, what they use it for, etc.
> I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization.
Maybe if it were only used for that it wouldn't be so bad. But I don't trust a company not to take another bite at the apple by selling customer data if they think they can get away with it. Matter of fact, refusing to do so is leaving money on the table and could get a CEO fired for not making the company as profitable as it could have been. Once companies have the data, they are almost certainly obligated to use it in ways to their benefit and your expense.
Not really, it depends on the revenue strategy. Shareholders would likely be more displeased at initiatives that could end up breaking user trust and harming core revenue (e.g. the sale of peripheral hardware). Blindly leveraging everything in a way that doesn't align with vision or strategy generally leads to disappointing returns. Smart leaders know this.
> I know this is the popular conclusion - but from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization. People hate web tracking, but knowing what browsers or devices are visiting your website can be enormously helpful.
What happened to actually communicating with users to learn more about how they use the product?
It's a confirmation bias problem. Customers who fill out surveys and give direct feedback back to vendors are frequently lower-usage enthusiasts that aren't representative of your overall userbase.
I work on a product, and we include some telemetry. I'm also a strong privacy advocate, and I believe I've done my best within the corporate realm to ensure that the data we're collecting is extremely scoped AND useful for decision making and prioritization. In my experience, there aren't that many of me, but I implore folks to realize that as PMs and engineers, we absolutely do have a say in making sure that blanket data exfiltration and aggregation doesn't happen in our products.
Communicating with your customers proactively about what you're collecting and why is important too. And not buried in some privacy policy legalese: publish a blog post, explain what you're gathering, give examples of how it's driven decision-making for you in the past, and what you're hoping to learn in the future. It goes a long way.
As a user, even if you're upfront about these things and why they're important for product development, my answer is still "No."
Because I will only say "Yes" if I'm being paid or otherwise compensated specifically for that input.
I think it's important that companies pay for usage testing so that they value that information and are more likely to hold it closely, since it represents an investment and is perceived as a competitive advantage.
That's why sane data collection is aggregate, upfront about the data it collects, avoidable, and most of all, explicitly opt-in. Alas, this requires eternal vigilance, as the pushback is neverending.
I really like the approach that the Steam survey uses which blends engineering and communication:
1) Pop up and ask for permission to scan the machine.
2) Show the data collected that will be sent back and give a second chance to decline.
3) Allow everyone to see the aggregate results.
Being mostly automated, it's lower friction than a manual Q&A survey. But it also feels way more respectful than trying to snoop around and then clandestinely exfiltrate the data. It's one of the few cases where I'm willing to opt in to data collection.
Great point! And being able to see the results in aggregate is also interesting. It inclines me to share, because it becomes a two-way share, even though I don't actually have use for the information.
This. You aren't asking for vague blanket permission, nor are you asking for the user to manually fill out a survey. And you give them the opportunity to review what they're about to send.
It's not challenging, but it's wrong. If you want to explain it to them, making a binding promise of what it would be used for, and have it be opt-in, that would be another thing.
Morally and ethically, you're absolutely correct in every single possible way.
I was attempting to illustrate the decision-making processes that may have led to this juncture and what happened to communicating with customers. Please accept my apologies, as I have plainly failed to be clear that this was not an argument as to the moral or ethical questions concerned.
Again, you're completely right. It's not at all difficult to see the morally and ethically correct way to go about this.
Users stopped communicating. They install a thing, they uninstall it. 99% don't leave any comment at all.
(and yes, the obverse inference is also true. If you see one person complaining, there are probably 99 more who have had the same issue and have said nothing)
It appears to be this community's commenters' unpopular conclusion: that every piece of tracking is part of a conspiracy to sell you advertising.
> In section 3.1 of their privacy policy, Wacom wondered if it would be OK if they sent a few bits and bobs of data from my computer to Google Analytics, “[including] aggregate usage data, technical session information and information about [my] hardware device.”
What wasn't upfront about this? That they didn't add more details about what the session information was? Legally, why would they? The post includes an image of the section where they legally disclosed it. People not reading the privacy policy before using a product is not Wacom's legal problem.
Can you ask them to put this section on a separate screen? Sure. Will they do it? Who knows. I'm sure they'd rather hear that from a customer offering ideas than from a low-priority non-customer, as would anyone.
How many blogs or websites disclose the use of Google Analytics in their privacy policy?
You could talk to many customers and this would be the last thing on their minds. The paranoia displayed by commenters here is amazing.
As the post concludes, if you are a (prospective) customer who does not like what they collect, then there are other brands - brands which, I might add, probably have a hidden, more intrusive way to track you, because they are smaller, have lower volume/margins, and have more incentive to build and sell your profile, like other small companies not in the field.
I don’t think anyone is denying that the data is valuable, but what is usually missing in the implementations (not sure if true in this case) are 1. Transparency about what is being collected, 2. Requesting consent from the user and 3. Providing control to the user in the form of an opt-in or opt-out.
Transparency, consent, and control.
If every company addressed these three issues, we wouldn’t be having this conversation about privacy and data collection over and over and over.
I fully agree that they need to address those problems. I really like the way Valve does it with their Steam Hardware Survey.
What I'm addressing is that I feel many people see a company tracking data, and assume this data is valuable enough to sell, and that the data is for sure being sold.
My point was that the data isn't just valuable to sell (maybe), but is legitimately valuable in making a better product/service.
> from a developer perspective, data such as what software your product is used with is INCREDIBLY valuable for project planning and prioritization.
I don't think anybody is disputing that. But the fact that it's very valuable to devs does not excuse collecting it without getting the user's informed consent first.
Having started numerous tech companies myself, interviewed hundreds of others who have, and started a community that instructs others on how to do so, I will say that HN's perception of how easy, profitable, and common it is to have a business model focused on selling data is overblown. I'd wager the vast majority of data collectors (e.g. Google Analytics accounts) just want it for their own internal decision-making and analysis.
But trusting companies to do the right thing is untenable. That trust has been broken far too often, by far too many companies. The only rational position a concerned user can take is to assume that anybody collecting such sensitive information (particularly in a sneaky way) intends to monetize it or use it for purposes other than product improvement.
And even if the data really will only be used for product development, getting the user's informed consent -- and refraining from data collection without it -- is critical.
Further, using GA automatically means that the data is being used for Google's purposes as well as the application creator's.
As much as I would like to believe you, I cannot. With literally almost every company around me gathering analytics and with every average person's operating system, web browser, social media, phones, televisions, smart locks, smart fridges, graphic card drivers and graphic tablets sending this data out in bulk, you are not going to convince me that the authors of all that software are siphoning data out of us all for nothing.
> you are not going to convince me that the authors of all that software are siphoning data out of us all for nothing.
That's not what they are saying at all, and suggesting it's for nothing proves you don't understand the value of using the data for internal decision-making. Simply put, the main use of this data is exactly that: internal decision-making, answering questions like:
* How are our customers using our products?
* What errors are they experiencing?
* What features are they using?
* Where are they confused?
* What features cause the most problems?
* What feature should we work on next?
These are all regular questions that are answered by collecting these types of metrics, including the one described in this post. Selling the data to third parties isn't easy: the data is generally gathered to inform product decisions, not to sell, so it isn't in a format that lends itself to selling. One has to go out of their way to sell this data, and packaging it in a way that's useful to sell would almost certainly cost more to set up and manage than they would get back from such a relatively small number of users.
The simple fact is, everyone sends data back to their servers for collecting and parsing, including Apple, the company everyone puts on a pedestal for privacy.
Simply put: show me the evidence they are selling the data to third parties for profit. Anything less is speculation.
If you have large data-sets/streams, companies like Acxiom and Innovis are interested buyers. Are you trying to imply that brokering consumer data is a marginal or non-existent business?
> Are you trying to imply that brokering consumer data is a marginal or non-existent business?
For small/mid-size datasets (which is what we're talking about), yes, that's exactly what I'm implying. It's not actually easy for most companies to sell user data for a quick buck like is being claimed.
It would be "simpler" to say, but that doesn't invalidate the other reason.
My tablet behaves differently per application. If I typically have one app open only on one screen, I can limit the tablet's "workspace".
Context-specific buttons based on app.
And if you're doing that _and have built sufficient app infrastructure around it_, as Wacom has, to support fairly custom per-app behavior, the more realistic conclusion is that they're trying to get more info on that - now you can argue about opt-in on the "share experience data" privacy setting, and I would agree, absolutely.
But "more simple to say that they're just selling data for money" is a pretty reductionist argument that jumps solely to the most negative possible motivation. "What's the worst they could be doing with it? Selling it? That's probably what they're doing, not making their tool more useful."
It doesn't really matter what their reasons are. Having the data is a liability. The author explained one scenario where these things can go wrong. Another is if they're hacked. Or if they're purchased by a larger unethical company. Or if they accidentally keep the data in an open database on an Amazon cloud service. Or a million other scenarios.
Heya - I could swear that wasn't there when I originally wrote the comment, but obviously it is there. Thanks for pointing that out. With that said, it doesn't change the substance of my comment too much - as I pointed out one can get a pretty solid unique identifier many ways, not limited to what I said above, you could even call out the presence of a permanently identifying header that Chrome gives some users[0].
Heya, replied to another comment that brought up the same thing, sorry about that - here's what I wrote there:
Heya - I could swear that wasn't there when I originally wrote the comment, but obviously it is there. Thanks for pointing that out. With that said, it doesn't change the substance of my comment too much - as I pointed out one can get a pretty solid unique identifier many ways, not limited to what I said above, you could even call out the presence of a permanently identifying header that Chrome gives some users[0].
This is just one of many reasons to use StevenBlack's Hosts [1] list to block this type of behavior. While it doesn't currently block link.wacom.com, it would have prevented the subsequent requests to Google Analytics. It works even better when paired with a PiHole [2] to protect all devices on the network.
I mean I put Pihole on all my networks but this is at best a solution to “nice malware” that doesn’t bother to hardcode addresses or perform lookups via an attacker controlled DNS server.
You can catch slightly more aggressive malware by forcing all DNS traffic to your server at the network level but you’re now playing the role of malicious network operator. I would whitelist this to only devices you own.
I don't think anyone would make the argument that a PiHole is a replacement for following best practices in computer and network security. I'm just pointing out that a PiHole can block Google Analytics and other common violators of privacy. It's not a security tool and isn't advertised as such.
Sadly, some of these lists don't currently include google-analytics.com, since some sites would otherwise break as a result. So when using one of these hosts files it's often a good idea to double check whether they include Google's domains first.
(Also sad to say that GA is so big that a lot of websites/app rely on it)
Wow, that's weird. I don't remember ever seeing one site like that. Can you point one out? I mean, GA has been blocked at my places since 2015, and I don't remember anything ever being broken, on phone or desktop.
Can't think of any specific sites, but it's happened to me a few times. It's usually because there's bad code that's waiting on the GA init function before doing anything else.
This is why some blockers like uBlock Origin stub out the Google Analytics interface.
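For context, the stub is tiny: roughly, it replaces window.ga with a no-op that still fires any hitCallback the page passed in, so code waiting on the "hit sent" callback doesn't hang. A simplified sketch, not uBlock Origin's actual neutered script:

```typescript
// Simplified sketch of stubbing out Google Analytics so pages that wait on it
// keep working when the real analytics.js is blocked. Not uBlock's actual code.
(window as any).ga = function (...args: unknown[]): void {
  // Calls like ga('send', 'event', { hitCallback: fn }) expect hitCallback to run
  // once the hit has been sent; invoke it immediately so the page doesn't stall.
  for (const arg of args) {
    if (arg && typeof arg === "object" && typeof (arg as any).hitCallback === "function") {
      (arg as any).hitCallback();
    }
  }
};
```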
I don't know enough about webdev, but why is the Google Analytics request sent by the client? Wouldn't it be easier for the webhost to send a request to Google saying "this IP with this browser connected to me requesting this content", making it impossible to block on the client side?
Microsoft has started resetting hosts files via Defender, which is really annoying; I can't seem to disable it either. Annoying when developing against a local server!
Hosts files are literally the devil. They break so much shit. Hostnames sometimes change behavior (like an ad server that starts hosting a redirect script for legitimate clicks), kids who are "good with computers" set them up on relatives' computers over the holidays and leave them unmaintained, malware uses them to block antivirus updates, etc.
If you want to block ads, fine. Use a content aware proxy or browser extension.
Using browser extensions to block ads is much higher risk than doing DNS blocking. Most ad blockers have full access to all web pages, which essentially means they could trivially scrape your usernames/passwords for your email/banks/etc or perform actions on your behalf.
There's room for this to go bad (an ad blocker dev turns bad, or sells the extension to a bad guy for a wad of cash, or the extension has security vulnerabilities, or the keys for publishing the extension are not properly secured), so while DNS-level blocking might not work as well, it's definitely not an obviously-worse solution.
(Though FWIW, PiHole in the past had some really aggressive default lists which stopped me from using it - though I set it up again recently and it's been much better; I haven't had any broken websites besides Amazon's own sponsored product links at the top of their own search results pages.)
Bravo for a really well written article. I'm interested in this kind of thing but not familiar with techniques & tools used, so it was really nice that the author included lots of detail, reasons for doing things, etc.
I work on UX, coming from an engineering background. It means that everyday I work close to product management and engineering.
The trend over the last 10 years is to collect tons of data to improve the product. Some PMs and UX researchers believe that they’ll get a magic insight from the data, and the skeptics do it anyway because it’s another data point to have. For engineering, services like GA are cheap and easy to integrate.
Nobody has bad intentions. But we are too distracted by the next product release to see the long-term consequences for society.
The reality is that some data is useful, but most of it is BS. To measure adoption and engagement you can do a pilot and then deactivate data collection. Big app errors are reported soon after a release, and you don’t need to continue collecting that for a long time.
To improve the UX you can do research with fewer data points and smaller groups. The irony: I wish I had data to prove it; my hypothesis is based on my experience. I got more actionable insights from qualitative research, self-reported metrics, or quantitative data focused on certain aspects (instead of collecting everything just in case). Sometimes having nice reports based on tons of data is more useful as an argument in corporate politics than for improving the product, but users shouldn’t have to pay the consequences of your company’s stupidity (I’m looking at you, MS telemetry ;-) )
There is a simple thing that we can do to change this trend. Ask yourself: What is the goal of collecting the data? What product hypothesis do you want to prove? Can you get insights from a small group?
If you don’t know.... hold off on your data collection desires.
For those cases the app can collect the exceptions only (as many apps and OS do).
I worked on a desktop product with this type of data collection. Usually what happens is that after a new release you may see new errors coming up, and then they start to repeat. The data collection becomes a burden; new reports of the same error type don’t give you more information.
It’s a good opportunity for good UX, e.g. pointing the user to the relevant support info to solve the problem.
For support cases you may be able to ask for diagnostics on demand. The app can collect it internally without sharing and send part of it when an exception occurs and the user accepts to send it.
I wonder what the comfortable medium between privacy and letting developers get feedback about how well their code works is. It seems to me like Wacom just wants to know if their drivers work, so they can focus engineering efforts around fixing the issues that are affecting their users. "Oh hey, the new beta of Photoshop breaks our drivers!" They don't make a "cloud product" and they have an obligation to make their hardware work with any software the user might want to use, so they are kind of painted into a weird corner here. If they collect data to drive their engineering, they're spyware. If they collect no data, they're a bug-ridden disaster area whose product you would never buy.
I am guessing that the answer will be "they should test everything in house and tell users to complain via email when shit is broken"... but we all know that synthetic QA is never going to be as good as "ground truth", and that 99% of users will just silently be unhappy. So I wonder what the privacy balance is here.
I understand the need for developers to know more about the hardware and software running on their client machines. For example, I believe information like the hardware survey from Valve [1] is very valuable for the whole industry.
But there is a kind of etiquette a company needs to follow if it wishes to collect data:
- Be straightforward. Say what information you are collecting, at what time and what for.
- Tell me in what way this information will be stored and how will it be anonymized.
- Will the data be stored forever? And is there a way for me to request the data or its deletion?
- Don't collect data per default. Make it opt-in.
- Publicize the data in a suitable way. It may be useful to others.
Wacom simply ignored all of that human decency. How can you ever trust this company again?
> I wonder what the comfortable medium between privacy and letting developers get feedback about how well their code works is.
I consider the nut of the problem to be informed consent. If you have the user's informed consent to get the feedback, then there is no problem. If you don't, then the whole operation is unacceptable.
And no, mentioning it in the privacy policy or terms of use doesn't count as "informed consent".
This would be a real challenge for some companies. Having a clear privacy policy creates a hard dependency between it and the code. And developers are notorious for not even being able to keep comments updated along with their code changes.
It's not impossible at all, just in the current state of the industry there's a good reason we have vague agreements (also including good old-fashioned laziness, of course). It'd probably need to be developed ground up as an API with side effects, so when the code is compiled it spits out some details about how it's used.
> Having a clear privacy policy creates a hard dependency between it and the code. And developers are notorious for not even being able to keep comments updated along with their code changes.
That's just a small extra step in the QA pipeline.
Also: analytics and telemetry code doesn't just appear out of the blue. Someone makes an explicit decision to scoop error logs from users, or track clicks, or spy on system configuration. That someone is usually higher up the management or technical chain, and should know enough to recognize that sending anything collected on user's machine that is not crucial (in the most strict, technical sense) to performing the action user activated has privacy implications.
So what? Informed consent is also "a real challenge" for some medical studies, does that mean we should let doctors carry out unethical studies?
I'm actually pretty sympathetic to Wacom in this instance, more sympathetic than the blogpost author at least. But unethical actions are unethical regardless of whether acting ethically is "a real challenge" for some companies.
The deep problems of “informed consent” are apparent in medical studies/treatment. Few patients are equipped to be informed because they don’t have a med school degree.
Since users ”can’t be informed” about tracking, it doesn’t make sense to discuss whether they “should be informed”.
Doubtless there are deep problems with "informed consent", but saying they "can't be informed" is nonsense. Is your plan to not bother to inform people because they "can't be informed", and decide what's best for them without their knowledge or consent?
> This would be a real challenge for some companies.
Tough. If a company can't do it the right way, they shouldn't do it at all.
> in the current state of the industry there's a good reason we have vague agreements
Well, I guess that depends on your point of view. I see no good reason for this, but I have no doubt that the various companies do see a good reason by their definitions.
You're right about the current state of the industry, but the current state of the industry is a travesty.
Yeah, 'good' was the wrong word. Maybe 'understandable' but that's still is a bit too charitable.
I was mostly musing about how changing code can have legal/business as well as technical side effects, and we've seen that to some degree with mobile app permissions, where apps just grab everything because it's seen as too much effort to do it right. So I'm curious if this is going to change for the better any time soon.
Thing that I know happens, from personal experience: you can put a giant modal alert, and write in blinking, all caps, 60pt bright-red font that you will do something unless the user presses a button, then draw a bright red arrow to the button. Users will still complain that they weren’t informed.
Users are lazy and dumb, and the most ideological users are often the laziest and/or the dumbest, because they have an agenda. They will go out of their way not to give you the benefit of the doubt (”why was the font not 80pt? Clearly, you’re trying to hide something from users on high resolution screens!”)
If your goal is to eliminate user complaints about this, justified or not, then just stop intrusive data collection entirely. Then you don't need to bother with obtaining consent.
The way Steam handles the occasional hardware survey requests, which are purely opt-in, seems appropriate here. “Please select which applications you use your Wacom with” with a permanent opt-out checkbox would be quite acceptable. (Steam knows what games we play but not beyond that; Wacom must ask us to specify which apps we ‘play’ since the OS can’t be more specific.)
FWIW, as a customer of Wacom's products they very much do not view themselves as
> [having] an obligation to make their hardware work with any software the user might want to use.
They update drivers for 4 or 5 years, then tell you to buy a new product if you expect it to work with current-gen software. Despite the fact that none of their tablets have had a substantial new feature in 20 years beyond the wireless connection kit, somehow a driver for an "Intuos Pro 4" cannot be used with a functionally identical "Intuos Pro 3".
To me the comfortable medium is 100% privacy. 0% feedback. There is no middle ground because feedback and privacy should not be conflated. Users already own the device and owe no data to their vendor.
I've started turning off analytics everywhere. I turn off reporting on Firefox, Atom, everywhere. No crash report. Nothing if I can untick the box. Windows Firewall Control or LittleSnitch for all the outbound traffic as well. I don't let most windows services talk to the Internet unless it's updates or Windows Defender.
Some stuff is going to get through, but it should just be because you missed it. I'm sorry FOSS people; everyone is collecting way too much and I don't want to give Mozilla my data either. No, not even crash reports.
I have a Wacom tablet. The drivers don’t install on macOS any more. There doesn’t seem to be any technical reason for this. It’s a USB device (“essentially a mouse”) and it worked fine for several major versions. Maybe it was a 32/64 issue.
You’d think if keeping users happy was their primary goal, they might start by keeping their existing USB drivers compiled for the current macOS.
They don’t need me to email them to tell them it’s broken under current macOS. They’re the ones who told me!
> Why does a device that is essentially a mouse need a privacy policy
I mean, crash logs, but yes -- defining question for our time
drivers shouldn't connect to the internet unless that's what they're for. crash logs should be managed by a third party thing that the user can configure
While this implementation obviously has privacy issues, the anonymized aggregate data would be quite interesting, e.g. how many people use photoshop, illustrator, etc. with their wacom tablets.
The problem then seems to be more about the false positives. If you use "Half Life 3 Test Build" that is useless info for wacom because it (presumably) doesn't care about pen input.
Q: If the data were filtered to just art/graphics apps using the pen, would that still be problematic?
Yes. When thinking about data, you need to think about orthogonal uses. Can you imagine reasons why someone might subpoena data to determine whether Photoshop was being used on my home desktop machines at a particular time? They might not care that it was Photoshop at all.
Any data collection of course has a privacy cost and should of course be opt in.
What about aggregate data limited to art apps? For example if it only sends a monthly summary: used photoshop with a wacom tablet for 15 hours this month, illustrator for 3 hours this month?
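Concretely, I'm picturing the entire monthly payload being something like this (shape completely made up, just to show how little would need to leave the machine for that use case):

```typescript
// Hypothetical aggregate-only monthly report: no timestamps, no window titles,
// just per-app usage totals for a small set of recognized art applications.
interface MonthlyUsageReport {
  month: string;                      // e.g. "2020-02"
  tabletModel: string;                // e.g. "Intuos Pro M"
  hoursByApp: Record<string, number>; // app name -> hours of active pen use
}

const report: MonthlyUsageReport = {
  month: "2020-02",
  tabletModel: "Intuos Pro M",
  hoursByApp: { Photoshop: 15, Illustrator: 3 },
};
```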
I think any attempt to exfil data not required by the function of the tool should be clearly and transparently disclosed, the use of those disclosures backed by the force of law, and opt-in. This is obviously far from where we are.
Because of that, I would block it no matter what. In a better world, I would selectively allow some instrumentation and such, but as-is, there is no way to trust any of it.
Well with the proviso you stated that informed consent has been obtained first, then this would be fine (as would more frequent/less targeted collection). If not, then this is not fine.
If that's the kind of behavioural information they want, they can pay for it just like anyone else. That's the kind of data that should absolutely not be expected to be free.
In that fictitious scenario, would they have checked with Adobe whether they minded their users being spied on? What if this information is indirectly used for trading on ADBE stock? Would that be considered OK?
The collection of the data is almost never what’s problematic. What’s problematic is the lack of transparency about what is being collected (doing it to users in secret or burying it in a privacy policy somewhere), and the lack of a way for the user to consent or not consent to it via an opt-in or opt-out. If the company provided these, then this is kind of a non-story.
In this case, the author both found the information in the installer, and the app has an "Opt out of Experience Sharing" privacy checkbox (which I would agree should be opt-in, not opt-out), so to me that covers most of this.
They have all the data from the uninformed, ambivalent or defeated already.
We develop things to crush the remaining resistance. Walled garden devices, cert pinning, signed applications, DNS over HTTPS, yes they are all more secure, but not for you. If well implemented, these serve as tools to make sure the privacy policy is the only thing informing you of collection.
I'm not perfectly okay with what you are suggesting (and that's okay of course).
But essentially, coming from a 3rd world country where censorship was the norm before the Internet came along, and seeing how TLS and DoH are giving similar states like China a headache, I have to say that I am extremely happy, but concerned.
I believe it is a regulatory problem. In essence, make collecting data punishable, and punishable personally (i.e. person X signed off on the decision to collect data, person X gets jail time).
I know that's probably not even remotely possible because employees "operate on behalf of the company", but removing that shield would effectively eliminate this. The same way dumping stock at a company means the FTC/SEC/FBI will have your ass on a platter, personally.
Saw this two nights ago installing the driver on Windows 10. Read the EULA. Did not consent, closed the window. Is that good enough?
By the way, My tablet works MUCH BETTER on Ubuntu and Mint than on Windows 10. Krita and MyPaint are cross platform so I might just do my art on a *nix box instead.
Off topic, but would you be willing to expand on your Linux experiences with this?
I'm currently doing all of my digital drawing on an old SP3 tablet running Manjaro, via Krita. The driver support is... acceptable, I guess. Krita has more than a few annoying edges, but shows a lot of potential so I've been sticking with it.
For a long time I've been considering springing for a dedicated setup with one of Wacom's larger devices, but I've held back because I need it to have completely solid Linux support and I can't figure out how to test that in advance. I'm always curious to get more info about what issues other people have seen.
I wish I could find a physical store where I could just bring in a laptop, plug it into the actual device, and draw for maybe an hour to figure out if there are any dealbreaking problems.
"Off topic, but would you be willing to expand on your Linux experiences with this?"
I'm using a Wacom Intuos pen & touch M graphics tablet, connected to a Thinkpad 430. Over the years I was using Debian Stable, MINT, and finally Ubuntu.
The experience is great. Like I mentioned, much better than windows. I only just started to use Krita (I prefer MyPaint, however I feel I should branch out). The work I do isn't special, just stupid doodles and cartoon type of stuff. The wacom I'm using is older, I think I bought it 5 years ago or so.
I don't really have much to add besides that. I remember WAYYY back in the day having to compile the driver myself for an older wacom (Ubuntu 6 or 7 era). It's practically plug and play now, however, I think there is some other apt-get stuff that I did once for some reason that I forget (eraser wasn't working?). If you are having issues maybe try another tablet. I think the one I have can be bought for $50 on ebay. Maybe try a 30 day return place like best buy and sorry to say try the latest ubuntu or mint for compatibility (have a dedicated art machine?)
> as far as I can tell anyone with the presence of mind to decline it could do so with no adverse consequences
Makes me think one should try declining these kinds of agreements to see what happens, before accepting. As someone who also has an "anti-privacy-policy-policy," I wonder how many of these kinds of things I've agreed to when it was unnecessary.
As far as I can tell the only consequence of declining it is that it pops up the “hey please let us have all this info” dialogue whenever you reboot. I’ve been doing this on my own machine for most of a year.
Might be different with the latest update, I haven’t bothered with that.
Is there an accessible way to prevent an application drawing a specific window to the screen?
I can see an app like autohotkey could click the "no" button and automatically remove it, but could you (assuming it's not modal; which it probably is) tell Windows not to show it?
The "Wacom Desktop Center" app mostly just sits there looking for updates and bugging you to enable tracking anyway; the Mac version has a menu setting that theoretically stops it from ever running, and thus bugging you to sign up to share your analytics. I just turned it off (since I just noticed it for the first time) but don't feel like rebooting to check if it actually works. (Though I did just run the little script I keep around to restart the driver, which normally brings up the WDC, and it did not show up this time. Huzzah!)
"In some ways it feels unfair to single out Wacom." - Uh no, it is completely fair to single them out and put them in the spotlight for doing this kind of tracking.
I think the statement was meant to indicate that this kind of behavior is well-nigh ubiquitous, so the only thing really different about Wacom is that they're the one we're talking about, when they are by no means the worst offender.
I'm sure that's exactly what it means -- but it's still fair to call out individual companies that engage in this misbehavior. That others are doing it as well isn't important.
It looks like everything in tech got poisoned: smart TVs taking screenshots, web apps tracking and matching user clicks, smartphones tracking locations in real time and who knows what else, desktop apps monitoring other apps and peripherals, creepy companies building profiles on everyone, health institutions selling their users' data... I want out. I didn't get into this field, keeping myself up-to-date and super capable via top universities, to be just another cog building the toxic monstrosity this industry is becoming, just to make somebody with a limited lifespan feel powerful and rich.
"just to make somebody with a limited lifespan feel powerful and rich."
Wow, that last sentence really puts things into perspective. How can we reverse course and throw a wrench in the system? We are the makers; we should be able to wrestle back control, do it democratically, and get politicians on our side to legislate this ad industry into the ground.
First step would be to kill advertising-based models. Get them banned. Because it's the advertising industry's race to the bottom that poisons everything, and fuels the creation of surveillance infrastructure. With ad-related snooping gone, it will be much easier to rein in the remaining few players who misuse data in pursuit of optimizing their business models.
(And yes, I know ads enable a lot of free content on-line. But as countless problems like this show, it's a bad tradeoff.)
There was a time, not so long ago, when advertising was effective and didn't track you - it was the advent of targeted advertising that really killed things off.
Can we go back to the days when an advert was just an image and a hyperlink? When advertisers paid by the pixel and location on the website? When JavaScript went unused except in some rare and warranted cases?
I still believe the web can be a free and open market place of ideas.
1. The uplift of targeted advertising is unbelievable until you see the actual statistics. It's like slowly sipping a cup of coffee to wake up versus waking up to snort a line of crack.
2. Advertisers were abused and defrauded by adtech, which has inspired all kinds of surveillance hellscape now that advertisers have finally caught wise and renegotiated to pay only for actual performance -- not clicks, but actually closed sales. But adtech wants to be paid if you do your research online, respond to an ad online, and then buy in store, and a whole lot of adtech now allows for that: attributing an in-store purchase with no customer interaction to a prior web session by that same party.
The benefit of those two factors to the advertisers is such that we can't have a serious discussion about this shit going away without a law which assigns criminal penalties for being a beneficiary of the scheme.
Most ads that I remember when I was new to the internet were all running flash and had really annoying circus-game-like objectives such as "Can you shoot the basketball through the hoop?!?"
I hate the means by which advertising is targeted today, but I would be lying if I said the format of the ads themselves were more annoying or less useful than the past.
Ads enabling free online content is a bit of a glazier's fallacy. Yes, when ads are an option most content producers will be funded by ads - it's simple to implement and does not require end-user consent - but as soon as the degenerate strategy is removed, new strategies become viable, such as micro-payments.
I think the potential of micro-payments was never reached because transactions are currently too large and uni-directional. If we could process payments of $0.01, it would create a culture where you are actually willing to pay for content. Also, if you make an insightful comment, a product review, or even an upvote, you should receive some credits to spend back on consuming content. It doesn't have to be blockchain; it would be effective even if limited to one site or content provider.
What we actually need in a practical sense is Kindle Unlimited but for web content.
And furthermore it could be a plurality of those kinds of providers aggregating content.
Deploy single-sign-on schemes, and websites might participate in a plurality of programs from different vendors.
But at the end of the day, you'd pay one or two "providers" a monthly sub, they pool the funds, take their cut, and do prorata distribution of the pool based on views, eyeball time, popularity of content, lots of ways.
No need to perform microtransactions from a banking perspective. You're going to eat $20 of web content this month, and so will lots of others. And then those views can participate in the pool and get paid monthly or something.
Nothing short of inquisition-style inspections of company servers, code, and networks is going to fix it on that front. We'll have to do some really radical invasions of company privacy before it results in more than fines-as-a-business-expense.
Not really, you just need to incentivise the behaviour. Pass a law to make it illegal to show ads based on tracking, with executive imprisonment for violations. Executives will then demand that their teams put controls in place to ensure that no one is using tracking to show ads. Violations of HIPAA carry some serious fines, and I've watched executives at all levels take it VERY seriously.
Wouldn't these executives and media companies run afoul of the GDPR? At least in the EU they can be fined, and may be prosecuted if they are repeat offenders.
It would be nice if all EU residents, in one go, sent requests to remove their information from all the US-based media and technology giants who, under the guise of changing the world, are really working to sell more to consumers and make themselves rich.
Self-hosted apps/DApps come to mind however they are still alpha-quality. It might take some time (~5 years) to replace 90% of expected FAANG functionality.
This is just a non starter and a pipe dream. It goes against the principles of capitalism and free enterprise.
You need to think about incentives. You have two tablets, identical in functionality and performance. But the one without the ads and surveillance features costs twenty percent more. Which one do you buy? Actually Kindle did this for a while (maybe they still do...). I ended up buying the cheaper one with the ads.
Can confirm that Kindle does in fact sell devices that are ad-supported.
For a $20 one time fee you can remove this feature (which can be done after purchase at any time). But most people won't notice the tiny option select on the amazon purchase page that defaults to "with Special Offers".
To make it even more confusing, they make it sound like the better option to pick is the ad-supported version. I mean you are choosing between the model "with special offers" or the one "without special offers"? Most people that don't know any better will leave the default "with special offers" options selected.
Yes, you're right. I know people who bought Kindle with ads just to save that 20% or so.
But that's why I wrote that advertising-based business models need to be banned. Not discouraged, not badmouthed, but banned. They're anticompetitive and poisonous; when one company starts doing it, others in the whole sector are drawn to follow suit (it's e.g. why it's hard to actually sell apps on mobile or make subscriptions for publications on-line profitable; ad-supported operations create a baseline cost of zero).
So go back to the 90s? Microsoft being the reigning king? Not sure if this time period was honestly better.
Or alternatively, give users ownership of their data like the EU has taken steps to do. Advertising is here to stay and provides its own use. But there could be something that forces transparency.
How about no free stuff on the web? Charge for services and make it illegal to sell private information without the subject’s consent and remuneration.
Making it illegal won't solve much. E.g. FB doesn't sell private information, they sell the ability to target demographics. However I understand the underlying principle which you are espousing.
The genie is out of the bottle at this point. IMHO the only method forward is how do we as a society responsibly allow for coexistence such that all parties are satisfied.
>> How about no free stuff on the web? Charge for services and make it illegal to sell private information without the subject’s consent and remuneration.
> Making it illegal won't solve much. E.g. FB doesn't sell private information, they sell the ability to target demographics.
You just need to appropriately specify what's being made illegal. You're right that FB could weasel out of a law that outlawed selling private information. They couldn't weasel out of a law that outlawed monetizing personal data without user consent.
The way to criminalize it properly is to create a presumption of criminal activity when an advertiser benefits from ad targeting. Literally anything more than the advertiser choosing what sites to appear on should be made criminal. No other criteria allowed.
Quickest thing you can do is quit your jobs en masse. At some index level of number of people * avg skill of people quitting, the industry becomes nonviable. Then you state your terms.
Sounds like a union? Also sounds like Galt's Gulch. Weird dovetail, there.
The kicker is that tech workers are in a FAR better position than the other groups that are pushing or considering a general strike. I suppose that makes the prospect more viable, but also more dangerous to the stability of the overall economy. I guess it's up to you if a shake-up now is worth stopping or forestalling the rising waters.
This. Tech workers need to organize. Unfortunately you have to convince tech workers that moral duty to society is better than their comfortable high-salary lifestyles, and good luck with that.
This is a non-starter for achieving the end result.
There's enough of a delta in both the money paid for online advertising of a target nature and in the better results that yields for the advertisers for them to fund a very rapid turn-over.
There's enough money in it that if they must, they'll be able and willing to buy their way past any unionization issues.
Here's a radical idea. Embrace that technology will bring everything to light and essentially destroy privacy. Instead, let's ensure the powerful people in our society also have no privacy. Let's ensure we can watch the watchers with more eyes than they watch us with.
A person with a private security army, a 40 acre compound and a personal helicopter has very little to fear from the general populace knowing things. They never have to worry about the price of insurance, or getting a job, or securing a loan vs being homeless.
Privacy, to varying degrees, is and has been a functional requirement for the smooth working of a free society.
Economic disparities mean that lower levels of overall societal privacy have a disproportionately negative impact on those at the lower end of the scale.
IDK, my go-to strategy for now is to support products that respect my privacy and recommend them to friends and family. My friends and family don't know this is going on or that alternatives exist, so I'm their reference for what is and isn't good.
I refuse to work for companies I don't agree with, which hurts me financially. I will never work at a FAANG company, for example, or most of the other heavyweights that are functionally similar.
For things without a clearly superior alternative, I have a list of business opportunities. For example, smart devices are becoming popular, and they're horrendous for privacy. That means there's a market for devices that don't spy on you, and the open source options are inconvenient enough that a packaged deal is attractive, even if it could be DIY. For example, I think there's room for a Ring competitor that is E2E encrypted, provided the app is well designed and the device is unobtrusive. Privacy-respecting services and devices are unlikely to overtake the alternatives, but merely existing puts pressure on the major players to act better.
My plan is to deploy it and a VPN tunnel and give certain folks access to keep in touch. I’ll have instructions for self hosting and VPN key creation/sharing (Wireguard ftw)
There’s absolutely no reason to bother with cloud services. They’re nothing but big corp coopting our problem solving.
It always comes down to gatekeeping, but no one has the guts to gatekeep a rich douche, whose money to buy security goes away as soon as we do.
My biggest challenge is communicating with non-technical family and friends. Teaching them Whatsapp was a challenge, but things like IG and FB are simple to install and log in to. I've never been able to quit them without suddenly being isolated.
I never jumped on the “using social media means I am connected” band wagon.
Still just using email. It’s web scale, and just needs UX love.
But really even that is overkill. Self-hosting is too easy and cheap for me to justify the privacy cost of cloud services, or the habit of externalizing every aspect of utilitarian life.
I’m not talking webscale loads. And it could be a hub for IOT. My data streams are not Google scale. But don’t take your eye off them sticks & options. Ooo shiny
Share my data with my doctor directly over local area WiFi, by making it adhere to a specific format. No data middle men needed.
For a culture constantly climbing up its own ass about austerity in economics, we sure enjoy selling ridiculously uneconomical means of communication.
It’s almost as if it’s a purpose built emotional response but who could believe so many people would fall for an emotional mass delusion?
Create enough noisy data so that the data these companies receive is so unreliable that they give up. Have a Chrome extension that runs and clicks on things randomly, or a separate app that controls the mouse and feeds bad data. If it ran on every computer, maybe Google AdWords would become useless.
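A rough sketch of what the "separate app" half of that could look like, in Python. The decoy query list is obviously a placeholder, and this is the same basic idea as the TrackMeNot extension: decoy traffic on irregular timing.

    import random
    import time
    import requests  # third-party: pip install requests

    # Placeholder decoys; a real tool would rotate through a much larger pool.
    DECOY_QUERIES = ["lawnmower repair", "opera tickets", "surf wax", "tax law basics"]

    while True:
        query = random.choice(DECOY_QUERIES)
        # Fire off a search you don't care about, to pollute any profile
        # being built from your traffic.
        requests.get("https://www.bing.com/search", params={"q": query},
                     headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
        time.sleep(random.uniform(60, 600))  # irregular timing looks less robotic

Whether this degrades the models enough to matter is an open question; at best it raises the cost of trusting any single signal about you.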
It's not their fault they're exploiting a system that allows what they are doing.
They're not good people, but if it weren't this set of people, another would take their place. The world is rife with opportunity for people of low morals.
It is society's fault that we have not explicitly codified what will not be allowed and constructed the right laws and enforcement to ensure that violators are ruinously punished.
Good grief. Many of the people in privileged positions to complain about this stuff got to those positions through Sand Hill road.
Telling some poor person that they have to pay $10 per month to use Google or send Facebook messages isn't democracy. Don't want surveillance? Don't use the products that employ it; but some people, especially those without means, might be perfectly fine trading privacy for a "free" service. Destroying the ad industry is elitist and tone-deaf -- ads are imperfect, but they have enabled people to do things that would have been impossible or unaffordable 20 years ago.
This pitchforks and guillotines talk is ridiculous. Build something better if you don’t like the way things are.
Why legislate it? Why not make a thing that doesn’t need advertising and charge people money for it. The way to “end” the industry you don’t like is democratically — using dollars as votes.
The idea of using government to crush an industry is a bit totalitarian -- if "the people" agree with you, they should be happy to pay you for your product. If they don't agree, then there isn't anything democratic about using a government to shut down an industry you don't like -- that's not democracy, that's fascism.
I feel exactly like you do. I also did a top uni for years; it was super stressful but I eventually managed. Now I've just got a job at a tech company similar to Wacom. I thought, okay, we all know how unethical Google, FB & friends are, but a company selling only hardware accessories - c'mon, no way it could be too evil... Well, that Wacom story is cute in comparison to what they do. Of course they also log all apps used, like Wacom does, but they are also building systems to analyse whether you have health issues from your peripheral logs, analyse your emotions and stress level from microphones & cameras, facial recognition, eye gaze, etc.
I love technology and computer science but tech is so screwed up in terms of ethics.
I wish we'd see more people coming together who care about this (like truly care, not the #Tethics of Silicon Valley) to make some open and private alternatives to all this toxicity. But it is super hard to make things change.
I'll work in that direction in my free time, but I feel so alone. HN seems like the only place where people care a bit about this. Around me, at uni or at work, the level of ethics and care for privacy is so low, it's depressing. It's not only that "rich boss" telling his employees to exploit people's data; it's also engineers themselves being happy collaborators, because they make huge salaries working for those companies.
The Work Number's entire sales pitch is... "We'll accept the data from you for free. You can give out our number and website to anyone who calls into your HR looking for an employment reference, employment verification, etc. We'll field those calls for you."
They don't pay for the paystub data. The employers give it to them.
Although it's an invasion of privacy, to be sure, it actually does have some benefits for the employee.
In places outside San Fran, where people actually get conforming mortgages, having your data in The Work Number's database automates and cuts out the employment & income verification so that you don't have to track down records and submit manually and can potentially skip multiple must-connect phone calls between the lender and employer.
I don't think we have to quit our day jobs to change the world. What we need to do is (1) change the expectations of the public via conversations, activism, migrating away from FB/Google/Microsoft products, etc. and (2) make it easier for people to prioritize privacy by building open-source apps and hardware.
Inspiring examples that I use daily include Linux, git, and Bitwarden.
The general public are too disinterested to make it possible to convince the majority to change the way they make decisions about software and hardware.
I'm not sure how you can propose legislation without the majority approval.
A functioning democracy legislates according to the will of the people. That means you HAVE to convince the majority first.
In a broken democracy, you still need to convince the powerful (although this might actually be easier). But you still have to convince them to apply the same standard to those without power, which is likely a hard sell.
I'm convinced the parents of the US, and in turn broad bipartisan support, could get behind some sane legislation on this stuff if a massive dump like the New York Times one were deanonymized and focused on kids.
Is Wacom's driver an example of excessive monitoring?
While the author presents the graphics tablet as a glorified mouse, tablets usually offer many more features. How those features interact with various applications is important, and they have to prioritize which applications they support. The data collection that the author describes may be viewed, internally, as part of that process.
Now I am not claiming that Wacom is doing the right thing, nor am I claiming that they are doing the right thing in the wrong way. Yet it is entirely possible that they feel justified in collecting that data for product development without having ulterior motives. Their failure may simply lie in failing to recognize that many people are sensitive to data collection due to real, potential, or perceived abuses by other parties.
It’s 2020. We’re starting to develop a pretty solid framework around what is respectful to users and what isn’t, how to disclose things, ask for permission, etc. (All knowledge that has been acquired the hard way - through data breaches, users getting their personal data stolen because of a lazy software company, etc)
Wacom is a $500M company. They don’t get the benefit of the doubt.
It's just one more example, but one that is rather unexpected. It forces me to ask why, and whether we are so far down that road that this level of monitoring/surveillance has become the new normal, an expected outcome of production cycles. That said, I think all the gaming companies have much more intrusive monitoring to prevent cheating, NVidia might need to do the same for their GeForce Experience and per-game customization, and let's not forget creepy antivirus software like Avast, etc.
I'm not a graphic artist, but I hate mouse cords and hate having to recharge mice or deal with batteries.
So a series of Wacom "puck" mice on one (over the years, several) of their digitizer tablets has been my mouse substitute at my desktop. I bought the high end ones. On an average of every 3 to 4 years.
They stopped making the puck several years ago. Mine was starting to wear out, so I finally made the leap to Logitech's G703 and the Logitech G PowerPlay inductive mat. So same benefit -- the mouse is just magically always charged.
If I hadn't already switched, I would have anyway after the Wacom selling data thing...
Or the failure may be on your end, believing that "many people are sensitive to data collection" while in fact, most people don't give a dusty fuck about it and happily share everything for saving a few bucks a month.
Hackernews is NOT the people. HN represents a TINY TINY fraction of users.
That's why we should give a fuck on their behalf. In a techno-capital society, it's too much to expect normal people to have to know the technical details of all of these things.
I don't know anything about water treatment or nuclear power, but I still expect the people working in those industries to be held to extremely high standards of competence, virtue and accountability.
We should have the same standards. We don't, so instead we need to demand regulations for these monsters.
Civilized societies don't tolerate "vampires" and cannibals walking among them (or lording from on high). They eliminate them. Eventually the people will wake up.
You're saying this group of "most people" knows what's being sent, that they're exercising informed consent? Surely you aren't hanging this argument on "common sense."
They don't care. Until you can show them you know how often they're on Grindr and where their tricks live.
Or that they got a prescription filled. For Valtrex.
What would be helpful -- but that I am adamantly against -- would be tons of data drops, in communities across the nation, of local church leaders and local community leaders.
It's true, until you show them how much data is collected and who is buying and selling the information without their consent. Then a significant portion start to recognize the threat. There's a reason none of these companies mention it.
91% of Americans feel that they have lost control of their personal data and privacy. The logical conclusion is that at least that many understand what they have gotten themselves into, which would suggest that a majority of people are exercising informed consent: despite feeling that way, the vast majority of Americans continue to use the gamut of products and services.
I wanted to add to the list of privacy-invading examples, but it's better if I make the counter-claim instead.
The data collected has massive potential to improve medical research. Being able to validate database wide experiments on hundreds of millions of people at once is pretty incredible. There's likely to be a decade of insights to be found in this rapidly filling digital ocean of information.
In several years the clamour to get off the known web will empower a lot of security apps (not "privacy" apps, that are the opposite of their name) that are growing behind the scenes.
I recently encountered a company that uses ad-tech pioneered identification techniques to snake around patient privacy laws and clinical trial anonymity to sell re-identified data back to insurance companies.
You have a valid point, I did some Deep Learning-based medical research as well as analytics on top of FHIR and having more data is absolutely essential for e.g. finding correlations/covariate conditions/risk factors etc. However, there are massive privacy risks, economically-ruinous or literally life-threatening.
I think one of the great signals of good ethical behavior is the person who'll refuse to build a thing that they know should not exist, even when offered a great reward if they'll do so.
s/limited lifespan feel powerful and rich/limited lifespan powerful and rich/
It's a whole attitude. They're aware of their limited lifespan and intend to buy their way into more and better lifespan (if possible), and in any event to become actually powerful and rich.
At least on a certain scale, they're not wrong. It does work.
This is not to say, however, that they're not slugs deserving of a good salting.
> It looks like everything in tech got poisoned...
> ...building a toxic monstrosity this industry is becoming just to make somebody with a limited lifespan feel powerful and rich.
That's what you get when you combine shareholder-focused capitalism, the attitude that everything's OK as long as it's legal, and the attitude that we should avoid making things illegal through regulation (so as not to stifle innovation such as this).
IMHO, all of those ideas should be rejected or greatly curtailed because of the poisonous tree that's grown from them (which bears delicious poison fruit).
You're getting downvoted just for pointing out why the problem exists.
The discussions around these issues always follow the same pattern. It reminds me of a dialogue I recently saw posted somewhere, where an Amish person and a non-Amish person talked about technology. The Amish person asked the other one, "do you think having the television on is good for you and your family?" and the other person responded, "no, but we don't want to get rid of it because it may be useful", and the Amish guy responded with "that's the difference between us: if something is bad for our family, we throw it out."
The discussions around tech are the exact same. We all agree the modern internet is screwed, large companies put ads into everything, we're getting screwed over, non-profit domain spaces are being sold, everyone's unhappy, and we do .... nothing. Because of 'innovation' or some other conjured up fantasy term.
> everyone's unhappy, and we do .... nothing. Because of 'innovation' or some other conjured up fantasy term.
The only reason "we" do nothing is that "we" have no agency. Other reasons are just rationalization to cope with our powerlessness: we pretend ideological debates among the people decide the fate of the country. The only vote "we" have is voting with our wallets, which only works in a truly free market.
How much of this is feature creep when teams feel obliged to keep adding more and more to justify their jobs? Let's say you work for Samsung in their TV software division. Other than incremental upgrades in screen technology coming through for you to accommodate, what else can you keep pushing through the pipeline to impress your managers? Maybe smart TV features are the low hanging fruit for these employees? I mean, I dislike that functionality, but maybe the pushback from the public isn't strong enough?
It's not to impress managers, it's to make more money. And most tech employees just don't care enough (or even have enough of a moral compass) to protest. Paycheck first, ethics second.
You are implying that capitalism is the problem? What does privacy look like in non-capitalist places? What was the privacy like in East Germany? Or China? Or Cuba? Capitalism isn’t the problem, it’s the solution: build things people want and the market rewards it.
> You are implying that capitalism is the problem?
Certain kinds of capitalism, yes. There's not just one kind, and the kind we have now isn't the best kind. Perhaps we should try to discover a better one?
> What does privacy look like in non-capitalist places ... like ... China ...?
China is very capitalist now, if you weren't aware.
In any case, the main problem in those countries (at least with regards to privacy) was authoritarianism, not non-capitalism.
It's important to also acknowledge the crucial role culture plays in this problem, regardless of the political system it cradles.
The public is simply ignorant about surveillance technology issues. Not that long ago we used to tolerate sawdust in our bread[1], and that's food, something humans should be pretty knowledgeable about. People would revolt if this happened now, whether they live under a capitalist or communist system. A free market might accelerate the transition, but education about the issue is still the underlying factor of change.
The best part is where the author admits to using google analytics himself to track who visits his blog. At some point we all have to say enough is enough.
Given that the Wacom utility is full of app-specific references and "customize your tablet, per app", I'd say that this is on par.
Ask random person, "Hey, do you know that when you visit John's blog, he sends your information to Google, too, not just himself?" and I guarantee you the answer is probably closer to 7% than 70%.
That's wishful thinking, not reality. In reality, many different actors can benefit from info about the software used on a particular user's PC. And now they know who to bribe to get it.
> Has this investor got a beta build from that new startup everyone's talking about? Their bets are always winning, I better frontrun them if that's the case.
> What apps could I exploit to get into this guy's computer?
> Wait, why is my employee suddenly running Tor Browser after I involved her in this new secret deal? Better be careful with her, she might be talking to someone.
> Damn, our competitor's engineers are all running our app. Let's correlate the timestamps with our own backend to discover their accounts and push a special update to them.
Why do people use GA? Every HTTP server generates a log. That's always been good enough for me. Then again my site doesn't have ads, so maybe that's the answer.
Ohh, you bring back memories of 1999, when I was using WebTrends to analyze HTTP access logs.
HTTP log analysis is slow and requires a lot of server-side setup. Also, it will not give you navigation events in an SPA.
Using GA... just drop in a line of JavaScript and you are done, with near real-time insights that are more detailed than an access log. You don't need any server conf or extra knowledge (not even JS knowledge: copy & paste the embed code). And Google gives you that for "free"... that's why tons of sites don't care about HTTP access log analysis anymore.
For desktop apps it's easy too. The GA API is very simple: send the app ID, the event, plus any event data you want. Your dev team can do that self-service (no need to set up a service, no extra costs to handle data).
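For anyone who hasn't seen it, this is roughly what "very simple" means in practice: a sketch of a single event hit against the (Universal Analytics era) Measurement Protocol. The property ID and event names here are placeholders, not what Wacom actually sends.

    import uuid
    import requests  # third-party: pip install requests

    payload = {
        "v": "1",                  # protocol version
        "tid": "UA-XXXXXXX-Y",     # property ID (placeholder)
        "cid": str(uuid.uuid4()),  # "anonymous" client ID
        "t": "event",              # hit type
        "ec": "driver",            # event category (placeholder)
        "ea": "driver started",    # event action (placeholder)
    }
    requests.post("https://www.google-analytics.com/collect", data=payload, timeout=5)

One HTTP POST per event, no backend of your own, which is exactly why it ends up in so many desktop apps.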
I think what escapes most people is that they think Google Analytics is a free product to help you track your visitors. Yes it does do that, but the more valuable product to other google customers is you've linked your website to a global ad network even if you don't display ads.
I've read a lot of comments recently about how Google Analytics is bad, but no one explains why. Can I ask why this is not something people want? Is it not anonymised?
GA gives Google surveillance over a large portion of the web. Even if the author of a webpage trusts google with their data, they shouldn't be forcing their opinion of Google on others. Trust isn't transitive!
Google receiving browsing histories for a single website is rude, but it probably isn't a serious problem for many websites (although the risk will depend on the nature of the website). In isolation, the fact that Alice read Bob's webpage isn't very interesting, but Google can aggregate that data into a very accurate pattern of life[1].
> Is it not anonymised?
Not for any meaningful definition of "anonymised". At best, GA will zero the low 8 bits of the IP address at the request of the website. (The opinion of the person visiting the website apparently isn't worth considering.) See this[2] post for a more detailed explanation of GA's perfunctory "Anonymize IP" feature.
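To make concrete how little "zero the low 8 bits" buys you, here's what that anonymization amounts to for IPv4 (a sketch, not GA's actual code):

    import ipaddress

    def anonymize_ipv4(addr: str) -> str:
        """Drop the last octet, as GA's 'Anonymize IP' option is described
        as doing for IPv4 addresses."""
        packed = bytearray(ipaddress.IPv4Address(addr).packed)
        packed[3] = 0
        return str(ipaddress.IPv4Address(bytes(packed)))

    print(anonymize_ipv4("203.0.113.42"))  # -> 203.0.113.0

That leaves at most 256 candidates per network, before you even factor in cookies, user agent, screen size and all the other signals sent along with the hit.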
One reason why is that Google Analytics is not limited to giving the website owner traffic information; the analytics are also used by Google to collect and correlate larger traffic patterns, as well as to track individual users across multiple sites. These are things Google gets to see, but neither the site owners nor the users have access to them. What Google does exactly with this information is not entirely known outside of Google, though it's certainly used at least to improve personalized advertising, which many people consider a privacy concern.
When a company with the reach and market share of Google is involved, "anonymized" is meaningless. They have so many other channels to gather data (Android, ReCaptcha, Chrome, Google Search, etc.) that I'm sure it's trivial to de-anonymize and correlate GA data to a real person if that becomes profitable/necessary.
It's anonymized to the website owner (who has your IP anyway), not to Google. That means even non-Chrome users are forced under Google's omniscient watch of the web.
I know that Google builds its business on the backs of user data. That doesn't change the fact that data regarding me or my machines is "none of their business" in the colloquial sense. That Google forcefully disagrees is what makes them a spy agency.
Lol, comparing visit counters from the 90s to Google Analytics.
GA is used by countless websites. It's likely hooked into the AdWords codebase so that they can track websites you visit even if those websites don't have AdWords ads on them.
These days I assume that if something is possible, profitable, and legal, it's being done. Sometimes I question whether there's a company executive on earth who doesn't deserve to get stood up against a wall.
I wonder if the extremely affordable tablets from Monoprice also have this problem. I don't have a use for one but I know a few others who use them and claim they are fantastic in quality and not overpriced. If they don't phone home, perhaps that could be another marketing point: respects your privacy.
For years I've avoided the software packaged with hardware whenever possible, e.g. printer drivers (a few MB of actual driver at most, and a few hundred MB of useless bloatware all installed together); now I guess there's another reason to do that.
I received one as a gift but, to be honest, I never used it after seeing the obscure mandatory proprietary format for saving files. If I can't open my files later on GNU, and there is no method in sight to save them as jpeg, png, etc., it's useless. Just a cheap model, probably. Collecting dust somewhere.
My other Wacom, an older model, was awesome as a mouse replacement; but it took months to get it working in Linux and I don't feel too inclined to repeat the experience.
I was a species of "mid-early adopter", probably. All of it should be much easier now. In any case, I eventually stopped using the older model because it was just a little cumbersome to move it around with the laptop all day.
It was a nice piece of hardware. It's a pity to hear that they are now tracking what users do with their computers. For me this is a no-go (it seems I did the right thing dismissing the second model).
Wacom tablets work fine in Linux, Gimp, Krita, etc. in my experience (and haven't required significant setup for over a decade). Why are you trying to use obscure proprietary software instead?
> Why are you trying to use obscure proprietary software instead?
I'm not trying. In fact, I rejected that Wacom tablet exactly for that.
Some time ago, when you unboxed a product, it was not uncommon to hear something like: "Sorry, but as you are a Linux user we, the makers, will try to make your journey miserable by not providing any support. Ha haa! Maybe some volunteer working for free will fix this new model in six months. Maybe not."
Sorry maker but as you don't provide drivers for users like me, I will not use it. Bye. Have a good day.
Hold on. This is a hardware device we are talking about here. You put it on your desk, physically interact with it, and plug it into your computer. Do you realize all the ways the manufacturer could harm you if they were malicious, cut corners, were coerced by a state to compromise the device, or failed to comply with regulations?
And you’re worried about their data handling policy?
A short and very incomplete list of the things a purchaser of a Wacom tablet is trusting to be true:
- That the tablet is safe to use - it will not fail in a way that exposes the user to electric shock hazards, sharp edges, dangerous chemicals, etc.
- that the EM emissions used to communicate between the pen and tablet don’t interfere with other systems in ways that could compromise safety
- that the device complies with usb standards and won’t damage electronics you interface it to
- that there are no hidden surveillance devices in the tablet or pen
- that, as an input device with access to your usb bus it doesn’t have the ability to be remotely induced to control your computer
Then you’re installing a piece of driver software, giving it sufficient permissions that it can read what application is currently running, and you are worried about it exfiltrating that information, rather than - say - the fact that as an input driver, again, it has complete control over your computer; it can record input - what if you use your Wacom to sign a pdf? Now it knows your signature. Or you tap out your banking password using an on screen keyboard. Who knows what else it can do - acting at the user input level presumably it can do anything you the user can do.
So sure, be concerned about what happens to the data it sends to Wacom, but if you don’t trust Wacom, your problems started much sooner than when you accepted the data sharing agreement.
What prevents a small peripheral company from being a vector for hardware attacks via a foreign state? Is there an impartial inspection process that checks devices? It seems like it would be extremely lucrative for a person to facilitate that operation if they don't value the integrity of their nation.
I see that you've worked to raise some good points here and that you're not simply reducing this to whataboutism, right? I find it acceptable that a person focuses on their area of interest and expertise and reports what they find. I don't expect or want him to get into a 360 degree product review. Let's save "electric shock hazards, sharp edges" for someone else, and if they find nothing, that does not diminish his own findings, yes?
After reading his analysis, I'm not sure how much I can trust Wacom's behavior when it comes to data collection. My concerns don't then jump to sharp edges and electrical shocks. I think about data retention. How well do they protect that data? I think about what Wacom might do with that detail of personal behavioral information if approached by a data broker with cash in hand and ready to make a purchase.
Negative, sir. Actually, I was responding to the points you had made. I still trust his investigation and I find it to be significant even if he didn't address all the concerns you raised (which are outside of the area of software engineering). He doesn't need to address outside issues (like the physical safety of the device) to validate his findings or concerns.
Still, since you invited, I'll talk more about my own concerns. Do I trust Wacom as a company, as a whole? I think it depends on how they respond to this, right? Do I still trust them to make a tablet that doesn't have the problems you raised with "electric shock hazards, sharp edges"? Yes. At this moment, do I trust Wacom in the area of data collection? No, that seems to me like a questionable decision. I want to know more. I don't think I want an accessory manufacturer to compile a dossier of what programs I use at what time and from what (partially masked) IP address. More so when they're not being up-front about it (certainly from a layperson's perspective). I'm also not very confident right now that the behavioral data will stay within Wacom's walls and go no further.
In fact, I'm forwarding this to my CISO's office for further evaluation. Is that bad in some way?
This investigation was started because the original author was installing the drivers, and was presented with a legal agreement explaining that Wacom wanted to collect some data.
So they are being up front about it, right? I mean, maybe not in layperson-friendly language, but in compliance with regulations and under the guidance, presumably of their legal team.
In the box alongside the tablet, there was also probably a little booklet full of safety notices, warranty indemnifications, compliance statements, and arbitration assertions about the fitness for purpose of the hardware itself - also not written in layperson-friendly language. But the reaction on seeing that was... well, probably to toss it aside and go ahead and plug in the device, not to immediately assume that because the company presented a bunch of dense legalese, they might be trying to get away with something.
You said yourself: you don't trust Wacom not to sell the data to a data broker when presented with enough cash. But all sorts of Wacom business processes had to comply with regulations, be carried out diligently and ethically, and be generally trustworthy for Wacom to have produced an electronic device that you can safely plug into your computer. So I'm just trying to get you to consider:
What is it about their data processing that leads you to all of a sudden question their corporate ethics, diligence, compliance and trustworthiness?
> This is all just another example software devs' parochial belief that because software is eating the world, any problem in software is terrible, while ignoring the whole stack of hardware in meatspace that supports the software in the first place.
Actually, I'm interested in exploring more of your own view here. You seemed to take exception that he limited his findings to his apparent area of expertise and interest (software engineering, security/privacy). Is that still the case, or have your views evolved on this issue?
> What is it about their data processing that leads you to all of a sudden question their corporate ethics, diligence, compliance and trustworthiness?
Your questions for me are really best answered by the author:
1. Apparently, it defied a reasonable expectation that the purchase of such a minor peripheral of this type would lead to the manufacturer's attempt to obtain a regular stream of what applications he launched on his PC (and at what time, and from what partially masked IP address). He was a smart cookie. His tip-off was that it somehow needed a privacy policy. And he had the smarts to launch his own technical investigation.
2. When he finally saw what they were pulling from his PC, once again, he was shocked, because that seemed to conflict with his own understanding of what Wacom said they were doing. He hadn't just casually scrolled through the privacy notice. It looks like he read it quite carefully.
I suspect this might be what he took issue with:
> Information Automatically Collected – Google Analytics When You use the Tablet Driver, certain information as described below may be automatically collected for purposes such as improvement of the Tablet Driver, troubleshooting bugs, providing the functions of the Tablet Driver, managing the services and improving overall performance of the Tablet Driver. Such information includes aggregate usage data, technical session information and information about Your hardware device.
No, I'm not interested in pulling in more sections of text and going back-and-forth in a game of Internet Lawyer. Someone else here might be a more willing partner.
> So they are being up front about it, right?
That's the issue. Was Wacom clear and transparent? Or did Wacom manage to generate a body of text which obfuscates what they are actually doing while still maintaining legal compliance? Or did they overreach? As it turns out, the FTC has a special page to submit complaints regarding privacy policies. I imagine that corporate privacy policies are turning into a hot topic for the FTC right now. I guess there's enough interest here, so I'll go ahead and submit this issue to the FTC (Federal Trade Commission) and see if they want to help Wacom figure out the answer to your question.
Beyond that, you have some interesting questions about trust. Not my area of expertise, but I'll take a crack at it. Your boss might say that you're someone he trusts. He might give you authority over an application which processes millions or billions in yearly revenue. But he wouldn't trust you to take care of his kids for a week. Trust is not binary (yes/no), and it is not universal (trust in area X must equal overall trust or trust in area Y). That's as much as I've got. If you've got followup questions about trust, they might be better directed towards an online resource which focuses on that issue.
This is all just another example software devs' parochial belief that because software is eating the world, any problem in software is terrible, while ignoring the whole stack of hardware in meatspace that supports the software in the first place.
Like the whole 'internet-connected-cars' panic that occasionally grips developers. You know what's more dangerous than putting a car on the internet? putting a car on the road. There are other drivers out there who could kill you. Thousands of people actually die in accidents. And you want to worry about the infotainment system containing a remote execution vulnerability?
I think it's not that software devs don't care about hardware/meatspace issues, they're just not as well qualified to evaluate them. So they tend to focus on what they know. It's not bad to do that - you can't focus on everything.
Also, wouldn't your argument equally imply that mechanical engineers should drop their parochial belief that hardware issues are terrible while ignoring software problems? (Presuming they do that, which seems not unlikely)
Even my drawing tablet? I'm starting to hate the present-day philosophy of pervasive surveillance acceptance, where I can't even pay a premium for a device and have it not track me constantly.
The last Wacom drivers I used on Windows (I don't know, 2013?) would eat memory all day, every day, until self-destruction. I have a screenshot somewhere of the process taking ~28 gigs of memory.
It doesn't at all surprise me that it's sneaking around.
This might be some cargo-cult level religion of mine, but if a driver package has a lot of flashy UI stuff (Wacom, Logitech, Creative), it's probably doing something suspect.
The more the apps look like key-gens, that's when you have to start wiresharkin'.
Until this kind of shit is legislated out of existence, every company that makes an installable program is going to be tempted to do it to generate more revenue. If it's not forbidden, every company will think they have to do it to remain competitive, morality be damned.
I haven’t seen this written anywhere (perhaps I haven’t looked hard enough) but I’m starting to think that perhaps Google and Facebook are reaching out to and paying companies like Wacom to capture these analytics.
It makes little real sense for Wacom, a manufacturer of tablets, to capture this amount of data, and doing so has a cost. But it makes heaps of sense for Google to do it since they can infer all sorts of stuff from the applications you install.
It also explains why this crap is so pervasive, why the privacy policy is so vague (Wacom may not even know the extent of the exfiltration - don’t ask don’t tell), and why the quality of the data collection is so good.
I mean, I'm guessing there's a Google product called something like "Google Analytics for OSX Drivers", and Google would want it in popular products.
These sort of back room deals and outreach programs are pretty common in general, but if I’m right, then Wacom, while certainly an accomplice, is not the root cause of this.
That's actually really good to know. I used to own a Wacom (pronunciation war: it's Whack-om, haha) about 10 years ago for drawing - I don't think I would buy one now until they walk this one back (and push it off the cliff to die).
So, if the driver hadn't had the courtesy to use the OS cert store, respect the OS proxy settings and use unencrypted DNS, how would we go about finding out what data gets siphoned off?
As a side note, the following sentence from the post, "I began my investigation with a strong presumption of chicanery", is something I am going to steal and use from now on. :)
So they got a list of apps, and did some analysis on the data.
Should they disclose it upfront? Sure, it would be nice.
Is this a big deal? Doesn't seem so.
On their website, they list their plans Enterprise through Community, left to right. That's the opposite of what's standard and immediately makes me wary.
They might be great, I don't know. But if they do something as non-standard as that, what other weird behavior does their software have?
I mean the author notes that you can decline and the software keeps working. I don't have a Wacom tablet so can't confirm, and it doesn't justify what they're doing but at least it's relatively easy to opt-out.
Wacom has application specific settings for compatibility. You can't have that feature without tracking individual processes and that data would be important for any sort of troubleshooting. It should be anonymized and they should be clear about what they are collecting, but the data does have a legitimate and benign use at least.
Software side you're right, it makes sense for the driver to keep track of the current application for things like button binds, etc.
But that doesn't mean they need to transmit that information off your computer.
Although I agree, this is likely relatively benign, it's most likely useful as a market research tool to see what applications they should prioritize support/testing for.
Does this tracking data really need to be transmitted outside of the local environment, though? The driver simply needs to download a list of the available application profiles and compare it to the process list locally. Then, if a specific profile is available, it can request it and Wacom doesn't learn about the other processes running on the system.
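Right -- the matching could happen entirely client-side. A sketch of that design in Python (the endpoint, file format and psutil dependency are my assumptions for illustration, not Wacom's actual implementation):

    import psutil    # third-party: pip install psutil
    import requests  # third-party: pip install requests

    # Hypothetical endpoint serving the list of apps there are profiles for,
    # e.g. ["photoshop.exe", "krita.exe", ...]
    PROFILE_INDEX_URL = "https://example.com/tablet-profiles/index.json"

    def profiles_to_fetch():
        available = {name.lower()
                     for name in requests.get(PROFILE_INDEX_URL, timeout=5).json()}
        running = {p.info["name"].lower()
                   for p in psutil.process_iter(["name"]) if p.info["name"]}
        # Only the intersection ever goes back over the network; the full
        # process list never leaves this machine.
        return sorted(available & running)

    for app in profiles_to_fetch():
        requests.get(f"https://example.com/tablet-profiles/{app}.json", timeout=5)

Even the per-profile fetches could be batched or cached locally, so the vendor learns nothing beyond a coarse download count.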
> Wacom didn’t say exactly what data they were going to send themselves. I resolved to find out.
[...]
> since Wacom’s privacy policy makes no mention of their intention to record the name of every application I open on my personal laptop, I’d argue that it doesn’t even give them the technical-fig-leaf-right to do so. In fact, I’d argue that even if someone had read and understood Wacom’s privacy policy, and had knowingly consented to a reasonable interpretation of the words inside it, that person would still not have agreed to allow Wacom to log and track the name of every application that they opened on their personal laptop.
"In Wacom's defense (that's the only time you're going to see that phrase today), the document was short and clear, although as we'll see it wasn't entirely open about its more dubious intentions (here's the full text)."
The "document" is actually three documents. Lawyers call this "incorporation by reference." The link given by the author is therefore only a starting point. When we incorporate the other two^1 documents -- https://www.wacom.com/privacy and https://www.wacom.com/cookie-notice -- this is not a "short" document.
1. Actually it is four documents if we include the external list of companies -- www.wacom.com/about-wacom/our-passion/our-company -- that are also beneficiaries of the terms of these policies. Unless the user reads all of these documents, she has not reviewed the entire contents of the "policy".
"Wacom didn't say exactly what data they were going to send themselves."
Looking at the privacy policy, is there anything that could be in the HTTP traffic from the tablet that would be outside the scope of what Wacom has stated they might collect?
Excerpts
3. Scope of this Privacy Policy
This privacy policy explains how we collect and use information that relates to you when you:
- use our other software and products; or
We refer to these uses and interactions as our "Services."
Usage Information (e.g., indicators of engagement with our website or usage of Services, IP address, device identifier, etc.)
- Used: (1) to improve our products and create new products; (2) to provide targeted advertising; (3) to better understand how our customers' use our Services; (4) for our internal accounting, security, and operational purposes; (5) for purposes required by law
- Shared: (a) with our service providers, including analytics providers, to help us deliver and improve the Services, and to provide targeted advertising; (b) our Affiliates
Usage Information. We collect information about your interactions with our services. This includes or can relate to your personal information. This information enables us to, among other things, improve our Services and your experience, see which areas and features of our Services are popular and count visits, provide you targeted advertising based upon your interests and to analyze trends, administer our websites, track how you engage with our websites and other Services, learn about the systems, browsers, and apps you use to interact with our Services, gather demographic information about our user base as a whole. We also use analysis tools and methods to allow us to better understand how our customers use our Services. This includes how often the Services are used, the events that occur within the application, aggregated usage, performance data, any exceptions that occur within the software and the source from which the application was downloaded.
"Some of the events that Wacom were recording were arguably within their purview, such as "driver started" and "driver shutdown". I still don't want them to take this information because there's nothing in it for me, but their attempt to do so feels broadly justifiable.
Assuming Wacom respects resolv.conf the way it does system-wide HTTP proxy settings, why not run a localhost or LAN DNS server, either authoritative or recursive, that does not return a Google IP address for queries like www.google-analytics.com originating from the tablet's host machine?
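If you go the LAN-resolver route, this is a one-liner in dnsmasq; the hostname is the one mentioned above, and any other telemetry hosts would need their own entries:

    # /etc/dnsmasq.conf -- answer 0.0.0.0 for the analytics hostname
    # (and anything under it), pass everything else through as normal
    address=/www.google-analytics.com/0.0.0.0

Of course this only helps as long as the driver resolves names through the system resolver rather than using DNS-over-HTTPS or hard-coded IPs.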
The "broadly justifiable" reasoning does not account for the possibility Wacom may collect the data and then fail to improve the product, service or "user experience". Wacom is making no promises of any user benefits arising from collection of data. Even if there were "something in it" for the author, he has no way to hold Wacom to this promise. They get his data and he may or may not get something in return.
Real shady devs already don't do this. All signs here point to "salaried employees being asked to implement a feature, and just following the ticket to the letter".
How much of this is specifically Wacom sending that info to Google Analytics, and how much is the stock Google Analytics SDK automatically slurping up that stuff by default?
> even if someone had read and understood Wacom’s privacy policy, and had knowingly consented to a reasonable interpretation of the words inside it, that person would still not have agreed to allow Wacom to log and track the name of every application that they opened on their personal laptop.
I agree completely. Tracking every application one uses and reporting on that to third party Google is so contrary to their stated EULA that both a class action lawsuit, and prosecution in jurisdictions that protect privacy, are warranted.
They are still really popular in both the professional (>$2000) and the ultra-cheap (<$50) market segments. The iPad doesn't really cater to either of those cases.
Not everyone who uses a Wacom tablet is a bedroom illustrator, even a professional bedroom illustrator. Many projects simply don't fit on an iPad, most professional applications do not have their full versions available there, and that will likely continue to be the case indefinitely.
It's really great. iPad + Apple pencil feels better for making marks, and the Cintiq Pro is just a bit behind in drawing feel but more capable overall.
Sidecar step 1: upgrade your Mac to Catalina, lose access to all your 32-bit apps. Including that treasured copy of the last version of Adobe's apps you paid good money for before they switched to a subscription-only model.
An operating system needs to be aware of every application running on it. (It doesn't need to be reporting that back to the mothership, though)
I'm not sure what that has to do with a third party peripheral. You may be confusing a Wacom tablet for something like an iPad or Nexus 7?
EDIT: For others who may be confused, a Wacom tablet is used to provide a pen/stylus interface to a computer. It's an additional peripheral, similar to your keyboard or mouse, for your computer. It is not a standalone independent computing platform.
Wacom produces multiple devices. The simplest are basically pen-based touchpads: no display and no real computing power, and they need to be plugged into a computer to function at all. The more high-end ones are a pen touchpad plus display, but still require a computer (i.e., they're a monitor). There are also standalone tablets that use Wacom technology, although I'm not sure offhand whether Wacom makes any directly.
You can buy tablet computers from Wacom, I have a “Wacom Mobile Studio” gathering dust that I really should get rid of someday. Just didn’t fit into my art workflow better than a laptop and an old Wacom Intuos 4 stuffed into the bag next to it.
They do mostly make external pen input devices, with or without screens. Which people have been calling “drawing tablets” since loooong before “tablet computer” was a phrase anyone used outside of maybe science fiction.
A Wacom tablet is what used to be called a digitizer tablet, ie, a HID that uses a pen and a surface to allow for easier and better drawing. Essentially an odd-shaped mouse with some extra pressure sensitivity.
GP must have confused it with the currently better-known use of "tablet", which is a full-blown computer with a touch screen and no keyboard.
I think the egregious thing with Wacom is that 1) no one expects a drawing pad to have any need for remote connectivity, and 2) they use said remote connectivity to share usage information with third parties.
More broadly, we need to get away from this whole "but don't other people do the same thing?" dialogue when it comes to privacy. Yes, these issues are prevalent. That doesn't mean it's trivial---quite the opposite, in fact.
This is getting voted down, but is this not true? My iPhone knows all my top apps and shows them on whatever that screen is called, with the search, and adjusts them over time as usage changes. Maybe it’s not “more important,” but why vote this down? How do I know that data doesn’t go back to Apple with “telemetry”?
A Wacom tablet is not the same as a phone, it is a peripheral. This is like your Dell keyboard tracking your keystrokes and what application they go into.
More like a specialized keyboard that uses different button layouts and settings per app. And it's not tracking keystrokes at all, but rather "used special keyboard with app foo".
As a better example, suppose it were a game controller. Then the question "what games are people using this controller with?" seems quite reasonable. Filter to games only and anonymize the data and that would be fine.
I learned something in this thread as well from the question asked; I wish a few commenters would drop the "whataboutism" attitude, since some of us are actually trying to learn something here.
It's an attempt at technical whataboutism (or alternately it's just confused and thinks a Wacom tablet is comparable to an iPad or something). The operating system obviously has to know what apps you're running, and can store metrics on that -- locally -- to improve your experience. There is no reason Wacom needs to know, much less share it externally. It adds zero value for the end user.
Those are actual operating systems, so I can understand why. I mean, it's pretty convenient to know what's installed on my phone. However, a writing/drawing tablet doesn't need to. At all. Ever.
Wacom tablet drivers have application specific settings for compatibility. We should scrutinize all data collection, and this may be too far, but "[no need for tablet data collection] at all. Ever" is simply not true.
I strongly disagree. The driver does not have to collect an event log in order to react to focus changes and switch settings, and it certainly doesn't have to transmit such a log, let alone to a third party such as Google.
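To make that concrete, here is a rough, hypothetical sketch (none of these names are from Wacom's actual driver) of per-app settings switching that only needs the currently focused application, and neither stores nor transmits any history:

  # Hypothetical sketch: react to focus changes locally, with no event log
  # and no network traffic. All names are illustrative, not Wacom internals.

  DEFAULT_PROFILE = {"pressure_curve": "linear", "express_keys": {}}

  PER_APP_PROFILES = {
      "Photoshop.exe": {"pressure_curve": "soft", "express_keys": {"1": "undo"}},
      "CLIPStudioPaint.exe": {"pressure_curve": "firm", "express_keys": {"1": "pan"}},
  }

  def apply_profile(profile: dict) -> None:
      # Placeholder for pushing the settings into the device/driver state.
      print(f"applying profile: {profile}")

  def on_foreground_app_changed(app_name: str) -> None:
      # Called by an OS focus hook; the name is used once and then discarded.
      apply_profile(PER_APP_PROFILES.get(app_name, DEFAULT_PROFILE))

  if __name__ == "__main__":
      on_foreground_app_changed("Photoshop.exe")  # demo

The point being: the app name is consumed at the moment of the focus change and never needs to be written down anywhere, locally or remotely.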
I'm sure that most people feel as you do. So manufacturers should get informed consent before doing the data collection -- that way, you can give such consent and people who feel differently about such tracking can withhold it. It's win-win.
"their device, which - remember - is essentially a mouse"
... that has per-application configuration settings that change how the tablet can be used. They aren't just wantonly collecting unrelated data. They have features tied to this.
I read the whole article to see if there was any mention of app-specific config. Doesn't come up once.
Hmm... Good point. I could see how Wacom could make context-sensitive control panels based on the app without having to ask the user, but then I would still want control over that: what if the functionality becomes too different between two apps and I find it annoying? This should be exposed to the user.
It makes it feel less nefarious, I guess. But I still don't want a C&C server knowing this much about me.
We aren't talking about Facebook or Amazon or Google here. I think we can walk back the apocalypticism.
Most companies aren't well-oiled, gigantic machines of user-data-manipulation. I'm sure there is a better way for Wacom to do things. I'm also pretty sure their staff are doing the best they can.
It doesn't matter if this is done by Facebook, Amazon, Google, or Joe's Basement Software Company, nor what the data is actually being used for. Doing this without obtaining the user's informed consent is unacceptable.
It also has quite a few quirks depending on the app so it’s logical they want to know which compatibility work is most pressing. Not that they have done much lately, but that’s a different subject. Wacom is an extremely frustrating company to deal with.
I'm fine with Wacom collecting this kind of data as long as it doesn't open any security holes. There are certain classes of products where I would not care what they collected as long as it was relevant to improving the product. i.e. Wacom wants to know popular apps I use my tablet on, ok. If they wanted to know my approx location though then I'd be alarmed.
Then they should be very open, up front, and explicit about asking for your consent, including describing what they will be collecting, and also making it very clear you can decline without negative consequences.
I'm fairly comfortable with data collection if the user opts in, but the current trend--dark patterns where you put out a blanket "we will collect stuff" disclaimer that lacks any specifics, while not making it clear what the consequences are of declining--is deeply troubling and, I hope, becomes illegal thanks to things like GDPR and CCPA.
[...] Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. [...]
Interpret what you want. The comment was upfront very low effort and derogatory.
Wacom makes premium products for professionals, is a vendor of premium digitizers, and is uncontroversial. It doesn't take two minutes of looking at what products they make to see that ads are not remotely viable on the hardware.
The comment was hastily posted and lazy; from that one can assume the poster did not own the products, and I was correct. So read what was posted before concluding that my comment was made in bad faith.
Nowhere else in the article or the comments under it has anyone mentioned the possibility of ads on a drawing tablet or on the digitizer hardware they provide for many products outside of their own. It is like saying that because you bought a Mac, Apple can use your data for advertising, even though they explicitly said they use it for product development.
There is no strong interpretation or defense of that person's comment. If you have a strong interpretation of that post that you want to add, put it in a comment, or move on to another thread and ignore this one. Thanks!
Who said you said that? If you had done the research, you would not have mentioned ads. Where in the world does one get the idea they can place ads on the hardware? Your commentary looks like a very low-effort, snide attempt at humor. There is zero substance in the post, as there isn't anything to discuss in it.
> Wacom’s privacy policy does say that they only want this data for product development purposes, and on this point I do actually believe them.
That piece from the post alone makes me wonder whether you read it before you commented. There are many things one can do with data; assuming they could use it for advertising, with no reasoning behind it, is very low effort.
[0]: https://linuxwacom.github.io/