Ghostery is owned by Cliqz, and Cliqz GmbH (which is majority-owned by Hubert Burda Media) is based in Germany.
Using CC instead of BCC was already a regulatory offence, with fines of up to 300,000 EUR, before the GDPR. This is not a theoretical risk; companies have been fined for it in Germany.
> Using CC instead of BCC was already a regulatory offence, with fines of up to 300,000 EUR, before the GDPR
As an American, I'm skeptical about the odds of enforcement. If it were an American company that did this, on the other hand, I'd assume a decade of uncertainty and indecision followed by a fine.
I strongly hope that regulation is enforced independently of political convenience. And if it turns out to be misused that way, I'd demand that it stop. However, let's wait for the facts first, shall we?
The fact is political bodies have been empowered with increased enforcement discretion. They are motivated, as political bodies, by politics.
I watched the same thing happen, around 2012, in Frankfurt and Berlin with respect to Germany's regional banks. Weak and undercapitalized, in violation of EU rules, they were summarily ignored due to political necessity. The same tendencies will apply with GDPR. The lack of a requirement for politically-independent enforcement was a mistake.
> They are motivated, as political bodies, by politics.
The "political bodies" in this case are the National Data Protection Authorities. They are indendent, and their only politics is ensuring compliance with the data protection regulation, as required by the GDPR, Article 52 [0].
Furthermore, for lack of evidence, your example does not show that regulatory bodies act for political reasons they were not supposed to pursue. And even if one regulatory body broke the rules, that does not mean that others will do so, too.
However, I'd love to know the details: which bank was breaking which regulation, and which authority was not acting according to its mission, if that is what you want to exemplify.
He has to be political to get the position in the first place, and then again if he wants to be renewed (assuming the term isn't limited) or if he wants to move to a higher position after the four years. The only way to make it minimally political is to limit the term to one and make the person ineligible for any political appointment or government job of any kind at the end of it.
Oops, that isn't good when a privacy policy rollout causes private info to get out en masse.
They should have just sent the email individually, one per user, for safety anyway.
When you CC/BCC, copies are spawned downstream, which can sometimes cause issues if there are too many recipients.
Back in the late '90s, over dual ISDN lines, we used to update pharmaceutical machines via FTP and email (the machine tried to connect over FTP, then looked for email updates to update itself). When we BCC'd the backup updates, the ISPs would get mad at us because the number of recipients per message hosed their systems a few times.
Depending on the volume and sensitivity, it is sometimes better just to send one message per user, not only for privacy/protection but also for CPU/memory reasons upstream or downstream.
As far as I remember, (B)CC is implemented by just setting multiple RCPT headers in the SMTP envelope, and for BCC not setting them in the DATA (email body)—surely then, the only difference with 1-envelope-per-recipient is less SMTP messages? If you implement the same level of rate limiting as you would have on 1-per, how could you ever cause more traffic / resource usage?
Do you mean it’s easier for humans to then not set those rate limits if you’re using multiple RCPTs?
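For concreteness, here's a minimal sketch of the envelope-vs-DATA distinction I mean, using Python's smtplib (the localhost relay and the addresses are made up for illustration): the server gets one RCPT TO per address, but the headers and body it relays never list the individual recipients.

    # Sketch only: one envelope, many RCPT TO commands, no recipient list in DATA.
    # The relay (localhost:25) and the addresses below are placeholders.
    import smtplib
    from email.message import EmailMessage

    recipients = ["alice@example.com", "bob@example.com", "carol@example.com"]

    msg = EmailMessage()
    msg["From"] = "notices@example.com"
    msg["To"] = "undisclosed-recipients:;"   # visible header; no Bcc header is written
    msg["Subject"] = "Privacy policy update"
    msg.set_content("Hello, our privacy policy has changed.")

    with smtplib.SMTP("localhost", 25) as smtp:
        # Envelope recipients are passed separately from the message headers,
        # so each address becomes a RCPT TO command but never appears in DATA.
        smtp.send_message(msg, from_addr="notices@example.com", to_addrs=recipients)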
At some point the CC/BCC messages get duplicated per recipient, even if that happens on the receive/pull side.
Probably less of an issue now, but back then the ISP/provider would duplicate the sending of messages with multiple recipients. It could have just been a flaw at the ISP (Frontier, at the time), or a DDoS-type event where too many clients were pulling at once rather than being throttled, tapered, or spaced out over time. The group emails spiked their systems; it's probably not an issue today.
Personally, even though the cost of CPU/memory is small now, I wouldn't risk sending sensitive data to a large group unless I could verify that BCC was being set, not CC. It seems a small price to pay to rule out a mistake like that: duplicate the messages on the app side rather than the provider side. By sending them out throttled per user on the app side, even if a developer or marketing person inadvertently chose CC rather than BCC, it wouldn't leak sensitive info (email addresses).
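Roughly the shape of what I mean, as a minimal app-side sketch (Python's smtplib; the notify_individually helper, relay host, delay, and addresses are all made up for illustration, not anything Ghostery actually runs): one message per recipient, spaced out, so a CC/BCC mix-up simply can't expose the list.

    # Sketch only: send one message per recipient, throttled on the app side,
    # so no recipient ever appears in anyone else's headers. Relay, rate, and
    # addresses are placeholder assumptions.
    import smtplib
    import time
    from email.message import EmailMessage

    RELAY = ("localhost", 25)   # hypothetical relay
    DELAY_SECONDS = 0.5         # crude throttle; tune to what your provider allows

    def notify_individually(sender, recipients, subject, body):
        with smtplib.SMTP(*RELAY) as smtp:
            for rcpt in recipients:
                msg = EmailMessage()
                msg["From"] = sender
                msg["To"] = rcpt              # only this one recipient is ever visible
                msg["Subject"] = subject
                msg.set_content(body)
                smtp.send_message(msg)        # one envelope per recipient
                time.sleep(DELAY_SECONDS)     # space messages out instead of spiking

    notify_individually(
        "notices@example.com",
        ["alice@example.com", "bob@example.com"],
        "Privacy policy update",
        "Hello, our privacy policy has changed.",
    )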
Could also be that if you send one message with 10k BCC recipients, they all hit the ISP at once, whereas if you send one message per recipient, the ISP gets a stream of messages it can process over time.
I love asking why when I get these kinds of requests so I can better understand the requirements. I've found that some people aren't interested in helping you understand. I had a recent interaction with a surgeon who was vexed by me asking for more detail and why, which reminded me that some people just want to do their job and don't care about learning.
I urge you to reconsider whether this comment adds anything to the discussion. "Less" vs "fewer" doesn't cause any ambiguity or confusion (versus, say, "literally" being misused), so I see no point in bothering to correct it.
I found that sentence difficult to understand, though. One problem was the use of "difference with" instead of "difference from", which can have the opposite of the intended meaning. I had to read it several times...
I can't say I'm surprised that someone would do this. I had just written a breach policy that included "Use the Bcc field to send notifications; don't make it worse than it already is."
I would say that's a bad policy. You've added it because so many people make this mistake, but putting it in the policy won't stop (or even reduce) the mistake; it just gives you something to point at so you can say "you were told not to do that".
A better starting point for such a policy would be to enforce use of a mailing list for e-mails to customers.
It's a nonprofit, so we have neither customers nor paid employees. Honestly, I have no idea how a mass mailing works with our system; it'll probably be via Extension:MassMessageEmail or something. I need to look into it for fundraising.
But what you're saying is analogous to saying checklists are ineffective for preventing errors. This isn't true. The first thing a person should do after finding and closing a security breach is to pull out the procedure, because not forgetting a step is important.
Stupid question perhaps, but why would I want a Ghostery account in the first place? It's just a local, offline plugin.
In a way I'm glad this happened. It highlights why the GDPR is needed. If you can't handle my data safely, you shouldn't handle it at all. Keep your plugin offline and account-less.
I tested Ghostery back when it was launched. As with other blockers, you could enable/disable blocking per domain and per third party (IIRC); you could then use a Ghostery account to synchronize these settings between your devices.
Or you can do as I do and synchronize settings yourself through Dropbox or a billion other ways.
I remember the good old days when software companies liked to reuse existing services so they didn't have to hold user data themselves. I used YNAB, and they kept all the user data in a Dropbox folder. So their app knew who you were and did its I/O against Dropbox, but the YNAB company didn't know.
Then they switched from software to service and all of a sudden it was essential that they know their customer and have an account and whatnot.
Proper software design minimizes complexity. [0] If users are likely to already have a common, free way to synchronize settings, design for that. Don’t add another risk by collecting PII. Especially if it makes you money in ways you aren’t disclosing.
It's kinda serious, but kinda funny just for the ridiculousness of the whole situation.
Ghostery users definitely appreciate their privacy, though, so this may require some bending over backwards from Ghostery to make it right with their users. I suspect they may be perturbed about this.
On the other hand, Ghostery's business model is apparently based on an opt-in feature that reports back data on which ads were blocked... so that subscribing companies can learn how to avoid getting their ads blocked in the future. Very confusing.
I wonder if everyone at Ghostery knows what CC and BCC stand for? I have a feeling that the relatively obscure abbreviations play a role in the usability of these two different but similar functions. It's like putting a "send test" button next to a "send alert" button (or menu item)...
I got an email yesterday (on an old-style listserv mailing list) where the admin said 'we have disabled the website for now, and if you want to stay on this list, you have to send an email to confirm you want to be on it'. So what happened? Literally seconds after that mail, other mails started flowing in from people just hitting 'reply', typing 'confirm', and hitting 'send'. Doh...
How long did it take until other people started complaining about the spam on list, and then someone sent 'unsubscribe' to the list? That's usually how these things went.
Your story makes me miss listserv. Doing @here on Slack just isn't the same.
Eli Lilly’s mistake not only cost it some settlement money up front but, as I understand it, also helped push the U.S. pharma industry into a very expensive regulatory regime that now impacts every server they build and every software update they roll out. The process requirements are quite onerous.
(I had worked closely with the teams involved but left about a year before the mistake. When I heard about it I spent a few hours racking my brain to make sure I hadn’t left behind some tool they might have used.)
I would doubt this reaches the bar for a breach notice. Article 34 states that the breach has to be "likely to result in a high risk to the rights and freedoms of natural persons" before notifying data subjects is required.
I wonder how many complaints they got as well. I know I debated sending one after getting an email stating "We'd love to still keep in touch, but if you don't opt-in to hear from us, you may get left out as we won't be able to contact you from 25th May." (which I ignored), and then still getting another marketing email today with the footer: "This email communication makes use of a "Clear Image" (gif) to track the results of the email campaign. If you wish to turn off this tracking for future emails, you can do so by turning off the images in the email itself."
Nothing about this GDPR protest has been helpful compared to the consequences of just rolling forward and silently introducing whatever deletion features are required.
This is why the GDPR has fines of up to €20m. Hopefully some of the users whose data was breached file a complaint; if you can't hold my data safely, you don't deserve it. Good intentions don't make you compliant with the law.
> When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following
> the intentional or negligent character of the infringement;
A company with good intentions is going to be regulated less severely than a company that doesn't care.
Yet another reason there is ambiguity in the terms: most websites can portray themselves as well-meaning. Socially defined justice is not really the law.
Sure, but note how it also says "negligent". Ghostery is a privacy browser extension. They should be using something more sophisticated than a process so prone to human error when handling user data.