I was working at eBay/PayPal at the time, and we were finding a bunch of new phishing sites every day. We would keep a list and try to track down the owners of the (almost always hacked) sites and ask them to take it down. But sometimes it would take weeks or months for the site to get removed, so we looked for a better solution. We got together with the other big companies that were being phished (mostly banks) and formed a working group.
One of the things we did was approach the browser vendors and ask: since we already had a blacklist of phishing sites, would they block those sites at the browser level if we provided it to them?
For years, they said no, because they were worried about the liability of accidentally blocking something that wasn't a phishing site. So we all agreed to promise that no site would ever be put on the list without human verification and the lawyers did some lawyer magic to shift liability to the company that put a site on the list.
And thus, the built in blacklist was born. And it worked well for a while. We would find a site, put it on the list, and then all the browsers would block it.
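A browser-side blacklist like this is typically implemented as a local set of hashed hostnames checked before navigation. Here's a minimal sketch of the idea; the hostnames are made up, and real systems (e.g. Safe Browsing) store truncated hash prefixes locally and confirm full-hash matches with a server rather than shipping the whole list:

```python
import hashlib

# Hypothetical local blacklist of SHA-256 host hashes. A real deployment
# would store truncated hash prefixes and confirm matches server-side.
BLACKLIST = {
    hashlib.sha256(host.encode()).hexdigest()
    for host in ("evil-phish.example", "fake-bank.example")
}

def is_blocked(host: str) -> bool:
    """Return True if the host's hash appears on the local blacklist."""
    return hashlib.sha256(host.encode()).hexdigest() in BLACKLIST

print(is_blocked("evil-phish.example"))  # True
print(is_blocked("mybank.example"))      # False
```

Hashing means the browser never has to ship a readable list of bad domains, which also makes it harder for attackers to enumerate what's blocked.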
But since then it seems that they have forgotten their fear of liability, as well as their promise that all sites on the list will be reviewed by a human. Now that the feature exists, they have found other uses for it.
And that is your slippery slope lesson for today! :)
We should really do something about this issue, where so few companies (arguably, a single one) hold so much power over the most fundamental technology of the era.
I know it's easier said than done, especially when taking the scale of the requests into account, but the alternative has done, does, and will continue to do serious harm to the many people and businesses caught in this wide, automated net.
Without a shift in incentives, it's unlikely the outlook will improve. Unless the organisations affected (and those vulnerable) can organise and exert enough pressure for Google to notice and adjust course, we're probably going to be stuck like this (or worse) for a long time.
The interesting thing about predictable paths is that at the start there are a LOT of them, and over time there comes to be just one. I don't see that this path was any more predictable at the start than any other.
But it's a lot cheaper to pay for a few really expensive programmers to make a just-good-enough AI than to pay for thousands of human moderators. So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins.
I don't want to blame this entirely on the big companies, though.
Also the people want and expect "free" things on the internet. This is how we ended up like this.
I would think the attackers are using automation too, to spam out attacks, as in other areas of fraud. Ultimately it can only be a battle of AI against AI.
Narrator: but it was only ever to get worse
The counter argument to transparency will be that it provides too much information to those who aim to build phishing sites not blocked by the filter.
That said, we’ve experienced systems in which obfuscation wins out over transparency and it would be nice to tackle the challenges of transparency.
Most of the time I run into blocked sites they seem to be blocked because of copyright infringement, not phishing. The only phishing sites I've seen in the last year or so are custom tailored. For example, I had to deal with a compromised MS365 account last year where the bad actor spun up a custom phishing site using the logo, signature, etc. of the victim.
So IMHO the intentions are no longer pure plus the effect is diminished and being worked around.
Businesses become safer, but more regular people will get phished.
It's not the role of Google (as a browser vendor) to disallow phishing sites, just like it's not the role of the ISP.
Make it hookable so people can choose their own phishing protection service.
Phishing protection is mostly needed for people who have no clear concept of phishing or technicalities. They just want to do things on the internet, like social media, they don't care about things behind the scenes, that's boring uncool nerd stuff.
Chrome lets you override and proceed to the site. The problem for the small business is that a large fraction of their customers see the scary red warning page.
You just don't want to give control over the blocking blacklist/whitelist to a single entity; even less so to a huge, powerful one, possibly in a country other than your own (which could, e.g., force its foreign-policy dictums onto your blacklist); and least of all to the one that already makes your browser, which should be a totally neutral conduit.
But also, it would leave the people most vulnerable to phishing unprotected, namely those not tech-savvy enough to install a phishing protection service. Most internet users don't even have ad-blockers.
IMHO yes. It's too much power for one company to wield. And especially a company with such questionable morals as Google. This cure is worse than the disease.
But maybe not do it by default on browser-level.
But if you do, then there really needs to be ways to combat wrong decisions in a timely manner.
Make it easy and affordable to submit legal complaints for tech misbehavior and make the penalties hurt.
This is a great approach, if your goal is to optimize for increasing the amount of dangerous crap on the web. But, eh, that's surely worth it, because the profitability of startups is more important than little things like the security of the average netizen...
Even if you make the operators liable, in practice, you'll never be able to collect from most of them. Whereas the blacklist curators are a singular, convenient target...
If you can demonstrate how the operators of compromised websites can be held liable for all the harm they cause, I will happily agree that we should do away with blacklists. Unfortunately, the technical and legislative solutions for this are much worse than the disease you are trying to treat.
According to Google's most recent transparency report, as of December 20th of last year they were blocking around 27,000 malware distribution sites and a little over 2,000,000 phishing sites.
In your view, would turning off those blacklists and allowing those >2,000,000 sites to become functional again count as a "slight" increase?
(edit: That's a real question, incidentally, not a disagreement or an attempt at a 'zing'; I have no knowledge in this area but went to look up the numbers, and am curious whether 2,000,000 is truly a vanishingly small amount, relative to everything else that's out there that's not already on the list)
I mean, if we could trust Google (or anybody else of that kind) to keep the blacklist strictly limited to a reasonable definition of malware and phishing, and knew that usage of such a list is strictly voluntary and under the control of the user, it would be an acceptable, if decidedly imperfect, remedy. But we know we can't trust any of this. Even if whoever works on this at Google right now is sincerely, ironclad committed to never letting any mission creep or abuse happen, once the means exist, these people can always be replaced with others who would use it to fight "misinformation", or "incitement", or "blasphemy", or whatever it is in fashion to fight this week. There's no mechanism that ensures it won't be abused, and abuse is very easy once the system is deployed.
Moreover, we (as in, people not in control of Google's decisions) have absolutely no means to prevent any abuse of this, since Google owns the whole setup and we have no voice in their decision-making process. Given that, it seems prudent to make every effort to reject it while we still can. Otherwise, next time you want to make a site questioning Google's decisions about the malware list, nobody will be able to read it, because it'll be marked as a malware site.
There's no "report as false-positive" button at Google, so these reports likely have a lot of false positives in them...
Prior to that it was those that controlled the printing presses.
History continues to repeat itself.
I guess the automation started in 2007 or so.
>>> In mid-2000, we had survived the dot-com crash and we were growing fast, but we faced one huge problem: we were losing upwards of $10 million to credit card fraud every month. Since we were processing hundreds or even thousands of transactions per minute, we couldn’t possibly review each one—no human quality control team could work that fast.
So we did what any group of engineers would do: we tried to automate a solution. First, Max Levchin assembled an elite team of mathematicians to study the fraudulent transfers in detail. Then we took what we learned and wrote software to automatically identify and cancel bogus transactions in real time. But it quickly became clear that this approach wouldn’t work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn’t adapt in response.
The fraudsters’ adaptive evasions fooled our automatic detection algorithms, but we found that they didn’t fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy. Thanks to this hybrid system—we named it “Igor,” after the Russian fraudster who bragged that we’d never be able to stop him—we turned our first quarterly profit in the first quarter of 2002 (as opposed to a quarterly loss of $29.3 million one year before). The FBI asked us if we’d let them use Igor to help detect financial crime. And Max was able to boast, grandiosely but truthfully, that he was “the Sherlock Holmes of the Internet Underground.”
This kind of man-machine symbiosis enabled PayPal to stay in business, which in turn enabled hundreds of thousands of small businesses to accept the payments they needed to thrive on the internet. None of it would have been possible without the man-machine solution—even though most people would never see it or even hear about it.
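The hybrid system the quote describes can be sketched as a simple triage loop: software scores everything, auto-approves the bulk, and queues only the most suspicious cases for a human to decide. The function names, threshold, and toy risk scorer below are all illustrative, not PayPal's actual system:

```python
def triage(transactions, score_fn, threshold=0.9):
    """Auto-approve low-risk transactions; queue the most suspicious
    ones for a human analyst to make the final call (the 'Igor' pattern)."""
    approved, needs_review = [], []
    for tx in transactions:
        if score_fn(tx) >= threshold:
            needs_review.append(tx)  # human makes the final judgment
        else:
            approved.append(tx)
    return approved, needs_review

# Toy risk scorer: larger transfers look riskier (illustrative only).
risk = lambda tx: min(tx["amount"] / 10_000, 1.0)

ok, flagged = triage(
    [{"id": 1, "amount": 50}, {"id": 2, "amount": 9_500}], risk
)
print([t["id"] for t in flagged])  # [2]
```

The design point is that the machine's job is to shrink the review queue to a size humans can handle, not to make the final call; that's exactly what the pure-automation approach in the story failed at.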
I doubt it but I must say it would make me happy and that would be weird because Schadenfreude normally isn't my thing.
They most likely have offloaded the liability to a “machine learning algorithm”. It’s easy for companies to point the finger at an algorithm instead of them taking responsibility.
Either take responsibility, or be transparent.
But we all want to have our cake and eat it too.
But if I liked to eat cake as much as Google does, I'd have died of obesity (= have my life ruined by legal issues) a long time ago.
“After careful review, we’ve concluded that the Electronic Frontier Foundation no longer aligns with the goals of Google or its parent company Alphabet Inc. to the extent we require from recipients of our Freedom Fund. We will place these funds in a separate account and use them in ways we believe will be in the best interest of digital freedom, both now and in the future.”
Can anyone explain how a web browser author could be liable for using a blacklist? Once past the disclaimer in uppercase that precedes every software install, and past the Public Suffix (White)List that browsers include, how do you successfully sue the author of a software program, a web browser, for having a domain-name blacklist? Spamhaus was once ordered to pay $11 million for blacklisting some spammers, but that did not involve a contractual relationship, e.g., a software license, between the spammers and Spamhaus.
That does not explain why this comment suggests a browser author was afraid to use the list.
The browser author could easily require the list author to agree that the browser author has no obligations to the list author if the list author gets sued by a website, and that the list author must indemnify the browser author if the browser author is named in any suit over the list. The list author must assume all the risk.
Government is an option, though. Railroads were fixed by Congress. If you want to fix or split Google, writing your representative about your concerns might help.
I naively used to think, "they probably don't realize what's happening and will fix it." I always try to give benefit of the doubt, especially having been on the other side so many times and seeing how 9 times out of 10 it's not malice, just incompetence, apathy, or hard priority choices based on economic constraints (the latter not likely a problem Google has though).
At this point, however, I still don't think it's outright malice, but the doubling down on these horrific practices (algorithmically and opaquely destroying people) is so egregious that it doesn't really matter. As far as I'm concerned, Google is to be considered a hostile actor. It's not possible to do business on the internet in any way without running into them, so "de-Googling" isn't an option. Instead, I am personally going to (and will advise my clients to as well):
Consider Google as a malicious actor/threat in the InfoSec threat modeling that you do. Actively have a mitigation strategy in place to minimize damage to your company should you become the target of their attack.
As with most security planning/analyzing/mitigation, you have to balance the concerns of the CIA Triad. You can't just refuse Google altogether these days, but do NOT treat them as a friend or ally of your business, because they are most assuredly NOT.
I'm also considering AWS and Digital Ocean more in the same vein, although that's off topic on this thread. (I use Linode now as their support is great and they don't just drop ban hammers and leave you scrambling to figure out what happened).
Edit: Just to clarify (based on confusion in comments below), I am not saying Google is acting with malice (I don't believe they are personally). I am just suggesting you treat it as such for purposes of threat modeling your business/application.
Google, somehow, strikes me as this vision of humanity, but without an Ambassador Drill. It simply lumbers forward, doing its thing. It is to be modeled as a threat not because it is malign, but because it doesn't notice you exist as it takes another step forward. Threat modeling Lovecraft-style: entities that are alien and unlikely to single you out in particular, it's just what they do is a problem.
Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.
That's fine when you're a plucky growth startup. Less fine when you run half the internet.
If Google doesn't want to admit it's a mature business and pivot into margin-eating, but risk-reducing support staffing, then okay: break it back up into enough startup-sized chunks that the response failure of one isn't an existential threat to everyone.
Of course they can; Google and the rest earn enough to throw people at the problems they cause/enable. If they can't, then they should stop. If you cannot scale responsibly, then you should not scale at all as your business has simply externalised your costs onto everyone else you impact.
The same applies to Facebook and other tech companies. The root issue is taking huge profits from one area of business and pouring them into other avenues that compete with the market on unfair ground (or outright buying out the competition).
However anti-trust in US has eroded significantly.
Perhaps compared to the 40s-70s, but certainly not compared to the Reagan era. Starting with the Obama administration, there's been a strong rebirth of the anti-trust movement and it's only gaining momentum (see many recent examples of blocked mergers).
Renata Hesse was part of that effort, and has since worked for Google and Amazon, and is now expected to be in charge of anti-trust at Biden's DOJ.
It's never fine.
The abdication of responsibility and, more importantly, liability to algorithms is everything that's wrong with the internet and the economy. The reason these tech conglomerates are able to get so big when companies before them couldn't is that scaling the way they have would previously have required employing thousands of humans to do the jobs now being done, poorly, by their algorithms. Nothing they're doing is really a new idea; they just cut costs and made the business more profitable. The promise is that the algorithms/AI can do just as good a job as humans, but that was always a lie and, by the time everyone caught on, they were "too big to fail".
It kind of is, though.
The idea is that the full algorithm is "automation plus some guy". Automation takes care of 99.99% of it, and some guy handles the 0.01% that's exceptional, falls through the cracks, and so on.
The problem is when you scale from 100,000 events per day to half a trillion, and your fallback is still basically "some guy". At ten failures a day, contacting The Guy means sending an email, and maybe sometimes it takes two. At a million failures a day, your only prayer of reaching The Guy is to get to the top of HN, or write a viral Twitter thread.
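The arithmetic here is worth making concrete: the exception rate stays tiny, but the absolute fallback workload explodes with volume. A back-of-the-envelope sketch, assuming roughly one exception per 10,000 events (numbers illustrative):

```python
def daily_exceptions(events_per_day, one_in=10_000):
    """Cases per day that fall through to 'some guy', assuming
    roughly one exception per `one_in` events (a 0.01% rate)."""
    return events_per_day // one_in

print(daily_exceptions(100_000))          # 10 -> an email to The Guy works
print(daily_exceptions(500_000_000_000))  # 50,000,000 -> hopeless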
There are some things which are important enough that they can't be left up to this formula, and maybe you're thinking of those. I'm not, and I doubt the person you're replying to is either.
The responsible thing would be to (1) staff up a support org to ensure reasonable SLAs & (2) cut that support org when (and if) AI has proven itself capable of the task.
This is a concept that I think deserves more popular currency. Every so often, you step on a snail. People actually hate doing this, because it's gross, and they will actively seek to avoid it. But that doesn't always work, and the fact that the human (1) would have preferred not to step on it; and (2) could, hypothetically, easily have avoided doing so, doesn't make things any better for the snail.
This is also what bothers me about people who swim with whales. Whales are very big. They are so big that just being near them can easily kill you, even though the whales generally harbor no ill intent.
That's generally my rubric for whether a safety concern is possibly worth avoiding an activity over.
It depends on how many passengers you pack in a whale.
Good story. I can imagine what the specialized humans did to the generalist humans eons ago.
Google is striving hard to remove the "human" part of the problem.
> Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.
I immediately pressed C-f to search the string "paperclip maximizer", and was not disappointed. Thanks for mentioning it.
FROBNICATION QUERY ID [#1234]
1. Requester data
[bunch of boxes]
1a. (*) Details on stuffs
[bunch of boxes]
1b. (*) Details on different stuffs
[bunch of boxes]
4. Additional documents
- [Frobnication Registration #432]
- [Frobnication Query #1111]
(*) - Fill section a) if $something. Fill section b) if $somethingelse.
It's called COBOL
This approach to systems engineering is the technological equivalent of the personality trait I most abhor: the tendency to jump quickly to conclusions and not be skeptical of one's own world-view.
A lion may not be malicious when it's hunting you, it's just hungry; look out for it anyway. A drunk driver is unlikely targeting you specifically; drive carefully anyways. Nobody at Google is specifically thinking "hehehe now this will ruin jdsalareo's business!" but their decisions are arbitrary, generally impossible to appeal, and may ruin you regardless; prepare accordingly.
This is a monopoly.
As a local businessman I can ruin someone's life by applying the right legal pressure. Likewise, if one of my customers is reliant on my product to run their own business, and I drop them suddenly (akin to what Google sometimes does), that could ruin them. But it's not because I'm a monopoly, only because people rely on me. Monopoly implies there's no choice, and while that IS true with Google and search, it is not implied by "arbitrary, impossible to appeal, and may ruin you". The two are distinct (though often related) problems that are both exemplified in Google.
And very well said I might add. I don't mean to leave a vapid "I agree with you" comment, but your analogies are fantastic. They are accurate, vivid, and easily understandable.
I have no doubt they'd use similar "oops" for crushing a new competitor in the ad space. Or perhaps quashing a nascent unionizing effort. It's all tinfoil of course because we don't have any public oversight bodies with enough power to look into it.
It seems that the inflexible workflows of data processing have crept into meatspace, eliminating autonomy from workers' job functions. This has come at the huge expense of perceived customer service. As an engineer who has long worked with IT teams creating workflows for creators and business people, I see the same non-empathetic, user-hostile interactions well known in internal tools become the standard way to interact with businesses of all sizes. Broken interactions that previously would have been worked around now leave customer service reps stumped, with no recourse except the bluntest choices.
This may be best for the bottom line, but we've lost some humanity in the process. I fear that the cost of returning to some previously organic interaction would be so high that it would be impossible to scale and compete. Boutique shops still offer this service, but often charge accordingly; and without the ability to maintain in-person interactions at the moment, I worry there won't be many left when the pandemic subsides.
Empathy and understanding for fellow humans is at an all-time low, no doubt exacerbated by technologies dehumanizing us into data points and JSON objects in a queue, waiting to be serviced by the algorithm.
As wonderful as tech has made our lives, it is not fully in the category of "better" by any stretch. You're totally right about margins being too high, but I do hope it opens up possibilities that someone is clever enough to hack.
A recent example, I forgot to pay my phone bill on time and network access got turned off. I came to pay it on Friday, and they tell me the notice will appear in their systems only on Monday and then it takes 2 days for the system to automatically reactivate my access. No, they can't make a simple phone call to someone in the company, yes I will be charged full monthly price for the next month even though I didn't have access for a few days, nothing we can do - ciao
No, the ombudsman probably can’t get legal to update the T&Cs
I am finding the poorly paid workers who provide service to me polite and helpful.
Perhaps this is geography? Different in different places?
I keep reading this on the internet as if it’s some sort of truism, but every situation in life is not a court where a prosecutor is trying to prove intent.
There is insufficient time and resources to evaluate each and every circumstance to determine each and every causative factor, so we have to use heuristics to get by and make the best guesses. And sometimes, even many times, people do act with malice to get what they want. But they’re obviously not going to leave a paper trail for you to be able to prove it.
I don’t believe this statement was initially intended to be axiomatic, rather, to serve as a reminder that the injury one is currently suffering is perhaps more likely than not, the result of human frailty.
Not Google, but a few months back I suddenly couldn't post on Twitter. Why? Who knows. I don't really do politics on Twitter and certainly don't post borderline content in general. I opened a support ticket and a follow-up one and it got cleared about a week later. Never found out a reason. I could probably have pulled strings if I had to but fortunately didn't need to. But, yeah, you can just randomly lose access to things because some algorithm woke up on the wrong side of the bed.
Retail and hotels and restaurants can insert meaningful human intervention with less than 5% profit margins, but a company with consistent $400k+ profit per employee per quarter can not?
This is what I'm talking about in my original comment about the malice and stupidity aphorism.
Someone or some team of people is making the conscious decision that the extra profit from not having human intervention is worth more than avoiding the harm caused to innocent parties.
This is not a retail establishment barely surviving due to intense competition that may have false positives every now and then because it's not feasible to catch 100% of the errors.
This is an organization that has consistently shown they value higher profits due to higher efficiencies from automation more than giving up even an ounce of that to prevent destroying some people's livelihoods. And they're not going to state that on their "About Us" page on their website. But we can reasonably deduce it from their consistent actions over 10+ years.
Conrad's corollary to Hanlon's razor: Said razor having been over-spread and under-understood on the Internet for a long while now, it's time to stop routinely attributing lots of things only to stupidity, when allowing that stupidity to continue unchecked and unabated actually is a form of malice.
(Hm, yeah, might need a bit of polishing, but I hope the gist is clear.)
(Where stupidity is further defined as "willful ignorance".)
I had loading images turned off in my browser.
So I get the checkbox captcha thing, and checking it is not enough, so I have to click on taxis, etc. Which didn't initially show because of images being off.
I eventually did turn on images for the site and reload it. But at first, I was like "wait a minute, why should I have to have images on to pay a bill?" and I clicked a bunch of things I'd never tried before to see if there was an alternative. It appears that you have to be able to do either the image captcha or some sort of auditory thing. I guess accessibility doesn't include Helen Keller, or to someone who has both images and speaker turned off (which I have done at some times).
Maybe this is hard for someone younger to understand, but when I was first using computers, many had neither high quality graphics nor audio - that was a special advanced thing called "multimedia". It feels like something is severely wrong with the world if that is now a requirement to interact and do basic stuff online.
But if you're an ordinary user without special challenges, why would you expect anything to work after turning images off in your browser? If you're that much of a Luddite, maybe computers and technology aren't appropriate areas of interest for you to pursue.
With the browser I use now, it seems to only let you reenable images per-site and then you have to dig in settings to delete the exception.
There IS a Load Image menu item when I right click...but it does nothing! Neither does "Open image in new tab".
I think it's unfortunate if there is a "long tail" of features in a typical application these days that are not expected to work.
This extension died during one of Firefox's periodic Purges of Useful Functionality(tm), and I've been looking for another one ever since. So to some extent I see where you're coming from, but a general jihad against sound and images in the browser seems pretty radical.
When Google kills your business it doesn't help your business to assume no malice, but it may help you not feel as personally insulted, which ultimately is worth a lot to the human experience.
Humans can be totally happy living in poverty if they feel loved and validated, or totally miserable living as Kings if they feel they are surrounded by backstabbers and plotters. Intent doesn't matter to outcome, but it sure does to the way we feel about it.
Everyone I know who approaches the world with a me vs. them mentality appears to be constantly fraught with the latest pile of actors “trying to fuck them”.
It’s an angry, depressing life when you think that the teller at the grocery store is literally trying to steal from you when they accidentally double scan something.
Whether those problematic circumstances (the harm) arise due to happenstance, ignorance, negligence, malice, mischievousness, ill intentions, or any other possible reason is ancillary to the initial objective and top priority of stopping the bleeding. Intent should be of no interest to first responders (customers or decision makers, in our case) when harm has materialized.
Establishing intent might be useful or even crucial for the purposes of attribution, negotiation, legislation, punishment, etc. All those, however, are only of interest, in this context, when the company in question hasn't completely damaged their brand and the public, us, hasn't become unable to trust them.
All this to say, yes, this is a terrible situation to be in, how are we going to solve it?
Do I care if Google is doing harm to the web through being wilfully ignorant, negligent, ill-intentioned, etc.? No, not an iota; I care about solving the problem. Whether they do harm deliberately or for other reasons shouldn't matter when the priority is stopping the bleeding.
Plus financial incentive creates oh so many opportunities for things to go wrong or be outright miscommunicated it is not even funny.
Given you're the second person who I think took away that I was accusing them of malice, I probably need to reword my post a bit to reduce confusion.
Accusing them of malice is irresponsible without evidence, and if I were doing that it would undermine my credibility (which is why I'm pointing this out).
No worries at all! I interpreted your post the way you intended; and I agree fully being also in InfoSec.
Going by how you phrased your original post, you're probably more patient and/or well-intentioned than me as I'm farther along the path of attributing mistakes by big, powerful corporations to malice right away.
Just to avoid any misreading, I didn't say I thought it was malice on Google's part. My opinion (as mentioned above, is):
> I still don't think it's outright malice, but the doubling down on these horrific practices (algorithmically and opaquely destroying people) is so egregious that it doesn't really matter.
So they are not (at least in my opinion without seeing evidence to the contrary) outright malicious. But from the perspective of a site owner, I think they should be considered as such and therefore mitigations and defense should be a part of your planning (disaster recovery, etc).
To make it relatable, do you care so much for a mosquito if it's buzzing around you, disrupting your work and taking a toll on your patience? Because your SaaS is a mosquito to Google. After a certain point, you will want to kill the mosquito, and that's exactly what Google execs think so as to get to their next paycheck.
It's easy to take this position when you're very tech savvy. Imagine how many billions of less tech savvy people these kinds of blocklists are protecting.
It's very easy to imagine a different kind of article being written: "How Google and Mozilla let their users get scammed".
On the other hand, lots of chrome users most likely do trust google to protect them from phishing sites. For those ~3 billion users a false positive on some SaaS they've never heard of is a small price to pay.
It's a tricky moral question as to what level of harm to businesses is an acceptable trade off for the security of those users.
I'm a fan of Google doing their best to protect people from scammers. The real issue here is that there's no way to submit an escalated help request when they accidentally mess up. E.g. they could build a service where -- and I doubt scammers would play -- $100 (or even $1k) would escalate a help request with a 15-minute SLA. I run a business; we would have no problem paying an escalation fee.
"How Google Runs a Pay-to-Play Protection Racket"
Format your site to suit google, or they don't index it.
Add headers to your emails or google reduces deliverability.
Pay for clicks on your own company's name or google sells ads against the name of your company! They monetize navigation queries.
Run your site through amp and let google steal your traffic or google pushes your search rank down the page.
Let google steal answers to questions contained on your site and display them as answers without sending people to your site, or they deindex you (see tons of examples, but also Genius).
Let google steal your carefully curated and expensive photographs for google shopping and use them for the item from other vendors or you can't list items in google shopping.
etc etc etc... it's nothing new. So we may as well encourage them to do a more helpful job of what they were going to do anyway.
Uh, it kind of did, when internet-savvy early adopters (and developers) convinced all their friends, then family, then acquaintances, to switch to Chrome a decade ago.
I know there's probably a very large number of FOSS-only types on this site who would disagree with that assessment, and claim that they've always been in the Firefox camp, but the sheer market share of chrome clearly shows that they are the minority.
Everyone switched to chrome because they were tired of IE having too much power and not conforming to standards. Nowadays web devs often build chrome-first, using chromium-only features, and the shoe is almost on the other foot.
Because they run a popular browser and don't want their users getting scammed?
For each tech savvy person mad about this, there's 10 non-tech-savvy people completely oblivious that could get scammed by phishing sites we'd consider obvious.
Sure, they should do a better job, but that blacklist is probably millions of websites big at this point. It's the kind of thing where a perfect job is essentially impossible, and the scale means that even doing a decent job is going to be extremely difficult.
Sadly gmail and google docs are top notch products :(
We have very little real choice.
Occasionally people will pretend this is not so. In particular those who can't escape the iron grasp these companies have on the industry. Whose success depends on being in good standing with these companies. Or those whose financial interests strongly align with the fortunes of these dominant players.
I own stock in several of these companies. You could call it hypocrisy, or you could even view it as cynicism. I choose to see it as realism. I have zero influence over what the giants do, and I do have to manage my modest investments in the way that makes the most financial sense. These companies have happened to be very good investments over the last decade.
And I guess I am not alone in this.
I guess what most of us are waiting for is the regulatory bodies to take action. So we don't have to make hard choices. Governments can make a real difference. That they so far haven't made any material difference with their insubstantial mosquito bites doesn't mean we don't hold out some hope they might. One day. Even though the chances are indeed very nearly zero.
What's the worst that can happen to these companies? Losing an antitrust lawsuit? Oh please. There are a million ways to circumvent this even if the law were to come down hard on them.
They can appeal, delay, confuse and wear down entire governments. If they are patient enough they can even wait until the next election - either hoping, or greasing the skids, for a more "friendly" government.
They do have the power to curate the reality perceived by the masses. Let's not forget that.
Eventually, like any powerful industry they will have lobbyists write the laws they want, and their bought and paid for politicians drip them into legislation as innocent little riders.
We can't vote with our clicks. We really can't in any way that matters.
That being said, I also would like regulatory bodies to step in and do something about it. To level the playing field. If nothing else, to create more investment opportunities.
But investment strategy isn't so much about any underlying reality as it is about the psychology of market participants. You don't invest based on what you hope will happen, but what you believe will happen.
Google's execs and veeps don't care about small businesses, because most are career ladder climbers who went straight from elite colleges to big companies. Conformists who won't ever know what it's like to be a startup. As a group, empathy isn't a thing for them.
Accidentally unleashing a process that harms people is negligence. Not caring that you are being negligent is malice.
And you don't know what triggered it. It's possible that one of your clients was compromised or one of their customers was trying to use the system to distribute malware.
No significant security is introduced by splitting our company's properties into a myriad of separate domains.
This type of incident can be a deadly blow to a B2B SaaS company, since you are essentially taking out an uptime-sensitive service that often has downtime penalties written into a contract. Whether this counts as downtime will depend on how exactly the availability definition is worded.
In this case, the subdomain they banned was xxx.cloudfront.net, and we know they would not block that whole domain.
We might consider that approach in the future, but I foresee complications in the setup.
This will sound crass, but it reminds me of the Soviets cutting off the food supply to millions of people over the winter due to industrial restructuring, and brushing it off as "collateral damage".
I'd agree. The problem is there is no financial or regulatory incentive to do the right thing here.
It has zero immediate impact on their bottom line to have things work in the current fashion, and the longer term damage to their reputation etc. is much harder to quantify.
There's no incentive for them to fix this, so why would they?
My assessment might be: "nobody in power has time to prevent the myriad of problems happening all the time, even though they handle the majority, with help from businesses, government agencies, etc., and given the huge impact of some problems on society as a whole, they may feel as though they're riding in the front seat of a roller coaster, unaware of your single voice among billions down on the ground."
“If only the czar knew!”
Also, to your point, an organization becomes something else than the sum of its parts, especially the bigger it gets.
Google can be a malicious actor without any individual in it necessarily acting maliciously.
In their defense, they acknowledged it and made some changes. I can't find the blog post now, so I'm going from memory. But that only happened because he got lucky and it blew up on HN/Twitter and got the attention of leadership at DO. How many people have been destroyed in silence?
In my case, Digital Ocean only allows one payment card at a time and my customer (for whom the services were running) provided me with a card that was charged directly.
A couple of months later my customer forgot that he had provided the card. He didn't recognize "Digital Ocean", thought he had been hacked (which has happened to him before), called the bank, and filed a chargeback.
When DO got the charge back they emailed me and also completely locked my account so I was totally unable to access the UI or API. I didn't find out about the locked account until the next day. I responded to the email immediately, and called my customer, who apologized and called the bank to reverse the chargeback. I was as responsive as they could have asked for.
The next day I needed to open a port in the firewall for a developer to do some work. I was greeted with the dreaded "account locked" screen. I emailed them, begging and pleading with them to unblock my account. They responded that they would not unlock the account until the chargeback reversal had cleared. Research showed that it can take weeks for that to happen.
I emailed again explaining that this was totally unacceptable. It is not ok to have to tell your client "yeah sorry I can't open that firewall port for your developer because my account is locked. Might be a couple of weeks."
After a day or so, they finally responded and unlocked my account. Fortunately they didn't terminate my droplets, but I wonder what would have happened if I had already started using object storage as I had been planning. This was all over about $30 by the way.
After that terrifying experience, I decided staying on DO was just too risky. Linode's pricing is nearly identical and they have mostly the same features. Prior to launching my new infrastructure I emailed their support asking about their policy. They do not lock accounts unless the person is long-term unresponsive or has a history of abuse.
I've talked with Linode support several times and they've always been great. They're my go to now.
It does seem that they're unfortunately borrowing the playbook from AWS/Azure/GCP with respect to over-automation as they scale. More old-school support could have been their differentiator, but it seems they're going for growth. They're getting close to the razor's edge.
I no longer recommend them for any production usage.
To my mind, one of the big questions about mega corporations in the internet service space is whether this criterion for determining what can be launched is sufficient. It's certainly not the only criterion possible: contrast the standard for a US criminal trial, which attempts to evaluate "beyond a reasonable doubt" (i.e. tuned to be tolerant of false negatives in the hope of minimizing false positives). But Google's criterion is unlikely to change without outside influence, because on average, companies that use this criterion will get products to market faster than companies that play more conservatively.
The problem is that there are false positives that cause substantial harm to others, and Google leaves the affected parties little path to fix them or add exceptions, in the name of minimizing overhead.
Google gets all of the benefit of the feature in their product, and the cost of the negatives is an externality borne by someone else that they shrug off and do nothing to mitigate.
By itself, it won't solve the problem... The immediate reaction could be to address the requirement by resolving issues rapidly to "issue closed: no change." But it could be a piece of a bigger solution.
I once created a location-based file-transfer service called quack.space, very similar to Snapdrop, except several years before they existed. Unfortunately the idiot algorithms at Chrome blocked it, throwing up a big message that the site might contain malware. That was the end of it.
I had several thousand users at one point, thought that one day I might be able to monetize it with e.g. location based ads or some other such, but Google wiped that out in a heartbeat with a goddamn Chrome update.
People worry about AI getting smart enough to take over humans. I worry about the opposite. AI is too stupid today and is being put in charge of things that humans should be in charge of.
Much less control of the Internet.
One lesson is use IP and not the Web.
Linode once gave me 48 hours to respond (with threats to take down the site) because a URL was falsely flagged by netcraft based on what looked like an automated security scan of software I was hosting. Granted, they did not take any action and dropped the report once I pointed out that it was bullshit, but I do not consider this great service. If there is no real evidence of wrongdoing I should not be receiving ultimatums.
You are only focusing on the negatives while completely ignoring the positives here.
Here are a few questions to consider that may give you better perspective:
1) Do you know the magnitude of financial and psychological damage caused by malware, phishing, etc on the web?
2) Do you believe that it is possible to have a human review every piece of automation generated malware on the internet?
3) Do you believe it is possible to build an automated system that provides value with zero false positives?
4) Do you think an open standards body or government bureau would perform any better at implementing protections from the threats described here?
But: Do you believe there is no room for improvement in an automated, opaque system with clear evidence of malfunction, that quite succinctly decides if hundreds of people go unemployed when their company tanks for nothing other than an incorrectly set threshold on some algorithm?
That is the real question to ask. Google is nowhere near its limits in terms of capability, as is made abundantly clear by its extremely comfortable financial position.
I don't agree with the premise of your last question. It's not Google's responsibility to protect the internet and provide a free anti-abuse database for other browser vendors, and yet Google does do this at significant cost. The fact that they don't do it perfectly is not a rationale for killing it or providing it with infinite resources.
I think that's a naive perspective. Google did not create the database to be nice to other vendors, and it also did not make it available to them for that purpose.
An Internet-wide blacklist represents strategic leverage over competitors (or maybe even dissonant voices, should the need arise) and a massive source of data collection probe points. These facts were certainly brought up internally and deemed worth the risk when the massive legal liability of this product was assessed.
Therefore, because of the pervasiveness of this system, it needs to be handled responsibly. They are not doing anyone a favor by making sure it functions correctly. Google is well aware of this, because they don't need regulators and lawmakers gaining yet another excuse to try and dismantle them.
Yes, yes I do. Banks do it for their customers today at scale.
With banks, they only have to do that for their customers, from whom they've at least had a chance of getting money. But Google would need to provide it to every site that gets blocked (as malware sites pretend to be legitimate), which doesn't scale the same way.
> Around an hour later, and before we had finished moving customers out of that CDN, our site was cleared from the GSB database. I received an automated email confirming that the review had been successful around 2 hours after that fact. No clarification was given about what caused the problem in the first place.
Yes, yes, Google Safe Browsing can use its power to wipe you off the internet, and when it encounters a positive hit (false or true!) it blocks quite broadly. But that breadth is exactly what is expected for a solution like this to work, and it will do it again if the same files are hosted under a new URL, as soon as it detects the problem again.
> If your site has actually been hacked, fix the issue (i.e. delete offending content or hacked pages) and then request a security review.
The recommended steps for dealing with the issue listed in the article were not what we actually used, just a suggested process I came up with when putting the article together. Clearly, if the report you receive from Google Search Console is correct and actually contains malware URLs, the right way to handle the situation is to fix the issue before submitting it for review.
Think about the government of the jurisdiction Google is in deciding that it wants to force Google to shut down certain websites: ones corresponding to apps it has already had Google and Apple ban from their app stores, for "national security" or whatever.
This is one mechanism for achieving that.
We receive email for our customers and a portion of that is spam (given the nature of email). Google decided out of the blue to mark our attachment S3 bucket as dangerous, because of one malicious file.
What's most interesting is that the bucket is private, so the only way they could identify that there is something malicious at a URL is if someone downloads it using Chrome. I'm assuming they make this decision based on some database of checksums.
To mitigate, we now operate a number of proxies in front of the bucket, so we can quickly replace any that get marked as dangerous. We also now programmatically monitor presence of our domains in Google's "dangerous site" database (they have APIs for this).
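As a sketch of the monitoring half of that setup: Google's Safe Browsing Lookup API (v4) lets you POST a list of URLs to `threatMatches:find` and get back any matches. The snippet below is a minimal, hedged example, assuming you have an API key; the `clientId` value and function names are made up for illustration, while the endpoint and JSON field names follow the public v4 API.

```python
import json
import urllib.request

GSB_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(urls):
    """Build a Safe Browsing v4 threatMatches:find request body."""
    return {
        "client": {"clientId": "example-monitor", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

def check_urls(api_key, urls):
    """Return the list of flagged URLs; an empty list means all clear."""
    body = json.dumps(build_lookup_request(urls)).encode()
    req = urllib.request.Request(
        f"{GSB_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        matches = json.load(resp).get("matches", [])
    return [m["threat"]["url"] for m in matches]
```

Running `check_urls` on a schedule against your proxy domains gives you early warning before customers start seeing the red interstitial.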
0: https://www.enchant.com - software for better customer service
It would seem surprising, but it's the other possibility.
Doesn't Chrome upload everything downloaded to VirusTotal (a Google product)?
It doesn't, unless you opt into Safe Browsing's "Enhanced Protection" or enable "Help improve security on the web for everyone" under "Standard Protection". Both are off by default, IIRC. Without them, it periodically downloads what amounts to a Bloom filter of "potentially unsafe" URLs/domains.
On the other hand, GMail and GDrive do run checks via VirusTotal, as far as we know, which means the OP's case may have been caused by some recipients having their incoming mail automatically scanned. It's similar for Microsoft's version (FOPE users provide input for Defender SmartScreen), at least last time I checked.
TL;DR is you download a chunk of SHA-256 hashes and check if the hash for your URL is there. There is of course the chance of collision but that is minuscule.
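The prefix scheme above can be sketched in a few lines. This is an illustrative toy, not Chrome's actual client: real clients canonicalize the URL and hash several host/path expressions first, and a prefix hit only triggers a full-hash query to the server, not an immediate block.

```python
import hashlib

PREFIX_LEN = 4  # Safe Browsing commonly distributes 4-byte prefixes

def url_hash(url: str) -> bytes:
    # Real clients canonicalize the URL and hash multiple host/path
    # combinations; this sketch just hashes the raw string.
    return hashlib.sha256(url.encode()).digest()

def maybe_unsafe(url: str, local_prefixes: set) -> bool:
    """A prefix hit means "ask the server for full hashes", not "blocked"."""
    return url_hash(url)[:PREFIX_LEN] in local_prefixes

# toy local database seeded with one known-bad URL's prefix
bad = "http://malware.example/payload.exe"
db = {url_hash(bad)[:PREFIX_LEN]}
```

With a 4-byte prefix there are only 2^32 buckets, so the local check deliberately over-matches; the follow-up full-hash lookup is what rules out collisions.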
Other people downloading the same file would get the same "protection", but in this case this goes a step further:
The S3 bucket itself then gets blacklisted. As it was a private bucket, one of the ways this could happen is that once Chrome matched the blacklisted file, it reported back to Google the URL (the S3 bucket) the file was downloaded from.
Hosting a virus on a domain and then downloading it a few times with different chrome installations sounds like a good way to get the whole domain blacklisted...
It normally isn't that much of a challenge to mitigate these issues, but other things get priority. Companies end up leaving openings for XSS attacks and similar bugs too.
I'm actually not telling the truth but at what point did you realize that? And what would be the implications if Google actually did release a service like this? It feels a bit like racketeering.
But then, I realized: 1). I'd be integrating further into Google because of a problem they created (racketeering), and 2). They seem to really dislike having paying customers (even if they made it, they'd kill it before long).
They actually blacklist you even faster, because of course they have in their database that you have the now-evil-file.
or does GSB not ban the entire parent domain when a subdomain has malicious content?
Would be great if our overlords at least publish the overzealous rules we need to abide by.
This is the same reason the block on TFA's company did not cause an outage for everyone using CloudFront: GSB does not block an entire parent domain if it can be shown the content under its subdomains is distinct. The same goes for anyone using S3, Azure equivalents, and so on.
e.g. u1234.dropboxusercontent.com is treated as a unique domain just like u1234-dropboxusercontent.com would be.
Edit: here we go, from another comment - the Public Suffix List: https://publicsuffix.org/
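To illustrate how the Public Suffix List feeds into this: blocking is scoped to the "registrable domain", i.e. one label below the longest matching public suffix. The sketch below is hypothetical and hard-codes a tiny subset of the real list (which has thousands of entries); in production you'd load the actual PSL or use a library that bundles it.

```python
# Tiny hard-coded subset of the Public Suffix List, for illustration only.
PUBLIC_SUFFIXES = {"com", "net", "co.uk", "cloudfront.net", "dropboxusercontent.com"}

def registrable_domain(hostname: str) -> str:
    """Return the longest matching public suffix plus one label --
    the unit that GSB-style blocking treats as a single 'site'."""
    labels = hostname.lower().split(".")
    # Candidates are checked longest-first, so the first match wins.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return hostname
```

Because `cloudfront.net` is itself on the list, `xxx.cloudfront.net` is its own registrable domain and can be blocked without touching other CloudFront customers, whereas `sub.example.com` collapses to `example.com`.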
Seems I might have just hit the limit? ... Nope, 8.1MB zip file also wasn't sent anywhere.
I just checked the nginx access logs - both the 32MB and the 8MB zip files have been accessed only once (both were created only for this experiment).
However, phishing detection and blocking is not a fun game to be in. You can't work with warning periods or anything like that, phishing websites are stood up and immediately active, so you have to act within minutes to block them for your users. Legitimate websites are often compromised to serve phishing / malicious content in subdirectories, including very high-level domains like governments. Reliable phishing detection is hard, automatically detecting when something has been cleaned up is even harder.
Having said all that, a company like Google with all of its user telemetry should have a better chance at semi-automatically preventing high-profile false positives by creating an internal review feed of things that were recently blocked but warrant a second look (like in this case). It should be possible while still allowing the automated blocking verdicts to be propagated immediately. Google Safe Browsing is an opaque product / team, and its importance to Google was perhaps represented by the fact that Safe Browsing was inactive on Android for more than a year and nobody at Google noticed: https://www.zdnet.com/article/mobile-chrome-safari-and-firef...
Lastly, as a business owner, it comes down to this: Always have a plan B and C. Register as many domains of your brandname as you can (for web, email, whatever other purpose), split things up to limit blast radius (e.g. employee emails not on your corporate domain maybe, API on subdomain, user-generated content on a completely separate domain) and don't use external services (CDN) so you can stay in control.