> ...after taking into consideration the information that you have provided, we have confirmed that we are unable to reinstate your publisher account.
I hate it when euphemism slides into flat-out lying like this. They are not "unable" to reinstate the account; in fact, they are the only party able to reinstate it, which is why the account holder was contacting them instead of someone else. They are "unwilling" to reinstate the account.
I know it's all just bullshit but it bothers me anyway.
It's not lying because there is some implicit information in the "we are unable" statement. What is implied in statements like this is that they're unable due to their policies.
If not for implications like this, almost every single use of "unable" (or "can't", for that matter) ever in a sentence would be "lying" unless something is against the laws of physics.
I disagree. If you buy a product from me with a 30-day warranty and it breaks on day 31 and you contact me, I will not give you a refund because: a) I haven't agreed to do so, b) I'm not bound to do so, and c) I don't think it's warranted in this case.
But I'm not "unable" to issue a refund.
In another case I may say "hm it's out of warranty but you know what, it really shouldn't have broken like that and you're a good customer, so I'll give a refund anyway." I can do that because I am able to issue a refund.
As for their policy, they are both the authors and interpreters of their own policy, so the "my hands are tied" argument is pure BS. If they are unable to reinstate accounts, why do they have an appeals process at all?
These are all examples where someone physically could do the thing, but can't because of other constraints they are bound by, whatever those constraints are.
If Google chose less cowardly wording, I'm sure someone would just post saying Google is an arrogant, cocky bastard. No matter what, someone will find some point to complain about. Human nature.
"People will criticize no matter what you do" is a great line. It gets used a lot - not so much here, I've noticed. Probably because it doesn't address the particulars of any criticism, and instead provides a nihilistic view of the world where "real improvement" is impossible.
"We're unable to" shifts responsibility to something vague, unspecific. It's like the "run around" only with this phrase you've been redirected to /dev/null. I'm glad the OP said something.
It's shifting the blame to you - something in their policies prevents them from reinstating your account without allowing you to break the contract. There's no ulterior motive, it's simply the best wording for saying that you still violate some policy.
The original email is context for the review rejection email by saying
> In your case, we have detected invalid traffic or activity on your account
The rejection email isn't isolated, so when it says "unable" it's conveying the implied message "unable to reinstate your publisher account <without bending the rules because we think our ad fraud detection systems were correct>".
Well, that is, strictly speaking, a false statement. I agree the discussion has been civil; but consider that supporting Google's position requires adding phrases to their statement which fundamentally change the statement's meaning.
I'm not sure what it's called, something like the opposite of a strawman attack. It is indeed simple to defend a statement when you give yourself permission to rewrite it in a post hoc fashion. It's an argument I find uncompelling, no matter how civilly it's presented.
> The only takeaway is that there is no obviously correct wording for google to make when wielding the banhammer.
This is true only because ‘correct’ is both a relative term and a black-and-white one, which doesn’t apply to the situation we are discussing. You have to decide what you value, and perfection is not an option, so there is indeed no ‘correct’ wording, but that doesn’t mean there aren’t better wordings, nor does it make the one they chose OK.
It seems to me that there are many ways Google could be more honest and less opaque.
It seems like quite a few people here (myself included) value those traits and would like to see Google adopt them.
That won’t immunize them from criticism or make everyone agree with their judgements, but it would still be better.
Here, I think the concern is that the alternative wordings considered/proposed are possibly open to just as much criticism and negative readings as the original. That is, others consider it difficult to improve here.
If it's based on a real policy that can be verified by others, then there is no ambiguity here. "We reviewed your case, and based on our policy, we cannot reinstate your account. Because if we did, we'd be the ones violating our policy, and someone (including you) could then actually sue us for unfair business practices, rather than merely complaining about overly restrictive policies that are blindly enforced through a system that is hard to penetrate".
No lying, no ambiguity. They can't reinstate this account.
Should they change their policy so that after that change, they can? Maybe, but good luck getting them to.
They can certainly change it, thus changing what they can do, in the future, but they can't make exceptions. A policy, written down, becomes a contract as far as the law's concerned. Contract changes are fine (provided all parties involved are then allowed to cancel the contract), but contract violation is not.
There are plenty of policies that state that they may be changed by one side or that exceptions may be made in cases where the wording doesn’t cover the intent.
Pretty sure if you file the proper paperwork with google's legal department, you can get a copy of the exact text in question. The downside of business in the US: companies are by law required to make documents available on request, but they are in no way required to make that easy.
Getting the text of the policy isn't the whole issue. As you pointed out both the policy and method of verification need to exist, or there is ambiguity.
Given the information in the article, you can't verify any of the traffic or actions that were supposed to break policy. After all Google wrote, "We understand that you may want to know more about the issues that we’ve detected. Because this information could be used to circumvent our proprietary detection system, we’re unable to provide our publishers with information about specific account activity."
"We understand that you may want to know more about the crimes that we’ve detected. Because this information could be used to circumvent our proprietary policing system, we’re unable to provide defendants with information about alleged criminal activity."
And yet no one, including people in this thread who are claiming that the intent of Google's wording is to deceive, are actually the slightest bit unclear about what Google means.
It isn't about wanting to deceive; it is about their seeming unwillingness to admit it. These are related, but not at all the same. It is the same kind of semantic difference people think is really important when discussing scenarios like "you didn't misplace my item by accident, you discarded it on purpose and we both know that: I want to hear you say it out loud".
But, as others have mentioned, it's extremely standard to use phrasing like "can't do that" and "unable to do that" like this. It's simply not reasonable to interpret this messaging as intended to claim that it's physically impossible to do the thing. I totally understand being upset with Google's policy and decision here, but this particular criticism about the wording they used is simply disingenuous.
We all know what it means because we are familiar with numerous other cases of such dissembling. That does not make it any less dishonest; it just means they are members of a big, dishonest club.
It is not how everyone uses language. I do not use language that way. If you used language that way, you would be dishonest. It is how dishonest corporations use language.
That we are aware they are dishonest does not make them less so.
It’s not dishonest because it is language used with a certain intended meaning and that intended meaning is clear to everyone including the people complaining about it. That’s precisely what normal, honest usage of language is: you have an intended meaning and you effectively convey the intended meaning.
Brazen lying does convey an intended meaning; as an action that speaks louder than words, it always conveys the same one: "You're a fucking peasant who has to take what you get."
Sometimes that's true. Most times they hope you will believe it's true. Many people have been trained to believe it, true or not.
A business concerned you may take your custom elsewhere will not insult you this way. It is the mark of a monopolist or a crook.
I will make a point of remembering your relationship with truth.
And deliberately so, to defuse the situation and not anger the customer further. The "we are" is of the same origin. It creates the assumption that the customer is dealing with "some Google team" that made the decision, when in fact it was one person or even just a stupid algorithm.
There's an important distinction between externally and internally imposed restrictions. (In the case of killing, hopefully both restrictions are in play.)
The language is intentionally deflecting responsibility by obscuring the source of the restriction. They're trying to reduce argument by giving the impression responsibility exists in some other unidentified channel.
Yes, but without those reasons these are just ambiguous unprovable statements.
Without reasoning we cannot tell if the auxiliary verb is even correct.
“I can’t eat meat anymore because it’s illegal”, really should read “I shouldn’t eat meat anymore” as although it’s a bad idea you’re still physically capable of eating meat.
I think the issue we’re talking about is ambiguity, and this really just emphasises the point.
It isn't really lying from a personal perspective either. The "person" writing the response is unable to reinstate the account as a matter of policy, which is a valid reason for forbidding something in a civilized society. Likewise, by matter of a different policy, the author is unable to comment on the specifics of the suspension.
It could be as simple as "Google management reviewed this app and decided it cuts into the bottom line of some service offered by Google." If the low-level person writing emails is aware of this fact, then it would be reasonable to understand why they are unable to share the true motivation for suspension. What is more likely is that the low-level email writer looks up the account and the reason for suspension listed is, verbatim, "Ad fraud - <3 Mgmt." Then the low-level person would not be stretching the truth at all when they say they are unable to reinstate the account and are unable to provide more information.
> It isn't really lying from a personal perspective either. The "person" writing the response
I agree that the person writing isn't lying, but the language is deflecting responsibility away from management choices. The clearest wording from the perspective of the writer would be "I am unable because we are unwilling...".
I agree with those who feel it's important to keep a distinction between externally imposed limitations and internally imposed limitations.
They are speaking on behalf of Google. So what is effectively said is "Google is unable to ..." which is clearly BS.
I mean it is like "I'm unable to return the money [as it would be against my policy to do so]".
Though it is even worse than that as Google wouldn't even say which policy was violated. Full Kafka. The fact that people still put up with this is a clear evidence of Google's monopoly position.
This is like being pedantic about usage of the word "literally"...
Just because this usage of "unable" doesn't match your strict personal definition doesn't mean they are wrong. People generally agree and understand what "unable" means in this context, so you're kinda SOL.
You make valid points. I believe the contradiction here lies in the exact context. The individual shop employee might be "unable" to issue a refund, due to the policy set in place by the managers of the shop. The shop as a whole is certainly able to change the policy and provide a refund (unless it is out of money, of course). The individual phrasing the message might be accurate. But the customer is not interested in whether the person in front of him is able to issue the refund. He is usually interested in getting the refund from the shop as a whole.
Similar arguments apply to the Google store: the person, or more likely the software system, that composed that email might be unable to reinstate the account, but Google most definitely is able. Google is merely unwilling to do so.
If I were to ask you if I could get a refund for an item out of warranty, what language would you use to refuse me? I'm struggling to come up with a response that doesn't use the terms "unable" or "can't" that wouldn't come across as fairly rude.
You would be lying - and people will call you out on this, because they will find out that you have in fact issued refunds for products with expired warranties.
They could write "We generally do not issue refunds for items outside of warranty" and they're back to the statement being just one level more vague, and thus more true.
But in reality, both of those mean the same thing. Writing "We don't issue refunds outside of warranty periods" has an understood "excluding exceptional circumstances". Everyone knows it's there. Only people who are pedantic to the point of uselessness will argue about this, and you'll find out that the courts generally have little sympathy for that.
All human languages so far are inexact. Math is probably the most exact language we've invented for communicating ideas, but languages that the general public knows are all inexact.
If the correct thing is communicated unambiguously, that's already a success, even if a pedantic person can say "I know you mean that you don't 'generally' do it, so the absolute there is a lie", the fact that the pedant can point it out means they absolutely understood what was being conveyed correctly.
This level of semantics is indeed pointless - to clarify, your comment supports both what you wrote and Google's use of the "unable" wording in their response; they are unable to reinstate your account <without introducing liability to lawsuits regarding unfair business practices> <and except in exceptional circumstances>.
The person responding at a big corporation is often unable to, for practical purposes, as a result of policies, except in exceptional circumstances.
When you write in and ask them to please steal a million dollars and give it to you: while they might be able to figure out a way to steal it and give it to you, for policy and job-performance reasons they are unable to. They say, "I'm unable to do that for you." Who cares if they somehow could? We all understand they have chosen not to.
We are unable to reinstate your account = person responding does not have policy authority to reinstate your account and the exceptional circumstance was not identified.
That feeling is specifically because we all know that depersonalizing and speaking passively 'softens' the blow.
"As your product is out of warranty we will not be issuing a refund."
Sounds rude, right? Because it draws attention to the fact that the decision is, at some level, completely arbitrary. But if you have your left hand write the policy and your right hand enforce it, then you can say:
"I'm sorry but I'm unable to issue a refund because your product is out of warranty."
Makes it sound like that's just how the world works, doesn't it? And you come away feeling like "aww man they can't" instead of "they won't, money grubbing assholes." Customer service is, at its core, about managing emotions and often delivering bad news in a way that preserves the company's image.
> Unfortunately the warranty on your product has expired and we do not issue refunds for products outside the warranty period.
If you pressed me I would admit that yes, in some exceptional cases we issue refunds for products outside of warranty but we're not doing so in this case because [whatever, the product broken due to misuse, etc.].
To say I am not issuing a refund or that I do not issue refunds on out-of-warranty is truthful or reasonably so. It's perfectly possible to communicate that without being rude or claiming to be "unable."
A pet peeve of mine is the deferral and personification of "policy". Policy is just your opinion that you happen to have written down in the past. It holds no power over you; you write the policy! It's not like US law, which, while also just words on paper, is enforced (and often chosen) by other people over you. Me deferring to the law (vs. my own opinion) has meaning because they can be different. The way we really know this is that we repeatedly see policy broken all the time: again, because it's just a pretend separate agent, not an actual entity that wields power over you. It does in fact ultimately just serve to disguise an active action as a passive one. "Oh, I checked the book of rules (that I wrote) and it said I can't let you do that. Shucks. Man, that book, it's a tough negotiator. Nothing we can do, I'm afraid." I think it is their right to write the rules, but just own up to it. Say "we aren't doing it because we don't want to"; that's the truth, because if they did want to, they would, regardless of the "policy".
The "policy" is indeed the law, in the form of a contract - you or Google breaking that contract could mean you end up in a civil lawsuit that would cost you tens or hundreds of thousands of dollars assuming it isn't frivolous. Instead of actually taking everyone to court and scaring off developers, they simply give themselves the option to terminate the contract if you break it instead of going to court.
The thing is that they can't bend the policy for certain players without being sued for unfair business practices/anti-competitive behavior, which is why Google has to enforce it on everyone if they want to enforce it on anyone.
Plenty of companies get sweetheart deals from Google and Apple. Your contract has no bearing on another company’s. That’s why I can’t sue Apple for giving Netflix and Amazon a more favorable deal than 30%. It’s also why I can’t sue Target for making an exception and letting someone else return something one day later than their return policy explicitly allows but not doing that for me. Furthermore, most contracts you sign (click) with Google and such usually have a catch-all clause that says everything is up to their discretion and they can change their minds on most of these subjective judgement calls.
You're right, but I think you're not doing justice to the OP's complaint.
You're right that this isn't solely a faceless corporate thing. People say "I can't" when they mean "I won't" for the same reasons Google did. We even ask "can you watch my kids?" Again, the same reasons drive the language. It lets a false but face-saving implication stand: you will pick up my kids if you can, and if you won't, then I'll assume you couldn't.
We also "ask" our employees or waitresses to do things, even though it's technically an order.
All this is good and fine. Language is supposed to embed cultural niceties that speak to our values and smooth relations between people.
The Orwellian shit comes in when these cross from figures of speech into euphemization, and the Orwellian point is that these things run deep. A bank manager is literally unaware of where her own prerogatives, organisational norms, hard corporate policies and regulatory rules begin and end. They are constantly implying (and thinking) that whatever is annoying/abusing their customers is not because of them. Usually it is.
Is "can you watch my kids?" really the same as "are you able to watch my kids?" or an idiom in which "can" does not have precisely its stand-alone meaning?
Well... I don't think you can nail down language, especially idiomatic phrases, to that level of specificity.
"Can you X" literally/etymologically means "are you able." If you were to translate, you'd translate to "will you." I don't think the etymology is coincidence, or lost on an average speaker. "Can you" feels softer and more polite. It reflects something about how people want to interact with each other. You have to leave work early because your mom "couldn't" watch your kids, not because she refused to.
Incidentally, in Ireland we do say "will you," "would you" and "are you able to" more commonly than "can you," which sounds slightly American/international to my ear. Oddly (or not), "will you" is (IMO, locally) more informal. You'd say it to friends, when making trivial (pass the salt) requests. "Would you not" is also (I think) an Irish choice of words. It's used to make suggestions, rather than requests.
These things don't bother us until/unless they're coopted into a different context, and used tactically. Going back to the original point about corporate-drone speak.... The "can't" vs "won't" language is used in the first person to obscure responsibility. I didn't call it Orwellian because it's evil or onerous. I called it Orwellian because it affects culture/thought deeply. The language helps maintain an impenetrable ambiguity, implying that every contentious decision is actually not a decision. It's dictated by regulators, or at least by "corporate."
Also, in context... it's a (passive?) aggressive way to cut off a conversation. The equivalent of "Good day, Sir!"
You aren’t wrong, but (taking the corporate entity in question as a monolith, which is fair from the outside) “unwilling” is a much more honest word choice in cases like this since it clearly communicates that there was a real practical decision that could feasibly have gone either way. “Unable” lines up better with things that are infeasible, e.g. Apple can’t recover the data on an encrypted hard drive without the password or recovery key because it’s literally impossible or would at least require nation-state level computing resources to have a realistic shot at cracking even a weak password.
“Unable” is dishonest because it passes responsibility beyond the veil of the typical user’s ignorance. We’re so used to this sort of language that we’re conditioned to allow it even when we know it’s bullshit. It shuts down discussion and allows its wielder (inevitably a corporation) to avoid explaining itself. In the developed Western world we have a big problem with letting corporations do whatever the hell they want without explaining themselves, so I don’t think we should let them get away with this sort of thing anymore, and not being satisfied with mealy-mouthed evasion is one of the first steps down that road.
Well, I am unable to give someone your money because you won't agree. It's not against the laws of physics, but I still can't do it. Google can do it, they just don't want to.
Hell, they can even change their policies if they want, so they aren't really "unable".
Yes, but it is a dodge. Like an apology wrapped in an excuse. I read this post and I made a mental note to try to never say I am "unable" when I am unwilling. It's corporate speak that I have used myself.
But therein lies the lie. When they use 'unable' they are implying that an external factor is blocking them, so arguing further is not possible and unfruitful.
As a cashier, I am certainly "able to" just hand you the goods and let you leave without paying, but in reality due to laws, regulations and good morals I am unable to do that.
As a cashier you are not empowered to make this decision. You are not "able to" violate store policy this way and keep your job. If a store owner or manager wishes to give someone a product for free or issue a full refund, yes they are "able to" do that.
The rep in TFA uses "we," referring to Google. Google is able to reinstate accounts, and The Google Ad Traffic Quality Team is able to reinstate accounts depending on their judgement of whether someone is violating policy. If they are not able to reinstate accounts, can you explain to me why they're adjudicating account ban appeals? Do they say "no" to everyone?
The key point here is that the agent(s) are responsible for interpreting the policy. They have decided that Droidscript violates their policy, and I personally have no opinion about that. But to imply that it's "out of [our] hands" is dishonest.
Just say "upon review we've determined that your app violates our policies so we will not be reinstating your account."
I think you're overcomplicating things. Fine, substitute "business owner" for cashier, and the point stands. I am "able to" just hand you the goods, but my policies and morals prohibit me from doing so. They are abbreviating the longer statement, "we are unable to reinstate droidscript at this time without significantly redefining our policies"
It's reasonable to say you're unable to do something because it's against the law and doing it would make you a criminal. Equally, it's fair to say you 'can't' do something that would go against your morals.
That is not equivalent to what's happening here. There is no law preventing Google reinstating the account, and corporations don't have morals because they're not people. The only thing preventing them doing it is that the employees involved choose not to.
It would be correct for the Google employee to say "I am unable to..." because that is against their employer's policies. But they say "We are unable to...", "we" meaning Google, and Google is certainly not unable to reinstate the dev's account; they are unwilling to do so, which is what "against Google's policy" means.
No, you _will_ not do that, and made that decision so long ago it feels inviolable to you.
When someone points a gun at a cashier and says "this is a robbery and I'm gonna shoot you if you move a muscle," the cashier usually uses their ability to hold still out of concern for their safety.
We've gone from a world where we can run any software on our devices, to one where Apple and Google tell us how we can make money, what we can run, and what speech is permitted.
It's Orwellian, but with corporate greed instead of nation state fascism.
For a startup community, where founders will often be in a customer support role, the wording and tone of user communications (especially those with an unwelcome message) is often top-of-mind, so a language discussion can be relevant.
I’ve sometimes spent hours crafting a single reply to politely decline a request. It’s not even proportional to the prospective importance of the customer, it’s a matter of respect for all potential users.
Also, sometimes, investigating the issue to determine the right words has revealed hitherto unknown problems, uncovered new possibilities, highlighted alternative solutions that may be palatable, or even changed the outcome entirely.
Discussing the language used is actually thereby more productive & constructive than simply piling on Google for being careless, callous and pompous yet again.
Yeah I don’t get why it’s top comment either, I just meant to vent about an annoyance that’s frankly not material to the question at hand. I was unable to refrain from laughing at your quip incidentally.
Plain English is a wonderful thing. I wish it were used more often.
Non-plain English is usually a flag that the person you’re dealing with is not smart.
The usage of plain English words is something we do strive to see more often. We do hope that it can be utilised more often going forward. Unfortunately at this time we advise that the occurrence of non-plain English may indicate a violation of our MTC (minimal thought capacity) guidelines.
> Non-plain English is usually a flag that the person you’re dealing with is not smart.
I don't think it's that they're not smart. It's that they have a separate agenda, often deflecting responsibility. People often use this sort of indirect wording even without consciously realizing it.
This stuff is on the rise. I used to be able to resolve issues with customer service, and they would admit fault. It’s becoming a liability shield where no one accepts fault.
The comments created by this “unable” vs. “unwilling” matter exemplify but one reason I love HN. I know of nowhere else I can read intelligent back-and-forth arguments in philosophy and linguistics about a Google Ads email, one I find important because I, too, have received such emails from big G over the past decade... And, like DroidScript, the experience was devastating for my business. But the more painful, the better. Failure is feedback. And because of my painful past experiences with Google, it’s impossible for me to forget. It’s a case study in Diversification vs. Focus. The downside of Focus is... “A business of one is a business of none.”
Yes the wording is intended to soften the interaction.
They use “we” to refer to the team you are interacting with, with emphasis on being bound by the company policy/process.
You may instead read “we” as the company itself, which sets its own policy/process.
> In your case, we have detected invalid traffic or activity on your account (Publisher Code: pub-********) and as a result it has been disabled. Because of this, the ability to serve and monetise through all products which depend on AdSense will also be disabled (for example, AdMob and YouTube).
> We understand that you may want to know more about the issues that we’ve detected. Because this information could be used to circumvent our proprietary detection system, we’re unable to provide our publishers with information about specific account activity.
> Once you’ve made changes to your site(s), app(s) or channel(s) to comply with our programme policies and terms of service, you can reach out to us using our appeal process. Please make sure that you provide a complete analysis of your traffic or other reasons that may have led to invalid activity in your appeal.
I realize that the term Kafka-esque is a bit overused nowadays... but this sounds exactly like a plot summary of Der Process.
I used to work detecting ad fraud. Publishers would do bad things, call in, and try to get their account rep to get details.
Obviously I can't say "of the last 2500 ad clicks zero of them had any mouse movement over the ad before the click event" because then the publisher obviously just fixes their fraud software.
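To make the kind of signal described above concrete, here's a hypothetical sketch. The event names, data shape, and the 0.98 threshold are all invented for illustration; real fraud-detection systems are far more elaborate than this.

```python
def suspicious_click_ratio(events):
    """events: chronological list of dicts like {"type": "click"} or
    {"type": "mousemove_over_ad"}. Returns the fraction of ad clicks
    that had no mouse movement over the ad beforehand."""
    clicks = 0
    clicks_without_movement = 0
    seen_movement = False
    for e in events:
        if e["type"] == "mousemove_over_ad":
            seen_movement = True
        elif e["type"] == "click":
            clicks += 1
            if not seen_movement:
                clicks_without_movement += 1
            seen_movement = False  # movement must precede each click
    return clicks_without_movement / clicks if clicks else 0.0

def looks_fraudulent(events, threshold=0.98):
    """Flag a publisher whose clicks almost never follow real mouse movement."""
    return suspicious_click_ratio(events) >= threshold
```

The point of the secrecy is exactly this: if the publisher learned that this ratio is the tell, their next bot would simply emit a fake `mousemove_over_ad` event before every click.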
This isn't specific to Google or even advertising. Every company has figured out when dealing with abuse and fraud sharing the minimum amount of information is beneficial to the health of the ecosystem as a whole.
In a case like that, sure. But they don't provide any information even when they want the publisher to make a change. Our Adsense account once got suspended because ads were appearing on pages that contained user-entered search keywords. Occasionally users would enter keywords that Google considered 'naughty' and didn't want their ads appearing alongside. If they'd just told us that, we could have added a filter to not show ads with the list of keywords they had a problem with. Instead it was an infuriating, weeks-long process of pulling teeth to get clues as to what the problem might even be, and then making a list of every conceivably bad word we could find or imagine (admittedly that part was a bit fun) before we were finally able to get re-approved. And presumably we only got that much leeway because we were a reasonably large account.
The subtlety is that they don’t actively _want_ these small publishers to do anything.
It could be better if the issue were fixed, but Google’s skin in the game is small enough that it doesn’t matter, and they have already chosen getting rid of the publisher as the efficient solution.
A valid explanation, but not one I believe applies here.
This ban is not only not explaining how it detected unwanted activity, it is not explaining what activity it detected.
"We detected you faking ad impressions, though we won't tell you how we (believe we) know" is very different to "We detected you (or your app) doing something wrong, stop doing it and you will be fine. We won't tell you what you did wrong".
"Our ML system registered a hit on your account, which is almost always associated with policy violations, but we don't know what the trigger was, or why that set of data about your account is almost always associated with policy violations... it just is."
"and we know that it doesn't always work and don't care because despite our ML model being opaque and hard to introspect when we tested it against our database of known bad actors it performed better than both human review and our old heuristic based system."
There is an opposite side to this. If you have a human appeal process your accuracy drops to min(ml_model, humans) and I have bad news about which one of these is smaller.
I would believe this if Google wasn't notorious for providing the worst customer service that will avoid someone going to prison. Cable monopolies have a better reputation.
If the justice system started acting this way too, where they don't tell you what crime you committed, what proof they have, or how others can avoid committing such crimes, then I don't think that would mean "minimum amount of information is beneficial to the health of the society as a whole". This is basically letting them get away with arbitrary rules without any accountability.
Is there anything in place to prevent bad actors from deploying fraudulent ad-click tools to takedown accounts?
(What you said makes sense to me, I just don’t know if/how ad networks could differentiate between the account-owner committing fraud and a malicious 3rd party.)
You make your peace with the fact that you'll have a certain rate of false positives, where you'll intentionally lose some legitimate business in order to keep most of the "ecosystem" cleaner. Perhaps an unsatisfying answer, but that's it.
It's not a situation like putting someone in prison where "beyond all reasonable doubt" is the appropriate mark; you can refuse to do business based on mere suspicion that may be mistaken. There's a limit where extra investigation or appeals is too costly compared to just accepting the lost revenue, and for small-scale customers, that limit is quite low. With fraud detection, you have to balance the tradeoff between false positives and false negatives, but you'll certainly have both.
In Google’s case, this is not enough. They exert so much control over the online advertising industry that it’s simply unfair to ban anyone with no explanation or recourse. It should be illegal. It’s almost impossible to effectively monetize an app or website using ads without including various Google technologies and services, and that’s Google’s own doing; they’re the ones who purchased all of those companies and integrated their own products in a way that makes them inseparable.
We viewed them as a cost of doing business. Some small accounts got nuked. :shrug: If we had to have humans investigate everything, and produce reports / interpretations that nuked customers found satisfactory, we wouldn't have been willing to service accounts under probably $40k/year.
And keep in mind the ecosystem is filthy with fraud, particularly on the low end. There very much are groups of organized thieves actively exploiting adtech.
And as @PeterisP says... look, we're not a court. We're a private business that is refusing to do further business with someone. Our right to do this was very clearly explained before the beginning of any relationship, and agreed to by that someone. If that someone doesn't like it, their recourse is to not do business with us.
> And as @PeterisP says... look, we're not a court. We're a private business that is refusing to do further business with someone. Our right to do this was very clearly explained before the beginning of any relationship, and agreed to by that someone. If that someone doesn't like it, their recourse is to not do business with us.
So you are basically justifying Google behavior. You are not a court, that's right. But every ban process should be easily and quickly prosecutable to settle the issue right in a court.
Obviously real fraudsters will never appeal like that, because they know they would incur even bigger problems.
> Obviously real fraudsters will never appeal like that
I now know you have no experience at all fighting online fraud. People, um, lie.
On a serious note, if you require a prosecutable ban process -- whatever that means, because prosecuting is something the government does -- where you'll end is my original point. Ad companies will refuse to do business with publishers that aren't above some minimum threshold. My guess is $40k a year. Because remember eg google or whoever keeps about 1/3 of that money, so a $1k/mo minimum to staff humans and deal with arguing feels ballpark reasonable.
Separately, I'm not justifying anything. I'm explaining the economics driving behavior. If you want to be mad at me for behavior I don't control or influence... :shrug:
People have this misconception that ad networks somehow get joy from cutting people off for no reason. Every ad shown is a penny in their pocket, even if it's 100% fraud. When advertisers start asking for money back is when investigations are launched and accounts are terminated.
There are literally no false positives. It may be fraud, it may be the ad is too close to a back button and gets accidentally clicked, it could be the ads don't display right. But at the end of the day, it is a revenue decision.
There are several similar examples on the Android development subreddit. The best one that comes to mind is when a developer got their account suspended because they were using a trademarked name in their metadata. The trademarked name was "Windows". The developer was referring to house windows, not Microsoft Windows....
Note that apparently there's a late-ish ‘restoration’ that seems to correct some of Max Brod's editing decisions so that the plot is more straightforward, if that can be said of the novel. Literally the chapters' proper order is unknown.
I listened to the audiobook of that (read by Geoffrey Howard), and while the reading itself is fine, you'll want to avoid the editors' preface because it gives out some plot points and the ending. (PSA: if you make audiobooks, don't put editors' or critics' opinions anywhere before the end, even in ‘footnotes’. Only clarifying notes for unfamiliar terms.)
This article is big news, because it shows that Google will permanently yank your Android app if your website violates AdSense's secret fraud detector! If you're running Google AdSense ads and you have an Android app that you care about, take down the ads immediately and switch to another vendor.
AdSense is the product where Google pays you for running banner ads; they can and frequently do kick people off of it for secret reasons. When my company was kicked off of AdSense back in 2010, I wrote about it extensively. https://www.choiceofgames.com/2010/08/were-banned-from-googl...
Google will never tell you why they ban people from AdSense, and there's no effective way to appeal. (They have an "appeal" process, but what are you supposed to write in the appeal when the charges against you are secret?!)
At least we can still publish Android apps, right? (We now run Facebook ads instead.)
But Google's email to DroidScript saying that the DroidScript app was removed from Google Play Store for "Ad Fraud" says otherwise.
Publishing status: Suspended
Your app has been suspended and removed due to a policy violation.
Reasons of violation
APK:206 Ad Fraud
App violates Ad Fraud policy.
Surely Google could have just revoked DroidScript's access to Google ads, while allowing DroidScript to ship on the store, like they did for us.
If Google ever yanked our Android app over "ad fraud," we'd have no recourse. We've appealed our AdSense rejection a dozen times over the last 10 years and we always get a form letter rejection. We have no idea what they think we did wrong, and we never will, so we can never fix it.
Thank god we don't run AdSense ads anymore. Based on this, I never want to run them again!
Maybe, but in this case they had advertising in the app which Google is claiming may have been fraudulent, and it seems everything ballooned out from there. There’s no indication that this has anything to do with advertising on their website.
Yeah, I think claiming Ad Fraud is pretty much on the libel side of things; also, I'm not sure why no developer has taken legal action against outlandish Google practices yet.
There was at least one small claims case (exactly about the unexplainedness of the termination), where in the end Google produced logs that showed that the ToS was violated.
If you don't interlink your app account and your website's AdSense Google account, then this _shouldn't_ really be an issue, right?
So I say always maintain separate Google accounts for individual apps, for individual usages. Separate your personal Google account from your business account, from your ad-network account, etc. And I would argue that each individual app should own its own account (you pay for this of course, because of the fee per account, I guess).
Google are wise to this; they'll hit you with an "associated account" ban. All your company accounts, app accounts, ad accounts, personal accounts, your spouse's, dog's, dentist's account - all banned.
This looks like the best course of action in the current situation.
Now, the different accounts would still be linked to the same entity, and as stories like this make the news, more people would split their services across multiple accounts. In response, it doesn't look like a stretch to me if next time Google would go after all accounts owned by the entity they deem responsible.
I mean, they already crossed the line of banning all services associated with an account. Wouldn't be surprising if they cross a few more lines as long as there is no critical impact for them.
And blood-black nothingness began to spin... A system of cells interlinked within cells interlinked within cells interlinked within one stem... And dreadfully distinct against the dark, a tall white fountain played.
"DroidScript is an easy to use, portable coding tool which simplifies mobile App development. It dramatically improves productivity by speeding up development by as much as 10x compared with using the standard development tools. It’s also an ideal tool for learning JavaScript; you can literally code anywhere with DroidScript. It’s not cloud based and doesn’t require an internet connection. Unlike other development tools which take hours to install and eat up gigabytes of disk space, you can install DroidScript and start using it within 30 seconds!"
So... having read through their marketing material, this is an on-device tool that opens up what appears to be most of the Android application API to at least the user of the device, and potentially to any Droidscript applications they grab from other sources, and... maybe to other apps on the device? It's not clear from a quick read how extensive the runtime control is.
So just right out of the gate this is defeating basically the entirety of the Play Store vetting process. Droidscript itself may not be engaged in advertising fraud, but it makes advertising fraud trivial to deploy. (And it needs to be said: this is the kind of app that would never have been legal at all on any version of iOS.)
Add to that that it's a closed source IDE for an open platform, and my intuition sides with Google here. My guess is that when details come out it will turn out that at-least-plausibly harmful Droidscript garbage was being pushed to users and Google decided to kill it.
> this is the kind of app that would never have been legal at all on any version of iOS.
Pythonista is a complete Python programming environment which provides access to camera, music, contacts, the network, and so on, and has been available for iOS since 2016. What specifically distinguishes Droidscript from Pythonista such that you think Apple would reject Droidscript?
Droidscript has support for writing custom intents, which Pythonista (and Scriptable, a JavaScript version of the same thing) do not have. A malicious Droidscript application could access other applications on the device.
I've done some, although not a lot of, native Android development and I'm not quite sure what's so bad about sending intents. "Could access other applications" sounds dangerous, but as far as I know that "access" is limited to things those apps have explicitly decided to allow external apps to access.
Probably it's not the capability to send custom intents. Every time I buy a new device, I look for apps with unknown or curious names, check the manifest, and use an app like Intent (https://play.google.com/store/apps/details?id=krow.dev.schem...) to poke around.
Applications could be exposing intents they assume will be used by trustworthy applications (i.e. apps in the Play Store). A user could download a Droidscript script (which, as I understand it, doesn't trigger the unknown-sources policy) that then tries to use intents it shouldn't need without asking the user for permission.
If Droidscript required unknown sources to do anything (not just APK exports), then other apps could check the unknown sources policy on the device and disable certain intents (which they may do anyway at the moment, since that would mean that the applications installed may be untrustworthy). But this way there isn't any way to tell.
> trustworthy applications (i.e. apps in the Play Store)
Please don't equate trust with any app store like that. Firstly, many incidents have shown that this blanket trust isn't warranted, and second, the final arbiter of trust is the owner of the device, not the owner of the app store.
100% false. When a user buys an iOS device they willingly (maybe not stating 'I relinquish my control to Apple', but implicitly) give up a degree of freedom in determining who they trust to Apple. It's well-known that you can only get apps from the App Store when using iOS.
> Applications could be exposing intents they assume will be used by trustworthy applications (i.e. apps in the Play Store).
This is a poor assumption to make. Any data coming into your application should be assumed to be malicious. This would be the same as a server just accepting any data made to its API calls without any validation.
I know that this has but a slim chance of being taken seriously by Google but... Isn't this a good chunk of the reason why people here on HN and elsewhere have been arguing for much more granular intent management on Android like they had in the early days?
When we get permissions boiled down to one or two popups we end up with issues providing accurate privileges to applications (and might be forced to allow WhatsApp to trawl through our contact list if we ever want to send a picture in it).
Granular control shifts the power to the user and allows programs like this to have more fine tuned privileges.
I disagree - it turns into users clicking through piles of crap if you've got a crap UX. If the UX is well tuned to display this information and let the user break out to greater levels of detail or keep things simple then you can find a good middle ground.
Given the amazing strides in usability we've seen in nearly every other field, it baffles me why everyone isn't on board with the fact that we can take the learnings from elsewhere and bring them to the domain of permissions.
Permissions are almost always hierarchical and grouped into classifications that make it easier to present the user with fewer more meaningful choices than asking the user to approve whether an app can see each contact on their phone one-by-one.
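The grouping idea can be sketched concretely. The group names below are loosely modelled on Android's permission groups, but the mapping and function are illustrative, not any platform's actual API:

```python
# Sketch: collapse fine-grained permission requests into a few
# group-level prompts, instead of one question per permission.

PERMISSION_GROUPS = {
    "Contacts": {"READ_CONTACTS", "WRITE_CONTACTS"},
    "Location": {"ACCESS_FINE_LOCATION", "ACCESS_COARSE_LOCATION"},
    "Storage":  {"READ_EXTERNAL_STORAGE", "WRITE_EXTERNAL_STORAGE"},
}

def prompts_for(requested):
    """Return the group-level prompts a request list boils down to."""
    return sorted(
        group for group, members in PERMISSION_GROUPS.items()
        if members & set(requested)
    )

print(prompts_for(["READ_CONTACTS", "ACCESS_FINE_LOCATION"]))
# ['Contacts', 'Location']
```

A good UX would show these two group prompts by default and let the curious user expand each group to toggle the individual permissions underneath.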
I'm honestly a bit cynical (puts on tinfoil hat) that marketers have held us back here since a lack of granular permissions aligns quite well with their effort to grab as much personal data as possible.
There are so many crazy gotchas in Android permissions, though... e.g., most users won't know that there's a connection between wifi and geolocation data. That's a non-obvious connection with a real trade-off: the app might have some interesting wifi-based functionality, but in exchange the app authors might harvest your geo data.
Consider the permissions for the lowly keyboard app...
A proper understanding of fine-grained permissions basically requires a working knowledge of how that permission might be or has in the past been abused.
And ultimately, fine-grained permissions are probably answering the wrong questions. The user expresses some basic trust via the initial app installation; what permissions ultimately help with is deciding whether or not to keep trusting the developer. If the app ask for lots of unexpected stuff, it's probably malware and should be uninstalled. If the permissions seem reasonable, the app is probably fine, and the user just wants to delegate responsibility to the app to do what it needs to do to get shit done.
It's really /all/ about trust. If you can't trust a random app, installation is a high-friction event. Check the stars, number of users, read a bunch of recent reviews, carefully go through permissions providing access for exactly what's needed. If you /can/ trust a random app, you can just install it, use it to read the fscking QR code and go on with your day. The need for trust is why we've ended up with centralized app stores with stringent content policies, and all the false positives that come along with it.
Android's fine-grained permissions system isn't a good fit for something like Droidscript; one script could use a permission for valid reasons, then another could do something bad.
You can't access any random application just by sending intents. Available intents must be exposed to other apps if desired - for example, the camera app has a "show the camera for taking a photo" intent.
If you don't want another process sending you an intent, don't export your entry point. This isn't hard. Security through obscurity is no security at all.
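On Android, "don't export your entry point" is literally one manifest attribute. A hypothetical `AndroidManifest.xml` fragment (component names invented) showing both sides:

```xml
<!-- Not reachable by other apps' intents: not exported. -->
<activity
    android:name=".InternalSettingsActivity"
    android:exported="false" />

<!-- Deliberately reachable: exported, with an intent filter
     declaring exactly what it accepts. -->
<activity
    android:name=".ShareTargetActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="text/plain" />
    </intent-filter>
</activity>
```

Anything exported should validate whatever arrives in the intent, which is the same point made elsewhere in this thread about treating incoming data as untrusted.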
You can't use it to create a backup script to back up your phone data online. For good measure, iOS also blocks all such apps since they would lose iCloud revenue.
I think your thoughts on this are plausible, if not likely. However, the usual complete lack of communication by google is the actual problem. Perhaps droidscripts could mitigate googles concerns, if they had the decency to explain them.
But if they do, a malicious actor can use that information to circumvent their restrictions, and its their walled garden, so they have very little incentive to tell everyone exactly what they don't like.
I know this is standard practice for most big companies moderating lots of content, but it has always seemed like such an insane policy to me.
Imagine if this were applied to actual laws enforced by the police. "You're under arrest but we won't tell you what law you've broken, because then other criminals might use that knowledge of the law to avoid being arrested. And by the way, a secret court has sentenced you to life imprisonment and all of your appeals have been denied."
Putting aside that law enforcement has very different risks & pressures from corporate moderation, you don't really have to imagine: US law enforcement have a tool called civil forfeiture which lets them seize assets suspected of being involved in a crime without charging the asset owners with any specific crime. The owners have to prove to the police that the assets were not involved in a crime to restore their property. The US also has FISA courts for sensitive matters; FISA hearings are secret and only involve the judge & government representatives, without the presence of all relevant parties.
I'm not endorsing these US policies, but it's worth noting that even in democratic law enforcement it's accepted that the system isn't always bound by transparent policy and process. There are usually justifications for keeping some things secret and discretionary to enhance law enforcement effectiveness.
That's the claim made by Google and many other big corporations. It's plausible enough, but I haven't seen any hard evidence that it's true.
Suppose it is true that these companies can't reveal their decision making because there's so much to be gained by bad actors that game these highly centralized systems.
Then it seems like a larger number of smaller firms could be more transparent and still achieve the same effective level of security.
> However, the usual complete lack of communication by google is the actual problem.
Uh... Seems like the actual problem (given that scenario) is that adware is being pushed to users, not whether or not Google defended its ban in public. Complaints about customer service (from everyone, not just Google) are a dime a dozen, actual user security is clearly more important, right?
Your answer presupposes a frame where Droidscript is innocent. What if it's not, and it knowingly nodded to a community of junkware being pushed to its users (again, I have no evidence!). In that case you'd want it banned without "decency", right?
Banning it first is fine. Banning it first, then not giving a reply to the concerns they have, is not. Even if they have reasonable belief or proof that DroidScript is indeed malware, it looks like at least a chunk of their userbase uses it for legitimate use cases, and the devs, who likely invested at least a few hundred hours of work in it, deserve at least some communication.
I used to work at Google, and a friend reached out to me for help – his company's app was in a similar situation, with similar communication from Google. This was a good friend from high school, so I pressed the issue using internal channels. The person handling it on Google's side was very assertive about them violating a policy, and after some back and forth I received a _vague hint_ about what was the supposed violation. I passed the hint along, and after some digging, lo and behold, it turned out one of their people had lifted someone else's images without permission, violating copyright (kudos to Google for figuring it out). My friend apologized profusely to me, to the support rep, his boss, and let the culprit go. They purged the app's assets, changed their processes, and eventually the app was reinstated.
Now, this was a special situation. I had a personal relationship with the developer, and I was happy to vouch for their honesty. Yet it still turned out Google had been right all along. Now, it's a shame Google couldn't let them know what the issue was. However, it's a safe assumption that the vast majority of people Google support deals with are spammers. And there's a lot of them. If Google gave a detailed explanation to all of them it would mean a ton of additional work – which would create an unsustainable situation at this scale.
> However, it's a safe assumption that the vast majority of people Google support deals with are spammers. If Google gave a detailed explanation to all of them it would mean a ton of additional work – which would create an unsustainable situation at this scale.
You describe a situation where Google was going to put a whole company out of business -- probably ending your friend's job, as well as that of many other honest people -- rather than give them the information they needed to fix the problem. And you think this is reasonable, because it would be "a ton of additional work" for Google? We just have to accept people losing their livelihoods as collateral damage in the war on spammers?
Imagine if we applied the same logic to the government. If they think you committed a crime, they just toss you in jail and don't have to tell you why. They could catch a lot more criminals if they didn't have to waste time prosecuting them!
No, we need a Habeas Corpus for tech companies. If you are banned, you have to be told why. Make it a law. I don't care if it results in more spam.
> Now, it's a shame Google couldn't let them know what was the issue. However, it's a safe assumption that the vast majority of people Google support deals with are spammers. And there's a lot of them. If Google gave a detailed explanation to all of them it would mean a ton of additional work – which would create an unsustainable situation at this scale.
I don't think that's reasonable. What if most are spammers? Better to let a few spammers in than treat someone unjustly. Why would it become unsustainable? I've seen this argument repeated ad nauseam, but have yet to see proper proof.
In this particular example, a copyright violation was detected in an image, so an automated response "someone else's image was used without permission, violating copyright" seems entirely plausible.
Google has the scale to do this, but they also have a large enough monopoly where they don't have to, so they won't. It's not that it's unsustainable, it's that it is entirely sustainable to continue doing things this way.
Can you elaborate? I can see how Google can scale this automatically. But I don't see how Google can terminate, say, one million apps a day, if each termination entitles the spammer a one hour conversation with a technical representative.
Why does it need to cost them an hour conversation?!
Look at the tone-deaf example this employee just shared. All they had to do was say in the same email that they used to ban someone "you have copyrighted images".
The moment they find an infraction they could literally take a screenshot, say "the problem is X" and email it, which would incur the five seconds it takes to add a screenshot and state the problem they already identified, but make a world of difference for developers.
This nonsense about "it's to stop spammers" isn't about the cost; the laughably bad logic Google uses is that by identifying what rules you broke, spammers will get better at not doing stuff Google catches...
As if the spammers don't already know what they did to get caught!
Make the person buy the hour, say $100. It's a very different value proposition for someone saving their business vs someone trying to game a system.
> In this particular example, a copyright violation was detected in a image, so an automated response "someone else's image was used without permission, violating copyright" seems entirely plausible.
Google should not be enforcing copyright in the first place without at least a report of infringement by the copyright holder - and in that case they should pass the report along to the developer.
Caveat: I work at Google but know nothing about this area and my opinion here is entirely personal.
> which would create an unsustainable situation at this scale.
Financial sustainability may have something to do with it, but I suspect the larger issue is that providing too much detail essentially trains malware authors to route around the company's defenses.
Imagine the Play Store as a castle which has both good townsfolk coming and going as well as being perpetually under siege by a malicious lord. Sometimes, the castle's defenses inadvertently prevent a townsperson from getting to market to sell their onions. When the townsperson is like, "Hey, I can't get in to sell my onions." it's helpful for the castle defenses to be like, "Well, we have the portcullis raised from 9am-11am on Tuesdays and the gatekeepers listen for your accent to decide if you're a local or an enemy."
But that's, like, exactly not what you want to say if the "townsperson" you're talking to is actually an enemy spy taking notes.
>"Rough consensus, and running code. We are not the Protocol Police."
Half the problems we have nowadays is because we have manufacturers playing "the Program Police", which leads inevitably to the point you just made.
You are now, like it or not, adversarial to any user looking to do anything you find nonconformant with your bottom line. You cannot solve these issues by whitelisting, just like you can't solve the problem of crime by whitelisting and hiding the conformance suite. If you can't know the test, you can spend infinite cycles changing the wrong thing to comply with it, and I do not find that to be a tenable state of affairs to push on users, even if it is intentionally aimed at the malicious ones. This is the same problem we have in meatspace with our overly byzantine legal system; but nobody accepts that secret laws are a good idea on the grounds that letting everyone read the law would be a national security risk. At least no one without some serious conflicts of interest.
Do you really think that your company is going to nail down a good solution to a problem that society at large can't even handle reasonably? I mean, think about it. This really is a subset of the general question of how to keep everybody doing something productive. I don't even need an answer. I just want to encourage people to think.
> >"Rough consensus, and running code. We are not the Protocol Police."
This model absolutely does not work when it comes to creating spaces where humans interact. There are bad actors and someone has to police them or they will abuse other users.
If you run a bar, you have to hire bouncers. It's simply part of the cost of hosting a safe venue.
I suspect the larger issue is that providing too much detail essentially trains malware authors to route around the company's defenses.
Perhaps so, but it seems not unreasonable to have SOME ability to work with the creator of an app that's been on the store for years with a substantial number of ongoing users and (speculating) a non troublesome patten of installs and purchases.
Nobody believes that Google is technically or financially unable to do this, which leaves the other option - at a corporate level, not giving a shit enough to even bother trying.
Google will often do the right thing whether by plan or by happenstance, but it pays to be aware that when it does the wrong thing there is no recourse and will be no correction.
I'm sorry, but the "security" excuse is BS. You don't have to tell users what automated tool flagged them or how their violation was discovered.
You do have an ethical obligation to inform them of what policy was violated with sufficient detail that a good actor has a reasonable chance of complying with your policy.
I think that this should be required of any company that provides publicly available goods/services, not just Google. This doesn't just help with monopolies, but also makes it harder to hide racism and censorship behind opaque policies.
That doesn't seem to be a problem in this case? Telling spammers they are blocked due to copyrighted images trains them not to upload copyrighted images. Win-win.
Well, this is the essence of discrimination and we wouldn't tolerate it for a whole range of indicators (you're black, gay, if a particular race, etc etc). My guess is the real reason they won't tell people is that they would end up in court pretty quick.
so, in you mind, detecting copyrighted images and using that as a metric to detect spammers is discrimination? Are antivirus programs discriminating too??
From a definitional point of view yes. Using an attribute to place someone in a class and then making decisions on a class basis without actual evidence they possess the other attributes of the class is discriminatory behavior.
How can Google even decide that a copyrighted image was used in an illegitimate way? They’d need to check back with the copyright owner to confirm that there is no license, and they’d need to confirm that none of the various exemptions apply. This is also a matter that’s entirely between the copyright holder and the author of the app. I could understand if the problem was that the copyright holder explicitly notified Google, but then that complaint could just be forwarded to the app owner with no information about any secret sauce being revealed.
i disagree about unsustainability. there are real people on the other side of the business among these bots and spammers and if you ignore them because they might be bots and spammers, they'll leave and tell other real people that google can't be reasoned with because they assume everyone is a bot and a spammer.
you see exactly this happening all the time here on HN. the sentiment for the past few years is abysmal. google is actively blowing up their power user/developer customer base. looks like a metric somewhere got optimized a bit too well.
I think so as well. As a duopoly Google and Apple owe it to their customers and 3rd party developers to know why something gets banned. Being in that position requires special consideration to hold that much power. Government has to do it, why don't huge corps?
> It's a safe assumption that the vast majority of people police deal with are criminals. And there's a lot of them. If they gave a detailed explanation of why they are under arrest it would mean a ton of additional work - which would create an unsustainable situation at this scale.
But it's all good, Google is a private company™ and can do whatever they want®.
> Add to that that it's a closed source IDE for an open platform, and my intuition sides with Google here.
If I can't ship my closed source IDE on the platform is the platform really open?
> My guess is that when details come out it will turn out that at-least-plausibly harmful Droidscript garbage was being pushed to users and Google decided to kill it.
Of course they will say it was because x, y, and z were done to protect the users. But is it really for the users' benefit or just about control over their walled garden?
Yes...Droidscript allowed one to use the tiny computer in their pocket similarly to the way one could use the large computer on the desk. One could script small apps on their tiny computer and they could access most of the same api as java apps. It was pretty awesome.
> My guess is that when details come out it will turn out that at-least-plausibly harmful Droidscript garbage was being pushed to users and Google decided to kill it.
Yes, I'm sure Google will carefully release details that paint them as the good guy. Certainly, we don't want to be needlessly unfair to them, but there is zero reason to give them free trust at this point.
Google will not release details because Google doesn't care if they look like the good guy (otherwise they wouldn't do stuff like this in the first place!)
Best case is the right person sees this social media outcry, silently gets it fixed and Google moves onto destroying the next developer.
We're talking about a development tool. Of course it's going to make any use of the device possible -- that's the entire point. If the point here is that any development tool shouldn't be allowed in the store (which I think google and apple are mostly fine with), that's a pretty sad thing in my opinion. Maybe google is "right" in enforcing their policies, but is it helping anyone?
> Droidscript itself may not be engaged in advertising fraud, but it makes advertising fraud trivial to deploy.
No more than being able to build an app on my laptop and push it over ADB.
> (And it needs to be said: this is the kind of app that would never have been legal at all on any version of iOS.)
It also needs to be said that this is why I don't use Apple devices. What they inflict on their platform is not an argument for what should happen elsewhere.
> Droidscript itself may not be engaged in advertising fraud, but it makes advertising fraud trivial to deploy.
I think that this is what has happened. The author of DroidScript claims that
> Unfortunately we also have to inform our users that we could no longer support AdMob for use in their own apps either, because we can't test it anymore and can't guarantee that Google won't treat them in the same brutal way.
So apparently users were able to use AdMob from within DroidScript apps behind DroidScript's back, and maybe AdMob attributed these fraudulent actions to some Google ID which was assigned to DroidScript.
I don’t get your point. Sideloading apps was always possible on Android even without a jailbreak. We’re not in Apple world, so it’s unclear which Playstore rules got broken here.
Side loading is an Android OS feature, not a Play Store feature. Can you sideload via Play Store apps? F-Droid isn't in Play Store, but APK Manager is, so I'm confused.
You've always been able to use any of the web browsers in the store to download and install a random APK from a website (for example F-Droid), you don't even need to sideload it. Sideloading apps is mostly just a relevant concept for developers or for users who have no alternative to getting custom code on a device. (Edit: Speaking of ad fraud brought up by the GGP, there are also many automation apps, at least one (Automate) uses a plugin flow-chart architecture exposing all sorts of functionality, with users able to share custom scripts. Not to mention tons of plain "auto-clicker" apps.)
Yup. Check out Aurora Store. It's an open source frontend to the Google Play Store. All apps can be installed (except paid apps, of course. Though if you bought the app and sign in to that account with Aurora, you can).
The fact that Android Play Store had apps like that all the way back to the earliest days is precisely why some of us have an Android phone rather than an iOS one. There are full-fledged Lua, Python, C++, Java etc IDEs there.
Chrome is closed source and has developer tools, and has damn near every permission Android provides. You can run your apps on it, as long as they are of the web variety. Should we not ban Chrome too?
If droidscript enables ad fraud, isn't it an issue with how the android sandboxing model is fundamentally broken? Given that there are far more people using phones than computers, and a lot of new smartphone users will have never used a desktop or laptop computer, droidscript might be their first venture into programming and/or hacking. Let's not shut it down.
Chrome does not provide raw access to the APIs from JavaScript. Instead everything is sandboxed to the hilt.
Also the product has a very heavy emphasis on security, its security team is superb and well funded, and Google knows that the team is trustworthy.
Chrome polices websites with per-site permissions, controlled by the user. Does DroidScript give users the same level of control over 3rd party code?
Whatever "open platform" might mean, Android is becoming less and less of one as Google has made huge efforts to move more and more core operating system functionality into closed source Play Services, and it continues to remove developer access to many APIs in the name of security. In fact, what you're advocating for in this comment is to make the platform less open.
> (And it needs to be said: this is the kind of app that would never have been legal at all on any version of iOS.)
Exactly, iOS is not an open platform and Google has decided they want to be more like iOS.
> wrapping every API with Javascript sounds non-trivial.
I am not an expert in JS or the Android API, but I wonder if you couldn't do it automatically? If types line up closely enough, I would think that you could get a list of Android APIs (pull it from AOSP if you have to) and mechanically translate to a JS API.
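As a rough illustration of that idea (this is a hypothetical sketch, not how DroidScript actually works): given a machine-readable list of API method signatures, which in practice you might extract from AOSP stubs or android.jar, you could mechanically emit JavaScript wrapper functions that forward every call to a single native bridge. The `bridge.call` interface below is an assumption made up for the example.

```javascript
// Hypothetical: a tiny slice of extracted API signatures.
const apiMethods = [
  { cls: "Vibrator", name: "vibrate", params: ["millis"] },
  { cls: "Clipboard", name: "setText", params: ["text"] },
];

// Emit a JS wrapper that forwards the call to an assumed native bridge.
function generateWrapper({ cls, name, params }) {
  const args = params.join(", ");
  return `function ${cls}_${name}(${args}) {\n` +
         `  return bridge.call("${cls}.${name}", [${args}]);\n` +
         `}`;
}

// Concatenate all wrappers into one generated source file.
const source = apiMethods.map(generateWrapper).join("\n\n");
console.log(source);
```

The hard parts this glosses over are exactly the ones the parent hints at: type mapping (callbacks, overloads, object lifetimes) rather than the mechanical code generation itself.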
Still seems strange to me they focused so hard on the ad fraud part of it, unless they had a sudden change of heart and needed an excuse to get DroidScript out of the Play Store. They could just as well simply have said that any app that allows for easy, arbitrary code execution is a security liability and won't be accepted on the Play Store, which does include a fair number of root-required tools that have been removed at some point before. I don't necessarily agree with it, but that'd be a pretty believable justification.
My gut feeling says these devs aren't telling the whole story.
> So... having read through their marketing material, this is an on-device tool that opens up what appears to be most of the Android application API to at least the user of the device, and potentially to any Droidscript applications they grab from other sources, and... maybe to other apps on the device? It's not clear from a quick read how extensive the runtime control is.
When did we collectively decide that programmable computers were a Bad Thing?
Some of us realised that end users don't want to program and that they can be better protected from themselves by only allowing execution of arbitrary code when they explicitly say they want it.
I thought the parent comment was speaking generally. Maybe I misunderstood.
Either way, I was thinking of downloading Droidscript as one way of saying "I want arbitrary code exec."
I think it's reasonable for Google to say "Most users don't understand what this implies, so if they want this they'll have to get it outside the Play Store."
That said, that doesn't actually seem to be what they're saying here.
I just think it's a reasonable stance to hold as an App Store.
Interpreters are problematic, as they all are, for executing what amounts to arbitrary, un-vetted and unsigned code. Whether or not to allow them should be up to the user, and it is. Google is saying here: if you want this, you'll have to sideload it.
This is my primary hacking tool for throwing little scripts together on Android. You can bring up an IDE in chrome on your PC and interactively execute it on your phone. I hope this gets fixed.
I wouldn't really be surprised if EVERY scripting/programming app in the play store technically violates some play store rules, though.
Well damn, now I want to download it. I've never gotten into mobile development because getting started always seemed like a chore, but this sounds like it would be fun to play around with.
Try making a React Native app using Expo. You write JavaScript on your PC (but you can access native functions) and the app will automatically refresh on your phone near instantly.
You can later eject it to an Android Studio/Xcode project if you want.
Whatever you choose, moving to mobile development is extremely fun once set up. Usually the IDE of your choice reloads the app on the phone over the cable for you, so the feedback loop is really nice.
Having tried neither, Flutter sounds like the polar opposite of both the experience and capability that GP mentioned. I'm sure it's nice but can it be developed interactively in a PC browser as described above?
Was it used to publish malware? Given that it's a general purpose scripting tool I can imagine that some people would abuse it and use it as some sort of backdoor to get clueless users to run malware without having to publish it on the app store.
If that's the argument I can sort of see Google's point here. The Play Store is supposed to be curated and the application should follow certain guidelines. This tool as I understand it effectively provides a loophole that lets people run non-curated code without jailbreak. I know that Apple removed apps for similar reasons in the past.
TFA is a bit misleading, the whole "AD FRAUD" angle is frankly irrelevant, it's just that since Google considers that the app violates the guidelines it can't be eligible for the ad program.
> This tool as I understand it effectively provides a loophole that lets people run non-curated code without jailbreak.
Installing non-curated apps has always been supported on Android - no jailbreaking required. Just get an APK either straight from the developer or through any number of alternative app stores, open it, click the "yes, I'm sure" option in the security popup and you've got yourself an app.
One of the specific features of DroidScript is that it is a remote IDE. That is, when you start DroidScript on your phone it will serve the IDE UI via HTTP and you can then connect it by using your phones IP address (DroidScript conveniently gives you a URL to use). Maybe that is the reason for Google's decision.
Also, according to DroidScript itself, Google accused them of ad fraud, so maybe there is something there.