The issue reported here is linked to App Engine and Gmail tightening up their spam filters. The root cause was an increase in organizations sharding out their spam systems to utilize App Engine’s free tier in such a way that is (a) in direct violation of our ToS and (b) making all of our lives suck a bit more (raise your hand if you want spam). It’s unfortunate that while App Engine is trying to provide a free tier that enables developers to easily use our platform, others see it as an opportunity for exploitation. Even more unfortunate is that it has a negative effect on legitimate users. It’s a fine balance that has been highlighted by several users within this thread.
Spam filtering is not a perfect science, and we’re constantly tweaking things -- with our customers in mind. This issue should be limited to new applications where the trust signal might be a bit lower. Thus existing apps / customers shouldn’t be experiencing issues (which was also highlighted by a few within this thread). If this isn’t the case email me: firstname.lastname@example.org. For those asking, “hey, why am I being penalized for being a new customer?” See my previous comment about spam filtering not being a perfect science. Then email me.
We’re here and we want to help.
-- Chris (Lead PM for App Engine)
The best part for me when using it for customers has been the bounce/click-through rate tracking. When someone asks me, "How do I know they got it?" it's incredibly nice to point them to the dashboard and show a less-than-1% bounce rate (people putting in a bad email address is almost always the reason) with a log of every single email sent.
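That kind of dashboard metric is simple to reproduce from a send log. A minimal sketch in Python -- the log format and field names here are hypothetical, not any particular provider's API:

```python
# Hypothetical per-message send log; a real ESP would expose this via its API.
sent_log = [
    {"to": "a@example.com", "bounced": False},
    {"to": "b@example.com", "bounced": False},
    {"to": "typo@exampel.cmo", "bounced": True},  # bad address entered by a user
]

def bounce_rate(log):
    """Fraction of sent messages that bounced (0.0 for an empty log)."""
    if not log:
        return 0.0
    return sum(1 for entry in log if entry["bounced"]) / len(log)

print(f"{bounce_rate(sent_log):.1%}")  # → 33.3%
```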
Most of my clients get this service for free because their volume is low enough. They have quite a generous free tier.
Anyways, it's up to Mandrill to choose who they want to serve. I really liked the service though. Best of Luck.
edit: Not to say Mailgun isn't fancy -- I have no idea whether it is or not -- but I can say that it works in minutes.
> This issue should be limited to new applications where the trust signal might be a bit lower. Thus existing apps / customers shouldn’t be experiencing issues.
1. My app is not new. It's been running without issue for 2 or 3 years.
2. My app is not on the free tier.
3. On average, my app sends out under 8 emails a day, and it's done the same for over 2 years.
Why your algorithm considers my app a spam risk beats me.
I'm missing 11 days worth of quote requests from customers. Are these recoverable? (is there a hidden outgoing-spam bin?)
My app started sending mail again today after I changed the src of an image in it from https://example.appspot.com/images/logo.png to http://www.example.com/images/logo.png
How well do you think email clients are going to like emails with images embedded in them without HTTPS?
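The workaround described above (swapping the image src to a custom domain) can be sketched as a small rewrite pass over the outgoing HTML. The domains are the ones from the comment; the function name and regex approach are mine, purely for illustration:

```python
import re

# Illustrative sketch: rewrite appspot-hosted image URLs inside src
# attributes to a custom domain before sending. Domains taken from the
# comment above; this is not any official App Engine mechanism.
APPSPOT_PREFIX = "https://example.appspot.com/images/"
CUSTOM_PREFIX = "http://www.example.com/images/"

def rewrite_image_srcs(html: str) -> str:
    """Replace appspot image URL prefixes inside src attributes."""
    return re.sub(
        r'src="' + re.escape(APPSPOT_PREFIX),
        'src="' + CUSTOM_PREFIX,
        html,
    )

body = '<img src="https://example.appspot.com/images/logo.png" alt="logo">'
print(rewrite_image_srcs(body))
# → <img src="http://www.example.com/images/logo.png" alt="logo">
```

As the reply notes, serving the images over plain http is its own trade-off, since email clients may treat non-HTTPS image references with suspicion.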
I sent him an email 14 hours ago and haven't received a response. I'll wait a day and if I still don't hear from him, will contact you.
The numbers Mandrill released about that business suggest that they had a large number of low volume senders, who may now be looking for a new home.
Or are you afraid that "rogue" applications will use it to produce messages that are spam but don't trigger the spam filter?
As an email service provider it's like that and more, since:
1. Once an abuser uses your service, they've gotten the benefit immediately and keep it even if their account is discovered as fraudulent later, e.g. a stolen CC number and chargeback.
2. Abusive users can directly harm good users, such as by harming the deliverability of the overall platform. It's not just bad debt, it's bad experience too.
3. Unlike Candy Japan, where fraudsters mostly just wanted to check CC numbers and not actually buy product, email abusers really want to send emails.
4. It can be hard to tell good and bad senders apart, because some companies with an internet presence aren't email savvy and might make mistakes or might get hacked.
Spam filters are always tough because if you give someone transparency into which of their actions you consider abuse, they will quickly detect and route around your attempt to block them (see the Candy Japan article). It's pretty easy for a human to guess what might be the sign of their fraud and run a few experiments to see what gets flagged. By comparison, a machine learning system might be hard to outsmart, but then it's also challenging to explain and troubleshoot false positives. Hence what's effective is often a combination of machine-learned filters and heuristics, along with manual overrides by human judgment.
All other things equal, new users are a lot more likely to engage in fraud than existing ones, and so tend to be under more suspicion. Aside from B2B fraud where companies take out lines of credit and then go bankrupt intentionally, it's uncommon for existing established customers to turn fraudulent -- they're already vetted. (Consider: who is more likely to be fraudulent, the first-time subscriber to Candy Japan, or a subscriber who has been using it for 12 months and is about to buy their 13th month?) It's not a great experience as a new user to be under suspicion, but if it's temporary and easily overridden by a human it can be a decent trade-off: the need to reach out acts as a deterrent to spammers but does not deter legitimate users as much (speaking generally).
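The shape of that trade-off -- stricter thresholds for new accounts, with a human override that trumps the heuristics -- can be sketched in a few lines. Everything here (the field names, the 30-day cutoff, the bounce-rate thresholds) is made up for illustration, not any real system's rules:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Sender:
    signup_date: date
    bounces: int = 0
    sent: int = 0
    manually_vetted: bool = False  # a human has reviewed and approved this sender

def is_suspect(s: Sender, today: date) -> bool:
    """Toy heuristic: new accounts face a tighter bounce-rate threshold."""
    # Manual review trumps the heuristics (the "override" discussed above).
    if s.manually_vetted:
        return False
    age_days = (today - s.signup_date).days
    bounce_rate = s.bounces / s.sent if s.sent else 0.0
    # Hypothetical thresholds: new accounts get far less slack.
    threshold = 0.02 if age_days < 30 else 0.10
    return bounce_rate > threshold

today = date(2024, 6, 1)
new_sender = Sender(signup_date=date(2024, 5, 20), bounces=5, sent=100)
old_sender = Sender(signup_date=date(2022, 1, 1), bounces=5, sent=100)
print(is_suspect(new_sender, today))  # → True  (5% bounces, account is 12 days old)
print(is_suspect(old_sender, today))  # → False (same bounces, but established)
```

The same 5% bounce rate flags the new account and passes the old one, which is exactly the asymmetry the comment above describes.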
I've noticed the spam filter on Gmail for Google Apps has gotten significantly less accurate recently, resulting in far more false-positives than usual. Any ideas if this is a known issue? I can only presume it's more accurate overall, but it was definitely a noticeable change for our organisation.