> Dear ProtonMail user,
Starting at around 4:30PM New York (10:30PM Zurich), Gmail suffered a global outage.
A catastrophic failure at Gmail is causing emails sent to Gmail to permanently fail and bounce back. The error message from Gmail is the following:
550-5.1.1 The email account that you tried to reach does not exist.
This is a global issue, and it impacts all email providers trying to send email to Gmail, not just ProtonMail.
Because Gmail is sending a permanent failure, our mail servers will not automatically retry sending these messages (this is standard practice at all email services for handling permanent failures).
We are closely monitoring the situation. At this time, little can be done until Google fixes the problem. We recommend attempting to resend the messages to Gmail users when Google has fixed the problem. You can find the latest status from Google's status page:
The ProtonMail Team
Many of them auto-unsubscribe after a bounce.
The underlying issue (wherever this occurs) seems to be a lack of nuance regarding error codes when people try to implement robust systems. Different codes imply different things and shouldn't all just fall back into generic buckets.
Like HTTP, SMTP is designed to be stateless, so in the first place the remote server shouldn't return a permanent error in temporary-failure scenarios.
The default error should be 450: "Requested action not taken - the user's mailbox is unavailable", not "the user has deleted everything and left".
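To make the distinction concrete, here is a minimal sketch of how a sending MTA treats these classes of reply codes (the mapping below is just the conventional 4xx/5xx split, not any particular provider's policy):

```python
# Minimal sketch of sender-side handling of SMTP reply codes.
# 4xx replies are transient: keep the message queued and retry later.
# 5xx replies are permanent: bounce the message and stop retrying.

def classify_reply(code: int) -> str:
    """Map an SMTP reply code to a queue action."""
    if 200 <= code < 300:
        return "delivered"
    if 400 <= code < 500:          # e.g. 421, 450, 451: "try again later"
        return "retry"
    if 500 <= code < 600:          # e.g. 550 5.1.1 "user does not exist"
        return "bounce"            # permanent: suppress further delivery
    return "unknown"

assert classify_reply(450) == "retry"   # mailbox temporarily unavailable
assert classify_reply(550) == "bounce"  # what Gmail was wrongly returning
```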
These standards worked well before big players came along and said "my responses mean whatever I choose them to mean, and that meaning doesn't always overlap with the established standards". The only exception is spam, and we now have standards to help reduce it.
Google's mailserver could genuinely believe that the user doesn't exist, if the user service doesn't fail completely but cannot access part of the data and thus doesn't find a user record. In this case the returned "user doesn't exist" error is intended behavior of the mail server and the post you replied to still stands. If you sent to that email successfully earlier, it's much more likely that the server is responding erroneously than that the email actually got deleted.
Actually, I don't think so.
> Google's mailserver could genuinely believe that the user doesn't exist, if the user service doesn't fail completely but cannot access part of the data and thus doesn't find a user record.
As a system administrator and/or provider you have to think about worst-case scenarios and provide sensible defaults. Your mail gateway should have some heartbeat checks on the subsystems it depends on (AuthZ, AuthN, Storage, etc.), and it should switch to a fail-safe mode if something happens. Auth is unreliable? Switch to soft-fail for everyone, regardless of e-mail validity. You can hard-fail the invalid ones later, when Auth is sane.
Storage is unreliable? Queue until buffer fills, then switch to error 421 (The service is unavailable due to a connection problem: it may refer to an exceeded limit of simultaneous connections, or a more general temporary problem) or return a similar error.
SMTP allows a lot of transient error communication. Postfix, etc. has a lot of hooks to handle this stuff. Just do it. Being Google doesn't allow you to manage your services irresponsibly. If we can think it, they should be able to do it too.
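As a minimal sketch of the fail-safe idea described above (the health flag, codes, and wording are illustrative assumptions, not how Gmail or any specific gateway actually works):

```python
# Sketch of a fail-safe RCPT check: only hard-reject an address when the
# user directory is known to be healthy; otherwise soft-fail so the
# sending MTA keeps the message queued and retries later.

def rcpt_reply(address: str, directory_healthy: bool, user_exists) -> str:
    if not directory_healthy:
        # Directory backend degraded: never claim the user is gone.
        return "450 4.2.1 Mailbox temporarily unavailable, try again later"
    if not user_exists(address):
        # Backend healthy and the user really is unknown: permanent error.
        return "550 5.1.1 The email account that you tried to reach does not exist"
    return "250 OK"
```

Postfix even ships a safety net along these lines: setting `soft_bounce = yes` makes it report soft (4xx) instead of hard (5xx) errors while you sort out a broken backend.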
Google SMTP servers should have returned a soft bounce here (not hard bounce), so then retry can work.
If the protocol is stateful, why should the state be kept by the "sender" and not by the "receiver"? Being stateless removes this ambiguity, in my opinion.
Also, we should remember how bad it is for spam reputation to send emails to a non-existent address, so I would not blame the mailing list for being "overly cautious".
Hard-failing good addresses is much worse than soft-failing bad addresses. In the latter case, the remote sender tries again later and eventually gets a hard bounce. In the former, good addresses are permanently dropped from numerous services, and sent mail is lost rather than retried.
Critical failures should soft bounce until positively determined otherwise.
Mailing lists believing what an email provider tells them and acting in an overly cautious way is a separate issue.
This can't work; you can say that gmail's system should have a component that recognizes the difference between various failures, but that new component can itself fail. You can't solve the problem of "what if something fails" by saying "just add a new component that won't fail".
Note that this is rather different from physical, mechanical systems which can fail in all kinds of exciting and unpredictable ways due to physical wear and tear, things getting jammed in places, component failure, etc.
That's true, but human behavior is also fundamentally deterministic, and those two observations are about equally useful.
> Note that this is rather different from physical, mechanical systems which can fail in all kinds of exciting and unpredictable ways due to physical wear and tear, things getting jammed in places, component failure, etc.
No it isn't. Those are deterministic too.
That is true in a perfect world. In the current world, there are all sorts of ways that code implemented one day does not run the same the next day. Say the code is in an interpreted language and an unrelated sysop updates the language runtime in a way that changes the behavior. Again, in a perfect world that doesn't happen, but that is not always the world we live in. I have great sympathy with people who treat software systems AS IF they were "physical, mechanical systems which can fail in all kinds of exciting and unpredictable ways".
If a mail server can't tell whether a user/email is valid, it should either return a temporary failure or accept and queue.
Unless of course you're too big to fail, then you just do whatever you want.
I have good experience with them fixing issues related to their spam-related flagging for messages that are coming from our self-hosted email server, but never got any specific reply.
Email service providers are HIGHLY incentivized to act 100% in accordance with the wishes of the system where the mailbox exists because it’s highly likely that acting in any way that’s considered abusive could get your emails landing in a spam folder.
Mailboxes cease to exist thousands of times a day at places I've worked previously. Employees leave all the time and people shut down mailboxes; this is Google's fuckup, nobody else's.
1. If the user's mail service penalizes you equally regardless of whether the recipient's address existed until 1 day ago vs. never existed, that itself is absolutely inexcusable, nonsensical behavior that needs to be fixed. You shouldn't do that, just as you shouldn't shoot the mailman (or even arm yourself...) merely because he knocked a second time.
2. Notwithstanding the previous point, I don't buy this as valid justification anyway. The proposal isn't that you should blast 100 emails toward the mailbox every time you get a bounce due to an address not existing. The idea was to just exercise some intelligence in the matter. Like maybe just retry a couple times, spaced out by a day or two. The bounce rate increase due to such an event is very negligible here: people don't suddenly delete their accounts en masse. When that happens, it's clearly due to an outage, not because half the users at that domain suddenly decided to delete their accounts. (Which is something you can also easily detect across the domain, as sketched below, as another useful signal to drastically lower the bounce rate across the entire domain, btw, if you're absolutely paranoid about your immaculate delivery rate dropping by an epsilon. But it shouldn't be necessary given how negligible the impact should be.)
So I don't buy this excuse one bit.
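A hedged sketch of the domain-wide anomaly check mentioned in point 2 above (the window, threshold, and counters are invented for illustration, not anyone's production logic):

```python
# Sketch: before honoring a hard bounce, check whether the recipient's
# whole domain suddenly started hard-bouncing, which hints at a provider
# outage rather than mass account deletion.
from collections import Counter

recent_hard_bounces = Counter()   # domain -> hard bounces in current window
recent_deliveries = Counter()     # domain -> delivery attempts in current window

def looks_like_outage(domain: str, threshold: float = 0.10) -> bool:
    attempts = recent_deliveries[domain]
    if attempts < 100:            # not enough data to judge
        return False
    return recent_hard_bounces[domain] / attempts > threshold

def on_hard_bounce(address: str) -> str:
    domain = address.rsplit("@", 1)[-1]
    recent_hard_bounces[domain] += 1
    if looks_like_outage(domain):
        return "quarantine"       # pause sending and re-verify in a day or two
    return "suppress"             # normal case: honor the 550
```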
What you're proposing is to explicitly ignore the specification (which says that you should _not_ retry when you receive a 550) and try to implement a custom smart retry logic that handles temporary error cases, but also does not get you blocked.
> So I don't buy this excuse one bit.
I'm all for building resilient services, but "try to detect when a server incorrectly returns 550" is not something I would prioritize at all. I'd rather manually clean up after this occurrence than have this complicated retry logic. It's not an "excuse", it's a very sensible trade-off.
That means there are two sides to the interpretation of what SHOULD NOT means. And in this case, senders have, due to experience, interpreted what Google does when someone SHOULD NOTs:
- The sender SHOULD NOT send us the same sequence again when we reply 550, if they do they MUST go on our shitlist.
Obviously it's not so binary and it takes retrying to several different recipients, but people have very good reason to interpret this SHOULD NOT as MUST NOT.
Gmail screwed up here, returning a 550 error, it's not anyone else's job to try to second guess that or retry in contradiction of the accepted standard.
Re: the RFC, note it says "should not", not "must not". That seems to suggest they acknowledge repeating might actually make sense in some cases. And honestly the practicalities of this situation and the risk-reward tradeoff seriously tilts toward repeating the request later regardless of what the RFC says. The world isn't going to end.
For any small provider, getting on the shitlist is catastrophic as unlike the big providers, getting off of it will be hard / impossible.
That is exactly the thought process that leads to non-standard mess that we see numerous examples of.
If you believe the standard is not robust enough to handle problems like this, first work towards a fix to the standard and then implement the solution. Not the other way round.
I didn't suggest people should apply this thought process in arbitrary cases. I said it should be applied in this case. You can take any thought process that gives a good outcome in one situation and obtain a bad outcome by applying it to the wrong situation. That's not an indictment of the thought process. It's just an indictment of the person failing to correctly judge its applicability.
That said, by all means, do try and go fix the standard; I wasn't trying to imply you shouldn't do that.
Exactly, that is why it is important to follow standards. Most engineering decisions are not clear-cut and are born out of tradeoffs. That is why we agree on standards that define those tradeoffs instead of every one of us having our own take on situations.
> Nobody cares if their mailman's knocks follows an RFC or not
If there is a Mailman RFC which says:
"If someone opens the door and says `Mike does not live here' then DO NOT attempt delivering the same package"
THEN I expect the mailman to not bother me again, EVEN IF it was actually my mistake that I forgot my roommate Mike actually does live at this address.
These incorrect responses could be caused by mistakes which the remote server admins could reasonably avoid, like software bugs. I understand not having much sympathy for that case, especially from an organization with no shortage of resources. But they could also be caused by, for example, hackers or governments exerting control over the remote server temporarily.
A standard which explicitly refuses to acknowledge these possibilities is not what I would describe as “robust.” An obvious better alternative would be to set some standards around what constitutes a polite retry policy.
It's a difficult one though, because as you rightfully state, covering up for Google is not the best course of action for the system as a whole, yet it's likely a good course of action for those users who didn't get their emails.
> 4. SHOULD NOT: This phrase, or the phrase "NOT RECOMMENDED", mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.
I saw someone on Reddit say his SES was suspended for sending tons of bounced emails in a short period of time - it's taken very seriously by ESPs.
Edit: see also user rtx a few comments below
If you want revenge for modal popups, your best bet is to create a bunch of throwaway email accounts, subscribe to the mailing list from them, and start reporting the individual messages as spam when they arrive. Flag them as junk at the mailbox provider (Gmail, Outlook, etc.) and use the links in the List-Unsubscribe headers to flag them at the ESP's end, too.
That's the standards-compliant way. Also I'd argue that spec'ing your code to handle cases where Google fails that badly is (was?) a poor allocation of LoCs.
So if you are not getting any notifications from GitLab, even though your email is correct, I suggest contacting them and asking if you have been blocked due to an error.
'Check email service status before sending emails' - https://needgap.com/problems/178-check-email-service-status-...
I fear that this will lead to many lost mails. In my experience, users are often confused by the technical "Mail delivery failed" mails and tend to ignore them or write them off as spam.
Or returning one of the 4xx status codes, which indicate a less-permanent failure state, like:
- 451 Requested action aborted: local error in processing
Which is kinda like an HTTP internal server error, as it can mean anything.
Another option would’ve been to accept everything with a very lightweight smtp ingest service, journal it all, and play it back to the full frontend after their code fix was pushed out.
Not an SRE so ¯\_(ツ)_/¯ just some thoughts from my time in a similar role and similar pain points (but thankfully not at this scale)
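A rough sketch of the accept-and-journal idea, assuming the third-party aiosmtpd package (a real deployment would also need abuse controls, size limits, and durable storage):

```python
# Sketch: accept everything at SMTP time and journal it to disk, so mail
# can be replayed into the real frontend once the bug is fixed.
import os, time, uuid
from aiosmtpd.controller import Controller

class JournalHandler:
    async def handle_DATA(self, server, session, envelope):
        os.makedirs("journal", exist_ok=True)
        name = f"journal/{time.time():.0f}-{uuid.uuid4().hex}.eml"
        with open(name, "wb") as f:
            f.write(b"X-Envelope-From: %s\r\n" % envelope.mail_from.encode())
            f.write(b"X-Envelope-To: %s\r\n" % ", ".join(envelope.rcpt_tos).encode())
            f.write(envelope.content)              # raw message bytes
        return "250 Message accepted for delivery"

controller = Controller(JournalHandler(), hostname="0.0.0.0", port=2525)
controller.start()                                 # serves in a background thread
```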
This is not a cheap shot, but a message to inform users that it's an issue with Google that Protonmail can do nothing about.
> Running a datacenter is no easy task.
Sure, but then there are very few companies which have more experience with running data centers and (normally) providing reliable email service.
So any outage lasting more than just a short time is very unusual. I'm really interested in what went wrong.
That's a peak of 90% of Gmail inboxes bouncing – and this has been going on for almost 24 hours.
This year I decided to do "something" about it, so every mailing-list mail received in my inbox that I don't want/care for gets an unsubscribe. It has already reduced my daily mail by a somewhat large amount. It's hard to say exactly how much, but I estimate around 10 fewer emails every day.
Most of the unsubscribed lists are from companies where I've purchased something and the seller took the liberty of subscribing me to their mailing list. Those are mostly pre-GDPR, and I've just never gotten around to dealing with them.
The exception is of course obvious spam mails, where unsubscribing will probably do more harm than good.
Rant: As a side note, I usually try to buy direct when shopping online rather than through Amazon (for all but the most trivial purchases), and this is the 2nd-largest drawback (behind filling in CC and shipping info): because I bought one item from you, once in my life, does not mean you should send me a daily email, and then, when I unsubscribe, pretend I signed up for it! For me it's one of the easiest ways to destroy brand loyalty/reputation.
Plenty of critical communications get caught in this storm...
What a joke. And this after we're leaving AWS Workmail because of bounced emails.
No luck with signing up so far.
About your query
I gather that you are concerned about your Ads Disapproval for your Google Ads Account.
I understand that this is taking a bit longer as we are working with a limited staff due to Global pandemic and there is another team who reviews the account so there can be a slight delay in the decision I apologize for the inconvenience caused as I understand this is not the answer which you are looking for but be rest assured I will get back to you on coming Friday 12/18/2020 end of business day.
For any further assistance, I am just an email away.
The reason why this is so nasty is not because Gmail went down, but because they returned a 5XX permanent failure and not a 4XX temporary failure for these bounces. Literally every email provider will respond to a permanent bounce by suppressing all further emails to that email address (it's permanent, after all!), so the fallout from this will be huge.
I logged into our sendgrid and mailgun accounts and manually purged all the failed gmail records.
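For anyone needing to do the same cleanup programmatically, something along these lines should work against SendGrid's v3 suppression API (a hedged sketch; double-check the current docs, and Mailgun has an analogous bounces endpoint):

```python
# Sketch: list recorded bounces and delete the gmail.com ones from the
# suppression list so future sends aren't silently dropped.
import os
import requests

API = "https://api.sendgrid.com/v3/suppression/bounces"
HEADERS = {"Authorization": f"Bearer {os.environ['SENDGRID_API_KEY']}"}

bounces = requests.get(API, headers=HEADERS).json()
bad = [b["email"] for b in bounces if b["email"].endswith("@gmail.com")]

# Remove them in one call; the endpoint accepts a list of addresses to delete.
requests.delete(API, headers=HEADERS, json={"emails": bad}).raise_for_status()
```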
Customers generally cannot change this on their end as far as I can imagine -- this is on the ESP end and is a protection built in because you are sending from their IP / Server and they don't take kindly to that.
The action for rectifying isn't too difficult, but the implications are still pretty big...
A lot of clean up is going to be needed as a result of this.
To add some more details, when using a 3rd party email delivery service, those services will either black-list or just outright remove email addresses when they get a hard bounce "email address no longer exists" message back.
Some providers make re-adding an address after a hard bounce a non-trivial task, since after all, the authority on that email address just said it doesn't exist.
This is going to be really ugly.
That simple fix buys them 24-72 hours to solve this properly.
Yeah, it burdens servers sending mail to them because now they have to hold on to all mail (including mail that really is permanently undeliverable) for another day or so, but that's still better than what's happening right now.
His solution would result in the exponential-backoff retries baked into most services kicking in, which would buy them a few hours, result in no lost emails, and add no suppression-list entries.
That is a better scenario than 5xx.
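For reference, the retry behaviour being described is roughly what queuing MTAs already do for 4xx replies; a toy version with invented intervals:

```python
# Toy exponential-backoff retry loop for temporary (4xx) SMTP failures.
import time

def send_with_retries(send_once, max_hours: float = 72.0) -> bool:
    delay, waited = 60.0, 0.0                 # start with a 1-minute delay
    while waited < max_hours * 3600:
        code = send_once()                    # returns the SMTP reply code
        if code < 400:
            return True                       # delivered
        if code >= 500:
            return False                      # permanent failure: bounce now
        time.sleep(delay)                     # 4xx: back off and retry
        waited += delay
        delay = min(delay * 2, 4 * 3600)      # cap the interval at 4 hours
    return False                              # give up after the retry window
```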
There is no way that putting in a hardcoded hack like that would have been faster. Making the change is, of course, fast.
But then you need to review it (and this is a super risky change, so the review can't be rubber stamped). Build a production build and run all your qualification tests. (Hope you found all the tests that depend on permanent errors being signalled properly). And then roll it out globally, which again is a risky operation, but with the additional problem that rolling restarts simply can't be done faster than a certain speed since you can only restart so many processes at once while still continuing to serve traffic.
The kind of thing you describe simply can't be done by changing the SMTP server, in 2.5 hours. The best you could get is if there was some kind of abuse or security related articulation point in the system, with fast pushes as required by the problem domain but still with the sufficient power to either prevent the requests from reaching the SMTP server at all, or intercept and change the response.
As a trivial example, something like blocking the SMTP port with a firewall rule could have been viable. Though it has the cost of degrading performance for everyone rather than just the affected requests.
My mail server logs show about 20 failures in all of the last week until yesterday 20:43 CET, then 350 failures between 20:43-00:21, then nothing after that. So fair enough, from the client side rather than the status page it looks like 3.5 hours rather than 2.5.
But still, given that resolution time, the suggested solution of changing the SMTP server is absolutely ludicrous.
Kind of happy I had to do something else and I didn't burn hours investigating.
(And if no such thing is detected, delete the quarantined mail addresses.)
The problem here is Gmail has been throwing out "NoSuchUser" errors which are an instant unsub in most systems because Gmail takes repeated delivery to non-existing addresses into account for deliverability purposes.
I'm extremely paranoid about email hygiene, tiny bounce rates and high delivery rates, so we aggressively unsubscribe troublesome addresses (often to the point of getting reader complaints about it) for many reasons beyond that, however.
I think you mean "reputation purposes"?
If so, wow, that sucks. Their opaque rules have conditioned their counterparties to punish Google as hard as possible for a screwup.
Good for karma, bad for everyone though.
That better describes what I was trying to say, yes. Reputation then affecting deliverability.
Over 80% of our subscribers use Gmail so to say I'm paranoid about maintaining a good record with them is an understatement ;-) Gmail is a huge weak link for us.
Most systems operate more immediately in isolation on individual addresses than that right now, because such analysis is generally not needed (until today, of course ;-)).
"Gmail going down" would not have caused this problem. Even if all their SMTP servers went offline.
That means I can't just resend the emails blindly, because I'm too scared to trigger some sort of automatic suspension...
(I don't do this regularly, so I'm not familiar with all features... additional mail verification could help probably ....)
 - https://en.wikipedia.org/wiki/List_of_SMTP_server_return_cod...
I am astonished that either (a) this switch has not been flipped yet or (b) this switch does not exist.
Somebody is incompetent here.
I'd absolutely hate to be hit by this at this time. Thankfully I made the time investment to run my own mail server years ago. A handful of times it broke down: it either went offline or started returning 4xx codes due to a misconfigured or broken milter after an update. Neither meant lost messages from normal senders that use queuing MTAs.
Is it? Is dealing with IP reputation, getting your emails accepted by major providers, and being on the hook for fixing everything yourself very easy? I haven't tried, so I don't have personal experience, but I've heard enough horror stories to think that it's not a good use of my time.
Receiving side is where there is a great range of options, and many things to try and have fun with. You can have anything from a single catchall mailbox with no filtering, no GUI, and simple IMAP or POP3 access for a MUA, to a multi-account, multi-domain setup with server-side filtering, database-driven mailbox and alias management, proper TLS, web MUA access, etc. It can also be built up gradually, starting from a very simple setup and moving to something more complicated, so that you never lose track of how things work.
Regarding getting a bad IP rating, normally that's due to having an insecure config, like acting as an open relay, or not having DKIM enabled. There are lots of tutorials online about this, if you know Linux it really is easy.
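For the DKIM part, if your MTA doesn't already sign via a milter, signing outgoing mail is only a few lines; a hedged sketch using the third-party dkimpy package (the selector, domain, and key path below are placeholders):

```python
# Sketch: DKIM-sign a raw RFC 5322 message before handing it to the MTA.
import dkim  # pip install dkimpy

raw_message = (
    b"From: me@example.com\r\n"
    b"To: you@example.net\r\n"
    b"Subject: test\r\n"
    b"\r\n"
    b"Hello\r\n"
)

with open("/etc/dkim/example.com.private", "rb") as f:   # placeholder key path
    private_key = f.read()

# dkim.sign() returns the DKIM-Signature header line, which gets prepended.
header = dkim.sign(
    message=raw_message,
    selector=b"mail",              # must match the mail._domainkey DNS record
    domain=b"example.com",
    privkey=private_key,
)
signed_message = header + raw_message
```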
TLDR: Before you spin up a mail server, check whether your IP address is on any of the blacklists, as well as Proofpoint's list. If it is, try to get a different IP address.
I spun up a hosted server on Digital Ocean and received an IP address. I checked several blacklists from a few email testing/troubleshooting sites, and all was groovy; my IP address wasn't on any list.
I got a bunch of 521 bounces when I tried emailing a neighbor who had an att.net address.
So, I checked the troubleshooting websites, and my IP address was listed as clean.
My logs said I should forward the error to email@example.com, so I did.
Those emails were never delivered, because abuse-att.net had its own blacklist. I was getting 553 errors. In the logs, the message from their server told me to check https://ipcheck.proofpoint.com.
Proofpoint runs their own blacklist that some enterprises use (e.g. AT&T and Apple). I checked their list, and lo and behold, my IP address from Digital Ocean was blocked. Digital Ocean wasn't able to remove the IP address from Proofpoint's blocklist and suggested I spin up a new droplet with a different IP address.
I didn't want to do that, so I sent Proofpoint an email asking them to remove my IP address; it went unanswered. I forgot about the issue for five or six months (this is a personal server), and ran into it again a few months ago. So I sent Proofpoint an email again, this time with different wording emphasizing that "my clients" were having delivery issues. Within a day, they removed my IP address from their block list.
So, my main suggestion is to check whether your IP address is on any of the blacklists, as well as Proofpoint's list, before you start on your server. If it is, try to get a different IP address.
Does anyone have more "enterprise" lists, like Proofpoint's, to check?
On the other hand, their status dashboard reported similar issues yesterday and here we are again: https://www.google.com/appsstatus#hl=en&v=status
The triggering event may be an email bounce. I get a lot of github notifications sent to my email, and the failure of just one/a few may trigger the reverification.
When this happens, you can spin up a temporary server and have a mechanism in place to redirect email so you don't go down when your provider does.
> When this happens, you can spin up a temporary server and have a mechanism in place to redirect email so you don't go down when your provider does.
Use a commercial provider, but fall back to your own server when it goes down without changing your email address.
Losing incoming email is pretty much the worst-case scenario when it comes to configuration errors. It's about as bad as not having backups, in that both cases result in unrecoverable loss of data.
Sure there will be some internal turmoil going on right now, but isn't there some non-confidential info to share? Can't imagine this will hurt the image of Google in either the short or the long run; quite the opposite.
How much do you hate it as an engineer when sales people make tech promises to customers without asking you? For comms people, engineers leaking info publicly feels the same way.
1) Harmless to share
2) Will never be shared by PR teams
I don't see anything wrong with asking people to share what they can.
> Sharing inside info on an ongoing incident is a great way to get fired
You're not disagreeing.
* We have a lot of automation/tools to prevent incidents when mitigation is straightforward (e.g. roll back a bad flag, quarantine unusual traffic patterns), which means that when something does go wrong it's often a new failure mode that needs custom, specialized mitigation. (e.g. what if you're in a situation where rolling back could make the problem worse? we might be Google, but we don't have magic wands)
* Debugging new failure modes is a coin flip: maybe your existing tools are sufficient to understand what's happening, but if they're not, getting that visibility can in itself be difficult. And just like everyone else, this can become a trial and error process: we find a plausible root cause, design and execute a mitigation based on that understanding, and then get more information that makes very clear that our hypothesis was incomplete (in the worst case, blatantly wrong).
As Douglas Adams says, "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."
Here come the poison pills!
Well, I guess the thing that is left unanswered for now is why the quota management reduced the capacity of Google's IMS in the first place.
Maybe we will know someday :)
When you operate at Google's scale then everything that can go wrong, will go wrong. Google does an amazing job providing high-availability services to billions of users, but doing so is a constant learning process; they are constantly blazing new trails for which there are no established best practices, and so there will always be unforeseen issues.
Yes, apps are highly distributed. Yes, roll-outs are staggered and controlled.
But some things are necessarily global. Things like your Google account are global (what went down the other day). Of course you can (and Google does) design such a system such that it's distributed and tolerant of any given piece failing. But it's still one system. And so, if something goes wrong in a new and exciting way... It might just happen to hit the service globally.
When things go down, it's because something weird happened. You don't hear about all the times the regular process prevented downtime... because things don't go down.
However, I'd speculate that in this instance, when you hit that .0001% problem, having fewer hands on deck makes the work-from-home aspects harder. It's akin to remotely fixing somebody's PC versus standing behind them.
With that premise, I'd speculate that in this instance, while it was not the root cause, it may have been a small ripple that led to that root cause and/or led to a slower resolution than they would normally get.
Those speculations aside, it only highlights that some tooling needs to adjust for remote workers, as do designs and set-ups. Water-cooler talk is not just for gossip, and a counter would be more regular online group socialising at a work level, so that not only the companies but also the workers can fully adapt to and embrace the working medium, and so the kinks and areas that need polishing can be polished and made better for all.
Lastly, I'd speculate that I'm totally wrong, and yet what I said may well chime with some out there and resonate with others.
It should not be a problem that Gmail is "down". Unless it kept happening for more than a few days, no one would lose e-mail. The problem is that it's not returning a temporary error code, but a permanent one.
I think a lot of time and effort is spent categorizing errors from external systems into transient or permanent, and it's always kind of a one-off thing because some of them depend on the specifics of the calling application. It definitely takes some iteration to get it perfect, and it's very possible to make mistakes.
You don't have milliseconds. You can take quite some time to handle the client; tens of seconds for sure. For example, the default timeout for the Postfix SMTP client when waiting for HELO is 5 minutes.
Sometimes it's a script responsible for deployment that will propagate an issue to the whole system. Sometimes it's the routing that goes wrong (for example when AWS routed all production traffic to the test cluster instead of the production cluster).
Now that everyone's replaceable, the popular culture desperately tries to shift focus into arguing about pronouns and terms.
Watch out, this is a road to nowhere. Forcing others to use the right pronoun won't build up your retirement fund, but will distract you from worrying about not having one. And the fact that you care about it more than about your opponent's T-shirt color could be an indication that you are being manipulated to not think about the long-term things.
Thank you, sir, for elevating our collective level of discourse.
This is where it crosses from insightful into conspiracy theory territory for me. People seem perfectly capable of groupthink-deluding themselves. Why cheapen your argument by postulating some master manipulator when it's not necessary for the deeper point you're making?
It will only lead to people focussing the discussion to challenge this particular aspect, or them disregarding all you've said, instead of engaging with the actual meat of the argument.
'Singular "their" etc., was an accepted part of the English language before the 18th-century grammarians started making arbitrary judgements as to what is "good English" and "bad English", based on a kind of pseudo-"logic" deduced from the Latin language, that has nothing whatever to do with English... And even after the old-line grammarians put it under their ban, this anathematized singular "their" construction never stopped being used by English-speakers, both orally and by serious literary writers.'
The same reason it ever mattered how you refer to people, politeness and respect. If someone you consider "him" asks you to refer to them as "her" it's like someone asking you to call them by their full name "Rebecca" instead of "Becky" or "Jonathan" instead of "Jon". If you like and respect them, you do as they request because things which matter to them matter to you, and being polite to them is important to you. If you ignore what they ask, call them what you want, you communicate that you don't respect them and don't want to be polite, that you want to dominate and 'win' instead.
> "Pronouns can mean whatever you want them to mean"
Only one way. A specific person asking you to use a specific pronoun for themselves is wildly different from you unilaterally and universally saying that all women should feel included by the word "him" because "him" has no meaning anymore.
Though with respect to 'ages' apparently it's been around since at least the 14th century but certain purists tried to stamp it out at various times (just like the singular 'you' which no one currently has grammatical issues with I hope).
In other words:
Maybe time to switch to a more reliable provider.
Did you try pulling them down using the API tester?: https://developers.google.com/gmail/api/reference/rest/v1/us...
Some of the internal formatting that Gmail uses has changed over the years, so more likely than not the API that parses the stored message for display in the Gmail UI is just throwing some kind of error.
Either way my point is that this is a pretty serious bug and they haven't even acknowledged it! Not a good look.