Exactly. Because the aggregation is so slow, you cache the aggregates, but then the caching layer becomes slow to invalidate, and you get eventual consistency problems.
I wouldn't call that a consistency issue. That's just lag. The aggregation is valid for a specific point in time. Caches aren't the source of truth. The source of truth remains consistent here.
The event store and the cache are both part of the same system, and it's this whole system that is "eventually consistent".
Say I create widgets on screen 1 and they are persisted with event sourcing into Postgres. I see the list of created widgets on screen 2, loaded from a materialised view in Postgres (or Elasticsearch). The "lag" between a widget being created on screen 1 and appearing on screen 2 is the "eventual consistency" issue I'm referring to, whereas I think you're only referring to the consistency of the persistence behind screen 1.
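Roughly the shape of it, as a toy sketch in Python (everything in-memory and the names are made up, but it's the same write-then-lag pattern as Postgres events plus a materialised view that gets refreshed later):

```python
import time

# Toy model of the lag I mean: events are appended to the source of
# truth immediately, but the read-side projection only catches up
# when it is refreshed (think REFRESH MATERIALIZED VIEW, or an
# indexer pushing into Elasticsearch).

event_store = []          # append-only log, the source of truth
widget_list_view = []     # read model that screen 2 queries

def create_widget(name):
    # Screen 1: the write is durable as soon as the event is appended.
    event_store.append({"type": "WidgetCreated", "name": name, "at": time.time()})

def refresh_view():
    # Runs later (cron, trigger, consumer, whatever) -- not at write time.
    widget_list_view.clear()
    widget_list_view.extend(e["name"] for e in event_store if e["type"] == "WidgetCreated")

create_widget("widget-1")
print(widget_list_view)   # [] -- screen 2 does not see it yet
refresh_view()
print(widget_list_view)   # ['widget-1'] -- eventually it does
```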
I'm sure we both agree there's no getting away from CAP theorem. Event sourcing accepts less consistency, and every part of the system needs to deal with that.
>The event store and the cache are both part of the same system, and it's this whole system that is "eventually consistent".
Then every single system on the face of the earth, at a high enough level, has a consistency issue. Just go up to the level of the full client-and-backend system, including web browsers. If you don't refresh the browser, you will of course eventually have a "consistency" issue.
Usually when people use this term they're referring to the source of truth: the database. When you replicate the database into two synchronizing copies, you increase availability with the second copy, but that creates the potential for the two copies to be inconsistent.
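A toy sketch of that, assuming asynchronous replication between the two copies (the names are invented and no real database is involved):

```python
# Two synchronizing copies of the source of truth, with async
# replication: writes land on the primary immediately, the replica
# only catches up when the replication queue is drained.

primary = {}
replica = {}
replication_queue = []   # changes waiting to be shipped to the replica

def write(key, value):
    primary[key] = value
    replication_queue.append((key, value))   # shipped asynchronously

def apply_replication():
    while replication_queue:
        key, value = replication_queue.pop(0)
        replica[key] = value

write("balance", 100)
print(replica.get("balance"))   # None -- the two copies disagree right now
apply_replication()
print(replica.get("balance"))   # 100 -- they converge once replication runs
```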
Then it's an outdated materialized view. If you don't refresh your view in a browser, is that a consistency issue? No. Not in the way the term is usually used.
Read the first paragraph of the Wikipedia article on eventual consistency; it describes the agreed-upon definition.
"Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value"
It refers to the same concept, and both technically and colloquially it's referring to databases or things similar to databases. If it didn't, then every system on the face of the earth would be eventually consistent because of browsers and caches and timing. Every browser eventually presents an outdated view if it's not refreshed. If that's the case, what is the point of the term? The term is obviously used for categorization.
Thus the scope is, usually and colloquially, database systems or some combination of services that represent the source of truth.
If you read further into the article you cited, you will encounter this:
"In order to ensure replica convergence, a system must reconcile differences between multiple copies of distributed data. This consists of two part..."
Literally the article assumes we are talking about distributed systems where replicas can exist.
Is a cache a replica? Is a browser a replica? No. The scope is obviously the source of truth, at the resolution where you can have replicas, aka two or more copies of the source of truth.
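To make the "reconcile differences between multiple copies" part concrete, here's a minimal last-write-wins sketch; it's only one of the reconciliation strategies the article has in mind, and the structure and names are mine:

```python
# Minimal last-write-wins reconciliation between two replicas of the
# same data: each write carries a timestamp, and on reconciliation
# both copies keep whichever version of each item is newest.

def reconcile(copy_a, copy_b):
    merged = {}
    for key in copy_a.keys() | copy_b.keys():
        a = copy_a.get(key)
        b = copy_b.get(key)
        # Pick the version with the later timestamp (last write wins).
        candidates = [v for v in (a, b) if v is not None]
        merged[key] = max(candidates, key=lambda v: v["ts"])
    return merged

replica_1 = {"user:1": {"name": "Alice", "ts": 5}}
replica_2 = {"user:1": {"name": "Alicia", "ts": 9}, "user:2": {"name": "Bob", "ts": 3}}

converged = reconcile(replica_1, replica_2)
print(converged)  # both replicas would now agree on 'Alicia' and 'Bob'
```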