This seems like something was done incorrectly; the comparison shouldn't be that drastic.
> Just one PostgreSQL 9.6.10 instance (shared_buffers = 128MB)
This looks way too low. The PostgreSQL docs say a good starting point for shared_buffers is 25% of the server's memory, which in this case would be 32GB.
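For reference, here's a minimal sketch of checking and raising the setting (this assumes a 128GB box and superuser access; the connection string is a placeholder):

```python
# Hedged sketch: inspect and raise shared_buffers via psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("SHOW shared_buffers;")
    print(cur.fetchone()[0])  # '128MB' is PostgreSQL's conservative default
    cur.execute("ALTER SYSTEM SET shared_buffers = '32GB';")
# The new value only takes effect after a server restart.
```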
But in my brief experience with an Elasticsearch-backed web application I found it difficult to write integration tests for the portions of code that dealt with ES. Thanks to ES's "eventually consistent" nature, tests would fail intermittently because we'd be e.g. querying some data that was written to ES but hadn't been fully persisted yet. ES gives you some "flush the data to disk right now, please" commands but they're merely suggestions and cannot be relied upon.
Obviously, you want to stub/mock out as many of those actual physical database reads/writes as possible, but sometimes what you want to test is the ES queries themselves and I don't know what on earth the best practice is there.
Just to reiterate, this is an issue that any non-ACID datastore would experience. I'm not criticizing them, I'm just sort of wondering how people typically solve that...
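To make it concrete, here's the shape of the kind of test that flaked for us. This is an illustrative sketch with made-up names, not the actual code:

```python
# Illustrative sketch of an intermittently failing integration test.
from elasticsearch import Elasticsearch

def test_search_finds_new_document():
    es = Elasticsearch("http://localhost:9200")
    es.index(index="test-items", id="1", document={"title": "blue widget"})
    # No refresh between the write and the read: the document may not be
    # searchable yet, so the assertion passes or fails depending on timing.
    res = es.search(index="test-items", query={"match": {"title": "widget"}})
    assert res["hits"]["total"]["value"] == 1
```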
Do you have other information about the refresh flag? Their documentation clearly states that a forced refresh is applied to the primary and replica shards, meaning the document will be available for query directly after the call returns.
However, this was back in ~2015, on Elasticsearch 1.3 or something like that, which is of course a now-ancient version. Perhaps things are different now.
edit: Perhaps we were using the refresh command and not the refresh flag. It was a few years ago, I don't have access to the code any more, and my memory may be failing here. If the refresh flag works as advertised (forces an index update and guarantees a consistent view of the data for the next query, which the command did not seem to do), then that of course solves my initial problem w.r.t. writing tests.
I've run countless integration tests with ES and never seen something fail due to refresh not working as advertised. If you have, what version of ES was it? Can you give some sample code that sporadically exhibits the problem?
The refresh command can also be called (which is what you're doing), but that is a different operation: it just triggers an index refresh, with no guarantee that it finishes or that it is consistent with any particular data mutation.
Did you read the previously posted documentation for the refresh flag?
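For concreteness, the two operations look like this in the Python client (index name and document are made up):

```python
# Sketch of the distinction between the refresh flag and the refresh command.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Refresh *flag*: tied to this specific write; the document is guaranteed
# to be searchable by the time the call returns.
es.index(index="test", id="1", document={"msg": "hello"}, refresh=True)

# Refresh *command*: a standalone API call, not attached to any particular
# insert/update.
es.indices.refresh(index="test")
```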
When you try to scale something to large sizes AND want high availability, it's pretty much a given you'll be dealing with eventual consistency.
We use ES to ingest billions of records per day. For us, being able to immediately query a row that was just added is less important than being able to handle the volume with relatively predictable performance.
ES is not meant to be an OLTP database. It's a search index, a much better wrapper around Lucene, but the distributed part has always been weak. The last several years of updates have primarily been about fixing the home-grown replication and storage.
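As an illustration of that trade-off, a sketch along these lines (index name and interval are assumptions, not our production config):

```python
# Sketch: favor ingest throughput over read-your-writes visibility.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# Publish new Lucene segments on a relaxed schedule instead of per write.
es.indices.put_settings(
    index="events", settings={"index": {"refresh_interval": "30s"}}
)

actions = ({"_index": "events", "_source": {"n": i}} for i in range(100_000))
helpers.bulk(es, actions)  # rows become searchable within ~30s, not instantly
```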
> Refresh the relevant primary and replica shards (not the whole index) immediately after the operation occurs, so that the updated document appears in search
Where did you get the behavior you described? Are you sure you're not confusing this with the separate refresh command itself? That one is not attached to any particular insert/update.
If so I wouldn't call this a "primary data store", since durability isn't critical.
The article says:
> After drafting many blueprints, we went for a Java service backed by Elasticsearch as the primary storage! This idea brought shivers to even the most senior Elasticsearch consultants hired
I'll shiver if Elasticsearch corruption irreversibly loses data, but if it can be rebuilt from another source I don't see any problem with it at all.
We’ve been running large Elasticsearch clusters as our primary search/analytics engine. While it’s overall very stable, stuff does occasionally happen that requires an index rebuild. We use HBase as our primary store and index via MapReduce or Spark batch jobs.
As much as I love Elasticsearch, I definitely wouldn’t be able to sleep at night knowing it was the primary datastore.
I was one of them. I don't work there anymore.
I believe it is not the actual situation at bol.com. If it is, I would be disappointed.
Last I remember, bol.com has a really good set of ops and dev tooling on Hadoop, HBase, Spark, Flink, etc. for scheduling and running jobs.
I don't know why they would replicate data to both HBase and Elasticsearch. Having read the blog, I don't see how this fits the event sourcing pattern that bol.com was trying to implement, nor the idea of self-service BI that they envisioned.
If I am not mistaken, the majority of the PL/SQL glue is owned by Gert, though you might recall better. Quite a bit of VCS history was lost while migrating from SVN to Git. ;-)
The reason we are "replicating" the entire data set is to 1) determine the affected products and 2) re-execute the relevant configurations (facets, synonyms, etc.) when making retroactive changes. (For instance, say someone has changed the PL/SQL of the "leeftijd" facet.) Here, the storage must allow querying on every field for (1), and on id for (2). While id-based bulk querying is (almost) supported by every ETL source, querying on every field is not. Hence, we "replicate" the sources on our side to satisfy these needs. Actually, the entire point of the post was to explain this problem, but apparently it was not clear enough.
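To sketch those two access patterns (hypothetical index and field names, and in Python for brevity, though the actual service is Java):

```python
# Hypothetical sketch of the two query patterns described above.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# (1) Query on an arbitrary field to determine the affected products,
# e.g. everything touched by the "leeftijd" facet configuration:
affected = es.search(index="etl-sources", query={"term": {"facet": "leeftijd"}})

# (2) Id-based bulk fetch of those products to re-execute the relevant
# configurations retroactively:
ids = [hit["_id"] for hit in affected["hits"]["hits"]]
docs = es.mget(index="etl-sources", ids=ids)
```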
As for your remarks on event sourcing and BI, I am a little bit puzzled and would need some elaboration. We do have event sourcing on our side (that is how we can replay in case of need), and BI is not really interested in ETL data. Maybe I misunderstood you?
I am also confused by how you relate scheduling/running PL/SQL jobs via Hadoop, Spark, Flink, etc. Did you see the link to Redwood Explorer I shared in the post?
But I understand now what you actually mean. I wouldn't call it ETL, as ETL is more about prepping the data for BI and not cooking data for search.
Yeah, I remember they used to have Redwood for scheduling PL/SQL jobs, but I think the majority of ETL jobs for BI were in Hadoop/Spark/Flink.
Having said all that, I think it is quite neat and cool engineering work. I hope you guys are successful in implementing the solution.
Isn't ETL an intermediary step in BI? I think I am a bit confused. To give some context, this is my understanding: you have all the services generating data, and you have ETL jobs that extract data from these services, transform it, and move it into a star or snowflake schema in an RDBMS, prepared for BI tools to query efficiently.
We had a tight deadline on implementation (3 months to extract from Google) and chose ES because it could satisfy both a KV-store use case and TF-IDF corpus search.
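Roughly, the two use cases side by side (illustrative names, not our actual schema):

```python
# Illustrative sketch: ES as a KV store (lookup by id) and as a relevance
# index (classic TF-IDF in older ES versions, BM25 by default in recent ones).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# KV-style access: fetch a document directly by id.
doc = es.get(index="items", id="item-42")

# Corpus search: rank documents by term relevance.
hits = es.search(index="items", query={"match": {"description": "red shoes"}})
```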
It sounds like the primary data store would be the stream of events.