ecaron 518 days ago | link | parent

After the third thing that was wrong about Solr, I stopped caring enough to write anything more than this comment.


rgrieselhuber 518 days ago | link

Given that this appears to be a community resource and not sponsored by either the Solr or Elasticsearch people, I'm sure your specific critiques would be useful.

-----

nzadrozny 518 days ago | link

Looks like a sales/SEO play for a Solr/ElasticSearch consultant. Still seems pretty helpful as a community resource. I emailed the author to see if he's interested in setting up a public GitHub repo to take pull requests.

Personally, I'd like to see similar comparisons for other search engines, like Sphinx and Postgres full-text search. When I talk to people about search engines, the first questions they ask are usually comparisons of one against another.

-----

codewright 518 days ago | link

Which is especially egregious, since both fall apart in more serious use-cases.

-----

nzadrozny 518 days ago | link

Can you expand on what you mean?

-----

Zombieball 518 days ago | link

Not sure about codewright's use cases. However, in my own brief experimentation with Solr I ran into performance issues with garbage collection.

I set up a cluster of about 15 cc2.8xlarge machines (5 shards with 3 replicas each) containing 240GB worth of documents (48GB per shard). Each node was given on the order of 40GB of heap space. While performing load tests with a relatively small load (~150 QPS), after a few minutes the garbage collector on nodes would kick in and run for 15 to 30s. This had a cascading effect of causing ZooKeeper to think nodes were down, start leader re-election, etc.
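
For reference, the layout was roughly what you'd get from the SolrCloud Collections API like this (a sketch only; the host and collection name are placeholders, not the exact commands I ran):

    # Sketch of the collection layout (hypothetical host/collection name):
    # 5 shards x 3 replicas = 15 cores, one ~48GB replica per cc2.8xlarge node.
    import requests

    params = {
        "action": "CREATE",
        "name": "documents",       # placeholder collection name
        "numShards": 5,
        "replicationFactor": 3,
        "maxShardsPerNode": 1,     # spread one replica per node
    }
    resp = requests.get("http://solr-host:8983/solr/admin/collections", params=params)
    print(resp.status_code, resp.text)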

Admittedly I am quite inexperienced when it comes to dealing with applications using such large heap sizes. Though I tried a few different JVM options with respect to GC, I was unsuccessful in resolving the problem.

If any folks here happen to have some good resources regarding GC and large Solr clusters I would definitely be interested.

-----

fizx 518 days ago | link

That huge heap is extremely counterproductive, because large heaps have terrible GC performance, and you're actually stealing memory from the natively memory-mapped files that make up your index.

Try it again with sane GC parameters, e.g.:

    -Xmx<N>G -Xms<N>G
    -XX:NewSize=<N/2>G -XX:MaxNewSize=<N/2>G
    -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
    -XX:+CMSIncrementalMode

where <N> is a value between 2 and 8.

Edit: I was benchmarking a similarly sized (though very differently configured) Solr cluster for a well-known internet company, and was able to tune it to do 5000qps, with p50 ~2ms and p99 ~20ms.

-----

Zombieball 517 days ago | link

Thanks for the tips. I was considering testing again with more partitions on smaller machines, perhaps N x m1.xlarge with 8GB of heap space each.

I was starting to think that since the heap space was so big, perhaps I should be worrying about page sizes as well. While I tried various GC settings (UseConcMarkSweepGC, ConcGCThreads, UseG1GC, etc.), I didn't take a stab at playing with the size of the new generation. Could you explain the reasoning behind this? Is the idea that most objects die young, so you try to favor frequent short minor GCs and avoid bigger major GCs? I am quite interested.

Edit: Regarding the cluster you were working on, would you be able to give general dimensions for the number of nodes and partitions in your cluster, plus memory for each? Just trying to get a general guideline to aim for.

-----

fizx 517 days ago | link

In general, I fix the newgen size mostly to avoid the optimizer choosing something braindead in a pathological case. 50/50 is safe, but not optimal.

In general, you should have enough unallocated memory on the box to cover your working dataset (it'll get used by caches and memmaps). If you can, find a way to exploit data locality. I shoot for (number of cores * 1-4)-ish partitions per box depending on workload. Using bigger boxes is usually better, because you can avoid communication latency and variance that arises from having tons of boxes.
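
To make that concrete, here's a quick back-of-the-envelope (Python; the core count and heap size are assumptions in the 2-8G range I suggested above, not measurements):

    # Rough per-box sizing sketch (numbers are assumptions, roughly a
    # cc2.8xlarge with ~16 physical cores and a small 8G heap).
    cores = 16
    heap_gb = 8
    newgen_gb = heap_gb // 2   # pin newgen at ~50/50: safe, not optimal

    jvm_flags = (
        "-Xmx{0}G -Xms{0}G -XX:NewSize={1}G -XX:MaxNewSize={1}G"
        .format(heap_gb, newgen_gb)
    )

    # shoot for (cores * 1-4)-ish partitions per box, depending on workload
    partitions_low, partitions_high = cores * 1, cores * 4

    print(jvm_flags)
    print("target partitions per box: {}-{}".format(partitions_low, partitions_high))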

If you want to know more, you can email me at kyle@onemorecloud.com.

-----

codewright 518 days ago | link

Your Solr cluster kicked the bucket at 150QPS?

Jesus dude. I couldn't reproduce that with my single or multi-node ElasticSearch clusters if I wanted to.

How were the EBS backing stores set up for these EC2 nodes?

Edit: Also, when I was talking about "them" falling apart, I meant Postgres or Sphinx, not Solr/ElasticSearch.

Well-configured Solr and ElasticSearch clusters can work very well for most people.

-----

fizx 518 days ago | link

EBS shouldn't really matter, because with a reasonable heap he should have 40-50GB of available filesystem cache for 48GB of data.
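
Spelled out (the RAM and heap figures are assumptions, roughly a cc2.8xlarge with a small heap):

    # Quick arithmetic: with a small heap, the index fits in the OS page cache,
    # so reads almost never hit EBS. (RAM/heap figures are assumptions.)
    ram_gb = 60        # roughly what a cc2.8xlarge has
    heap_gb = 8        # a "reasonable" heap per the flags above
    index_gb = 48      # data per node in the setup described

    fs_cache_gb = ram_gb - heap_gb
    print("~{}G of filesystem cache for {}G of data -> fully cached: {}"
          .format(fs_cache_gb, index_gb, index_gb <= fs_cache_gb))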

-----

codewright 518 days ago | link

If you need a serious search engine, Postgres and Sphinx won't last long. You'll end up moving to Solr or ElasticSearch. (I've used both, but use ElasticSearch now.)

-----

xentronium 518 days ago | link

I am not codewright, but we had trouble with Sphinx on search queries containing a larger number of terms (for us, hiccups started after 100 terms or so). Besides, setting up delta indexing is a PITA, and extensibility/configurability is limited. We ended up using Solr (which is a memory hog), but at least it works.

-----

mcantelon 518 days ago | link

Yeah, Elasticsearch is dead easy to update documents with. It's also easier than Sphinx to set up (basically throw some data at it and it'll suss out a mapping for it).
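
For instance (a toy sketch against a local node; the index and field names are made up), you just PUT a document and Elasticsearch infers the mapping, and partial updates are a single call:

    # Toy example (hypothetical index/doc): no mapping defined up front.
    import json
    import requests

    es = "http://localhost:9200"
    doc = {"title": "Solr vs Elasticsearch", "views": 42, "posted": "2013-01-15"}

    # Index it; Elasticsearch guesses string/long/date types for the fields.
    requests.put(es + "/articles/post/1", data=json.dumps(doc))

    # Partial update is one call too.
    requests.post(es + "/articles/post/1/_update",
                  data=json.dumps({"doc": {"views": 43}}))

    # Inspect the mapping it inferred.
    print(json.dumps(requests.get(es + "/articles/_mapping").json(), indent=2))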

-----

codewright 518 days ago | link

The running meme among the engineers at my company is that ElasticSearch is our secret weapon we love to whip out for various problems.

I almost wish it was more of a standard data store. Here's to hoping RethinkDB can fill that void.

-----



