
The Elastic Stack: Future of ELK Platform - hurrycane
https://www.elastic.co/v5
======
spdustin
I got pretty excited to read about this unified endeavor, and hungrily clicked
the CTA to _Get 5.0 Alpha_.

> Don't see a button? You may have to disable your ad blocker.

What a shitty experience. You're pitching a product that you're clearly
intending to sell commercially (which is both fine and dandy), and your first
interaction of this brave new World Stack is to tell me I'm browsing wrong!
And that's immediately after I click the previous "Get 5.0 Alpha" Call To
Action (which was a button that appeared just fine, mind you) and I'm faced
with the bitter realization that it's not _Get_, it's _Get In Our CRM_.

This kind of thing rubs me the wrong way, hard. I live in the B2B world, I
totally get the pitch and the signup, but don't mislead me on the call to
action, and don't make me distrust your intentions (or web development skills,
for that matter) by telling me I need to open my privacy and performance
preserving ad blocking gate and let you come over to my yard just to fill out
a damn signup form on your fancy clipboard.

It's inauthentic, and I don't think I'm the only one who feels that way.

~~~
msellout
I don't care about the authenticity/ickiness/etc. That just strikes me as a
terrible interface decision, one that would make me worry they've made other
terrible interface decisions where it matters -- in the product itself.

The idea of forcing the user to change a browser setting to get work done is a
very "enterprisey" mindset. An old way of thinking that it's the user's fault
if they can't use the software.

~~~
yeukhon
The Kibana 4 interface looks nice at first, but it's actually very confusing,
even more confusing than the old interface.

~~~
kkirsche
Maybe you'll be happy that 5 is another new interface.

~~~
dozzie
And another change of underlying stack? To what this time? Kibana 2 was PHP,
then (K3) it was static HTML + user-side JavaScript, and Kibana 4 runs on
Node.js.

------
packetized
This is the latest in a long string of irritations that will have me looking
for a replacement logging datastore post-haste. I've already replaced Logstash
with Heka; I guess ES is next.

~~~
dozzie
Good luck with that searching, and remember to share with HN.

I find ElasticSearch somewhat brittle (I need to restart it every few weeks or
so, because it stops accepting any data or queries), and I really do want to
replace it, since it's a memory hog (no data, and it already needs 230MB RAM),
but I haven't found any sensible log storage yet. All I got is this document
searching engine.

~~~
Jgrubb
That was my experience exactly. The value of being able to search through our
logs was immediately apparent, so when it kept tanking every couple of weeks it
was fairly easy to talk my boss into paying a tiny bit more for Loggly. AFAICT
Loggly is the ELK stack with a different theme applied to the UI, and I don't
have to worry that it's down when I need it.

The preceding has been an unpaid endorsement for Loggly.

~~~
packetized
Loggly is a five digit monthly bill at our present logging volume. No thanks.

~~~
Jgrubb
What are you using? I'd rather run it myself, but like I said, ELK was a bit
too bleeding edge for me in the stability dept.

~~~
packetized
Heka for ingestion/parsing/message routing, RabbitMQ as the queuing/delivery
mechanism, Elasticsearch as datastore, Grafana for visualization.

------
GordonS
So I was interested in finding out more about Beats. From the product page I
click 'View More' beside 'Beats Overview & Demo Video'... and get taken to a
registration form with all fields mandatory... eh... no... forget it then.

What the hell has happened to Elastic?!

~~~
packetized
Please take a look at Heka ([http://hekad.rtfd.org](http://hekad.rtfd.org)) -
it's a fairly complex tool, but I'm convinced that it's infinitely better than
anything Elastic will put out anytime soon for log shipping.

~~~
GordonS
Looks nice! I wasn't aware of Heka either (I've been out of the Elasticsearch
world for a year or so), so thanks :)

------
stuartaxelowen
Does anyone have a good description of the ES query DSL? In spite of the time
I've spent with it, I'm consistently unable to write a reasonable query
without going to google first, even for basics.

~~~
NDizzle
Here is what I have linked on one of my search pages: (for advanced customers
to examine)

[https://www.elastic.co/guide/en/elasticsearch/reference/1.7/...](https://www.elastic.co/guide/en/elasticsearch/reference/1.7/query-dsl-query-string-query.html#query-string-syntax)

I have SEVERAL elasticsearch-hellride.txt files stored in my documents folder
with a bunch of example queries, because it's so wordy I can't keep it all in
my head. I just refer to those every few months when I'm adding functionality.
I wouldn't get caught up with having to google for basics. Find some things
that work FOR YOU and save those, with your personal notes added in the right
places.
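In that spirit, here's one example worth saving: a minimal sketch of a 1.x
query-string query body, built in Python so it's easy to annotate in a notes
file. The field names and the Lucene string are made up for illustration:

```python
import json

# Build a request body for an Elasticsearch 1.x query_string query.
# "body" and "title" are hypothetical field names; default_operator
# makes bare terms AND together instead of the default OR.
def query_string_body(lucene_query, fields, size=10):
    """Return a query DSL dict ready to POST to _search."""
    return {
        "size": size,
        "query": {
            "query_string": {
                "query": lucene_query,
                "fields": fields,
                "default_operator": "AND",
            }
        },
    }

body = query_string_body('quarterly AND (report OR filing)', ["body", "title"])
print(json.dumps(body, indent=2))
```

Dumping it with `json.dumps` gives you exactly the JSON to paste into a saved
notes file or a query GUI.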

Here is what I can recommend as far as plugins go:

[http://www.elastichq.org/](http://www.elastichq.org/) \- clean, simple
interface, but not quite as powerful as:

KOPF: [https://github.com/lmenezes/elasticsearch-kopf](https://github.com/lmenezes/elasticsearch-kopf)

This will give you access to a great query interface. You can use the GUIs
here to build a few queries until you get a feel for what the JSON should look
like.

How I use elasticsearch is kind of... Well. It's just what I do. We have PDFs
that are OCRed and we store the text in the document along with ~50 fields
that are entered by humans. It's probably too complex, but it's financial data
and people love to be super verbose with their queries. I can't rely on the
OCR to be perfect for SEC filings.

~~~
packetized
Kopf is phenomenal. All the node data and index stats that I want, all super-
accessible.

------
clebio
I was excited and hopeful to try the ELK stack a while back but, like many of
the other commenters here, decided it was too unreliable and brittle.

I have an open issue on Logstash, spent a fair amount of time detailing it,
but have gotten no feedback. And then I realized there are 600+ open issues!
[https://github.com/elastic/logstash/issues/4389](https://github.com/elastic/logstash/issues/4389)

I'm considering using Syslog-ng and would love to hear if anyone has comments
on that. Based on other comments here, will be checking out Riemann and
Fluentd as well. [https://syslog-ng.org/](https://syslog-ng.org/)

~~~
Xylakant
I'm not an elastic person, but I can shed some light on this: you're holding
it wrong. It's not a bug. You have multiple config files in one directory - if
you do that, all those files are combined into one, which means that each
event gets handed to each of your individual outputs - multiplying the
message. See
[https://www.elastic.co/guide/en/logstash/current/command-line-flags.html](https://www.elastic.co/guide/en/logstash/current/command-line-flags.html)
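To make the merge concrete, consider two files in the config directory. With
no conditionals, every event reaches both outputs, so each message shows up
twice (paths and option names here are only illustrative - check the docs for
your logstash version):

```
# /etc/logstash/conf.d/10-es.conf
output {
  elasticsearch { hosts => ["localhost:9200"] }
}

# /etc/logstash/conf.d/20-file.conf
output {
  file { path => "/var/log/logstash/archive.log" }
}

# Because logstash concatenates these into a single pipeline, an output
# that should only see certain events needs a conditional guard, e.g.:
# output { if "archive" in [tags] { file { path => "/var/log/logstash/archive.log" } } }
```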

Feel free to hop on the IRC if you have further questions, there's usually
somebody qualified to answer.

~~~
clebio
I appreciate the help, though isn't the point of `/etc/*/conf.d` directories
generally that you have multiple config files? This is a common idiom that
other packages handle correctly (differently?).

I have hopped on the logstash IRC at times to ask about some of this, though I
guess not this exact item. In fact, there's a different (well-known issue)
that the init script for logstash has the config path hard-coded:
[https://botbot.me/freenode/logstash/2015-11-17/?msg=54338903...](https://botbot.me/freenode/logstash/2015-11-17/?msg=54338903&page=7)

There's also the problem that logstash (and forwarder) doesn't seem to let me
do anything useful with the file names. I could work around that, sure, but it
would be nice to have meaningful file names (not the "ls.*" thing that LS
uses). Syslog-ng, for comparison, gives you a lot of control over that.

~~~
Xylakant
> isn't the point of `/etc/*/conf.d` directories generally that you have
> multiple config files?

Yes, certainly. It's totally fine to place multiple config files there; I do
as well. I split my configs up into the various outputs, inputs, etc. It's
just that logstash combines them into a single pipeline and does not run a
pipeline per config file. Nginx doesn't run a webserver per config file
either :).

It's certainly something that's unexpected and could be much better
documented, but alas, I'm just a user :)

(and I do agree, your issue could have been handled much better, especially
since it's not actually a bug)

~~~
clebio
Awesome, thanks for the clarification. This might help (if I ever go back to
using logstash at this point!).

Yeah, my point was more that they accepted my issue, but there's been no
action, and there are 600+ other open issues. It seems Elastic is too busy
branding and pushing breaking changes to their APIs.

I do sincerely appreciate your clarifications and comments on this one,
though.

------
willejs
So Logstash v5, is that a rewrite in Go? I say this because it seems all their
other tooling is now written in Go, and also, the logstash agent and server
are very resource intensive in JRuby. If there's a v5 alpha, which isn't on
GitHub, is it not open source?

~~~
gerhardhaering
Apparently all they've announced is that they'll release all their products in
lockstep with a unified version number from now on.

I myself have hoped for a Go rewrite of Logstash for a long time, but there
are apparently no plans for this. They are creating lightweight _forwarders_
with their *beats, though. But they are only for forwarding to ElasticSearch,
not a general log pipeline processor like Logstash.

FWIW there is a thing that's like Logstash in Go, it's Heka by Mozilla. I am
very fond of it, but for some reason not many people seem to be aware of it or
deploy it.

~~~
LeoHexspoor
The *beats have a couple of outputs though, not only ElasticSearch. Besides
ElasticSearch there are outputs to console, file, Logstash, and a deprecated
Redis output.
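For reference, in the filebeat 1.x config those outputs sit side by side under
a single `output:` section; a rough sketch (hostnames and paths are made up,
and option names may differ between versions):

```yaml
output:
  # Ship events to a Logstash instance over the beats protocol.
  logstash:
    hosts: ["logstash.example.com:5044"]
  # Also dump events to local JSON files, e.g. for debugging.
  file:
    path: "/var/log/filebeat"
```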

------
hurrycane
More details on the blog post too: [https://www.elastic.co/blog/heya-elastic-stack-and-x-pack](https://www.elastic.co/blog/heya-elastic-stack-and-x-pack)

------
jamesblonde
What I didn't like about Kibana 4 was that they tried to force you to use a
Node.js server. I wanted to embed Kibana in my webapp, but Elastic are trying
to make you use their services and lock you into their platform. No surprise,
I guess, but annoying nonetheless.

~~~
bni
Has it ever been possible to embed Kibana in a webapp? Excluding iframe hacks,
of course.

~~~
jamesblonde
Yes, in v3. In v4, we're using this: [https://github.com/kibana-community/kibana4-static](https://github.com/kibana-community/kibana4-static)

------
glial
Is X-Pack going to be a subscription based service?

~~~
wyaeld
Considering all the components currently in it are only available if you're in
their subscription plan, that's basically the core business model.

~~~
glial
I wonder whether the components will be available a la carte like they are
now, or if it's going to be a single bundle.

------
forgotpwtomain
I've been extremely unimpressed with Logstash - many of the plugins in the
standard repo are poorly maintained, and certain misbehaving plugins can kill
the entire logstash process. It's really inexcusably bad for such
core-infrastructure software, tbh.

------
vacri
Dear Elastic:

For the love of internet-god, please stop your constant moving of stuff
around. Now we have new logos.

Last year's fun, I was using logstash-shipper to ship logs. Early on, the
package got pulled completely - sucks to be you if it's in your deployment
script or in your documentation. Then it had a name change. Then it moved to
one domain. Then it moved to another domain. Then it got switched out for
Beats.

Not everyone finds setting up and maintaining an ELK stack so fascinating that
they want to keep up to date with exactly where everything is this month.
While you can do other things with ELK, the primary use-case is logging.
Logging is supposed to be reliable and 'just work'. Every time I see the
elastic website, something else has changed, and everyone is pushing the new
stuff.

ELK is cool and all, but it's frustrating to follow when you just poke your
nose in every few months.

Love,

\- Vacri

~~~
bpchaps
Agreed. I stopped using Logstash for about a year and used it for a bit about
a month ago. Awful experience. Awful documentation. Deprecated shit
everywhere. Inconsistent stackoverflow information, and TWO external websites
to help make logstash actually functional. Oh, and since Logstash is a
Java-based application - would it hurt to give some Java stacktrace log
parsing configs?

Also, their shitty Debian repo management resulted in a bug that caused my
company to lose $30,000.

The world needs more ELK hate.

~~~
dozzie
It's not Java, it's Ruby. Which is even worse, because the distribution
tarball with logstash weighs a ridiculous 71MB (?!?) and requires a JVM to run
(?!?) (or at least nobody talks about it being runnable with MRI).

> Also, their shitty Debian repo management resulted in a bug that caused my
> company to lose $30,000.

Well, this is not that much their fault. If you had put any thought into which
repositories you use, you wouldn't install random packages from random sources
over which you have no control and no trust with regard to package retention
policy or package quality.

Or maybe you would happily install also MongoDB from Mongo's site?

~~~
vacri
Elasticsearch is Java, and the logstash tarball includes it and kibana from
memory, so you can run it as an all-in-one where logstash launches its own
Elasticsearch.

While I don't have the tiniest font in my terminal, I still couldn't read the
entire Elasticsearch process line in htop, even when I'd stretched the
terminal all the way across three monitors! The middle one was an ultrawide! I
really wish Java would stop using arguments instead of storing config
somewhere...

~~~
dozzie
> Elasticsearch is Java, and the logstash tarball includes it and kibana from
> memory,

Not really. It's just logstash, along with some plugins (and what the heck are
Maven bindings doing there?).

ElasticSearch is another 29MB compressed, which is fine for a database-like
thing, and Kibana 4.x takes 30MB compressed (150MB uncompressed, of which the
bundled copy of Node.js is not even the biggest part).

