Gravwell Community Edition: A Splunk Alternative Built with Go (gravwell.io)
115 points by floren 8 months ago | 73 comments

> The Community Edition is limited to 2GB/day of ingest (real ingest, not indexed data) and should handily cover any home use and most smaller I.T. and Security shops.

So it's not so much "a free Splunk alternative" as it is "a system with a slightly different free tier than Splunk"?

From https://www.splunk.com/en_us/software/pricing.html

> Splunk® Free (...) Scale up to 500 MB data per day

Yes, that's a fair summary. We chose 2 GB/day because our own home installations have stayed under that when we ingest netflow, DNS lookups, collectd hardware stats, and syslogs. We think it's enough to be actually useful to a home user, especially in combination with our ingesters (https://dev.gravwell.io/docs/#!quickstart/downloads.md)

OK, we'll take "free" out of the title above.

Given its prominent position (currently #4 on the HN front page), I would guess people just blindly upvoted the post without reading.


You can grab a 10GB/day dev license for Splunk for free. IANAL, but it should be suitable for home use. It lasts a few months, and renewals are free.

You are then chained to a renewal process and the whim of the agency giving out those licenses. That isn't freedom, that is servitude.

Every single time we have to renew our Splunk license, it's a hassle.

Unfortunately Splunk's installation process is much more streamlined and junior-sysadmin-friendly than ELK, at least in our limited view.

I need to look again at ELK to see if someone has created or improved the log input UI and indexing UI. Just something that lets you select log locations and types on a client in a web UI "somewhere", and something similar on the server to perform various operations for indexes and other such stuff.

Graylog2 (the open source version) is batteries-included and a very easy install. Definitely a lot easier than ELK stack.

Sumo Logic also has a 500MB/day free tier. I pay for Sumo and really like it.

The convention in software marketing is for a “Community Edition” to be the batteries-not-included DIY-install version of an open source project. That’s what the “community” in the name refers to.

Is that what this is? Where’s the source? How do you limit the ingestion capacity of an open source project?

If the core isn’t open source, that’s fine! But that makes “Community Edition” a misleading name.

The paid Gravwell licenses are targeted at businesses; we don't expect any home users to buy one. But we wanted to make the functionality available for the hobbyist community, so we called this version Community Edition. Apologies if the convention confuses, I'm not familiar with the usage you mentioned.

Microsoft Visual Studio also has a Community Edition, but they haven't released its source code either. So I don't think it's expected or mandatory to share a product's code with the public.

That's a good point.

Microsoft's evident misunderstanding of open source software is not an example that should be followed by others.

Why does it matter what it’s built with, if it’s closed-source?

Never underestimate the power of hype.

Generally, I agree with you - but it’s 2018, and I can’t see how hyping a SaaS by name dropping Golang is actually hyping anything.

The simple answer is that it helps. The longer answer involves cost analysis: whether the name creates a positive connotation or not, and what the risk would be of dropping it. Basically, marketing.

Isn’t this just an ad since this isn’t open source and the free version is a trial?

Some feedback: I clicked on this community edition blog post to learn more about the product, only to be immediately smacked in the face with a product demonstration of a user installing a license file. Yikes. That turned me off immediately. Will re-review later, still sounds cool.

You're right, it was kind of a weird and glaring thing to put there. We've removed it and replaced it with a screenshot of Gravwell in action.

Great, thanks!

“Built with Go”. Seems like there are a lot of “built with X” articles. When I see these I always think who really cares that much which language something is built with? Isn’t there anything more headline-worthy about it than the programming language?

At least with Go, usually you get cross platform static binaries which generally means easy to setup/get up and running/try.

It gives leverage when someone states that X cannot be done in Y.

So for example, if someone states C# cannot be used for writing an OS, then I can point to Midori.

I haven't yet met anyone that wasn't just trolling when they claimed something like that. It should be fairly obvious you can do practically everything at any level of abstraction. It will just get more or less efficient (and even that is not a given).

You got lucky; some people, especially in IT, only accept certain truths if they're shown running right in front of them.

And even then, they might still dismiss them even if proven wrong.

Just for fun, we ingest comments from Hacker News, so I made a quick search to find out who's been posting about Gravwell the most:


How embarrassing, it's me! :)

Please, please, please stop using pie charts to try to visualize ratios! See here[0] from 2007 for better alternatives. I'm sure there are newer visualizations as well.

[0]: https://www.perceptualedge.com/articles/visual_business_inte...

This is John from Gravwell; I'd be happy to answer any questions about Gravwell or Community Edition. We're rolling it out to the public today, and I thought our network security focus might interest the HN crowd!

I think this post (and the home page) would benefit greatly with some example dashboards and queries. You really have to dig deep into the docs to see what it is capable of.

Splunk on the other hand has dashboard examples on almost every page (on their homepage, a carousel with five different examples of the things they purport to solve).

(another blog post does a great job with showing off capabilities: https://www.gravwell.io/blog/gravwell-and-collectd )

It's a good point. What would you find particularly compelling as an example dashboard? We've built dashboards around hardware stats (cpu temp etc.), port scanning & brute-forcing attempts, even Reddit comments. All of the above? :)

I like the blog post I mentioned above, which covers cpu/disk/ram monitoring. I'd also like to see a fleshed out network analysis example.

Thanks. Here's a question about Gravwell, non-community edition: how much? I get really annoyed when I look at a tool or system that is possibly interesting, depending on cost, and can't find a price on the site. I don't have time to go through the whole sales spiel for every tool of interest; I just want to quickly filter on "now, maybe later, no way", and cost is a key factor in making that determination. And usually, if the answer is "it depends", it typically means "it depends on how far we think we can push". So thanks, but before I download a trial or community edition, or sign up to become an honoured member of your sales funnel, let's not waste anybody's time, and let me know "how much".

Agreed. This looks interesting, but I just do not have it in me to care about a company's sales funnel.

The last time I put up with this kind of thing I was on an hour-long call with a HashiCorp sales guy who didn't have the answers to any questions but did want to tell me that support would be $25,000/year for 9x5 access (lol).

Shit, I know the frustration. This one's on me. I'm Corey, one of the co-founders. Pricing is a big "it depends" but average cost savings over Splunk are 30%. For starters, a single node Basic unlimited data license is $25k annually. Things obviously go up for larger enterprises that need bigger clusters but our pricing model is a step function rather than the bullshit "pay per GB" model that everyone else seems stuck on. Our view is, it's your hardware so use it. Every license is unlimited data. You only add more nodes if you need more phatty IOPs for fast searches - hence the it depends.

Some people use it more like a black box and don't issue many searches so a single node with a shitload of storage is just fine. Others rely on active searching to monitor security incidents and KPIs so they want responsiveness and immediate insights.

Hey - no worries, and I appreciate the forthright (and detailed) response. I Show HN'd something the other day and got beat up a little too, I know how it goes. ;)

I typically use AWS's built-in tools, but I've been looking for something for home, so I'll be checking this out. Thanks.

Speaking of AWS, you may find our Kinesis stream integration useful: https://www.gravwell.io/blog/amazon-kinesis-streams-and-grav...

Website bug report: when visiting the trial request page[1] the header nav bar ("Home", "FAQ", etc) disappears.

[1]: https://www.gravwell.io/local-trial-request

> Unlike regular Gravwell licenses, Community Edition licenses are restricted to 2GB of ingested data per day.

I wanted to give this a chance but I see we're up from Splunk's 500MB to a mere 2GB, and I have to learn and deploy a completely new product and train everybody else on how to use it as well.

I've got a team of users who are familiar with Splunk, have written code and parsers for Splunk and simply want to use Splunk. I can't get the funding for it though. If Splunk raises their cap to match yours, what is Gravwell's advantage (besides presumably licensing costs)?

It searches faster. It's easier to get data in--we store binary data and then crack/parse it at search time, so you can even ingest things like raw packet capture or video streams. It's easy to deploy.

We intend Gravwell Community Edition as a way for home users to experiment on their own network, or to try things out at work to decide if they want a full license. We of course also do unrestricted evaluation licenses for interested parties.

Update: We went ahead and pushed out the first of our blog posts on building a home network monitoring center, to give a little more of an idea of what you could use it for: https://www.gravwell.io/blog/gravwell-and-collectd

I'm particularly fond of this article because it showcases the Turing-complete scripting interface which I built. The blog post shows how to set up a script which runs on a schedule and emails you if your disks get too full.

We'll be posting more articles over the coming days.

Two things:

1. Your docs site stays as a blank white screen for me. Some chrome extension is interfering, but uBlock Origin isn't flagging it as blocking anything and the site stays white even with it disabled. It displays in Chrome Incognito mode (Windows 10)

2. Is there a query language, or is this not designed to ingest textual/json logs? A let-down with a number of services is how opaque querying is (Scalyr for example). Examples would be good.

I've noticed that Splunk tends to be quite crufty and bloated due to it likely just being added to over time, rather than refactored. It can be a pain to work with, especially when it comes to writing custom "apps" and other plugin type scripts for it. The included libraries aren't great and the system takes a fair bit of massaging to get things working at a usable level.

Is Gravwell as extensible, and, if so, is it easier to work with?


I'll offer up first the caveat that Gravwell is a pretty new product, but we've definitely been building with an eye to extensibility. It helps that we haven't been around long enough to build up too much cruft!

The core of Gravwell is the search. You can run searches in the web GUI, or in the CLI client. You can schedule searches to run at certain times (specified with a cron spec, currently) so you could have search results ready first-thing every morning.
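For reference, a standard five-field cron spec (minute, hour, day-of-month, month, day-of-week; I'm assuming Gravwell follows the usual convention) that would fire at 6:00 AM on weekdays looks like:

```
0 6 * * 1-5
```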

More powerfully, you can write scripts to run searches. These scripts can be run on a schedule, or you can run them by hand using the CLI client. Check out our other blog post https://www.gravwell.io/blog/gravwell-and-collectd for an example script that runs a search on disk stats entries from collectd, checks if any machine is running out of disk space, and emails someone if there's a problem. Of course, scripts can be complex to write, so we're exploring options for simpler flowchart-like scripting within the GUI too.

You can also run scripts within the pipeline, which we frequently do when existing search modules don't quite meet our needs.

We've open-sourced the library that lets you ingest data, so you could pretty quickly ingest anything you want. Unlike Splunk, you don't have to massage every data source into a key-value sort of text format; we'll just take binary if you like, we don't care.

Gravwell has a REST API, so you could also interface with it that way. We'll probably open-source our Go client library at some point, but we want to clean it up a little first and make things more idiomatic in places.

I hope that answers your question a little? All the functionality I mentioned here is included in the Community Edition, of course!

How does it compare to the ELK stack (Elasticsearch, Logstash, and Kibana)? Do you do zero-copy in Go like Kafka does?

It's a bit of a different paradigm. From a deployment perspective we are a few static binaries, and we don't require that you fully understand your data prior to ingesting and operating on it. The storage system is a bit different too, in that it treats storage as a cost center (e.g. use expensive storage when you want speed, and age out to cheaper storage when you need longevity). The short answer is that we are truly unstructured, and will handle a lot of data in its native form.

To your second question, our entire system is built around processing locally; copying is a HUGE no-no in the platform until you absolutely have to. We're VERY happy with the performance we're getting out of this sucker. That's one of the primary reasons we were dumb enough to start from scratch on the storage and search architecture. We're just glad it paid off.

Definitely interested, but would like to see a demo or some screenshots of the UI or something featured more prominently.

There are some screenshots in the high-level quickstart (https://dev.gravwell.io/docs/#!quickstart/quickstart.md), but in general we do seem to be pretty demo-sparse on the main site. I'd also suggest checking out this older blog post about WiFi analytics (https://www.gravwell.io/blog/wifi-analytics-wild-west-hackin...) with the caveat that some things may have been refined a bit since that time.

Edit: we're putting screenshots into the blog post now

This looks great, is it free as in speech or free as in beer? I didn't see it on https://github.com/gravwell

It's free as in beer. The source was not made public, and no mention of plans to open source at https://www.gravwell.io/community-edition . This is just a free tier of their product.

Gravwell Community Edition is a free-as-in-beer license for our core product. Our github contains associated tools that we've made free as in speech: the ingesters, which gather data and store it in Gravwell; the ingest library, which you can use to write your own ingesters; and some additional libraries of more niche interest.

Why do we care that it's written in Go then?

1. HN users often seem to like knowing how something was made

2. Our open-source components (github.com/gravwell) are written in Go

Got it, well thanks for open sourcing some of it!

Speaking frankly, unless a Splunk alternative implements API compatibility for search, it's a nonstarter in the marketplace. At the enterprise I work at, developers and operations teams both use Splunk to observe and analyze application behavior, and have thousands of dashboards and alerts set up and integrated into mature operational processes. Migrating this over to another application is nontrivial. And the biggest problem isn't even a technical one, it's a social one: how do you train three hundred engineers to use a different log search tool?

I would absolutely love to see a competitor emerge that addressed the migration problem through a compatible search api. Handling other timeseries data like metrics would just be icing on the cake.

As a former Splunk employee, I would be really sad to see Splunk's search language enshrined as a standard. It wasn't designed, it grew organically from a set of shell scripts. It has no grammar, and using it effectively is largely knowing a grab bag of special commands that someone hacked on to fix a specific weakness of the language.

Interestingly, you don't need a special language. The relational calculus is isomorphic to the regularity calculus, which, in practical terms, means that SQL is a perfectly good language for Splunk's use case.

Just because it's not API-compatible with Splunk doesn't mean it's a "nonstarter."

Splunk is far from the only way to do what it does.

I don’t disagree. What I meant was that it’s a nonstarter for replacing any existing installation of splunk, which is desirable for me as an enterprise customer who spends a nontrivial amount of money every year on this sort of tool. There are a lot of obstacles to replacing an existing, effective implementation of a tool, and I listed what they are in the hopes that somebody pays attention.

I get what you're saying. I don't think you're wrong. This isn't currently a priority for us but I hope that someday someone builds something like that. That's one of the aspirations of releasing a Community Edition. Our API docs are open: https://dev.gravwell.io/docs/#!api/api.md

That's fair, and I sympathize with that position.

Has anyone tried using jupyter + plotly + pyspark or similar as a poor-man's splunk?

The usual "poor-man's Splunk" is the ELK stack.

For search yes but I haven’t figured out how to do any real analytics with it.

Even for regular search, Splunk's search language absolutely blows Elasticsearch's/Lucene's out of the water. Splunk is one of the best software products I've ever used and developed extensions for. A shame it's so obscenely expensive.

Yeah it's going to sound ridiculous but there are occasions where I really feel like I'm sculpting with information when I'm using Splunk...and that's with me using maybe five different commands from this list - http://docs.splunk.com/Documentation/Splunk/7.1.1/SearchRefe...

I'm fortunate to work for a company that invests 8 digits per year in their Splunk infrastructure, it's a travesty that I don't leverage more of its capability.

Any plans of having a SaaS offering?

It's easy to deploy to the cloud and right now we're doing that on a customer-by-customer basis. As we grow I could see turning it into a formal SaaS offering or finding a partner to do so.

Not necessarily SaaS, but we've been planning to roll out ready-to-go AWS Community Edition images so you could just provision a VM, upload the license via the web UI, and start using it.

I'll re-post my experience here:

I tried installing Gravwell from the Debian repo. This unfortunately seems broken.

  W: Failed to fetch http://update.gravwell.io/debian/dists/community/InRelease  Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)
(also, that should probably be https://update.gravwell.io...)
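For anyone hitting the same "Unable to find expected entry 'main/binary-i386/Packages'" error: apt is looking for i386 packages the repo apparently doesn't publish, so pinning the architecture in the sources entry is a plausible workaround. (The component names here are my assumption based on the URL in the error, so double-check against the install docs.)

```
deb [arch=amd64] https://update.gravwell.io/debian/ community main
```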

Once installed from the tarball, it failed to start.

  The gravwell_webserver process is not running!

  If you kept an old configuration file the configuration parameters may have changed.  Try manually starting the services and looking for errors.
Sure, I need to go update the web ports to not interfere with nginx. Makes sense.

  $ vim /opt/gravwell/etc/gravwell.conf
  $ systemctl start gravwell_webserver
  $ systemctl status gravwell_webserver
  gravwell_webserver.service - Gravwell Webserver Service
     Loaded: loaded (/etc/systemd/system/gravwell_webserver.service; enabled)
     Active: failed (Result: start-limit) since Thu 2018-07-12 02:15:07 UTC; 23min ago
    Process: 3411 ExecStart=/opt/gravwell/bin/gravwell_webserver -stderr %n (code=exited, status=255)
Huh, maybe I missed something else.

  $ journalctl -u gravwell_webserver
  -- Logs begin at Fri 2018-07-06 20:35:20 UTC, end at Thu 2018-07-12 02:41:26 UTC. --
That's odd, where's the logs?

  $ ls -la /opt/gravwell/logs/web
  drwxr-x--- 2 gravwell gravwell 4096 Jul 12 02:07 .
  drwxr-x--- 4 gravwell gravwell 4096 Jul 12 02:17 ..
No logs. Let's check the crash folder?

  $ ls -la crash
  drwxr-x--- 2 gravwell gravwell 4096 Jul 12 02:17 .
  drwxr-x--- 4 gravwell gravwell 4096 Jul 12 02:17 ..
  -rw-r--r-- 1 root     root      322 Jul 12 02:07 gravwell_webserver.service_2018-07-12T02:07:36Z.log
  -rw-r--r-- 1 root     root      354 Jul 12 02:15 gravwell_webserver.service_2018-07-12T02:15:07Z.log
There it is. Double click the filename to copy to...oh, it has colons.

  $ less gravwell_webserver.service_2018-07-12T02\:15\:07Z.log

  Version         2.0
  API Version     0.1
  Build Date      2018-Jul-06
  Build ID        bcd7739a
  Cmdline         /opt/gravwell/bin/gravwell_webserver -stderr gravwell_webserver.service
  Executing user  gravwell
  Parent PID      3411
  Parent cmdline  /lib/systemd/systemd --system --deserialize 14
  Parent user     root
  Failed to wait for new license: listen tcp bind: address already in use
Why's it listening on port 80? I must be missing a config option. Let's check the docs... https://dev.gravwell.io/docs/#!configuration/parameters.md

Nope. Not there. Nothing defaults to port 80 or looks like it would change it, except for Web-Port.

So, I really tried to give your product a shake. I'm very interested in having some centralized logging for my hobby projects, but not at the cost of nginx on port 80. So...\o/

We got a support email about this, if it was you then you probably already know the solution, but otherwise:

The port 80 listener is just a redirect to HTTPS. Add `Disable-HTTP-Redirector=true` to gravwell.conf to disable that redirector. The option is documented in the page you linked, but I can see how you'd miss it: we talk about "HTTP" rather than "port 80", which makes it more of a pain to search for.

Changing `Web-Port` will change the HTTPS listener port, as you figured out.

We're working on populating our new knowledge base now with the answers to this and other questions which came up during our community edition rollout: http://help.gravwell.io/knowledge/


