Wow - I guess I'm both surprised and completely unsurprised. Surprised because Splunk is a pretty big pill to swallow. Unsurprised because they've obviously been interested in the space for a long time (they attempted to acquire Datadog and got shot down).
Good luck Splunk folks - Cisco isn't exactly known for their software innovation in the upper stacks (they still do pretty incredible things at the network OS layer).
It's possible someone was selling contracts as a hedge since the tech market has been really bad this week. A market maker was obligated to buy the contracts.
The person selling the contracts gets $22k in premium, and misses out on the pop. The market maker will absolutely exercise the contracts and profit.
(This is coming from someone who sold AAPL calls expiring tomorrow for $0.08 at a high strike today)
Personal opinion: It's insider trading. You'd need a ton of shares to be able to sell $22k worth of contracts at a high strike unless you're doing naked options selling.
This is not quite how things work. Market makers don't just take on risk and not hedge. They would have hedged deltas (by shorting stock) and gamma/vega by selling other stuff (or this offset stuff they had sold previously). Impossible to say whether an MM would have made or lost money, but gap moves like this usually cost MMs money on a net basis.
Not that I recommend you try this, but my understanding is that if a careless Splunk executive were talking about the merger on the phone at the local coffee shop and you, a total stranger, happened to overhear, you could trade on that without it being a crime.
Cisco and Splunk merger/acquisition rumors are at least a year old. Lots of Splunk employees speculated that it would be announced at the last Splunk .conf (2022). Either way, I vote for this being blatant insider trading.
I was working for a startup that was acquired by Splunk in 2018. At the next Cisco Live that was all the talk/rumor: Cisco acquiring Splunk. At one point it sounded like Cisco attempted a somewhat hostile takeover of Splunk. My sources and I rehashed this last week; the initial bid for Splunk was $23B, sometime between 2018 and 2020. At the current price it's hard to tell if the current offer is effectively higher or actually lower, given inflation and market movement/sentiment.
Either way, it's a bad deal for both Splunk employees and their customers. SIEM is a space that is hard to lead when you're not vendor agnostic. This is basically what XDR has become: vendors with EDR/NDR/whatever claiming to have some unique (it's not) data lake that can ingest any source, when in reality all of these solutions are bad at everything outside their own product set. I've worked with countless clients over the last year who, as an example, made the mistake of thinking Microsoft Sentinel was a cost-effective tool, only to realize that once you're outside the Microsoft ecosystem, the quality of analytics/detections drops to very nearly zero and the price stops being cost effective. But SIEM has always had a flavor of vendor lock-in to it anyway. It's a hard platform to move off once time has been invested in wrangling all the data sources for ingest, transforming them to some bespoke schema, and then building all of the detection engineering on top of that. It's almost as bad as a large-scale firewall migration.
What a lot of folks don't know is that when Splunk decided to move to a cloud/SaaS model, they literally just lifted and shifted the unoptimized bits of on-prem Splunk to a managed VPC under the direction of then-CTO Tim Tully. Splunk was losing money on every deal because the infra cost more than the insanely high quotes Splunk was churning out. This is a great case study in the Innovator's Dilemma, as Splunk dragged their feet for years, saying internally that cloud would never impact them. Then they realized they were far behind the 8-ball and decided to hemorrhage cash so as not to churn customers. They eventually optimized it, but the underpinnings still aren't what a fresh take on the bits would have looked like had Splunk done the "right" thing.
Cisco will continue to play ELA games with customers, just like VMware. For those who don't know, both companies like to get customers into ELAs. Why? Because those contracts basically state that the customer will buy X number of new products annually or risk losing some, or all, of their currently negotiated discount. For smaller orgs this works less well, but you'd be amazed at how easily those smaller orgs are manipulated by snake-oil sales folks. For large orgs this puts them in a bind. I've even seen shady contracts (from Splunk) with language saying that if the customer does not renegotiate or cancel a, let's say, 3-year contract in writing 90 days before it expires, the contract auto-renews at a ridiculous percentage increase in cost.
Move away from these enterprise product sets where and when you can. These companies are focused on the bottom line - and that is profit, not the customer. The industry has it all backwards, and it's working for them... Still.
Matt Levine's Money Stuff offered the hypothesis that it could just be normal gambling. But it's almost certainly insider trading, and either way, someone will definitely be getting an SEC visit.
Though every time he invokes that hypothesis, he also backs off and says it's probably gambling or automated hedging, or, in the original case, a failed "broken wing iron condor straddle" going the opposite direction.
Possibly. I guess you can't rule out the idea that the information was found through some open means. For all we know, the private jets of the Cisco leaders might have been in the same location as Splunk's.
I don't have the knowledge or the patience to find out, but it would be interesting to see the overall pattern of 1-day calls on Splunk stock to see if this was an outlier.
I think the SEC loves cut-and-dried cases like this. You see enforcement actions all the time about similar situations. Usually some VP of one of the two companies is behind it, and they amateurishly try to cover their tracks by getting their brother to do the trade, or using their mother-in-law's account, etc.
IMO though it could easily be just some WSB bro that gambled and got lucky. Robinhood and other platforms make it easy to trade short dated options these days and people love to gamble on them.
My depth of stock trading stops at the buy-low-sell-high level. Can someone explain a little more if you have time? What would have happened to those trades if Splunk had gone down 20%?
They bought $127 call options (the right to buy Splunk at $127) while Splunk was valued at $119 and the options were due to expire in one day. That's a cheap option to buy, given the improbability of a sudden jump like that.
The only way the buyer could profit would be for Splunk to go higher than $127, and if it went significantly higher, they'd stand to make an eye-watering return-on-investment multiple in one day. Which is what happened.
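To put rough numbers on it (the $127 strike and ~$119 spot come from this thread; the $0.25 premium and the 100-share contract multiplier are illustrative assumptions, not reported figures):

```python
# Rough arithmetic for the trade described above. The $127 strike and ~$119
# spot come from the thread; the $0.25 premium is an assumption for a call
# ~7% out of the money with one day to expiry. One contract covers 100 shares.
STRIKE, PREMIUM, SHARES_PER_CONTRACT = 127.0, 0.25, 100

def call_pnl(spot_at_expiry, contracts=1):
    intrinsic = max(spot_at_expiry - STRIKE, 0.0)
    return (intrinsic - PREMIUM) * SHARES_PER_CONTRACT * contracts

print(call_pnl(119.0))  # stock flat: -25.0, the whole premium is lost
print(call_pnl(157.0))  # stock jumps to the $157 deal price: +2975.0 per contract
```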
It would be suspicious if this turns out to be a speculative trader making a one-off transaction.
This is an overly simplistic view of options trading. Let’s say I had a view that the stock was going to be volatile, more so than options implied, but didn’t have a directional view. I could buy the calls and short the stock and scalp my gamma during the move.
Or let’s say I was short the stock and wanted to hedge during a volatile FOMC period.
I’m not really sure what you mean. If I buy 10 lots of ATM 0dte puts, and 5 lots of underlying, I will have a delta neutral position. If the market moves up, my put delta will be less than 50 due to gamma, so I am now net long. So I sell some underlying for a small profit which takes me back to delta neutral. Then the market moves back down again, and my put delta increases leaving me net short (again due to gamma). So I buy some underlying to keep me delta neutral. This is called gamma scalping.
In the above, I've realized a small profit by trading the underlying, offset by a small bit of theta burn. As long as the former is greater than the latter (as long as realized vol > implied vol), I make money.
Rinse and repeat this process over and over again.
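A toy simulation of that loop, under stated assumptions: constant gamma, a linearized delta, and zero theta so the hedging profit shows up on its own (a real desk would price with an actual options model and pay theta):

```python
import random

# Toy gamma-scalping simulation of the loop described above. Assumptions:
# 10 puts, 100-share contract multiplier, constant gamma, delta linearized
# around the strike, zero theta.
CONTRACTS, MULT = 10, 100
GAMMA = 0.05  # assumed change in put delta per $1 of spot move

def put_delta(spot, strike):
    # Linearized ATM put delta, clamped to a put's valid [-1, 0] range.
    return max(-1.0, min(0.0, -0.5 + GAMMA * (spot - strike)))

strike = spot = 100.0
shares = -put_delta(spot, strike) * CONTRACTS * MULT  # start delta neutral
cash = -shares * spot                                  # cash paid for shares
option_pnl = 0.0

random.seed(1)
for _ in range(5000):
    new_spot = spot + random.gauss(0.0, 0.5)
    ds = new_spot - spot
    # Option P&L over the step: delta term plus the (always positive) gamma term.
    pos_delta = put_delta(spot, strike) * CONTRACTS * MULT
    option_pnl += pos_delta * ds + 0.5 * GAMMA * CONTRACTS * MULT * ds ** 2
    # Rebalance: after an up-move the puts shed delta, so sell shares into
    # strength; after a down-move, buy them back cheaper.
    target = -put_delta(new_spot, strike) * CONTRACTS * MULT
    cash -= (target - shares) * new_spot
    shares, spot = target, new_spot

stock_pnl = cash + shares * spot
print(f"P&L from scalping, before theta: {option_pnl + stock_pnl:,.2f}")
```

The delta terms of the options and the share hedge cancel each step, so what accumulates is the gamma term; in real trading, theta burn is subtracted from that.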
You and I have vastly different definitions of 'overly simplistic'.
Scalping your gamma?
Feels like the stock market is just a bunch of jargon, subterfuge and financial sleight of hand. Like we learned nothing from 2008, and just created financial 'products' mechanisms and gambits out of thin air.
Stock shorting has got to be one of the most pants-on-head stupid things I've ever heard.
> Feels like the stock market is just a bunch of jargon
This is literally every industry. Do you think the average trader can understand the majority of discussions on HN w/o any domain experience? The jargon exists for a reason.
> Like we learned nothing from 2008, and just created financial 'products' mechanisms and gambits out of thin air.
The financial engineering issues in 2008 were fueled by other issues: simply put, we had the government suppressing true borrowing costs and fueling a housing bubble under socially progressive cover. These moves almost universally end in disaster, historically. The "out of thin air" products I presume you're referring to all had/have legitimate use cases; the problem is that nobody bothered to do proper risk management because the US Government was fanning the flames in one direction.
> Stock shorting has got to be one of the most pants-on-head stupid things I've ever heard.
That's probably because you don't understand the positive aspects. Shorting is absolutely critical to well functioning and efficient markets. It's not simply evil hedge funds betting against businesses or whatever trope you might have heard.
In fact, if housing was an easily shortable asset class, the above crisis you mention would have been far less severe (or possibly not happened at all) as short selling pressure would have kept prices at more reasonable levels.
> Feels like the stock market is just a bunch of jargon, subterfuge and financial sleight of hand.
Here, what they're doing is establishing a position which will make money if the stock moves in either direction out of a narrow band. If you believe there's going to be a big industry upset, but don't know whether it will help or harm a specific player, you might enter this position. In turn, your information is added to the market, which adds liquidity and reduces overall volatility.
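A minimal payoff sketch of that kind of position, modeled here as a long straddle with made-up numbers:

```python
# A long straddle: buy a call and a put at the same strike. Numbers here are
# made up for illustration; P&L is per share, at expiry.
def straddle_pnl(spot, strike=120.0, call_prem=3.0, put_prem=3.0):
    call = max(spot - strike, 0.0)
    put = max(strike - spot, 0.0)
    return call + put - (call_prem + put_prem)

for s in (100, 114, 120, 126, 145):
    print(f"spot {s}: P&L {straddle_pnl(s):+.2f}")
# Loses the premium inside the 114-126 band; profits on a big move either way.
```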
> Stock shorting has got to be one of the most pants-on-head stupid things I've ever heard.
All kinds of simple, legitimate reasons to short stocks. E.g. you are excessively exposed to that company's welfare for some reason (stock options, they're an important vendor, they're a big component in a mutual fund you own but you'd rather not own their stock, etc)-- you can take an opposite position by shorting. Or, here, you can use it to offset an option that moves in the opposite direction.
> Like we learned nothing from 2008, and just created financial 'products' mechanisms and gambits out of thin air.
This isn't too much like the house of cards from 2008. These types of strategies are not new; offsetting short positions by writing or buying options was in frequent use in the 1970s, if not before. Option use to profit from volatility (or hedge volatility) dates back more than 2000 years.
I'm not a big fan of esoteric, complicated financial schemes, or in creating options and financialized products for everything (e.g. cap and trade)... or situations where market players profit from privileged access to marketplaces (e.g. HFT). But the things you name are not any of these.
Most of Cisco's current product suite came via acquisitions[0]. The difference with Meraki, compared to the typical Cisco acquisition, is how independently they were allowed to operate. WebEx was a similar story. Cisco would tell you that acquisition is a core competency of theirs[1], but having worked there for 8 years (including during the WebEx and Meraki acquisitions,) I'd say their track record is far more spotty. A few successes like Meraki, a bunch of mediocre examples and a few really bad ones, like Scientific Atlanta.
0 - Even switching originally came to Cisco via a whole series of acquisitions in the 90s. You could argue -- and Stanford certainly did -- that routing was an acquisition of sorts, as well.
1 - Their M&A guy even wrote a book about it, called Doing Both, which purported to explain how Cisco achieved so many of their goals by refusing to make false "either/or" decisions. Ironically, almost every example in the book was something that Cisco is spectacularly bad at.
I sat in on all the Cisco acquisition teams from c. 1994-1999. Even during that heyday there were awesome acquisitions that took off and others that went nowhere. Cisco was historically always better at hardware acquisitions than pure-play software. It would often kill the software products entirely; Internet Junction, TGV, and Precept come to mind.
The one other rule that John Chambers lived by was "no merger of equals." It was always about a big fish swallowing a smaller one. Cisco's market cap is an order of magnitude greater than Splunk's, but this is as close to breaking that Chambers Rule of Acquisitions as anything they've done to date.
Here's the full history of Cisco acquisitions. Maybe someone with more M&A lore could scorecard it to see which were dreams and which were duds.
I think they had better success integrating hardware companies, but SA — which was pretty much a hardware company — was a pretty big counter-example. I’d also argue the further they strayed from their core market, the worse the results. See also: Flip and Linksys.
I worked for NDS when they were acquired by Cisco. Cisco spat them back out a few years ago. I'm not entirely sure Cisco should have gotten into the video space.
I enjoyed Cisco (great 4th of July parties!) but it never felt like we were properly integrated.
Based on my experience in (mostly) software companies, hardware just seems more likely to work. The people building it are formally trained, the government forces a minimum amount of safety testing, and a design mistake could cost millions to fix, besides the reputational damage. Software is more like getting retail workers to build a remote controlled forklift out of junkyard parts.
Scientific Atlanta… there's a name I haven't heard in a long time. Didn't they use to make crappy cable boxes, back when cable TV meant a box that connected to the antenna input via coax?
SA made set top boxes along with a bunch of back-end infrastructure to make them work. It was an acquisition that made sense on paper -- Cisco did (does) a lot of business with service providers, they make cable modem termination systems (the headend devices that handle cable modem connectivity,) had dabbled in IP video, so it was a natural evolution to make and sell the rest of the gear you'd need to operate a cable-based service provider. I don't think they were counting on how rapidly Internet streaming would take over, but in any case, the acquisition didn't work out so well and last I heard they had divested it.
One other thing that I think feeds into these acquisition mishaps is that Cisco has, in my opinion, consistently over-estimated how much intelligence would be needed (or wanted) in the core network. In their view, intelligent network services = expensive network devices = revenue for Cisco. I think what the Internet specifically and IP in general, as well as the evolution of LAN technologies over time have proven is that when it comes to the core network, simple is almost always better and intelligence should move to the edge, where innovation can happen quicker and where services can be implemented in software.
As an example, at one point they had what was, essentially, a middleware system (like Websphere,) which they called Application Oriented Networking. The idea was you would deploy these on your network gear, throughout your network, and it would provide message routing and translation services. They had a whole "architecture" built for it, called Services Oriented Network Architecture[0]. I don't think the people who built it really understood that it provided no real advantage over a cluster of middleware/ESB/MQ servers in a data center and that nobody was going to pay a huge premium to build that capability in their IP routers.
> I don't think they were counting on how rapidly Internet streaming would take over
Ironically, those set top makers were in a perfect position to take advantage of it. They could have been Roku - they already had huge market penetration.
Sure, but those set top box manufacturers were beholden to the cable ISPs for their revenue, cable ISPs which would have been furious if a supplier started competing with them. The STB companies also, as a general rule, were really bad at UX -- there's a reason why the interface on those things was universally bad, and it's that the cable companies were the customer of the STB maker, not the end consumer. SA and the rest of them just didn't have consumer UX expertise as a real competency and didn't need to have it.
I was thinking way earlier than that. My grandparents had a Scientific Atlanta box connected to their giant piece-of-furniture Hughs and Mathis TV. This was the late 80s/early 90s, long before digital TV, or cable having more than 30 or 40 channels.
Yep. I worked at SA from the mid-90's through the mid '10's. They left the satellite business and focused (mostly) on cable systems. Was a lot of fun as digital settops rolled out, then DVR, then HDTV. As others have noted, the Cisco acquisition in 2006 did not, uh, work out too well. I believe Cisco had visions of video control "in the network", but that was never going to work for extant cable systems, and we couldn't get an IPTV solution going for lots of reasons. Loved my time at SA but it was oil and water with Cisco.
I worked at Scientific Atlanta in the 90s, designing stealth radar systems. Some very cool tech they developed. They also did a lot of satellite comms. And a lot of telecom tech.
Splunk is hands down the best log analysis tooling I've used. If not for the hefty price tag, I'd use it for my personal stuff and every workplace I've been. Structured logs and Splunk are the stuff dreams are made of if you care about monitoring the quality of software.
The logs-to-metrics abilities, along with how it unlocks finding relationships in data, are amazing. Mouse over the fields found in logs matching your search and see the top N values for each of those keys.
Imagine getting an alert and being able to search your logs for that error message and immediately seeing that it affects these N users disproportionately, that it is split 50/50 between two of your seven regions, and that it only affects version X of your service. A couple more searches to dig in and you can see it is only feature Y with setting Z that is the problem. You switch to a timechart view and can see the moment the error started and the affected user counts. A few more minutes and your support team has a list of known affected users. You decide to monitor this new feature, so you quickly create a new dashboard (or a panel on an existing dashboard) and a new alert. At no time did you have to declare a field of your structured logs as indexed or as searchable or aggregatable.
Splunk has delivered this level of innovation and quality since 2007 when I first used it.
We used Splunk to associate a change request ticket number all the way through the change control process to the Puppet log output tagging each change to the original business purpose.
It was like magic for auditors back then and I rarely see that depth of tracing automated changes to business purpose in the field today, though we get close with gitops.
The entire moat is gone. The biggest value driver they had was the integrations to get the data there, but now eBPF, Telegraf, and Vector have destroyed that moat.
With Vector you can even source from Splunk and move elsewhere.
OMB Memorandum M-21-31[0], “Improving the Federal Government's Investigative and Remediation Capabilities Related to Cybersecurity Incidents” which includes directives to ensure event logging goes well beyond the current norms.
By all accounts I've heard it's going to enrich the fortunes of every single SIEM/Log aggregation company out there, pretty much every govt contractor is going to need larger licenses in the next few years as contracts get rewritten with this EO in mind.
Partially, but Splunk has actually been on the market for some time. Also, large companies that compete with Cisco, like CRWD and PAN, have been building out SIEM capabilities, as has Cisco, though Cisco being Cisco, it didn't get the attention needed.
We [Notion] switched to Splunk Cloud a year or so ago, and it's vastly better than the other logging systems we've used. Much, much better than Kibana/Elasticsearch. We don't need to worry about indexed property limits anymore, yay. I'm a happy user.
Same for us [Obsidian Sync], though we've not had to worry about property limits yet, and it seems like we won't have to either. For us, it was having a lot of in-house experience with Splunk already that gave us a reason to consider it and, in the end, settle on it.
The software seems very lazy. The interface belongs in the 90s. They've been resting on their laurels for eons. The fuckin basic ass PowerShell IDE that comes with windows is about seventeen trillion times more well designed and user-friendly.
That’s… a compliment? There have been very few positive interface developments in the last 2 decades for power users. If you want to rip out 95% of the functionality and 99% of the usefulness so morons with iPads can navigate it, then it probably needs adjustments.
>The interface belongs in the 90s
Maybe this is why I actually like Splunk. Everything is simple and intuitive. Modern UIs seem to be universally terrible.
That's fine as long as your product stays competitive.
But as you lose the smaller and middle-range customers, you're also missing on the trends of the market, while getting shaken up by the big players you can't afford to say no to. If one of your whales needs feature Y, no matter how exotic you think it could be, you'll have to implement Y, bloating your product for the rest of your clients.
And while you're doing that, smaller competitors slowly creep up, eating up the bottom of your market, until you're stuck in a niche.
I'm at a Fortune 100 and we are looking at replacing Splunk with Sentinel because of the cost of Splunk. I don't use either in my day-to-day and have no horse in the race, but if my company is doing it, then the cost of Splunk must not be trivial.
> And while you're doing that, smaller competitors slowly creep up, eating up the bottom of your market, until you're stuck in a niche.
So what? Milking mega-enterprises with ossified products is a decently profitable niche. IBM, SAP, that huge American company powering a lot of hospital IT, Cisco itself...
ServiceNow actually is quite decent... if you have a good management team, that is. I know a well run implementation and one that's a horrid clusterfuck no one wants to use (and because of that, they're implementing some AI chatbot, which I'm sure will piss people off even more).
I completely disagree with both the spirit of the comment as well as the particular strawman presented.
It is not better at all, by almost any metric other than overhead. Losing 1 of 1,000 customers @ $1,000 is very different from losing 1 of 1 customer @ $1M. One is easy to manage; the other leaves you dead in the water. In addition, you'd start to make concessions and unnatural decisions because you're so lopsided in diversity. And you're going to get completely fucked at renewal time. And, and, and...
Good M&A teams know this. They build a risk profile when revenue is a component of the acquisition. The acquiring party gets to learn a lot about the fundamentals when putting deals together and it's all factored in.
To put it simply: having a healthy balance of revenue from multiple sources is a premium. Those are opportunities to advance your relationship and grow. Too many eggs in too few baskets are major red flags that will have your revenue working against you.
They're profitable now, with Cisco acquiring them. Cisco will trim the company by at least 20% through restructuring, either out of the company or into other parts of Cisco.
They'll pick up another 10-20% in capex/opex/COGS savings from the private pricing that Cisco gets.
Great M&A, if Cisco manages to maintain Splunk's customer base. I look at Splunk as the Oracle DB of the world now: it does anything a giant enterprise can imagine, but is old and costs an arm and a leg.
In sales we call this "Ideal Customer Profile." Why do I want a customer with less money to spend if I have a product with enough capability for the gigantic money-is-no-object customers?
I work at a 100+ year old giga-bank, systemically important in its home country, in their Hong Kong investment banking branch.
We loved Splunk, we invested quite a bit in it both for technical monitoring and business intelligence. After a while the price went so high we cut it all, moved to kdb/tableau/elk/whatever crappier system that cost less.
Money is ALWAYS an object and Splunk makes sure to dig a hole deep enough for even the deepest pockets. I too prefer my shareholders to collect the fruit of my labor rather than... Splunk. At least they can reinvest some profit in us. Not Splunk, nope, they keep digging that hole in our pockets.
Spot on. I also work in a 100+ year old gigantic corporation with big money and we are also moving off Splunk due to rising costs. Enterprise customers do not just pay whatever the sales folks ask for. Splunk is dead growth wise if they don’t fix their pricing.
We moved a business from splunk to ELK a couple of years ago. The actual work of doing so took less than a day. The maintenance processes changed, and some things are not as good. But aside from the beefy machine we run ELK on it costs next to nothing, and is very reliable.
Mindshare is valuable, was the point GP was making. If midsize customers ignore you because you're too expensive, and then implement something else before they get big enough to afford you, where do you get new customers? Forget growth, how do you replace attrition as your existing customers die?
Personally I can't say if that's actually happening with Splunk, but it's a very plausible scenario.
I've recently dealt with multiple companies who started using IBM Aspera (which as a vendor to them means we have to use it too) only for it to work miserably. I've also seen a couple tiny, perfectly functional MySQL databases replaced by expensive, slower Oracle databases with much higher maintenance costs.
I think once a customer with a big enough budget is recognized by sales at one of these big organizations they make the sale happen. They talk to the higher-ups and either make them happy, or feed them a lot of FUD (or both), and then they're in, regardless of what the people working with the products (many of whom might be external vendors or consultants!) think.
They're basically focused on more traditional sales & marketing instead of more grassroots sales & marketing (mindshare), but at least in my experience they definitely still get new customers.
> Mindshare is valuable, was the point GP was making. If midsize customers ignore you because you're too expensive, and then implement something else before they get big enough to afford you, where do you get new customers? Forget growth, how do you replace attrition as your existing customers die?
Somehow companies manage to make it work by extracting money from their existing money-is-no-object customers. Oracle and IBM have basically zero mind-share amongst HN-reading folks, and yet there they are.
Microsoft dominated the nineties especially, and the aughts less so, because the marginal price of their OS was zero, due to piracy. Yes, they didn't like businesses running unlicensed copies, but for individual users, nobody cared, because in 5-10-20 years you'd be a paying business or would work for a paying business.
Splunk doesn't get that. There are no hobbyist/prosumer splunk installations. Zero. Nada. That's also how Linux won in the server space - nobody set up Windows servers as a hobby and 20 years later we're here.
IOW it's medium-term short-sightedness, if that makes sense. Tactically good, strategically so-so to bad, depending on your moat and momentum.
> Splunk doesn't get that. There are no hobbyist/prosumer splunk installations. Zero. Nada.
Not true. I ran a free (legit!) Splunk instance in my homelab for years. It's been several years since I shut the homelab down, so I couldn't tell you if they still have hobbyist licensing, but they certainly had it in the past.
It was at one point usable, but they drove off the hobbyist/small-business crowd a long time ago. We do some work setting up elasticsearch tools that aggregate and filter data before it's sent to central Splunk, purely to effect a large reduction in license costs.
Kibana and Graylog on top of elastic/opensearch. Even the commercial licenses on those are usually a tiny fraction of Splunk's costs, and Graylog does enough for free that it's a much easier path to stand it up and then buy the correlation functionality if you really need it.
For some organizations what Splunk does well is important, but most of them really only need much more basic log aggregation and analysis tools.
I believe the idea is that the big customers are interested because everyone is raving about it. If you price out the smaller customers, there's nobody to rave about it.
Consider, for example, that Akamai's revenues are sitting in a plateau over the last 5 years, while Cloudflare is moving up.
> I believe the idea is that the big customers are interested because everyone is raving about it. If you price out the smaller customers, there's nobody to rave about it.
That's not how enterprise procurement works, which is what makes the big bucks for companies like Akamai and Splunk.
Cloudflare traditionally targeted mid-market and is in the process of building out an upper market/enterprise motion (I worked with the guy they hired to lead that in a previous role).
I can dig deeper into ICP, market segmentation, and enterprise sales if anyone is interested. There is too much FUD on HN.
>big customers are interested because everyone is raving about it.
In this case the big customers are already using it. Splunk's value proposition for those customers is that it can handle massive volume without a hiccup. Small customers don't have the needs where Splunk is uniquely useful.
Because those medium-sized customers become large customers, and getting more people to use your product builds up a skill set in people. Switching costs are very high. This is why we'll probably see Datadog and New Relic dominate the logging space with their no-contract plans that you can scale up to negotiated rates when you become larger. Even getting a POC of Splunk is expensive, and the sales team will push for a contract.
What Splunk has going for it now is that they have a lot invested in compliance and security, but it's only a matter of time before other providers start offering the same. The only use case I would consider them for is a SIEM. Datadog logging is so cheap, and it works, and it gives me more money to spend on other things.
Maybe, but the Splunk query language is reasonably well liked by its users, at least in the security space. Much more approachable than SQL, which seems to be what all new tools these days are forcing users to use due to their dependence on Snowflake and Presto/Trino. In Splunk, you can type free text queries, and you can also add structure. Fairly flexible. We’ve been asked many times to make Scanner’s query lang more like Splunk’s.
I worked at Cisco following an acquisition. I tend to agree with this comment about the upper stacks :)
However, AppDynamics and Duo seem to be doing well at Cisco from what I can tell. I think observability and security tools are a good match for Cisco and bundle well with hardware. For this reason, I’ll bet Splunk does reasonably well under Cisco too.
That's really a shame; Cisco buying anyone is often a death knell for the product. Look at their acquisitions of security companies like Protego, Stealthwatch, ThousandEyes, and others that languish there, get bled into watered-down features for other dubious Cisco products, and disappear into the ocean. Customers then abandon the products to escape Cisco, again, for products that aren't stagnant and overpriced.
Already, a customer/friend at a $6B retail customer of mine, the Splunk owner there, sent me the link first thing. Just last week I asked if they'd looked at Datadog much yet, and they said you'd have to rip Splunk from their cold dead hands. The follow-up to the buyout news link was that they were going to start looking at Datadog now. Splunk was already expensive, but not Cisco expensive.
Genuinely surprised anybody would acquire Splunk in 2023. Whenever you hear about Splunk from security engineers, they're actively trying to get off it (edit: yes, primarily because of cost). Better, next-gen SIEMs are either here or around the corner.
Splunk is a great product with a horrible sales and business team.
The reason they're _trying_ to get off it is that they have a bunch of stuff that is easy and works in Splunk, but they don't want to pay the exorbitant licensing, or pay even more to increase their use.
But getting off a good product is hard, and they will continue to use it and even pay.
The kind of thing Cisco, Oracle, and IBM love are companies with very expensive products in which no development needs to happen and customers cannot move away easily.
I was in one of these meetings, with like 20 engineers, on how amazing this thing was. We knew that because we already used it quite extensively. The very extremely hyper sales rep kept ducking out of the meeting every 5 minutes. I recognized it for what it was: he was ducking out to do bumps of coke so he could be more pumped to sell us more stuff.
I was at a shop that got heavily integrated into Splunk for security use cases and then entered a split brain mode of 'well if you need observability we already have Splunk' but also 'hey stop doing so much observability, this thing is expensive!'.
So for 5 years we used it for observability while we were only half-integrated and also trying to get off of it. Great stuff.
Worked on a piece of software which suffered from years of this split brain. It had some logging and some metrics, but the team was told to be economical about observability. This resulted in the software having many blind spots, which led to production issues that had to be manually reproduced. When I became responsible for the software, I personally overhauled the logging, and the team had to work together to rebuild the metrics functionality.
this is an area that gets very political, with architects, managers and other non-coders having too much of a say
a lot of paralysis on the app dev side, as the status quo is easier than fighting for a sensible outcome
it's also something that, yes, benefits stakeholders... but only as a 2nd/3rd order effect of outage avoidance & remediation.. so there's not a huge reward for doing it really really well in many shops
I haven't heard a single person trying to get off of it because "there are better SIEMs" - they're universally looking at other options because of the price.
Cisco has the luxury of bundle and save that Splunk does not.
I can see them shipping a really cool-looking whitepaper detailing FTD, AMP, and Splunk... but actually operating it will feel similar to driving a 20-year-old salt-state Jeep Wrangler on the autobahn.
Oh god, those Firepowers we bought were so bad. The controller webpage needed to manage our pair required something like 32GB of RAM just to load.
Using fortigates now, far happier with them.
But it's not just at the firewall level; they were so bad it made us reevaluate our core switches, and I don't think we've bought a Cisco switch for at least 2 years.
I used Splunk at a previous job and that’s one of my few/only complaints with it. Great tool but extremely expensive for what you get. Datadog is the same way as well as Pagerduty. There’s not enough competition in these spaces
That's super true of PagerDuty. It's a pretty good product and cheap when you only have a few people on it. However, the jump from the basic license to the next tier is HUGE, and any add-ons you might need (e.g. webhook triggers) bump the price up even more. Just having a simple monitoring solution with more than 10 people could cost you hundreds of dollars a month.
That said, every other product in this space is crap. I'm not sure why though. This seems like a pretty good market for disruption. Maybe there is some hidden "problem" that I don't know about.
PagerDuty is extremely expensive and I decided to disrupt the market a little bit by creating All Quiet: You might want to check it out: https://allquiet.app
My take on xmatters is that it gives you some building blocks to build a decent paging system, and has a fair amount of flexibility, but many things that work out of the box, or with a little bit of configuration in PagerDuty require a non-trivial amount of work in xmatters to set up. And you will likely run into limitations.
Why is pagerduty hard to switch off of? It has all kinds of useless and expensive bells and whistles, while the core functionality is a commodity that several companies offer.
We moved vendors a few times and it wasn't that painful.
What happens when Twilio is down?
Same questions for your email, SMS, and server. Part of the difficulty is guaranteed uptime, and PagerDuty is rock solid in that regard.
Hmm, are you referring to their Observability product or SIEM capabilities? There's a wild amount of competition in the Observability side of things, but SIEM not so much.
I'd love to know what the security engineers you are talking to recommend because Splunk ES/SOAR are top notch products - even with the cost (which is insane).
Which ones do you recommend? Every one I have tried hasn't really given me the same flexibility as Splunk, most seem to miss the core part of what makes Splunk cool. Though I'd definitely like to see Splunk improve their design.
Microsoft is doing a surprisingly good job with their Sentinel SIEM. The sweetener is they give you free ingestion on most of your Office 365/Azure logs which can add up if you’re shipping out to another platform.
Makes it attractive for enterprises already on their platform and they throw in discounts for E5 license tier customers as well (gotta keep pushing the “give us everything or pay way more for single feature licenses”).
The thing that will totally replace splunk (and elastic and snowflake and likely several other whole ecosystems) is some random thing pouring data into clickhouse.
I am nervous about how clickhouse is going to monetize, whenever they decide to turn on the revenue spigot.
I hate to shill in this thread, but that's exactly what we built at runreveal, so I completely agree! We saw the power of clickhouse when we were at segment and cloudflare, so built a company around it.
And since ClickHouse is open source, we hope that people will stop giving their security data to vendors who then charge rent for it. I think the future is writing this data to ClickHouse, and also to our customers' own ClickHouse instances.
I used to love Graylog, but I was evaluating it for use with AWS and a) its AWS bits seem limited, and b) I found a bunch of dead links from their GitHub to their site. If they can't keep their docs updated, it doesn't give me warm fuzzies about their product.
Hey, founder of Tenzir [1] here. We are building an open-core, pipeline-first engine that can massively reduce Splunk costs. Even though we go to market "mid-stream", we have a few users that use us as a lightweight SIEM (or, more accurately, just plain log management).
We are still in early access but you can browse through our docs or swing by our Discord.
If you're looking for something that can handle unstructured data and has a similar query syntax to Splunk then Gravwell (https://www.gravwell.io) might be a fit.
Sounds exactly like the kind of enterprise software Cisco wants... At that price point they don't really care what the security engineers want; they sell to higher-level folks.
They want so badly to be a software company, and they already have experience with highly inflated product pricing.
Their real target is probably offering this built into Meraki-like products as a one-stop shop. I could see them finally burning their monitoring product in a fire and replacing it with Splunk and Grafana, then selling it as an all-cloud solution. That's the intent, at least; we know Cisco's track record for integrating acquisitions.
Avoid Devo, querying across data sets with their system was hot garbage in comparison to both splunk and elastic. Then when you try and break up with them it becomes a whole thing.
Avoid Exabeam. Their UEBA product is riddled with problems, and they are not concerned that it does not display timestamps for when an event occurred; they display timestamps for event ingestion, which can sometimes be hours off.
They also seem to outsource much of the development, maintenance and support and appear to have high turnover.
To pile onto the Splunk "love" going on here: Splunk is one of those systems that's too "powerful" for small use cases, but too expensive for the ones it's really designed for.
An anecdote: I once worked with a client that really wanted to get Splunk but produced so much network traffic that the discounted annual cost was more than the entire budget for the rest of the organization combined. That's staff, the building, equipment, power, water, everything... the estimated Splunk cost was more than that.
They went with a combination of ELK and a small team of dedicated developers writing automation and analytics against Spark and some enterprise SQL database. Still expensive, still cheaper than Splunk.
That's what I was wondering about when it comes to this acquisition. Can Cisco make Splunk even more expensive? I have faith they can, I know for many folks, Splunk tops the leaderboards when it comes to spend.
Splunk bought SignalFx a while ago, and they are trying to lean in hard on the observability craze, piggybacking on OpenTelemetry. I wasn't heavily involved in a migration to Splunk Observability Cloud about a year ago, but it was a half-baked shit show, and ultimately they dumped it in favor of Datadog, IIUC (I had since changed jobs but kept in touch with ex-colleagues).
Worked at a medium-size enterprise and was trying to get detailed performance metrics on a legacy tech stack that didn't have a drop-in APM solution. This was in the age of Graphite, which was great for aggregating metrics cheaply but not for getting detail.
Splunk was used by a much larger product (easily 10x our scale) for monitoring events so there was no red tape to start using it.
After launching the detailed instrumentation (1 structured log event per HTTP request with a breakout of database/service activity) I was able to gain all of the insight needed and build a simple user/url lookup dashboard page to help other engineers see what was going on. We went from being mostly blind to almost full visibility in less than two weeks.
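For illustration, a minimal sketch of that style of instrumentation; the field names and the timer API here are invented for the example, not the original system's schema:

```python
import json
import logging
import time
import uuid

# One structured log event per HTTP request, with a breakout of time spent in
# downstream calls. Field names and the timer API are invented for this sketch.
logging.basicConfig(format="%(message)s", level=logging.INFO)
log = logging.getLogger("access")

class RequestTimer:
    def __init__(self):
        self.start = time.monotonic()
        self.breakout = {}  # e.g. {"db_ms": 12.3, "auth_svc_ms": 4.1}

    def record(self, name, seconds):
        key = name + "_ms"
        self.breakout[key] = self.breakout.get(key, 0.0) + seconds * 1000

    def emit(self, user, url, status):
        # Emit exactly one JSON event for the whole request; a tool like
        # Splunk can then aggregate on any of these fields ad hoc.
        event = {
            "request_id": str(uuid.uuid4()),
            "user": user,
            "url": url,
            "status": status,
            "total_ms": round((time.monotonic() - self.start) * 1000, 1),
            **self.breakout,
        }
        log.info(json.dumps(event))

# Usage: time each downstream call, then emit once when the request finishes.
t = RequestTimer()
t.record("db", 0.012)
t.record("auth_svc", 0.004)
t.emit(user="alice", url="/cart", status=200)
```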
The downside was, we increased our billable Splunk usage by 50% since we were capturing so much more data per log event than the other product just consuming standard IIS/Apache logs.
That type of flexibility was totally worth it. Due to some acquisition shenanigans we broke off from that group and wound up on the ELK stack, which didn't perform quite as well but was still usable with the same data. In today's day and age we could have just built an OpenTelemetry library.
Comcast would drop all the error logs for all the cable boxes in the country into splunk. I then queried this to figure out the error code count in a given period. It's really the only thing that can handle the volume.
I remember this talk about pricing strategy by one of their employees in a conference many years back (2017) - https://www.heavybit.com/library/video/value-based-pricing-s.... What I took away from that talk was that pricing can be unintuitive, for both the people setting it and buying it.
The only "unintuitive" part was developers saying the product needed to be $250/yr when the product person made it $2,500/yr, which ended up being the right choice.
Developers being absolutely terrible at pricing is not unintuitive (I'm a developer)
My experience back at Netflix too. Elasticsearch (we didn't use the L or K) plus a query engine on S3 with a catalog was more versatile and way cheaper than Splunk. Nowadays we have a slew of performant OLAP stores that can be used for log analysis as well, which further renders Splunk unnecessary.
My experience at a big fintech I won't name: we had our own highly engineered in-house metrics system staffed by a big team. Custom pipeline, integrations in multiple languages, high resolution, custom aggregation and rollups. It was nice.
We also had in-house logging, exception tracing, alerting, service discovery, metrics dashboards, etc. It was all actually pretty good. All engineered by xooglers.
Someone (not to name names) got bitten by the "anti-weirdware" bug and started shifting us off of all our custom-built solutions. Every team got hit with major distractions from their roadmaps for each of these changes. None of the headcount dedicated to staffing the internal systems was freed up - they had to run the new integrations.
The decision was made one day to migrate all of our observability stuff over to SignalFx. Observability wasn't our "core competency" and our systems were "weirdware".
We had to rewrite our instrumentation, all of our reporting dashboards, and all of our alerting DSLs changed. They were not replaced 1:1 for every system and metric, so we emerged in a much worse, much less visible situation across the board. Outages happened or went unreported.
Splunk acquired SignalFx and dramatically raised prices. We scrambled to do the migration process yet again, impacting roadmaps and leading to more outages.
Leadership was changed.
There's something to be said about NIH, but when you have systems that are already working, inexpensive, and easy to maintain, you shouldn't throw them out because you're worried analytics isn't your "core competency". Yes, it is your core competency, because you're selling uptime to your customers.
Agreed. Costs plummet when you use S3 as the storage medium for these massive log data sets. I think S3 is much faster to query than most people realize. Just have to be smart about how you organize things.
Sampling via just enabling it for some hosts/partitions is one solution (if you're producing 100M entries a day ... probably could just grab 1/100 of those for parsing).
Another solution is pre-processing (serial dupes are not forwarded).
Another solution is heavily reduced logging (ERR or higher only on prod hosts). A rough sketch of the first two ideas is below.
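Assuming JSON log lines on stdin with a "host" field, and treating "serial dupes" as consecutive events with an identical message (both assumptions mine):

```python
import hashlib
import json
import sys

# Sketch: forward only a stable 1/100 sample of hosts (so each sampled host's
# stream stays complete), and drop consecutive duplicate messages. Whatever
# survives would be forwarded on to the indexer.
SAMPLE_ONE_IN = 100
last_msg = None

for raw in sys.stdin:
    event = json.loads(raw)

    # 1. Sampling: hash the host name into a bucket instead of picking random
    #    lines, so the same hosts are always the sampled ones.
    bucket = int(hashlib.md5(event["host"].encode()).hexdigest(), 16) % SAMPLE_ONE_IN
    if bucket != 0:
        continue

    # 2. Dedupe: skip serial duplicates of the previous message.
    if event.get("message") == last_msg:
        continue
    last_msg = event.get("message")

    sys.stdout.write(raw)
```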
They mean that the processing Splunk does is expensive, so there simply needs to be less data going into the system (via the pre-processing steps I mentioned above) in order to keep costs sane.
With that said, Splunk should offer such a pre-processing product (maybe it does?), which would probably increase their moat even though it reduces revenue somewhat in the near term.
Splunk is honestly kind of the mainframe of SIEM. If you need it, you need it and can probably afford it and they know that. Can you do the job with something else for cheaper? Probably, but not as good and not as easy.
You can't really make an informed decision without knowing how much data they were moving. For it to be that expensive, you'd need to be moving a ludicrous amount of data, and you can always parse data down to the required fields before indexing, which saves on licensing costs.
in 20 years of doing SIEM and SIEMlike solutions, I've yet to find an engagement that said 'Oh, yes...our volumes are XX and YY'...mostly it's a /shrug and a less than educated guess.
There's even reluctance to turn things on and _watch_ them for 10 minutes, an activity that would immediately give you a much better idea of volume. Folks just don't like doing it.
Then you get the cases where setting up a redundant log source is just unwise. DNS logging was 2 orders of magnitude greater than everything else a SIEM was doing. And email was about the same size.
Similar to the problems of effectively modeling weather or finding the very smallest of things: there isn't enough compute power, or even energy, in the universe.
Splunk was so expensive we could not use it to monitor our servers used for weather modeling. Seriously. The log files generated were at times too voluminous and you frequently blew thru your bandwidth cap.
Great product, but once financial considerations enter the picture it has essentially no utility value for high-volume environments.
I've had the same experience, in that I love Splunk and their tooling is so easy and powerful. But I can't afford to put data in it, especially long-term data that requires reproducibility for many years.
I’m always happy when I can use some of our sources that are in splunk but get sad that I can’t do that with everything else.
Its cloud pricing is funny because the product is so much more powerful with massive amounts of data, yet they charge based on storage. Our on-prem instance wasn't just simpler to price; we could also throttle resources to allow for really high volumes of data with relatively slow query and analysis.
This was the sweet spot for the ELK stack really. You could get the main functionality that Splunk had and self manage it (or run out of a Cloud more recently) and scale to whatever you wanted to.
It mostly just works. Back when I was actively using it, it was, IIRC, the most stable part of the stack. It only went down when the daily quota was exceeded. When it ran out of disk, nothing broke; it showed a message in the UI. When space was added, it just started going again like nothing happened. This was something like 2018?
There are ~4bn Cisco shares outstanding. CSCO is down $2. So the market thinks that Cisco is overpaying by $8bn, or a 33 percent premium. Seems pretty bang on to me. Score one for the efficient market hypothesis.
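Back-of-the-envelope, combining the commenter's figures with the announced deal terms ($157/share in cash, roughly $28B):

```python
# The ~4B CSCO share count and the $2 drop are the commenter's figures; the
# $157/share cash deal price and Splunk's ~$119 pre-news close are from the
# announcement and the thread above.
csco_shares, csco_drop = 4.0e9, 2.0
print(f"market-implied overpayment: ${csco_shares * csco_drop / 1e9:.0f}B")

deal_price, prior_close = 157.0, 119.0
print(f"premium Cisco paid: {deal_price / prior_close - 1:.0%}")  # ~32%
```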
“The market thinks” is an expression that makes me cringe. The market does not think; it's the result of multiple actions, which many, many people pretend they can explain or even predict when really they cannot.
"The market thinks" gives the stock market an aura of reason and intelligence which it absolutely does not deserve, for many historical reasons. Trading as it exists today is unhinged capitalism; it's a cancer on our societies, as it widens the gap between rich and poor. It should be taxed, with something like an Automated Payment Transaction tax, to make high-frequency or even medium-frequency trading simply unprofitable.
I'm not against the concept of stocks in general, but the way it operates now is simply sick; I don't see how to phrase this differently.
Building a Splunk has become very democratised in this day and age.
Back in the day, logging, metrics, event collection etc. was a hard problem that they solved. Esp. when there weren't any simple distributed storage operators.
They have been a cockroach in orgs, surviving every downturn. As a dev you might hate it; CISOs and CIOs love it. Orgs often mandate it. The way they dominated the market is via creating CEF formats and integrations. It is more than a logging solution right now; it is an XDR, threat-analysis platform, etc.
This acquisition is going to be interesting. With AppDynamics + Splunk and others, it feels like there is a larger play here for Cisco.
I don't think the value Splunk has is transferable to ES or Grafana. It is its own thing.
When I first saw Splunk around 2010, it was mind-blowing. Back then, standard practice was to tile 8 ssh terminal windows and tail -f everything I needed. I'm sure it looked cool, but it was damn near impossible to find what I was looking for.
How do you think Splunk will fare as more companies move to public clouds? Seems unbeatable on-prem, which probably means that the company is a good match for Cisco.
Buying short dated far out of the money options is a guaranteed way to get caught. If this actually was insider trading, there are probably a bunch of SEC officials suffering from high-five induced palm injuries.
Meraki and OpenDNS both became better post acquisition, and in both cases I’d say it was because Cisco let them continue to maintain a lot of control, the leaders stayed around, and the majority of the engineering teams did, too. Cisco has a long list of successful acquisitions. The release says Gary will report to Chuck directly, which is a strong sign Chuck will make sure Splunk succeeds. (nb, I was CEO of OpenDNS)
Like you said, Meraki got better because the core team, including engineering and sales as well as the founders, stuck around for about two years. Things did go significantly downhill once the founders left but by that point the company was already so successful that the exodus of great people that followed their departure probably didn't even impact their bottom line that much. I will say that I personally found working for a Cisco subsidiary pretty terrible relative to working for a startup but, hey, the checks cleared.
AppD offers some SIEM. Splunk does much more than SIEM. Splunk Observability Cloud has nothing to do with Splunk Enterprise, it's a fully fledged AppD competitor.
> Oh, wow, they even acquired Intel smartphone modem business at 2019 and other Semiconductor businesses.
It was the easiest way to put some fire under Qualcomm's arse. RF modems, batteries, and displays are the only things Apple doesn't have under their direct control, but for batteries and displays they at least have a selection of competing suppliers. With modems, they're stuck with whatever crap Qualcomm delivers.
Apple Weather may be better, but DarkSky is gone, and Apple Weather still doesn't include all the features DarkSky used to have, such as hourly rain probability for any day.
From my perspective as an Apple Weather user, it went from basic and barebones to feature-packed almost overnight.
The cost also went down. DarkSky was $4. I wasn't ever willing to pay for a weather app.
I see hourly rain probability for today, and for future days there are hourly precipitation charts in inches. I can't imagine that hourly precipitation beyond the current day has any chance of being accurate.
I think alternative weather apps like DarkSky were incentivized to provide extra information that justifies their existence regardless of accuracy/precision.
E.g., if I make my own weather app and my selling point is that I give you a forecast for every 10 minutes or that my forecast goes out 5 years, I don't have to have any shred of accuracy because it's just a forecast. I was able to sell you my app because you're impressed by the fact that I give you more granular predictions.
> The cost also went down. DarkSky was $4. I wasn't ever willing to pay for a weather app.
I was the same way. Then I broke down and paid the $5. Best app purchase I ever made. One time fee and used it for years. I wish there were more apps like this.
Oddly enough this is the one reason why I don't use Apple Weather. I live in Texas - if you don't have covered parking you will inevitably get hail damage. The 1-2 days per week I go into the office I have to check Accuweather beforehand.
Precipitation probability is the most important thing in a weather app to me.
You can set up alerts with windy.com to be notified when a location has a forecast combination of wind and rain that may work well for predicting hail.
Apple Weather is better, but not as good as DarkSky. And DarkSky is gone.
It’s one of the few apps I bought and it’s frustrating that Apple bought them, picked a few features, killed the rest, and shut everything down.
I’m not even complaining about killing the api, that makes sense since Apple doesn’t care about this.
But Apple Weather’s maps don’t work as well, the precipitation views aren’t as detailed, the user supplied precipitation reports are gone. It just does different things.
But, yes, Apple Weather is now a better app because of the acquisition.
Webex is much better under Cisco than it was on its own. Cisco's expertise in hardware made for a great combination and has kept the product aligned with interoperable standards more than Zoom and some of the others.
The responses here are giving me some hope. I’ve just had many experiences as a customer where products I’ve used became worse (or were shut down) after their companies were acquired
There are exceptions, but Microsoft seems pretty good at this. GitHub, Minecraft... Skype got a lot better for me in terms of reliability after the acquisition too; of course, they've been competed away by other VoIP apps like FaceTime and WhatsApp these days.
LinkedIn is better than ever for finding a job, or advertising a job, even though lots of people here don't like it because of the LinkedIn poasting culture.
Minecraft is so much worse under Microsoft. As a parent, it’s funny how much Microsoft hate is in the house because of the Minecraft fuckery. They made new versions, migrated accounts, added micro-purchases, and made mods harder.
My 5-year-old had a Mojang account and could download and install Minecraft. Migrating to a Microsoft account was very hard and took multiple attempts and my direct help. And for some reason it now “sucks.”
Companies rarely buy other companies in order to make the buyee's product better; they buy them to boost the buyer's business or at least to remove competition.
They don't buy in order to make the buyee's product better, but continuing to improve the product may be necessary to realise the value of the purchase particularly if regular updates and improvements are a big reason that customers stay with the brand.
It may, or it may not. (Cue Apple buying a CNC laser cutter manufacturer just to get hold of the inventory).
When a company is deemed a good investment, financiers invest in it; actual companies often buy other companies for reasons other than developing them further.
Why YouTube? It was definitely worse pre-acquisition, but so was the rest of the internet. Do you think it could've gone under without Google's capital?
I wonder if this segment is ready for disruption. Splunk is very expensive, ElasticSearch is still lacking many of the features of Splunk and when hosted on AWS is very expensive. SumoLogic was acquired by private equity, which means that it won't get cheaper. DataDog is also very expensive.
A solution like Snowflake for logs/telemetry, where compute and storage are separated, might be the future.
We're[1] building the OSS equivalent of the observability side of Splunk/DD, on ClickHouse naturally, and we believe in the same end goal of lowering cost via separation of compute and storage.
We’re also giving this a shot. The annual Splunk bill at our last startup exploded from $10k to $1M when we reached 1TB of logs generated per day, which is actually an easy threshold to hit when you have decent traction and aren’t proactively reducing logs. So we built Scanner.dev to drop these costs by 10x.
Decoupling compute and storage is definitely the way to go. We’re using Lambda functions and ECS Fargate containers for compute that scales up and down rapidly, and S3 for storage. Getting ~1TB/sec log scan speeds, which feels fairly good. We keep sparse indices in S3 to narrow down regions of logs to scan. E.g., if you’re searching for an IP address that appears 10 times in a 25TB log set, the indices reduce the search space to around 300MB. Takes a few seconds to complete that query, whereas Athena and CloudWatch take like 20 minutes.
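For readers unfamiliar with the sparse-index trick, here is a deliberately minimal Python sketch of the idea (this is not Scanner's actual implementation; all names and parameters are hypothetical). Each chunk of logs gets a small summary of the token hashes it contains; a query consults the summaries first and scans only the chunks that could possibly match:

    # Minimal sketch of a sparse index for log search (illustrative only).
    import hashlib

    def token_hash(token: str) -> int:
        # Stable 32-bit hash so summaries can be persisted (e.g., to S3).
        return int.from_bytes(
            hashlib.blake2b(token.encode(), digest_size=4).digest(), "big")

    def build_index(chunks: list[bytes]) -> list[set[int]]:
        # One small summary per chunk: the set of token hashes it contains.
        return [
            {token_hash(t) for t in chunk.decode(errors="replace").split()}
            for chunk in chunks
        ]

    def candidate_chunks(index: list[set[int]], needle: str) -> list[int]:
        # Only these chunks need to be fetched and scanned for the needle.
        h = token_hash(needle)
        return [i for i, summary in enumerate(index) if h in summary]

A rare term like a single IP address prunes almost every chunk, which is how a 25TB search space can collapse to a few hundred megabytes; a production system would use something more compact than a Python set (e.g., a Bloom filter) and store the summaries alongside the chunks in object storage.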
We’re also using Rust to maximize memory efficiency and speed - there are lots of great SIMD optimized string search and regex libraries on crates.io.
We’re early, so there are a lot of SIEM features like detection rules that we are still building. But Splunk/DataDog users might find it useful if costs are a problem and they mostly use log search.
Everyone complains about how expensive Splunk is but the amount of compute and storage consumed by processing logs is ridiculous.
I feel like we should be talking about the sad state of logging, where we think it’s perfectly OK to dump millions of 10 KB stack traces and expect that to be cheap.
I once sat down and ran the numbers of what it would cost to host logs myself on the "fanciest" cloud hosts I could find: "storage optimised" Azure Lasv3 series VMs that have AMD EPYC CPUs and 8x NVMe SSD drives.
It worked out to be something like 20x to 30x cheaper than any of the cloud solutions such as Splunk or Azure Log Analytics.
Oracle - don't use an Oracle database unless you hate money, yourself, or your company.
SAP - getting off of their ERP systems is an absolute nightmare and they know/exploit that fact.
Salesforce - CRM systems, in general, can lead to lock-in due to the sheer amount of data and customization they host. In recent years Salesforce has started to leverage this fact to grow revenue without adding value.
Unity - they're getting aggressive in trying to extract more money from their existing customers and I'm not referring to the recent license changes. Nightmare company that you should avoid working with on enterprise software at all costs.
Blackboard - within the education sector their LMS is challenging to migrate off of and they will bend you over backwards because they know it.
ServiceNow - they've seemingly given up on making a better product and have invested all their efforts in extracting more money out of their current customers.
PagerDuty - whose sales rep told me straight up that they didn't need to negotiate with us because it would be too difficult to switch away from their product.
For specific product lines IBM, Cisco, and VMware also do this but I don't think it would be fair to characterize that as their overriding business strategy like the above.
Albert, I must assume this was targeted at my comment to ask for an enumeration of businesses enjoying the model espoused. "Do your own homework" is fine if the objective is clear; it wasn't (to me at least) and I wasn't sure where to start. Thank you to the OP for adding that list!
I bet they will just try to upsell all the AppD customers with Splunk ES/SIEM. If the ThousandEyes and AppD integration is any indicator, they will add a button in AppD that opens up Splunk...
I haven't used Splunk in a number of years due to its cost. Splunk seems like a good pairing for Cisco - it's complementary to its other offerings to less price sensitive orgs, like Meraki.
I've used several Splunk competitors (Sumo Logic, Datadog, etc.) that all have various strengths but suffer from a lesser version of Splunk's problem (once you're locked in and up for renewal, watch out). I also tried some ELK-based stuff, which just plain sucked.
The one thing that hasn't sucked is AWS CloudWatch Logs, after they added Insights (a log query engine). It has reasonable pricing and works really well if you're on AWS.
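Insights queries can also be run programmatically instead of through the console. A minimal boto3 sketch, assuming a hypothetical log group name and a trivial error-search query (Insights queries are asynchronous, so you poll for results):

    import time
    import boto3

    logs = boto3.client("logs")
    now = int(time.time())

    query = logs.start_query(
        logGroupName="/my-app/production",  # hypothetical log group
        startTime=now - 3600,               # last hour
        endTime=now,
        queryString="fields @timestamp, @message "
                    "| filter @message like /ERROR/ "
                    "| sort @timestamp desc | limit 20",
    )

    # Poll until the query finishes, then print each result row.
    while True:
        result = logs.get_query_results(queryId=query["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)

    for row in result["results"]:
        print({f["field"]: f["value"] for f in row})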
We’ve got some logs in CloudWatch, but I barely use it because the query interface is unfathomably slow (in terms of query throughput). Do you use the web interface to query, or some other way?
For some applications, it also makes sense to use the built in Logs API that exports logs to S3 (the export process is very fast) then use any of a variety of tools geared toward searching through data on S3.
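A minimal sketch of that export path with boto3, assuming a destination bucket that already grants CloudWatch Logs permission to write to it (the task name, log group, and bucket are placeholders):

    import time
    import boto3

    logs = boto3.client("logs")
    now_ms = int(time.time() * 1000)

    task = logs.create_export_task(
        taskName="example-export",              # hypothetical name
        logGroupName="/my-app/production",      # hypothetical log group
        fromTime=now_ms - 24 * 60 * 60 * 1000,  # last 24 hours
        to=now_ms,
        destination="my-log-archive-bucket",    # hypothetical bucket
        destinationPrefix="exports/my-app",
    )
    # Exports are asynchronous; poll with describe_export_tasks if needed.
    print(task["taskId"])

Once the objects land in S3 you can point Athena, grep, or anything else at them.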
Great news for companies like ClickHouse, Trino, Elasticsearch, StarRocks, Imply, etc. If Splunk can make it to $28B, some of those companies should make it too, and most likely more, by eating Splunk's market.
We have a large Splunk install, and a lot of the comments regarding cost are a bit dated. The reason Splunk's cost is generally considered quite crazy is that it was based on the number of messages or lines in logs. To keep large institutions such as mine from saying "no way," they've moved (at least for us) to pricing based on the amount of data that is actively queried: we sign up for, say, 500TB, and as long as we stay within that, it's all good. It's still a lot of money, don't get me wrong, but they've changed the setup from the early days.
Splunk is so expensive and slow. My workplace keeps trying to throttle queries and how far back logs are stored. I've been spending the last month or so adding an ELK stack for tracing to our apps.
Splunk's advantage is that it can handle volumes of logs which ELK, Graylog and Loki simply cannot. If you're not there yet... yeah, Splunk is hella expensive.
So this is a good move, as Palo Alto has moved into this market and is poised to destroy the legacy SIEM world (Splunk et al.) with its Cortex data lake.
To channel my inner John Chambers, this is a market adjacency. I.e., a way to expand into a market that complements something they already do. Their security product suite and data analytics tools would all naturally feed into Splunk. Cisco has, at various times, had products in the SIEM space[0], and it isn't unusual[1] for them to build or acquire a few tools in the same category before finding something that is a good product-market fit with some longevity.
1 - A few examples: before WebEx, Cisco had MeetingPlace, which was partially internally developed and partially built with external hardware and software products. Before Firepower Threat Defense (Snort acquisition), there was the internally built ASA product line, which developed from the acquired PIX line. In load balancers, they had ACE (internally developed), replacing CSS/CSM (based off of their ArrowPoint acquisition). For NAC, they had NAC framework (internally developed, never really took off), NAC appliance (acquired), and now ISE (internally developed). There are many, many other examples here.
They bought WebEx for the same reason as most of their other acquisitions: vertical integration and diversified interests. It doesn't even have to work well, it just has to be a feature they can advertise, and dumb executives will assume it works and buy it. By the time they've got their hooks into you, you realize it'll take years to remove it. Pretty good cash flow for years before the customer jumps ship.
What's fascinating is that working inside Cisco, the same tricks work on them. We'd adopt a vendor only to realize it doesn't do what we want, but now we're kinda stuck on them and it costs more to replace them. It's a bog-standard giant enterprise where the left hand doesn't know what the right hand is doing. But they're wizards with cash.
Yes, honestly WebEx may be the single worst piece of software I've ever used in my entire life. I remember having to use it for some school projects back in the day and it working slower than a snail's pace. You literally could not type anything into the computer because it was so slow it would just lose letters and take 10 seconds or so to register your keypresses. Years later I had to use it for remote work for a company, and it was exactly as terrible as it was all those years before. Entirely unusable. I jumped ship before covid and all the WFH stuff happened to a much, much better-organized company, but I always wonder how anyone managed to accomplish anything for those couple of years.
My experience was different. I did not know it existed before joining a team in Cisco to work on the signalling part. Afterwards when moving to Microsoft I saw how terrible Teams was in comparison. But to this day I would love to get back to Slack if truth be told :)
While at Microsoft, a project I was on was acquiring a license for a library and just to be sure of everything, instead of the standard "usage for this product" license, MS acquired a lifetime license to do whatever we wanted with the library.
Anyway tl;dr their lead engineer flew out and helped us get everything up and running. :-D
We sold our technology to IBM back in the day (EJB era) and the deal involved a "break glass" option where they could pay a pre-agreed fee at any time if they ever needed the ability to modify our source code.
HN is mostly a place where technologists gather, not corporate heads of IT or other business people. This is especially true of the subset of users who actively participate rather than only reading.
And it is not unusual in the least for an enterprise product to be wildly profitable but not admired by technologists. Indeed, it's the default; Oracle, SAP, Microsoft, etc.
What is interesting is to look for examples of things that break this mold, that are both profitable and mostly admired. Frankly, I can't think of any... All the ones I can think of were out-competed and either acquired and ruined or just run out of business. Maybe RedHat is the closest example... I'm not sure though.
What's interesting is the substance of the complaints about those products. Most of the comments are complaining that Splunk is expensive, but no comments I've seen are complaining that it doesn't work or doesn't do as advertised. Same for Oracle DB. It's ungodly expensive, and there are (many) other options out there, but you don't really see complaints that it's not able to perform (after an expensive consultant has had a go at your company's checkbook). The FedExes and PayPals of the world can afford to pay for Cisco/Splunk and Oracle licenses.
What's interesting is things that break this mold, like Microsoft Teams, because that's something that can be disrupted, and thus be successful, by having a better product.
F# is nice but seems like a fairly conventional functional language. My first reaction to some of the features of Koka (also MS) was I didn't know that was even possible.
Cloudflare's verify human challenge screen is so intrusive and frustrating that it will cost them their credibility IMHO, if it hasn't already. Some part of me feels that a properly designed cache should be able to handle any level of abusive traffic like a p2p cache would, and if it can't, then what are we all doing?
The problem is that a cache needs cooperation from the backend for invalidation; Cloudflare’s robot check, by contrast, can apply to every page before it talks to the backend at all.
But I don't think there's really a great place to get a zeitgeist of the rest of the population. I think they're mostly doing other stuff rather than talking about technology on internet forums. (They're smarter than us.)
Actually yeah, closer than most. I think it's a somewhat grudging admiration at this point, increasingly so as they do more and more also-ran services.
But yeah, this does seem right for the "core" services; ec2, s3, maybe lambda, etc.
AWS's business model is to just literally take a popular OSS system and provide it as a service.
It was like that from the beginning. That's why there's much less animosity towards AWS, because they just allow you to run your X without the overhead of infra investment.
That is something they do, which I strongly dislike, but it isn't their business model. Their business model is "pay us to run things on our infrastructure instead of building your own, with an option to be billed based on your usage".
The "take a popular OSS system and provide it as a service" thing is a complement to that business model, because they can say "now that you're using our infrastructure, you can also use all these services, and we'll manage it for you, and you'll only have a single vendor to pay". It provides additional value and lock-in to the business model, but isn't the essential part of it.
And no, that isn't where it began. Providing managed services for open source systems was not a part of their initial value proposition. When I started using EC2 (with EBS and S3), one of the tricky things was getting our own database infrastructure to work reliably on EC2.
It's true that RDS was released not long after, and did the "take a popular OSS system" thing, but they really didn't embrace that model until years later. Indeed, I think RDS still seems like second fiddle to their proprietary non-relational DB service.
Maybe in the beginning. Taking an OSS package, cloning its wire protocol, and then offering their closed source almost-compatible version without having to contribute anything back upstream earns them a lot of animosity.
It's pretty wild to read some of these comments. Splunk is one of the best products I've ever used, bar none. The price is another matter (it's bloody expensive, no doubt about it), but the tool is amazing. I think all the people talking about how much it sucks and can be easily replaced are so far off base they aren't even in the stadium.
You've clearly never run it at scale nor have you migrated between Enterprise (on-prem) and Splunk Cloud at scale. Managing .conf files and eliminating intermediate IDM logic was absolutely not "amazing."
Everything on HN should be taken with a big ol' bag of salt. To do otherwise will cause you to miss out on both employment and investment opportunities you won't find elsewhere.
Definitely true that HN comments should be taken with a grain of salt from a business / investment / employment perspective.
But it's more useful - though still not the full story at all of course - as a finger on the pulse of the people who actually implement software products, rather than their business models and their sales and marketing.
This is not intended to downplay the importance of any of those things! Those people are just not the majority of the audience here. (I honestly wish I knew where they hang out, but I'm not sure there is such a place - all the people I know in those roles just play their cards much closer to their chests than those of us who participate here.)
It's not really a pulse of implementers either. It's a particular kind of engineer. Having been early in a big tech and watching it grow and now being in another startup, I can tell you that the attitudes for SaaS in the industry are much more either positive or calculating than the broad negative attitudes and the constant calls for NIH on here. If anything they remind me of my cohort of college undergrads, excited to write lots of code and poo-poo existing solutions because of how "easy" they are. Our attitudes changed once our time was worth more.
As far as the business types, why do you think they'd be here? The community chants grift, scam, and enshittification at pretty much any change in the customer contract these days. Is that the kind of environment that someone on the business side will find welcoming?
Well, nothing can give a fully accurate pulse, because response bias is pretty much inescapable. There's always a huge part of the iceberg that is submerged. To me, HN rings as a truer pulse of "silicon valley / startupy software developers" than the alternatives on reddit or twitter or mastodon or elsewhere that I've read to a significant degree. Everyplace has its own unique culture with their own unique echo chambers and blind spots driven by the people who opt in to that particular place, and HN is no different.
But having said that, your comment (and the thread-starter) is a pretty good example of "getting a pulse"! A pulse isn't just "the average viewpoint", it also includes the distribution. And for every bit of conventional HN wisdom like "splunk sucks and is too expensive", there is pretty much always a comment like "splunk is pretty successful, actually". Your "I've been around a long time and attitudes toward SaaSes are actually pretty positive or at least calculating" is part of the "pulse" in this thread.
To wit: I honestly had no idea about Splunk. I played with it in the distant past and thought "cool!", but I've never used it under the auspices of an enterprise license, and I've certainly never tried to purchase one myself, so I just didn't know anything about this. And if you had asked me about their recent earnings, I would have similarly had no clue. I just had no idea what the "pulse" on Splunk was, either way. And now, because of the zeitgeisty comments making fun of how expensive it is, and also the comments like yours and the thread-starter's pushing back on that narrative, I have an updated prior on Splunk. It surely isn't the full story, and I wouldn't walk into a conversation and be all "I'm an expert on Splunk, folks!", but I have a much better sense than I did a few hours ago. That's what I mean by "pulse".
> As far as the business types, why do you think they'd be here?
I didn't say I think they'd be here... I'm the one who pointed out that they aren't! Honestly not sure how you read into my comment what you seem to have read into it. But I'm glad I gave you an opportunity to rant a bit!
I read everything I can consume (news, analysis, mailing lists, etc), but find smaller or private forums to be most valuable for participation. "Be conservative in what you send, be liberal in what you accept."
Splunk was an absolute game changer when a company I worked for bought it. I say bought because we started to pay for it before anyone actually used it for anything meaningful. The "adoption" (blaming the company that bought it not Splunk) was terrible and teams were left to find value or not at their discretion without onboarding/training.
The tool itself when I started using it was brilliant and quite deep on capabilities.
All that said, the cost structure for the product can and SHOULD scare away any SMBs. Hosted or cloud, you're probably paying way beyond the value it's bringing in. That's probably the single largest deterrent to the product.
I hated Splunk so much that I spent a couple of days a few months ago writing a single 1200-line Python script that does absolutely everything I need in terms of automatic log collection, ingestion, and analysis from a fleet of cloud instances. It pulls in all the log lines, enriches them with useful metadata like the IP address of the instance, the machine name, the log source, the datetime, etc., and stores it all in SQLite, which it then exposes through a very convenient web interface using Datasette.
I put it in a cronjob and it's infinitely better (at least for my purposes) than Splunk, which is just a total nightmare to use, and can be customized super easily and quickly. My coworkers all prefer it to Splunk as well. And oh yeah, it's totally free instead of costing my company thousands of dollars a year! If I owned CSCO stock I would sell it-- this deal shows incredibly bad judgment.
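For the curious, a heavily condensed sketch of that kind of cron-driven pipeline (this is not the commenter's actual script; the database path and source map are hypothetical). Datasette can then serve the resulting file directly with "datasette logs.db":

    # Collect log lines, enrich with metadata, and store in SQLite.
    import datetime
    import socket
    import sqlite3

    DB_PATH = "logs.db"
    LOG_SOURCES = {"syslog": "/var/log/syslog"}  # hypothetical source map

    def ingest() -> None:
        conn = sqlite3.connect(DB_PATH)
        conn.execute(
            """CREATE TABLE IF NOT EXISTS logs (
                   host TEXT, ip TEXT, source TEXT,
                   ingested_at TEXT, line TEXT)""")
        host = socket.gethostname()
        ip = socket.gethostbyname(host)  # best effort; may need real lookup
        now = datetime.datetime.utcnow().isoformat()
        for source, path in LOG_SOURCES.items():
            with open(path, errors="replace") as f:
                rows = [(host, ip, source, now, line.rstrip("\n")) for line in f]
            conn.executemany("INSERT INTO logs VALUES (?, ?, ?, ?, ?)", rows)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        ingest()  # run from cron; a real version would track file offsets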
For how many data sources? The whole reason everyone goes to Splunk is that it scales, and scales incredibly well.
Large enterprises can generate hundreds of terabytes to petabytes every day. Splunk has all sorts of issues, but to pretend as if you can replace them in any large shop with a 1200 line python script and SQLite is just being disingenuous. This acquisition falls right into Cisco's sweet spot, they aren't chasing shops that can dump all their security and infrastructure logging into a SQLite database and not have it tip over in an hour.
It's around 6 data sources on ~25 machines, but it could be easily scaled to way more than that with a bit of work. And I mean less work than it takes to do even trivially simple things using the horrible Splunk API. There are many thousands of small companies using Splunk and getting totally ripped off for a very mediocre product with a rapacious and annoyingly aggressive salesforce.
You'd be surprised how many companies with infra that small have CTOs get consultant buzzword pilled into buying every SaaS under the sun nonetheless...
How many servers does Stack Overflow run on? It’s not a good measure of data volume or criticality.
I think “expensive” here is basically relative to revenue/margin. Where margins are high, spending on Splunk (etc.) isn’t meaningful. Where margins are thin, it hurts.
Basically, the arguments here seem to reflect the markets and business model folks are working under. Some pay, some can’t and some won’t - all valid.
I haven't developed it yet, but my Splunk-killer solution actually scales so big we can use it to walk to the center of the universe. And it's only 1 line of Rust and a bash script that runs whenever the Unix clock has 420 in the number string.
I think we're talking about very different levels of scale. Enterprises are generally feeding tens to hundreds of thousands of datapoints into Splunk depending on their size between servers, networking gear, endpoint devices, etc.
Wait what this is such an important detail. Log aggregators like Splunk start being something to consider when you get to about 25 THOUSAND machines, not 25 machines. I hope that for you, humility will come with experience.
Splunk isn't perfect. Managing it is more work than it should be for example. But I've got hundreds of systems I'm pulling logs from and that's not counting infra and applications as well. And my deployment isn't even a large one by their standards. Your use case just isn't the scale where splunk makes sense.
Splunk does not scale to large data sources. It fucks out at a few TB and then you have to spend hours on the phone trying to work out which combination of licenses and sales reps you need to get going again.
By which time you can just suck the damn log file and grep it on the box.
But, and this is not meant as criticism or insult as I have no idea how Splunk works, it is just based on other comments; do you know what license your company has with them? It appears that if you are paying them millions, it scales fine, otherwise, it does not?
Well, usually you have to overpurchase up front, and they sell you a 3-year lock-in to make it an affordable capital cost. Then when you eke over it temporarily, the sales guy calls you up within 10 nanoseconds to bill you for more.
I was getting 2-4 calls a week.
It was so fucking annoying and expensive ($1.2M spend each cycle) we shitcanned the entire platform.
First thing they hear of this is when our ingress rate drops to zero and they phone us up to ask what is happening. Then we don't go to the numerous catch up and renewal meetings and calls. Then we stop answering the phone.
Had a similar experience with them, they are truly the worst. We wasted a bunch of time trying to figure out how the ingestion volume could be so high and then realized that 99% of it was from the ridiculous default settings of their universal collector agent which was dumping detailed system stats every few seconds-- all to drive up usage so they can harass you about spending more money on their awful product. I did the renewal call with them just to basically tell them how outrageous their company is.
Yeah, because that is what I meant. A lot of services are useable without paying through the nose, this one apparently not, but thanks for the excellent input.
I'm certainly not a Splunk expert, and I CERTAINLY have no insight into the nature of our financial arrangement with them, but yeah, it's expensive.
I think there's not much of a useful "flat rate" tier; you pay based on usage. People can accidentally spin up a ton of EC2 instances and get a huge surprise AWS bill, too. And yeah our logging needs are high and monotonically increasing but they're also relatively predictable at our scale.
It ALSO turns out though that Splunk is really really good at their job and matching their expertise would require tons of engineering effort and it's not like the disk space alone is THAT cheap if you want it to be searchable.
I've worked at companies with objectively large amounts of data. Splunk scaled to meet their workloads. At no enterprise doing this is someone able to just isolate a single log file and grep through it at scale.
Well, according to what people write in this thread, a distributed grep or some other way to organize a decent central logging system might be a necessary part of the core competency. Because if they buy Splunk instead, they might go bankrupt.
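In case "distributed grep" sounds hand-wavy, a toy version really is small; a hedged sketch, assuming SSH access to a hypothetical fleet and a known log path:

    # Fan a pattern out to many hosts over SSH and merge matches by host.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["app1.internal", "app2.internal"]  # hypothetical fleet
    LOG = "/var/log/app.log"                    # hypothetical path

    def grep_host(host: str, pattern: str) -> list[str]:
        proc = subprocess.run(
            ["ssh", host, "grep", "-h", pattern, LOG],
            capture_output=True, text=True)
        return [f"{host}: {line}" for line in proc.stdout.splitlines()]

    def distributed_grep(pattern: str) -> list[str]:
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            return [line
                    for host_lines in pool.map(
                        lambda h: grep_host(h, pattern), HOSTS)
                    for line in host_lines]

    print("\n".join(distributed_grep("ERROR")))

The hard parts that make it a product are everything around this: retention, schemas, indexing, access control, and not falling over at petabyte scale.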
You don’t have to be splunk to make money out of distributed grep but it turns out to not be that easy… as proven by the fact that there are quite a few competitors
Uhhhh, Splunk scales no matter the size, for pure ingest anyway. Now if you got duped into the SVC model I can see what you mean. But for pure GB/day ingest, if you know what you're doing, it can scale infinitely.
This mostly sounds like a badly managed Splunk. If a 1200 line Python script is all you need to replace a Splunk instance, you weren't doing anything all that interesting or well in the first place.
> useful metadata like the IP address of the instance, the machine name, the log source, the datetime,
This should be tagged on every single log line already, and not something that you should be doing post-ingestion
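For logs you control, tagging at emit time can be as simple as a structured formatter; a minimal stdlib-only sketch (field names are illustrative):

    import json
    import logging
    import socket

    class JsonFormatter(logging.Formatter):
        def format(self, record: logging.LogRecord) -> str:
            # Emit one JSON object per line, pre-tagged with metadata.
            return json.dumps({
                "ts": self.formatTime(record),
                "host": socket.gethostname(),
                "source": record.name,
                "level": record.levelname,
                "msg": record.getMessage(),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])
    logging.getLogger("my-app").info("request handled")

As the reply points out, though, this only covers logs you emit yourself, not third-party sources like systemd.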
The logs included things like the systemd logs and stuff that I don’t have control over. You need to be able to enrich with arbitrary metadata for it to be generally useful.
My point is more that a large portion of Splunk customers could do the same thing I did and be way better off. Obviously not their huge enterprise customers spending millions a year.
My complaint is that this acquisition is going to add another 1-4 paragraphs of examinable marketing copy to the Cisco CCNP ENCOR textbook. I'll have to somehow remember not to confuse Splunk with Cisco Firepower NGIPS, which uses Snort. This is what happens when an industry starts to name its products after the sound effects from Peppa Pig.
While it doesn't compete with Splunk, IMHO, it's much easier and much better than what 1,200 lines of Python could conjure up. Dashboarding and all. I love it and use it in a very large enterprise environment.
Well no, Dropbox is aimed at non-technical users. Sure, they have "enterprise" features for admins now, but that's not how it started, and in the end the product is vastly consumed by non-technical users.
I hear you, but the difference is that Dropbox is actually good and reasonably priced. Splunk is horrible to use and costs 1,000x what it should, and they are super aggressive about harassing you about usage caps and threatening you constantly with huge price hikes. Dropbox has barely raised price over the years (until pretty recently at least) and has been rock solid and amazing.
Great, finally someone who actually does that. So many examples here with people whining about their Dropbox thingy in 4 lines of Perl but never releasing anything for us to check out. Well done!
That “thousands of dollars per year” number seems quite a bit low for a Splunk license. Even for a small amount of data it’s more like thousands per month.
Well, today you are doing 100KB log processing; who knows, tomorrow you may end up doing 500KB log processing. It will be all hands on deck late on a Friday night to eliminate this existential threat.
I used SumoLogic at my last job, which feels basically the same as Splunk. (Maybe not as fast? No idea on price.) There were times when it was easier to sync 45 GB of logs from S3 down to my laptop and run grep over them than it was to figure out the right arcane syntax and wait for the results. :-)
This comment is incredibly naive. Cisco isn't making acquisition decisions based on your happiness. Splunk's revenue is increasing every year and their losses decrease. It is an incredibly popular tool that complements their products and services well.
I don't know about their router/switch OSes in particular, but a lot of their products already have Splunk integration and they seem to have a couple of products built on top of Splunk.
There's quite a few log ingestion programs that can do all that for you. Did you have some type of specialized log that one of the various logging tools couldn't handle for some reason? It sounds like you recreated the ELK stack lol.
I used Vector in the Beaker Studio prototype back when it was designed to deploy directly to Ubuntu virtual machines. That was a couple years ago at this point, and it worked wonderfully!
It's weird seeing no mention of Graylog anywhere here which is slightly different but I've found much easier to use in smaller setups. Unfortunately I have no idea what enterprise cost ends up looking like.
Why build in this age, when so many open-source solutions backed by the OpenTelemetry standard are available? Use fluentbit/vector/otel-collector to capture data and send it to some open-source solution.
Because I find all that stuff to be even more mental overhead to learn and work with, and super annoying to deploy and manage. It would literally take me longer to get one of those kinds of tools to work on my data the way I want it than it took me to make my own tool that does exactly what I want, exactly the way I want it, where it's incredibly trivial for me to add new kinds of logs or anything else.
When you have a hugely complex, designed-by-committee, enterprise-grade generic system/protocol like OpenTelemetry that does anything and everything at any scale, it's always going to carry a huge amount of excess complexity when you're trying to do one specific, simple thing well and quickly. It would be harder to figure out the config files for that stuff than it was to just make my own system.
> Someone at Cisco did the math on how much a license would cost and some snarky soul, kin to my own, said "Are we sure it wouldn't be cheaper to buy Splunk?"
Does anyone ever look at this type of problem - Shipping, ingesting, retaining, searching gigabytes of log files - and stop and think - what if there was another way?
https://realmoney.thestreet.com/investing/technology/cisco-r...