Up until yesterday I didn't know Zendesk was written in Rails. They had a talk at RailsConf 2022[0] where they went over how they handle a billion requests a day with mostly a Rails monolith.
I wonder why they're not brought up more when talking about large Rails apps. Is there an interesting (read: bad) history with them and Ruby?
Author of the talk here. Thank you, you made my day by sharing my presentation. :)
One thing I would like to add after reading some of the comments here. The "1 billion requests/day" figure is actually an understatement, chosen so the title has a nicer ring. Last time I checked we were around 2B, and that's according to the most conservative approximation. Those are requests that hit the application, excluding the CDN caching layer.
For Zendesk, what are the business consequences of using a monolith? Is this a mono-repo or multiple repos? Are tests simpler? Does the system require lots of special-purpose tooling?
There's also the fact that when you're fairly big, there's a brand to be maintained, and that rarely involves presenting a dissected view of your stack. In fact, some companies actively avoid it.
I recently discovered, for example, that Django-REST is quite often used for big sites, like Robinhood and Eventbrite.
I'm assuming you're referring to Django Rest Framework? It's one of the nicest API frameworks I've had an excuse to use professionally. It's easy to see why.
Agreed, while the answers are usually in there (somewhere, which is better than most documentation), it can sometimes be quite difficult to find "that page you saw in here once".
I experienced this first hand when I first started doing timed coding interviews and didn't prepare my DRF environment. Rookie mistake for sure, but it really emphasized to me that things can be difficult to find in there unless you know exactly what you're looking for.
It has so many built-in footguns, however. For instance, `SerializerMethodField`, in conjunction with a list endpoint, can generate a huge number of queries if one doesn't prefetch appropriately, and this issue is very difficult to catch with a linter. Granular test coverage is difficult because the testable unit is the ViewSet or the Serializer. I think DRF is great for rapid prototyping and early stage development.
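To make that concrete, here is a minimal sketch of the footgun and the usual mitigation, assuming hypothetical `Ticket`/`Comment` models with a `comments` related set (none of this is from a real codebase):

```python
from rest_framework import serializers, viewsets

from myapp.models import Ticket  # hypothetical app/model with a `comments` relation


class TicketSerializer(serializers.ModelSerializer):
    comment_bodies = serializers.SerializerMethodField()

    class Meta:
        model = Ticket
        fields = ["id", "subject", "comment_bodies"]

    def get_comment_bodies(self, obj):
        # Called once per ticket in a list response; without prefetching,
        # every call issues its own query against the comments table (N+1).
        return [c.body for c in obj.comments.all()]


class TicketViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = TicketSerializer
    # The fix lives in the queryset, far away from the serializer that caused
    # the problem, which is part of why a linter rarely catches it:
    queryset = Ticket.objects.prefetch_related("comments")
```

The granular-testing complaint shows up here too: the behaviour of `get_comment_bodies` can really only be exercised through the serializer.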
In the long term, with a lot of complexity and a large organization, you want a more mature architecture: components that wrap tables, composed together into components that wrap business logic, which are in turn composed into endpoints. Something like that is much easier to test and more scalable for disparate teams to step into and understand.
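A rough, hypothetical sketch of that layering (all names invented for illustration; not tied to any particular framework):

```python
from dataclasses import dataclass


@dataclass
class Order:
    id: int
    total_cents: int
    refunded: bool = False


class OrderRepository:
    """Component that wraps the orders table; the only layer touching the DB."""

    def __init__(self, db):
        self.db = db

    def get(self, order_id: int) -> Order:
        row = self.db.fetch_one(
            "SELECT id, total_cents, refunded FROM orders WHERE id = %s", [order_id]
        )
        return Order(**row)

    def save(self, order: Order) -> None:
        self.db.execute(
            "UPDATE orders SET refunded = %s WHERE id = %s", [order.refunded, order.id]
        )


class RefundService:
    """Business-logic component composed from repositories; easy to unit-test
    by passing in a fake repository."""

    def __init__(self, orders: OrderRepository):
        self.orders = orders

    def refund(self, order_id: int) -> Order:
        order = self.orders.get(order_id)
        if order.refunded:
            raise ValueError("order already refunded")
        order.refunded = True
        self.orders.save(order)
        return order


# The endpoint layer (e.g. a thin DRF view) would only translate HTTP to
# RefundService calls and back, keeping views free of business logic.
```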
I haven't found one, unfortunately -- just setting team-wide architecture policies, and then using linters extensively to make sure those policies are adhered to.
That's my experience as well. There isn't a nicer one than DRF / Starlette, but for a specific use case, you can get away with a custom one no problem.
It's interesting though. The container orchestration at work has kind of moved past the initial scary phase, and kinda past the first honeymoon phase. In the beginning, it was like "Oh we can use all kinds of technology in there, how amazing!" - by now we're much rather becoming very, very efficient at smacking problems with the three boring old sticks dev has. Something like deploying and monitoring low-volume Spring Boot applications backed by Postgres has become extremely streamlined by now.
There are competitors, but their general idea is "we've rethought support from the ground up (and Zendesk is not it)." Intercom, for example, does support through chat. The basic idea of the competitors is that you're "not just a ticket" (another Zendesk zinger) - that any customer message should be part of a holistic strategy for communication, and also strategizing, upselling, etc.
I've been happy with Request Tracker for over two decades, but the scope of RT is also a lot smaller than ZenDesk. Just email-first ticketing, little more.
I agree. I used RT years ago when I was in IT and quite liked it. I've used plenty of the competitors over the years and they seem to be mostly worse versions of it.
However, RT is really just a request tracker and doesn't do some of the fancier pseudo-PM stuff Zendesk does. That's a plus in my book, since when I'm tracking tickets I don't want a JIRA clone but worse.
Also it’s worth noting I haven’t used it in 10 years so it might have changed substantially since then.
I've used https://www.enchant.com/ for two small companies over the last decade, and have been happy with them. Cheaper than Zendesk, and just much simpler to use.
I use ZD extensively and have for years. I do like it.
I found that after I went through their free training programs, I could use it powerfully, the way it's designed, and it works really well when you know how to use it really well.
Granted, we haven't "met," so your statement stays true, but I really like ZD.
Well, they say imitation is the best form of flattery, and Freshworks, another successful company, was pretty much a copy of Zendesk when it started. I haven't used either in a while, so I don't know if/how much they've diverged now.
They're not hiding it either. There are a few Ruby and RoR projects they publish at https://github.com/zendesk/ including Samson. I feel like people don't talk that much about big RoR apps in general...
They've got a bunch of completely independent regions. So they've really scaled their Rails monolith to handle 100s of millions or less a day and get to a billion by having 10+ regions. Which is obviously still good but not quite the same.
Author of the presentation here. While you are right that sharding is an incredible tool that makes our lives much easier, not everything in our system is/can be sharded.
Also, we do handle more than a billion requests/day, that was just for giving the title a nice ring.
That level of scale always boils down to partitioning work. The fact that these partitions are distributed across regions is for risk mitigation (e.g. what happens when AWS's eu-central-1 catches fire). Architecturally, nothing really changes if you stuffed them all in the same region.
IMHO: Companies that make a profit are in PHP and Rails (rarely in Python). Companies that do not make a profit, or have no intention to, are in Elixir, Rust, TypeScript, Go and Node. I don't think this has anything to do with the language used - it's the attitude of the founders: "let's use easy boring tech" - that type of thinking extends into the business. The founders are focused on solving a business problem - not some hipster JS framework. I don't want to code PHP, but after year 20 I am confident there is no escape.
Elixir, Rust, TypeScript, and Node are all relatively newer. PHP, Rails, and Python have been around longer and the companies built on top of them have had more time to ride the compound growth rate curve.
Paul Graham wrote this[0] back in the day about tech stacks. I think this is still kind of true, but there's many more people across many stacks today that can work around some of the shortcomings. I guess it would still be important-ish at a small startup, but you still have to hire for it. There's much more JS/python/java/etc these days, so you have to fight that tide while you grow.
Profitability also may correlate differently; smaller places usually focus on growth instead, right?
SendGrid was/is profitable (I've since moved on). It was a Perl and Python shop that turned into a Go shop; a fantastic decision that greatly improved developer productivity. One team was Rails and it turned out to be a trash fire before going to Go.
I'm now at an elixir place. I'm not convinced it is the right tool for what we do but it was chosen to leverage the BEAM arch. Headcount growth is the only thing in the way of being profitable at this juncture.
... is that supposed to be good? It's ~11,600 requests per second... for reference, Seastar (a C++ framework) reaches 7M requests per second (https://seastar.io/http-performance/)
I can't find a single benchmark that puts Puma at more than 1k req/s. I found a thread which mentions 70 req/s, which I guess some overclocked Z80 or 6502 could reach, lol.
> they went over how they handle a billion requests a day with mostly a Rails monolith
1 day has 24*60*60 = 86,400 seconds. 1bn/86,400 is more than 10,000 reqs/second, so each request has to be served in less than 100 us. According to [1], random access on an SSD is about 150us. This suggests to me it's likely that most of these are being served cached from a CDN. Are we supposed to be surprised that this can be done by a rails monolith? We don't know how many of those requests are actually hitting the rails app.
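The arithmetic spelled out (averages only, same numbers as above):

```python
seconds_per_day = 24 * 60 * 60                # 86,400
requests_per_day = 1_000_000_000
avg_rps = requests_per_day / seconds_per_day
print(round(avg_rps))                         # ~11,574 requests/second on average
print(round(1_000_000 / avg_rps))             # ~86 microseconds between requests, on average
```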
Does each web request to an app server have to hit the disk on the same server? And if so, for the entire request duration? Also, what about horizontal scaling, which also means that the DB is on a different server (likely with secondary/replicated DBs)?
If it's all one codebase that isn't split for deployment, it's still very much a monolith. The distinction between monolith and microservices isn't how many deployments you have, it's how many deployment units. If the app is stateless, replication is strictly an operations concern, so from a developer perspective a single stateless deployment unit is a monolith.
An org might deploy a complete copy of the monolith for a single tenant, and it would still be a monolithic architecture.
You can! Well, sort of. (this is new to me, I was curious as well, so thank you)
Native Command Queuing (NCQ) [1] is a SATA extension that lets the drive optimize the order commands are performed in. For HDDs, this means it can do the commands in an order that minimizes overall seeking time. For SSDs, it can concurrently execute commands that are operating on different physical chips.
I can't find a good reference, but it seems like this is also true for RAID volumes, which are almost certainly being used on the server -- the RAID controller can perform parallel reads on each of the independent disks. How well this works is highly dependent on the RAID controller itself.
Below, you're doubling down on the "monolith == 1 server" point. You should do a bit of research before you continue, the word doesn't mean what you think it means.
> You should do a bit of research before you continue, the word doesn't mean what you think it means.
I'd suggest that the word means whatever the majority of people assume it means - that's also how we get the meaning of "agile" to be so inconsistent, depending on which companies/teams/people you talk to. Essentially, people who've only worked with projects that run as single instances might have a pretty different opinion on what a "monolith" is.
I've definitely met a lot of people who'd claim that a single application package across multiple servers is no longer a monolith. The reasoning would probably go along the lines of this: "If a monolith can live on multiple servers, what do you call an application that can only ever live on a single server, with a single instance being launched at the same time?"
So essentially, what would be the names best suited to describe:
- a single codebase that can only run as a single instance
- a single codebase that can be deployed with multiple concurrent instances
Personally, I think it might be worthwhile to also answer that, so we have a better idea of what to name things. Otherwise we end up with messes like people taking DDD too far and having a separate microservice per business entity, because they read too much into the "micro" part of microservices.
Oh, also, in regards to the definition of "monolith" that is centered around only how the code itself is deployed, personally I think that a modular monolith ("modulith"? heh) is a great architecture that doesn't get explored nearly enough! A single codebase that is easy to reason about, a single executable that is probably scalable but also simple to deploy however you need, with different parts of the functionality (think front end, different types of API, reporting functionality, file upload functionality etc.) being possible to enable/disable in each of the instances based on feature flags. Want everything on a single instance? Flip all of the flags to "on". Multiple instances? Configure whatever modules you want, wherever you need them.
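A hedged sketch of what that flag-driven setup could look like (the module names, mount hooks, and the `ENABLED_MODULES` variable are all made up for illustration):

```python
import os

# One codebase, one artifact; each instance enables only the modules its
# ENABLED_MODULES flag lists. A single box can simply enable all of them.
MODULE_REGISTRY = {
    "web_ui": lambda app: app.mount_web_ui(),      # hypothetical mount hooks
    "api": lambda app: app.mount_api(),
    "reports": lambda app: app.mount_reporting(),
    "uploads": lambda app: app.mount_uploads(),
}


def configure_modules(app):
    enabled = os.environ.get("ENABLED_MODULES", ",".join(MODULE_REGISTRY))
    for name in (n.strip() for n in enabled.split(",")):
        MODULE_REGISTRY[name](app)
```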
> "If a monolith can live on multiple servers, what do you call an application that can only ever live on a single server, with a single instance being launched at the same time?"
That exists? Are there examples of this, especially ones where there is a good reason for this? I cannot even begin to list all the awful issues with this in my head.
> That exists? Are there examples of this, especially ones where there is a good reason for this? I cannot even begin to list all the awful issues with this in my head.
Most certainly. I'd suggest that many systems out there that ever only needed to run on a single server are structured like this. Even though you could technically take plenty of these systems and launch two parallel instances, you'd get problems because they haven't adopted the "shared nothing" approach, or even just basic statelessness principles.
We tend to forget ourselves with all of our modern and scalable container developments, but there are untold amounts of PHP code out there that stores files and other uploaded data on the very same server, in any number of folders. Of course, you can technically set up a clustered file system, or at least a network based one, unless you are running in a shared hosting environment, in which case you are out of luck.
Oh, and speaking of shared hosting, in theory you should be able to get rid of environments that use cPanel and instead switch to containers, right? Well, no, because workflows are built around it and dozens of sites might be run on the same account with any given shared hosting provider.
You'll be lucky to even find such an environment that has an up-to-date version of PHP installed and running, and resource contention issues will present themselves sooner or later: "Oh hey, this one slow SQL query in this site brings down this other dozen sites. Could you have a look at it?"
I actually helped an acquaintance with that exact problem; I regret agreeing to help because it wasn't a good experience.
Looking at the enterprise space, I've also seen systems out there that store state (e.g. information about business rules) in the actual application memory liberally, as well as things like user session information, because someone didn't know how or couldn't be bothered to set up Redis.
So there an app restart would mean that everyone is logged out. Not only that, but if you have a system which allows users to make some sorts of requests, with business rules about what order they can be accepted in, that means that you can store the output of these states in the DB, but during the processing you have an in-memory queue, which means that you couldn't feasibly have multiple instances running in parallel, because then you'd have a split brain problem. It's like those people had never heard of RabbitMQ while designing it.
Apart from that, there are also issues with scheduled processes. If you've never heard of feature flags or don't see a good reason to use them, you'll run into the situation where you'll have your main application instance executing scheduled tasks in parallel to serving user requests. Worse yet if it's coupled tightly and the application will do "callbacks" for reacting to certain changes, instead of passing the message through the DB or something. Oh and in regards to performance, you better hope that the reporting process you wrote doesn't cause the service's GC to thrash to the point where everything slows down.
Oh, and in addition to that, there are hybrid rendering technologies like PrimeFaces/JSF out there, which store the user's UI state on the server (in memory), whilst sending diffs back and forth, as well as making the client execute JavaScript in the browser for additional interactivity. Think along the lines of GWT, but even more complicated and way worse. A while back some people talked about how the productivity can actually be pretty nice, but what I saw was 100% the opposite. More importantly, there's also no viable way to (easily) distribute this UI state across multiple instances, at least with the way the eldritch monolith is written. I've also seen Vaadin applications with the same problem.
Another factor that can cause situations like this to eventually develop is having a tightly coupled codebase, where you cannot reasonably extract a piece of code into a separate deployment, because it has 20+ dependencies on other services in the app and is called in about 40+ places (not even kidding). While you could try, before you know it you would be sending 20 MB of JSON for simple data fetching calls between applications (again, not kidding - once actually saw close to 100 MB of network traffic between back end services and DB calls for a page to load).
Those are just some of the issues. My suggestion would be to never build systems like that no matter how "simple" they seem and instead just stop being lazy and use Redis, RabbitMQ, or even just PostgreSQL/MySQL/MariaDB tables for ad-hoc queues, anything is better than writing such messes. And if you are ever asked to help someone with anything that starts looking like the above, tell them that your schedule is sadly full or at least very carefully consider your options.
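For the "plain PostgreSQL table as an ad-hoc queue" suggestion, here's a minimal sketch using `SELECT ... FOR UPDATE SKIP LOCKED`, which lets several app instances consume jobs concurrently without stepping on each other (table and column names are invented):

```python
import psycopg2


def claim_next_job(conn):
    """Claim and mark one unprocessed job; safe to run from many instances at once."""
    with conn:                      # commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, payload
                  FROM jobs
                 WHERE processed_at IS NULL
                 ORDER BY id
                 LIMIT 1
                 FOR UPDATE SKIP LOCKED
                """
            )
            row = cur.fetchone()
            if row is None:
                return None         # queue is empty
            job_id, payload = row
            cur.execute(
                "UPDATE jobs SET processed_at = now() WHERE id = %s", (job_id,)
            )
            return payload
```

Rows locked by one worker are simply skipped by the others, so there's no split-brain and no extra queue infrastructure to run.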
> because they haven't adopted the "shared nothing" approach,
In practice, many web applications are stateful. The load balancer would see to it that clients keep talking to the same frontend. For larger applications it is important for cache locality.
> untold amounts of PHP code out there that stores files and other uploaded data
This is quite normal when you have some type of blob, and normally what networked file systems are used for.
> In practice, many web applications are stateful. The load balancer would see to it that clients keep talking to the same frontend. For larger applications it is important for cache locality.
In regards to front end resources, it shouldn't matter which instance you're talking to, if all web servers are serving copies of the same bundle, given that the resource hashes would match, outside of A/B testing scenarios. It's also nice to explore stateless APIs where possible, and not have to worry about sticky sessions.
In many dynamically scalable setups if you tried talking to API instance #27, you might discover that it is no longer present because the group of instances has been scaled down due to decreased load. Alternatively, you could discover that the instance that you were talking to has crashed and now has been replaced by another one.
Hence, having something like Redis for caching data, or even a cluster of such services becomes pretty important! Of course, there are ways to do this differently, such as taking advantage of CDN capabilities, but for the most part sticky sessions are a dated approach in quite a few cases. It's easier for everyone not to care about ensuring such persistence.
An excellent exception to this: geographically distributed systems, where even if you don't care about that exact instance, you still want stuff in this data center to be reached, instead of stuff halfway across the world.
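A small sketch of keeping that shared state out of the individual instance, using redis-py (the host name and key layout are placeholders):

```python
import json

import redis

r = redis.Redis(host="redis", port=6379)


def save_session(session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
    # Any instance behind the load balancer can read this back later,
    # so no sticky sessions are required and instances stay disposable.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))


def load_session(session_id: str):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```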
> This is quite normal when you have some type of blob, and normally what networked file systems are used for.
Nowadays, I'd argue that S3 (or compatibles like MinIO or Zenko) is one of the very few ways to do this properly, or perhaps GridFS in MongoDB - an abstraction on top of the file system that handles storing and accessing data as necessary. Underneath that abstraction, using a distributed or networked file system, or block/object storage (depending on the setup), is a good idea.
However, in general cases, you should never use the file system directly for the storage of your blobs, regardless of whether those are stored locally or in a networked file system, as that is just asking for trouble. Things like maximum files per folder, inode limits, maximum folder nesting/file name length limits, maximum file sizes, writing bad code that allows browsing other directories than the intended ones, the risk of files that might be executed in the case of bad code/configuration, case sensitivity based on the file system, encoding issues, special characters in filenames or directories, need to escape certain characters as well, reserved names in certain file systems and frankly too many issues to list here.
So yes, it is "normal" but that doesn't make it okay, though one also has to understand that often in a shared hosting environment there aren't good options on offer, versus just spinning up a MinIO container and using the S3 library in your app.
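As a sketch of the object-store route mentioned above, boto3 pointed at a MinIO endpoint (the endpoint URL, bucket, and credentials are placeholders):

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio:9000",           # placeholder MinIO endpoint
    aws_access_key_id="minio-access-key",        # placeholder credentials
    aws_secret_access_key="minio-secret-key",
)


def store_upload(bucket: str, key: str, data: bytes) -> None:
    # The object store handles naming, limits, and durability instead of
    # the application writing into local folders on one server.
    s3.put_object(Bucket=bucket, Key=key, Body=data)


def fetch_upload(bucket: str, key: str) -> bytes:
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```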
>I'd suggest that the word means whatever the majority of people assume it means
"Monolith" is a term of art in software engineering. You're in a discussion about software engineering. Saying "it means whatever people want it to mean!" is like talking to a bunch of chemistry people and saying most people think car springs when they hear "suspension".
Look, you didn't know what "monolith" means in a software context. Everyone has stuff they don't know, even in their field. Learn something and move on.
> "Monolith" is a term of art in software engineering. You're in a discussion about software engineering.
The problem is that most of these terms are loosely defined and evolve over time. And I do mean in practice, as opposed to some thesaurus definition that gets lost in conversation.
REST? A set of architectural constraints, but nowadays most just selectively pick aspects to implement and forget about things like HATEOAS or hard problems like resource expansion and something like HAL isn't popular.
Microservices? Some think that those need to be "small" which isn't true, and yet somehow we see time and time again people ending with a pattern of service-per-developer instead of service-per-team, because people just aren't good at DDD and aren't experienced with what works and doesn't yet, if they haven't been building systems like that for a while.
Agile? I'm sure that you're aware of what the management and consultation industry did to the term; now we have something like SAFe, which goes exactly against what Agile is and shouldn't be allowed to be named after it.
Cloud native apps? Who even knows, everyone keeps talking about them, there are attempts to codify knowledge like "12 Factor Apps" but I've seen plenty of projects where apps are treated just like they were 10 years ago, to the point of dependencies being installed into containers at runtime, logs being written to bind mounted files and configuration coming from bind mounted config files, as well as local filesystem being used for intermediate files, thus making them not scalable.
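For the config complaint specifically, the 12-factor suggestion boils down to reading the environment instead of bind-mounted files; a trivial sketch (variable names are illustrative):

```python
import os

# Fail fast if a required setting is missing; defaults only for optional ones.
DATABASE_URL = os.environ["DATABASE_URL"]
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")      # log to stdout, not a mounted file
UPLOAD_BUCKET = os.environ.get("UPLOAD_BUCKET", "")  # external object storage, not local disk
```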
> Look, you didn't know what "monolith" means in a software context. Everyone has stuff they don't know, even in their field. Learn something and move on.
Another person suggested the distinction between "monolith" applications (referring to the codebase) and "singleton" applications (which describes the concept of only a single instance being workable at a time). That distinction is workable and useful, since now we have two precise concepts to use.
Your advice isn't useful, because while you're right about the "proper" definition, you brush aside the fact that people misuse what "monolith" is supposed to mean and don't engage with the argument that we need a proper set of terms to describe all sorts of applications to avoid this.
If we don't address this, then monolithic apps become some sort of a boogeyman due to claims that they aren't scalable when that isn't true for all monolithic apps (since that should only imply how the code is deployed), just the subset of arguably badly written ones, as explained in my other comment.
So essentially:
- a single codebase that can only run as a single instance --> could be called "singleton apps"
- a single codebase that can be deployed with multiple concurrent instances --> the proper definition of "monolith"
Without this, you cannot say something along the lines of: "Please don't make this monolith into a singleton app by using local instance memory for global business state." and concisely explain why certain things are a bad idea.
Not sure how much is sincere here, but a monolith typically means "not microservice architecture" and has very little-to-nothing to do with the ops side.
You're mistaken then. Monolith is just project organization. Think microservices and then realize the monolith is on the other end of the spectrum. It's in the name.
You keep posting this. A monolith is a software architecture. It says nothing about there being one instance of that monolithic process or a billion any more than "microservice architecture" becomes untrue unless you have k instances of any particular service.
It's like saying "this thing says it is built as a suspension bridge, but there are ten of them, so I wouldn't call it a bridge."
If your definition of a monolith refers to a singular server, then I've never worked on a monolith in my life. Even the crufty old ASP apps I used to work on weren't monoliths.
This would also mean that deploying a microservice architecture onto a single server, using docker for example, would be considered a monolith. Or even better, any application that leverages a database on a separate server is no longer a monolith! That's way easier than rearchitecting your whole app.
I've never come across a definition for monolith in the context of software that had anything to do with the actual infrastructure employed
Nevertheless, that's a pretty common term when the compute portion of the application is a single tier, but that doesn't mean that the tier can't scale over many stateless compute nodes.
You got some good, but short answers. Since you asked me, I'll give you my definition: it means that your app servers are homogeneous. They are all running the same server code in similar configuration. How many instances you are running has nothing to do with it being a monolith or not. You need more than one server just to get some fault tolerance. In practice, your scalability bottleneck will typically be your database, as all servers share that, and your servers themselves ideally are stateless (they typically are for Rails apps, as Zendesk is using Rails).
Microservices, on the other hand, in theory have a number of very small services that at most share some libraries, but can be deployed independently and could each even be written in different languages if you want. This is mostly an advantage if you know that different portions of your app are going to scale very differently from each other, rely on disparate data models, and only share a small interface. In practice, things frequently don't end up as clear cut as you expected at the outset, and you end up having to heavily coordinate many of your deploys across services.
This is interesting, seeing a PE firm acquire a public firm.
It seems the PE firm knows more than the public markets can account for, and sees opportunities to trim fat hard, given that they're paying a 34% premium.
If I were a Zendesk employee, I would be worried.
There is a pretty well worn playbook in the PE world: Cut engineering and support, offshore what you can, and take a knife to suppliers. This can be hard for growth oriented management teams to pull off.
Companies like IBM and CA do this too with the added financial benefit of being able to centralize most staff functions (finance, HR, etc) and leverage existing low cost locations.
Of course this hurts existing customers. In reality it’s moving some of the surplus value generated by the company from the customers and employees back to the shareholders.
It's too complex an issue to just say "It's all good" or "It's all bad."
Exactly. Tech-oriented shareholders are longing for growth, so PE firms often acquire tech companies with a large customer base but no stellar growth story. Besides cutting costs, they also charge customers more.
A classic PE game is to declare a widespread, but no longer innovative, software product "end of life" just to charge more for "extended support" contracts. Customers often have said product deeply entrenched in their daily workflows and would incur high customization costs if they switched to a competitor. In the end, they are better off paying the support premium.
From a funding perspective, PE firms also have an advantage compared to stock exchanges: because their assets are no longer traded daily, the volatility of these assets decreases. The asset value is maybe determined once a year for balance sheet purposes. This means that pension funds and other regulated investors can invest more in the PE sector than in stocks, because technically they are buying "low volatility assets".
This is the process by which a high growth unprofitable company becomes a lower growth profitable company. It can be ugly but it's a transition that every company ostensibly needs to make. We're going to see a lot of this as the market turns.
> We're going to see a lot of this as the market turns.
Not really. We saw a lot of LBO takeovers before the market turned, and Zendesk is one of those (it was planned 6 months ago). LBO activity is expected to slow down now (it already has).
What you will likely see a rise in is a lot of companies cutting costs and going into their bunkers. You don't necessarily need to be taken over by a PE firm in order to do that.
Zendesk competitor ServiceNow ($88B Mkt cap) has stated their long-term strategy is upselling to massive customers who will use their more mature features (Operations Management, Security Operations, GRC). If ServiceNow is willing to leave crumbs on the table, I could see Zendesk marketing itself as the go-to firm for smaller IT departments.
I agree that every company will likely slow in growth eventually, but I think the PE playbook is different. It's very clearly sacrificing growth, agility, ability to execute, and future potential, for the highest possible short term gains. I don't think that's something every company needs to go through.
Permira is not that kind of PE firm. They normally go for growth opportunities. I guess they will try to expand the range of offerings from Zendesk and try to cross-sell these solutions to existing customers. Not necessarily bad for employees.
I'll counter that I think they see a lot of long-term value, with the market having hemorrhaged a lot, and that the PE firm is likely sitting on a lot of cash with very few good options in the asset marketplace. If I had money to buy a company that was putting up good numbers right now, with growth opportunities, that would be a fantastic place to park cash.
> Hellman & Friedman and Permira have arranged for debt and equity financing commitments
So the way this is worded, this is a leveraged buyout? If so, that's the worst-case scenario for everyone involved, employees and customers alike. I wonder why the board would approve such an odd move for a tech company.
It is an LBO. The board approves because they have to get the highest price, and LBOs usually pay cash.
It’s a natural move when companies can’t invest in profitable growth any more. At that point it’s time to start returning money to shareholders who can invest it elsewhere. They have to change how they operate and management teams (and their playbooks) that optimize for growth are different than optimizing for returning cash to shareholders today.
One long term way to do this is the IBM model of continuous underinvestment, layoffs and share buybacks. A company can accomplish this more quickly with an LBO and new management team.
Maybe their new owners won’t break their iOS SDK at a rate of 3-4x a year.
And of course they have their GitHub issues closed, because they’re a support company.
Emailing them gets an automated reply that you can only email them via your account holder’s email. Then when you do that you’re a dev playing telephone with another dev via a support agent.
> Maybe their new owners won’t break their iOS SDK at a rate of 3-4x a year.
The iOS SDK certainly won't break when they shutter all development. Until they discover that they can save on cost by shuttering the API itself of course.
It's an LBO by a private-equity firm. If you're a Zendesk customer, prepare your migration; it's all downhill from there.
As an end user I don't like Zendesk. I find it unfriendly to use. Like when you open a request, you receive a response by email and you can reply to it. But if you want to open your request on the website (to check status or other details), you won't find a single link to your ticket, not even to the home page. I have to manually open my browser and navigate to the support page (if I am lucky enough to find it) of whatever product I am using. How frustrating...
Salesforce stock is down with the rest of the market and not as appealing as an acquisition tool as a result (they also paid a lot for slack). They also already play in Zendesk's space (eg service cloud) and tend to buy what they don't already have.
Ehh....in the time I was there, they bought Tableau _and_ Datorama in the same month, and they already had reporting/analytics features (although not as in-depth).
They also bought a dozen or so machine-learning companies that all did similar (or even identical) things over the span of the 5 years I worked there.
Famously, they bought Heroku, and then completely ignored it (including internally -- there were efforts at one point to have Heroku be the "way forward" for internal development on PaaS providers, and they scrapped that entirely while keeping Heroku more-or-less running).
I'd say Salesforce tends to buy whatever shiny thing gets Benioff's attention for the moment, usually because he talked to somebody at a conference. The guy tends to ram through quite a few questionable business decisions, and these days Bret Taylor's job as "co-CEO" is mostly "cleanup crew" for Benioff's various whims.
Without going into much detail, the way some NFT stuff was handled internally is a big example, and then there was the company all-hands where Benioff berated some presenter on stage for having not-cool-enough shoes, and Bret had to spend a while trying to spin it into a "haha, great joke buddy" sort of situation.
Stuff like Benioff's "just the tip" tweet has always been who he is when his handlers don't have a tight grip on him. Bret Taylor's position seems like he's been put there because 1.) He's got _some_ engineering experience more recently than 1999, and 2.) Bret is less, uh, belligerent than Benioff.
I mean they want to buy it to acquire data about public companies so they can make market moves ahead of quarterly filings.
If a company is using zendesk and there is an uptick in tickets or traffic - they can calculate growth based on that. There were PE firms buying email companies back in the day to look at # of emails from netflix for new subscriptions or cancellations - and they used that to determine if growth was up or down.
[0]: https://www.youtube.com/watch?v=mJw3al4Ms2o