Having built a load-test tool as well, I can say that making it realistic enough, and keeping it that way, is possibly the hardest challenge. Maintenance cost is high, especially in a features-focused environment.
Which tool? Curious.
To your other points,
> That sounds like throwing money at the problem and it probably worked (for a while anyway).
> Maintenance cost is high, especially in a features-focused environment.
Isn't it really just a choice of which way to throw money at the problem? Hardware costs vs. person-hours to maintain a thin-client version?
We built something similar at Mattermost, which (funnily enough) is a comparable application.
> Isn't it really just a choice of which way to throw money at the problem? Hardware costs vs. person-hours to maintain a thin-client version?
That's fair, although the second option has (in my opinion) a better return on investment, given the knowledge and experience gained.
In the example where it's supposed to be "viewing a message, marking the message as read, and finally calling reactions.add"... it doesn't really do those things in a real chain. It just inserts a 5-second delay after "view a message", then runs "mark message as read", then waits 60 seconds, then calls reactions.add. I'm not sure that mimics real end-user behavior terribly well.
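To make the objection concrete, here's a minimal sketch (my own invention, not their actual tool or API) of what that kind of fixed-delay chaining amounts to: each "step" is just an action plus a hardcoded pause, with no causal link between them.

```python
import time

def run_scenario(steps, sleep=time.sleep):
    """Execute (name, action, delay_s) steps in order, pausing after each.

    Hypothetical sketch of fixed-delay chaining: nothing here waits for,
    or reacts to, the previous step's outcome -- it's purely timed.
    The `sleep` parameter is injectable so the pauses can be faked in tests.
    """
    executed = []
    for name, action, delay_s in steps:
        action()  # fire the step's action (a no-op placeholder below)
        executed.append(name)
        sleep(delay_s)  # fixed pause, regardless of what actually happened
    return executed

# The chain from the comment: view -> 5s -> mark read -> 60s -> reactions.add
scenario = [
    ("view_message", lambda: None, 5),
    ("mark_read", lambda: None, 60),
    ("reactions.add", lambda: None, 0),
]
```

A real user, by contrast, might re-read, scroll away, or never react at all, so the timing and even the step sequence would vary per session.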
It seems like they could have used JMeter rather than building a home-grown WebSocket test client. Perhaps there's some requirement where existing tools don't work well.
It's kind of interesting to see them choose a rather "declarative" (i.e., JSON-centric) approach instead of adopting a small language like Lua for scenario-based scripting.
Maybe the declarative approach is better suited to auto-generation from the user stats data, as they described? After all, there are often far fewer people who like writing stress tests than people writing the features that should be stress-tested.
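That auto-generation argument can be sketched roughly like this (the names, weights, and JSON shape are all my own assumptions, not anything from their write-up): weighted sampling over observed action frequencies emits plain data, with no need to generate Lua code.

```python
import json
import random

def generate_scenario(action_stats, length=3, seed=0):
    """Sample a declarative JSON scenario from observed action frequencies.

    Hypothetical sketch: action_stats maps an action name to a
    (relative_weight, typical_delay_seconds) pair derived from user stats.
    The output is pure data, so it's trivial to produce programmatically.
    """
    rng = random.Random(seed)  # seeded for reproducible scenarios
    names = list(action_stats)
    weights = [action_stats[n][0] for n in names]
    steps = []
    for _ in range(length):
        name = rng.choices(names, weights=weights)[0]  # weighted pick
        steps.append({"action": name, "delay_s": action_stats[name][1]})
    return json.dumps({"steps": steps})

# Illustrative frequencies: viewing is common, reacting is rare.
stats = {"view_message": (10, 5), "mark_read": (8, 1), "reactions.add": (1, 0)}
```

Generating equivalent Lua scripts from the same stats is possible too, of course, but emitting data is simpler and safer than emitting code.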