
> We have certainly heard from some customers that agree that 'everything' is slow, but we've also heard from other customers saying they have no problems.

What do your metrics show? I instrument my web sites so I know how long every operation – server responses, front-end JS changes, etc. – takes and can guide my development accordingly. You have a much larger budget and could be answering this question with hard data.
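
For example, most of my timing data comes from a tiny wrapper like this (a rough sketch – "/metrics" stands in for whatever collector you already run):

    // Time any async operation and beacon the result without blocking the UI.
    // "/metrics" is a placeholder for your own collection endpoint.
    function timed<T>(name: string, op: () => Promise<T>): Promise<T> {
      const start = performance.now();
      return op().finally(() => {
        const duration = performance.now() - start;
        navigator.sendBeacon("/metrics", JSON.stringify({ name, duration }));
      });
    }

    // Usage: wraps the fetch so its latency gets recorded.
    timed("load-issue", () => fetch("/api/issue/123").then(r => r.json()));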

I’ll second the “everything” responses. Request Tracker on 1990s hardware was considerably faster than Jira is today - and better for serious use, too.



Hi acdha,

We have metrics, but as with most such things, we always want more insight than the data we currently collect can provide (so we're always trying to expand it where appropriate).

This data is what led to the statement above (added in an edit while trying to reply to @rusticpenn) that we can see "some instances are slower than others" and "some users are slower than others". I can't share the actual numbers, of course.

However, privacy considerations do prevent us from collecting too much data, so working out why individual users have different experiences (even when the other known factors are similar or identical) is difficult.

I'd also be happy to take any suggestions you have about what to look at back to my engineering team, if you're willing to share other ideas. I know we're already tracking several of the ones you mention, but more options are always better.


I mean, it really is everything. If it were my project I'd make sure I had telemetry on all UI actions, set a threshold (say 200ms), triage everything whose high-percentile latency is consistently over that threshold to look for easy fixes, and then set a policy that each release only improves on those numbers. I can't think of any user-visible change to Jira or Confluence in the last 5 years that I wouldn't trade in a heartbeat for good performance.
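
The triage itself is nothing fancy – a percentile check over whatever durations you already collect. A rough sketch, assuming 'samples' maps action names to duration samples in milliseconds:

    // Flag every UI action whose 95th-percentile duration exceeds the budget.
    const BUDGET_MS = 200;

    function percentile(values: number[], p: number): number {
      const sorted = [...values].sort((a, b) => a - b);
      const idx = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
      return sorted[idx];
    }

    function overBudget(samples: Map<string, number[]>): string[] {
      return [...samples.entries()]
        .filter(([, durations]) => percentile(durations, 0.95) > BUDGET_MS)
        .map(([action]) => action);
    }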


Hi acdha,

Thank you for the advice - what about the page load side? Our biggest problem is probably the variance issue (mentioned in other subthreads): in many cases we can't easily tell what the difference is between a slow load and a fast one.

Even when we compare the things our metrics do capture, like CPU, memory, and network speed, those metrics are not very granular (we can't see, for example, that someone with 16 threads was actually at 95% memory usage during that particular page load), and at a broad level they correlate poorly with page load speed.
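
For concreteness, the per-load context one can realistically pull from the browser is roughly this coarse (a sketch, not our actual pipeline – deviceMemory and the connection object are non-standard and Chromium-only, hence the casts and null fallbacks):

    // Best-effort device context attached to a page-load sample.
    function pageLoadSample() {
      const nav = performance.getEntriesByType("navigation")[0] as
        PerformanceNavigationTiming | undefined;
      const conn = (navigator as any).connection; // non-standard
      return {
        loadMs: nav ? nav.loadEventEnd - nav.startTime : null,
        cores: navigator.hardwareConcurrency ?? null,
        memoryGB: (navigator as any).deviceMemory ?? null, // non-standard
        effectiveType: conn?.effectiveType ?? null, // e.g. "4g"
        rttMs: conn?.rtt ?? null,
      };
    }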


I'm sorry, but as a JIRA user since 2008 I can tell you your software has always been slow. I used to like that I could run it on-prem and configure issue fields and so on, but now you have so many layers of crap and "pretty" that it's not surprising you can't tell what is fast and what isn't.

It is not your customers' job to instrument your software. Your API gateway can provide precise and accurate figures on how long API calls take, and there is nothing stopping you from adding web page metrics that provide client-side measurements as well.
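
If the gateway stamps a standard Server-Timing header on each response (e.g. "Server-Timing: db;dur=53, app;dur=47"), the client can read the backend numbers right next to the network timings – a sketch:

    // Log backend timings exposed via the Server-Timing header for
    // every resource the page fetches.
    const observer = new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as PerformanceResourceTiming[]) {
        for (const st of entry.serverTiming ?? []) {
          console.log(entry.name, st.name, st.duration, "ms");
        }
      }
    });
    observer.observe({ type: "resource", buffered: true });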

Some examples are available on the publicly visible JIRA boards, like the one for Hibernate. Just click through to "all issues" and then open any issue in a private browser window with the cache empty.

Every one of the fields takes seconds to load. That is not internet round-trip time; that is your backend. Even when the issue is ~80% loaded (according to your own page load bar), there are still JS scripts loading and reformatting the page, causing the browser to reflow.

These are not cached, because loading another issue doesn't resolve the problem.

So there are fundamental front-end problems that have nothing to do with the servers or backend; they are entirely a problem of the JS and the in-browser activity.
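
Anyone can confirm those late reflows from a devtools console with the Layout Instability API (Chromium-only) – a rough sketch:

    // Report each time rendered content moves after the initial paint.
    const po = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log("layout shift at", entry.startTime.toFixed(0), "ms");
      }
    });
    po.observe({ type: "layout-shift", buffered: true });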

Fix them.



