

Ask HN: Review my startup idea - doobiedoobiedoo

I'm almost finished building a real user monitoring (RUM) service for live web performance testing.

This is useful for a few things:

1) Regression testing: is your site slower (or faster!) than it was before you rolled out a new version or feature?

2) Seeing which page(s) are performing poorly, and receiving alerts when they miss SLA targets.

3) Usage predictions.

4) Working out whether a performance problem is your site or your DNS (etc.).

There's no software to install on your server.

I've seen this kind of thing in other products, but it's usually tied to an expensive pro account.

Before going public and writing up the UVP for the landing page, I'd like some feedback on the idea: would you pay for this? What would you like or dislike in such a service?

Thanks a ton, HN.
======
vitovito
Hi, user experience and interaction designer here. I especially like talking
with early-stage startups because I ask hard questions that save a lot of time
and effort in the long run. I also offer this as a service via UX Hours:
<http://uxhours.com/>

1. "Review my startup idea" is your title, but your first line is "I'm almost
finished building." What if everyone says your idea is terrible and points out
hundreds of competitors and software and reasons why yet-another-one-of-these
is a bad idea? How will you feel? Will all of that time you spent be wasted?
Will you be able to be competitive?

2. You might not get honest feedback because you're so far along. Normal
people (especially friends and family) don't like to be discouraging, whereas
designers do it for a living. You might also get honest feedback, but ignore
it, because it conflicts with what you've already built, and people naturally
try to preserve their own egos and justify their own efforts.

I'm not a performance engineer, but when I worked somewhere that had one
(these numbers refer to your points):

1. There's no way we'd outsource regression testing. We had three
load-testing environments. First, the developer's local machine could be
configured to do test timings of whatever they were working on before they
started, and then after they finished, as a basic gut check. Second, we had a
partial internal mirror of production (ten servers, maybe?) plus a dedicated
person and load-testing software. This was used after QA, before final
sign-off and deployment. Third, we could slow down production and actually
run tests on the production machines by taking some of them out of rotation.
This was done when we needed to be really sure about something. We would
never expose untested software to real users.

2. Our "pages" supported partial failures: we had a cloud CDN providing
anonymous access, our servers for logged-in users, and pages would still work
with reduced functionality even if certain back-end services were down. "SLA
targets" hits a whole raft of dependent services that we wouldn't give you
access to.

3. Automated usage predictions are useless for new features. That's what
customer research is for.

4. We had an entire operations team to tell us that.

5. Cost was not an object because this wasn't "the web site." It was "The
Business."

6. When we had trouble, it wasn't about the cost of the software, or even the
usability of the software. It was the lack of a person who could not only do
the monitoring but also diagnose the problem and help us fix it. Maybe they
were sick. Maybe the on-call person didn't have the right expertise. Maybe
the ops guys called the wrong person.

Turning actual user behavior into load-test data might be very useful if
you're a one-person team, but it can't replace customer research, a testing
team, and a load-testing team, so positioning is important.

Friends who wanted this sort of software were concerned with:

1. Recording behaviors and automatically aggregating common repeated actions
into sets of flows.

2. Ways to automatically turn that data into tasks that can be run in
load-testing software. The easier this is, the better. If I have to also
learn how to use load-testing software, you suddenly lose a lot of value
versus writing something myself.

3. Proxying for Google Analytics. I already send 100% of my usage data to
Google. They already hook everything that happens. Why do I have to install
something else in my pages that will do the same thing? That slows down load
times and might introduce conflicts when you get a click and they don't, or
vice versa. Why can't you just piggyback on those events? Or send the events
on to them? Or get the events from them?

In short, they didn't want a tool, they wanted "install this one line of JS
and next week we can run statistically valid load tests with one button click
for $N a month."
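The aggregation asked for in items 1 and 2 above could be sketched roughly like this: count recorded session paths and emit the most common ones as candidate load-test tasks. This is an illustrative sketch, not anyone's actual product; `commonFlows` and the sample sessions are made up for the example.

```javascript
// Hypothetical sketch: aggregate recorded user sessions (arrays of
// page/action names) into the most frequent flows, which could then be
// fed to load-testing software as tasks.
function commonFlows(sessions, topN) {
  var counts = {};
  sessions.forEach(function (s) {
    var key = s.join(' > ');
    counts[key] = (counts[key] || 0) + 1;
  });
  return Object.keys(counts)
    .sort(function (a, b) { return counts[b] - counts[a]; })
    .slice(0, topN)
    .map(function (k) { return { flow: k.split(' > '), hits: counts[k] }; });
}

var sessions = [
  ['home', 'search', 'product', 'cart'],
  ['home', 'search', 'product', 'cart'],
  ['home', 'login'],
];
console.log(commonFlows(sessions, 1));
// → [ { flow: [ 'home', 'search', 'product', 'cart' ], hits: 2 } ]
```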

(And going back to my first paragraph, this is probably what you'd get out of
an hour conversation.)

~~~
doobiedoobiedoo
Wow, that's feedback!

1&2) I'll definitely be doing that with the next one.

-- 1) I wouldn't expect complete user QA to be exposed, just seeing the real
speed numbers from browsers. 2) What kind of services? 3) Sorry, I meant
traffic patterns... busy weekends etc. 4) OK. 5) OK. 6) Agreed, positioning
is everything. -- 1, 2, 3) Again, this is mostly a speed-performance thing.
As for Google Analytics, my own experience is that, for speed, its sampling
of data is quite low and the usefulness of the reporting equally so. This
would use insertable, non-blocking JS code.
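For what it's worth, a non-blocking insert along those lines is usually the standard async-script pattern; here is a hedged sketch (the URL is a placeholder, and the document handle is passed in as a parameter purely so the logic can be exercised outside a browser):

```javascript
// Hypothetical sketch of a non-blocking loader for a monitoring snippet.
// In a page you'd call loadRum(document, '//example.com/rum.js').
function loadRum(doc, src) {
  var s = doc.createElement('script');
  s.async = true;  // don't block HTML parsing while the snippet downloads
  s.src = src;
  // Insert before the first existing script tag, the usual snippet idiom.
  var first = doc.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
  return s;
}
```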

Thanks for the awesome feedback!

