
Seeing as you posted this comment twice, here's my reply again:

No. I hope this burns to read.

Software never improves because incompetence is the norm, not because we lacked a magical data-collection unicorn.

Competent software companies ran user panels, had decent quality control, didn't steamroll their communities, didn't market loudly over user dissent and certainly didn't shut down their issue tracking to even their top tier partners.

That was Microsoft 10 years ago. That is Microsoft today. But you know, telemetry solves all these problems, doesn't it? No.

The real answer to your question: ask and listen. People will gladly tell you. Do not just take the data; otherwise you end up with a set of poorly selected metrics that do not represent user opinion, and a lot of pissed-off customers who don't want to tell you, or can't due to legislation and data protection.

Edit: to back up my point, Microsoft closed down Connect with over 30 issues still open from me, and our account manager left to go and work for a competitor because he was fed up with dealing with that kind of shit and couldn't get even basic issues from a Gold partner escalated to anyone. We had a ticket open for 7 years against ClickOnce, where IE9 broke it completely for about 15,000 users.

As for community steamrolling, here's that one again: https://github.com/dotnet/sdk/issues/6145

Edit 2: I have removed some irrelevant stuff. This story goes on forever. I have so many anecdotes from dealing with MSFT pre and post OSS glory that I concentrate all my effort on staying as far away as possible.


I think you are underestimating the difficulty of tracking the many users of Windows and the different bugs they might have.

Microsoft have a team of people who look at crash reports, and categorise the results (see for example https://devblogs.microsoft.com/oldnewthing/20050412-47/?p=35... , just a quick thing I found).

Having the ability to track crashes across millions of machines, to find patterns in which drivers are crashing which applications, seems like an impossible thing to replace.
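As a minimal sketch of what that categorisation might look like (the report format and module names here are entirely made up, and real Windows Error Reporting bucketing is far more involved): group incoming crash reports by faulting module and top stack frame, and a driver bug shows up as one bucket spanning many different applications.

```python
from collections import Counter

# Hypothetical crash reports: (application, faulting module, top stack frame)
reports = [
    ("photos.exe", "gfxdrv.sys", "DrvBlt"),
    ("photos.exe", "gfxdrv.sys", "DrvBlt"),
    ("editor.exe", "gfxdrv.sys", "DrvBlt"),
    ("editor.exe", "editor.exe", "SaveFile"),
]

# Bucket by (module, frame): crashes in many apps but one driver
# point at the driver, not the applications.
buckets = Counter((module, frame) for _, module, frame in reports)

for (module, frame), count in buckets.most_common():
    apps = {app for app, m, f in reports if (m, f) == (module, frame)}
    print(f"{module}!{frame}: {count} crashes across {len(apps)} app(s)")
```

With millions of machines reporting, the same ranking surfaces the highest-volume buckets first, which is roughly the triage the linked Old New Thing post describes.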


I don't have a problem with this. If asked, I will submit a crash dump, provided it shows me what is being sent. That's common courtesy. Opt-in information is absolutely fine.

Being unable to opt out, with collection enabled by default, is what is unacceptable.


Yes, but those crash reports used to have a send/don't send button.

The average user has no idea what those buttons do and will click whatever makes the popup go away, which will be either 'yes' or 'no' at random.

It's like a consent form for a medical procedure. At the end of the day, you're not a medical professional. Is the average person really informed when they do or don't provide their consent?

Nevertheless, consent is still paramount. Removing consent on the basis that most users are incapable of being informed is a poor excuse.


Also, as someone who's been doing tech support since 1995: people here either vastly overestimate the dumbness of others, or they just happen to have unusually dumb colleagues, friends and whatnot.

Most people aren't really stupid; rather, bad software makes them look stupid, and bad tech support shifts the blame to the users.


Why are you setting up a straw man? Nobody said that telemetry solves all problems. Every additional piece of information can be helpful. If you don't think that it can be, then there really is a fundamental disagreement that will just result in an endless argument.

> The real answer to your question: ask and listen. People will gladly tell you. Do not just take the data; otherwise you end up with a set of poorly selected metrics that do not represent user opinion, and a lot of pissed-off customers who don't want to tell you, or can't due to legislation and data protection.

And likewise simply listening to a vocal minority via "ask and listen" is not a silver bullet.

So you're both right: you need both to make informed decisions.


See the comment about user panels. Select a random portion of your userbase and ask them. Talk to your account managers. Communicate between them. This is software 101.

I've built and supported software with 80k end users and did that effectively single-handedly.

I'm sure a large corp can do the same if it sacrifices a bit of bottom line...


Finding representative portions of your userbase that will actually talk to you is pretty difficult. I've worked on a couple different products with millions of end users and we frequently discovered a subset of our userbase was having big problems and we simply weren't hearing from them.

> Select a random portion of your userbase and ask them.

This sounds great in theory - harder to do in practice. Often what ends up happening is that the only people who will share their time with you are the ones who want something specifically changed for them. Thus my point: it's effective, but it's not a silver bullet.

> I've built and supported software with 80k end users and did that effectively single-handedly.

And plenty of businesses have used Google Analytics, Mixpanel, etc. combined with the aforementioned technique.

TL;DR - The two strategies are not mutually exclusive.


Problem is, the ask-and-listen strategy is rarely used (telemetry is cheaper!), and as far as I can see from the consequences, telemetry data is hard to use correctly. In particular, it's prone to become a mirror of your design, creating a feedback loop: telemetry will show people using more of the things you've exposed more and less of the things you've exposed less, so if you take that at face value, you're just amplifying your own initial design bias.

> This sounds great in theory - harder to do in practice

And therein lies the root of "telemetry": the SV bubble's lack of interest, lack of effort, and fear of interfacing with the wetware.

Telemetry is easy. Talking to people is hard. Too bad.


> This sounds great in theory - harder to do in practice.

Sure, but lots of things are hard. That doesn't mean we should all be happy about software phoning home without the consent of the user.


What does listening to telemetry even look like?

The user spent an hour fiddling with settings. Is that because they love the new settings toolbox? Or is it because they were very frustrated with it and couldn't find what they were looking for?


This could also be done by reading their forums, and reaching out to people.

For example, I have an XBox One controller. It used to pair fine via Bluetooth. It still pairs fine with my Mac. Other stuff still pairs fine with my PC. But it just won't work after a Windows update.

What is telemetry going to tell them that they don't already know from the forum? Maybe the scope, but it's fuzzy. Some users might give up after one or two tries. Some users might be using the "Add hardware" box several times in a row for different reasons. Telemetry isn't a magic insights thing. It's difficult to get right, and to draw the right conclusions.

One thing's for sure, telemetry's cheaper than QA-ing updates properly.


Telemetry is generally a key component of quality assurance. While a paid tester can file bugs about "it crashes" or "it takes 10 seconds to load", a regression taking load times from 100ms to 200ms will be very hard (if not impossible) for a tester to notice and file a bug about. It will only show up in your telemetry.

You could argue that telemetry should then only exist in your beta channel or testing builds, and some developers do that. But it's silly to argue that everything can be caught by your QA team; that is simply not true for online services. In the past, projects I've worked on have had long-standing bugs that took weeks of ongoing effort between our paid QA staff and customers to finally identify reproduction steps, at which point we were able to examine telemetry for those reproductions and fix the problem.
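A sketch of the kind of check that catches this (numbers invented; the 20% threshold and the helper name `p95` are my own choices, not any particular product's): compare a latency percentile from telemetry samples before and after a release.

```python
import statistics

def p95(samples_ms):
    """95th-percentile latency from a list of telemetry samples (ms)."""
    # statistics.quantiles with n=20 returns 19 cut points; the last is p95.
    return statistics.quantiles(samples_ms, n=20)[-1]

# Hypothetical load-time samples (ms) before and after a release.
before = [100, 105, 98, 110, 102, 99, 104, 101, 103, 100]
after  = [195, 210, 188, 205, 199, 202, 197, 208, 200, 196]

# Flag the release if p95 regresses by more than 20%.
regressed = p95(after) > 1.2 * p95(before)
print(f"p95 before={p95(before):.0f}ms after={p95(after):.0f}ms regressed={regressed}")
```

No human tester reliably notices a 100ms difference, but an automated comparison over thousands of samples does.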


I think your last point probably nailed it.

"Pairing is failing on device with A1B2C3 Bluetooth controllers on driver versions 8.2 and 8.3, but not 8.4"

"This is happening for 100% of users with the B2C3D4 controller and is likely a driver bug, but has happened only twice on the C3D5 device, both for the same user - likely a hardware failure"


Maybe you should have used paid engineer support. MSDN comes with a few support tickets.

We were a gold partner with more MSDN subs than I've got fingers and toes.

Just because it comes with support tickets does not mean that they will be solved.

And what happens if they aren't? Are you going to switch to a different Windows or Office?




