This kind of process is extremely valuable and should be done by devs more often. Start from the very beginning and follow whatever your application tells you to do. Note down when it doesn't tell you where to go or what to do. You'd be surprised by just how many things you do automatically while working because you know the little tricks to get by, and by how often the wording no longer matches what the app actually requires.
Side note - this kind of thing is why good QA people are awesome. They'll show you what users will actually do.
I'll add in something here. Element (the app) said I was logging into matrix.org.
matrix.org has a "try matrix". The first thing is it tells me to choose a client (this feels like a loop), then says to choose a server but also maybe I don't need to, then has a create account button.
The create account button takes me to a docs page, which tells me to go to the element site, and then create an account with matrix.
So that's matrix -> use element -> element says to use matrix -> matrix says to use someone else, ok you can use us -> to use us go to element -> element says you're making an account with matrix.
edit - oh you can and should also do this with your dev process.
Create an empty folder, check out the repo and follow the readme. Do you actually get a running system for local dev? Can you successfully run the tests? If you are able to, do this on a clean machine (maybe load up a docker image and see if you can follow it on a truly clean system). Does it turn out it assumes you already have tool X installed because your developers already have it from another project? Do you actually need postgres running with a specific user with specific login details?
If you're like me you don't like writing docs, so this may actually just push you to add scripts that do the setup required.
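A minimal sketch of what such a setup script's first step could look like: checking up front for the tools the readme silently assumes. The tool list here is purely illustrative; swap in whatever your project actually depends on.

```shell
#!/bin/sh
# check-setup.sh: report any tools the readme assumes but this machine lacks.
# The tool list below is an example; replace it with your project's real deps.
missing=""
for tool in git make tar; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        missing="$missing $tool"
    fi
done

if [ -n "$missing" ]; then
    echo "missing:$missing"
    echo "install the tools above before following the readme"
else
    echo "all required tools found"
fi
```

Running this inside a fresh container (e.g. a stock `ubuntu` image) is a quick way to find the "oh, everyone already had that installed" assumptions.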
Not to sound sanctimonious about this: every time I've done this with my own code, I've found issues with the documentation.
QA is a massively underappreciated position. A QA person that knows when to automate, when to manually test, and how to report and file issues relevant to the project can save a significant proportion of hours on a project overall. I wish many more companies included budget for QA, it saves developers a lot of time.
A bit of a side-note: this sort of analysis is a great answer to "I want to contribute to open source, how?". Some fairly simple wins for significantly better user experience, and no coding required!
Good QA people are hard to find and it's a weird balance to strike.
I've seen QA get run over by aggressive developers.
I've seen QA people who were so good, detailed and provided clear reproduction steps that developers couldn't wait to see them work.
I've seen QA people who were completely unwilling to work on efficiency improvements, automation or even testing things in parallel so they became a bottleneck to the entire organization.
I've seen QA people who just fall into a routine, do exactly what is asked of them and never try to improve.
Like with anything, it comes down to the person in the job. If you get QA people who are really committed to the work, take pride in what they do and are always trying to improve it's the dream.
In general, I'd say that's because it's a position that's shit on.
You're apt to be paid far less than actual dev positions. If you're a QA manager you're always pushed on by upper management to outsource and lower costs. There is none of the prestige of being a "QA 10xer" that you'd see heaped upon a dev in the same position. And I see little training/courses pushed out for QA like is typically seen for dev.
It seems like QA in most companies is a necessary evil that management would take out back and shoot the first moment they could.
Developers are artists and QA is critique. Worse, you must entertain their complaining, and pay for the privilege! The vultures. (This implicit bias would explain the treatment disparity. But it's a baseless hypothesis. It just seems like the simplest behavioral explanation.)
It’s semantics, but thinking of QA as refinements rather than criticisms goes a long way. Everyone is reasonable until their mental defenses are activated I believe. I think it’s worth making even great efforts to communicate improvements or requirements without using phrases which make someone feel bad. If the goal is to actually persuade and make a positive impact rather than just point fingers to feel superior, finding the way to say things without assigning shame is almost always the way to get things done.
Good QA will be sensitive to this when creating feedback, and good devs will understand that tunnel vision from being submerged in the product full time every day will typically lead to a built experience that isn't amenable to large portions of their users.
Providing QA feedback tactfully is an impressive and appreciated skill, as is receiving feedback gracefully. Everyone should aspire to do both.
A good tester is the kind of person who revels in running the same lab experiment many, many times and chortles at always getting results within the error bars.
A good QA is the kind of person who can think of every way something will fail, and then come up with a proactive risk mitigation strategy that makes everyone smile with pride.
> QA get run over by aggressive developers
True.
Back when we had QA/QC, my "One Weird Trick" was to put the QA / Test team in charge of releases. Running the bug triages, in charge of acceptance testing, running the go/no-go meetings, etc.
Worked f@#$ing great. Almost like magic. Zero drama. Our releases were almost anticlimactic.
I miss the '90s.
Well, I miss my '90s QA/Test experience.
Most everyone else was stuck in Cem Kaner's world. The preeminent "SQA" guru who preached victimhood and grievances. Probably did more than anyone to pile-drive the QA/Test profession into the Mariana Trench of irrelevance.
(Apologies, weak sauce, I know. I usually have a better "colorful metaphor" ready to deploy for these types of rants.)
From a manager's perspective, no matter what people you have at the start of the journey, the team culture can be changed. Most people are willing to learn something new and try new processes if they see the value. In more than 20 years I have seen maybe 2 or 3 pathological cases, where a person had to leave the team rather than play by new rules. It is not easy, and it may take time for the team to adapt, but it's a manager's job to unlock the potential of every team member, and that job is doable.
The old wisdom in the US, is that if you have "Quality" in your job title, your career is over.
At the Japanese company that I used to work for, it meant that you were one of the most powerful people in the corporation, and was a sought-after adornment.
I always give the power to my QA team to block any release no matter what and to give higher priority to tickets than product manager. If CEO wants to override, I cover them and take the blame. This is not a guarantee that there will be no bugs in production, but it saved us a few times.
They never reported a "NotABug." They could back up every report, and give exact reproduction steps.
They found weird, obscure corner cases, and that was by hand (they hated automation tools).
They had 3,000-line Excel spreadsheets. If even one of those rows failed, the whole shooting match (like an entire product line) could come to a halt (so that meant they had to cross their t's, and dot their i's).
They seldom had "opinion-based" reports, and, when they did, the report was presented by the manager, after long discussions.
The company I worked for, was renowned as one of the highest-Quality optical corporations in the world.
I worked with NTT Docomo years ago for a short time. First time I ever got to see a CMM Level 5 organization. It was insane. No wonder Japanese cars were so much better than everyone else for so long.
If you ever get the chance, take it - you will learn way more about software quality than you thought existed!
Heh, this reminds me of an episode of Top Gear I was watching years ago about the quality of British cars. They said something along the lines of "The manufacturer says 'eh, good enough' the moment the car is able to move under its own power."
Do you have any insights into why desktop and mobile software from these companies is so universally horrible? I’m thinking of Canon’s remote tethering tools, Fuji’s instax and remote control apps for iOS, and Epson’s scanner software.
I don't want to get into slagging these folks, but I feel your pain. In a big way.
Hardware != software.
Hardware companies have a really difficult time understanding this. They insist on running in-house software projects as waterfall-based, measure-twice-cut-once, never-accept-a-bug-count-greater-than-0.
Anything different is "bad quality cowboy."
It can be difficult. I rapidly learned not to use the word "agile," within earshot of many senior types.
This applies to US hardware companies, as well as Japanese ones.
Most folks hereabouts seem to think of me as an unbearable, retentive snob, but my former managers would often think of me as an undisciplined, reckless slob.
Hi, I'm the Thib person mentioned in this article, and I agree that QA is super important. I can mostly talk about matrix.org, since I have little power over the Element clients. Disclaimer though: I'm technically employed by Element (to make paperwork simpler; since I'm France-based, Element has an entity in France, and the Foundation is UK-based), but I'm working for the Foundation full time.
This kind of article is super valuable since it gives us the perspective of a new user. I opened https://github.com/matrix-org/matrix.org/issues/2178 to translate the gripes mentioned in the issue into actionable items for us. I took action on the most urgent one (updating the Try Matrix page), but want to take the time to go beyond the surface symptoms and address the root cause of the other gripes.
On the Foundation side, we're a small but mighty team of four. The website is currently maintained part time by me and a volunteer who is doing an excellent job at it.
As I wrote recently in a blog post "Tracking what works, not people" (https://ergaster.org/posts/2024/01/24-tracking-what-works/), I would love to have the resources to conduct user research and user testing on the website but I unfortunately don't. We deployed privacy-preserving analytics to see where people drop and what confuses them. It's not nearly as good as proper QA and user testing, but that's what we can afford for now.
Overall I'm grateful to the author for documenting their frustration, and even more grateful for reacting constructively to our responses and integrating them in the blog post! One of the strengths of open source is to find and address issues collectively. I consider this blog post to be a good open source contribution.
If people around believe in our mission and want to help us with their brainpower, I invite them to join our "Office of the Matrix.org Foundation" room: https://matrix.to/#/%23foundation-office:matrix.org
For those aligned with our mission and who want to support us financially, the https://matrix.org/support/ page should give you all the information you need to help us out.
Hi, hopefully things came across OK. For clarity, I wasn't saying "why haven't they done this, they're bad at QA!?!?!" but just wanted to say that most of us should be doing the same kind of thing with our own products/tools/sites, and give a shoutout to QA peeps.
Thanks for working on matrix, I'm building some things on matrix and it's been pretty interesting.
> For those aligned with our mission and who want to support us financially, the https://matrix.org/support/ page should give you all the information you need to help us out.
Just wanted to add that I signed up recently and wanted my wife to sign up too. I managed to figure it out, but the article is correct. Even down to trying to figure out whether I should use Element or ElementX on iOS. I also realized that my wife would never figure it all out.
I think there are several things we can do to improve, and the process should be fairly similar for Element:
1. Refine who the website is for, and what they are coming here for. We need to narrow down who our audiences are, what they want, what they know and don't know, and how we can best serve them.
2. Conduct user research with a diverse set of people representative of who we think our audiences are. We need to sit down with them, ask them to create a matrix account unguided, and ask them to comment what they are doing and how they feel about things.
One of the difficulties of the website is to find the right balance between not overwhelming the user with difficult decisions (picking a client? picking a server? I just want to chat with my friends!!) without being too biased. We need to be opinionated to guide newcomers through a decently simple process, but we need to leave room for all the vendors to thrive.
If you're trying to make a good onboarding user experience then you should do your onboarding testing with people who've never seen the product before, not devs or QA. Once people are familiar with the product (devs, QA, and anyone who has used it before) then they're "tainted". They'll remember the weird way that they had to work around an issue, and that'll just end up being "the way it is" rather than something to fix.
I've read that a strategy for this is to create an ad and pay people $50 to come in and try to use your software. Tell them to do something in your software and see what they get hung up on. The worst UX problems will be hit by nearly every user.
As simple as that is, none of my employers have ever done this. The closest was one of the bosses asking his wife to try out the software.
You can go even simpler if you need to: use something like https://www.usertesting.com/ which will do the ad side of it for you. I used to do testing for them for some beer money. You'd use something for the first time and try to achieve some task, often talking through it, and it's all screen recorded (or it used to be videoed for phones).
Having someone in person can be extremely valuable for other reasons, but this can be a quick approach.
For larger customers, going onsite and watching them use your tools is so valuable.
> Once people are familiar with the product (devs, QA, and anyone who has used it before) then they're "tainted".
I broadly agree, though this is where I'd split out really good QA people I've worked with. The added advantage is they can also explain the change required that would get some user X to have a better experience (e.g. how your autistic users may get more stuck at a certain place, or how to change the flow such that a 3-4 year old can navigate a UI).
>This kind of process is extremely valuable and should be done by devs more often.
The fact that it's not being done doesn't bode well for their perceived engagement with this project.
I remember when it launched and how much they hyped it up to be the future of secure messaging. That was how many years ago now? It was pre-pandemic.
I'm a lover of all selfhostable federated solutions so I actually hosted a Matrix server for a couple of years. My conclusion is that it's just not ready for production scalability.
And you can't migrate easily between implementations because of their unique database design.