So I glanced at the source code, and it surprised me to see that it was not a typical HTML document, but instead an XML document with a stylesheet. It is very rare to see a non-HTML page on the web, whether it is the XHTML serialization of HTML, or fully custom XML.
This bug is actually a regression in Firefox 58 from the fix for a use-after-free bug (CVE-2018-5097):
It gives me the best of both worlds: simple markup with complete control over output and CSS styling, and instant changes to all documents without a compilation step.
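For anyone curious, the mechanism is just a processing instruction at the top of the XML file, which the browser honors with no build step. A minimal sketch (file and element names here are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/css" href="docs.css"?>
    <article>
        <title>Designing Good Software</title>
        <para>The browser styles these elements with docs.css directly.</para>
    </article>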
Seems DocBook has been around since 1999, judging from when the first O'Reilly book about it was published.
I just flagged it with the Pocket app's "Report An Article Display Problem" feature, but it is such a niche use case that I can't imagine them spending time on it.
DarkWyrm hasn't been active on Haiku for some years now (though I think he has an occasionally-updated Twitter account?), so it's a community-maintained document at this point.
Excellent goal but way harder than people think.
I would say it is not fully possible.
An example of this would be if a music composition program has an easily-reached maximum song size because the code monkey who wrote it used a 16-bit variable instead of a 32-bit one. *While there are sometimes limitations that cannot be overcome*, the actual code written and the architecture used when it was written should have as little effect as possible on what the user sees and works with when using your software.
I think it might be better rephrased as:
1) Don't add unnecessary constraints (or: Don't prioritise efficiency/etc. over the interface).
2) A good interface abstracts away technical problems, rather than presenting them in a different form.
There's also the aspects of form follows function, and mechanical sympathy. The use of a tool should strongly guide its shape and design, and similarly, a user should be able to develop an intuition about what the tool is good for and what it's not good for, so they can use it to maximum effectiveness, and not be frustrated when it can't cope with a problem it's not designed for.
Spolsky talked about it 16 years ago:
KDE/Gnome are much more than implementation details: writing a program for either means using a certain design language and set of UX rules, and those are user-visible.
Classic example from when iOS/macOS had manual reference counting: fixing an over-retain bug in the framework causes an app with an over-release bug to crash.
Even more extreme example: https://twitter.com/gparker/status/1050903609142009856?s=20
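To make the over-retain/over-release interaction concrete, here's a minimal C++ sketch (not any real framework's code, just the shape of the two bugs cancelling out):

    struct RefCounted {
        int refs = 1;                   // the creator holds one reference
        void retain()  { ++refs; }
        void release() { if (--refs == 0) delete this; }
        virtual ~RefCounted() = default;
    };

    // Framework with an over-retain bug: takes a reference it never returns.
    void frameworkCall(RefCounted* obj) {
        obj->retain();                  // BUG: never balanced by a release
        // ...use obj...
    }

    // App with an over-release bug: returns a reference it never took.
    void appCode(RefCounted* obj) {
        frameworkCall(obj);
        obj->release();                 // BUG: balances nothing
        obj->release();                 // the creator's one legitimate release
    }

    int main() {
        appCode(new RefCounted);        // refs: 1 -> 2 -> 1 -> 0; no crash
    }

Today the two bugs cancel and nobody notices. Remove the framework's extra retain, and the app's extra release now frees the object early -- the framework fix is what makes the app crash.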
The implementation will shine through sometimes, or will be (easily) detectable via "side-channel attacks", e.g. an interface feature can be slow because of how it is implemented.
OTOH this allows upgrading and entirely changing the implementation without affecting the user interface, or at least keeping it largely backwards-compatible.
Especially since the grammar in the article is relatively OK, whereas that title would be really terrible if it were not a pop-culture reference.
That's a fairly bold claim. Haiku is an OS that basically no one uses for any purpose. Why should I take any sort of design advice from them?
By your logic, since macOS has ~1/10 the install base of Windows, clearly this means that "very few" people use it and we should not take Apple's advice on anything design-related.
I'm old enough, and enough of a computer enthusiast, to remember when BeOS came onto the scene. Even most computer enthusiasts today probably haven't heard of it, and of those who have, I imagine only a subset have heard of Haiku.
Now maybe Haiku really is great, fast, and easy to use, but I think it's a stretch to claim it's "known for" those traits when (perhaps undeservedly) it's more accurate to say that, to a first approximation, it's not known at all.
In this regard it's nothing like Mac OS, which, while much less popular than Windows, even my parents have heard of.
> By your logic, since macOS has ~1/10 the install base of Windows,
Does not follow. MacOS use relative to Windows is not at all a fair comparison. Firstly, "everybody" uses Windows, so 1/10 of a huge number is still a huge number. Second, except for gaming and some particular areas like EDA, MacOS has a general software library with a broad base of technical and non-technical users throughout the world. Haiku OS cannot claim anything like that.
That was the second half of my comment. Please re-read the first half.
My point in that part, though, was that macOS has ~1/10 the users of Windows, but Windows' UI/UX was/is widely regarded as absolutely atrocious, and macOS' as pretty good to great, depending on who you ask. So "if it has fewer users, their thoughts on design are irrelevant" is ... not a good argument.
There is a valid point to be made about verifying usability and the need for a large and diverse userbase.
How much you want to bet that the Haiku userbase is overwhelmingly male? And this is not to make some concerned sexism argument but to illustrate that the community is more representative of the IT crowd rather than the general population.
OK, that is indeed a valid point, but not one I got from the first comments.
I think Haiku has a diverse enough following that we are past most of these concerns, and a lot of our major paradigms which differ from other OSes were validated in the BeOS days. There are some areas where we still have work to do -- e.g. interfaces for the blind -- but overall we are doing quite well.
> How much you want to bet that the Haiku userbase is overwhelmingly male? And this is not to make some concerned sexism argument but to illustrate that the community is more representative of the IT crowd rather than the general population.
Oh, I know it's true of Haiku, but I can practically guarantee the same is true of Linux/BSD/etc. So then, can only "mainstream" operating systems have anything to say here?
We recently ported nbdkit to Haiku and it was a pleasant experience dealing with the developer. However I never actually used Haiku (just reviewed and applied patches).
The other thing I think I agree with is the philosophy on designing complicated operations. Every operation a user performs should leave them with no question as to what the outcome will be when they click the Big Red Button. And they should be able to painlessly undo it. Imagine typing without a backspace key. Your words per minute would be near-zero because of the fear of making a mistake. Undo is essential.
The lack of transparency around state changes is something that makes users hate software. Who will this share button share with? What does "make my YouTube comment publicly accessible" actually mean? Users don't know, then they make the wrong decision based on incorrect information, and get mad -- and rightfully so. We blame the user, but it's our fault for laying hidden traps. You can implement social media without tricking users, but we choose not to.
I see a lot of people fall into the "great is the enemy of good" trap with regards to undo. They see some operation that creates some immutable effect on the Universe, so the only remedy offered to the user is "email the dev team and we'll try and fix it." This is a pain for the dev team and a pain for the user. Yes, there are certainly permanent changes that software can make. Send an email, launch the nukes. But those should not be barriers to letting users undo. If they want to undo something that sent an email, you'll just have to send another email apologizing. That is all the development team is going to do when they have to manually fix it. Nukes can be killed while they're in flight; better to have some fallout near the launch site than to start World War III. And yet, we consistently fail to offer undo in most applications, so users learn to be very scared of doing anything, and that fear prevents them from accomplishing their goals. That's our fault, not the user's.
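The standard way to get there is the command pattern: every user action is an object that knows how to apply and reverse itself, so undo is just popping a stack. A minimal C++ sketch with hypothetical names (operations with external effects, like the sent email, would get a compensating action in revert() instead of a true inverse):

    #include <memory>
    #include <stack>
    #include <string>

    // Every user-visible operation knows how to do AND undo itself.
    struct Command {
        virtual void apply(std::string& doc) = 0;
        virtual void revert(std::string& doc) = 0;
        virtual ~Command() = default;
    };

    struct AppendText : Command {
        std::string text;
        explicit AppendText(std::string t) : text(std::move(t)) {}
        void apply(std::string& doc) override { doc += text; }
        void revert(std::string& doc) override { doc.resize(doc.size() - text.size()); }
    };

    struct Editor {
        std::string doc;
        std::stack<std::unique_ptr<Command>> history;

        void run(std::unique_ptr<Command> cmd) {
            cmd->apply(doc);
            history.push(std::move(cmd));   // remember how to take it back
        }
        void undo() {
            if (history.empty()) return;
            history.top()->revert(doc);
            history.pop();
        }
    };

    // Editor e;
    // e.run(std::make_unique<AppendText>("hello"));
    // e.undo();                           // doc is empty again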
I have seen this all in action at work. We have something that is a "big red button" (we're an ISP, it's the "change service plan" button). People will not click this, and it's because they don't know what it will do. If we said "clicking this will cause the customer to pay $x a month, with the next invoice being sent on 12/01 (click here to see a preview), and their bandwidth profile will be set to XMbps in the next 30 seconds", then people would click the button, because they know it will solve their problem, rather than create a problem.
Anyway, you have to give your users understanding and the ability to safely experiment (which is how you gain understanding). That is what makes software a tool to enhance productivity, rather than an annoyance that makes the user do extra work. Just dumbing down your error messages isn't going to cut it.
Many developers can see the value in open source software. This philosophy claims even more value (user freedom/liberty) can be added for groups outside developers by creating "open implementation software". The practicality and feasibility of this design philosophy are one story (probably not worth arguing over without evidence). But the idea of empowering users to understand the implementation at even a rudimentary level resonates with a lot of frustrations I've had as a babied user of some software. I will try to design with this philosophy on my next personal project and see how it works!
Aside from the fact that it's a minefield of dark patterns, how did social media get into this discussion?
"Aside from [the thing that makes it relevant to this discussion], how is [x] relevant to this discussion?".
> Good Software Uses Everyday Language
I'm curious why keyboard shortcuts are called accelerators.
Overall looks like a pretty good set of things to think about as a UX designer.
Funny fact: I actually "invented" this philosophy (drivers for every file format being installed, managed, and exposed to applications and scripts by the OS) when I was a child. I was amazed to see somebody actually went and implemented this idea (as far as I understand, it works approximately this way in Haiku).
> As an aside, MP3 is a format which requires licensing for decoding and encoding to be legal, so depending on the MP3 format is not a good idea unless your program deals specifically with it.
This is not true: the patent has expired AFAIK, and every desktop Linux distribution has been able to play it out of the box for quite a long time already (and even if it couldn't, people living in countries without software patents should be allowed to make use of their advantage). In fact, MP3 is a great format to depend on, as it is the only lossy audio compression format that really plays everywhere. AAC is better and can be played on the majority of modern devices and OSes, but still not really everywhere; Opus is the best, but there are unfortunately still so many devices and apps that don't support it.
Yep, it is indeed. You can of course write your own translators for custom file formats and install them along with your application; but then anything which already handled whatever representation they produced (e.g. PNG/JPG/etc. images -> BBitmap, Haiku's image buffer class) could then handle those filetypes also.
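For those who haven't seen it, consuming the Translation Kit from an application looks roughly like this (a sketch from memory of the BeOS/Haiku API; error handling omitted, and the function name here is made up):

    #include <Bitmap.h>
    #include <TranslationUtils.h>

    // Load any image the installed translators understand -- PNG, JPEG,
    // WebP, or a custom format whose translator shipped with another app.
    // The caller never learns which translator did the work.
    BBitmap* LoadAnyImage(const char* path)
    {
        // BTranslationUtils walks the installed translators and hands
        // back a ready-to-draw BBitmap, or NULL if nothing matched.
        return BTranslationUtils::GetBitmap(path);
    }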
> This is not true, the patent has expired AFAIK
Only last year, and this document was mostly written a decade ago :)
Haiku can now play MP3 out of the box also (and since the servers and buildbots are mostly in Germany, we don't really pay much attention to U.S. patents anyway at this point.)
The Amiga got something like that with Workbench 2.0, actually:
This is also very cool:
> ARexx can easily communicate with third-party software that implements an "ARexx port". Any Amiga application or script can define a set of commands and functions for ARexx to address, thus making the capabilities of the software available to the scripts written in ARexx.
> ARexx can direct commands and functions to several applications from the same script, thus offering the opportunity to mix and match functions from the different programs. For example, an ARexx script could extract data from a database, insert the data into a spreadsheet to perform calculations on it, then insert tables and charts based on the results into a word processor document.
But as a great demonstration of why this is fantastic: Amiga software written in the early 1990s can open e.g. WebP files without modification, despite the format being released only in 2010.
Actually even some older ones - since AmigaOS lets you patch OS functionality quite easily, there's a tool - DTConvert - that patches library calls to intercept attempts to read non-IFF image formats and passes them through datatypes, letting even older programs load files supported by datatypes.
Is it? I want this to be true because I want to write good software. But I've worked with some very senior developers who would disregard all software engineering and user experience concerns and just spew large quantities of low-quality code that made the managers just as happy, especially since it got done quickly. And the end-users in many niche industries are used to being shipped garbage, so they're just as happy too.
"Good enough is always good enough". You probably want to do more than good enough. Those developers you talked about, probably knew the amount of quality that was actually needed, no more, no less.
Clean code doesn't mean "no bugs" and "good software":
- I've seen very cool, popular, money making games that had terrible code (all in 1 C file for example).
- I've seen a terrible mess of code that was practically bug free (because it had been running for years in production). And I have seen that same code refactored to clean code by a junior (who thought he would 'fix things'), which introduced a lot of bugs (because that is just what happens when you write code, even clean code, even as a senior developer).
If you deliver a final product (such as a game) that will not need extra features, you can hack stuff in at the end. If you hack code together at the start, you will shoot yourself in the foot by needing to wade through your mess all the time. But when you do this at the end of the project, you can save time by adding quick hacks. The technical debt that you introduce never needs to be paid off. Of course you cannot do this with a long-running project; but with a deliverable product, go right ahead.
The main thing that you have to consider is this: EVERYTHING IS A TRADE-OFF. I see juniors make this mistake all the time, selecting the best programming language, the best VCS, the best whatever. You know, that 'best' thing also has drawbacks. So it's always about making trade-offs, selecting something despite the drawbacks, because of the benefits.
General rule: if something only has benefits, it's probably because you didn't figure out the drawbacks yet.
How to make the proper trade-offs? Experience ;).
I would add that the amount of effort put into designing and polishing a project that you can defend as correct is related to how much future development and use the project will see, and how critical the project is for the company.
So, if you want to work on a nice quality project, or want to spend a lot of your time on improvements to quality, find yourself a project that will either see a lot of future development, or a lot of future use, or is extremely critical to the business -- or, best, all of these.
I see this mistake happen all the time -- people complaining about quality, disparaging previous developers, or demanding effort to improve things just for quality's sake ("because I just read this book and it can be done better") without reasonable arguments to support it from a business POV.
Also, I see people categorizing all project deficiencies as technical debt, where what they should be doing is first establishing what a reasonable expectation regarding quality would be; the debt is then only what is absolutely necessary to reach that reasonable expectation.
So for example, if you make a small app that will not reasonably see a lot of future development but is critical to work reliably in production (and currently it doesn't), your debt would most likely be centered around making the application reliable enough, and probably not around making the code nice (as long as the code isn't preventing you from making the application reliable).
This gets missed too often: clean interfaces are important, whether it's a UI or an API. Clean code? It depends.
I once interviewed a dev who had all the right answers when it came to software development practices, TDD, etc., but when we looked at the product he was working on, the UX was a stuttering mess.
I don't care what your code looks like if you did not achieve a good end-product.
Is that a good metric to judge a developer, though? Good UX requires either formal training in the field, or a very gifted individual. In my opinion it has very little to do with the quality of either the code or the developer.
Baseline UX awareness is crucial for every developer on any team that sets out to build a great product, just like any effective designer has to be aware of the physical limitations of his design space (for example, the speed of light, or bandwidth).
If your UX designer's job is to point out that a responsive UI is indeed important for your product, he is not actually doing UX design, and you are likely in deep trouble.
Apple and Google shouldn't fire their best compiler/JIT writers just because they write lousy GUIs.
A designer is not responsible for making sure that the implementation of their design runs stutter-free at 60fps in a browser -- that is an engineer's job. If the engineer who implements such a design is insensitive to these things and needs convincing that the implementation shouldn't stutter, jerk, or jank, you are never going to get good results.
I do care if your code is a mess if other engineers have to maintain it.
My point is that "good code" should be prioritized behind a good product. The purpose of code is to deliver a product, not to be well-factored or to embody certain principles. If your product is bad, your code is already categorically bad, no matter how beautiful it is.
But what if you have a component here or there with well defined, well tested interfaces, but internally it's a bit of a mess? Or what if you hack together a proof-of-concept for a new feature to get it out the door and see what users do with it before investing a lot of time in making it perfect? Those kinds of things can help your organization move faster.
As with most things, clean vs messy code is not a binary. It's the sum of a lot of individual value judgements about when it's pragmatic to focus on code quality.
Even so, I am inclined to argue that in most cases the exact opposite is true. That is, it is actually “easier” to create a “good product” with a messy codebase, because in most cases “easier” and “good product” are understood by primary stakeholders to mean business value at this moment.
- One coder writes his product with a high quality code base.
- Another coder sits next to 3 users of his product, and hacks stuff in as fast as possible.
I would bet on the 2nd one having the better user interface.
And that, of course, has its exceptions too. Non-optimized stuff exists, and everything that isn't on the optimum line can be improved in some way without drawbacks. Besides, the optimum line is always moving.
- How certain are you that you didn't introduce a bug?
- Is it also running faster on this exotic platform that we still need to support?
- Did it involve API changes?
- Do we need to put this into production now, or wait for the next optimization to save some time?
- How much time did we spend on this? Was there something else that would have made a bigger impact?
In theory you can improve things without drawbacks, but in practice you always have to make the right trade-offs.
Are you sure that wasn't an optimization to get "LTO" with a non-LTO linker?
See "Unity build".
We don't want golden-polished software. We want software that only just does its job, makes the company money, pays the bills, and allows for future improvements where possible/necessary. Lean and mean, then refine (if the bills have been paid).
The distinction has to be made between good software and well-written software. It is very possible to have well-written bad software and awfully-written good software.
I think realising this is what makes a junior into a senior. Having the confidence to not always do what that little birdy in your head says :)
Customers buy software that solves problems, therefore good software solves problems. Customers buy good software, not well-written software. The two are not mutually exclusive, but they are not the same thing.
For example, controller software attached to a turbine engine. What I said above is especially true in those scenarios, and it is actually quite foolish to do more than necessary in the name of software perfection. Plus, unnecessary complication hurts brains :)
I'm one of the end-users of many terrible B2B products. I'm not happy with them, it's just that the department that buys the software from IBM & Co. doesn't use it. They don't care. But it costs a lot of money in the long run because of bugs and slowness.
I'm starting to feel like the whole movement for 'software craftsmanship' is making a lot of developers miserable. It's maybe better for them to realise they are not artisans, they are bricklayers. To focus that creative energy on their own projects rather than getting burnt out fighting their managers over writing 'good software'.
Not as miserable as working on some 7-year-old system hacked together without any craftsmanship. Plus, if it's a long-term project, the maintenance costs must spiral.
Generally business considerations >> user happiness.
Software is primarily shipped to make money. If a shitshow product - it's not hard to name a few - makes money, there is no business feedback loop which will raise quality.
Senior devs tend to justify this on the basis of experience, and management justifies it on the basis of shareholder value.
In reality it's just cheap, shoddy, dispiriting, and cynical. In a mature industry, developers would be able to take pride in making end users happy, as opposed to making them frustrated and enraged. Not the same goal as impressing developer peers.
Perhaps you could make something they love instead, but if the production then costs 10x more, and they are only willing to pay 2x more, it does not make sense economically.
The Russians built a huge volume of very low-quality tanks, expected to go for only a few hundred miles, and to win through superior numbers. Their crews were poorly trained. The designers suggested easy-win improvements, but the brass blocked the changes as it would have impacted production rates.
The Germans built the best-engineered tank the world had ever seen, but its design tradeoffs made it ineffective, and it arrived too late in the war.
Really a lot of Russian-vs-American style examples.
Also technical (engineering) books, for example. Generally, Russian books were a lot more condensed and raw: pure numbers, formulae, etc. The American school was a lot more colorful: pictures, graphics, plots, diagrams.
There were people who liked the Russian style better; however, I think the "prettier" one has prevailed.
Interesting subject I think.
But it does mean that it's easy to exceed their expectations, for example by making the browser back button take the user back to the previous view instead of crashing the entire user session. I tend to try to justify spending time solving technical debt by making development of new dazzling features faster and cheaper, but as long as the users (and more importantly their managers) are sufficiently dazzled if you only throw them a pittance once in a while, that becomes hard to sell.
You know, I kind of went through the same questioning: at my current company we have an employee who has his niche/historical role on one of the key pieces of infrastructure of our product.
The guy works from his home at whatever hours, commits 300+ lines per commit (90% of which is unrelated to the commit name, just commenting things out or uncommenting others). The code is a spaghetti of if-else-if-else, and it has monstrous "client specific" code (like: if the current client is this guy, then do that, otherwise for all other clients do this).
There's even a load of "if the program runs in test, answer what the test expects, otherwise in prod send something else".
Every time I look at the commit tree (a balance of wanting to have fun and wanting to see what is going on), I'm in horror. The guy rewrote java.lang.String, java.lang.Date, and some other base classes of the language to incorporate his own version of date handling.
Fun fact: at one point his date parsing code "wasn't happy with October"... why? Because October contains the letter 'T', and he botched his ISO date parsing (format: YYYY-MM-DDTHH:mm:ss) by checking for the presence of the letter T anywhere in the string.
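A reconstruction of that kind of check (hypothetical code, not his actual source) shows why it misfires:

    #include <string>

    // BUG: "is this an ISO 8601 timestamp?" decided by scanning the
    // whole string for the letter 'T'.
    bool looksLikeIsoDate(const std::string& s) {
        return s.find('T') != std::string::npos;
    }
    // looksLikeIsoDate("2018-10-15T09:30:00") -> true  (as intended)
    // looksLikeIsoDate("15 October 2018")     -> true  (the 'T' in
    //                                            "October" matches too)
    // A sane check inspects the one position where ISO 8601 actually
    // puts the separator: s.size() > 10 && s[10] == 'T'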
I have many, many stories of him committing some bad code, only to spend 12h in a row frantically trying to correct the bug by adding ifs and elses.
The last time I looked at his code, he was rewriting a Huffman compression algorithm by himself (NIH syndrome) to compress his on-disk data. Complete with loads of code to handle per-bit operations that he wrote by himself instead of relying on a battle-tested library.
Management-wise, they accept his behavior because this guy, working from his own home, can often work at scandalous hours and be on the bridge when he pushes a botched version to production at 2AM. At the time of this comment, his last 4 commits on master are at the following times: 08:57PM, 10:40PM, 12:11PM, 01:02AM.
I know I'm venting hard, but this guy is probably the best-paid engineer in the company, he is a huge liability IMO, and he is present at the all-hands meeting maybe once a month. So yeah, why bother writing great code and making sure your tests and CI are green, when some other guy with a bit more history in the company can get away with doing shitty work and having poor life balance, as long as he shows "commitment" when he botches his software releases?
This is really some life-long questioning of mine: should I care about things, or should I focus on politics and disregard the rest? Aren't those on top of the pyramid the more "sharky" people, who climbed the ladder not by pure skill but by showing the appropriate (bad) behavior when needed, correctly compartmentalized their own questions about ethics, and got away with it?
It's not even that the code is bad (and it is); it's that he couldn't think straight, so the business logic is all insane as well.
Still, the boss hired me to sort it out, and the time scale is measured in years (a week is a mix of new stuff and fixing old stuff).
Typical performance improvement is two orders of magnitude.
When I started it took 70s to search for a quote.
Now it takes 200ms, and mine includes the line details on the quote in an easy-to-eyeball format.
My boss seems happy at least.
Now that I'm thinking about it... do the managers actually know the bug was caused by his "programming style"? Because if they are under the impression that he is fixing other people's mistakes... then it would make sense to pay him better than the rest.
> This is really some life-long questioning of mine: should I care about things or should I focus on politics
I guess you need both; and the exact proportion depends on the company.
> Aren't those on top of the pyramid the more "sharky" people that climbed the ladder not by pure skill ...
This already assumes that those on the top have climbed the ladder. As opposed to already starting somewhere higher than you will ever get.
I'd be curious to hear you elaborate on this. I've been in a WFH setup like this for about 4 months, and while I enjoy it right now, I do wonder if it'll be something I might regret after 2-3 years on this schedule.
My primary reasons for working like this are that:
1. I hate being in the corporate office environment, it feels very artificial and constraining
2. I tend to have my most productive coding sessions very late at night anyway
My new situation is categorized as flexible office hours. I go into the office 2-3 days a week, enjoy good rapport with the people I work with, then WFH on Mondays and Fridays (typically). I find this to be a good balance - I have enough structure Tuesday to Thursday that I can continue this trend Monday and Friday.
I would pay for a book with more anecdotes of kafkaesque, nightmarish visions of horrendous programs.
Honestly this is my biggest disappointment as I get better as an engineer. No one except me seems to bother about simplicity, readability, maintainability (OK, a few of my colleagues do).
...By removing unneeded items from the file navigation window, you are reducing the number of choices the user must pick from and also preventing him from opening the wrong kinds of files...
...Good feedback just means making sure that the user knows what your program is doing at any given time, especially if the program is busy doing something which makes him wait...
...An even better solution would be to select a good default choice for the user and give him the option to change it...
... In short, a user will learn only what he must in order to do his job...
Ctrl-F "her", "she": 0 matches
Another tip to design good software: don't assume all your users are male :)
This is an okay stab at a HIG, although it is severely lacking in examples and screenshots, for a document that deals with graphical user interfaces.
Good past attempts that are worth the read, if you're into this kind of stuff, include the original Macintosh HIG as well as the NeXTSTEP User Interface Guidelines. I also quite liked the HyperCard Stack Design Guidelines. Gnome once had an okay-ish HIG (which I contributed to); I'm not sure how well it's been kept up to date through the years. KDE, despite being better on the eye-candy front compared to Gnome, was always lagging behind in terms of actual concrete guidelines for developers.
You're right, we should update it, though.
No, you really shouldn't; it's not a real problem. The only people who keep tallies of gender pronouns in documents are internet trolls; don't indulge the trolls. Anyway, switching up pronouns every other time breaks continuity and makes documents harder and more painful to read.
When writing on technical topics, the chief concern is to be clear, accurate, and straightforward. Anything that might distract your reader is undesirable. By referring to “the user” as “he”, you’re already distracting roughly 50% of your potential readership (if you’re writing the documentation for a Do It Yourself Vasectomy Kit, perhaps you get a pass).
Switching up pronouns every other time is a strategy, but it is indeed one that can hurt the readability of a document. This article covers various strategies nicely:
(Note that this document was written over 20 years ago, so the “rules of style” when this document was written were roughly the same as they are now)
Correct English is never out of style.
Different languages do different things, and that's okay. Some languages apply grammatical gender to more than just sex, e.g. tools, or fruit. That's okay. English uses grammatical gender to distinguish between male, female & neuter objects, and defaults to male when referring to unknown males-or-females. That's okay too. It's all part of the rich panoply of life.
> By referring to “the user” as “he”, you’re already distracting roughly 50% of your potential readership
I really doubt every woman is distracted by proper grammar. And of course by not using correct English, you distract people who use & prefer it.
If we were speaking about, eg, the contemporary French of France, you would have a point - there is a governing body decreeing what is correct French and what isn't.
That is not the case for English.
But even that is beside the point, because "they" as a genderless pronoun has been used for over half a millennium. (William Caxton, 1460: "Each of them should...make themself ready")
Do you have any research/data to back this up? I'm a male and I am not distracted when people use "she" (or "he" for that matter). I have never heard anyone in person complain about not being able to focus on an article because they used "he" instead of "she".
> Anything that might distract your reader is undesirable.
Concision is of utmost importance in any language. Many of the examples in your link sacrifice readability and concision for the purpose of avoiding pronouns. What really distracts me is when people use "(s)he", "s/he", "he or she" (oh no -- which one should be first?), or periodically switch between "he" and "she" absolutely everywhere -- all for the purpose of filling a contrived equality quota.
Elitist prescriptivists who wanted English to be Latin, following a bizarre linguistic fashion, tried to purge it, but never managed to remove it from general use; and prescriptivism, especially on points that have never aligned with common usage, has fallen out of favor. So there is little reason not to use it now, other than adherence to a particularly archaic elitism.
If by 50% you mean 100% of the women having trouble making it through an article with multiple instances of "he" not immediately followed by "or she", that's a pretty bold claim.
I suspect the key factor in people's inability to focus on the central point of the article lies in their political persuasion rather than the shape of their genitalia.
It was certainly one aspect of my writing I got a lot of feedback on. Changing that habit was fairly easy, and has entirely solved the problem.
I’m not sure what’s political about pointing out that “the user” of eg a computer system is not always a “he”, but if you want to bring the conversation there, have fun. If anything, it’s starting to feel to me like people with no interest in technical writing are coming out of the woodwork and making this political!
But who were these people? I don't doubt that there exists a population such that half of it complains about the lack of gender-neutral language in technical writing, but it does not necessarily follow that the same half should also be the arbiters of technical writing style.
> It was certainly one aspect of my writing I got a lot of feedback on. Changing that habit was fairly easy, and has entirely solved the problem.
> I’m not sure what’s political about pointing out that “the user” of eg a computer system is not always a “he”, but if you want to bring the conversation there, have fun.
Back when the convention in English (and various other languages) was that male pronouns were used in mixed or unspecified situations, there was no need to point this out; it was given by the convention. Pronouns with masculine grammatical gender being exclusively applicable to men is a recent innovation, without which the problem you are trying to solve wouldn't even exist.
These shifts in meaning can of course happen organically, but I find it very curious that this particular one happened at the same time as the rise of feminism in the 20th century. A more extreme example of the same politicised language is using "womyn" instead of "woman" in order to prevent it from ending in "man".
Look: gender-neutral language in technical writing is not that difficult. Most of the time you can rewrite sentences to simply avoid gendered pronouns, e.g., "the successful applicant will use his skills to contribute to the platform team" becomes "the successful applicant's skills will contribute to the platform team."
As for alternating between pronouns: do you think readers are bothered by switching between male and female names in examples? If you realize that the first example used Bob, the second used Agatha, and the third used William, do you suddenly leap up, scream INTERNET TROLLS GOT TO THEM!, and punch your monitor? No, probably not. So do you do that if you see pronouns switching? Really? Again: I'm betting probably not.
I can tell you that as someone who's done technical writing for a half-dozen companies over the last decade or so that I have never heard of readers complaining about alternating pronouns. I have heard of them complaining about documentation that is exclusively male, though, because it turns out that's something some -- not all, maybe not most, but definitely some -- readers will notice and be a little nonplussed by.
What does the second half of the first sentence have to do with the second sentence?