That's not really correct.
I mean, yes, it was rock solid, true.
But it was not a file server, or even a file and print server. It wasn't a "server" as such at all (although it could be one if you wanted).
A "server" is a concept from client/server computing:
https://www.sciencedirect.com/topics/computer-science/client...
In other words, you have a network with lots of small computers (clients) talking to one or more big computers (servers).
That model has been so pervasive since the 1990s that you seem to assume it's how everything has always worked. It is not. Xenix was strong in the earlier era of host-based computing.
The core concept is that you have only one computer: the host. It's kept in a special server room somewhere and carefully managed. On users' desks there are just terminals, which are not computers. They are just screens and keyboards, with no "brains". Keystrokes go over the wire to the host, and the host sends back text for the terminal to display.
No network, no computers in front of users.
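To make that concrete, here is a toy sketch (in C, and emphatically not real Xenix code; the device name is just illustrative) of what the host does on one serial line: open the port, send a prompt down the wire, and echo back whatever keystrokes arrive.

    /* Toy sketch of the host's side of a single dumb-terminal line.
       In real life a getty/login process, spawned per serial port,
       did this job; the device name here is only an example. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/tty1a", O_RDWR); /* serial line to one terminal */
        if (fd < 0)
            return 1;
        write(fd, "login: ", 7);             /* host sends text down the wire */
        char c;
        while (read(fd, &c, 1) == 1)         /* keystrokes arrive one at a time... */
            write(fd, &c, 1);                /* ...and the host echoes text back */
        return 0;
    }

The terminal itself does nothing but transmit keystrokes and paint whatever bytes come back.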
In the '70s and early '80s this was the dominant model because computers were so expensive. Before microprocessors, host machines cost tens of thousands to hundreds of thousands of $/£, and companies could only afford one of them.
Most were proprietary: proprietary processors running proprietary OSes with proprietary apps in proprietary languages.
Some companies adapted this model to the microprocessor era. For instance, Alpha Micro sold 680x0 hosts running AMOS (Alpha Micro OS), a clone of a DEC PDP OS. It sold its own terminals and so on. It was cheaper, and it used VHS videocassettes as removable media instead of disks.
Unix replaced a lot of this: proprietary versions of the same basic OS, on those proprietary processors, but with open-standard languages, open-standard terminals, etc.
Xenix was the dominant Unix for x86 hosts. It let you turn an 80386 (or at a push a 286) PC into a host for a fleet of dumb terminals.
Stock Xenix came with no networking, no C compiler, no X11, no graphics, no GUI, nothing. Each box was standalone and completely isolated.
But a 386 with 4MB of RAM could control 10 or 20 terminals and provide computing to a whole small business.
No Ethernet, no TCP/IP, no client/server stuff.
Client/server is what killed Xenix's market. When PCs became so cheap that you could replace a sub-$1000 terminal with a sub-$1000 PC that was far more flexible and capable, Xenix boxes with dumb terminals were ripped out and replaced with a PC on every desk.
Not even the articles on the webpage you've linked talk about "...big computers (servers) [...] with dumb terminals", nor do they say that the concept of client/server is:
> The core concept is that you only have 1 computer, the host. [...] On users' desks they have just terminals, which are not computers
Contrary to what you write, the linked page starts with:
"In a client-server system, a large number of personal computers communicate with shared servers on a local area network" and later explicitly lists
continues with references to Microsoft (N)OS.
And then the reference from NOS leads us to:
"There are only a few popular choices – Novell, UNIX, Linux, and Windows. The complexity of NOS forces a simple overview of the features and benefits."
So I don't really understand your point that Netware was neither a file server nor a print server.