
Why not have Lego like Software? - Mad_Fury
Hi folks,

I really enjoy the HN community, with so much knowledge and enthusiasm. I've
been lurking for a long time, just reading the things you folks share and
discuss. I could go on and on :) but I need to keep this short.

Consider this simply a question; I'm not implying anything, just wondering.

Would it be possible, or rather better, if we built software in a Lego-like
way? In particular I was wondering about Linux and some other tools (audio
players, etc.).

I understand that all the distros do things in their own way, but at the same
time they share a common core (correct me if I'm wrong), the kernel, and
different groups build things around it.

Why not unify those groups and have them work on shared components plus
individual ones? I'll try to explain.

Imagine a single distro which, once you install the core on your computer,
asks which group you belong to:

- Musician
- Scientist
- Education
- Artist
- Developer
- etc.

Based on that, it installs the UI + tools for that domain. If need be,
switching from one domain to another just removes all the packages and asks
again what you wish to add, just like plugins for IDEs.

Wouldn't that make the software more robust? Everyone gets to work on and
build their own vision, and the shared core becomes much more reliable. I've
been thinking about this and wondering about it, and I've been reading and
searching, but I haven't really found the answer I was looking for. I might
also have skipped a few details of what I was going for, but I guess those of
you versed in this topic can elaborate on it and give your comments.

Looking forward to the answers :) (If the title is wrong, suggest a change;
I'd be glad to update it to attract more thoughts and discussion. Reading
suggestions welcome: books, podcasts, anything.)

Thanks
======
c22
Lego works the way it does because one company is responsible for producing
every single little plastic brick. The Lego company takes great pains to
control the precision of their manufacturing process to ensure the bricks
click together. If you worked for Lego and designed a piece that did not
connect to any other pieces, they would not produce it. It seems obvious that
this approach cannot work across the whole world. However, if you look at
software made within a single company, I think you _do_ see this principle in
action, frequently.

Also, Linux distros themselves _are_ the domain-focused packages you want, all
built around the "core" of the Linux kernel. There are various "distro
building" projects that approach this configuration problem from varying
levels of abstraction, e.g. Damn Small Linux, Ubuntu Builder, or NixOS.

~~~
Mad_Fury
I agree; it's clear with corporate companies. That's why I can't stop
wondering about the focus of developers: why each month do we get a new
framework, or a new project that tackles the same or a similar problem as an
already existing project? Why not get together and let the best idea win? Too
often I read about philosophies and differences; one I'd mention is Debian
vs. Devuan.

Full disclosure: I am not a software engineer or architect.

Thanks

~~~
thecupisblue
Because not all solutions work best for all problems.

------
Piskvorrr
That's essentially what a package manager does - there are even such groups as
you suggest.

It does differ from Lego because of the complexity (multiple orders of
magnitude, in fact), and for another reason: everything made of Lego has a
distinctive Lego style. That's perhaps another driving factor: "that style
looks ugly, we will build another, nicer and more fluid one."

~~~
Mad_Fury
But why not work collaboratively to improve the existing ones? I understand
the package manager; if possible, please share links to the groups you
mention, or anything pointing in that direction. I would like to explore.
Thanks

~~~
masswerk
Possibly, the BSD community is more like what you're looking for. It may be a
matter of top-down vs bottom-up philosophy (i.e., control).

~~~
Mad_Fury
I asked the question because of DistroWatch. I was always wondering why there
are so many Linux distros. Why not unite and work on just a few, enabling
modularity so each group would have a piece of its vision while letting users
choose which one to use?

Could this fragmentation be the cause of Linux's lack of presence on the
desktop?

Rather than uniting forces and building experiences on top of shared
functionality, each builds their own experience their own way in hopes of
attracting users...

Thanks

~~~
exikyut
I'd say this is due to

- disagreements/factions

- wanting the satisfaction and validation of building something yourself

- implementation conflicts with existing systems

Sometimes the validation thing is due to ego, sometimes it's because the
journey of building a new system is useful in itself.

In the case of the distro I want to create (:P) it's because the (very simple)
ideas I have conflict with the way things are commonly done, so I'm going to
have to set out on my own. It'll be fun, though. Users, in my case, would be
a headache, because then it wouldn't be a fun personal project any more. I
just want to solve my own frustrations.

And "Linux on the desktop" has been a meme for years. It'll never happen until
Linux achieves cohesiveness, aka never, full-stop. (Took Microsoft a while to
figure that and build WSL...)

NB: I don't see an email or other contact details in your bio. I could let
you know when I finally do get around to starting on that distro.

------
Kaibeezy
At root, philosophically (as opposed to practically), are you not describing
this? _A Pattern Language_, Christopher Alexander, 1977,
[https://en.m.wikipedia.org/wiki/A_Pattern_Language](https://en.m.wikipedia.org/wiki/A_Pattern_Language)

Yes, it’s not as blocky and snap-together as Lego, but I always considered
that an artifact of the need for adaptability/evolution, flexibility to allow
things to fit into environments of random complexity, and accommodating a wide
array of human preferences.

“Programming”, like architecture/planning of the physical environment, seems
to have turned out to need this degree of granularity, nearly
indistinguishable from an organic system.

~~~
Mad_Fury
Seems like a great read, thanks :), book on the way...

~~~
Kaibeezy
Good call. I’d put it in a top 100, even top 10, for post-
zombie/Trumplightenment civilization rebuild.

~~~
Mad_Fury
Please do share any other reads :), thanks

~~~
Kaibeezy
OK. It doesn’t really go to your original post, but I often also point people
to _The Design of Everyday Things_ , Donald Norman, 1988,
[https://en.m.wikipedia.org/wiki/The_Design_of_Everyday_Thing...](https://en.m.wikipedia.org/wiki/The_Design_of_Everyday_Things)

Basic insight into the question of: “What makes a good tool?” Maybe for your
purposes, the angle is: “When is a simplified tool more useful than a complex
one?” Or: “How can I add complexity to a tool while preserving its
usefulness?”

------
BjoernKW
Doug McIlroy, the inventor of UNIX pipes, is quoted as saying: "Write
programs that do one thing and do it well. Write programs to work together.
Write programs to handle text streams, because that is a universal
interface."

I know it's probably not exactly what you're looking for, but "do one thing
and do it well" tools, with UNIX pipes to connect such single-purpose tools,
are in my opinion quite close to a "pluggable building blocks" metaphor.

In terms of web applications, the REST paradigm with data represented as
JSON structures is a similar lowest common denominator.

Someone else has already mentioned Christopher Alexander's pattern language
approach. Design patterns and Atomic Design are similar concepts for the
software development domain.
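McIlroy's dictum composes directly at the shell. A minimal sketch, using a
stand-in input stream rather than a real log file:

```shell
# Find the most frequent line in a text stream using only single-purpose
# tools; the pipe is the universal interface between them.
printf 'error\nok\nerror\nwarn\n' |
  sort |      # group identical lines together
  uniq -c |   # collapse duplicates, prefixing each with its count
  sort -rn |  # order by count, descending
  head -n 1   # keep only the most frequent entry
```

Each stage does one thing; swapping any stage (say, `head -n 3` for a
top-three list) changes the behaviour without touching the others.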

~~~
mlthoughts2018
This is the real value of microservices, in my mind. It allows you to create
services that "do one thing and do it well," including deployment
considerations like the degree of autoscaling or load balancing required by
some components and not others.

Totally unrelated: it is also why I think "full stack developer" jobs are
just business junk and should be totally avoided. A business wishing for
"full stack" employees is a giant red flag for dysfunctional complexity
mismanagement, without a clear vision of what needs to be solved.

Inside a software system, if we designed one monolith class or one monolith
web service that was "full stack" in the sense that it had responsibilities
cutting across lots of disparate systems or layers of the stack, we would
rightly consider this bad software design, rife with unnecessary complexity
from making certain components do too many things. And it would probably be
reflected in how hard it is to refactor, extend, or write clean tests for
such a system.

I argue that popping up one layer outside of the software system and into the
realm of the organization of engineers who design and implement it, we should
still think the same way. Different employees should be isolated specialists
in certain parts of the system, and should not be expected to have cross-
cutting "full stack" job duties, which is really just a way for companies to
artificially save money on headcount (artificially, because the lack of clean
separation of concerns ends up costing them more money in the end).

This is not a statement that a given engineer should not learn about broad
aspects of system engineering. It's always good to diversify your knowledge,
have a better picture of the meaning of the work of other teams, or be able to
fill in during an emergency or something. Only a statement that an employee's
nominal job duties should not include vague bottomless complexity buckets
reflected by "cross functional" or "full stack" responsibilities.

This dysfunction is also often described with the "wear many hats" bullshit,
where the same person who is expected to design a performant machine learning
backend system is also expected to create a responsive single page website and
write any CRUD implementation, ecommerce shop logic, auth logic, etc. etc.,
all while also functioning as a devops engineer and a data pipelines engineer
because "wear many hats."

It's the same mistake that Single Responsibility Principle is meant to guard
against, just at the level of programming teams of people instead of
programming components of a software system.

~~~
BjoernKW
From my point of view, the main benefit of microservices is risk reduction.
Microservices allow you to iterate rapidly and roll out new features without
potentially affecting your application as a whole.

"Do one thing and do it well." can also be an argument for using
microservices. However, your application has to be of a certain size in order
for that to be a valid point. If your application merely has a few endpoints,
most of which are rarely - if ever - used as each other's inputs, then using
microservices only adds development overhead without yielding considerable
benefit. This is also why I'm critical of the surprisingly common one-size-
fits-all "Use microservices for everything" approach. Microservices are just
another design pattern. They're not a cure-all for every software development
problem.

I, for one, am doubtful about the front-end / back-end developer dichotomy
because it tends to create new silos. I don't particularly like "full-stack
developer" either because I think engineers should aim to be problem solvers
rather than people who merely turn someone else's solution into code.

Solving a particular problem might in some cases involve just the back end or
the front end, but in most cases it involves not only both but also other
areas, such as requirements engineering or design.

~~~
mlthoughts2018
> “From my point of view, the main benefit of microservices is risk reduction.
> Microservices allow you to iterate rapidly and roll out new features without
> potentially affecting your application as a whole.”

I think this is the same reason why “do one thing and do it well” is useful in
software components or shell utilities too.

Basically, what you said about microservices is just a restatement of Single
Responsibility Principle / decoupling, in general, and it’s not unique to
microservices.

In this sense, I’d say microservices are not a design pattern. Rather, it is
an organizational strategy for minimizing the risks of coupling _in other
design patterns_ , which is also why microservices are explicitly a good idea
in cases when a service might only have a few independent endpoints.

I don’t buy the argument that it adds overhead. In fact I’d say it explicitly
reduces overhead, because the boilerplate of wrapping it in a web service is
just too tiny to care about compared with the overhead required to manage the
complexity when a system tries to be responsible for too many things.

In other words, I think good hygiene would say defaulting to microservices is
the right idea, and consolidating should require evidence that it provides the
expected benefits.

Personally I am reminded of the funny Andrew Gelman quote, “Just because
something is counter-intuitive doesn’t make it true.”

I think it’s fashionable to criticize microservice designs and people jump on
the bandwagon of pretending like criticizing microservice designs is somehow
more well-rounded or more nuanced, but really it’s not. It’s just an attempt
to make something fashionable via contrarianism.

> “I think engineers should aim to be problem solvers”

I agree with this very much and it is why I think cross-silo / cross-
functional engineering is junk. The best way to organize a team of problem
solvers is to clearly delineate the boundaries between what the problem
solving responsibilities are.

If people use the term “problem solver” to just be an ambiguous bucket of
unlimited complexity by virtue of expecting people to “just solve the problem”
no matter what the impedance mismatch between their skill area and the problem
area, then it renders the term “problem solver” meaningless, and will just
lead to a big mess of dysfunction.

As an engineering manager, one of my most important jobs is to empower
engineers to say “no.” To reject inappropriately allocated work or
inappropriate cross-functional pivoting driven by ineffective short term
business thinking.

The engineers have to be empowered to say, "Hey, that work should not be
assigned to us," because it signals whether the business overall has
structured and staffed itself appropriately to actually respond to the
business problems Mother Nature sends at us.

If we are structured wrongly, yet try to paper over it by pressuring
engineers to be HR-approved "problem solvers," in the sense of just doing
what they are told instead of doing what would actually work, it will be
detrimental and counter-productive in the medium term.

In this sense, I think “silo” is a good word, and indicates a good engineering
culture where different silos have the power to say “no” and where
specializations are respected and planned for by the management.

Yes, of course that can be subverted into an anti-pattern in poorly run
businesses, but that’s a property of bad management, not of “silos” in and of
themselves.

~~~
BjoernKW
> I don’t buy the argument that it adds overhead. In fact I’d say it
> explicitly reduces overhead

At the very least microservices add infrastructure overhead and network
latency.

In terms of everyday development complexity, instead of 1 project for a
monolithic application you now have n projects, one for each microservice,
which is fine if your organization is large enough and sufficiently well-
organised to adhere to a "One team is responsible for each microservice."
organisational pattern.

If, however, your developers find themselves juggling multiple projects at a
time and having to run several microservices on their local machine in order
to make a change this might indicate that either there's an impedance mismatch
between the communication structure of your organization and the design of
your product
([https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law))
or that your application simply isn't sufficiently complex yet to warrant the
use of microservice patterns.

> In other words, I think good hygiene would say defaulting to microservices
> is the right idea

Good hygiene is possible without microservices, too, just as microservices can
be used to build the worst kind of monolith: The distributed monolith.

> If people use the term “problem solver” to just be an ambiguous bucket of
> unlimited complexity by virtue of expecting people to “just solve the
> problem” no matter what the impedance mismatch between their skill area and
> the problem area, then it renders the term “problem solver” meaningless, and
> will just lead to a big mess of dysfunction.

I suppose it depends on how an organisation manages and breaks down tasks.
If, for example, you work at the user-story level, a single developer can
indeed be responsible for implementing a complete story, including all
aspects that might be involved. "Responsible for" doesn't mean "you have to
do it all by yourself." If you have a particular problem, ask a specialist,
and ideally everyone involved will even learn something new in the process.

When viewed from an impedance mismatch perspective the front-end / back-end
distinction in some ways is quite arbitrary.

The prototypical example of impedance mismatch is the one between object-
oriented programming and relational data. However, most would expect back-end
developers to be both skilled in - say - Java and a relevant SQL dialect.

The impedance mismatch between a front-end technology such as Angular and a
back-end technology such as Spring (both of which make use of similar
patterns) is much less pronounced than that between SQL and an object-oriented
language.

~~~
mlthoughts2018
> "which is fine if your organization is large enough and sufficiently well-
> organised to adhere to a "One team is responsible for each microservice."
> organisational pattern."

I actually think the opposite is true. Having the developers on a single team
manage N different service-specific projects is far simpler and lower
overhead, because the infrastructure overhead of N different projects is the
trivial part, compared with the complexity of managing the internal state of a
single project responsible for too many things.

It's like separating a software implementation into multiple different
functional units, either different classes or modules or functions, to
decouple them and keep them cleanly separated for testing. The "overhead" of N
different specific classes is not actually a legitimate concern compared with
the overhead of a single class trying to do N overlapping things (or
functional units if using a different programming paradigm). It's no different
for microservices.

Plus it's fairly simple to run your own container orchestration layer that
allows efficiently mapping different microservices onto heterogeneous hardware
resources, whether in your own data center or with a cloud provider.

For example, my team operates different web services for different machine
learning applications. One service might do nothing but predict the age of a
person in a photo. Another web service might do nothing but predict the
gender.
From experience, I can tell you that muddying these two things together into a
single web service, just because the two endpoints are separate concepts, is a
bad idea. It complicates testing, the organization of acceptance test data
sets, the ability to scale out the resource needs of the different models
separately, and control over how they are deployed. You superficially feel like you
are "saving effort" on the tiny boilerplate code that is repeated between the
two to expose endpoints, sanitize inputs, etc. (most of which can be factored
into an internal software package anyway, to have good code reuse), but really
you are not at all, because the growth of the testing code, single-project
deployment and build code, etc., dwarfs whatever tiny amount of repeat web
framework stuff there might have been.

They are just different logical units of concern, even if they are components
of the same broader API of machine learning services we provide. I think it
would represent a classic example of what you are describing as a case when
you'd prefer not to organize it as microservices, and yet experience has
conclusively shown my team at least that there is much greater boilerplate and
mangled overhead involved with putting them into the same web service than
with keeping them separate and duplicating an utterly tiny amount of work on
the web service wrapper part. And this is all with the same team members
maintaining and extending all of the services in a small company. It just
streamlines and reduces our work to do it with microservices.

> "If, however, your developers find themselves juggling multiple projects at
> a time and having to run several microservices on their local machine in
> order to make a change this might indicate that either there's an impedance
> mismatch between the communication structure of your organization and the
> design of your product"

I actually disagree with this, and it is also an incorrect use of Conway's
Law (which says that software takes on the organizational structure of the
social and communication patterns of the company, and is limited to
repeating those structures because the communication process constrains the
design). None of that affects a set of services differently depending on
whether they are bundled or separated as microservices. Conway's Law would
constrain the design the same way in either case; whether services are
separate or coupled is a different question, one level above the level at
which Conway's Law applies to software.

It's also actually quite nice and easy to write extremely simple tooling,
e.g. with docker-compose and a trivial Makefile, to control local development
that spans multiple microservices. In fact this creates really nice,
version-controlled artifacts showing how that local testing and organization
works, which is often severely lacking inside the internals of a more
monolithic multi-service single project, where you might be able to run it
locally with superficially simpler-looking commands or resources, but where
the internal operation is so coupled between services that you cannot easily
see which things you can modify or change without impacting other things in
unintended ways.
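As an illustration of that kind of tooling, a minimal sketch (service names
and image references here are hypothetical, not taken from the thread):

```yaml
# docker-compose.yml: run two independent model services side by side for
# local development; each remains a separately deployable unit.
version: "3.8"
services:
  age-predictor:
    image: example.com/ml/age-predictor:latest     # hypothetical image
    ports:
      - "8001:8000"   # each service keeps its own port and lifecycle
  gender-predictor:
    image: example.com/ml/gender-predictor:latest  # hypothetical image
    ports:
      - "8002:8000"
```

A one-target Makefile (e.g. `up: ; docker compose up --build`) then becomes
the version-controlled record of how local multi-service testing works.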

Lastly, I think you missed the point of my comment on impedance mismatch. The
problem would be taking someone who is a specialized engineer in X but then
expecting them to "be a problem solver" by spending time on Y, where the
person's value add is huge if you let them work on X, but is tiny if you ask
them to work on Y, even after accounting for whether the business needs X or Y
at the moment.

For example, taking one of your best front-end engineers and requiring them to
fix some broken database normalization problem, or write a stored procedure,
instead of having pre-arranged for the team to contain the necessary expertise
such that the front-end person can delegate that task to someone whose
comparative value add is all on the data engineering side, meanwhile the
front-end person goes back to work on more front-end tasks, where their
comparative value add is high.

The impedance mismatch comes from recognizing that if you keep a specialist in
X busy with work in X, it is more optimal for the company (and more morale
building for the worker), than if you ask them to be preemptible to stop doing
X and instead do Y or Z which ostensibly should be things farmed out to
specialists in Y or Z.

It can be this way with a backend engineer, a front-end engineer, a scientific
computing expert, a database expert, whatever.

You are taking the classic impedance mismatch example of an ORM too literally,
because the general idea is that you are restricting the valuable flow of
something (the specialized skill of the worker) instead of fundamentally
reorganizing resources so as to not need to restrict it (by cleanly separating
responsibilities into different silos of specialization).

------
crb002
This is why category theory interests developers: composing programs like
Lego bricks.

It is more nuanced than that, though: distros involve a lot of snowflake
programs using several build systems and several languages, many of which do
not compose well together.

The ecosystem is gravitating back toward a homogeneous LLVM back end,
reminiscent of COBOL/C/FORTRAN all playing well together.

Build systems are still a mess. CMake and Bazel are on top for now. The next
winner will allow for massively distributed builds on AWS Lambda and S3. I'm
personally working on a distro for AWS Lambda.

~~~
jingwen
Bazel's buildfarm remote execution service is growing:
[https://github.com/bazelbuild/bazel-
buildfarm](https://github.com/bazelbuild/bazel-buildfarm)

------
__d
Red Hat (and I imagine Fedora) has a concept a little like this: when
installing the system, you can select from a list of "roles", and those
roles in turn imply a bunch of individual packages.

But there aren't many of them, and from memory, "Desktop" and "Developer
Workstation" are really the only options of the sort you're proposing.

~~~
nineteen999
Red Hat/Fedora groups are broken up a lot more than that. From a RHEL 7.5 box:

    
    
      # yum grouplist
      Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
      There is no installed groups file.
      Maybe run: yum groups mark convert (see man yum)
      Available Environment Groups:
         Minimal Install
         Infrastructure Server
         File and Print Server
         Basic Web Server
         Virtualization Host
         Server with GUI
      Available Groups:
         Compatibility Libraries
         Console Internet Tools
         Development Tools
         Graphical Administration Tools
         Legacy UNIX Compatibility
         Scientific Support
         Security Tools
         Smart Card Support
         System Administration Tools
         System Management
    

You can install any of these later, e.g.

    
    
       yum groupinstall "Development Tools"

~~~
__d
So of those, perhaps "Development Tools", "Scientific Support", and a couple
of others reflect the OP's proposal of high-level, task-centric modular
functionality.

But there's no reason that groups couldn't be configured for "Musician's
Tools", "Artist's Tools", "Educator's Tools", etc.

~~~
nineteen999
Arbitrary groups like those would need a comps.xml file generated for them.

[https://fedoraproject.org/wiki/How_to_use_and_edit_comps.xml...](https://fedoraproject.org/wiki/How_to_use_and_edit_comps.xml_for_package_groups)

You can pass these to createrepo with the -g flag to create or update package
groups in a YUM/DNF repository.
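As a sketch of what such a group definition might look like (the group id
and package names here are hypothetical examples, not an existing group):

```xml
<!-- Minimal, hypothetical comps.xml defining a "Musician's Tools" group.
     The package names are illustrative only. -->
<comps>
  <group>
    <id>musicians-tools</id>
    <name>Musician's Tools</name>
    <description>Audio production and editing tools.</description>
    <default>false</default>
    <uservisible>true</uservisible>
    <packagelist>
      <packagereq type="mandatory">audacity</packagereq>
      <packagereq type="default">ardour</packagereq>
    </packagelist>
  </group>
</comps>
```

Fed to createrepo via -g, a file like this would make
`yum groupinstall "Musician's Tools"` available from the resulting repository.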

------
rektide
Shared, assemblable components are a dream shared by good projects such as
Node-RED and the Sugar desktop environment.

Getting to a more "Lego"-like system, where things snap together & work
together via common, regular interfaces, requires a lot of rethinking of what
an app is. It requires throwing out a colossal amount of the user-experience
work we've done, & finding broad top-down principles & systems that serve,
guide, place & route the individual components & their connections, which is
"some seriously next level shit".

[https://nodered.org/](https://nodered.org/)

[https://wiki.sugarlabs.org/go/Getting_Started](https://wiki.sugarlabs.org/go/Getting_Started)

------
asplake
Reminds me of “Software ICs” and the Brad Cox book “Object-Oriented
Programming: An Evolutionary Approach”, 1991 if not before (I’m seeing the 2nd
edition). He co-created Objective-C.

------
Mad_Fury
Missing from the top:

These days there is a new project for every already existing project on the
same topic, which is in a way fragmenting the market more and more. Yes,
different features, etc., but why not work on a common problem: extend the
functionality and allow users to choose the things they need. You buy a Lego
set and you can mix and match with a set from 10 years ago.

Is it because everyone wants to lead a project, because opinions differ on
the solution approach, or something else?

~~~
exikyut
You nailed it with the last line: people. Everyone wants to rule the world.
This fundamentally doesn't work, but that fact doesn't stop everyone from
irrationally trying anyway.

The problem you are getting at is entirely sociopolitical and is, sadly,
effectively unsolvable. I spent a decade and a half figuring this out
(started circa 2003, gave up roughly last year). It was a good journey.

If I were going to pick a starting point to springboard off of, it would be
that humans are embarrassingly easy to social-engineer. With developers, you
just stroke their egos a bit, make them feel validated and secure, and
they'll build pretty much whatever you want. Managers get the opportunity to
exert power and control over _entire teams of people!!1_, and this is so fun
to play with that sometimes the end goal (the enabler of the control, a.k.a.
the responsibility) nearly (or actually :/) goes out the window.

It only takes a little bit of study to reverse-engineer the brain and almost
immediately you'll get actionable insights on how to make people feel like
they're getting what they really want when you're actually making them achieve
your agenda instead. Fill people's heads/lives with movement they have to keep
up with, make it so they can ride it like a fun wave if they do it right, and
bam, you can control lots of people.

Crowd dynamics is so fascinating.

I'm not even touching on office politics, cults, etc.

Sadly open source has its own toxicities. Linux is in many ways the PHP of
operating systems; some really amazing bits hiding away in invisible corners,
but no good oversight due to its bazaar-based design, and thus no
cohesiveness. You will never get very far trying to push balanced views within
Linux; the community has become too much of a reactive-yet-immobilized
culture. The Lennart Poettering debacle is one recent example.

Moving away from open source specifically, software design itself is also
fundamentally broken because it tries to abstract out the people - what put
the _intelligence_ into "artificial intelligence"! - as perfect infallible
beings that _sometimes_ make mistakes. (For example, why on earth do
programming languages describe errors as "exceptions"?!)

Programming is full of leaky abstractions that have never been properly
followed up on and properly ironed out of the woodwork.

What we're currently stuck with are systems that are intrinsically tied to
individuals' mindsets and skills, and as a result no two codebases are really
the same.

So it's generally really impressive if you can take Random Component A and
expect it to work properly with Random Component B. One example I can pull
off the top of my head right now:
[https://github.com/taviso/loadlibrary](https://github.com/taviso/loadlibrary)

Another way to describe this would be to say that "taking anything and making
it work with anything else" is Very™ Very® _ʎɹǝΛ_ NP-Hard™©®, if not
categorically impossible. (This is what took me 15+ years to accept.)

In terms of perfect governance, I definitely take a cyberpunk stance.
[https://news.ycombinator.com/item?id=17320295](https://news.ycombinator.com/item?id=17320295)
[https://news.ycombinator.com/item?id=17083221](https://news.ycombinator.com/item?id=17083221)

I also just found this and am adding it in:
[https://news.ycombinator.com/item?id=17322457](https://news.ycombinator.com/item?id=17322457)

Related thoughts/continuation:
[https://news.ycombinator.com/item?id=17299517](https://news.ycombinator.com/item?id=17299517)
(if you're truly interested, you might follow all the continuation links)

TL;DR study people, bureaucracy and sociopolitical crowd dynamics

~~~
Mad_Fury
Thanks for this, ill take a look :)

------
exikyut
I took another look at your post, and figured a 2nd comment would be best.

What would be the value of separating users into isolated groups? What if I
want to belong to multiple groups? The simple taxonomy you've proposed breaks
down.

Okay, so you could add extra subgroups, but here's the thing: this sort of
scheme gets tried all the time, the abstractions inevitably break, and the
implementer(s) give up (hopefully without too much time invested).

The TL;DR of this mindset is that _complexity is sometimes good_. Sometimes
the right tool for the job is the most complex tool, which exposes all the
features and shows you the 9,999 knobs and switches and buttons at once. It's
a lot to take in, but then you can do whatever you want.

Trying to shoehorn complexity into Fisher-Price user interfaces results in
hamstrung yet bloated systems with more complexity than will ever be used.
That's what happened with Electron, incidentally, and why some feel so
uncomfortable and have such cognitive dissonance about it, I think.

~~~
Mad_Fury
I like the part in Photoshop, Where you could say for what you wish to use it
and it would rearrange the menus or add and remove certain options which are
needed for that Use case, this and some other cases influenced my question,

Do you think it is a waste of resources having so many tools which do so many
things the same way + a few unique options,

I like the ShareX tool (screen recording) which asks me to install the extra
feature if i wish to use Screen recording, (goes back to the previous comenter
who mentioned it is basically packages)

I hope i am making myself clear in the questions and comments :D , not a
native eng speaker and also didnt really prepare the question well, it just
came to me in a rush as it was stuck in my head for a long time, sorry about
that,

Sometimes i look at old software and some new one (sublime text) where devs
worked hard for our resources where as today, (luckily i have 16 GB) there
seems to be a lack for performance or the will to design efficient systems
(due to abundance of resources in PCs?)

What did you mean isolating users into groups? I wouldnt want that :D, but i
mentioned it as a layer on top of Core things, example

All users want video and web tech - but then some are more specific - these
specifics would be the packages they get bundled on top of core features, and
the Collective groups of devs work on those

Linux drivers and installing certain software is sometimes a real pain and can
vary from distro to distro , if the linux is to succeed shouldnt we
collectively fight to avoid this barrier

Forgive me folks its my first time doing this i think i jump here and there ;D
perhaps even confusing you sometimes...

I will update the email Thanks

~~~
exikyut
Quote-replying is probably a good idea

(For once regex find and replace in my editor worked!)

> _I like the part in Photoshop, Where you could say for what you wish to use
> it and it would rearrange the menus or add and remove certain options which
> are needed for that Use case, this and some other cases influenced my
> question,_

That's nice UX design, I agree. It certainly increases the QA workload,
though.

> _Do you think it is a waste of resources having so many tools which do so
> many things the same way + a few unique options,_

Most definitely, but we're stuck with no alternatives at this point.

> _I like the ShareX tool (screen recording) which asks me to install the
> extra feature if i wish to use Screen recording, (goes back to the previous
> comenter who mentioned it is basically packages)_

This is a wise approach. The problem is that it adds more testing load than if
everything were installed by default.

> _I hope i am making myself clear in the questions and comments :D_

You're fine.

> _not a native eng speaker_

But easily understandable :)

> _and also didnt really prepare the question well_

You've held together very well.

> _it just came to me in a rush as it was stuck in my head for a long time,
> sorry about that,_

It's fine. Really.

> _Sometimes i look at old software and some new one (sublime text) where devs
> worked hard for our resources where as today, (luckily i have 16 GB) there
> seems to be a lack for performance or the will to design efficient systems
> (due to abundance of resources in PCs?)_

I do this every day running Chrome on a 12-year-old single-core 32-bit laptop
with 2GB RAM :D everything's _constantly_ swapping like crazy. Sometimes a
single tab will take 10 seconds to open. New URLs routinely take half a minute
or more for the webpage to display. It's fun, and a lesson in... something,
I'm not sure what, but I'm definitely learning it inside and out.

I think it's all about the vague threshold beyond which management can't get
away with shipping a product, because it's so horrible that users will simply
reject it outright. Software bloats out to the point where it sits just above
that unusability threshold, then it hovers there. I think this is the rule
that governs everything.

Enterprise systems are at the worst end of this; they're frequently so
horribly slow they're unbearable. This is because they are used by a captive
audience that cannot switch away even if they want to.

Pretty much everything else "settles" after a while so it sits just above this
threshold.

What with the progression of tech, this means that if you install Word 97 in a
Win95/98 VM (or maybe it'll even work natively :D), you'll notice that it only
uses 8MB RAM, and then when you minimize the window it suddenly collapses down
to 4MB! Or, for another example, the cross-platform old KDX filesharing client
uses all of 8MB RAM on Linux. I've never seen another app like it. I think the
reason this is depressing is that when we run out of RAM we get sad, and then
we think of how amazing things _could_ be... but in real life it doesn't work
that way; when we build applications we take advantage of the resources that
are available.

> _What did you mean isolating users into groups? I wouldnt want that :D, but
> i mentioned it as a layer on top of Core things, example_

> _All users want video and web tech - but then some are more specific - these
> specifics would be the packages they get bundled on top of core features,
> and the Collective groups of devs work on those_

OK, I meant this bit in your post:

> _Imagine Having a Single distro which once you install the Core on your
> Computer asks you Which group you belong?_

> _-Musician -Scientist -Education -Artist - Developer etc._

These are very vague titles, and their meanings aren't unambiguously defined.
I could interpret them in many different ways. This is where the leaky
abstractions start.

Using your groupings:

- If I'm a game developer, I might be a musician, artist and developer.

- If I'm a teacher, I might pick education and scientist.

You said "which group [do] you belong [to]?", with group being singular not
plural. This is where I was getting isolation from.

Okay, sure, so you make it so people can be in multiple groups. But then you
lose the optimal efficiency you're chasing; if I'm a game developer I might
want specific art tools installed, and installing _all_ the art tools might be
a waste. So, okay, say you break the categories down, and add "game
developer"... but then the 500 different kinds of game developers will all
have something to say, and this is where things get really messy. :)
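For what it's worth, distros already approximate the "bundles on a shared
core" idea with metapackages. Here's a minimal sketch of how a group choice
could map to one; the package names are real Ubuntu/Debian metapackages, but
the group-to-package mapping itself is invented for illustration:

```shell
#!/bin/sh
# Sketch: map a chosen "group" to a distro metapackage.
# ubuntustudio-audio, science-all and build-essential are real
# Ubuntu/Debian metapackages; the group->package mapping is made up.
group="${1:-musician}"

case "$group" in
  musician)  pkg="ubuntustudio-audio" ;;   # Ubuntu Studio audio bundle
  scientist) pkg="science-all" ;;          # Debian Science blend
  developer) pkg="build-essential" ;;      # compiler toolchain
  *)         echo "unknown group: $group" >&2; exit 1 ;;
esac

echo "$pkg"
# An installer front-end would then run: sudo apt install "$pkg"
```

And of course this sketch has exactly the problem described above: the moment
someone is both a musician and a developer, the single-choice `case` stops
being enough.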

To rephrase my earlier comment, providing the complexity and not isolating it
has worked out to be the best practical solution.

In order for grouping to be successful, it would need to be _better_, and
provide _more flexibility_, than simply throwing everything in everyone's face.
So the grouping mechanism needs to hold up to the kinds of edge cases I've
described.

> _Linux drivers and installing certain software is sometimes a real pain and
> can vary from distro to distro , if the linux is to succeed shouldnt we
> collectively fight to avoid this barrier_

Consider that hundreds, and probably thousands, of people before you have
tried to figure this out. As I said, I gave these same considerations 16 years
of thought :) so surely others have thought long about this as well.

Ultimately the solution there is to simply learn the various systems.

> _Forgive me folks its my first time doing this i think i jump here and there
> ;D perhaps even confusing you sometimes..._

Nope, it's fine, I can follow along.

I have a slightly tangential thought to add at this point.

The reason I personally went down the specific paths I did with trying to
understand the way things were and the rationale behind everything, was as a
coping/reaction mechanism for my ADHD. I had a hard time trying to grapple
with the seemingly infinite complexity I was being presented with (with Linux,
open source, software development, etc), and I tried various techniques to try
and make sense of everything in spite of not being able to grasp the bigger
picture.

Naturally, this noble endeavor was itself also compromised by the ADHD. I
couldn't even work around my ADHD effectively, because of my ADHD. Sigh.
Turtles all the way down.

I've found that alternative therapies can be unbelievably effective for mental
health, so I've made some good strides with comprehension in recent years.

You mention jumping around a bit, that you decided to talk about this in a
rush, etc. I used to be like that :) (let me see if I'm right: you posted this
in a rush before you could overthink it or get bored?) so I'll mention
something I realized that was specifically helpful from an ADHD standpoint.

First of all, when I started poking Linux I wanted to find "the one true
Linux, the most authentic experience". It took me years to understand why I
even wanted this in the first place and why it seemed so necessary.

It had to do with how I learned: I couldn't just throw a bunch of information
in my head and have it arrange itself; the attention span necessary for that
to function wasn't available. I needed to handhold the organizational process
a bit more, and to do that I needed to explicitly find and tag foundational
aspects of what I was learning about so I could keep those in mind as I
stumbled on new information and figured out what to associate it with.

Problem was, I couldn't figure out what to classify as foundational. Where was
"the root"?

Here's an answer that took me 10 years to realize: Linux is a relativist
system, not an absolute system. Or, in more contemporary parlance (as noted by
another comment in this thread), it's bottom-up, not top-down. So everything
is relative. There _are_ no roots.

The best way to properly understand "Linux" is to simply just dive in
_somewhere_ but focus on absorbing as much history as you can; each historical
bit of info will help you make more sense of things.

> _I will update the email Thanks_

Cool!

FWIW, I must say upfront that I'm TERRIBLE at frequent back and forth
communication. I tend to use email as a once-every-6-months ping mechanism. If
I say hi any more frequently than that, miracles are happening :)

So, don't be surprised if I take ages to reply. I burn out easily (still
working on fixing this).

------
davidjnelson
This is a really cool idea. It would be great to have something like this but
for React. The barrier there seems to be that everyone would have to adopt a
common componentized CSS standard.

------
darkhorn
[https://en.wikipedia.org/wiki/Smalltalk](https://en.wikipedia.org/wiki/Smalltalk)

------
carlhjerpe
If you compare a Lego toy with a molded one, you'll see that the Lego toy
easily breaks apart, isn't as polished, and depends on many parts.

These are all things we don't like in the software world.

------
miguelrochefort
The solution is the semantic web.

~~~
Mad_Fury
You went All in On Web :D, what about other Systems and Tools, thanks

------
Viker
Sound engineers have tried this many times....

~~~
Mad_Fury
Any Reading you recommend or sources, or just saying Thanks

