> Maybe the most marvellous, utopian idea for software was Unix program design, in the mid-1970s. This is the idea that programs will all talk together via unstructured streams of text.
It was a great idea in the 1970s. But we had OLE and COM and CORBA and GNOME and KDE, and now PowerShell, where you can load .NET assemblies and pass around structured objects in the same process. We've had Cocoa (and NeXTSTEP before it) exposing Smalltalk-style control of objects in applications since the late 1980s.
This isn't a problem of technology. It's a problem of incentives. It's the same incentives that drive programs to try to look different from each other instead of fitting smoothly into a set of user interface guidelines. It requires a producer of software to find it beneficial to them to fit into the ecosystem around them.
> The promise of a computer is to reveal truth, through a seamless flow of information.
Seamless flow of information requires connecting the underlying semantic domains. Semantic domains rarely exactly match. Humans use enormous amounts of context and negotiation to construct language games in these situations.
> music was around forever but in the 16th century, if you made music, you made it for god, or the king.
This is empirically false. We have records of dance music, of broadside ballads that were sung in the streets, of the music people played at home. And there were lots of student drinking songs about being young and free.
I think the real solution to the author's angst is to go study the field more broadly and get out of whatever ghetto they are living in.
Curious. To me this is the worst thing you could ever do. Talking via streams to say `cat file.txt | grep ERROR | wc -l` is cool. But you could do SOOO much more, if programs would actually output structured data streams. You could connect standalone applications much in the same way as Visual Scripting, where you plug inputs and outputs together and mix them with operators (think of Unreal Engine's Blueprint, just for command line tooling).
It's a true shame that Linux did not develop a well-defined CLI metaformat that describes exactly what parameters there are, what their documentation and completion look like, what outputs a program produces based on the parameters you provide, etc. You could do true magic with all this information. Right now you kinda still can, but it is very brittle, a lot of work, and potentially breaks with each version increment.
I think it stems from the design failure of building your app around a CLI. Instead, you should build your app around an API and generate the CLI from that API. Then all the properties of structured data streams and auto-explorable CLI shells come for free.
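A minimal sketch of how that could look (mine, not anything from this thread; the function name, flags, and JSON output are made-up examples): describe the operation once as a plain Python function, derive the argparse front end from its signature, and emit structured output so the next tool in the pipeline gets more than raw text.

import argparse
import inspect
import json

def count_matches(path: str, pattern: str = "ERROR", invert: bool = False) -> dict:
    """Count lines in path that contain (or, with invert, don't contain) pattern."""
    with open(path) as f:
        hits = [line for line in f if (pattern in line) != invert]
    return {"path": path, "pattern": pattern, "count": len(hits)}

def cli_for(func):
    # Build an argparse CLI from the function signature: help text, types
    # and defaults all come from the API itself, and the result is printed
    # as JSON so it stays machine-readable for the next tool in the pipe.
    parser = argparse.ArgumentParser(description=func.__doc__)
    for name, param in inspect.signature(func).parameters.items():
        if param.annotation is bool:
            parser.add_argument(f"--{name}", action="store_true")
        elif param.default is inspect.Parameter.empty:
            parser.add_argument(name, type=param.annotation)
        else:
            parser.add_argument(f"--{name}", type=param.annotation, default=param.default)
    args = parser.parse_args()
    print(json.dumps(func(**vars(args))))

if __name__ == "__main__":
    cli_for(count_matches)

The same signature could just as easily drive shell completion, a GUI form, or the metaformat mentioned above, which is the appeal of putting the API first.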
A lot of people have had this thought over the decades, but it hasn't really happened -- powershell exists for linux, but who's using it? The genius of the primitive representation (stringly typed tables) is that it has just enough structure to do interesting processing but not enough to cause significant mental overhead in trying to understand, memorize and reference the structure.
JSON is a case in point of the difficulty of adding more structure without wrecking the immediacy of manipulation.
For anything with more than 1 level of nesting, I do stuff like
blah | jq . | grep -C3 ERROR
blah | jq $SOME_EXPRESSION
I'm not saying it's not possible to get out of this local optimum, but it appears to be a lot more subtle than many people seem to think. There may be a simple and elegant solution, but it seems it has so far escaped discovery. Almost five decades later, composing pipelines of weakly structured and typed bytes (that, by convention, often are line-separated tables, possibly with tab- or space-separated columns) is still the only high-level software re-use via composition success story of the whole computing field.
Powershell is unfortunately not the shining example of a shell that best leverages structured/typed input/output succinctly.
But on Windows, sysadmins use powershell heavily. Nearly every IT department that manages windows machines uses Powershell.
I don't buy that. On a GNU/Linux box, there are few things that are easier than installing a new shell; if you prefer a different shell than bash, it's two commands away. Bash does the job people expect it to do, and they would probably be _very_ alienated if they had to start messing around with .net gubbins.
>And yes I would prefer your second "mental overhead" way as it involves less typing
Maybe for the first time you would. Maybe if you were to accomplish this specific thing. Anything else? Have fun diving into the manpage of your shell _and_ the programs you want to use, and you had better hope they share a somewhat common approach to the implemented (object) datatype, or, well, good luck trying to get them to talk to each other.
>Powershell is unfortunately not the shining example of a shell that best leverages structured/typed input/output succinctly
I would just remove the last part, then agree with you: ">Powershell is unfortunately not the shining example of a shell"
> Nearly every IT department that manages windows machines uses Powershell
I mean, what other choice do they have there? cmd? Yeah right, if you want to lose your will to live, go for it.
When you are SSH'ing into one of 10k containers for a few commands, you will only use what is already there. Bash is there and works and that is what one will use 100% of the time. No one is going to permit Powershell to be bundled to satisfy personal preferences.
This is far more common than you think in enterprise corporations. I work at just such a company, which doesn't use k8s (it has yet to upgrade its native data center to cloud infrastructure).
If PowerShell were bundled by default in Linux distro LTS releases, a lot of sysadmins I know would start using it, since they are already familiar with it from Windows and write all their scripts in it.
1. It doesn't, just use zsh and piping into grep becomes a single character, like so:
alias -g G='|grep -P'
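(With that global alias, `dmesg G error` expands to `dmesg | grep -P error`; zsh expands global aliases anywhere on the command line, not just in command position.)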
Text gets around that problem, in a sense.
Just as GUIs should be (but usually are not) gracefully and responsively scaled to user expertise, the developer experience should be (but usually is not) gracefully and responsively scaled to the appropriate level of scaffolding, fit for purpose to the requirements defining the problem space at hand. I need more representations and abstractions, not fewer.
Metaformats drag in their own logistical long tail, which in many use cases is wildly heavyweight for small problems. Demanding metaformats or APIs everywhere, as The Only Option, trades off against the REPL-like accessibility of lesser scaffolding. API-first comes with its own non-trivial balls of string: versioning between caller and callee, argument parsing across versions, impedance mismatches with the kind of generative CLIs you envision and with other API interfaces, etc.
The current unstructured primitives on the CLI, composable into structured primitives presented as microservices or similar functions in a more DevOps-style landscape, make up a pretty flexible toolbox from which structure tends to emerge, in my experience, while mitigating some of the risks of Big Design Up Front efforts. I think of it as REPL-in-the-large.
As I gained experience I've come to appreciate and tolerate the ragged edge uncouthness of real world solutions, and lose a lot of my fanatical puritanism that veered into astronaut architecture.
Like even treating code and data the same, and minimizing the syntax required so you're left with clean, parseable code and data. Maybe in some sort of tree, that is abstract. Where have I heard this idea before . . .
> I think it stems from the design failure to build your app around a CLI. Instead, you should build your app around an API and generate the CLI for that API.
Now this I am fully in favor of, and IMHO, it leads to much better code all around: you can then test via the API, build a GUI via the API, etc, etc, etc.
> and now PowerShell where you can load .NET assemblies and pass around structured objects
He has another post about VS Code where he expresses amazement about its basic refactoring ability. So yeah, fully agree.
> yeah, it renames the require statements, when you rename a file.
??? IntelliJ has probably been doing this since day 1 (in 2001), and others have probably done so for far longer. Granted, this applies to Java, not JS, but still. Speaking of IntelliJ, it is completely absent from the author's pathetic history of IDEs, which only adds to the sense that he ought to get out of the ghetto he's living in.
One of the biggest unsolved mysteries in our field.
I wrote a statement in Rider that used a class that didn't exist. So I hit Alt+Enter with my cursor there, and had it create me a class. Then I hit Alt+Enter on that class and had it move it to a separate file. Then I added the base class it should inherit from, hit Alt+Enter, and had it scaffold out all the methods I need to override. About fifteen or twenty seconds with a modern IDE and didn't require any of my intellectual capacity to actually execute.
I realized that another class in this multi-GB codebase had a typo in its name, and hit Shift+F6 to rename it. Typed in the correct name, and twiddled my thumbs for two or three minutes while it renamed every instance in the codebase.
Found a file that used a declaration style that's against our coding style. Hit Alt+Enter on one example, told Rider to configure the project to always prefer the other style and replace all examples in the file.
None of those are particularly magic, but having so many of them that are completely reliable a context menu away makes an enormous difference. Also with a recent file list popup and really excellent code navigation, I find that I don't keep a file list or tabs open at all. I just jump to symbols and toggle back and forth between the last couple of files.
So much auto-generated boiler-plate code reminds me of nightmarish Java codebases
Sometimes all you need is to temporarily connect things together, the environment is benign, and your requirements are not demanding.
But Mouser.com shows over 2,000,000 entries (over 390,000 datasheets) in "Connectors". High current, high voltage, high frequency, environmental extremes, safety requirements, ...
There are all sorts of situations where crimping wires together won't do the job. Same with text. By the same token, there are all sorts of situations where text will do the job, but people are tempted to over-engineer things.
Edit: Made typo on mobile.
If your first reflex is to be 2nd-hand offended, maybe you should relax a little and try not to see hostility everywhere ¯\_(ツ)_/¯
But after I read your comment, I went back to the story and did a Google image search for a few (but not all) of those pictures. And guess what? They all come from other sources.
It's not like I was looking for hostility. I was reading the comment and nodding along and then the hostility came out of nowhere and smacked me in the face.
 This comes up top of Google for me: https://www.laurencegellert.com/2012/12/software-ghettos-a-f...
I feel that "code ghetto" conveys the myopic POV of the article perfectly.
In my experience, people who get offended by Dunning-Kruger don't even realize they are prime examples of it.
 This comes up top of Google for me: https://www.laurencegellert.com/2012/12/software-ghettos-a-f... and I feel that it conveys the meaning, even if you disagree with the particular ecosystems being denigrated.
We have recordings from the 16th century? Do you mean "recordings" as in sheet music for automatic pianos or organs etc?
I love that there are so few limits in "software world." No pesky laws of physics. No enforced values by one or another interest group.
Like any craft, it takes years to master, and efficiency comes from doing it so often, that it becomes inculcated. Nowadays, I make massive architectural decisions every day, without writing down a thing, and, here's the "hit": these decisions turn out to be correct, even though they often mean a lot of work (for example, this entire week has been spent re-testing -and fixing- an app I'm working on, in response to an internal refactoring job). It's like a wood carver, encountering an unexpected knot in their material, and having to work around it, or maybe even incorporate it into the design; treating it as a nice surprise and advantage.
I enjoy writing software. I think I'm fairly good at it. I'd not want to stop.
But the rant reminds me of this old gem: https://www.internetisshit.org
I wish. Try working at almost any company, and you'll have to betray your users on an unrealistic arbitrary deadline.
The industry doesn't appreciate people who think, or talented people, or good engineers. It appreciates those who can haphazardly slap together a subpar something out of ready-made libraries to meet "business goals". And this is depressing af.
I am a "dependency skeptic." A lot of folks think I'm against them, but nothing could be further from the truth. Most of my software is composed of dependent modules.
Mostly written by me.
You can't ask a lawyer, because they will always say "no," but I have seen some really bad things happen, because people didn't take this stuff seriously.
Lawyers will generally chew up the food chain, and applications that use dependencies can get caught in the blast radius.
That can happen with big shops, or small shops.
Around here, lawyers are a dime a dozen.
OTOH, those who can do this while still maintaining some integrity get a lot of respect, at least in communities like HN. I'm reminded of Spolsky's "smart and gets things done."
And just to push back a bit: reuse is a good thing. Indeed, by looking first to make sure you aren't re-implementing the wheel, much of the spurious software that this article bemoans can be avoided. I know it sounds like an imperfect compromise, but that's because it is. Sometimes you just need to get shit done.
Dependencies are awful, it's where jurassic-scale disasters are born.
Whether we like it or not, dependencies are here to stay. It's like calculators are now an integral part of every student's book bag. I'm old enough to remember when you would get thrown out of school for bringing a calculator.
It's just that when we develop and publish infrastructure, it needs to be held to a much higher standard than the apps that are built on top of it, and I'm not sure we're at the point where we can implicitly trust infrastructure. Being good at vetting and choosing dependencies is still an extremely valuable skill.
Laws of physics are very important to software engineering. Stuff like speed of light and ability to disperse heat have a tremendous influence on how computation can and is done, and how we design software systems.
Then there are laws entropy and complexity, math laws & cryptography, social laws (how do people cooperate), game theory, economic laws and legal laws. And bunch of other stuff that I forgot about now. All greatly limiting / shaping the way we can create software at scale.
Software engineering is one of the most (if not the most) complex forms of engineering out there. A lot of people don't realize it, since they routinely work with tiny parts of the whole picture. Writing a typical REST API backend, or making an RPi blink an LED, is like a civil engineer building a popsicle-stick bridge. Sure it's fun, but that's not "the whole field".
> Laws of physics are very important to software engineering.
practical concerns abound, especially if you're going to be focused, as you have been, on industrialized software production.
but to me the defining nature of computing & the digital is that it is very near to unlimited, that it is about how we think, what abstractions we create, out of vast vast vast potential & endless capabilities.
yes there are some limits. but growing up, I wanted to build boats & houses & submarines & planes. much of the attraction of digital technology was a near-complete freedom from physical constraint. the 486 I had was a machine with infinite potential, something I could put myself into endlessly, ever becoming more, ever imagineering further frontiers to build & advance into, & everyone else could do the same, & we might still never cross paths. computers are an "unmaterial" wonder, something wholly unlike all else.
I implore us to think not just of the limits, laws, but of the endless spaces the mind might be free to travel & explore amid the digital.
The other day I saw a comment on a forum about a videogame saying something along the lines of "an experienced programmer would wonder why there are two different compilers at work here".
The industry has expanded enough that we can have people with a decent amount of time spent programming without ever encountering things that other people with the same amount of time consider normal.
Software isn't perfect, there are perverse incentives, and everything gets messed up -- because it's a human endeavor, just like everything else.
But I enjoy it and it pays well. I try to point the ship in the right direction when I can.
You might determine that you've been working too much, fighting battles unfruitfully, while neglecting the things in life you actually care about, like family. Or you could decide that software really is your way to contribute to the world, and find a way to optimize for that.
Either answer is fine. Either way, if you figure out what you ACTUALLY want instead of what society tells you, you'll be happier and find more purpose.
There are no answers in life. Just strategies. Find out what you actually want, and find the most effective strategy.
In my case, I love designing and writing software. It's my hobby. In particular, architecting software, in a fashion that is "organic." My architectures morph throughout a project.
If you look at my GH repo, you'll see that it's solid green. I don't really take weekends off. In fact, I often get more done, over the weekend, than I do during the week.
What I can't stand, is the "baggage" attached by folks that don't love developing software as much as I do. Team dynamics add overhead, but that is not necessarily a problem. Insecure managers or team leaders, on the other hand...
- Mark Twain, from Pudd’nhead Wilson
Computability theory has a similar flavour, though. There are hard rules about certain things not being possible.
Since computation is physical, there are theoretical bounds on the energy consumption of computation (Landauer's principle).
Physical limits on computation are at least one way to think about the relationship between thermodynamics and software.
To be fair, we are still bound by mathematics - Boolean algebra, graph theory, information theory, etc. P=NP comes to mind.
Unfortunately many software projects seem to be limited more by pesky psychology, and while natural laws are reasonably universal and consistent, the diversity of human insanity/stupidity is incredible.
Developing on a single host (especially in traditional imperative and object-oriented languages) papers over a lot of fundamental physics, though. The nonexistence of absolute simultaneity, and the derivative concept of data gravity, are very much a thing in even the most basic contemporary software systems... The most ubiquitous system to deal with it is, I would say, the "Virtual DOM".
No idea what that has to do with anything else GP said though.
I'd love to visit that world you live in. Software development is nothing but a huge heap of trade-offs based on limitations of all kinds. Money, time, quality, reusability, maintainability, and many more. I'm ready to bet there is no other job out there that is as complex and bound by so many parameters as software development. Maybe hardware development, but I put them into the same category personally.
>No pesky laws of physics.
>No enforced values by one or another interest group.
I'm at a loss for words. Software is what makes the modern world move. You literally cannot avoid interest groups that want to influence you once you have popular software that is used by millions.
Sure you can write some hobby projects and avoid all of the above mentioned, but once you do something with impact that is used extensively to help people (something everyone should strive for!), all of those issues WILL come to haunt you. No exception!
I live in a very, very real world. I suspect that it may even be the same world that you live in.
It was really a rhetorical exercise. Of course there's all kinds of constraints and whatnot, but, in my case, I started as an EE, working at a company that made fairly advanced microwave equipment, and, there, the laws of physics made it quite difficult to do what we wanted.
Software was like walking out of a tunnel, into a sunlit field. I could go in any direction I wanted.
It really is quite amusing how we like to sling aspersions at each other. I'm not here to compete with anyone. In fact, I find the fact that so many people are so much better at stuff than I am, to be quite comforting.
I love learning.
> but once you do something with impact that is used extensively to help people
You have no idea...
Good software is just as intricate and complex as many other fields that superficially seem more complex, like Rocket Surgery. Unfortunately the field has changed a lot over the last 20 years: just learn to use these libraries, go to a bootcamp, and call yourself a fullstack developer.
Few people even know what instructions per second means. Deliver 300MB chat applications or 40GB games to your customers and call it a day.
Good software development is more than an art or craft; it is seriously hard, takes a lot of experience, and is NOT relaxing at all. It is rare, unfortunately, and it's just frustrating to see the decline over the last 20 years.
I suspect that we'd probably get along, IRL. I got a gander at your site, and it seems that we have a hardware interaction background.
I'm deliberately trying to reduce my scope. I worked for many years in a fairly major-league Japanese corporation, as part of a big, distributed team. The company was a hardware company, so a lot of our software was designed to either play with, or be embedded in, hardware.
Nowadays, I like to try keeping it to apps for Apple devices. My apps tend to be a fair bit more ambitious than you'd normally see from a one-man shop, but they are still fairly humble, compared to what our teams worked on.
However, I feel that the quality of what I do is better than before; mostly because of my craftsman approach. If you knew the company I worked for, you'd find that ironic.
Some other forms of engineering?
Just brainstorming, I have no clue. What do you think?
Why should everyone strive for this? It seems unrealistic at best.
"The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy"
(Longer excerpt with an earlier bit about rewriting things always eventually just moving around the set of bugs, inconsistencies, and limitations rather than improving them here: http://akkartik.name/post/deepness )
And Danny Hillis' idea about the Entanglement (excerpt from a 2012 interview with SciAm) -
"But what's happened though, and I don't think most people realize this has happened yet, is that our technology has actually now gotten so complicated that we actually no longer do understand it in that same way. So the way that we understand our technology, the most complicated pieces of technology like the Internet for example, are almost like we understand nature; which is that we understand pieces of them, we understand some basic principles according to which they operate, in which they operate. We don't really understand in detail their emerging behaviors. And so it's perfectly capable for the Internet to do something that nobody in the world can figure out why it did it and how it did it; it happens all the time actually. We don't bother to and but many things we might not be able to."
The author seems to be treating software almost as if it were some "thing" outside of human control and development. By this, I mean he waxes philosophical about software development as though it were some divine practice handed down by the angels themselves - not what it actually is, which is a crude implementation and abstraction of the Universe's actual programming language - subatomic particles.
We have the software we have because we as humans are so limited in our thinking and scope, and because every human has slight variations on their idea of the "ideal", whatever that ideal might be.
If that was not the case, how could we have 50+ programming languages, when the very purpose of a programming language is to express your ideas into a somewhat tangible (insofar as one can claim the electronic written word to be tangible) form that can then be communicated to others.
Maybe now, I'm the one waxing philosophical, but my background is that of an evolutionary biologist; I was not formally trained in software or computer engineering, but it seems to me that the "point" of every programming language is to express ideas, and the point of every piece of software is to create a tool. Humanity and our ancestors have been doing this for millions of years, so why would we be expected to stop now??
I think part of the issue is that we don't all agree on what the point of a computer program is or how best to get to the same results for the points we do agree on.
Similarly the reason we don't understand the Internet is because the Internet is a conceptual handwave used by humans. We use it to communicate which means it's a lot of different things, very messy and a lot of it is organic.
Your being a Neo-Darwinian and your confusion at the possibility of regression or a halt in progress are the same. Once the mollusc has "conceived" of his shell as an "adaptation" to a change in condition, he has "responded" so harshly to environmental dangers that he has closed them off almost entirely, bringing the process of speciation and adaptation to a near halt. In fact, there are countless examples of "tools" "conceived" by organisms that have been so immaculate that development has ground to a halt. Not all is progress... the world is not a machine...
To expand on your point a bit, this brings us around to hill-climbing optimization and being trapped in a local maximum.
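A toy illustration of that trap (mine, not the commenter's): greedy hill climbing on a one-dimensional fitness function with two peaks settles on the peak whose basin it starts in, even when a much higher one exists elsewhere.

import math

def fitness(x):
    # two peaks: a small one near x = 1, a taller one near x = 5
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

def hill_climb(x, step=0.1):
    # keep moving to the best neighbour until nothing nearby is better
    while True:
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            return x  # a maximum, but possibly only a local one
        x = best

print(round(hill_climb(0.0), 2))  # ~1.0: stuck on the small, nearby peak
print(round(hill_climb(3.5), 2))  # ~5.0: started in the taller peak's basin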
Biological evolution is marked by episodes of relatively generalist organisms spreading to new niches, speciating, and sometimes re-invading the environment they came from by outcompeting the original specialized denizens (I'm not necessarily just talking about large scale punctuated equilibrium, but smaller scale species ebb and flow).
So too with software: the cycle of specialization, optimization, ossification, and displacement by a generalist competitor from elsewhere happens over and over ("worse is better" is probably the pithiest expression of this, but "premature optimization is the root of all evil" is pretty nice too).
Evolution itself has evolved to increase generativity, in order to not only speed adaptation to change in general, but to unprecedented change (especially when the changes are themselves driven or exemplified by other organisms).
So too with software, where the change that software must adapt to is often driven by other software.
So software keeps getting invented and changed to optimize for and colonize changing environments (social, economic, hardware, network, and software envs), and languages keep getting invented to improve the processes of optimization & adaptation to change, as well as generativity, for both new and old niches. And of course, the boundary between software and programming language is just as fuzzy as similar boundaries in biology, frameworks and DSLs are just two obvious examples that straddle the division.
Not often appreciated is that all of the above applies just as much to the human social/cultural practices of developing software as it does to the tools those practices co-evolve with (eg. writing/editing, testing, change control, distribution, building, ticketing/issues, deployment, etc.). And we can flip our view around and see how parallel mechanisms have always been operating on human culture from even before we were human.
The same reason we have different specialties in the sciences/arts/etc. Even math itself has different languages to express the same ideas. Having different computer languages allows people to express ideas (solve problems) more efficiently given the domain of the problem. Very basic example: I wouldn't use zsh to write an xmpp server implementation (but it's possible) and I wouldn't use Java to call a handful of unix commands (also possible).
Honestly, as someone trying to get into developing web apps, I believe we are already at this point. Because of writings like Paul Graham's (eg, http://paulgraham.com/icad.html), I figured I'd go with Common Lisp, and as an added bonus there'd be less of a paradox of choice for libraries. Not so. Already I'm looking at a half dozen different approaches for persistent storage. I used to love Perl's concept of TIMTOWTDI, but more and more I find myself drowning in a sea of options that all seem pointless.
We see this in "enterprise" software where the warts are incoherencies of process or refusal of departments to communicate.
And we see this in the "gig economy"; Coase's Theory of the firm posited that companies form because the transaction and coordination costs of doing everything as atomised workers purchasing services would be too high, but what if there was an app for that? Boom! The firm disintegrates, leaving us with the taxi company with no taxis and the hotel company with no hotels.
Software asks us a terrifying, disconcerting question: what exactly do you want?
But it is still an enjoyable read, and contains kernels of truth - we have abandoned the unix philosophy and created bloated, poorly engineered software. I think the comparison to music and hair cutting is very apt - the mark of a skilled engineer is knowing how to limit scope and ambition so that value can be delivered quickly and elegantly.
We all roll our eyes at the next baby sitter app, but at the end of the day we probably use some app to find a baby sitter - whether it be Google Search or Yelp. In a biological ecosystem, all things that can be, will be.
Industry (in the Industrial Revolution sense), underwent similar transformations (and is still going through them), from dirty/dangerous/back-breaking factories of the 1800s, to slightly-less-dirty and slightly-safer and more automated factories of the 1900s, to... whatever the 2000s will bring. I'm honestly not sure where software stands in relation; maybe software is actually closer to the 1800s-equivalent of the Industrial Revolution, than to the gleaming 2020 gigafactories of today. Maybe we are still too early.
There is probably some golden age of industrial production that industrialists look back fondly on, in the same way the author and many others idealize Unix. Likewise, complaints about rampant/mindless over-consumption, are not new for society at large, especially as we as a civilization really finally start to wrestle seriously with the question of, just how much can this planet support, and what we can take away from it or pollute it with, before the ecosystem collapses? Do we really need another factory, another plastic widget that'll never decompose, another smartphone that will end up in a landfill in a two-year upgrade cycle?
But, this progress, messy as it is, is still progress. In the aggregate, people live longer, are healthier and safer, and there are just more people, than there have ever been in the history of this planet. In part, all of these industrial processes made it possible. That doesn't mean there hasn't been waste. Now is the time to reconcile, what our waste is actually costing us, and how much waste we can actually support, as a society and a civilization. "Too much software" has potential for real harm, whether in physical pollution or in mental/emotional harm (i.e. the attention economy, the assault on user privacy, our partisan echo chambers, etc.).
Many of the applications we're creating nowadays are solving gnarly, poorly understood problems.
I think we have it pretty good. We continue to build and the stuff that isn’t good, gets rebuilt. That’s fine. What’s important is that we have the means to build good, lasting software if we choose to prioritise it. We have languages and tools that the ancients did not have.
but well, KSP is a game, and a GUI framework wouldn’t exist in that paradigm, so I’m not sure if I know what you’re getting at.
First, this is not true, as they are also able to communicate through the clipboard.
And second, the point of the file is to have a durable form for the data. A VR world is still going to use files.
EDIT: In fact, these two modalities of working with data stem directly from the hardware: the "live" memory of the clipboard in the always-powered Random Access Memory versus the "dead" memory of files on the 'hard' drive, which keeps storing information when unpowered.
Maybe there are other potential modalities than these two, but I personally am unable to imagine them.
(I guess that there's an 'in-between' with logs stored as files, but that have their most ancient information regularly erased... which ends up happening with any information, regardless of the storage format, given enough time!)
EDIT2: And we both forgot about the concept of databases, which tend to be more complex in abstraction than filesystems, can exist either alongside or on top of a filesystem, and which many apps definitely use. But they obviously have to contend with the underlying bit-juggling hardware too.
There are UI frameworks in node-based environments where you connect little boxes to each other to create buttons, text, clickable areas, etc. One of my jobs has been migrating off that onto more traditional OOP-based languages, because it sucks ten thousand times less when your UI starts to grow.
It's challenging, because we got the restaurants first, and we didn't have millennia of home-cooked tradition built up before the restaurants arrived. And we've all only trained in restaurant software, and our instincts don't always carry over to home-cooked software. Creating a culture of home-cooked software must happen in the margins, without capital, because capital by nature goes to restaurants.
It's challenging, but it seems worth trying.
(This comment has been percolating over the course of many conversations in the Future of Coding forum: https://futureofcoding.org/community)
And while the blinkenlights in the server room start to morse at you through the window in the door in a menacing pattern, the guys and gals in server ops demand answers as they point at the avalanche of warning and error messages that started lighting up the terminals like a Jedi knight fighting a swarm of mosquitoes in the swamps of Dagobah at midnight with his light sabre.
At this moment you can only begin to fathom what happens in operations, with orders being delayed, fulfilment rates dropping by the minute, and pickers and lorry drivers starting to get pissed off at the nerd in IT who must have screwed up again. And they know where your car is parked.
I guess what I really wanted to say is: be careful what you wish for, it might bite you in the end :)
The old hobbyist computers would put you in front of a prompt when they booted up.
I'm working on something like this, actually. Here are some 2-minute videos:
Drag'n'drop UI, Python in the browser and the server, one-click deployment - we're aiming for exactly that niche!
And it wasn't web-based nonsense that's hosted on someone else's server for a monthly cost. They actually owned the results. That's a big difference versus what companies would offer today.
TCL/TK was great for making quick GUIs, but I never saw non-technical people use it for that purpose. Still seemed to be non-programmer, but still technical, professionals.
I've been poking at this problem: https://github.com/akkartik/mu
I honestly think this is the root of the problem, and that there is "too much software" is merely a symptom caused by this.
As someone who didn't eat out often before quarantines, and is only a fair-to-middling cook, I can say that restaurants have crippled us in many respects. The overall cost is more than cooking at home, it's less healthy for you to eat at restaurants, fast food in particular has a bigger environmental impact, and to top it all off, these corporations are warping the political landscape with lobbying, never mind what other concerns there might be (much like the oil lobbies).
I find it funny that you claim that "we didn't have millennia of home-cooked tradition built up before the restaurants arrived" as I'm pretty sure the exact opposite is true. So it would appear that as with all victors, restaurants are re-writing history, too.
Take all of these observations and apply them to software, and it fits, perfectly. What's the commonality? It has to do with certain unchecked forms of certain economic engines . . .
Just to be clear, I was alluding to software here, not real food and real restaurants.
Fair. I guess I'm being dense today. It's a very good point that despite the feeling of the software industry being old, it's still younger by far than any other human endeavor (that I'm aware of). It's a field that is chaotic and still hasn't had time to settle down, really.
There's a searchable archive at http://history.futureofcoding.org. It still has issues, but it's getting better. And it's also built with the present of coding :p
Oh absolutely, I'm bored of it too. I was just struck by the irony. :)
> There's a searchable archive at http://history.futureofcoding.org.
> It still has issues, but it's getting better. And it's also built with the present of coding :p
>_< blank page with JS disabled... ;)
BTW, Mu is really cool!
That's a bit ahistorical, so your analogy is actually better than you think.
Computers started out essentially hand built one at a time and hand programmed by changing the wiring (this is analogous to building and cooking over an open fire). Gradually (but quickly) they progressed through craft and guild stages to the point that computers were being mass produced, at which point software was being physically traded on spools of magnetic tape (analogous to cooking for your family, band, clan, tribe & trading recipes with other cooks, marketplaces for staples and specialized implements), then software & patches being posted to Usenet and emailed around (cookbooks, large-scale food production, the local inn, home kitchens), and only then did software start to become a product that people mostly bought and used rather than making yourself unless you had to.
The closest food analogy to this transition is probably baking bread, which was quite specialized for a while (even if you had a kitchen with a stove and used it, you almost certainly bought your bread, and even if you baked your own bread, you definitely bought your flour) and technology had a strong role in decentralization by making home kitchen ovens reliable and common, although any baking you did was probably cakes rather than bread. This was equivalent to the personal computer and operating systems vs. applications.
You can extend the analogy further and say that modern smartphones and app stores are like an apartment with a kitchenette that only has a microwave oven, so all your food is bought pre-packaged at the supermarket. Sometimes you go out to eat, sometimes you order pizza delivered, sometimes you grab fast food.
In all this the restaurant (an evolution of the inn) hasn't gone away, and in fact some of those restaurants scaled up to the point that the pre-packaged food you buy has restaurant branding (eg. Marie Callender's).
No-code solutions (starting with IFTTT) are a lot like making your own sandwiches from store-bought ingredients, including pre-sliced bread.
Now, you're pointing out that the computer software equivalent transition from cooking all your own food to centralized and industrialized food production and preparation only took an eyeblink in comparison. But you are wrong in stating that software production didn't go through all the same stages of evolution and just sprang fully formed into the equivalent of restaurants and microwaved meals, or that the food production and preparation value chain wasn't subject to the same kind of technology-driven swings between centralization and decentralization as software; they just don't apply to all parts of the chain at the same rate, which can lead to reversals in other parts of the chain.
For example, industrialization of appliances wasn't the only development that pushed decentralization of food preparation; canned vegetables, refrigerated transportation, gas/electric/water infra, increasing sophistication of ready-to-use ingredients, and other similar developments were necessary and responsible too.
So, basically, this sort of irregular ratchet that always moves in the direction of standardization and industrialization sometimes pushes some particular layer toward centralization, and sometimes toward decentralization, sometimes adds more layers on top, and this has been just as true for food as it is for software.
Java is far from the only object-oriented language. Does the author suggest that C# and Python are of only historical significance? It's unsubstantiated to associate the decline (?) of Java with the decline (?) of object oriented programming, and even moreso to associate either of them with the implied decline (?) of composability, which can be achieved in many ways other than object orientation. For example, Go and Rust do not implement OOP in the same way as Java, but many people think their modifications improve composability, far from contributing to its decline. What we have in the quote are three dubious facts (it's not clear how much of a decline any of those have undergone), associated through nonexistent connections (they are not related).
Compare with, say, C++, where you can write entire programs that never declare a single class. It's the same thing with purely functional languages: in any significantly large program you reach the point where you're solving problems using the wrong paradigm because the language is too dogmatic.
Java likes its objects, but in modern java there's enough in the way of lambdas etc that most of the weird "this must be a class" stuff is removed, or at least hidden. And the java community are far less dogmatic about it these days.
It's not a blog piece: it's a cry for help.
You're very right, that's a better way to phrase it.
>Glad I found it on here
I completely agree. I feel like it could be a great catalyst to a lot of good conversation which is why I was ultimately glad it was on here and that I read it, even though at first I found it strange. It made me feel a way that I can't exactly explain or put it in to words; and I consider that to be a good thing.
It’s exceedingly rare that I’ll come across some solution to a problem that is the best of all available options. That’s generally fine in everyday life. You don’t need to be the best carpenter in the world, just the one that offers the best service at a fair price in your area.
I don’t think we’ve really absorbed the fact that with software development, average solutions to problems have no justifiable reason to exist other than market inefficiency.
A lot of ad hoc software I see that has had hundreds of thousands of pounds invested in it is ultimately a buggy, feature-depleted version of SquareSpace. And yet massive sums of money continue to be spent on un-necessary greenfield projects to reinvent one wheel or another.
The UK government recently wasted unspeakable sums of money "developing" a track-and-trace app. The quotes are because it's not clear what needed to be developed, when essentially the track-and-trace APIs from Apple and Google boil down to slapping your local health service's splash screen over the template and not much else.
I do sort of suspect that over time, as search has become synonymous with Google and internet video has become synonymous with YouTube, that we’ll end up with a one-solution-to-supplant-them-all for almost every common use case. We’re not there yet, but we’re not far off either I’d wager.
There are pros-and-cons to this. Microsoft had cornered the market on what people at the time perceived to be the most essential software in the 80’s and 90’s and those products are significantly less consequential today. Maybe we’ll operate in cycles of centralisation and decentralisation like we have with information networks.
Eventually we’ll all just be replaced by AI anyway, so maybe it doesn’t even matter.
In seriousness, we don't have this kind of AI yet: https://youtu.be/7Pq-S557XQU
Therefore, IMHO, we should not stop writing software on that basis alone.
I agree with many points put forth, I really do, but I don't come to the exact same conclusion. I believe we as a society shouldn't stop making software. But I believe there needs to be a better filter. Far, far, far too much shitty, pointless software is out there. The author cites Java as a pinnacle; funny, some people have said similar things about Lisp.
Perhaps some people should stop writing software. Just determining who is allowed to write software or what software is allowed to be written is the hard part, and is ripe for abuse. Perhaps this discussion should be centered more on the why a particular piece of software is being written. But then we get into ideology, and even ideologies that are destroying the physical planet are staunchly defended when they are obviously having measurable harm on our information systems.
 - Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp. Also a quote from many moons back on slashdot: "Lisp was a simple, elegant language that demonstrated that almost any language written after 1961 was unnecessary, except for demonstrations of concepts like Object-Oriented programming that could then be re-implemented into Lisp, and that any code written in older languages could be replaced with something better."
It also tries to load a live reload script from localhost.
> Perhaps some people should stop writing software, on that I fully agree . . .
Or rather stop writing blogs like it's software.
He seems to suggest that having billions of wage slaves is somehow better than having fewer happier people.
Also, if he thinks 1970s Unix was the pinnacle of design, he should at least read about Plan 9 before deciding that we should stick with 1970s Unix. And maybe learn Go.
Funny. I thought that the promise of a computer was to do a bunch of stuff that computers are good at but that humans are not. I thought that the promise of a computer was to help make certain tasks easier and/or faster.
No idea what any of that had to do with truth.
This really struck me, the idea that perhaps software mirrors the human experience. In the US, we've traded tribalism for individualism, open discussion for echo chambers, and the exchange of beliefs for the walled garden of one's personal vision.
Just try to imagine what would happen if the entirety of Facebook's or Google's user data were dumped somewhere for everybody to rifle through... The only saving grace here seems to be that the amounts of data are so huge that copying it all takes an unreasonable amount of resources.
> Information has not become more seamless.
My comment is a reflection on this.
Honestly I feel like we'll have a decent base to our craft in another decade or two.
The biggest problem I do see in software is that people with no software design experience often drive the most important software design decisions. Primarily companies marketing departments driving their development.
It's not necessarily wrong for those types of roles to drive product design but the people in those roles need to learn a lot more about what makes a good product.
We should allow software to die. It's okay. Whenever a generation feels like they can't lose the artifacts of the previous generation, civilization is in decline. Which is okay too, except when technology protects those artifacts too well, which I believe usually hastens the end.
We already have this. Just today I read on another post on here how someone had to stop using their 32-bit audio plugins because their OS stopped supporting 32-bit. That software wasn't "fit enough" to survive in the current environment, so it was selected against. It died. Its entire species (its copies) died out.
The really interesting discussion, I think, is "what is software"? Where does its boundary end? The 32-bit app died because the team that built it couldn't/didn't update it to 64-bit. It didn't die because it stopped working for the environment it was built for. It died because of money and people, not bits and transistors. Why was that? What happened to the team? What's the lesson? Should the plug-in have been made in the first place? Does it matter? If the "death" is due to people and money more than bits, where's the boundary that demarcates "software"? What should we let die (copies of bits, or teams?) and how does that look?
For me, all this leads into economics, wealth distribution, resource sharing, leadership. The "real" software?
First they were kept alive by cobbling together new hardware to keep the old monster running. Then they were moved to virtualized environments that emulated the original hardware with all its weirdnesses and glitches.
These are the zombies we're talking about. The useful living dead. We feed them their pound of flesh every now and again in the form of coders who can still speak the ancient incantations, but otherwise we keep them in the basement toiling away.
We don't touch them because we are too afraid. No one dares to take the responsibility of rewriting all that code that still works, no matter the cost.
Now you could still live in such an old house today and be happy but the moment you remodel and tear down that asbestos wall you will need to sanitize the whole thing from dangerous particles. Same thing with old programming languages, they don't have the same guard rails as today so once you start tearing up old dust there could be invisible demons awaiting, especially if done by people with limited experience in such environment.
If you write it once and fire all the programmers, every update is a huge undertaking with immense risks. But if you keep reiterating and refactoring it slowly over time, the code stays fresh and the knowledge isn't lost.
You might change the language stack for the whole product during the iterations, maybe just use a fresh stack for a part of it. Maybe keep the old tech, but make it more readable and easier to integrate with other systems.
> We should allow software to die. It's okay. Whenever a generation feels like they can't lose the artifacts of the previous generation, civilization is in decline. Which is okay too, except when technology protects those artifacts too well, which I believe usually hastens the end.
I don't think this will be a problem, over the long term.
The average lifespan of companies listed in the S&P 500 was 61 years in 1958. The current average is under 18 years.
As long as that code is helping to keep its hosts alive, it will keep being run. When the remaining hosts die, the code will die too.
60 years from now, someone will be having this discussion about having to keep a virtualized instance of AWS running, complete with all the CPU, GPU, and networking bugs, just to keep some crucial bitcoin clearinghouse that no-one understands anymore running.
Heck, maybe it will be even weirder, like, what if GMail ends up being that sort of zombie infrastructure, and it stops working if there is no more spam? We'll have to set up a dedicated AI to create and send spam just to keep the last demi-sentient instance of Gmail running without having an existential meltdown.
I'm reasonably sure that by that point, COBOL itself won't be a problem anymore, there simply won't be any companies that still depend on it left.
“I leave Sisyphus at the foot of the mountain. One always finds one's burden again. But Sisyphus teaches the higher fidelity that negates the gods and raises rocks. He too concludes that all is well. This universe henceforth without a master seems to him neither sterile nor futile. Each atom of that stone, each mineral flake of that night-filled mountain, in itself, forms a world. The struggle itself toward the heights is enough to fill a man's heart. One must imagine Sisyphus happy.”
I'm finding it difficult to take such a piece seriously, and over time it pisses me off more and more. I use OOP every day and sure, there is such a thing as "too much OOP", I have reached that point, but there is also sometimes too little and just a mess of linked functions. I only use Python btw.
Ugh, what a horrible negative piece to read for someone who really deeply loves computers.
I've often thought software has become a fetish instead of a tool.
I worked at a place where I was scolded for things like indentation formatting issues and the product we were working on made 1M a year maybe.
Then I went on to work for a place that had a steaming pile of PHP code that probably made the company 20 million dollars or more.
It was an eye-opening, philosophy-changing learning experience for me. It wasn't the software that mattered, it was the solution to the problem. The software fetishists were hung up on the pedantry of the software, while these people out here were writing anything that works, solving problems, making money, and not giving a shit whether the software looks pretty or uses the latest language or framework.
Also: being in devops, where you blend systems with development, you offload as much as possible to the SYSTEM instead of trying to emulate the system in software.
I think one of the best trends in software is its evolution as glue between already-written systems, which we're seeing in the cloud.
The writing is really beautiful. If the author is on this thread very nice work.
What I really want is for my computing devices not to change for about ten years at a time. I don't want to have to upgrade the hardware, I want it to be repairable with replacement parts available, I don't want to have to upgrade the OS to a new major version that moves all of my widgets around for about ten years. I don't want to use beta software that slowly mutates until it mostly works or is abruptly discontinued, I want to install a suite of finished software that continues to work the same way with only imperceptible security fixes for ten years, so that I don't constantly have to be adjusting my workflow as individual portions are obliviated.
And then, when that ten years is up and I need to upgrade, I want the changes to be actual upgrades stemming from fixing design mistakes and removing pain points and doing things that not only weren't possible ten years ago, but which are also desirable and not merely novel.
So maybe we need a Studebaker for computers.
I want stability, not perfection.
Did this really happen? I hadn't heard of it before.
Don't lose sight of the fact that you're reading this on (and the author posted this through) a computer connected to a global network. That's the only reason it's possible to read this. Disseminating this so widely would have been incredibly difficult (and expensive) not that long ago.
Is it perfect? No. But I feel like it's hard to see how far things have come when we're living with it day-to-day.
In the field that I work in (audio software) there are dozens of things that we can do in software now that were unimagined even in 1990. Polyphonic note editing? Utterly transparent time stretching? Even just the ability to do huge amounts of serious DSP on generic CPUs has completely altered the entire landscape of music production (and the software that is used to do it).
The same is true of so many other fields.
What hasn't changed much is the sort of software that is concerned with entering, editing and generating reports on data. The business/consumer focused versions of this stuff have changed from being built with tools provided by Oracle in the 1980s to using various web stacks, but the basic model - data input, fetch some data and present in a GUI - remains unchanged. And perhaps that's because the task hasn't really changed much, and what we have is actually fairly good at doing it.
But switch over to other areas where data-driven applications are important - many scientific disciplines for example - and even there you will find huge expansions in what is possible, particularly in terms of visualization (or auralization) as a way to help humans explore data.
And FFS, Google freakin' maps! Yes, something like it existed 10 years ago, but have you actually used it while driving recently? It's not bringing about world peace or solving hunger, but good grief, it is such a marvel of composition, of so many different areas of CS and software engineering coming together into one incredibly user-friendly tool, that I don't even know what to say other than "use the 2nd lane from the right and then turn left at the light".
It's important not to take really good pieces of software and services for granted, yet it's something we all do every day.
This is so true. In fact, the user to whom you are replying is the creator of one of those really good pieces of software: Ardour.
If others who read this comment are users of Ardour, please consider doing a $5 monthly donation to Paul on PayPal.
Thanks for the plug, even if it feels a bit out of place here (and most of our supporters only pay $1/month, which is fine too).
Consider that these things aren't really due to the practice of making software becoming better, but rather simply that hardware has become ludicrously powerful so as to enable this at all. It's the hardware portion of IT that deserves the gold medal here.
It is true that they both benefit from more powerful hardware, but these examples required significant advances in software too.
You are right that the particular examples of audio software capabilities do not in and of themselves bring anything in particular to music.
But the timestretching stuff has totally changed how huge numbers of people now make music, because they can work with audio that is in the wrong key and/or at the wrong tempo, without really thinking about it.
Do I think that this results in an aesthetic leap forward for music itself? I do not. In fact, probably the opposite in many senses. But that is true of so many human technologies, not just software. Some people would even argue that the advent/invention of well-tempered tuning (and the concomitant move away from just intonation) hurt music in the west, and that was just as much the result of "sufficient engineering hours" as anything in the software world.
Also, just to correct you, 20 years ago, I guarantee you that nobody, absolutely nobody, believed that you could ever do polyphonic note editing. When Melodyne first demonstrated it, most people who knew anything about this just had their jaws hit the floor. It was absolutely not an "utterly conceivable consequence", even though in retrospect of course it now all seems quite obvious.
"The real power of a neural net is its ability to compute solutions for distributed representations. In most cases, the solutions for these complex cases are not obvious. The pitch class representation of pitch is a local rather than a distributed one. In this case a possible solution for the chord classification problem is apparent without the use of a learning algorithm. A net containing 36 hidden units, one representing each of the possible major, minor, and diminished triads, could be constructed so as to map chords to chord types. Thus our interest in using a pitch class representation was not to find this obvious solution, but to find a solution which used a minimum number of hidden units.
We hypothesized that three hidden units would be adequate and that the hidden units would form concepts of the intervals found in triads: i.e., major third, minor third, perfect fifth, and diminished fifth.
Each pitch-class net used 12 input units to represent the 12 pitches of the chromatic scale and 3 Output units to represent chord type. The number
of hidden units and the values of the learning parameters are summarized in Table 1 for each of the eight pitch class nets discussed.
Net 1 had an adjacent layer architecture as shown in Fig. 2 and three hidden units. It identified 25 percent of the chords after more than 11,000 learning epochs. When a fully connected architecture was used in conjunction with three hidden units in Net2, 72 percent of the chords were identified after 2,800 learning epochs. "
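To make the quoted architecture concrete, here is a minimal NumPy sketch of a pitch-class net of that shape: 12 pitch-class inputs, a 3-unit sigmoid hidden layer, and 3 chord-type outputs. It mirrors only the simple adjacent-layer case (Net 1), and the chord encodings, learning rate, and epoch count are illustrative assumptions, not values from the paper's Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interval patterns (semitones above the root) for the three chord types.
MAJOR, MINOR, DIM = [0, 4, 7], [0, 3, 7], [0, 3, 6]

def triad(root, intervals):
    """12-dimensional pitch-class vector for a triad built on `root`."""
    v = np.zeros(12)
    for i in intervals:
        v[(root + i) % 12] = 1.0
    return v

# All 36 triads (12 roots x 3 qualities) and one-hot chord-type targets.
X = np.array([triad(r, q) for q in (MAJOR, MINOR, DIM) for r in range(12)])
Y = np.repeat(np.eye(3), 12, axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Adjacent-layer net: 12 inputs -> 3 hidden -> 3 outputs, plain batch backprop.
W1 = rng.normal(0, 0.5, (12, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 3));  b2 = np.zeros(3)
lr = 2.0

for epoch in range(20_000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = (out - Y) / len(X)              # mean squared-error gradient
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

# Whether three hidden units actually suffice is exactly what the quoted
# experiments probe; with this adjacent-layer wiring the accuracy may well
# stay low, as the paper reports for Net 1.
print("training accuracy:", (out.argmax(1) == Y.argmax(1)).mean())
```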
Neither of those papers cover any of the technology or ideas behind what Melodyne introduced with polyphonic note editing, which allowed the editing (in time and/or pitch space) of a single note within the audio of a polyphonic performance.
I'm entirely fine with saying "getting computers to do things humans have done for a long time isn't really progress". I'm not sure it's true.
One of the primary problems with DAWs as conceived initially was that they were closed, proprietary systems comprised of stupid-expensive hardware to even just open the dang application. This helped to facilitate the inescapable bubble in which the Mass Media finds itself today, playing right into their competition-killing hands. So of course, the world was stagnant for 20+ years, since the only people that had access to this software were "audio professionals," who had the creativity, ingenuity, and passion of a wet noodle. And they did predictably lame things with it all.
When only Kanye West and T-Pain had access to polyphonic note editing, it was pretty lame indeed. But access is, in itself, novel. The world has since changed considerably, and we have projects like Ardour in part to thank for this.
The Dunning-Kruger effect can be fiendishly subtle. So many people are limited by their own experience that they don't stop to think (as you have) and imagine how fields outside theirs have taken advantage of information systems.
I agree with the article that in the mainstream web/app ecosystem, there is a lot of unnecessary trash. Through my own experience, I've seen duplication of libraries and APIs that just frustrates. On those fronts, yes, it would be nice to have a little less paradox of choice.
But as you have graciously pointed out, there are domains undreamt of by the author that wouldn't give up the progress of the last 70 years for any amount of gold, and have much need of software still. Thank you for your examples.
Right, so now what is popular music today? Vocalists who can't sing in tune without help from technology, and musicians who never play more than a few bars at a time in studio because it is all stitched together digitally, and live performances that are just lip-synching to a playback. It's artificial from end to end.
Yesterday on YouTube I watched two hour-plus live concerts, both made available by the lead artist (Dhafer Youssef). The first was a more jazz-inflected performance (not surprising given Mark Guiliana and Chris Jennings as the rhythm section); the second was more "world music" (also not surprising with Zakir Hussain on tabla and Husnu Selendecir on clarinet). Both featured Youssef playing oud and singing in his incredible melismatic style. One was recorded in 2014, one in 2016.
The music was utterly incredible. Virtuosic performances of the highest levels, astonishing compositional and improvisational music structures. And amazing sound quality (though sadly one was marred a little by clipping).
You don't have to like this particular music. Just stop focusing on "popular music", which has ALWAYS been dominated by dreck of one sort or another. Remember that there is (and always has been) a huge universe of incredible music out there from cultures around the world, reflecting both tradition and experimentation, skill and inspiration, solo brilliance and musical collaboration, virtuosity and taste.
Lamenting what the kids are doing with Ableton Live or FL Studio when you can watch Dhafer Youssef play live for 2+ hours with Tigran Hamasayan, Mark Guiliana, Zakir Hussain (or whatever floats your boat) is just wasting your time and your life!
UIs have not only gotten prettier, but in general, UX has improved significantly in the past 20 years.
Take something like Google/Apple Maps and compare it to the tools available back in 2000. Sure, you could Mapquest some directions and print them out. But nowadays, you don't even have to think about planning your drive out. You hop in the car and navigate to your location. Need to stop somewhere for gas? Search along the route and find the cheapest gas with the shortest detour (not based on distance, but an accurate time estimation based on traffic/lights/etc) with your favorite restaurant nearby. Traffic up ahead? Get a suggested reroute in real-time. Need to take an exit? The app will tell you exactly which lane to be in to anticipate the next turn. Road debris/collision ahead? The app will let you know. Tioga pass closed? You don't even have to think about it since the app will route you via the correct pass from the start. Traveling via public transit in a new city, walking, biking? The app will give you directions tailored for your specific mode of travel.
All of this, happening in real-time, on a device that fits in your pocket, while driving in your car.
I could bring up the same examples for any other type of service. The fact that so many dismiss these legit quality-of-life improvements just shows how jaded many of you are. Sure, we aren't making breakthroughs at the fundamental levels of software, but that's only because we haven't even gotten close to reaching the limits of our current capabilities.
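As a rough sketch of the "cheapest gas with the shortest detour" idea above: rank candidate stops by the extra travel time they add to the trip, then trade that off against price. The station names, times, and prices below are hypothetical stand-ins for what a real traffic-aware routing engine would supply.

```python
# Hypothetical travel times (minutes) between named points; in a real app
# these would come from a traffic-aware routing engine, not a hardcoded table.
TRAVEL_MIN = {
    ("home", "office"): 32,
    ("home", "station_a"): 10, ("station_a", "office"): 30,
    ("home", "station_b"): 20, ("station_b", "office"): 14,
}
PRICE = {"station_a": 3.89, "station_b": 3.61}   # $/gallon, made-up numbers

def detour(stop, origin="home", dest="office"):
    """Extra minutes added to the trip by routing via `stop`."""
    return (TRAVEL_MIN[(origin, stop)] + TRAVEL_MIN[(stop, dest)]
            - TRAVEL_MIN[(origin, dest)])

# Rank stations by a simple time/price trade-off (here: $0.10/gal ~ 1 minute).
ranked = sorted(PRICE, key=lambda s: detour(s) + 10 * PRICE[s])
for s in ranked:
    print(f"{s}: +{detour(s)} min, ${PRICE[s]:.2f}/gal")
```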
FWIW, there are at least two layers of map that map apps have no clue about that I find essential as a cyclist: how much shade am I going to get while going down a road in a city that routinely has temperatures that are near the top end of what the human body can deal with? And how bad is the road surface in a constantly-sinking swamp city full of potholes and half-assed repairs?
There are other layers, too: now that I don't live in Seattle, for instance, I don't need to worry about whether I want to trade a longer route for clawing my way up a steep grade.
I think in a lot of ways UI ebbs and flows; a lot of modern UI feels pretty terrible and non-intuitive, redesigned for the sake of being redesigned.
And at the same time, writing software is tremendously easier than 20, or even 10 years ago. But maybe that's an ebb-and-flow issue as well. I tried adding React to an existing project last week, and was completely unable to. Everything broke. I'm debating which is the less-bad option - create a new repo, starting with React and gluing everything else around it, or moving just-the-react parts into a repo of their own. I'm tired of trying to figure out how to glue disparate, opinionated things together. I'm missing the Unix philosophy the author is talking about.
I think we're still figuring out UIs. As someone who sucks at them myself, I often wonder where the modern Bruce Tognazzinis are. I'm still reminded of a point he made about UI design: the corners of the desktop in a WIMP interface are special because they are infinitely large targets; you just fling the mouse pointer there and hit them. This point was of particular interest to me as an early Linux desktop (FVWM) user with virtual desktops (3x3) configured to flip when the mouse got near the edge, which broke the infinite-corners convention.
I'd like to find people who are thinking about UIs as deeply as the AskTog articles used to.
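For a rough sense of why those corner targets matter, here is a small sketch using Fitts's law, ID = log2(D/W + 1): because the screen edge stops the pointer, a corner target's effective size along the travel axis is essentially unbounded, which pushes the index of difficulty (and hence pointing time) toward its floor. The pixel values are made-up examples, not measurements.

```python
import math

def fitts_id(distance, width):
    """Index of difficulty in bits (Shannon formulation of Fitts's law)."""
    return math.log2(distance / width + 1)

distance = 800  # pixels of mouse travel to the target
# 10_000 stands in for the "effectively unbounded" depth of a screen corner.
for width in (16, 64, 256, 10_000):
    print(f"target width {width:>6}px -> ID = {fitts_id(distance, width):.2f} bits")
```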
I wouldn't even say this is true. I hate most UIs out there. There seems to be little thought given to how they appear and are interacted with on a wide-variety of devices.
In general, I just don't like a lot of the æsthetics out there either.
The problem is that I only need one table to deliver REAL value to my customer, instead of a huge, giant monolith with complicated business logic!
I don't know. You might not need something "general" or a "framework" to deliver real value to your customer.
What I (and many others) need is a "framework of design", not code telling me how to deliver value to the customer.
What about minstrel ballads or festival music?
But yes, things still do not speak well to each other.
I think the single most important distinction between back then and right now is that back then, programs were written to serve the user.
These days people no longer use programs, they use online services, and they are designed to make money.
That’s a small detail which makes a world of difference.
> Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
— Melvin E. Conway
I think what we're seeing is a kind of global instance of Conway's Law.
So did Dijkstra in 2000, see EWD1304.
What has changed for the better since then?
> Our computer programs are as adhoc and inscrutible as they were in the 70s, and we’re all struggling under them.
I really don't understand the point of this page/post/thing. It's some serious r/im14andthisisdeep content.
So yes, software is working, and yes, cool new software is released all the time (e.g. deep learning).