I hate almost all software (2011) (tinyclouds.org)
226 points by panic on Feb 7, 2017 | 144 comments



The older I get, the more I agree with this rant. I used to be that guy putting cute unicode checkboxes in his test runner, telling myself I was making the world a better place because my software was nifty and clever. Now, I realize that, while that's nice to see, my time was better spent just solving the problem and then moving on to the next problem. Nobody was dying for animated tick marks and I probably ruined someone's experience because their terminal emulator or multiplexer had shitty unicode and escape code support.

It's a delicate balancing act, of course. The tinkerers are often the ones pushing tech forward with their incessant fiddling. But it comes at such an incredibly high cost. I mean, how many human lifetimes have been consumed by the shitty intricacies of dynamic linking alone. And for what? To solve a class of problems we haven't really had since 1992?

What a waste.

There's ample space in the software world for all kinds -- and arguably the industry is fueled by the blood of well-intentioned innocents -- but my increasingly jaded self just wants the shit to work so that I can close my laptop and get outside with my kids. The guy who spent 15 hours a day obsessing over The Right Way and code purity and ecosystems has aged into a man who firmly believes that less is much more, cuteness is a code smell, and solutions should be as self-contained as humanly possible.


I think it might have to do with the fact that the older you get the more knowledge and experience you have; thus you see problems you didn't see before although they did exist all along.

I like KISS. Usually there are three ways to solve a problem:

1) The quick workaround/fix. Almost always costs more in the long run but is fast.

2) The good enough fix. Costs a little bit more time and might not be the absolute best solution, but it's good enough.

3) The "real proper" fix. Costs a lot more time than the above with little value added over #2.

These days I'm almost always opting for #2 by default because it's good enough. It's hard at times to let things go that you know could be solved in a better way. There's seldom a real need to go down road #3 for any purpose other than your own satisfaction, though, and that sucks at times.


Amen. I've found that changing my conception of software from a concrete deliverable to a process has led to a more healthy relationship with my work. It's never done, and since I've made peace with that, I've rediscovered a different kind of delight in working.

These days, I regard change requests as fascinating plot twists, not blemishes on my completed work. And I find myself asking, "Did that fix your problem?" much more and "Am I done?" much less. Since I consciously stopped trying to mind-read and anticipate needs, I've had much more success. Keep it simple and assume nothing. After all, expectation is the root of disappointment.

My clients like it and I enjoy it more, too.


These days I'm almost always opting for #2 by default because it's good enough.

Me too, but I think it only works (and also doesn't introduce problems in the long run i.e. still keeps the software maintainable) because the software to which the fix gets applied is already properly designed to begin with. So in the end #2 only seems to work because the one applying it is good enough, and the software it's applied to is also good enough.

At least, that is my impression from a bunch of experiences: applying #2 to what is already a trainwreck will usually cause problems in the long run anyway, standard avalanche effect. For proper, elegant pieces of software though, the difference between #2 and #3 fixes basically seems to fade away: everything is so loosely coupled and has such well-defined responsibilities that most #2 fixes simply cannot become any better no matter how much time is spent, because they are so focused there is only one way to fix it. Or there might be an alternative way, but the outcome is exactly the same so it's also not really 'better'.


It's not waste:

- you learnt

- you had fun

- you felt proud

- it validated you

This is a path you need to follow toward mastering a field.

You are what you are today because you did it, and if you are any good right now, the world is a better place now.


Oh absolutely. The journey is the destination and all that. And I'm undoubtedly a better developer and coworker because of my experiences.

I'm happy there are people out there eager to wade through the shit just as I did when I was younger. Their enthusiasm is great. I'm just less willing to do that shit-wading these days. It takes all kinds.


Our society has a "genius syndrome" problem. Its culture promotes the idea of a master solving all issues. He is usually 20 something and does everything right.

But the fact is:

- most great people are not born great. They become great. If you want greatness, you need time to nurture it.

- most great people fucked up hard a lot on their way. Greatness implies training. Training on important things with consequences.

- most great people are not alone. Something great is rarely achieved by one person. Even if something appears to be done by one person, you usually have a lot of entities supporting the work in some direct or indirect ways. Without it, the so-called genius would have accomplished nothing.

Bottom line, we should remember:

- good people, like good things, are rare and have a cost. Plus they take time.

- The lone genius is the exception, not the rule.

So you messed up on the way? Good. This is life investing in your potential. Can't wait to work with someone like you.


Shucks. For what it's worth, your comments have made my day. I'm sure I'd have a blast working with you, too.

You wouldn't, perchance, have a master's degree in stochastic modeling, would you?


No I'm just a good old regular Python web dev and trainer. Also I worked in the porn industry in France and in NGOs in Africa for half of my life.

Very, very far from 'stochastic modeling', which I actually had to google before answering your comment.


People with resumes like yours were always my favorite co-workers. Diversity of experience leads to more life lessons per unit time.

Cheers, friend.


> I worked in the porn industry in France

As a web dev?..


Among things :)


All costs are opportunity costs.

The ultimate question is: how else might you have spent your time?

Understanding technology and the returns to technology, broken down by specific technological mechanism (an area on which the literature is oddly silent), as well as the long-term costs and consequences and the efficiency-driven economic effects (the Jevons paradox and related elements), may give pause.

Yes, there's something to be said for learning. But applying learning to domains in which it's not ultimately most productive is itself an opportunity cost.

There is almost always a benefit in finding and reaping low-hanging fruit, in raising up the least-served, in recognising who may not be serviceable at all (about 50% of all people have zero or only rudimentary computer proficiency), or in providing advanced, expert-user interfaces which may serve a very small user base (a few percent, perhaps a few fractions of a percent) but whose added productivity is valuable to others.


The trouble is that the tinkerers learn to cope with the idiosyncrasies of the legacy systems and build their shiny new stuff on top of them. In the end, we get software that solves a relatively simple problem but depends on layers upon layers of other systems and abstractions.

IMO, we need to fundamentally rethink the design of our computing infrastructure so that it suits our current way of using computers. This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.


I'm sympathetic to your point of view, but Big Design Up Front never works, in my experience. It's impossible to anticipate future uses for tech that we currently consider vestigial, unnecessary, or inefficient.

Mainframes and terminals were the future until the PC displaced them, until the web came along, until mobile and cloud computing smashed it all and brought us full circle (sorta). Designing to optimize the full stack for one of these iterations would have made the transition to the next phase much harder. Back in the early 2000s, when Moore's Law was still gospel, fewer people than you'd think foresaw the IoT, FPGA, Arduino, processors-in-everything revolution currently underway.

In the end, I take a sort of Buddhist view toward the fuckedupedness of software: It sucks, but it's all we've got and must be accepted on its own terms. Find a strategy that works for you and that minimizes global pain. Do no harm. Attempting to force your way into Nirvana and out of Samsara only wedges you deeper inside it.


And then we end up with an industry that continuously rediscovers and rebrands shittier versions of solutions that were already created in the 70s.

The problem with BDUF is not that you can't predict things well in advance. Indeed, you can, as it has been done countless times in the past in our industry. The problem is, doing the Right Thing doesn't make you the first to market, doesn't make your solution the cheapest and the most virally spreading. Hence we end up building towers of shit instead.


> The problem is, doing the Right Thing doesn't make you the first to market, doesn't make your solution the cheapest and the most virally spreading.

There's definitely that, yes. But I'd still contend that we're also crappy at predicting trends and future uses of technology. Yes, some people have predicted things well in advance. Some have done so accurately. Some have done so consistently. But none have been both accurate and consistent. It's maddeningly difficult, even at short time scales, and second-order effects quickly take over. The same applies to even tiny projects.

BDUF just has too much in common with communism: They're both forms of well-intentioned central planning, and they're both symptoms of the hubris that makes each of us think we're more of an expert than we really are. And they both break down when unforeseen forces slam into their base assumptions.

I hear you. I really do. I've BDUF'ed my fair share of systems, and witnessed countless other people do the same. It just never works out. There's a damn good reason that "Worse is Better" keeps winning: It's got serious evolutionary advantages over BDUF.


"Big Design Up Front never works"

Seems to work for NASA and others doing way more complex projects than mine.


And yet people praise SpaceX for doing much better and cheaper with their iterative refinement.

BDUF can work if you have a single organisation with a single stable set of requirements that's reasonably compact. The Kennedy "man on the moon" speech was such an example.

Where it falls down is trying to meet the needs of the 7 billion distinct human individuals, which are inevitably vague and shifting and change in response to the publication of software.


SpaceX also doesn't have the track record NASA does, so the two can't really be compared.

In 50 years when they've been working alongside each other, maybe, but not right now.


NASA has also used iterative refinement in the past.


They have much larger budgets and tighter tolerances than all but a few domains. Nevertheless, their approach is more modular than you think. They do a lot of designing to interfaces, especially where the necessary tech doesn't currently exist.

EDIT: Forgot to mention how little engineering (relatively speaking) NASA does these days. They farm a lot of projects and sub-projects out to contractors, devoting most of their engineering expertise to the requirements phase. Now imagine if your clients spent that kind of time communicating upfront! Software would be a lot better across the board.


In general, it works if you decide what you want before starting the development. I'm not talking about knowing what you want: I'm talking about deciding what you want. Much like buying a car: there's no way you can think of all your possible detailed preferences before choosing a new car, but when you sign the check you are voluntarily giving up any further discussion.


But the cost is enormous and flexibility nonexistent.


IMO, we need to fundamentally rethink the design of our computing infrastructure so that it suits our current way of using computers. This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.

Do you know if there's anyone seriously working on this? I feel like there are a lot of people on this website who want something like this and would love to help out.


Yes, there are people working on this.

VPRI[1], the group around Alan Kay had an NSF-funded project to reproduce "Personal Computing" in 20KLOC. The background was that way back at PARC, they had "Proto Personal Computing" in around 20KLOC, with text-processing, e-mail, laser-printers, programming. MS Office by itself is 400MLOC. Their approach was lots of DSLs and powerful ways of creating those DSLs.

This group then moved to SAP and now has found a home at Y-Combinator Research[2].

One of the big questions, of course, is what the actual problem is. I myself have taken my cue from "Architectural Mismatch"[3][4][5]. The idea there is that we are still having a hard time with reuse. That doesn't mean we haven't made great strides with reuse, as in we actually have some semblance of reuse, but the way we reuse is suboptimal, leading IMHO to excessive code growth both with increasing number of components and over time.

A large part of this is glue code, which I like to refer to as the "Dark Matter" of software engineering. It's huge, but largely invisible.

So with glue being the problem, why is it a problem? My contention is that the key problem is that we tend to have fixed/limited kinds of glue available (biggest example: almost everything we compose is composed via call/return, be it procedures, methods or functions). So my proposed solution is to make more kinds of glue available, and make the glue adaptable. In short: allow polymorphic connectors.[6][7][8]

So far, the results are very good, meaning code gets a whole lot simpler, but it's a lot of work, and very difficult to boot because you (well, first: I) have to unlearn almost everything learned so far, because the existing mechanisms are so incredibly entrenched.

[1] http://www.vpri.org

[2] https://blog.ycombinator.com/harc/

[3] http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...

[4] http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074...

[5] https://www.semanticscholar.org/paper/Programs-Data-Algorith...

[6] http://objective.st

[7] https://www.semanticscholar.org/paper/Polymorphic-identifier...

[8] http://www.hpi.uni-potsdam.de/hirschfeld/publications/media/...


The problem is that tech is always built with hidden assumptions. ALWAYS. Anyone who tells you otherwise is a liar or naive.

Not everyone can work with the assumptions that this tech demands, and not all of the assumptions are apparent (behavioral assumptions especially), so we end up either solving the same problem with a different set of assumptions or using glue code to turn things into a hacky mess that works.


Right, it's definitely the hidden assumptions that are currently killing us.

That's why the process is so difficult, because questioning everything is not just hard, it's also very time-consuming and often doesn't lead to anything. Or, worse, doesn't seem to lead to anything, because you stopped just a little short.


> This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.

Perhaps better to start from the top, and work down to the CPU architecture?


> IMO, we need to fundamentally rethink the design of our computing infrastructure ...

Would you care to provide an example of the changes you'd like to see happen?


Do you find it beneficial to pursue long-term productivity: writing much less code, and more reliable and more performant code that requires much less maintenance, etc.? I feel like you don't. Obviously motivation to do that cannot come from a career perspective, as nothing like that is going to make work easier for you, but maybe there is something else? Just trying to understand.


I hope I didn't give that impression. I find that simpler code is inherently more reliable, more performant, and easier to maintain. I don't try to look too far down the road, because I've been on enough projects to know that they never turn out how anybody thought they would. So I just focus on writing simple code. And simple code is most easily written by sitting down with the client and really understanding their need, paring it down to the absolute minimum viable solution.

As for "career" considerations... well, I haven't had a very traditional career, so I don't put much stock in that. I just spend much less time writing "features" these days, and I've found it's been beneficial to my productivity, client satisfaction, and personal happiness.

Apologies if I came across as disgruntled.


On the one hand I basically agree with what you said; on the other: 'I mean, how many human lifetimes have been consumed by the shitty intricacies of dynamic linking alone.'

I guess none, by any normal measure of human lifetimes and consumption thereof.


If you add it all up though? I'd say I've probably spent as many as several person-days dealing with dynamic linking nonsense over my career, let's round that up to a week. Fifty weeks a year, 80 years in a life, if there's just 4000 people like me in the world that's a human lifetime right there.

(No pedants please, if you don't like "fifty weeks a year" then change "80 years" to "76.9 years")
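
For anyone who wants to sanity-check that back-of-envelope figure, here is a throwaway sketch (the numbers are just the rough guesses from above, not data):

  #include <iostream>

  int main() {
      // Rough guesses from the comment above, not measurements.
      const double weeks_lost_per_person = 1.0;    // ~1 work-week per career lost to linking woes
      const double people                = 4000.0; // developers with similar experience
      const double weeks_per_year        = 50.0;
      const double years_per_lifetime    = 80.0;

      const double lifetimes = (weeks_lost_per_person * people)
                             / (weeks_per_year * years_per_lifetime);
      std::cout << lifetimes << " human lifetime(s)\n";  // prints 1
  }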


> my time was better spent just solving the problem and then moving on to the next problem.

What do you mean by "solving the problem"? A temporary solution for a temporary problem, a definitive solution for a problem that could be forgotten, or something that had to be maintained? In the last case, moving on and leaving others to deal with our mess is not professional at all.


That's not what I meant at all. Instead of building lots of bells and whistles and "features" these days, I tend to build the minimum viable product. The last 10% takes 90% of the effort, and I've found it's usually unnecessary/unused anyway. This frees me up to solve 3-4 problems where I once solved 1 (and 3-4 that nobody actually had).

This is, of course, a gross generalization.

Regarding your last sentence: We all "move on and leave others to deal with our mess" to some extent. It's unhealthy for a developer to set out to fix all the problems in a stack, and it's unrealistic to expect them to leave a tidy solution when they're done.

Iterative, collaborative software development is like democracy: Messy and inefficient and frustrating, and also the only viable way to get things done.


I share this sentiment. As a user, I have found some relief by switching to a very simple setup with just a WM (XMonad), a browser (Firefox+vimperator), Emacs and a terminal (urxvt, which I only use occasionally).

Previously, even though I used Emacs as an editor, the idea of moving functions that correspond to applications (like mail reading) into an editor looked clunky and even dirty to me.

But I grew disillusioned with the Unix ethos. While it works well for composing very simple programs, it fails to deliver on most other things. For example, Mutt is a great mail user agent. But it is very difficult to customize beyond its author's original intentions using its own little configuration DSL. And scripting it from your shell does not lead anywhere. This happens with many other nice Unix utilities.

In contrast Emacs, understood as a text-mode Lisp machine, is at odds with this approach. The code you write is as first class as the code you run from whatever module you import. Everything is alive, integrated, and can be changed.

As a programmer, I am moving towards more and more abstract application domains for the same reason. Mathematica, PyMC or Scheme allow me to ignore all the horrid details the OP describes and work at a higher level.


I love the ethos of Scheme.

The language has ways that the program is expected to do things, such as named let and syntax-rules, but is flexible enough to get out of your way if you have a different idea, such as defmacro.

I wish that the rest of my userland would be as flexible.


You can always use Emacs and treat all low level things as plumbing. Eventually Emacs will hopefully switch to Guile Scheme. In any case Emacs Lisp is a pretty good language and would not feel clumsy to a Scheme hacker, I think.

You can have a pretty modern userland with Org, Gnus or mu4e, pdf-tools, etc.

I have recently switched to GuixSD after 2 years toying with NixOS. All dirty Systemd plumbing fades away and gets replaced with a minimal service manager written in Scheme, GNU Shepherd. Plus all packages are defined in Scheme. It's very refreshing.

I hope one day we get to see a cleaner kernel, ABIs and compilers. Things have gotten a bit out of hand.


I'd rather not have Emacs as my environment. I really wish that something like Gnome were as customisable and extensible as Emacs, and in fact that everything in /bin shared a cohesive, modular and programmable interface, preferably Scheme, but I'll settle for something sensible.

Emacs, and to a lesser extent, sch, are a nice first step to making a nice environment. But they still feel ten years old, or more. I use TUIs, and appreciate them. But VLC and my window manager should be beholden to the same level of control and interactivity.

And yes, I am never satisfied tweaking my system to my preferences, because I always hit roadblocks before I get there.

(I am totally fine with the kernel, and compilers and the like using "dirty" code -> that's one of the easiest ways to reach a damn fast system.)


I don't think GNU Emacs has a plan to switch to Scheme.

https://www.emacswiki.org/emacs/GuileEmacs

> Guile-based Emacs is a branch of GNU Emacs that replaces Emacs’s own EmacsLisp engine with the Elisp compiler of Guile.

One Emacs Lisp engine replaced by another.

> Lastly, it will become possible to write Emacs extensions in Scheme instead of Elisp

That means Emacs Lisp remains the implementation language and the extension language. Guile provides the base machine and Guile Scheme becomes another extension language.

Sometime in the future...

Personally I think that project is misguided anyway, since it would be much smoother to use some Common Lisp as an implementation language (Common Lisp and Emacs Lisp both are based on Maclisp). But Common Lisp is not popular with RMS.


> Scheme

> the rest of my userland

One of the reasons I eventually got fed up with Scheme is that interfacing it with the rest of my userland was really quite hard. IO, subprocesses, networking just weren't as smooth as Unix wanted them to be. Everything had to be wrapped in lists before the "big ball of snow" could do magic with it.

Later I discovered Tcl. upvar and uplevel are not syntax-rules or defmacro, but they give you very similar power in practice - I won't say more or less arcane, just different. IO, events and subprocess management are all right there. Best of all, EIAS (everything is a string) makes communicating with the outside world as natural as it should be.

It helps that Tcl got tailcall and lambda shortly after I arrived :).


> upvar and uplevel are not syntax-rules or defmacro, but they give you very similar power in practice - I won't say more or less arcane, just different. IO, events and subprocess management are all right there.

Makes me wonder if one could port those Tcl features to Scheme / CL via the macro facility, thus having the cake and eating it too?


Unfortunately, no. At least not with anything resembling R6RS scheme. upvar/uplevel give you dynamic access to the calling context, which is at odds with scheme's insistence that compilation must be possible, including the erasure of all names. Tcl is at the other extreme: everything is late-bound, (very nearly) everything is introspectable.

It is possible to go the other way though ... [1] doesn't go as far as macros, but (provided you don't believe in compile-time/run-time separation) upvar/uplevel could take it there!

No call/cc is possible in Tcl, but coroutines can express more limited forms of continuations. And when Oleg calls out call/cc as a poor primitive, I'm comfortable giving it up :).

[1]: https://wiki.tcl.tk/22049


> Everything is alive, integrated, and can be changed.

Every few years it seems, I become confused. I write some prototypical javascript, which goes well, and I'm puzzled: "Wait, why hasn't javascript become at least smalltalk-like in being alive, integrated, and deeply changable?" Explanations based on developer ignorance pale with the passing years, and with the active tool-building community.

Then I have to mock some browser api, and I remember. Browser javascript - language of a hundred primitive types. A few language primitives, Object, and a hundred browser types doing a half-assed job of pretending to be objects. Instead of "get it right, and then write a fast path", browser dev seems to be "write the fast path first, and then pretend we're done". :/

(Yes, Smalltalk has issues scaling complexity - I didn't say scales well.)

Ah well. At least it makes it easy to watch for a language that isn't crippling. Energetic rapid phase transitions are hard to miss.


> In contrast Emacs, understood as a text-mode Lisp machine, is at odds with this approach. The code you write is as first class as the code you run from whatever module you import. Everything is alive, integrated, and can be changed.

I think it's really interesting that Lisp developers like everything to be powerful, but only one of these exists:

https://en.wikipedia.org/wiki/Rule_of_least_power

https://en.wikipedia.org/wiki/Rule_of_most_power

Not that I like writing in least-power programming systems (let's say Python here), but it's easier to trust anyone else's programs in them.


Most LISPs don't give you the most power. They are usually GC'd, and usually don't have access to directly modify memory, like you can with anything in the realm of C.

The foot-gun they give you is expressiveness, which allows you to make code unintelligible, rather than the power to do anything that you feel like. (I'm not aware of any surviving LISP that would allow you to modify the structure of how cons cells are generated, for example).

There's certainly power in expressiveness, but it isn't the godlike power of C, where you can potentially damage hardware.

That, I think, is why we don't have a "Rule of most power".


Most actual Lisp systems can access random memory and define C-like memory structures. Otherwise they would not be able to integrate with other software.

See for example:

http://www.sbcl.org/manual/index.html#Foreign-Function-Inter...


I dislike it when things are called rules but are neither natural constraints nor enforced by authority; these are, at best, suggested behavior.


Why do you say Mutt follows the Unix philosophy?


In fact, let me go one step further: If mutt does not nicely interoperate with other programs using streams of text then it does not follow the Unix philosophy.


> just a WM (XMonad), a browser (Firefox+vimperator), Emacs and a terminal (urxvt, which I only use occasionally)

Exactly my setup... now if only Firefox would not peg a CPU core or two in the background and slow to a grind every so often... (despite NoScript, uBlock Origin, etc.)


Firefox is currently working on Quantum, which should do what you're asking for.


I've heard variants of that line for ... about 20 years now.


The nightly build is realllllly fast. It's starting to show the results of their years of work to speed things up. But it's incompatible with some addons. E.g., LastPass will freeze the browser for a few seconds.


Might as well switch to the Firefox-internal password store.


It doesn't serve my use case.


This is pretty much my setup but I still use the terminal a lot. What are you using instead? eshell? ansi-term? Or do you find you don't need terminals much generally?


I don't think this is a software problem. It's a general legacy systems problem. Think of self-driving cars. It seems plausible that one day all cars on the street will be self-driving, at which point the existing system -- which was designed for humans -- is a legacy system that is not efficient for the current users. If you were to design an autonomous transportation system from the ground-up I doubt, for example, you'll put in stop signs that cars have to read via machine vision. You'd probably have that information somewhere on the internet (or better yet, design something from the ground up that doesn't require stopping).

You can see how this translates to computing. The reason why I still have to deal with a `document` object while building apps in JavaScript is because we built an application distribution platform on top of a document viewer. There is no centralized planning here -- we're just riding a wave of distributed innovation and making the best out of it. (Nor should you expect centralized planning in technology to emerge unless a superintelligent AI system takes over).


Javascript is the app distribution platform that won, because the others were worse.

ActiveX? Too much lockin, no sandbox. Java applets? Too heavyweight. Flash? Nice tooling for the producer, terrible security record. ClickOnce/Silverlight? Never really got out of the gate and would have been MS/Win only.

The next generation of app distribution is "app stores" with arbitrary refusal policies which take a 30% cut of sales. A success for Apple.

Javascript mostly delivers "write once, run anywhere", albeit through a vast shifting layer of shims. Its sandbox is pretty reliable. It's not owned by anyone and they can't stop you shipping JS apps. Thus it survives.


I agree with most of what you said but you're conflating languages and platforms. JavaScript as a language says and does nothing about sandboxing. It's the browser that does that. It doesn't even specify anything about concurrency (up until promises maybe) or the event loop. The event loop is a feature of the host environment. When Netscape put JS on the server they used it in CGI mode.

So I wouldn't completely remove the browser from the picture. Had Lua or some other simple general-purpose programming language been the one that ran in the browser, I'd say it would've been the JavaScript of today.

(I think one other component of this is "worse is better". JavaScript was easy to copy and implement, although not elegant or the best designed language)


I wish it still were just a document viewer. Turning it into an application distribution platform has mostly ruined it.


> has mostly ruined it

For whom? The author of the post says that the only thing that matters is the end user, and I would bet my life that almost all users of browsers think it is great to be able to do "almost anything" on it. The rest are indifferent, except for a minority of people in your category.


"Almost anything" has come to mean "display so many ads I need a brand new computer to have enough CPU left to see the actual content", "spy on my every behavior", and "install five kinds of malware on my system" for many users.


Maybe the time will come for another document viewer. Resurrect gopher? Reuse epub? Steal from Xanadu?

It should at least fix some mistakes of the web. The core challenge is to define what was a mistake and what was not.

Built-in micro-payments, so we don't have to rely on Ads? This did not work out anywhere so far. Maybe with Bitcoin it is different today? Alternatively, should all information be free and everybody just pays with bandwidth?

Should links be uni-directional? Or bi-directional? And can we fix link-rot then?


> top of a document viewer

Next thing you are going to tell me is that building my UI in Word using VB is a bad idea.


Boss? Is that you!?


You clearly work for a hedge fund


... when your IDE is Excel.


Only do that if you have to track a killer's IP address.


> we built an application distribution platform on top of a document viewer.

This 1000%.


Still seems to be the most compatible cross platform way of distributing applications so far.


That tells me there's more work to do, not that we should stop and praise the local maximum.


You've got it: the technical revolution is just a series of patchy evolutions applied on top of one another. Much like every other human system.


Much of what he's complaining about is a failure to obey a rule that appeared in the original Macintosh User Interface Guidelines.

"You should never have to tell the computer something it already knows."

For example, users should never have to type in a pathname. The user should, at worst, be offered a list of workable alternatives from which they can select. Expecting the user to manually type in a value for "$LD_LIBRARY_PATH" fails this test.

The original Macintosh software had applications which were one file, with a resource fork containing various resources. This kept all the parts together. UNIX/Linux and Windows never got this organized. Hence "installers", and all the headaches and security problems associated with them. The Windows platform now has "UWP apps" with a standard structure, but they're mostly tied to Microsoft's "app store" to give Microsoft control over them.


Doesn't packaging each piece of software as a single file lead to a security disaster? When openssl has a vulnerability, suddenly the end user needs to upgrade multiple applications, and that is assuming that all of the developers have actually built the upgraded software.


I go back and forth on this. Yours is the most compelling argument for shared libraries (along with the ability for OS writers to break ABI), but there's so much bad behavior out there in the wild that I don't know if the reality fully reflects the ideal.

For example, end users never actually upgrade applications in response to a vulnerability. If they're on a commercial OS, those fixes are pushed to them by the supporting entity (Microsoft or Apple) in the form of OS updates. If they're on an open source OS, their package manager handles things. The number of conscientious, security-aware users is minuscule. In practice, this would just create a little more packaging work for upstream maintainers, negligible to their current responsibilities.

So, from the user's perspective, the only functional difference they'd notice between shared and static apps would be an increase in the insecurity[1] of third-party, closed-source apps. Which are already the largest vector for viruses, adware, etc. for most end users.

In other words, I'm not sure it would be different from the current environment in practice. Exploits would continue to, more often than not, target single apps, not shared libraries. More attack surface, easier pickings.

[1]: There are a few strategies to mitigate this (partially-relocatable compilation, where only a few libraries are dynamically loaded; OS-level services in place of in-memory libraries).


A similar issue arose in Windows some years back.

Yes, they have DLLs that do much the same as .so in *nix. But each program on Windows will first check its own folder and subfolders for a matching DLL before asking the OS.

And each program ships with a set of redistributable DLLs from Visual C++.

One of those DLLs was found to have a vulnerability. And while MS could patch their office suite and related products quite readily, they could at best offer a tool for scanning for vulnerable DLLs in program folders and beg users to badger the software providers for updates.

The sad thing right now is that various big names within the Linux community are pushing for a more Windows-like distribution model, when the problem they are trying to fix is related to overly rigid package dependency trees (with the RPM package format in particular, as the primary user of the DEB format has worked around the issue with a careful package naming policy).


That's also the one reason why I'm very reluctant to throw shared libraries out the window.


Ok, security-critical stuff as shared libraries (libjpeg, libz, libssl, etc).

Why do we package the other stuff as shared libraries too? GMP, GtK, Qt, OpenCV, SDL, BLAS, ...


The original motive for adding shared libraries to Unix was so that X11 would fit in memory on the machines in use at the time (according to a comment I read many years ago).

If the use of shared libraries saves on memory, it probably also saves on L3 and L2 cache, so on the aggressively cached CPU architectures of today, replacing a shared library with a statically-linked version might slow things down by decreasing cache hit rates.

In particular, if every KDE application is statically linked to Qt, then when KDE application A's time slice ends, whatever parts of A's copy of Qt are in cache will be invalidated, with the result that if B wants one of those parts it will have to fetch it from its own copy of Qt in memory, whereas if A and B shared a copy of Qt, the fetch from memory could be avoided.


Has anybody actually measured it in the last decades?


I don't know.

But we know that Intel and AMD design their CPUs to go as fast as possible on the operating systems people actually use (Windows, Macos, Linux) all of which use dynamic linking. Plan 9 is the only OS I know of that does not support dynamic linking (and Plan 9 simply does not have large libraries -- they have what they call services instead, which are similar to souped-up Unix daemons).

Linux and Windows in turn are designed to run as fast as possible on Intel and AMD hardware.

After a few iterations of this sort of mutual evolution, it starts to become very unlikely that a change as big as switching a bunch of big libraries from being dynamically linked to being statically linked would actually improve performance because lots of optimizations have been made to squeeze a few percentage points of performance out of the existing system (which includes the practice of shipping most large libraries as shared libraries), and typically those optimizations stop working if there are large changes in the system.


You are implying that, for any given shared library, you can classify it as "clearly security-critical" or "clearly not security-critical".


Patching security flaws only works against adversaries who are out of the loop on zero-day vulnerabilities. Script kiddies, not organized crime or intelligence agencies.


http://xahlee.info/UnixResource_dir/_/ldpath.html

> The original Macintosh software had applications which were one file... UNIX/Linux never got this organized.

In Unix, you link at install time. A lot of pain results from well meaning people importing concepts from other systems and doing it badly.


Sounds like someone had a bad day.

Software developers are in a strange position in that they create the world they live in, so any warts seem magnified and self-inflicted. But any significantly advanced discipline is going to be complex. Physics, chemistry, and biology do not have the same luxury of being "rewritable." They operate in the natural world, so you curse God and not Man.

Rewriting a software system from the ground up is nearly impossible. Also any sort of system will need to maintain some level of backwards compatibility, which induces complexity. Thinking about an operating system as something that was built according to a grand design is wrong. They evolve over time, written by different people with different styles solving different problems.

A counterpoint to this argument: "It's harder to read code than to write it."[1]

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...


It's not inherent complexity. It's unnecessary, self-indulgent complexity for the sake of it.

My pet theory is that developers don't really want to use tools that aren't complicated, brittle build systems, because build systems feel like useful machines. There's a certain sense of satisfaction that comes from operating a useful machine with a nice set of relatively simple and predictable built-in problems and challenges.

It's a small world and it rains a lot, but you're never going to fall off it, and it's all yours.

That satisfaction wouldn't come from a more UX-driven computing paradigm. Especially not one that requires trying to understand what goes on in UserLand - or worse, inventing a model that reasons like a user.

So there's no natural push for other development models coming from inside the community.

In spite of the complaints no one - well, hardly anyone - wants to try something different. And no one outside the community understands computing well enough to produce any competition.


He is the creator of Node.JS


True. But maybe he did have a bad day? Like a promise that never resolved or something...


"Rewriting a software system from the ground up is nearly impossible."

Humans likely won't be tasked with solving that problem in the future though.


In my workplace, people sometimes think I'm out to kill their buzz whenever they bring up the topic of introducing another library into our stack because it solves a specific section of our problem-space.

The real value and fun, imho, is solving the complexity in your head, and then coding up a solution that is simple and uses as few moving parts as possible.

And yes, we programmers are also humans and we sometimes like to cherish our creations in non-productive ways, so if you code something simple and well thought out, it will almost always come out elegant and concise, and you'll be able to pat yourself on the back at the end of the day for building something solid.

I don't believe solving something is all about the user experience because there are humans on the back pulling levers and making that solution happen, so there's that.


Wonder how much of that is because the MBA beancounters use LOC as their productivity measure, as if programming were the same as a widget factory.


I live in a VS .Net bubble and avoid this Linux, C, native etc. stuff like the plague. No environment variables, no scripts, no dependencies except things that have a real and obvious purpose, all native interop isolated to its own small ugly files that I don't touch.

Sometimes I want to use an open source library or something and it's honestly days of fighting with compiler flags and environment variables and versions of compilers. There's always undefined symbols or other mystery failures. Last time I did this, the makefile failed because I was getting a source file from a badly configured web server that caused the downloaded file to get corrupted with some downloaders but not others. So there's learning about idiosyncrasies of the HTTP protocol too. It's a complete clownpants nightmare of wasted time.


Funny, I feel the exact same way about the windows world. The few times I've ever had to touch a windows machine, someone tells me - "Oh just reboot it to fix that microsoft problem." - "Oh the system update must have been corrupted, reinstall the OS." - "Oh just go download this rando .exe using your browser that isn't signed or open source, that tool will be able to fix your problem."

I just wonder how that could ever possibly be acceptable.

You don't even have to reboot a Linux machine to upgrade the kernel nowadays for crying out loud.


But then, I hear Linux people talk about systemd making their system unusable, and having to recompile the kernel to make network cards work, and laptops never waking up from sleep, and reading the arch wiki to find which bizarre kernel options to pass to make CPU low power modes work, and (yesterday) having to update the kernel to make chrome work.

I'm not saying these are all real problems, just that if you only hear the bad, it sounds terrible!


Outside of systemd, those are hardware-related. And hardware is a pain because most of it is made with barely any testing except against Windows. And Microsoft and standards are on a "best effort" basis.

Thus you get odd behavior in hardware trying to match "extended" behavior in Windows, which is then papered over by the binary-only driver shipped with the product initially. None of that is available to the Linux devs trying to get things to behave sanely.

The systemd issue is quite different, as for many it seems to be making Linux more Windows-like. And they specifically turned to Linux in the first place because they could no longer stand the cryptic errors and "heisen-bugs" that were daily life while using Windows.

Linux before systemd was to them a land of predictable computing. If something broke the error would point you in the right direction for a fix. And once it was fixed it stayed fixed.


Complainers are always loudest. Most people barely acknowledge systemd and just get on with their work. As they did with Gnome 3, KDE4 and PulseAudio.


the transition to systemd was painless, unlike VB error handling :-P


I both agree and disagree with this. We're not paid to create new things, we're paid for our suffering - to drag ourselves through the nails and glass shards of previously existing code to accomplish a thing. In that respect, I don't see learning the details of a programming language or a tool as a waste of time - it's an investment to make sure I get less glass stuck in my body next time.


> We're not paid to create new things, we're paid for our suffering

Even I wouldn't phrase it that negatively - instead, we are paid for getting features created, no matter what it takes. Sometimes it is bliss, sometimes it's the seventh circle of hell. But in either case, it's our job to just get it done.


It's crazy to think that from this came npm, and from that came left-pad. I can't reason about this; I can't understand the evolution in any other way than Chaos.

Is this the best we can offer as engineers? We launch our beautifully crafted boat into the ocean only to watch with despair as it is taken and swamped by the waves and the enormity of the sea?


> I can't reason about this, I can't understand the evolution in any other way than Chaos.

I think the most fitting description of the process in our industry would be "throwing shit against the wall and seeing what sticks".


Makes one wonder if in the end there isn't some benefit in oversizing that boat at first, so it can keep a bit more control in those waves.


> In the past year I think I have finally come to understand the ideals of Unix: file descriptors and processes orchestrated with C. It's a beautiful idea. This is not however what we interact with. The complexity was not contained. Instead I deal with DBus and /usr/lib and Boost and ioctls and SMF and signals and volatile variables and prototypal inheritance and _C99_FEATURES_ and dpkg and autoconf.

The next step is to understand that file descriptors and processes orchestrated with C are a substrate that would make everyone reimplement most of what they want from scratch each time. Part of the list describes system services that we have come to expect from an OS: DBus = interprocess communication; signals = process control; volatile variables and ioctl = hardware control; /usr/lib and dpkg = software packaging and distribution. There are more to be added. at+cron = scheduling. syslog/journald/systemd = monitoring; wall = alerting; cgroup = process and process group control; bash = orchestration from a repl; systemd = orchestration not from a repl; lots of stuff = security; every developer's own file format = configuration; every developer's own plain text protocol = data encoding for IPC. The list goes on from here. Even in the early days of Unix, mature systems like VMS had these capabilities.

The problem is not that these exist. It's that they were all built piecemeal instead of being standard system services accessible from a programming language like processes and file descriptors.

Except for bash. That's the result of an old mistake: the job control language on OS/360. That's when we got the idea of a command language distinct from a programming language, and we've been building it back towards a programming language ever since.

Autoconf, though, is purely penance for the sins of the past.


"The only thing that matters in software is the experience of the user." is going on the wall in the office. And I'm mailing one to Android development team.


That statement seems simple and correct at first, but the term "user" is not well defined. Even if there was only one user in the world, me, it still isn't well defined. The reason being that I can't tell you right now at this moment about all the possible use-cases I might have for some software in the future.


I think what this statement really means is that we all create software for the end user, but then get lost in all the theory and principles of software engineering that often drag out software projects and make them overly complex. The point at the end of the day is that "it's all about delivery" and we need to remember that without the customer/end user, we're just wasting everyones time moving bits and bytes around because we can.


That's where the YAGNI principle comes in. You cannot reason about use cases you don't know yet, so don't even try to predict what you might need until you have a clearer picture.


Of course this statement is simply wrong. Multiple stakeholders have a share in developing software.


So we are against perceived complexity and learning new things now?

To elaborate a bit: At work I use OLAP functions to simplify the business layer and focus on the data, moving much of the complexity to SQL statements. Others get the data in its list form and complicate the data layer or the business layer by transforming the inconveniently-structured data there. Yet I am the one who is told to tone it down and think simple.

Much like the author of this article, no matter how senior they are, they seem to divine a vague notion of "simple" when they face perceived complexity. And they always seem to associate this complexity with a real-life example of something which could have been simplified (like how angry Linus Torvalds got when he tried to set up a printer on Linux), which is unfortunately a red herring.

So is NaN equivalent to null? Does that question even make sense? Perhaps the real question is, how do I represent an optional entity without incurring additional runtime costs? C++, one of the most lamented languages for its terrible template complexity, has no null problems. Most values are copyable, moveable or re-constructable, and most code throws when faced with fatal situations, e.g. when the OS runs out of allocatable memory, to the point where returning null simply vanishes and you won't see it in production code anymore. Did that solve your complex null problem? Absolutely yes. Did that come with a bunch of baggage you have to learn and implement? Yes it kinda did.
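
To make that concrete, here is a minimal sketch of what "optional without a nullable pointer" can look like, assuming C++17's std::optional; the lookup function and values are made up for illustration:

  #include <iostream>
  #include <optional>
  #include <string>

  // Hypothetical lookup: "no result" is part of the return type, so there is
  // no null pointer to dereference by accident and no heap allocation needed.
  std::optional<std::string> find_user(int id) {
      if (id == 42) return "admin";
      return std::nullopt;
  }

  int main() {
      if (auto user = find_user(7)) {
          std::cout << *user << '\n';
      } else {
          std::cout << "no such user\n";  // the absent case must be handled explicitly
      }
  }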

So the complexity is there for a reason. Go is shameless when it comes to nil. Yet Go is mostly called out for its lack of generics. Ah, how ironic.

Anyway, if it is there in programming, it is probably for a reason and alternatives introduce their own baggage. Usage of POKE or sprite DATA segments did not suddenly dawn on me when I first started to learn me some programming. I did not say it was unnecessarily complex, or question why it didn't work like legos. I just opened the book and reduced the complexity by learning.


> So the complexity is there for a reason.

Reminds me of the occasional questions about equality, like "why Java doesn't let you override ==?", or "why Lisp has eq, eql, equal, equalp, and you still end up using your own equality predicates anyway?". The root of the problem being, the concept of equality itself is complex at the philosophical level. It's easy to lament something is complex, it's harder to ask yourself why it is so, and accept that there is complexity in reality itself.
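
A tiny C++ sketch of the same point, namely that "equal" already means at least three different things for the same data (the helper functions are made up for illustration):

  #include <algorithm>
  #include <cassert>
  #include <cctype>
  #include <string>

  // Identity: are these the very same object?
  bool same_object(const std::string* a, const std::string* b) { return a == b; }

  // Structural equality: do they hold the same characters?
  bool same_value(const std::string& a, const std::string& b) { return a == b; }

  // Domain-specific equality: equal for a case-insensitive application.
  bool same_ignoring_case(std::string a, std::string b) {
      auto lower = [](std::string s) {
          std::transform(s.begin(), s.end(), s.begin(),
                         [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
          return s;
      };
      return lower(std::move(a)) == lower(std::move(b));
  }

  int main() {
      std::string x = "Lisp", y = "lisp";
      assert(!same_object(&x, &y));       // distinct objects
      assert(!same_value(x, y));          // different byte sequences
      assert(same_ignoring_case(x, y));   // equal in a case-insensitive domain
  }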


I was with it until the last paragraph. One reason for the mess is nobody thinks of downstream developers as users.


Twenty-six years of developing software for different organizations yet I still struggle with the desire for simplicity in the face of conflicting needs.

The tendency is to think of a "user" as a lone person in front of a screen using a UI of some kind. This idea of user is incomplete. In addition to people using a UI, for say a university financial aid system, a management chain exists with the need for applications to be modified as quickly as possible with new requirements and with no interruption of service. They need applications to communicate with one another. To meet those needs, developers utilize libraries, frameworks, etc. for modularity and reusability. To ensure that nothing gets broken while modifications are made, regression testing must occur. Adding test suites increases the likelihood of uninterrupted service. To help with module interdependencies, developers use package managers. And so the stack of software grows.

I'd call it accidental complexity, but it's not an accident. We are dealing with complex software stacks, often with interdependent modules, because we are trying to meet many different needs simultaneously. I don't see an easy way out of the situation.

Over the years, I've become better at determining when to abstract problem solutions in order to provide for future needs and when to do the minimum possible to get the problem solved. I don't think there is a rule or formula for making that determination. You just have to make mistakes and learn from them.


There is also, all too often, a tendency to treat all "users" as drooling idiots.


You know what software I like? Video games. It's a model we should all be striving for. There isn't another category where the experience of the user matters more.


I think dev-ops setups are often overly complex ... Although most people want overly simple programs/computers, if they would just invest some time in them, they would become more productive.


This is a bit of a rant so either ignore it or consume the frustration and empathise.

Good enough is a turd in disguise; your desire to deliver just enough working software to justify your existence and salary is egotistic and myopic.

Build sustainable software that will work for you, your predecessors, and your successors.

The constraints of the domain are fine and dandy; that's what matters, that's the problem. Deadlines, delivery and all of that are superficial. Keeping the business afloat for now is a band-aid on cancer if you can't write software in a way that you will be able to maintain in 10 months, 1 year, 3 years.

Here's the reality: you screwed up, you buckled and allowed the pressures of institutionalised shittiness to constrain your capacity to deliver real value.

Please, everyone, let's stop acting like software is some ephemeral joke that we can half-ass, and start being serious about the implications of our actions.


> The only thing that matters in software is the experience of the user.

While ultimately true, it's a bit too simplistic, because it presents user-experience as a constant. Done deal. Delivered or not. In reality the user experience is something which will change over time, as the software is updated and evolves to accommodate the additional needs of the users.

If you can't keep your code clean over time, your code will start to suck, needless complexity might sneak in, and the user experience will start to suffer as well.

A good user-experience, just like the code, needs to be maintained. And thus comes the need to organize the code. And for a developer to bother maintaining the code, and for him to be efficient at it, he should work with tools and within an environment which fits his needs (and accommodates his requirements for a good user-experience). And thus comes fiddling with editor-settings. Etc etc.

Basically this rant highlights a real problem: software complexity is out of control. But the conclusion is not supported by the arguments given. Rather, his conclusion drives all the things he rants about as being wrong in the first place.

I think, in the end, to get clean code, to get rid of needless complexity, there's one simple thing which trumps everything else: You need someone empowered to make the required changes. And you need that person to care.

Without that, it's all just us waiting it out, until it starts heading into the abyss. We can already see that with lots of the bigger open-source projects which once fuelled the (small) Linux desktop revolution a decade ago.

So how do we cope?

> There will come a point where the accumulated complexity of our existing systems is greater than the complexity of creating a new one. When that happens all of this shit will be trashed. We can flush boost and glib and autoconf down the toilet and never think of them again.

Maybe we will. But I don't believe for one second that we're going to break the cycle this time around, because finding and keeping someone caring enough for decades to come is a pretty tall order.


And this is perhaps why the unix philosophy has such pull, as it suggests an environment where one can make changes and adaptations piecemeal and at one's own pace.

But that suggests abandoning the notion of the DE, going back to WMs, and agreeing on, never mind adhering to, the ways software exchanges data.

Something that is the polar opposite of the direction that "desktop Linux" is moving these days.


Best blog article I've read on HN in a very long time.


That dbus was turned from the Desktop Bus into the de-facto system bus is an eternal puzzle to me.


Developers are users too.


In my opinion, he hates the fact that the documentation for how all these things stack together is very poor. Which is true.


Most technology is useless. It's not a product of necessity, but a product of our chosen economic system.


Yes, you hate technology because it is hard to build.


The author has not articulated anything beyond "you don't understand how fucked the whole thing is". Terrible writing.


That's the nature of a rant, no? It's not a persuasive argument, or an essay. It's the frustration felt in a moment lived.


Yes, the essay is a terrible format for capturing life moments. Poems are a forgotten tool from our past that I think work way better. The sentence-paragraph structure is a slave to the period. It leaves no space for attention. Very information heavy. Poems, on the other hand, are like long tracking shots. They are naturally infinite. You can create a moment and stay with it. Note taking, list making, messaging are effective for similar reasons.


  Poems are a forgotten tool from our past that
  I think work way better.
Good point. So here's a haiku which might be applicable to the post in question.

  The Master laments,
  When those who produce oysters,
  Declare "find the pearls."


I see the reason for the complaints, but I do not agree with the basic premise. It should not be "I hate almost all software"; it should be "I hate my employer and the job I am stuck in".

I mean, this person is writing for Solaris/Illumos (judging by the SMF references and autoconf complaints). Yeah, running modern stuff on Solaris sux, unless you have a sysadminny mindset. This is clearly not for him. Fine -- different people like to optimize different things: user experience, internal code beauty, maintainability, code robustness, number of platforms supported.

No one should be forced to work on stuff they don't like. There are lots of jobs, and most of them do not require SMF, DBus, /usr organization, or volatile variables.


Most of them require other stuff that's still just as awful.


> This is clearly not for him.

Haha I had a good laugh. He created node.js.


So what? It probably started off as a fun hobby project like "I bet I can write a fast server in javascript." That was probably all he was interested in at first. Now he deals with problems that are not fun and not anywhere near that original hobby. These problems are clearly not what he considers fun at all, therefore I don't see any reason why you had to be snarky with the OP.


Node was written in C++.

Not even sure if you read the original article, because that's far from what I took away from it. I guess we all choose to see what we want to see.

There's a difference between someone complaining when they are bad at something, and someone complaining when they're good at something.


Maybe you should stop thinking of code as solving problems and more like painting a picture. It doesn't matter if there are mistakes or if you've done everything perfectly, as long as you've created something cool.

It's like telling a room full of artists that you hate almost all art because the tools available etc etc over complicate painting a simple picture whose purpose is to fulfill a client contract, etc.


Here are a few software development maxims which I believe you, or others reading this comment, may benefit from:

  Software is the manifestation of a solution
  to a problem.  Define the problem which needs
  to be solved.

  Developers can test their solution or Customers
  will.  Either way, the system will be tested.

  Only progress a system from a known working
  state to the next known working state. If the
  current state of the system is unreliable
  (IOW: "buggy"), peel back the existing logic
  until it is in a provably working state.
  Only then reintroduce removed logic (in
  adherence with the above) to reach a working
  state where additional logic can be introduced.
And, yes, it does matter "if there are mistakes" for the vast majority of software systems. Perhaps not those to which you've been exposed, but for many this is the case.
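
To make the third maxim concrete, here's a minimal sketch in Python (the parse_amount helper is hypothetical, not from the article): pin down the behaviour that works today with a small regression test, and only layer new logic on top once that test is green.

  # Hypothetical example of moving from one known working state to the next.
  # The test pins down behaviour that already works; any new logic has to
  # keep it passing before more logic gets introduced on top.
  def parse_amount(text):
      # Existing, known working logic.
      return round(float(text.replace(",", "")), 2)

  def test_parse_amount_known_good():
      assert parse_amount("1,234.50") == 1234.5
      assert parse_amount("0") == 0.0

Run it with pytest (or just call the test function directly); if it ever goes red after a change, peel the change back until it's green again, then reintroduce the logic in smaller steps.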

HTH


Known working state ... That's a key argument against systemd. It's not inherently deterministic.


> Maybe you should stop thinking of code as solving problems and more like painting a picture. It doesn't matter if there are mistakes or if you've done everything perfectly, as long as you've created something cool.

But paintings are unobtrusive in day-to-day life. Perhaps a different "art" analogy would be architecture (as in, the art of designing buildings). Think of a rockstar architect who shows up in your home town and gets something new built that everyone now has to live with (whether they like it or not). There are many controversial buildings that people love to hate. For example, when the Eiffel Tower was proposed, several famous Parisian artists fiercely opposed it. Guy de Maupassant famously ate lunch at the Eiffel Tower restaurant every day because it was the one place in Paris where the tower was not visible. [1] Another example is "The Crystal" at the Royal Ontario Museum, which was constructed in 2007 "on top" of the original museum building constructed in 1912. Many architecture critics hated it and it was famously reported as "one of the ten ugliest buildings in the world." [2] There are many other such examples around the world.

So it is with technology. Someone engineers something, like a CPU, a programming language, a build system, or an OS, and we all have to live with it. Guido van Rossum sticks a Global Interpreter Lock in the middle of Python, and we are stuck with it. Kitware comes up with a suboptimal language for CMake, and we are stuck with it. Intel extends x86 for 40 years, and we are stuck with it. Microsoft randomly changes the Control Panel layout in a new version of Windows. etc... Usually these things can't be changed, just as no one can ever redesign the Eiffel Tower. We just deal with it and, as the saying goes, "a good craftsman never blames his tools."

[1] https://en.wikipedia.org/wiki/Eiffel_Tower#Artists.27_protes...

[2] http://www.theglobeandmail.com/news/toronto/roms-crystal-mak...


> It doesn't matter if there are mistakes or if you've done everything perfectly, as long as you've created something cool.

> computation: noun

> the action of mathematical calculation.

Code is absolutely a method of solving a problem. That's just about all it is: a means to solving a problem.

What that problem is can be creative, and the method can be creative.

But creating something "cool" is utterly incidental. You might set out to do it, you might not. "Cool" is sort of a subjective term.

Programming is the subject of mathematical research, which I find cool. I expect others would find it tedious.

Code doesn't necessarily mean building the next Facebook or Gmail. It might instead be creating the interface for a keyboard to a computer. A solved problem. You just need to tweak it a little to fit the hardware at both ends. It's still a specification for solving a problem.

Coding is different in that when I sit down and use a word processor, I can see the rough edges, and I can potentially improve them.

I can know why the "paintbrush" isn't good enough for the task at hand, and I can craft my own one. And many artists have been criticised, and praised, for their methodology.

Making use of primitive tools happens in the art community, and usually increases the value of the final product.

Making use of primitive tools happens in the coding community, and usually increases the time it takes to create something that functions in the same way. Not to say it always happens, but this is an entirely different realm.

Code has a lot more in common with mathematics, where we don't ask everyone to recalculate Pi from scratch each time, than it does with art, where the materials can be as important as the method.


Sorry, but that makes no sense. The only reason I'm using any computer at all is to solve problems. Even if that problem is "entertain me", if my computer wastes my time in any way, I'm not going to look at it and say "Oh, gee, that frozen loading bar is just so much ART that I don't mind waiting on this shit box to do what I say."

A painting which wastes your time is not a beautiful painting, regardless of your perspective on subjectivity.


I don't think he was calling the user experience art so much as the solution. It's true, there are many ways to solve the same problem and many can be effective warts and all. There is some artistry to it, at least today (maybe someday we'll solve this problem).


Can I be mad at ugly paintings going for ridiculous prices?


Nah, hold back until the transaction actually takes place. Then you have reason to flip tables.



