Instead, in construction, they use a system of checks to ensure that different experts consult one another so that every decision is reviewed by a relevant expert.
I suspect that the "chief architect" approach that Brooks advocates may have become obsolete as well in the years since The Mythical Man-Month was written. Perhaps software developers could learn something from the newer methods that replaced the "master builder" model in construction.
I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one. In a flash he broadened my whole view of the software process. The metaphor shift was powerful, and accurate. Today we understand how like other building processes the construction of software is, and we freely use other elements of the metaphor, such as specifications, assembly of components, and scaffolding.
The building metaphor has outlived its usefulness. It is time to change again. If, as I believe, the conceptual structures we construct today are too complicated to be specified accurately in advance, and too complex to be built faultlessly, then we must take a radically different approach.
Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built.
Can we grow designed software? Or can we design truly growing software? I guess it depends on which definition of "growing" you use: "scalable"? "alive"?
The secret is that the "elegant" API design you see is usually the n-th iteration.
Growing code is much like growing trees. The tree grows itself; what it requires is to be pruned and reshaped in a fashion that will allow it to stay alive for a long time and bear fruit.
But one needs to constantly prune the damn thing.
You start planting trees, maybe it's in an empty field, or maybe it's in an old forest. At some point it takes root and multiplies. People help plant your forest in unexpected places and it expands. At some point you try pruning and controlling the trees. And at some point a forest fire destroys it making way for a new forest to grow.
To continue the metaphor, one of the most important pieces of evolutionary growth is death.
So the model has its limitations for human projects where we don't want to have quite that propensity for surprising outcomes.
> I've always seen the statement that evolution is intentionless to be a bit arrogant
An ongoing example of learning and growing a design is "Swift Evolution Process"
This is the Free Market vs. Planned Economy debate all over again.
s/is grown/has evolved/
By designing systems in layers of interacting components, we can grow more and more complex systems. The internals of different components are irrelevant so long as the external interface is consistent.
We evolve them by improving the interfaces (see APIs deprecating functions/methods/messages) and by refactoring/rewriting the internals or implementation.
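A minimal sketch of that evolution pattern in Python (the class and method names here are hypothetical, just for illustration): the external interface stays consistent, an old method lingers through a deprecation window, and the internals remain free to change.

```python
import warnings

class Store:
    """Hypothetical component: callers depend only on put/get,
    so the internal representation can be rewritten freely."""

    def __init__(self):
        self._data = {}  # internal detail, free to change later

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def fetch(self, key):
        # Old method name kept during a deprecation window, then removed.
        warnings.warn("fetch() is deprecated; use get()", DeprecationWarning)
        return self.get(key)
```

Callers using `get()` never notice when `_data` is swapped for a database or a cache; callers still on `fetch()` get a warning rather than a breakage, which is exactly the "deprecate, then refactor internals" cycle described above.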
I don't know. I get the feeling that large construction projects suffer from many of the same issues that affect software projects.
And every time something goes wrong enough to end up in the media, the reason is inevitably that the various contractors/experts didn't coordinate properly, that no one really feels responsible, that major design flaws were overlooked early on and now there's no way to undo them without going even further over budget. (Yes, I am aware of the selection bias.)
Big projects are hard and their hardness increases disproportionately with size. If a project's goals are such that there can be no "master builder", then things become a lot riskier and they go wrong more often.
So I think what we actually want is a bazaar of projects where each project is architected and overseen by a master builder.
Like with the endless state vs market debates, it's always a question of balance, not one of either or.
I agree with your reasoning. It resonates with what, for example, Richard Branson says about companies: split them up when you reach ~50 employees.
By splitting large projects into smaller sub-projects, we've now created the need for a higher level master builder to oversee how the sub-projects integrate, haven't we?
No, I think SOA and micro-services are technical considerations that have relatively little bearing on division of labour or organisational/project structure. You can statically link a library created by someone you don't even know, or you could have one huge dysfunctional in-house team split up a monolith into fifty different micro-services without creating any new ownership rules at all.
So I think splitting up projects into smaller sub-projects alone is not good enough if all it does is create a deeper organisational hierarchy based on exactly the same command and control principles as before. As you say, it just creates new planning issues on the next level.
We need manageable units of planning, and we need to know when things become too complex for planning and require some principle of self organisation. The difficulty is that self organising systems are not easily steered towards a single goal.
It's interesting that you quote Branson, because the most important feature of a separate company is that it is a unit of economic responsibility and ownership. It usually serves more than one customer, unlike a micro-service that is a fragment of some in-house monolith. And it sets its own goals, mostly independently.
Traditional multidisciplinary engineering domains (aerospace, automotive, marine) have been using this model for decades. It's about time that kind of engineering rigor came to software.
While the product manager in practice is usually a glorified secretary getting beaten by both management and engineering.
Not because the concept itself requires so many pages to understand, but because it takes several repeated high-profile examples of checklists making a big difference, before the feeling of "it's just a checklist, what's the big deal" - that a lot of people express upon hearing the idea first - is replaced with understanding and internalization.
Microsoft could probably rewrite Windows at great expense to use 10% as much code and it would still be a bloated mess. But, well why would they waste the money?
Cathedrals are not, generally speaking, profitable. They represent the expenditure of lots of capital over a long period of time.
Bazaars don't cost much to start. You can start quite small and have a functioning system that does useful things for people. They can grow quite large, and when they grow too large it becomes difficult to find exactly what you want without a really good map. But you can probably quickly find a bunch of things that are more or less close to what you want.
Cathedrals are not easy or cheap to repair, but the investment is so large that people usually prefer to repair them. A bazaar that doesn't work out makes some local people sad, but they will go to another bazaar that is a little less convenient for them, and perhaps do better there.
It's nice to have some cathedrals, because they feed the soul. But you need to eat every day, so there will always be bazaars, and if you need to make a choice, the bazaar is going to win unless you have a lot of resources stored up to fall back on.
This is getting lost in the metaphor instead of the topic. Architected software is profitable.
It's essentially like you only made building tools for cathedrals, and no tools for thatched stick homes, and you create a culture around always building a cathedral so no one really knows how to build a thatched home. Essentially, creating a market irregularity through cultural expectations about how serious software has to be and who is going to be writing it.
(The complaint people always make when I say this is that making software isn't easy, it's hard, and novices couldn't possibly build useful software. I would half agree: some software problems are hard, and require a grizzled developer and some hard planning. But much of software involves no difficult computer science problems, and is more about understanding requirements well enough to be able to assign them to the right basic programming primitive, or a handful of common libraries. This is the kind of code that we use cultural friction to keep inside Engineering, and build cathedral-style, even though it could be done in thatched-home style by novices, if we structured our codebases for that.)
Most of the tools I see for software development seem to be organized around the needs of the bazaar (or thatched huts) not the needs of the cathedral. A million toy languages which might solve your problem well, but don't scale to a million users. Websites like GitLab and GitHub so you can share last week's 1 kloc project with collaborators. Libraries that do that one weird thing you need for your project, and nothing else.
By comparison, the cathedral builders (Google, Facebook, Apple, Microsoft, etc.) seem to be building a lot of their own tools. This includes programming languages, frameworks, build systems, version control, operating systems, and so many other things. They build their own stuff because the tools of the bazaar don't work quite well enough for cathedrals.
Not the OP, but when I see comments like this one I do realize that sometimes HN is a very strong echo chamber. The world doesn't need more than 100 (give or take; maybe 1,000, maybe 10,000) apps/websites which need to scale to "millions of users", a situation which doesn't keep that many workers occupied (Google & FB and the like employ far fewer people compared to the industrial giants of the early 20th century).
But the world does need millions of apps for the 10-100-1000 users, built, if need be, using the "toy languages" you decry. If we make it easy enough for people to build these apps, the world would be in a much better place (we'd have higher productivity).
I'll give you my example from the company I used to work for in the early 2000s (when "The Cathedral and the Bazaar" was written). I was doing some office work, along with my 20 or so colleagues, which involved having to check that two separate folders on our computers had the same files. This took each of us about an hour, so there were 20 man-hours spent each working day on this mundane task. Lucky for me, I was an (already close-to-dropout) CS student, and I had heard about Python and about how easy it was to do stuff with it, and lo and behold, it really was. Just:
    import os
    l1 = os.listdir('first folder')
    l2 = os.listdir('second folder')
    a_call_to_a_custom_function_which_was_comparing_l1_to_l2()  # which was probably quadratic, but it didn't matter
then use py2exe to put it all up in an .exe file which could also be run on my colleagues' computers (along with some inputs and the like), and that was about it.
A task that used to take an hour each day now required only a script/program call. I fail to see how this program would have needed a grown-up language that could scale to "millions of users", even though it proved to be pretty useful. And there are countless examples like the anecdotal one I gave above; you just need to go into any institution or company office, look at how people work on their computers, and realize that the world needs millions of small programs like mine that would substantially increase productivity. The problem is, like the OP said, that we "programmers" like to keep the playing field to ourselves.
The very concept flies in the face of today's accepted UX "best practices", i.e. to make software trivial, engaging, and masterable in 5 seconds. That naturally happens by removing anything there is to be mastered.
The task you performed with Python should be easily scriptable at the OS level. It shouldn't require one to know complex programming languages and toolkits. Similarly, I think that a tool like Tasker, maybe with a bit better interface, should be available by default in vanilla Android. We're vastly underutilizing the power of computing devices by restricting end users' ability to work with them.
 - https://play.google.com/store/apps/details?id=net.dinglisch....
You're going to need a better example than that. This program already exists, it's called diff(1), md5sum(1), or cmp(1). You could wrap its use up in a shell script to make it even easier, or the companies/people could spend some money/time to learn how to use the tools already at their disposal. In a lot of cases, lack of training is the issue that should be addressed. I've said before "Those who don't learn /bin are doomed to reinvent it, poorly"
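For what it's worth, the same point holds inside Python itself: the folder comparison in the anecdote above is already in the standard library (filecmp), no custom quadratic function needed. A small sketch (the function name here is mine, not from the thread):

```python
import filecmp

def compare_folders(dir_a, dir_b):
    # filecmp.dircmp does the whole job; the file comparison is
    # shallow by default (size/mtime), which is fine for this task.
    cmp = filecmp.dircmp(dir_a, dir_b)
    return cmp.left_only, cmp.right_only, cmp.diff_files
```

Which reinforces the point about training: the tools were already at our disposal, in /bin and in the stdlib alike.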
This isn't to say you're wrong about a lot more little, customized programs that could be written. The focus needs to be on the specific processes of a particular company, because every company's processes are unique (maybe not on all axes, but on more than one). Enterprise software often errs in one of two extremes: either it's overly customizable and doesn't fit anyone's needs completely, or it's highly specific and tries to force its own way of working. And this is done out of a desire, from the software vendors, to capture market share. Customized software is highly expensive, and people are expecting something tangible for that purchase. What they should be concentrating on (at least from an efficiency standpoint) is empowering their own people to automate the processes they do: after all, they are the experts in these processes.
Microsoft created a tool a few years ago called LightSwitch that allowed end users to throw together CRUD apps quickly, and it seems to have been met with deafening silence. I wonder if managers and CIOs in BigCorps would tolerate their end users throwing together little apps that solved their problems in today's equivalent of VB6, or MS Access (the ultimate agile experience, since users are solving their own problems). Experience suggests not, and although those apps could have become unmaintainable, it seems that there is little effort being made by vendors to address that market, and to provide ease of use with better maintainability and scalability.
Going to the PHP example, you could pick one of a number of deploy and hosting providers and have your code running and world visible in minutes for less than a Starbucks coffee a week (specific example Laravel Forge + Digital Ocean).
The problem is that even mediocre software developers with a couple years of experience can miss critical things in any language with any framework that can leave them incredibly vulnerable to attack.
For homegrown internal systems, the barrier to entry isn't the code, it's putting it somewhere people can access. In ye olde days you could slap together some VB6 and throw it in an Excel template and have a workable product- but have you ever inherited something like that? I have, multiple times. It's AWFUL- but I also have made a lot of money on not making it awful.
As an engineer, my rapid prototype basically means I eschew some things like a cache layer or performance optimization for just getting the concept out- but at an organization with no real devs, I can see the value in someone who can hack together anything with whatever they have to prove the idea, then calling in the mercenaries like myself to make the concept a real thing. The problem (and expense) usually lies in the fact that they wait until the concept is completely untenable in its current state and everyone is in a panic.
Now, if you're a spreadsheet jockey and you just need to gather and display your data in a non-trivial way, there are quite a number of things already out there. Business Objects (or whatever it's called now) and Tableau have basically formed large companies upon this idea and there's open source options like Jasper Reports.
I think the days of being able to slap some VB together and write a desktop application are just about completely dead in most situations, which means you really do need a vast breadth of knowledge that a weekend warrior developer didn't need to have a number of years ago.
LightSwitch relied on Silverlight and Visual Studio, which made it useless for almost everyone.
The problem with bazaar culture is its obsession with tools and systems, and its lack of interest in users. When you get a product that inverts that - like Wordpress - it's often incredibly successful, in spite of its many technical shortcomings.
The hierarchy of value in bazaar-land is:
1. New tool/framework/language/OS (that looks good on my CV)
2. Elegant, powerful product for customers
3. Fully productised, reliable, scalable, and easy to maintain combination of 1 & 2.
2 and 3 are more or less on equal levels. 1 is far, far ahead.
Because the culture is so tool-obsessed, a whole lot of makework and work-around fixing is needed just to get things to build, never mind work well for customers.
Basically there are dumb tools, dumb products, and occasionally elegant commercial products fall out of the combination - but usually only when they're designed by someone who cares about the user experience.
Hacking culture massively undervalues the user experience, and massively overvalues tinkering and tool-making as ends in themselves.
There's a basic disconnect between the talent needed to write code that works, and the talent needed to design a user experience that's powerful but elegant - whether the user is a non-technical user, or another developer.
The cathedral/bazaar metaphor is utterly unhelpful here, because neither really captures the true dynamic.
I've watched this play out for 25 years with dBase, Paradox, Access, and countless other tools intended to empower end users. Typically only one person in a User Area (UA) has the gumption to want to develop an application. It's wildly successful at first. As time goes along, the person develops the app based on new requirements, as is true with any app. At some point, the complexity exceeds the user's skill and time. Often, it's when they want the app to support multiple concurrent users.
I saw that one play out around 1995 with an app built on Access 2.0. The department had a copy installed on each of 20 desktops. The manager came to realize it needed to be a shared app. The power user didn't know how. My colleague spent the better part of a year doing it.
Whatever the reason, IT gets called in. Then we have to salvage a good-for-an-amateur app. Usually the app has become critical to that department so the developer resource has to be pulled from other priorities to salvage the situation.
The problem isn't the lack of tools or CIO's protecting their turf. It's IT being left with messes when a power user gets into trouble. Whether it's Oracle Glue, Access, Gupta SqlWindows, Crystal Reports, or Frontpage, the scenario consistently plays out the same way.
So, now we are at the point where the app is breaking down under its own weight. What do we have now?
- Clear specification: The users already know what they want from the app, something very rare in our business
- Proven value: The app is not something someone designed by looking at people from the outside and saying "I think that can be done better ..." but something which stems from their own daily needs and pains.
- Experience with likely extension points: From the history of the app and where new features had to be bolted on, you can already see where new feature requests will likely come in, so a new design can accommodate that
And last but not least: A working app, so you have less stress to finish something, but instead can iterate on your new version until it really is better than the current version, without anyone bothering "when is it finished? when is it finished? We need that yesterday. When is it finished?!"
And it must work exactly like the existing semi-manual system, including the ability to make random edits on legal records.
I've done these a few times before, and usually pulled it off, but there are solid reasons why they say, "don't rewrite software".
In particular, the "clear specification" usually has to be thrown out immediately and previous extensions are no guide to extensions for a new system.
And no one wants to do a serious job of it until the absolute last possible moment, so "when is it finished?" is the most important question.
The Access example, from my previous comment, was the “we’re tired of waiting” vein. The app was a critical part of their work day: they used it while on the phone with customers. We had to get involved when the app had become unusable. The developer had to be drawn from another project to “throw it on a server” so it could be shared. Unfortunately, Access 2.0 had a primitive locking scheme that prevented it from being shared between 20 or so people. To compound the lunacy, they fought recommendations, like migrating to a relational database, every step. We had a developer unavailable for the better part of a year while she had to make the desktop app into a department-level app. She had to make the changes while the app was in active use. This example is not one of a partnership for a planned MVP handoff to IT. It was, probably unintentionally, a way to jump the queue to have their project done.
I’m all for a partnership like you described. But, it has to be a partnership with the parties involved agreeing on some kind of a schedule so resources can be available without hurting other projects/UA’s.
Honestly, I'm very unimpressed with tools these days solving actually useful problems BECAUSE they're so dependent on their assumptions of the simplicity of the problem space.
I don't think we're disagreeing, necessarily. Just speculating on how to put a conclusion on the end of your thought.
I agree with you, but everything is relative. It's expensive to produce a custom microprocessor, but it's cheaper than it's ever been.
> Any nut-job with a few weeks of training can make a shitty website with PHP or Node.js and have it instantly accessible to the most of the English-speaking world.
The barrier to entry can be much lower than that. Someone without any programming experience could fork and deploy a Node service in 60 seconds if the tools were designed for that. I think you and I are just putting our parameters for "low" and "high" in different places. You are comparing Google (cathedral) to entry-level programmers (bazaar). I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).
Two totally separate conversations.
My impression is that it's more expensive these days, which is why we don't see as many startups like MOS or Acorn, and see instead partnerships between larger companies. It also seems less likely for anyone producing an ASIC to get funded in the first place these days. I couldn't find good data to settle the cost issue, though.
> I am comparing a random engineer in your company (cathedral) to one of your customer support staff who is requesting a copy change (bazaar).
I don't understand this argument. I'm not sure what "copy change" means in context, and I don't know how customer support relates to the discussion.
I guess the main point I was trying to make was that the tooling for bazaar-style development is at your fingertips from the moment you sit down at a computer, but the cathedral is harder to make and the publicly available tools aren't as good.
The fact is, even in the bazaar model where the barrier is low, when does customer support make code changes? I'm talking here about instances where customer support for open-source projects exists.
In all my decades of programming, I have never met a non-professional with good, let alone better sense of requirements. A layman does not think in terms of details; they think in terms of abstractions, often in terms of castles in the sky. The problem is that computers are the exact opposite of abstractions and castles in the sky: exact, unforgiving, and dumb.
In fact, in all my decades of programming and working with computers, in my journeys across two continents, the number of professionals with a good sense of requirements I have met can be counted on the fingers of one hand. If that is not disheartening, I do not know what is. It's emotionally and psychologically devastating to me personally. It's extremely depressing to even think about it. What does it say about our profession?
As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model:
write programs which work with other programs; write programs with the notion that the output of your program could very well become another program's input. Write programs which accept ASCII input from other programs, for that is a universal interface. Be liberal in what you accept, and conservative in what you send.
> As for writing tools for ourselves, learn UNIX, and then you'll learn of the UNIX programming model
And then learn some history, understand how UNIX actually was a huge step backwards for computing and how we utterly fucked up the industry. Modularization is fine, and programs that work with other programs are great (for many definitions of "program", not just "UNIX process"). However, unstructured text communication is a waste of resources and a cesspool of bugs, and we knew better in the past. We're regaining some modicum of sanity with the lightweight structured text formats of today, but it's sad we had to take a decades-long detour to rediscover that.
As for unstructured text communication, say what?!? Every good UNIX engineer knows: build in a -m switch for versioned machine readable output, and if possible, make that output a stable interface. That's clear, at least to me. That isn't clear to you?
And I hope by structured text, you don't mean garbage like JSON, one of the most inconsistent and idiotic formats I have ever seen?
Hopefully you also don't mean XML, which is terrible to parse with standard UNIX tools like grep, sed, and AWK. More complications for negligible gains.
I'd prefer a type system so I can use these tools like a library. Most of them only work on piped data or files.
A recent example is that I needed to diff files. There are existing programs and I didn't want to reinvent the wheel, I just needed that particular wheel to build something else.
To use the existing programs I had to write to a file, which is too slow for my use case. It would be much easier if I could hand these tools a pointer to my in memory data structures and get the diff back in another structure.
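For the diff case specifically, this is roughly what Python's stdlib difflib offers: it works directly on in-memory sequences and hands back a structured edit script, no files or re-parsing of text output involved. A small sketch:

```python
import difflib

old = ["one", "two", "three"]
new = ["one", "2", "three", "four"]

# SequenceMatcher takes any two in-memory sequences and yields
# (tag, i1, i2, j1, j2) opcodes instead of text to be re-parsed.
sm = difflib.SequenceMatcher(a=old, b=new)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    print(tag, old[i1:i2], new[j1:j2])
```

That's the "hand the tool a pointer to my data structures, get a structure back" model, just library-shaped rather than /bin-shaped.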
This is one reason why we often see libraries replicating /bin. PowerShell did a good job of solving this (but was too flawed in other ways).
However, if you have more complex data to send, text may be problematic. And if you're going to send structured data via text, you need a standard, easily parsable format so that people can parse your data without having to roll their own, incredibly buggy, parser. JSON and DSV are both easy to parse, and so those are the formats people use, like it or not. And no, it's not inconsistent. It wouldn't be so easy to parse if it were.
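To make the "easy to parse" point concrete: consuming structured output from another tool is a couple of lines, versus a hand-rolled parser for ad-hoc text. A trivial sketch (the payload here is invented for illustration):

```python
import json

# Pretend this line arrived on stdin from another program's
# machine-readable output mode.
line = '{"tool": "grep", "args": ["-r", "TODO"], "exit_code": 0}'

record = json.loads(line)  # no custom parser to write or debug
print(record["tool"], record["args"], record["exit_code"])
```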
Also, I have never seen a tool with -m. Maybe it's because I'm running Linux.
You saw a mainframe. I saw a number that were quite different from each other. The parent said "a step back," though, not mainframes or a specific mainframe. There were many architectures that came before or after UNIX with better attributes as I list here:
If we're talking minimal hardware, let's look at two other approaches. One was Wirth's. They do an idealized assembly language to smooth over hardware or portability issues. It's very fast due to being close to bare-metal. Simple so amateurs can implement it. They design a safer, system language that's consistent, easy to compile, type-checks interfaces, can insert eg bounds-checks, and compiles to fast code. They write whole system in that. Various functions are modules that directly call other modules. High-level language, rapid compilation, and low debugging means that two people crank out whole system & tooling in about 2 years. Undergrads repeatedly extend or improve it, including ISA ports, in 6mo-2yr per person. A2 Bluebottle runs insanely fast on my 8-year-old hardware despite little optimization and OS running in a garbage-collected language. Brinch Hansen et al did something similar in parallel on Solo OS except he eliminated data races at compile time with his Concurrent Pascal. Later did a Wirth-style system on PDP-11 with similar benefits called Edison.
On functional end, various parties created the ultimate, hacker language in LISP. Important properties were easy DSL creation, incremental compilation of individual functions, live updates, ability to simulate any development paradigm, memory safety, and higher-level in general. The LISP machines implemented most of their OS's and IDE's in these languages. Imagine REPL-style coding of an application that would run very fast whose exceptions, even at IDE or OS level, could be caught, analyzed at source form, and patched while it was running. Holy. Shit. They targeted large machines but Chez Scheme (8-bit) and PreScheme (C competitor) showed many benefits could be had by small machines. Jonathan Rees even made a capability-secure version of Scheme which, combined with language safety benefits, made it one of most powerful for reliability or security via isolation. A project to combine the three concepts could have amazing potential.
So, yeah, UNIX/C was a huge step back in compiler speed/consistency, speed/safety tradeoffs in production, flexibility for maintenance, integration, debugging, reliability, security, and so on. Tons of architectures or languages better on each of these with some having easier programming models. That Thompson and Ritchie's perfect set of language features for C replacement were collectively an Oberon-2 clone (Go) is also an implicit endorsement of competing system. Plenty of nails in the coffin. Sociology, economics, and luck are reasons driving it. The tech is horrible.
Unix was and is successful because it was good enough, and far more platform-, language-, and technique-agnostic than the competition. Unix recommends a lot, but ultimately prescribes little.
You're missing the point: abstracting some machine differences behind a system module, then building on it in a safer, easy-to-compile language with optional efficiency/flexibility tradeoffs. Thompson and Ritchie could've done that given prior art, but they wanted a trimmed-down MULTICS with that BCPL language Thompson had a preference for. Around 5 years later, Wirth et al had a weak system to work on and did what I described with much better results in technical aspects. His prior work, Pascal/P, got ported to around 70 architectures ranging from 8-bit to mainframes in about 2 years by amateurs. Imagine if UNIX had been done the Wirth way and then spread like wildfire. Portability, safety, compiles, modifications, integrations... all would've been better. Safety features off initially where necessary due to huge impact on performance, but gradually enabled as a compiler option as hardware improved. As Wirth et al did. I included the Edison System reference because Hansen did Wirth-style on a PDP-11, proving it could've been done by the UNIX authors.
"Lisp was VERY expensive computationally, and was often highly unportable, being written in asm, and lispms all implementing their own version of the language: more elegant, less practical."
Choices of the authors. Similar to above, they could've done what the PreScheme and Chez people did in making an efficient variant of LISP with or without GC's. Glorified, high-level assembly if nothing else. PreScheme could even piggy-back on C compilers given they were prevalent at the time it was written. Took till the 90's before someone was wise enough to do that, although I may have missed one in LISP's long history. They also formally verified it for correctness down to x86, PPC, and ARM. Would've benefited any app or OS written in it later. Pulling that off for C took a few decades... using Coq and ML languages. :)
"Unix recommends a lot, but ultimately prescribes little."
My recommendations do that by means of being simple, functional or imperative languages with modules. Many academics and professionals were able to easily modify those compilers or systems to bring in cutting-edge results due to tractable analysis. UNIX is the opposite. It prescribes a specific architecture, style, and often language that made high-security or high-integrity improvements hard to impossible in many projects. The likes of UCLA Secure UNIX failed to achieve objective even on simple UNIX. Most of the field just gave up with result being some emulation layer or VM running on top of something better to get the apps in there. Also the current approach in most cloud models leveraging UNIX stacks. It wasn't until relatively recently that groups like CompCert, Astree, SVA-OS or Cambridge's CHERI started coming up with believable ways to get that mess to work reliably & securely. It's so hard people are getting PhD's pulling it off vs undergrads or Masters students for alternatives.
So, yeah, definitely something wrong with that approach given alternatives can do the same thing with less labor. Hell, ATS, Ocaml, and Scheme have all been implemented on 8-bit CPU's with their advantages. You can run OpenVMS, MCP, Genera LISP, or MINIX 3 (self-healing) on a desktop now directly or emulated. You can get the advantages I mentioned today with reasonable performance. Just gotta ditch UNIX and pool FOSS/commercial labor into better models. Also improve UNIX & others for interim benefits.
People just use monoliths in C instead & call it good design/architecture despite the limitations. Saying "it's good enough for my needs" is a reasonable justification for inferior technology. It's just not good to pretend it's something it isn't. When you don't pretend, you get amazing things like the BeOS or QNX desktop demos that did what UNIX/Linux desktop users might have thought impossible at the time. Since UNIX/Linux were "better." ;)
I do think that other people in the organization usually have a better sense of needs. And so if they could have a better understanding of the materials, they could do a better job of managing requirements than an engineer who looks at code all day, and is typically not observing the customers.
Your advice about UNIX is good. I try not to write modules larger than a few hundred lines of code. Anything bigger than that gets split into fully isolated modules with well defined interfaces.
Also: I'm sad you're sad. And I'm sad because I feel my tools isolate me from the people I would like to be working closely with. But I'm very optimistic about solving this problem. I think all of the building blocks are there to solve it, we just haven't made a concerted effort as a community because we're mostly under the impression that it's impossible for non-coders to understand code.
I don't understand this tools argument. It seems akin to saying the reason I find myself isolated from collaborating with particle physicists is due to the fact that I don't know how to operate a large hadron collider, while completely ignoring the fact that I can't even read a Feynman diagram.
Can we maybe update that to UTF-8?
1) I'm lazy
2) I am not worried about job security
Does anyone actually, honestly, incorporate "how can I keep the application of my skills here the right amount of inaccessible to others?" into their time spent on project? Shame on you.
For me, I don't like working in Excel, and I want to have better working relationships with designers, customer support people, business folks, etc. I want us to be able to work on the same projects together, which means not Excel because Excel is extremely limited and difficult to work with. Not difficult for a random person to use for some calculation. Difficult for me to get the things I want to get done in Excel when I'm trying to build arbitrary web apps.
What businesses want is not for software to be easier to build, that's what developers want. Businesses want software that can be more quickly used to solve their use cases. This is not an easy problem. That's why they hire engineers to solve it.
To be more clear: software is quite easy to build, but easy does not mean quick. All the hard problems are solved by a library or a framework. The computer science problems left over are too hard for even the professional developer to solve.
Software engineers should specialise in knowing a lot of already-written quality software, and they should be good at figuring out quick ways to reuse, combine, and adapt it to the business's use cases.
Shopping malls aspire to be beautiful (primarily on the inside), but they can never be as beautiful or as well-architected as cathedrals, in form or in spirit.
Shopping malls have a bigger overhead than bazaars, and are less flexible. But their overhead is much less, and their flexibility much greater, than that of cathedrals.
Shopping malls provide more foundation and infrastructure than bazaars do. A bazaar minus its stalls is just a dusty field. A shopping mall minus its shops still has multiple levels, maps / directories, elevators / escalators, parking, loading docks, etc.
It costs more to set up shop in a mall than in a bazaar, but your shop will be more trusted, you won't have to squabble for space with your neighbours every day, and you won't get blown away by the next storm.
Shopping malls are designed with dedicated spaces that are tailored to different businesses. A supermarket has very different needs compared to a shoe shop.
Shopping malls are bazaars with a tonne of design, engineering, and regulation thrown in. But they're still bazaars. They're still where you get your groceries.
Kind of like if the iOS App Store only had room for 15 apps, and Apple Inc decided which, and the roster was only updated every few years at most.
are you the one who writes "The Shopping Mall and the Bazaar" and leads us into a better era?
...or Yahoo, or Google, or IBM, or Hooli, or <Big-Fat-Tech-Giant>.
1. Christianity, the group who builds Cathedrals, is probably the most profitable organization in the history of mankind.
2. Bazaars blow away when the wind picks up. If too many people show up, things start falling down. Cathedrals, and their close cousins, castles, last centuries.
3. I think Bazaars have their place, when you really need to ramp up something to show. We used to call that prototyping. If it gets past that, you gotta build it right eventually.
I think AutoDesk did this when Inventor came along, but I am not sure. A big re-write if ever there were one.
Like someone else said about Python, "it's a cathedral where it counts".
I don't need to be the richest organization ever. I'd be happy for 1000th place and those billions rather than overreaching for number one and the risk that entails.
Is that even a good thing? Aside from being beautiful artistic and historical works, I think that Cathedrals have outlived their usefulness. Just look at St. Patrick’s in NYC - that thing cost 177 million USD over 3 years just to restore it. I mean, obviously it's nice, but in terms of functionality it's a huge waste. You could build another 2 Lakewood Churches (each with a capacity of 16k people), using modern steel building techniques, just for the restoration costs of St. Patrick’s.
I can see the case for over-engineering on things that are effectively "solved" problems (hashing algorithm implementations, knife design, non-electrical hand tools, JSON parsers...) but when it comes to complex systems like buildings or operating system architectures, I don't think we're at a point of stability where anything should be expected to last centuries (or more than a few decades, in the case of software).
We're still seeing fundamental shifts in the assumptions that these systems are built on - whether it's the Cathedral that couldn't have possibly foreseen its stone arches being replaced by steel girders or the early mainframe OS that was designed before cheap computer clusters became the norm.
Business problem spaces don't last centuries. Algorithms do.
If you wanted tabs in Android prior to 13, you'd set up a TabHost with a LocalActivityManager ( https://developer.android.com/reference/android/widget/TabHo... ).
Then in version 11 ActionBar.Tab is added and LocalActivityManager is deprecated in version 13.
By version 21 they became bored with that and deprecated it ( https://developer.android.com/reference/android/app/ActionBa... ).
You wouldn't know it was deprecated though as the most current training documentation still recommends using this now-deprecated method ( https://developer.android.com/training/implementing-navigati... ). All you are left with is a pointer to a vague, ambiguous page in the context of the deprecation method. Welcome to the cathedral of Android developing.
BTW the release dates for these APIs:
API 11 - February 2011
API 13 - July 2011
API 21 - November 2014
The current most up-to-date tutorial on their web site is chock-full of recommendations for things they have already deprecated.
And what about customizing a Spinner's font and background color and popupBackground color and putting a non-standard element at the top?
And the amount of time it took before Android got percentage based spacing in layouts?
And the fun of remembering what UI attribute-values like android:gravity="center_horizontal", android:layout_centerHorizontal="true" and android:layout_gravity="center" mean?
And the awkwardness of using non-standard typefaces and using things like the RecyclerView and its weird Adapter?
And the fact that for some reason they used an XML element named "layout" when they implemented their data binding feature?
And the fact that some attribute names are camelCased while others are underscore_cased?
And the ease with which memory leaks are introduced by inadvertently closing around context instances because you need context instances all over the place because the Activity is a god object?
It's all so very ... not ... like ... a cathedral.
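To make that gravity complaint concrete, here is a hedged sketch of two of the "center" spellings in a layout file; the widget ids and text are made up, but the attributes are the framework's own. (The third spelling, android:layout_gravity, applies when the parent is a LinearLayout or FrameLayout rather than a RelativeLayout.)

```xml
<!-- android:layout_centerHorizontal positions the *view* within a
     RelativeLayout parent; android:gravity positions the *content*
     inside the view itself. Ids and strings here are illustrative. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- centers this whole TextView within the parent -->
    <TextView
        android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:text="centered view" />

    <!-- centers the text inside this full-width TextView -->
    <TextView
        android:id="@+id/subtitle"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/title"
        android:gravity="center_horizontal"
        android:text="centered content" />
</RelativeLayout>
```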
my hazy guess was that Google simply hired a bunch of extremely smart programmers and let them code whatever they wanted. but there is no equally smart architecture/design team to guide, organize and filter the work. there's just a build system.
maybe that's essentially the same as your theory.
Then we also have the whole story with the NDK: after having officially deprecated Eclipse three years ago, only now are they finally getting something comparable to the CDT.
The experimental Gradle plugin for the NDK still doesn't work that well, and it just got replaced by something else in the stable Gradle plugin, which is also kind of legacy support, because the way forward seems to actually be CMake. All because that is what CLion knows anyway.
And as usual for these things, it is documented across Android samples, Git commit messages, and blogs.
For more, see this perceptive comment: https://news.ycombinator.com/item?id=12251705
Not to mention the remains of NextStep. Also very cathedral-y.
Edit: thinking about it, I wonder why Apple didn't keep using that name/brand. It's obviously there all over the API. Was the NextStep name/brand tainted as a doomed technology at the time?
The amount of productivity available to Mr. Kamp for free today is conservatively double or triple that available in 1999. Databases, web frameworks, scale know-how, IDEs, hosting platforms, the list goes on.
He harkens back, sadly, to an era in which codebases like Genuity Black Rocket cost $100k in licensing, and ran on $30k/month Sun servers. Seriously.
Languages are faster, development times are shorter, and chips are WAY faster. And, code can be pushed out for tinkering and innovation onto github for free. Combine that with his estimate that we have 100x more people in computing, and the combination is a riot of creativity, crap, fascinating tech and everything in between.
The bazaar is messy, but I'm not aware of any solid critiques which show cathedrals are more efficient at the multiples-of-efficiency kind of gains we get from legions of self-interested, self-motivated coders.
The article isn't about efficiency, it's about quality. The assertion is "Quality happens only when someone is responsible for it."
This is due to Moore's law, not the software design choices that the article bemoans. Those $30k/month Sun servers were many times faster and cheaper than the earlier machines they replaced as well.
We've had software and hardware gains, massive ones, and they compound.
I have to disagree. Compilers may have gotten a bit better at making faster binaries, but languages, as in new languages, are increasing in expressiveness and safety, and very rarely in efficiency. Go and Rust are not faster than C or C++, and likely never will be (for one thing, C has decades of lead time). Go and Rust may be faster than C was 20 years ago, but that doesn't matter.
(And yes, sometimes, it's faster. Today. Not always! Usually they're the same speed.)
As steveklabnik noted, that is old data (which you would normally be able to see from the date-stamp in the bottom-right corner, but that's been hidden).
This web page is updated several times a month, and presents the charts in context --
(You might even think that you can tell which language implementations don't have programs written to use multi-core and which do.)
For example, here's a screenshot I took a few months ago: http://imgur.com/a/Of6XF
or today: http://imgur.com/a/U4Xsi
Here's the link for the actual programs: http://benchmarksgame.alioth.debian.org/u64q/rust.html
Today, we're faster than C in one program, very close in most, and behind where SIMD matters.
> But C has decades of lead time.
As an aside: As someone who has used LLVM to build a compiler, it doesn't quite work that way, yes rust has access to those gains, but it may not be able to effectively use them (due to differing assumptions and strategies).
Not generally, no. Maybe the popular ones become so, but that's mostly by rediscovering the languages of old, which had better safety and more expressive power.
I'm not sure how you can say Lisp is a cathedral; it's not even "a" anything. Common Lisp, Racket, Clojure, Emacs Lisp, etc., many of which are themselves bazaars. Ruby, for another example, may have started as one person's vision, but now the canonical implementation is a big multisourced effort, and there are other implementations with lots of uptake that aren't directed or blessed by the mainline Ruby.
You mentioned Common Lisp - it's a great example of a cathedral. A language carefully designed by a committee, which took into consideration all the previous Lisps that were in popular usage. You can tell there was a lot of thought behind the process.
As for Emacs and the bazaar, I think this is a good case study of good and bad aspects of bazaars. On the one hand, you have an incredibly flexible tool, which turns it into a perfect test environment optimizing workflow with text-based tasks. You have people writing Emacs modes for anything including kitchen sink, and it turns out many of those experiments offer superior workflow than standard, dedicated applications (especially when it comes to interoperability and maintaining focus/flow).
On the other hand, Emacs often requires you to hot-patch stuff here and there, and its language support is usually worse than that of a cathedral-like IDE dedicated to a particular programming ecosystem. And I say it as an Emacs lover. I still prefer Emacs to IDEs, but that's because of the flexibility benefits, which are unparalleled. But I'm not deluding myself that Emacs has better support for Java than IntelliJ, or better support for PHP than Eclipse, or whatever. For language ecosystems requiring complex tools to back them up, it's a PITA to set up your working environment in Emacs. Hence the negative side of bazaar - you don't get as much focused effort to make something of high quality.
Common Lisp was designed as a unified successor to Maclisp, in response to an ARPA request.
Not to Scheme, Interlisp, Lisp 1.6, Standard Lisp, Lisp 2, LeLisp, ....
Scheme was further developed. Interlisp died, Standard Lisp had a Portable Standard Lisp variant and then mostly died. Lisp 2 was dead before, LeLisp died later.
The core of Common Lisp was designed in 1982/1983, decided mostly by a small team of Lisp implementors (those had their own Maclisp successors) with a larger group of people helping out.
1984 a book was published on the language and implementations followed.
Standardization then came as a more formal process later with goal of creating an ANSI CL standard - again it was mostly US-based, paid by ARPA. Areas were defined (language clean-up, objects and error handling), .... Proposals were made (like Common LOOPS by Xerox) and then subgroups implemented and specified those (CLOS, ...).
> You can tell there was a lot of thought behind the process.
There were a lot of people involved. Not just the X3J13 committee. It was also a community effort at that time.
JK. They might starve to death before they finish.
The greatest thing about CL is that it has so many features that you can use to make a CL-lover look like a deranged nutbar.
You don't need to know all of it in detail. It's good to have an overview and look up details as needed.
In a numerics library, I don't need to know every function in detail. I just look it up on demand.
Hyperbolic tangent for complex numbers? I don't know the details. Learning all is fruitless. When I need it, I look it up.
> They might starve to death before they finish.
Teach Yourself Programming in Ten Years. Why is everyone in such a rush?
Java JEE in detail? Oops.
The Haskell type system in detail? Ooops.
> The greatest thing about CL is that it has so many features that you can use to make a CL-lover look like a deranged nutbar.
Wait until you get 'Scheme R7RS large'. Ever looked at recent languages specs for languages like Scala, Haskell, Java 8, Fortress, Ada, the coming Java 9, Racket, C++, ...
One thing you need to learn about Lisp languages: the language is not fixed. It can be an arbitrary amount of features, even user supplied.
What you need to learn is not all the features of one construct. What you need to learn is how to learn the details incrementally on demand, while having an overview of the concepts.
If you think LOOP is large and complicated, have a look at Common Lisp's ITERATE: even more features and more powerful. Even designed to be user extensible.
And it's totally great.
But you have to admit that there's something a little...off...about having an iteration sub-language with a 43 page (in PDF) manual. And I mean sub-language literally; one of the advertised features of ITERATE is that it has a more Lispy syntax, so your editor has a hope of indenting it correctly.
LOOP is actually not a CL specific language construct and did not originate there. It was invented by Warren Teitelman for Interlisp. There it was called FOR. From there it was ported/reimplemented and extended to several Lisp dialects.
Sure, but I didn't say Common Lisp was a bazaar, either. I said it didn't make sense to say "Lisp" was a cathedral, because there are many Lisps, and some of them are bazaars.
Most of the original developers have long since moved on, there are design problems, various teams and managers rebuild or duplicate work, and management sometimes imposes big changes just before release.
Software quality is hard to judge from the outside, and takes longer to build.
Just take your Unix mentality and make a few substitutions:
* gcc -> cl.exe
* ar -> lib.exe
* ld -> link.exe
* make -> nmake.exe
* libfoo.so -> foo.dll
And there you have it, the world that "didn't exist before .NET" ... This is crazily ironic, because Windows had DLLs at a time when shared libraries were not so much a thing on Unix. Not to mention things like COM, which are all about creating de-coupled components.
I haven't seen anything since that allowed such decoupled development
COM is/was a rat's nest of confusing and frequently duplicated APIs with insanely complicated rules that by the end really only Don Box understood. CoMarshalInterThreadInterfaceInStream was one of the simpler ones, iirc. COM attempted to abstract object language, location, thread safety, and types, and then the layers on top tried to add serialisation and documented embedding too, except that the separation wasn't really clean because document embedding had come first.
Even just implementing IUnknown was riddled with sharp edges and the total lack of any kind of tooling meant people frequently screwed it up:
The modern equivalent of COM is the JVM and it works wildly better, even if you look at the messy neglected bits (like serialisation and RPC).
Some things were done less well than these core ideas: registration done globally in the registry, anything to do with threading, serialization, IDispatch.
I think in many situations you can take lessons from the good parts and try to avoid the bad.
I don't see how pointing out common bugs helps your argument though. You can write bugs in any paradigm.
IUnknown is a classic case of something that looks simple but in fact a correct implementation is not at all trivial, yet COM developers were expected to get it right by hand again and again. COM itself didn't help with it at all, so the ecosystem was very dependent on IDE generated code and (eventually) ATL and other standard libraries.
None of the things you highlight were good ideas, in my view, although probably the best you can do in C.
Actually it is the WinRT introduced in Windows 8.
Basically it is the original idea of .NET, which was called COM+ Runtime, before they decided to create the CLR.
WinRT is nothing more than COM+ Runtime, but with .NET metadata instead of COM type libraries.
Also since Vista, the majority of new Windows native APIs are COM based, not plain C like ones.
As for COM, it seems to me that the most common reason why something developed by some ISV ships as a separate DLL is that it's a COM component.
Moreover, what a sad world for our profession when an IDE doesn't do something for you and people start to doubt it exists. Reminds me of kids on here saying a programming language without a package manager might as well not exist at all.
Microsoft did really fuck it up historically (IE overwriting shell32 comes to mind) but the mechanism didn't have problems when applied by the right hands (sometimes a first party is the wrong hands :P)
If you want to talk problems in the mechanism, ask me sometime how it's possible for a Win32 process to host multiple incompatible malloc implementations in the same address space.
I didn't use Windows enough to know more about it than that let you have multiple copies of the libraries. Did it let you upgrade a compatible library?
"If you want to talk problems in the mechanism, ask me sometime how it's possible for a Win32 process to host multiple incompatible malloc implementations in the same address space."
I think I'll pass, thanks. :-)
It is discussions like this which make me truly admire Douglas Adams for his insights and ability to express them.
For instance, when I read through the debate here, I can't help notice how many of the arguments are really variations of "It's a bypass! You've got to build bypasses! Not really any alternative."
'That is the sorry reality of the bazaar Raymond praised in his book: a pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT "professionals" who wouldn't recognize sound IT architecture if you hit them over the head with it.'
I was hoping for some kind of expansion or attempt at a solution here (which of course would be non-trivial).
There's still too much money to be made on kludges for that to happen.
It's Akerlof's "Market For Lemons" writ large. Users can't assess the quality of software before they buy it and sink ages of their own time into learning it. Often users can't assess quality problems even after they've bought it. So the market isn't going to reward quality.
(The original paper was about cars; now we have software in our cars the problem is twice as bad. VW 'defeat devices' and Toyota 'unintended acceleration' passim).
At the very least, I would love to see companies created around popular open source tools and verticals to create designed end-to-end experiences. Download, double-click, start coding, and see something on the screen.
You just have to get out of the churn-for-churn's-sake cesspools. There are high-quality, stable software stacks out there, where the Cambrian explosions and rediscovery of ideas from a generation ago have already passed.
I don't use any of 'em. If you can afford to give the finger to those not running in a near-POSIX environment, you can just use makefiles or npm scripts: write your code, and run shell scripts to build it, the way God, Doug, Dennis, Brian, and Ken intended.
As for good dev environments, I will not leave my beloved emacs (C-x C-e in geiser-mode means that you can run your software as you write it, and I love it: Most dynamic languages have something similar), but that would intimidate newbies. Gedit and a shell is probably the best environment to start them with: It's about as simple as you get, and every developer is expected to have a rudimentary knowledge of shell, so best to start early.
Laughed for 5 minutes at this. So true! Somehow we are expected to take this in stride.
Even if you do get online, don't forget to configure the MITM CA cert!
Next up, apps that try to execute from %LOCALAPPDATA% (Squirrel installers). This is blocked by most AppLocker configs.
Isn't "enterprise" computing fun?
It is true that configure scripts probably do some useless things ("31,085 lines of configure for libtool still check if <sys/stat.h> and <stdlib.h> exist, even though the Unixen which lacked them had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code", etc.). But then what is the alternative? Every programmer who wants to release some software writing a configure module by hand each time? This is called code reuse, and yes, it's not perfect, but it saves time by not reinventing the wheel again and again, by reusing something that is stable and has been around for some time. Maybe it generalizes over many architectures and does some useless things, but then again, who cares about an extra 5-10 seconds of the "configure" command when you are covered for all those strange corner cases it already handles?
You could start out removing the autocrap checks for <sys/stat.h> and <stdlib.h>.
Then you could eliminate all the other autocrap checks which come out one and the same way on every single OS in existence.
And in all likelihood, you would find out that you don't actually need autocrap at all, because the remaining two checks can be done with an #ifdef instead.
cp Makefile.template Makefile.`uname` && vi Makefile.`uname`
People who'd "kept it simple" like you suggest were the bane of my life. I spent more time debugging each of those builds than all of the builds that used actual tools combined.
(Ironically enough "all projects must use autotools" seems like a quite cathedrally attitude though)
There are of course occasional glitches, but not like those ones.
1. That just pushes the issue back to the language implementors. Ever discover that your favorite language is unavailable on your new platform?
2. You get restricted to the least common denominator. That one OS feature that would make your app 10x faster? Sorry.
Do you think C had any major meaning outside UNIX, before it got largely adopted in the industry?
It was just yet another language with dialects that had some kind of compatibility, with Small-C being the most used one.
For the second point, also not an issue unless the language doesn't support FFI.
The beauty of runtimes, instead of bare bones C, is that they can be tuned to make the best of each OS APIs, while keeping the code portable.
This is nothing new, it was quite common outside AT&T.
There isn't a standard UNIX way to do GUIs, Bluetooth, NFC, GPGPU, touch screens, cryptography, printers, medical devices, ...
All that UNIX has is POSIX, http://pubs.opengroup.org/onlinepubs/9699919799/ , which only focus on CLI and daemons as applications.
Anything else are not portable APIs that don't have necessarily to do with UNIX.
It doesn't matter that the kernel is UNIX like, if what is on top of it isn't.
And anyone that only knows GNU/Linux as UNIX, should read "Advanced Programming in the UNIX Environment" from W. Richard Stevens/Stephen A. Rago, to see what actually means to write portable UNIX code across POSIX implementations.
GUI is dead; if your application doesn't run on the server and display the results either on the command line or in a web browser, you're doing it wrong.
So you use VI and Emacs on Android, generate postscript and troff files, configure /etc/passwd and /etc/init.d
What does your Android .profile look like?
Yes, z/OS has a POSIX subsystem; it also doesn't support anything besides CLI, daemons, and batch processing.
Mac OS X is a certified UNIX; however, none of the APIs that matter (you know, the ones written in Objective-C and Swift) are UNIX.
> GUI is dead; if your application doesn't run on the server and display the results either on the command line or in a web browser, you're doing it wrong.
Better let all of those that earn money targeting infotainment systems, medical devices, factory control units, GPS units, iOS, Android, game consoles, smart watches, VR units, POS, ... know that they are doing it wrong.
I don't use Android, because it's a ridiculously hacked-up version of GNU/Linux (as if being based on GNU/Linux isn't bad enough).
Have you spawned a shell on it? The filesystem is a royal mess, the likes of which I've never seen before. Could I run vi and groff and even SVR4 nroff on it? Yes, if I wanted to waste my time with it, I could.
I didn't touch .profile because I don't care for bash one bit, but it was there.
However, in this context, it's still UNIX. A hacked-up, ugly UNIX severely mutilated to run on mobile telephones and tablets, but conceptually UNIX nevertheless (honestly, I have never seen anything so hacked-up and mutilated as Android, and you can bet that in 30+ years of working with computers, one sees all kinds of things).
People don't want a clunky computer any more; except for computer people, I don't know anybody from general population who has one. I'm offended that we're even wasting time discussing desktop anything!
Also I don't see anything here on these APIs,
That relate to these ones:
You mean they carry their portable UNIX servers in their pockets with them. Since they all come with a web browser, there's your application's or your server's front end.
and use this thing called apps on them.
I have a few of those on my mobile UNIX server as well. Stupidest thing I've ever seen or used, "apps". What for, when they could have used a web browser to display their front ends, or could have run on a backend server and just sent the display to the web browser? Most of those "apps" I use won't function without an InterNet uplink anyway... pure idiocy.
Which ad absurdum makes UNIX APIs irrelevant for cloud computing.
Hell, I built one for a guy's gaming rig a little while back. That replaced his hand-me-down from a company designing & installing sprinkler systems. Why did he have that? They'd just bought new desktops for all their CAD users. Lots of CAD users out there probably have one too. ;)
Plenty of RAM? Yes, but on supercomputers. My machines were sgi Origin 2000's and 3800's running a single compute intensive application doing finite element analysis and using 16 GB of RAM, across all the CPU's in the system. A single calculation would usually take a month.
On the desktop, you couldn't be more wrong: I was part of the cracking / demo scene, and we literally counted clock cycles in order to squeeze every last bit of performance in our assembler code, me included.
One can also look at the numbers. Last year, over 17 million PCs were sold in the US. Think the buyers were really all computer people? Even with a three-year refresh cycle, a low-end estimate, that would be around 50 million computer people in this country buying desktops over three years. Think they're really that big a demographic?
But if you look at the number of PCs sold year over year, the number is dwindling at a rate of roughly 15% - 18% per year. Look, for example, under the "Global Computer Sales" column, here:
The average layperson doesn't want a computer any more, and the sales reflect that. A tablet or a mobile telephone with a web browser is pretty much all they need, and the web can and does now deliver pretty much any kind of application they could ever want. And that's precisely where most of the desktop sales were. Professionals using computer-aided design, and people like you and me, are few and far between in comparison.
On an sgi-related note, I myself owned several Octanes and even an sgi Challenge R10000 (with a corresponding electricity bill). I must have torn down and rebuilt that Challenge four or five times, just for fun. My primary workstation for years (which I fixed and put together myself) was an sgi Indigo2 R10000, with an hp ScanJet II SCSI scanner, a 21" SONY Trinitron, and a Plextor CD-RW SCSI drive, back in the day when CD-RW was "the thing". With 256 MB of RAM when most PCs had something like 16 or 32 MB, it was a sweet setup. Ah, IRIX 6.5, how much I miss thee...
Anyway, the answer to your question is the iPad Pro from Apple. It runs an operating system called "iOS", which is a heavily customized FreeBSD userland on top of a custom CMU Mach-derived kernel. And it's UNIX03 compliant. UNIX! It's everywhere!
I'm aware of it, it's not good enough. Its UI is terrible when you need to work with multiple applications, it's a pain to customize anything and even more of a pain to run your own programs.
What does your .profile on iOS look like?
For me, it's closer to a minute. "configure" is good enough that it does the job, and it's hard to replace. "configure" is bad enough that I loathe it with emotions that words cannot describe. Its design is terrible. It's slow. It's opaque and hard to understand. It doesn't understand recursion (modular code? pshaw!)
automake is similarly terrible. I looked at it 20 years ago and realized that you could do 110% of what automake does with a simple GNU Makefile. So... that's what I've done.
I used to use libtool and libltdl in FreeRADIUS. They gradually became more pain than they were worth.
libtool is slow and disgusting. Pass "/foo/bar/libbaz.a", and it sometimes magically turns that to "-L/foo/bar -lbaz". Pass "-lbaz", and it sometimes magically turns it into linking against "/foo/bar/libbaz.a".
No, libtool. I know what I'm doing. It shouldn't mangle my build rules!
Couple that with the sheer idiocy of a tool to build C programs which is written in shell script. Really? You couldn't have "configure" assemble "libtool.c" from templates? It would only be 10x faster.
And libltdl was just braindead. Depressingly braindead.
I took the effort a few years ago to replace them both. I picked up jlibtool and fixed it. I dumped libltdl for plain dlopen(). The full build for 100K LoC and ~200 files now takes about 1/4 of the time, and most of that is running "configure". Subsequent partial builds take ~2s.
If I ever get enough time, I'll replace "configure", too. Many of its checks are simply unnecessary in 2016. Many of the rest can be templated with simple scripts and GNU makefile rules.
Once that's done, I expect the build to be ~15s start to finish.
The whole debacle around configure / libtool / libltdl shows that terrible software practices aren't new. The whole NPM / left-pad issue is just "configure" writ large.
Unfortunately, there are two problems.
1. They were all operating under different requirements.
2. They were all absolutely convinced that they were the best in the business and that they were right.
As a direct result, those of us who got to deal with more than one of the resulting systems want to beat them all to death with a baseball bat with nails driven into the end.
I don't mean that the reason to use configure is bad. There are many different systems, and being compatible with them all requires some kind of check / wrapper system.
I mean that the design of "autoconf" and the resulting "configure" script is terrible. Tens of thousands of lines of auto-generated shell scripts is (IMHO) objectively worse than a collection of simple tools.
See nginx for a different configure system. It has a few scripts like "look for library", and "look for header file". It then uses those scripts multiple times, with different input data.
In contrast, configure uses the design pattern of "cut & paste & modify". Over and over and over and over again. :(
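To make the contrast concrete, here's a minimal sketch of the nginx-style approach: one small reusable check, fed different inputs, instead of a freshly cut & pasted blob per test. The function name and the list of headers are mine, not nginx's:

```shell
# One reusable check instead of cut & paste & modify: ask the compiler
# whether a header preprocesses cleanly.  CC can be overridden -- the
# exact knob that is so hard to find in generated configure scripts.
have_header() {
    printf '#include <%s>\nint main(void) { return 0; }\n' "$1" \
        | ${CC:-cc} -x c -E - >/dev/null 2>&1
}

# The same few lines of logic, reused with different input data:
for h in stdio.h stdlib.h unistd.h; do
    if have_header "$h"; then
        echo "have $h"
    else
        echo "missing $h"
    fi
done
```

The whole "library" is one function; everything else is data fed to it, which is the inverse of configure's generated ten-thousand-line scripts.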
The new thing seems to be generating configure from CMake, which requires a Ph.D. to figure out how to override $CC.
echo "This configure script requires a POSIX-compatible shell"
echo "such as bash or ksh."
echo "THIS IS NOT A BUG IN LIBAV, DO NOT REPORT IT AS SUCH."
# wc -l ac* configure.in configure
Apparently the bash requirement isn't too bad since it works on Windows and Plan 9.
It actually is kind of silly that you can't depend on this stuff being abstracted, but must instead test each feature individually rather than asking a reference on a given system.
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36", indeed.
The build complexity is one of the reasons I stay away from Java: If these people think they need XML for builds, what other needlessly complex horrors have they perpetrated? And that sort of thing.
Do you have your buildscript template on github?
M4 and sh are very concise languages. Nonetheless autotools is orders of magnitude more complex than Maven. You really can't compare at all.
At any rate, if you want a more concise syntax there is Gradle (but it's a bit slower, as it's actually executing a real scripting language) and, perhaps as a nice middle ground, a thing called Polyglot Maven, which is the same build engine but with a variety of non-XML syntaxes. The YAML one is quite nice:
That way you get simple syntax and a simple model, but still with lots of features.
Limited build tools are a good thing.
One of the guys working on Redox OS made a library called cake which provides macros for a Makefile-style script: https://github.com/ticki/cake
Let's say we have a make format called dmake. It invokes $CC with the specified arguments for each file, and links them together into a binary/so/whatever, putting it into the build directory and cleaning artifacts. Okay.
Now say that you start a new project in rust. Well, crap, dmake doesn't work. You have to use rdmake, which is built by different people, and uses a more elegant syntax - which you don't know.
Then you write Haskell, and have to use hdmake - which of course is written as a haskell program, using a fancy monad you don't know, and python has to use pydmake, and ruby has to use rbdmake, and scheme has to use sdmake, and lisp has to use ldmake, and asm has to use 60 different dmakes, depending on which asm you're using.
Instead, we all use make. Make allows for arbitrary code to be executed, so no matter what programming environment you use, you can use a familiar build tool that everybody knows. Sure, java has Ant, Jelly, Gradle and god knows what else, and node has $NODE_BUILD_SYSTEM_OF_THE_WEEK, but even there, you can still use make.
That's the power of generic tools.
Let's try it then. The declarative build system has a formal spec with types, files, modules, ways of describing their connections, platform-specific definitions, and so on. Enough to cover arbitrary systems while also being decidable during analysis. There's also a defined ordering of operations on these things, kind of like how Prolog has unification or old expert systems had RETE. This spec could even have a reference implementation in a high-level language & a test suite. Then each implementation you mention, from rdmake to hdmake, is coded and tested against that specification for functional equivalence. We now have a simple DSL for builds that checks them for many errors and automagically handles them on any platform. Might even include versioning with rollback in case anything breaks due to inevitable problems. A higher-assurance version of something like this:
Instead, we all use make. Make allows for arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following Worse is Better approach. Gives us great threads like this. :)
>You could've just as easily said the common subset of SQL could be implemented extremely differently in SQL Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages over imperative C APIs for database engines. Funny stuff.
No, that's not my point, my point is that a build tool that meets parent's requirements would necessarily be non-generic, and that such a tool would suffer as a result.
>Instead, we all use make. Make allows for arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following Worse is Better approach. Gives us great threads like this. :)
Worse is Better has nothing to do with this. Really. Make is very Worse is Better in its implementation, but the idea of generic vs. non-generic build systems, which is what we're discussing, is entirely orthogonal to Worse is Better. If you disagree, I'd recommend rereading Gabriel's paper (that being "Lisp: Good News, Bad News, How to Win Big", for the uninitiated). I'll never say that I'm 100% sure that I'm right, but I just reread it, and I'm pretty sure.
A build system is essentially supposed to take a list of things, check dependencies, do any platform-specific substitutions, build them in a certain order with specific tools, and output the result. Declarative languages handle more complicated things than that. Here's some examples:
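The "check dependencies, build them in a certain order" half of that description is literally a topological sort, and POSIX even ships one as tsort(1). A toy illustration, with file names invented for the example:

```shell
# Each input pair means "left must be produced before right".
printf '%s\n' \
    'bar.c bar.o' \
    'baz.c baz.o' \
    'bar.o quux' \
    'baz.o quux' > deps

# tsort emits the nodes in a valid build order; quux necessarily
# comes out after everything it depends on.
tsort deps
```

A declarative build tool is essentially this plus substitution rules and the platform-specific glue, which is why the declarative core is so small.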
I also already listed one (Nix) that handles a Linux distro. So it's not theory so much as a question of how much more remains to be solved/improved, and whether methods like those in the link can cover it. What specific problems in building applications do you think an imperative approach can handle that something like Nix or the stuff in the PDF can't?
"It's fully generic."
It might help if you define what you mean by "generic." You keep using that word. I believe declarative models handle... generic... builds, given you can describe about any of them with a suitable language. I think imperative models also handle them. To me, it's irrelevant: the issue is that declarative has benefits & can work to replace existing build systems.
So, what's your definition of generic here? Why do declarative models not have it in this domain? And what else do declarative models w/ imperative plugins/IO-functions not have for building apps that full, imperative model (incl make) does better? Get to specific objections so I can decide whether to drop declarative model for build systems or find answers/improvements to stated deficiencies.
Add to that Myreen et al's work extracting provers, machine code and hardware from HOL specs + FLINT team doing formal verification of OS-stuff (incl interrupts & I/O) + seL4/Verisoft doing kernels/OS's to find declarative, logic part could go from Nix-style tool down to logic-style make down to reactive kernel, drivers, machine code, and CPU itself. Only thing doing arbitrary execution, as opposed to arbitrary specs/logic, in such a model is what runs first tool extracting the CPU handed off to fab (ignoring non-digital components or PCB). Everything else done in logic with checks done automatically, configs/actions/code generated deterministically from declarative input, and final values extracted to checked data/code/transistors.
How's that? Am I getting closer to replacing arbitrary makes? ;)
Each filetype is accepted by a program. That program is what we'll want to use to compile or otherwise munge that file. So, in a file somewhere in the build, we put:
*.c:$CC %f %a:-Wall
*.o:$CC %f %a:-Wall
The actual DMakefile looks like this:
quux:bar.o baz.o:-o quux
This is something I came up with on the spot, and there are certainly holes in it, but something like that could declarativise the build process. However, this doesn't cover things like cleaning the build environment. Although that could be achieved by removing the resultant files of all targets, which could be determined automatically...
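For what it's worth, a dry-run interpreter for that hypothetical DMakefile syntax fits in a few lines of shell. This is my own toy, matching the pattern:command:extra-args format sketched above; it only prints what it would run:

```shell
# Toy dry-run for the hypothetical "pattern:command:extra-args" rules.
# %f is replaced with the current file, %a with the rule's extra args.
# (No escaping of | inside substitutions -- it's a sketch, not a tool.)
dmake_dry_run() {
    rulefile=$1; shift
    for f in "$@"; do
        while IFS=: read -r pat cmd args; do
            case $f in
            $pat) printf '%s\n' "$cmd" | sed "s|%f|$f|g; s|%a|$args|g" ;;
            esac
        done < "$rulefile"
    done
}

# Example rule file and invocation (the %% is printf-escaping for %):
printf '*.c:cc %%f %%a:-Wall\n' > rules.dmake
dmake_dry_run rules.dmake foo.c bar.c
```

The actual execution, artifact tracking, and cleaning would go where the printf is, but even the dry run shows how little machinery the declarative core needs.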
Far as what I was doing, I was just showing they'd done logical, correct-by-construction, generated code for everything in the stack up to the OS, plus someone had a Prolog make. That meant just about the whole thing could be done declaratively and/or c-by-c, with the result extracted with basically no handwritten or arbitrary code. That's the theory, based on worked examples. A clean integration obviously doesn't exist. The Prolog make looked relatively easy, though. The Mercury language would make it even easier/safer.
All you have to do now is make sure your hands are the right hands.
Like Buddha said: right mind.
Building software is a programmatic process. No XML, please! We're decidedly not on Windows, and since I have had the misfortune of fitting such square pegs into round holes, please don't use XML for applications which must run on UNIX. It's a nightmare. It's horrible. No!!!
YAML doesn't need any special tools - it's ASCII and can easily be processed with AWK, for example.
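For example, pulling a single key out of a flat YAML config takes one line of AWK (the file contents are made up for illustration; this works for the flat "key: value" subset, not for nested YAML):

```shell
# A flat, one-level YAML config -- just "key: value" lines.
cat > app.yml <<'EOF'
name: frobnicator
port: 8080
debug: false
EOF

# Split on "colon plus optional spaces" and print the value for one key.
awk -F': *' '$1 == "port" { print $2 }' app.yml
```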
I don't know about you, but the last thing I want is to have to have a whole new set of specialized tools, just so somebody could masturbate in XML and JSON.
XML is a markup language. That means it's for documents, possibly for documents with pictures, perhaps even with audio. It's not and never was meant for storing configuration or data inside of it. XML is designed to be used in tandem with XSLT, and XSLT's purpose is to transform the source XML document into (multiple) target(s): ASCII, ISO 9660, audio, image, PDF, HTML, whatever one writes as the transformation rules in the XSLT file. XML was never meant to be used standalone.
If you really want to put the configuration into an XML file, fine, but then write an XSLT stylesheet which generates a plain ASCII .cf or .conf file, so its processing and parsing can be simple afterwards. XML goes against the core UNIX tenet: keep it simple.
Do you like complex things? I do not, and life is too short.
Of course, like any real programming language, it's hard to process with regex, but then again, I don't want to process makefiles with regex. And you might have some luck coaxing AWK or the SNOBOL family to parse it, and it would be far easier than doing the same with XML.
>please don't use XML for applications which must run on UNIX. It's a nightmare. It's horrible. No!!!
I'd disagree with you there. DocBook, HTML, and friends are all good applications of XML (or near-XML), doing what XML was designed for: document markup.
Seriously people, when you're writing a program in a language that has "Markup Language" in the name, does that not ring any alarm bells?
Yes, I said JSON. JSON is very easy to parse, and you can grab unique key/values, which are most of them, with this regex:
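The original regex didn't survive here, so this is my own stand-in for the same idea: a pattern that grabs the flat "key": "value" string pairs and ignores everything nested:

```shell
# Sample one-line JSON (made up).  The grep -o pattern pulls out only
# the simple string-to-string pairs; the array value is skipped.
json='{"name": "quux", "version": "1.2", "deps": ["foo", "bar"]}'
printf '%s\n' "$json" | grep -o '"[^"]*": *"[^"]*"'
```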
As for when your record spans multiple lines, with recursive structures: the previous regex is for extracting simple atomic data from a JSON file, which is usually what you want in these cases anyway. If not, the json(1) utility can, I believe, extract arbitrary fields, and composes well with awk, grep, etc.
Also, because JSON is so common, you get really good tooling for handling structured data by default, instead of kinda-okay tooling for 50 different slightly-incompatible formats. 10 operations on 10 data structures vs 100 operations on 1, and all that.
But for unstructured data, or for one-level key/value data, JSON is overkill. You can use DSV, like this:
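Something like this, say colon-delimited with one record per line (the contents are invented for the example):

```shell
# DSV: delimiter-separated values, passwd-style.  No parser needed.
cat > users.dsv <<'EOF'
alice:1001:/home/alice
bob:1002:/home/bob
EOF

# First field of every record:
awk -F: '{ print $1 }' users.dsv
```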
No, he suggested writing the code to be cross-platform so that configuration at compile time is unnecessary.
Building FreeBSD 7 ports on my Athlon felt like "forever and 2 more days" back then. If it is not possible to get rid of autoconf/configure with all their obsolete checks, can we at least PLEASE stop doing the same thing again and again, 220 times, once for each small package in an enormous dependency list? Caching, anyone?
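Caching does exist in autoconf, it's just almost never used: the -C flag writes results to config.cache, and a shared CONFIG_SITE file can pre-seed answers for every package on the box. A sketch (the cache variable names follow the standard autoconf ac_cv_* convention; exactly which ones are safe to seed is system-dependent):

```shell
# Point every configure run at one shared site file.  (Real setups
# usually keep this somewhere permanent, not in TMPDIR.)
export CONFIG_SITE=${TMPDIR:-/tmp}/config.site

# Seed it with answers that never change on this machine.
cat > "$CONFIG_SITE" <<'EOF'
ac_cv_header_stdio_h=yes
ac_cv_c_bigendian=no
EOF

# Each port's ./configure -C would now source this file and skip the
# corresponding checks instead of re-running them 220 times.
```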