Is it really 'Complex'? Or did we just make it 'Complicated'? [video] (youtube.com)
229 points by mpweiher on Jan 8, 2015 | 158 comments

One of the key points of the talk happens around the 32-37 minute mark, comparing the crew sizes of various ships. He makes the point that the core of development is really putting together the requirements, not writing code. Unfortunately we've put nearly all of the resources of our industry into engineering rather than science (tactics rather than strategies). What he stated very succinctly is that a supercomputer can work out the tactics when provided a strategy, so why are we doing this all by hand?

That’s as far as I’ve gotten in the video so far but I find it kind of haunting that I already know what he’s talking about (brute force, simulated annealing, genetic algorithms, and other simple formulas that evolve a solution) and that these all run horribly on current hardware but are trivial to run on highly parallelized CPUs like the kind that can be built with FPGAs. They also would map quite well to existing computer networks, a bit like SETI@home.
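For a sense of what that "evolve a solution" family looks like in practice, here is a minimal simulated-annealing sketch (toy cost function and parameters of my own choosing, not anything from the talk):

```python
import math
import random

def simulated_annealing(cost, neighbor, start, steps=10_000,
                        temp=1.0, cooling=0.999):
    """Evolve a solution: always accept improvements, and accept
    regressions with a probability that shrinks as the system cools."""
    current, current_cost = start, cost(start)
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, current_cost + delta
        temp *= cooling
    return current

# Toy run: minimize (x - 3)^2 with random local steps.
random.seed(0)
best = simulated_annealing(lambda x: (x - 3) ** 2,
                           lambda x: x + random.uniform(-0.5, 0.5),
                           start=0.0)
print(best)  # lands near 3
```

The inner loop is embarrassingly parallel across restarts, which is why this sort of thing maps so well onto FPGAs and SETI@home-style networks.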

Dunno about anyone else, but nearly the entirety of what I do is a waste of time, and I know it, deep down inside. That’s been a big factor in bouts with depression over the years and things like imposter syndrome. I really quite despise the fact that I’m more of a translator between the spoken word and computer code than an engineer, architect or even craftsman. Unfortunately I follow the money for survival, but if it weren’t for that, I really feel that I could be doing world-class work for what basically amounts to room and board. I wonder if anyone here feels the same…

That's exactly it. I sometimes wonder how many brilliant people have left this industry because they were similarly enlightened. (One prominent person in applied computer science recently told me she was considering going into medicine because she wanted to do something useful.)

It doesn't have to be like this. We have the brain power. But I feel we squander it on making silly websites/apps when we could be making history.

You know what medical professionals do? Get patients healthy so the patients can enjoy apps and sites.

I think the previous poster already identified the problem:

>I follow the money for survival

Maybe we can have a system where people can survive without money?

Nice, but that would remove an incentive for innovation.

> incentive for innovation

Do you think Alan Kay was incentivised by money to do all the stuff he talked about in this video?

OK - so Alan Kay worked on creating this great system. Is any knowledge from it used in a practical application today? Probably not, because

a. It's probably too soon.

b. It often takes boring work (and money) to transfer pure ideas into successful practical applications.

We were talking about incentives for innovation, not incentives for boring work. I agree that boring work should be incentivised by money.

I'd be interested in pursuing the course of: boring work should be eliminated through automation or obsolescence.

Who paid you for your comment?

Take a look at this thread -- scroll all the way to the bottom. What is the incentive for all of that?

I don't think anything says you can't use another incentive.

"Innovate or die" (literally) doesn't seem like the wisest philosophy to base life on this planet around.

See Keith Weslowski's recent split.

I suspect a similar calculus may have inspired Joey Hess's retreat to a farm.

I can think of others: jwz, Bret Glass, Cringely.

I know of one such person who, after a successful IT business career, moved to another country (mine) and became a neurosurgeon (at the absurdly low pay that doctors in my country get).

The comparison of the sailship, submarine and the airplane doesn't really hold, though.

A sailship travels for weeks or months, has not a single piece of automation and has to be maintained during the journey.

Similarly, the submarine is meant to travel for long periods of time, during which it has to be operated and maintained.

Both of those are designed to be self-sufficient, so they have to provide facilities for the crew, and the crew grows due to required maintenance, medical needs and battle stations.

The airplane travels for 16 hours tops and is supported by a sizable crew of engineers, ground control, and for passenger planes, cooks and cleaning personnel, on the ground.

Right. Warships try to be self-sufficient and able to deal with battle damage, which means a large on-board crew. The biggest Maersk container ships have a crew of 22. Nimitz-class aircraft carriers have about 3,000, plus 1,500 in the air units. US Navy frigates have about 220.

If you want a sense of how many people are behind an airline flight, watch "Air City", which is a Korean drama series about a woman operations manager for Inchon Airport. The Japanese series "Tokyo Airport" is similar. Both were made with extensive airport cooperation, and you get to see all the details. (This, by the way, is typical of Asian dramas about work-related topics. They show how work gets done. US entertainment seldom does that, except for police procedurals.)

None of this seems to be relevant to computing.

Are those really a uniquely Asian approach? Because the description sounds rather like the (earlier) BBC series Airport.

Drama vs documentary.

> ...I’m more of a translator between the spoken word and computer code than an engineer, architect or even craftsman.

What are engineers, architects, and craftsmen but translators of spoken word (or ideas) to various media or physical representations?

One could argue that the greatest achievements of history were essentially composed of seemingly mundane tasks.

We'd all love to feel like we're doing more than that, perhaps by creating intelligence in some form. Maybe as software engineers we can create as you say "simple formulas that evolve a solution" - and those kinds of algorithms/solutions seem to be long overdue - I think we're definitely headed in that direction. But at some point, even those solutions may not be entirely satisfying to the creative mind - after all, once we teach computers to arrive at solutions themselves, can we really take credit for the solutions that are found?

If one is after lasting personal fulfillment, maybe it'd be more effective to apply his or her skills (whatever they may be) more directly to the benefit and wellbeing of other people.

Yeah but being a translator is a different kind of activity from being an engineer, architect or craftsman. It feels to me that being a Software Engineer -- though currently I've been working with something else -- is kinda like (grossly simplifying it) a really complicated game of Lego or Tetris, where the 'pieces' (APIs, routines, functions, protocols, legacy code, etc, etc) don't always fit where/how they were supposed to, sometimes you can tweak them, but sometimes they're black boxes. Sometimes you have to build some pieces, after giving up on trying to make do with what you got. I think that's what the OP means by time wasted. We waste a lot of time getting the pieces to fit and (as much as we can verify) work as they should when we should be spending our time focusing on what structure we want built after all the pieces are in place.

Engineers, architects and craftsmen don't work, nor waste anywhere near as much time, with as many "pieces" (if any) that behave in such erratic ways.

As the son of an architect, I must somewhat dispute this characterization. There are tremendous constraints in that craft which are sometimes parallel to those of software development.

Budget is the first constraint, which limits the available materials and determines material quality.

Components and building codes are a second. While the 'fun' part of building design might be the exteriors and arrangements, a tremendous amount of the effort goes into binding these 'physical APIs' in a standards-compliant manner.

Finally, there is physics. This brings with it the ultimate and unassailable set of constraints.

I know I'm being a bit silly, but my point is that all crafts have frustrations and constraints, and all professions have a high (and often hidden) tedium factor resulting from imperfect starting conditions. These do not actually constrain their ability to create works of physical, intellectual or aesthetic transcendency.

I agree. I felt like this within my first month of working professionally in 2001 and ever since.

Thank you for the apt metaphor. I often struggle with explaining how I feel about this but you nailed it.

We spend too much time getting these tiny pieces to connect correctly, and more often than not they're misaligned slightly or in ways we can't see yet (until it blows up). It's way too much micro for not enough macro. And there isn't even a way to "sketch" as I can with drawing - you know, loosely define the shape of something before starting to fill in with the details. I keep trying to think of a visual way to "sketch" a program so that it would at least mock-run at first, and then I could fill in the function definitions. I don't mean TDD.
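One low-tech approximation of that kind of sketching is to stub every function with hardcoded values so the whole shape mock-runs on day one, then replace the stubs piecemeal. A throwaway Python illustration (all names invented):

```python
# A program "sketch": every function exists and mock-runs with placeholder
# values, so the overall structure can be exercised before any detail exists.

def fetch_orders(customer_id):   # stub: a real data source goes here later
    return [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]

def apply_discount(order):       # stub: real pricing rules go here later
    return order["total"]

def format_invoice(lines):       # stub: real templating comes last
    return f"{len(lines)} line(s), {sum(lines):.2f} total"

def build_invoice(customer_id):  # the structure we actually care about
    orders = fetch_orders(customer_id)
    return format_invoice([apply_discount(o) for o in orders])

print(build_invoice(42))
```

It's crude, but the "drawing" runs end to end from the first minute, which is the property missing from most tooling.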

Sorry, but UML provides enough definition to accurately describe any process... you generally need two graphs for any interaction though. Most people don't think in these terms. Some can keep it in their heads, others cannot.

It's easier for most people to reason about a simple task. Not everyone can also see how that simple task will interact with the larger whole. I've seen plenty of developers afraid to introduce something as fundamental as a message queue, which is external to a core application. It breaks away from what they can conceptualize in a standing application.

Different people have differing abilities, talent and skills regarding thought and structure. Less than one percent of developers are above average through most of a "full stack" for any given application of medium to high complexity (percent pulled out of my rear). It doesn't mean the rest can't contribute. But it does mean you need to orchestrate work in ways that best utilize those you have available.

> Sorry, but UML provides enough definition to accurately describe any process...

Which is the opposite of sketching, which means (in drawing) to loosely define the basic shapes of something before accurately describing things with "enough definition".

We're not talking about the same thing at all and I can't relate to the rest of your post.

I'm talking about how there's no way to sketch a program that mock-runs and then fill in the details in a piecemeal fashion.

One way to partially do this is by using auto-generation as far as possible. Naked Objects (for complex business software) takes this to the extreme: given a set of domain objects, it creates the rest, including the GUI. Then if, for example, you want to customize the GUI, you can.

I think it even makes sense at the start to create incomplete objects, run the mock, and continue. Also, the authors of the framework claim that you work very close to the domain language, so you can discuss code with domain experts; in a sense you're working relatively close to the spec level.

The learning curve is a bit high though; I think it could have been much more successful if it had used GUI tools to design domain objects and maybe a higher-level language.
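To give a rough flavor of the idea in Python (not the actual framework, just introspection over a domain object standing in for its code generation):

```python
from dataclasses import dataclass, fields

@dataclass
class Customer:
    name: str
    email: str
    credit_limit: float

def default_form(cls):
    # Generate one labeled input per declared field; a real naked-objects
    # framework also derives menus, actions and persistence from the object.
    return [f"{f.name}: {f.type.__name__}" for f in fields(cls)]

print(default_form(Customer))
```

The appeal is that the domain objects are the only thing you write by hand; everything else is a default you can override.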

Someday, mankind will give birth to an artificial intelligence far more sophisticated than anything we can comprehend, and our species will cease to be relevant. A very long time from then, the universe will become too cool to support life as we know it and our entire history will have been for naught. And long before any of that, you and I and everyone here will die and be completely, utterly forgotten, from the basement dwellers to the billionaires.

It's called an existential crisis. It's not a special Silicon Valley angst thing; everyone gets them. Try not to let it get to you too much.

I feel exactly the same way. That's been a major driver behind my current career frustration, and one reason that I want to move away from full-time employment categorically.

So much of it comes down to focusing on everything but requirements. If someone asks for directions to the wrong destination, it doesn't matter how exceptional of a map you draw.

And, despite the importance of requirements, we often trust the requirements gathering tasks to non-technical people. Worse, in large corporate environments, the tasks are often foisted on uncommonly incompetent individuals. When I hear about "Do you want to ...?" popups that are supposed to only have a "Yes" button, I know something is fundamentally broken.

I haven't escaped the corporate rat race yet, but I'm taking baby-steps toward a more self-directed work-life. Here's hoping that such a move helps.

nearly the entirety of what I do...

You are hardly alone. And while I suspect the PoV is prevalent in IT, it's hardly specific to it.

There's a guy who wrote a book about this so long ago that we don't remember his name. He's known only as "the preacher", sometimes as "the teacher" or "the gatherer".

It's Ecclesiastes, in the old testament of the Bible, and begins with "Vanity of vanities, saith the Preacher, vanity of vanities; all is vanity."

I'm quite an atheist, not religious, and generally not given to quoting scripture. I find this particular book exceptionally insightful.

Much of what you describe is in fact reality for most people -- we "follow the money for survival", but the purpose of it all quite frequently eludes me.

I started asking myself two or three years back, "what are the really big problems", largely because I saw that in the areas where they _might_ be addressed, both in Silicon Valley and in the not-for-profit world, there was little real focus on them. I've pursued that question to a series of others (you can find my list elsewhere, though I encourage you to come up with your own set -- I'll tip you to mine if requested).

My interest initially (and still principally) has been focusing on asking the right questions -- which I feel a great many disciplines and ideologies have not been doing. Having settled on at least a few of the major ones, I'm exploring their dynamics and trying to come to answers, though those are even harder.

It's been an interesting inquiry, and it's illuminated to me at least why the problem exists -- that Life (and its activities) seeks out and exploits entropic gradients, reduces them, and either continues to something else or dies out.

You'll find that concept in the Darwin-Lotka Energy Law, named by ecologist Howard Odum in the 1970s, and more recently in the work of UK physicist Jeremy England. Both conclude that evolution is actually a process which maximizes net power throughput -- a case which if true explains a great many paradoxes of human existence.

But it does raise the question of what humans do after we've exploited the current set of entropy gradients -- fossil fuels, mineral resources, topsoil and freshwater resources. A question Odum posed in the 1970s and which I arrived at independently in the late 1980s (I found Odum's version, in Environment, Power, and Society only in the past year).

But yes, it's a very curious situation.

"...I’m more of a translator between the spoken word and computer code..."

At our current level of technology and business sophistication, that strikes me as a hard enough problem to justify a career with good compensation.

Any patterns you are noticing in the translation process?

I shouldn’t post this.

It’s hard for me to single out specific patterns because it feels like the things I’m doing today (taking an idea from a written description, PowerPoint presentation or flowchart and converting it to run in an operating system with a C-based language) are the same things I was doing 25 years ago. It might be easier to identify changes.

If there’s one change that stands out, it’s that we’ve gone from imperative programming to somewhat declarative programming. However, where imperative languages like C, Java, Perl, Python etc (for all their warts) were written by a handful of experts, today we have standards designed by committee. So for example, compiling a shell script in C++ that spins up something like Qt to display “Hello, world!” is orders of magnitude more complicated than typing an index.html and opening it in a browser, and that’s fantastic. But what I didn’t see coming is that the standards have gotten so convoluted that a typical HTML5/CSS/Javascript/PHP/MySQL website is so difficult to get right across browsers that it’s an open problem. Building a browser that can reliably display those standards responsively is nontrivial to say the least.

That affects everyone, because when the standards are perceived as being so poor, they are ignored. So now we have the iOS/Android mobile bubble employing millions of software engineers, when really these devices should have just been browsers on the existing internet. There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

I started writing a big rant about it being unconscionable that nearly everyone is not only technologically illiterate, but that their devices are unprogrammable. But then I realized that that’s not quite accurate, because apps do a lot of computation for people without having to be programmed, which is great. I think what makes me uncomfortable about the app ecosystem is that it’s all cookie cutter. People are disenfranchised from their own computing because they don’t know that they could have the full control of it that we take for granted. They are happy with paying a dollar for a reminder app but don’t realize that a properly programmed cluster of their device and all the devices around them could form a distributed agent that could make writing a reminder superfluous. I think in terms of teraflops and petaflops but the mainstream is happy with megaflops (not even gigaflops!)

It’s not that I don’t think regular busy people can reason about this stuff. It’s more that there are systemic barriers in place preventing them from doing so. A simple example is that I grew up with a program called MacroMaker. It let me record my actions and play them back to control apps. So I was introduced to programming before I even knew what it was. Where is that today, and why isn’t that one of the first things they show people when they buy a computer? Boggles my mind.

Something similar to that, which would allow the mainstream to communicate with developers, is Behavior-Driven Development (BDD). The gist of it is that the client writes the business logic rather than the developer, in a human-readable way. It turns out that their business-logic description takes the same form as the unit tests we've been writing all these years; we just didn't know it. And it's fairly easy to build a static mockup with BDD/TDD using hardcoded values and then fill in the back end with a programming language to make an app that can run dynamically. Here is an example for Ruby called Cucumber:
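For a flavor of what that looks like, a Gherkin scenario (the plain-text layer the client writes; the domain here is invented for illustration) reads roughly like:

```gherkin
# Hypothetical feature file; the client writes this plain text, and the
# developer backs each step with a short step definition in Ruby.
Feature: Account withdrawal
  Scenario: Withdrawing within the balance
    Given an account with a balance of $100
    When the customer withdraws $40
    Then the remaining balance should be $60
```

In Cucumber, each Given/When/Then line is matched to a step definition (historically via a regular expression), which is where the hardcoded-values mockup can live before the real back end exists.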


More info:




How this relates back to my parent post is that a supercomputer today can almost fill in the back end if you provide it with the declarative BDD-oriented business logic written in Gherkin. That means that a distributed or cloud app written in something like Go would give the average person enough computing power that they could be writing their apps in business logic, making many of our jobs obsolete (or at the very least letting us move on to real problems). But where’s the money in that?

> There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

It's even slower when you write it in Python or Lua. I've built an app heavily relying on both on iOS and Android and it's a pig. This was last year, too, and outsourcing most of the heavy lifting (via native APIs) out of the language in question.

"Low-level languages" (as if Java was "low level") exist because no, web browsers just aren't fast enough. There may be a day when that changes, but it's not today.

>There was no need to resurrect Objective-C and Java and write native apps.

I still find it crazy that the world went from gmail/googlemaps to app stores. It's beyond me why most apps aren't anything more than cached JavaScript and UI toolkits.

I think one reason for this is that devices became personalized. You don't often go check your email on someone else's computer. People don't need the portability of being able to open something anywhere, so the cloud lost some of its appeal. Now it's just being used to sync between your devices, instead of being used to turn every screen into a blank terminal.

> I still find it crazy that the world went from gmail/googlemaps to app stores. It's beyond me why most apps aren't anything more than cached JavaScript and UI toolkits.

* because javascript is still not as efficient as native code

* because code efficiency translates directly into battery life

* because downloading code costs bandwidth and battery life

* because html rendering engines were too slow and kludgy at the time

* because people and/or device vendors value a common look and feel on their system

* because not everyone wants to use google services/services on US soil

Interestingly, the Japanese smartphone market had most of these fixed by 1997 or so, by defining a sane HTTP/HTML subset (as opposed to WAP). Less ambitious than (some) current apps, but probably a much saner approach. Maybe it's not too late for Firefox OS and/or Jolla.

[Edit: on a minimal micro-scale of simplification, I just came across this article (and follow-up tutorial article) on how to just use npm and eschew grunt/gulp/whatnot in automating lint/test/concat/minify/process js/css/sass &c. Sort of apropos, although it's definitely in the category of "improving tactics when both strategy and logistics are broken": previous discussion on HN: https://news.ycombinator.com/item?id=8721078 (but read the article and follow-up -- he makes a better case than some of the comments indicate)]

But in the JavaScript model most of the processing is offloaded to the cloud when possible, making it MORE energy efficient. Ideally you are just transmitting input and DOM changes or operational transformations.


Web ecosystems can still enforce common UI toolkits.

Gmail was just an example.

The difference between JavaScript in the browser and native code has nothing to do with the cloud. Both can offload processing to remote servers.

As soon as we have reliable and energy efficient wireless connections actually available with perfect coverage, maybe.

I think there's an element of "pendulum swinging" to how we ended up back at native. The browser is a technology that makes things easy, more than it is one that makes things possible. And "mobile" as a marketplace has outpaced the growth of the Web.

So you have a lot of people you could be selling to, and a technology that has conceptual elegance, but isn't strictly necessary. Wherever you hit the limits of a browser, you end up with one more reason to cut it off and go back to writing against lower level APIs.

In due course, the pendulum will swing back again. Easy is important too, especially if you're the "little guy" trying to break in. And we're getting closer to that with the onset of more and more development environments that aim to be platform-agnostic, if not platform-independent.

Well now you have a new job writing regular expressions for Gherkin...

I want to subscribe to your newsletter where you write things you shouldn't write. I'm glad you wrote this. I'm not alone!

Definitely a blog theme.

Meta: I'm glad I asked about the patterns now. I had doubts about the extent to which asking that question would add to this thread.

When I got to that point ("All I ever do is write forms for editing databases") I switched to physical engineering - got qualified in Six Sigma, got qualified in Manufacturing Engineering and now I'm doing a degree in Supply Chain Management. Love it.

Software is a force multiplier.

If the initial force is headed towards uselessness, software won't save it from its fate; it will in fact just get it there faster.

By the same token if you manage to write some software that is magnifying a force of good the results can be staggering.

Software allows us to accomplish more net effect in our lifetimes than was ever possible before. However if we spend our time writing glorified forms over and over again we aren't going to live up to that.

I do often wonder how many people would do something with a great deal more positive impact (research, full-time work on free software, working on that crazy, but possibly actual world-changing idea/startup, whatever) if they didn't have to worry every day about survival, aging out of their field, being forced to run faster just to stay where they are, etc. One day perhaps we'll get to see what that looks like. For now, though, chasing the money is often the rational thing to do, even if that means building yet another hyperlocal mobile social dog sharing photo app.

This is exactly how I feel. Even though my income has quadrupled in the last three years I haven't made a single positive impact to anyone's life using my skills. This was a very sad realisation that I've basically been reinventing wheels for a living and not really engineering anything.

I find it hard to believe that someone would pay you a salary for three years to do something that is completely useless to everyone.

At the very minimum, the fruits of your labour are useful to your boss/customer. That is already a single positive impact in someone's life right there.

Most likely, your boss/customer then used your developments in their primary business, which, no doubt, affects even more people in a positive way.

I believe being in technology, we have the capability to positively affect many orders of magnitude more people, than in most other industries. So cheer up! ;)

I'd caution against making the argument "If someone pays you, it's useful to somebody." It reminds me of the naive assumption of economists, that people are rational and seek to maximize utility. But exceptions easily come to mind.

People waste money all the time. Maybe the OP's boss paid him three years salary to do something the boss thought was important, but really just wasted time. And nobody knew it wasted time, because it was hard to measure.

I agree. You may be "helping" your manager, but that may mean helping them displace their political opponent for the sole purpose of taking more power for themselves. That, and I've seen far too many inane requirements and heard too many stories of code that was never used to remain especially optimistic.

Pursuing promising avenues is not wasted time, even if it ultimately comes to nothing.

I agree with you, but I've been in situations where management continues wasteful projects because they can't admit failure, can't define goals for the project, or throw good money after bad. I should have been more specific.

A personal example: At a small marketing agency, my boss insisted we needed to offer social media marketing, since "everyone's on facebook nowadays." Because we were posting on behalf of clients, we didn't have the expertise to write about the goings-on of their business, and often had no contacts in the client organization. We were forced to write pretty generic stuff ("happy valentine's day everybody!" or "check out this news article"). Worse, we had clients who shouldn't be on social media in the first place: plumbers, dentists, and the like. I have never seen anyone like their plumber on Facebook. So naturally, these pages were ghost towns.

Yet despite having zero likes and zero traffic, clients insisted on paying someone for social media (They too had read that "everyone's on facebook") and my boss never refused a paying customer.

Want to know how it feels getting paid day in and day out to make crap content that nobody will read? Feels bad.

>Unfortunately I follow the money for survival, but if it weren’t for that, I really feel that I could be doing world-class work for what basically amounts to room and board. I wonder if anyone here feels the same…

Ok. So find where to do that, and do it. If you don't have children to support or other family to stick by, your only excuse is your fear of doing something too unconventional.

You want to do world-class work? Cool, let's talk about it. What's world-class to you? What would get you out of bed rather than give you an existential crisis?

For me, doing work that eliminates work is something to aspire to. It doesn't even matter if it's challenging. I would actually prefer to be down in the trenches doing something arduous if the end result is that nobody ever has to do it again. But that only happens in the software world. The real world seems to thrive on inefficiency, so much so that I think it's synonymous with employment. Automate the world, and everyone is out of work. I had a lot more written here but the gist of it is that I would work to eliminate labor, while simultaneously searching for a way that we can all receive a basic income from the newfound productivity. That's sort of my definition of the future, or at least a way of looking at it that might allow us to move on to doing the "real" real work of increasing human longevity or exploring space or evolving artificial intelligence or whatever.

Well, there are lots of firms to work at whose primary goal is something that permanently eliminates work. Here's a list I just happened to have on hand: https://hackpad.com/Science-Tech-Startups-zSZ0KdT6Zk1

Much of programming is mindless plumbing, but the problem of specifying what problem you're trying to solve will never go away. Focus your attention there.

Doesn't that depend on where you work and what you're working on though?

Most workplaces are broken in some way or other, eventually you either learn to grin and bear it or you find a way to escape the system.

Yes, I feel exactly the same. I ride the rollercoaster of depression and the impostor syndrome is definitely there. Most of what I do is mired in incidental complexities that have little to do with the task at hand. Programming in itself is simple because it's a closed system that you can learn and master. What makes it seem hard is having to manage the logistics of it with the crude tools we have (checking logs instead of having a nice graphical control panel for instance). I also feel like all the problems I have to solve are ultimately the same, even if in wildly different areas, because it's all about how do I put these function calls in the right order to get my result. I can never shake the feeling that there could be a generic-programming genetic algorithm that tries a bunch of permutations of function calls and the way the results are the arguments for the next calls, given a list of functions I know are necessary for the solution. I always feel like I am still having to do computer things in my head, while sitting in front of the computer.
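That intuition can at least be brute-forced at toy scale: enumerate orderings of the functions you know are necessary, threading each result into the next call, until one ordering reaches the target. A hypothetical Python sketch (function names and target made up):

```python
from itertools import permutations

def double(x): return x * 2
def increment(x): return x + 1
def square(x): return x * x

def search_pipeline(funcs, start, target):
    # Try every ordering of the known-necessary functions, piping each
    # result into the next call, and return the first ordering that works.
    for order in permutations(funcs):
        value = start
        for f in order:
            value = f(value)
        if value == target:
            return [f.__name__ for f in order]
    return None

# 3 -> square -> 9 -> increment -> 10 -> double -> 20
print(search_pipeline([double, increment, square], start=3, target=20))
```

Real genetic-programming systems do essentially this with a fitness function and mutation instead of exhaustive enumeration, since the permutation space explodes quickly.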

Very interesting related talk about complecting things - "Simple Made Easy" - by Rich Hickey, inventor of Clojure:


If you're a Ruby person, maybe watch this version instead, since it's almost the same talk but for a Rails Conference, with a few references:


I think there's an unfortunate trend throughout a lot of human activity to make things complicated. The prototypical example in my mind has always been the Monty Hall problem, where you often see convoluted explanations or detailed probabilistic calculations, but the solution is just as plain as it can be when you consider that if you switch you win iff you chose a goat and if you stay you win iff you chose the car.

I've also always been a fan of Feynman's assertion that we haven't really understood something until it can be explained in a freshman lecture. At the end of his life he gave a now famous explanation of QED that required almost no mathematics and yet was in fact quite an accurate representation of the theory.

> The prototypical example in my mind has always been the Monty Hall problem, where you often see convoluted explanations or detailed probabilistic calculations, but the solution is just as plain as it can be when you consider that if you switch you win iff you chose a goat and if you stay you win iff you chose the car.

The problem here is, when two people disagree, they both think the problem is simple and they both think their answer is trivial. The thing is, they haven't solved the same problem.

In the standard formulation, where switching is the best strategy, Monty's actions are not random: He always knows which door has the good prize ("A new car!") and which doors have the bad prize ("A goat.") and he'll never pick the door with the good prize.

If you hear a statement of the problem, or perhaps a slight misstatement of the problem, and conclude that Monty is just as in the dark as you are on which door hides which prize and just happened to have picked one of the goat-concealing doors, then switching confers no advantage.
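The standard rules are easy to check empirically; here is a minimal simulation (not from the thread) of the knowing-host game:

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game: the host knows where the
    car is and always opens a goat door the player didn't pick."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Move to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# play(switch=True) comes out near 2/3; play(switch=False) near 1/3.
```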

A large part of simplicity is knowing enough to state the problem accurately, which circles back to your paragraph about Feynman: understanding his initial formulation of QED required understanding everything that led him to it; for his final formulation, he understood the problems in terms of more fundamental concepts, and could therefore solve them using those same concepts.

This is the subtle part: even if Monty chooses randomly and just happens to pick a door with a goat, it remains true that you win by switching iff you chose a goat and win by staying iff you chose the car. But the observation that a random host revealed a goat is itself informative - that reveal is more likely when your own door hides the car - and conditioning on it leaves the car equally likely behind either closed door, so switching no longer helps. The host's method of selection is exactly what matters, whether you look at a single observed game or iterate the game repeatedly.

I thought the point of the modern Monty Hall problem is to introduce students to probability and thus it's important for them to list the various probabilities. They're learning how to do it. Other people need to check the working to see if the student understands what to do.

Here's a nice example of this teaching: http://ocw.mit.edu/courses/electrical-engineering-and-comput...

"Car: switch and lose. Goat: switch and win" is OK as far as it goes, but it's not enough explanation to cover why you should switch.

You need probability in the explanation because of the two-goats/one-car choice in part one... and here I am trying to explain it again. You already know how it works, but you don't see that your explanation doesn't actually explain it.

After they removed one answer, you have more information...

Because the choice isn't balanced...

If it was as simple as you suggest, it wouldn't make a good part of a game show. Everyone would see straight through it.

It always seemed a bit easy to me without running through probabilities due to the fact that the game show host was forced to choose goat (which is different depending on where the car is). The idea of a forced non-random choice is more clear to me than a probability explanation, but you need probability to demonstrate a proof.

Your explanation makes it sound like it's a 50/50 chance.

There are two goats; you win if you choose either one first and switch.

The problem is deceptive because it doesn't explicitly say they ALWAYS remove a goat.

The answer becomes obvious when you ask "There are three doors, one has a prize. Would you like to select one door or two doors?"

Ok, that is the best Monty Hall explanation I've ever heard.

A doctor told a programmer friend once, "I have to fix problems that God gave us. You have to fix problems you give yourselves."

To be fair, doctors are often fixing problems we've created ourselves too.

But that's different than problems created by the doctor.

Edit: granted, this can still happen when self-medicating. The general case of doctors fixing the mistakes of doctors is different than what the parent suggests.

Not always.

My mother, a family practitioner, sometimes gets a patient who is on 15 or so different medications, all prescribed by different specialists at different times. She whittles it down to 3 or 4, and they get a lot better.

That doctor sounds a bit like an arrogant douche.

"It's not exactly brain surgery, is it?" https://www.youtube.com/watch?v=THNPmhBl-8I

I wonder if the teams at NASA working on rocket science say "it's not rocket science, you know"

I have a relative who works for Raytheon as a physicist on rocket systems. Whenever he tells me about a dumb mistake he had to fix, he always uses that expression... followed by, "Oh wait, I guess it is."

a tip of the hat

With my 2015 budget and 2015 requirements, I'm going to need a few miracles.

That doctor is wrong. All human problems are caused by the constraints of our physical world.

Especially gross oversimplifications of reality and absolute statements.

That's why God endows them with greater analytical prowess.

In a way both are fixing bugs created by someone else.

God on HN? Come on, the world isn't flat anymore.

You don't have to take that literally in order to understand it, but just in case you really missed the point: the doctor is essentially saying that he is dealing with problems that have no direct cause or architect he can consult, whereas the IT people have essentially only themselves to blame and have (usually) made the mess they're in, rather than having it handed down to them through the mists of time or from above.

In a general sense that is true, but in a more specific sense it is definitely possible to be handed a project with no docs and horrible problems, and it might as well be an organic problem. Even so, the simplest biology dwarfs the largest IT systems in complexity.

> Even so, the simplest biology dwarfs the largest IT systems in complexity.

And also mercifully in continuous real-world testing over time with no Big Rewrite to be seen. While an individual organism might have unsolvable problems, nature doesn't have the same tolerance for utterly intractable systems that we sometimes see in the software community.

> Even so, the simplest biology dwarfs the largest IT systems in complexity.

Well said.

Intolerance on HN? Apparently so.

Oh, look, someone who has proven that God doesn't exist. Please, post the proof for all of us to see.

You'll just have to take it on Faith.

That isn't how I, or any atheists I know, define atheism.

Just to clarify, the original response was to the original poster's claim that God does not exist. My request for him to present his proof was not meant as an open attack on atheism or even a critique of it; I was simply speaking to and about one person's beliefs. In any event, I begin to suspect I should have said nothing, this being such an emotional issue for people.

I don't know why people would downvote this.

Possibly because you missed the sarcasm/joke nature of my comment and gave a serious reply. I wasn't defining atheism as faith in no-God.

I was digging at the idea that someone would accept a claim on Faith, then demand evidence before giving up the idea, and how that is inconsistent. If someone can faith into an idea without evidence it should be equally easy to faith out of it with no evidence.

That it's not easy, is interesting.

Nothing -> belief in God, costs little and gains much.

Belief in God -> no belief, costs much and gains little.

Where by costs and gains, I include social acceptance, eternal life, favourable relations with a powerful being, forgiveness of sins, approved life habits, connections to history and social groups, etc.

It's not a Boolean. Or at least, from inside a mind it doesn't feel like a Boolean.

I feel like it's important to point out that having faith in a creator is not exclusive to having a belief in things like eternal life, favourable relations with a powerful being, etc. People seem to easily conflate faith in a creator with practicing a mainstream religion.

edit: I would also like to add, although I already said it, that I believe both of these opinions regarding the creator are acts of faith, to absolutely say there is not one and to absolutely say there is, and if someone were to have made a statement as if a creator's existence was a sure thing I would have called them out just the same.

I'd agree: it takes faith to know that there was no creator of the world, the same faith it takes to know there was. Just to clarify, I'm not arguing there's proof that 'God' does exist; I'm arguing there's no proof that 'God' doesn't exist. The original comparison falls short: the world can be proven to be round, not flat.

edit: a thanks in advance to all you enlightened fervent atheists for the down votes.

Just to clarify, atheism isn't a faith claim, it's a skepticism of faith claims.

Just to clarify, I never said atheism required faith. Please, read the comment you responded to.

Atheism isn't having faith there is no creator, it's having the intellectual curiosity to doubt the existence of a creator.

Edited to be more honest.

It only takes courage for those who live in areas that are indoctrinated with dense religious ideologies and who will suffer consequences for their actions. For most in the western world, it is not such a hard thing to identify oneself with.

edit: the original response claimed it took courage to doubt the existence of a creator. It has since been edited to claim it simply takes intellectual curiosity. I would argue that the same intellectual curiosity could have the end result of someone having a belief in a singular creator of reality.

> I would argue that the same intellectual curiosity could have the end result of someone having a belief in a singular creator of reality.

Possible, but not likely, given how many intellectually curious people have independently reinvented atheism but no religion has been independently reinvented.

Religion and belief in a singular creator are not the same thing. Furthermore, I don't really believe you have any credible way of producing a valid number of 'intellectually curious' individuals who have made the choice to believe in a singular creator versus not.

edit: misunderstanding of mutually exclusive and added some more words.

It's far more likely to be the other way round. Atheism was only really formed about 300 years ago, and by that time the world had been explored and the printing press had been invented. In contrast, all of the ancient religions, formed on completely disconnected continents, had the notion of a creator at their centre.

> Atheism was only really formed about 300 years ago

This denies the existence of Ancient Greek atheists, and people who were skeptical of religion in general before the modern era.

I think you mean agnosticism, not atheism, then.

Agnosticism is a belief that we can't possibly know whether there's a supernatural. I don't hold that view. My view is a skepticism of the supernatural, which is atheism.

No. I'm atheist, and that is my worldview. Saying I'm wrong about my own philosophy is nonsensical.

I'm also not alone: My definition of the word seems to be the most common one among the current atheist movement.

My doctor, for one, would benefit more from fixing his own basic problems than from dwelling on God or what problems God has given us, but like most doctors he pretends to solve big problems while saying pretentious things.

I'm just saying you'd expect these doctor types to have the greater powers of analysis, but in my experience they just pretend harder.

[Edit to remove that off putting stuff]

See also "No Silver Bullet — Essence and Accidents of Software Engineering"[1] by Fred Brooks wherein "essential complexity" vs "accidental complexity" is explored.

[1]: http://en.wikipedia.org/wiki/No_Silver_Bullet

I have made it to 31 min in, and I genuinely think he made this talk too complicated.

Ha ha I skipped through until he started talking about code again. Go back and do that!

This appears to be a presentation of part of the work of VPRI; you should check it out: they made a whole OS and its applications (word processor, PDF viewer, web browser, etc.) in under 20K lines of code. There is more info at: http://www.vpri.org/vp_wiki/index.php/Main_Page

Besides Nile, the parser OMeta is pretty cool too, imho.

> you should check it out: they made a whole OS and its applications (word processor, PDF viewer, web browser, etc.) in under 20K lines of code.

Check it out? Where is the source code for the OS? I haven't found it in the link you gave..

Alan Kaye seems to enjoy making asinine statements. "Teachers are taught science as a religion" "Romans had the best cement ever made" "We live in the century where the great art forms are the merging of science and engineering"

There's also a kind of arrogance when he shits on Intel and Dijkstra that I find off-putting.

Only familiar with one of them, but "teachers are taught science as a religion" seems spot-on.

I disagree. Teaching science as a series of facts is in no way akin to religious indoctrination.

Edit: And Kaye is also ignoring the fact that every high school science curriculum covers the scientific method and includes hands on experimentation.

You've never seen how students too often get better grades on "perfect" science experiments than on honest reports of "failures" (experiment differed from expected outcome)?

You must've been in an elite high school.

That actually did happen to me sometimes. I don't think you can convince me that there's anything religious about it.

Kay, not Kaye. Roman concrete is universally considered superior to modern concrete.

Somehow I'm not surprised that this is what turns up when I google Roman concrete:

> “Roman concrete is . . . considerably weaker than modern concretes. It’s approximately ten times weaker,” says Renato Perucchio, a mechanical engineer at the University of Rochester in New York. “What this material is assumed to have is phenomenal resistance over time.” http://www.smithsonianmag.com/history/the-secrets-of-ancient...

This talk blows everything out of the water. It's like the current culmination of the work from Douglas Engelbart to Bret Victor. Maybe just maybe there could be a rebirth of personal computing coming out of this research, it looks at least promising and ready for use - as demonstrated by Kay.

>It's like the current culmination of the work from Douglas Engelbart to Bret Victor.

This is no coincidence.

"Silicon Valley will be in trouble when it runs out of Doug Engelbarts ideas" -- Alan Kay

Bret works for Alan now, as far as I know.

I've noticed that in his talk. Easily looks like the most prestigious CS R&D lab in the world right now.

Well, we made it complicated because one must complicate pure things to make them real. I guess things would be better if our code ran on some hypothetically better CPU than what Intel makes. But "actually existing in the world" is a valuable property too.

Plato was pretty cool, though, I agree.

So the "Frank" system, which he uses throughout the presentation, is real enough to browse the internet and author active essays in. It is written in 1000x less code than PowerPoint running on Windows 7.

People in this thread who seem to be asserting that building for the Web as opposed to native apps is an example of how things should be "less complex" perhaps have never written for both platforms... Also the entire argument completely misses the point of the lecture.

I think those people are just suggesting that we should be writing apps, NOT an Android app, an iOS app, a Windows app, a Linux app, etc.

We will do that once there is just one OS, not Android, iOS, Windows, Linux, etc. And no, the web does not fit as a substitute for a common OS.

15 minutes in, I am finding this talk to be slow and unfocused. I assume this is getting a lot of promoters because we all agree with the theme?

Seeing the 1:42:51 length scared me at first. That's going to take me days to get through given how much free time I have per day! Anyway, the best bet is to skim through bits of it. It is all interesting and good, but I do prefer it when he talks about code.

This is where I find 1) downloading the video and 2) playing it back using a local media player to be vastly superior to viewing online. I can squeeze the window down to a small size (400px presently), speed it up (150%), start and stop it at will, zoom to full screen, and _not_ have to figure out where in a stack of browser windows and tabs it is.

Pretty much the point I was making some time back where I got in a tiff with a documentary creator for D/L'ing his video from Vimeo (its browser player has pants UI/UX in terms of scanning back and forth -- the site switches videos rather than advances/retreats through the actual video you're watching).

And then as now I maintain: users control presentation, once you've released your work you're letting your baby fly. Insisting that people use it only how you see fit is infantile itself, and ultimately futile.

(Fitting with my Vanity theme above).

P.S. In Chrome I can play almost all Youtube videos at 2x speed. Most presentations are still very understandable. I've been doing this for a while, maybe start at 1.5x.

Anyway, just a tip. It's saved me hours and keeps slow speakers from getting boring.

This video is very interesting.

Very good point about the manufacturing of processors being backwards. Why _not_ put an interpreter inside the L1 cache that accommodates higher-level programming of the computer? Why are we still stuck with assembly as the interface to the CPU?

Because when DEC did that in the VAX 11/780, nobody used it. Instructions could be added to the microcode, which was loaded at boot time from an 8" floppy. The VAX 11/780 machine language was designed to allow clean expression of program concepts. There's integer overflow checking, a clean CALL instruction, MULTICS-style rings of protection, and very general addressing forms. It's a very pleasant machine to program in assembler.

Unfortunately, it was slow, at 1 MIPS, and had an overly complex CPU that took 12 large boards. (It was late for that; by the time VAXen shipped in volume, the Motorola 68000 was out.) The CALL instruction was microcoded, and it took longer for it to do all the steps and register saving than a call written without using that instruction. Nobody ever added to the microcode to implement new features, except experimentally. DEC's own VMS OS used the rings of protection, but UNIX did not. One of the results of UNIX was the triumph of vanilla hardware. UNIX can't do much with exotic hardware, so there's not much point in building it.

There's a long history of exotic CPUs. Machines for FORTH, LISP, and BASIC (http://www.classiccmp.org/cini/pdf/NatSemi/INS8073_DataSheet...) have been built. None were very successful. NS's cheap BASIC chip was used for little control systems at the appliance level, and is probably the most successful one.

There are things we could do in hardware. Better support for domain crossing (not just user to kernel, but user to middleware). Standardized I/O channels with memory protection between device and memory. Integer overflow detection. Better atomic primitives for multiprocessors. But those features would just be added as new supported items for existing operating systems, not as an architectural base for new ones.

(I'm writing this while listening to Alan Kay talk at great length. Someone needs to edit that video down to 20 minutes.)

The Data General MV-8000 (of Soul of a New Machine book fame) also had programmable microcode, if memory serves. The feature was touted, and there were supposed to exist a few brave souls, desperate for performance, who used it.

We already had Lisp running directly on a CPU: http://en.wikipedia.org/wiki/Lisp_machine

Java runs directly on hardware: http://en.wikipedia.org/wiki/Java_processor , http://en.wikipedia.org/wiki/Java_Card

Pascal MicroEngine: http://en.wikipedia.org/wiki/Pascal_MicroEngine

And some more: http://en.wikipedia.org/wiki/Category:High-level_language_co...

And the x86 instruction set is CISC, but (modern) x86 architecture is RISC (inside) - so your modern Intel/AMD CPU already does it. http://stackoverflow.com/questions/13071221/is-x86-risc-or-c...

Backward compatibility aside, we could have CPUs that support ANSI C or the upcoming Rust directly in hardware, instead of ASM.

Don't forget the Burroughs B5000, from 1961: https://en.wikipedia.org/wiki/Burroughs_large_systems#B5000

The fact that we had an architecture running entirely on ALGOL with no ASM as far back as 54 years ago is quite astonishing, followed by depressing.

I know. I once took a computer architecture course from Bill McKeeman at UCSC, and we had an obsolete B5500 to play with. We got to step through instruction execution from the front panel, watching operands push and pop off the stack. The top two locations on the stack were hardware registers to speed things up.

An address on the Burroughs machines is a path - something like "Process 22 / function 15 / array 3 / offset 14". The actual memory address is no more visible to the program than an actual disk address is visible in a modern file system. Each of those elements is pageable, and arrays can be grown. The OS controls the memory mapping tree.

One of the things that killed those machines was the rise of C and UNIX. C and UNIX assume a flat address space. The Burroughs machines don't support C-style pointers.

Don't forget transmeta either. Unfortunately they didn't succeed either. Did employ Linus Torvalds for a while though.

This should answer your question, albeit satirically, but the context is very solid [1]. Basically, like most forms of engineering, making processors is hard.

[1] https://www.usenix.org/system/files/1309_14-17_mickens.pdf

Around 51:40 he talks about how semaphores are a bad idea, and says that something called "pseudotime" was a much better idea but which never caught on.

What is this "pseudotime"? Google turns up nothing. Am I mis-hearing what he said?

My calculus professor once said that complex doesn't mean complicated, but rather something more like a grouping/set/combination. He deliberately didn't use any of those terms, as all have a specific meaning in maths (and I don't have any better translation from Italian). I believe it's very true for complex numbers, and I'm trying to keep the same distinction in real life as well. So, for me, complex means putting together more things in a smart way to get a clearer - not more complicated - explanation/solution.

Around the 31 minute mark, shouldn't the gear and biology analogies be the other way around (biological systems being analogous to bloated software), or am I missing something?

Oh, the tactics should be realized by biological systems rather than by manual "brick stacking" of technical systems?

Someone asked Alan Kay an excellent question about the iPad, and his answer is so interesting that I'll transcribe it here.

To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)

Questioner: So you came up with the DynaPad --

Alan Kay: DynaBook.

Questioner: DynaBook!

Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now.

But does it tick you off that we can't even run Squeak on it now?

Alan Kay: Well, you can...

Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.

Alan Kay: Well, there's a variety of things.

See, I'll tell you what does tick me off, though.

Basically two things.

The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.

But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.

That couldn't be any more anti-personal-computing if you tried.

That's what ticks me off.

Then the lesser thing is that the user interface on the iPad is so bad.

Because they went for the lowest common denominator.

I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.

Because that's what Apple has catered to: they've catered to the absolute extreme.

But in between, people, you know, when you're two or three, you start using crayons, you start using tools.

And yeah, you can buy a capacitive pen for the iPad, but where do you put it?

So there's no place on the iPad for putting that capacitive pen.

So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.

And again, who cares?

There's nothing wrong with having something that is brain dead, and only shows ordinary media.

The problem is that people don't know it's brain dead.

And so it's actually replacing computers that can actually do more for children.

And to me, that's anti-ethical.

My favorite story in the Bible is the one of Esau.

Esau came back from hunting, and his brother Jacob was cooking up a pot of soup.

And Esau said "I'm hungry, I'd like a cup of soup."

And Jacob said "Well, I'll give it to you for your birthright."

And Esau was hungry, so he said "OK".

That's humanity.

Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.

So you could blame the schools.

I really blame Apple, because they know what they're doing.

And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.

But I blame Apple more for that.

I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.

I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.

So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.

So people do crazy things like attaching things into mail.

But that's not the same as finding something via search in a web browser.

So I think it's just completely messed up.

You know, it's the world that we're in.

It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.

Not good enough to learn anything worthwhile.

Do you know of any languages that you could use to specify precisely what a program should do?

Can you use that specification to verify if a program does what it should? I know BDD taken to extremes kinda does that.

But could a specification itself be verified - checked to be non-contradictory, and perhaps analyzed for other interesting things, like where its edges are that it does not cover?
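One lightweight take on the first question (a sketch under my own naming, roughly what property-based testing tools automate): write the specification as executable properties and try to falsify an implementation against them on random inputs. This checks conformance of a program to a spec, not the consistency of the spec itself.

```python
import random

# The spec for a sort function, written as executable properties.
def prop_sorted_output(f, xs):
    ys = f(xs)
    return all(a <= b for a, b in zip(ys, ys[1:]))

def prop_same_elements(f, xs):
    return sorted(f(xs)) == sorted(xs)

def check(f, properties, trials=1000):
    """Falsification by random testing: not a proof, but it finds
    many bugs and returns a concrete counterexample when it does."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
        for prop in properties:
            if not prop(f, xs):
                return xs
    return None

# sorted meets the spec; a "sort" that drops duplicates fails the
# same-elements property, and check() surfaces a witness list.
assert check(sorted, [prop_sorted_output, prop_same_elements]) is None
```

Checking the spec itself for contradictions or gaps is the harder problem; that is the territory of model checkers and proof assistants rather than testing.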

People who enjoyed this talk should check this out: http://www.infoq.com/presentations/Simple-Made-Easy

Wow, this is a really great presentation that resonates a lot with me.

I'm 15 minutes in and frankly I'm finding the video hard to follow. Is there an article that accompanies this that might help summarize his points?

What was he talking about at 51:45? Pseudo timer?

Pseudo-time -- I think he's talking about something called Croquet. There's a trail here: http://lambda-the-ultimate.org/node/2021/

Still no love for Heaviside and Poincaré? :(


Probably. Or they may have already seen it, or be able to tell from the first part that it deserves an upvote.

I'm not sure I have a problem with the behavior. The new links page cycles so fast that watching the whole video before upvoting would result in the post not making it to the front page.

Name recognition seems to encourage reflexive upvoting, which is one reason we often take names out of titles.

We've removed "Alan Kay" from this one. (Nothing personal—we're fans.)


To add a third reason: to save it and view it later.

I always watch conference talks at double speed, speaker speed permitting. Alan Kay's voice works fine.

(But I didn't upvote, and your question is valid)

I don't think I've ever upvoted after reading/watching what's been shared. I normally upvote, if at all, prior to clicking, otherwise I would never remember.

There was an Alan Kay video on the front page yesterday.
