Hacker News | lcuff's comments

I confess I was disappointed in the video. As someone who decades ago read Stephen Hawking's book A Brief History of Time, it doesn't substantially add to my (lame!) understanding of the Standard Model. It names the 17 entities in the model, classifies them (fermions vs bosons), makes the distinction between matter particles and force particles, chats about each of the four forces, and drops in a number of other factoids. All well and good, and if you had no previous exposure to the standard model, you'd learn a lot. But the contrast to videos in other realms I explore (3Blue1Brown in mathematics, Guitar instructional videos by folks who include tab + video of actual fingers + sound, and woodworking videos where tricky constructions are shown from many angles) casts this Standard Model video in a poor light. The visuals just recapitulate the words that are being spoken. Sigh. Maybe there's nothing else to show, but, as I led with, I ended up disappointed....

> All well and good, and if you had no previous exposure to the standard model, you'd learn a lot.

That is exactly what this video is about. If you want something to be more in-depth, this video is not going to help you. But that's okay.


Okay, but my main complaint is that the medium (video) feels very under-utilized.

>at a comfortable 10 point font size.

How lovely for you. As someone with significantly impaired vision, even when corrected, I have my font size set to 18, thank you very much. A coding standard that assumes a 10 point font size would violate the Americans with Disabilities Act's 'reasonable accommodation' mandate. I wasn't pushy enough to act on it, but it sure pissed me off when my fellow team members blew off my complaints about how we formatted our code. (Including two-space indents...grrrr.) My worse-than-20/200 vision (I can't see the big E on an eye chart) lets me see code, but legally, I'm blind in one eye.

I admit, it's a classic 'no perfect solution' scenario, because I like very long variable and function names. I write tiny functions (5-10 lines) because they only need one or two levels of indent. Some programmers I've worked with really dislike such small functions.


> A coding standard that assumes a 10 point font size would violate the Americans with Disabilities Act's 'reasonable accommodation' mandate.

Would it? You can still set the size to 18, you just might have to scroll or line wrap. That's a mild inconvenience, not "inaccessible".


I try to maintain a 100-character line width in most code, because I feel like it makes people more concise.

I’m curious about several statements you made; please take these as genuine and well-intended questions:

What’s wrong with two-space indents?

Would a bigger monitor with higher DPI solve some of these issues?

Have you considered a horizontal scroll wheel or similar? I think this is a Band-Aid to a bad pattern, but may be a legitimate option

You may appreciate python PEP8, which discusses things like highly nested code and functions being considered bad. I first followed PEP8 kicking and screaming, but I think it forced me to remove some bad programming habits and I now lint check my code habitually.


Sorry to take so long to answer. Good questions. Two-space indents, in (say) 30-line functions, get harder to discern as the indent levels pile up. This problem is exacerbated by bad eyesight.

Bigger monitor with higher DPI does help, yep.

Horizontal scroll: I suppose it could help. The more immediate tradeoff is that, when needed, I use a terminal at full width on a large monitor, and tolerate the fact that I'd prefer to have space for a web browser open on the same screen.

I'm a big fan of Sandi Metz, who advocates five-line routines. In Python and Ruby, this is possible, and the indentation problems go away. In C, it's more of a challenge.
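
To make that concrete, here's a hypothetical Python sketch of the style I'm describing (names and data shape are invented for illustration): instead of one 30-line routine with nested loops and conditionals, the logic splits into functions of a few lines each, so nothing needs more than one level of indent.

```python
# Hypothetical example of the "tiny functions" style: each function is
# a few lines long, with at most one level of indentation.

def active_rows(rows):
    # Keep only the rows marked active.
    return [r for r in rows if r.get("active")]

def row_total(row):
    # Sum the amounts attached to a single row.
    return sum(row.get("amounts", []))

def totals_report(rows):
    # Build a name -> total mapping from the active rows.
    return {r["name"]: row_total(r) for r in active_rows(rows)}
```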


So long as everyone adopts a consistent base, it doesn't matter whether the font setting says 2 or 200; the actual size each user sees is tuned to the overall visual scale they like best on their device.

It's one of the things that drives me nuts about shared Excel spreadsheets. Rather than just zoom in or out, someone will mess with the font size to make it fit their screen, and suddenly it's just screwy enough for someone else with different eyes on a different monitor who then goes to change it... you get the idea.


> A coding standard that assumes a 10 point font size would violate the Americans with Disabilities Act's 'reasonable accommodation' mandate.

Would it? I don't think having to make all my lines 44% shorter than they should be is reasonable; that's going to be a massive impingement on productivity.


44% shorter: is your claim that limiting line lengths to 80 or 100 characters is going to 'massively impinge' on your productivity? That seems unlikely to me.


It absolutely does. You might think it would just mean more lines, but that means your functions are longer so you have to break them up more, which means there's a hard limit on how big a coherent thing you can build. If you think it wouldn't matter, try limiting yourself to 60 or 40 character lines in a codebase and see how much it changes.


Write tiny functions because of aesthetics, not because the flow calls for it ಠ_ಠ


In this incident, as with Air France flight 447, pilot and co-pilot were holding the controls in opposite directions, and the software averaged the inputs. In this case the warning that the controls were mismatched was not of sufficiently high priority to be issued (other warnings took precedence: you're about to crash). This user interface just continues to appall me.

With mechanically joined controls, it is impossible for this to happen. I think if I were designing a modern aircraft, I might retain a physical linkage for just that reason.


I always wondered who even decided that averaging the input is a good idea.

It sounds like it makes sense at first glance, but if you think about it a little bit more it actually doesn't make any sense.

The average of two inputs is basically garbage, it doesn't do what either of the pilots want to do and it breaks feedback for both of the pilots.

After watching tons of Mentour Pilot videos (who, by the way, covered [0] this incident) I am convinced that this feature shouldn't exist at all.

And no, I don't think that I'm smarter than people who originally designed this system. I just think that this particular feature was not designed at all. It seems like an afterthought. Like, "hey, there is this corner case that we haven't thought about, what should we do if both pilots input something on the controls? - well, let's just average it, kinda makes sense, right?"

[0] https://www.youtube.com/watch?v=6tIVu0Dpc2o
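
A minimal sketch (my own toy model for discussion, not the real Airbus control law) of why the blended command is unhelpful when the pilots disagree: full nose-up plus full nose-down cancels to roughly nothing, and only a priority takeover breaks the tie.

```python
def blended_command(left, right, takeover=None):
    """Toy model of dual sidestick input, as described in this thread.

    left/right are stick deflections in [-1.0, 1.0]; takeover is
    'left' or 'right' while a priority button is held. This is a
    sketch for discussion, NOT the actual Airbus control law.
    """
    if takeover == "left":
        return left
    if takeover == "right":
        return right
    # With no takeover, the inputs are averaged: opposite full
    # deflections cancel out, satisfying neither pilot.
    return (left + right) / 2.0
```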


> After watching tons of Mentour Pilot videos (who, by the way, covered [0] this incident) I am convinced that this feature shouldn't exist at all.

There is some selection bias at play here. We don't know how many situations happened where averaging the input was the right thing to do and avoided an accident, as Mentour Pilot does not make videos about those.

I'm not saying averaging is good. I have no idea. But a number of videos about crashes (which I watch and think are awesome) are not a good reason to form beliefs.

> I don't think that I'm smarter than people who originally designed this system.

This sentence says one thing; the other sentences in your comment say the opposite. It certainly reads like you think you're smarter than those people. Which, as far as I know, could be true; no idea. My point is that a disclaimer does nothing if you actually make the mistake you know you should avoid.


> The average of two inputs is basically garbage, it doesn't do what either of the pilots want to do and it breaks feedback for both of the pilots.

I think it's done in case one of the sticks has a bit of drift. If there was an alarm for dual input it would constantly be going off in that case.
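
That motivation can be sketched as a deadband check (the threshold value here is hypothetical, not a real Airbus figure): the dual-input alarm fires only when both sticks are deflected beyond a small tolerance, so drift on an untouched stick doesn't cause a constant nuisance alarm.

```python
DRIFT_DEADBAND = 0.05  # hypothetical tolerance, not a real Airbus figure

def dual_input_alarm(left, right, deadband=DRIFT_DEADBAND):
    # Fire only when BOTH sticks are meaningfully deflected, so a
    # slightly drifting, untouched stick doesn't trigger the alarm.
    return abs(left) > deadband and abs(right) > deadband
```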


There is an alarm on dual inputs.


Except when there's not ... In this case, it was superseded by "more pressing" alarms, namely "pull up"/"you're too low".


Yeah, that's a good point, though: in most situations the other alarm is more urgent, so this prioritization makes sense. BUT in this case, not being aware of the dual input made the GPWS situation worse. So in this particular case it was not the right call.

Personally I would do a different type of alarm for dual input, like a big red light somewhere. Or just not allowing dual input somehow (always requiring the use of the takeover button).


I mean, what choice, besides averaging, would make sense? Completely disregarding one pilot's input seems worse, and averaging is what happens in a mechanically connected system. The crucial difference is that in that case the pilots can feel that this is happening. I don't know what sort of force feedback the Airbus sidesticks provide, but this lack of feedback seems to me to be the real root of the problem, not the averaging itself.


Disregarding one pilot's input seems better: one pilot can correctly fly the plane while the other does nothing, versus two pilots getting confused and flying the plane into the ground. Even better would be a system that somehow follows the "I have the stick" procedure, although I don't know if that is possible.

You are right, though, that either way force feedback makes sense. You could even just do a buzzing when there's dual input, like when you take your hands off the wheel in a car with lane-keeping assist.


Say one of the pilots is suicidal, or has had a heart attack and is unconscious while holding the stick in the wrong direction; how does the airplane know which input to ignore?


> I don't know what sort of force feedback the Airbus sidesticks provide,

None.


Mechanically joined flight controls typically have a linkage designed to break when sufficient force is applied. This can cause equally disastrous results when the two pilots are putting in different control inputs.

https://www.aopa.org/news-and-media/all-news/2000/february/p...


My dad taught instrument flying in fighter jets. He'd ride in the back seat, with the student in the front. The controls were linked together.

Against regulations, he carried with him a length of steel pipe. The problem was sometimes a student would freeze and hang onto the controls with all his might. The pipe was so my dad could beat him on the head until he let go, and save both their lives.

Fortunately, he was never forced to do this. But he said "I'll be damned if I let any student kill me!"


P.S. The thing about instrument flying is that your senses lie to you, and you need to rely on the instruments. A green student is at risk of panicking and believing the lies his inner ear is telling him (spatial disorientation). When JFK Jr. crashed in the mist at sunset, my dad passed by the TV when they reported it and said "spatial disorientation". It's killed a lot of pilots.

A major part of learning to fly on instruments is learning to ignore your body screaming at you that you're flying upside down.


And the reason they have this, is so that pilots can overcome a jam by breaking that linkage. Only half the plane will then be responding to the controls, but that's much better than none.


The rationale for this (I did some work on this system at Boeing) was that the pilots would not be fighting each other for control, they would be fighting a jam.

Flight controls at the time were not designed for dealing with a crazy or malicious pilot.


A stick to the head is the classic solution for a pilot going crazy on controls for whatever reasons. Sometimes the stick is not metaphorical.


Interestingly, when this happens on the 777 (and I guess the 787), the inputs are averaged, like on an Airbus.


More concretely, the warning that took precedence was the GPWS telling them to PULL UP. If this didn't convince the captain that flying towards the sea was a bad idea and he should in fact pull up, I'm not sure any other technical measures would have?


The mechanical force from the other pilot pulling up (as if his life depended on it) might very well have “convinced” him.


In this case the pilot attempting to take over failed to yell "I have control!". They should have grabbed the stick only after the pilot currently flying said "You have control!". It is quite obvious that the pilot who grabbed the stick simply panicked. If the controls had been linked, the two pilots would have fought each other and would likely have produced an equally bad result.

In the AF 447 case the pilot not flying made the request but did not wait for a response before fighting on the controls. The pilot not flying eventually got control, but the pilot initially flying panicked and started fighting on the controls.

Failure to properly request/acknowledge control handover will often create the opposite situation where each pilot thinks the other is flying. The results of that situation will be the same regardless of any mechanical control linkage.


> The results of that situation will be the same regardless of any mechanical control linkage.

I highly doubt that. In a fly-by-wire plane with mechanically linked controls the only possible source of force feedback on the controls is input from the other pilot. We humans have a very long evolutionary history of wrestling for control of the same stick. We can recognize that situation on a very deep instinctual level. We can also instinctively realize “that other guy is really pulling hard… am I in the wrong here?” If you remove the force feedback and just average the input then all this is lost.


If the pilots can't resolve their difference of opinion verbally, things will not go better when they try to physically overpower the other. There are many accidents in the Admiral Cloudberg corpus that involved pilots fighting each other on mechanically linked controls.


> If the pilots can't resolve their difference of opinion verbally, things will not go better when they try to physically overpower the other.

The primary problem is not that the pilots can’t resolve their difference of opinion, it’s that they are not aware that they have one.

> There are many accidents in the Admiral Cloudberg corpus that involved pilots fighting each other on mechanically linked controls.

How many of those accidents were in fly-by-wire planes? Again, the primary issue here is lack of feedback / ambiguity. If the plane is not fly-by-wire then it’s very hard for the pilots to understand that they are fighting each other, and not the plane.


Apparently, Airbus is working on force-feedback sidesticks now.

But yeah, they should have added something like a stick shaker to indicate the dual input.


I did a lot of the work in my 40 year software career as an individual, which meant it was on me to estimate the time of the task. My first estimate was almost always an "if nothing goes wrong" estimate. I would attempt to make a more accurate estimate by asking myself "is there a 50% chance I could finish early?". I considered that a 'true' estimate, and could rarely bring myself to offer that estimate 'up the chain' (I'm a wimp ...). When I hear "it's going to be tight for Q2", in the contexts I worked in, that meant "there's no hope". None of this invalidates the notion of a carefulness knob, but I do kinda laugh at the tenor of the imagined conversations that attribute a lot more accuracy to the original estimate than I ever found in reality in my career. Retired 5 years now; maybe some magic has happened while I wasn't looking.


More than once I've used the xkcd method (Pull a gut number out of thin air, then double the numerator and increment the unit e.g. 1 hour -> 2 days, 3 weeks -> 6 months). When dealing with certain customers this has proven disappointingly realistic.
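
The xkcd method is mechanical enough to write down; here is a toy sketch (the unit ladder is my own choice of steps):

```python
UNITS = ["minutes", "hours", "days", "weeks", "months", "years"]

def xkcd_estimate(value, unit):
    # Double the number and bump the unit one step up the ladder,
    # e.g. 1 hour -> 2 days, 3 weeks -> 6 months.
    i = UNITS.index(unit)
    bigger = UNITS[min(i + 1, len(UNITS) - 1)]
    return 2 * value, bigger
```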


"Hard times create strong men". At what age do the hard times start for this to be true? Do children who are victims of abuse become strong? Some perhaps, but I suspect not more than a control group. Do the hard times occur when the people are full adults? Anecdotally in my life, I've seen hard times be precursors to people cope by using drink and drugs, and seen hard times to lead other people to step up to the challenge. And what is the definition of strong, here? Seems so vague as to be pointless. This old saying seems like complete bunk to me.


Does anybody have a pointer to a good description of what Alan Kay means by messaging?



In practice it means late binding [1].

[1] https://en.wikipedia.org/wiki/Late_binding
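
A rough illustration in Python (whose attribute lookup is itself late-bound): the handler that runs is chosen by the receiver at the moment the message is sent, not fixed at compile time. The class and method names here are made up for the example.

```python
def send(receiver, message, *args):
    # Smalltalk-style message send: look up the handler on the
    # receiver at call time. Any object that responds to the message
    # works, regardless of its declared type.
    return getattr(receiver, message)(*args)

class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance
```

Because the lookup happens per call, a different receiver (or a method added at runtime) can handle the same message without the sender changing at all.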


When I hear Alan Kay talk dismissively about current applications and interfaces, and the lack of attention given to what was developed at PARC 40 or 50 years ago, I often wish he was more explicit about WHAT was developed. (I have watched the mother of all demos, which is truly awesome, but partial information). This video is another significant chunk, and it puts modern interfaces to shame for their lack of power and imagination. The depth of power here is analogous to the power of Lispy languages, where, until you really understand the concepts, you are ignorant as to how (for example) C++ is in no way "Object Oriented" in the way Alan Kay meant it, how impoverished it is, and how critical late binding is.


You might be interested in Brad Myers' new book "Pick, Click, Flick! The Story of Interaction Techniques". He's a prominent researcher in HCI at Carnegie Mellon University (and one of my colleagues). It gives a great overview of the history of how we interact with computers.

https://www.cs.cmu.edu/~bam/ixtbook/

Here's a summary of the book: This book provides a comprehensive study of the many ways to interact with computers and computerized devices. An “interaction technique” starts when the user performs an action that causes an electronic device to respond, and includes the direct feedback from the device to the user. Examples include physical buttons and switches, on-screen menus and scrollbars operated by a mouse, touchscreen widgets and gestures such as flick-to-scroll, text entry on computers and touchscreens, consumer electronic controls such as remote controls, game controllers, input for virtual reality systems like waving a Nintendo Wii wand or your hands in front of a Microsoft Kinect, interactions with conversational agents such as Apple Siri, Google Assistant, Amazon Alexa or Microsoft Cortana, and adaptations of all of these for people with disabilities.

The book starts with a history of the invention and development of these techniques, discusses the various options used today, and continues on to the future with the latest research on interaction techniques such as presented at academic conferences. It features summaries of interviews with the original inventors of some interaction techniques such as Larry Tesler (copy-and-paste), David Canfield Smith (the desktop and icons), Dan Bricklin (spreadsheets), Loren Brichter (Pull-to-Refresh), Bill Atkinson (Menu Bar and HyperCard), Ted Selker (IBM TrackPoint pointing stick), and many others. Sections also cover how to use, model, implement, and evaluate new interaction techniques.

The goal of the book is to be useful for anyone interested in why we interact with electronic devices the way we do, to designers creating the interaction techniques of tomorrow who need to know the options and constraints and what has been tried, and even for implementers and consumers who want to get the most out of their interaction techniques.


I’m not sure how this puts things now to shame. Yes, it’s very playful and neat, but it’s wildly impractical. It doesn’t really “do” anything by itself. Instead you have to build up all the interactions, most of which are physical, and know how to do so. And what’s the result? Some spring interactions and markers on a bitmap?

Compare this to how the web or apps are used today. They’re task/purpose driven, and all of the UX has already been thought out. It’s a far simpler and more straightforward approach to build the functionality you want and refine it over time than to give people a blank canvas that does everything and nothing and tell them to go figure it out themselves.

It seems to me this is really just wanting to take the underpinnings of small talk and turn it into a physical UI representation. That’s fun… but now what? I’m zero percent surprised this hasn’t lasted.


> Wildly impractical

I couldn't disagree more strongly. Part of the vision in PARC (and Alan Kay's life work) was to create learning environments for children, and this demo is set in that realm, so having kids figure it out themselves is enormously empowering. Obviously not what you want if you're trying to sell widgets, but the widget-sellers themselves could benefit from this kind of environment to develop a usable widget-buying experience.

> The UX has already been thought out ... [and] ... refine[d] over time.

In such an impoverished way, in my experience. One of the things Alan Kay has said (elsewhere) is that the environment at PARC allowed them to experiment with radically different user interfaces in the afternoon, after thinking about it at lunch. This implies an in-depth knowledge of the tools, yes, but developers need that in any environment. I'm doing volunteer work now and having to learn to use several software systems (EventBrite, MailChimp, Salesforce). The difficulty of using these systems is in range for me as a retired software engineer (although endlessly annoying), but other volunteers need extensive training. There is nothing 'refined' about them, in terms of ease of use for the end users. The current tools for developing the interfaces (HTML, CSS, SQL, JavaScript, etc.) are also primitive compared to the environment shown here. Again, it's a lengthy task to become a power user of the system demoed here, but no more so than to become a power user of the 10 or more technologies you need to learn to make a modern web site.

As a web site developer, with these kinds of tools, you'd develop your own tools and abstractions (an intermediate layer) to build with, none of which would appear anything like the physical world manipulation tools that have been built here.


>I often wish he was more explicit about WHAT was developed. (I have watched the mother of all demos, which is truly awesome, but partial information).

Here's one of several demos Kay has done of the Squeak/Croquet/Etoys/Frank systems, all built to extend the PARC originals. The others are good, too:

https://www.youtube.com/watch?v=prIwpKL57dM


In a more consumer friendly architecture, the bank would send the consumer a list of the current organizations that have a recurring charge 'relationship' and ask the consumer which ones should be transferred to the new card. It takes it from automatic, to semi-automatic, but much more consumer friendly. Needs lots of security around it, of course.
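
The core of that semi-automatic flow is tiny; a hypothetical sketch (all names invented, and the real version would of course sit behind authentication and auditing):

```python
def merchants_to_carry_over(card_on_file_merchants, approved_by_customer):
    # Semi-automatic card reissue: the bank shows the customer every
    # merchant with a recurring-charge relationship and carries over
    # only the ones the customer explicitly approves.
    return [m for m in card_on_file_merchants if m in approved_by_customer]
```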


Evokes my favorite Warren Buffett quote

"Basically, when you get to my age, you'll really measure your success in life by how many of the people you want to have love you actually do love you."


>>> In place of anything like a novel proper, we get a would-be bildungsroman breaking through to the surface in disparate fragments. These scraps are Winston’s yearnings, memories, sensual instincts, which have, as yet, somehow gone unmurdered by the regime. The entire state-sponsored enterprise of Pavlovian sadism in Oceania is devoted to snuffing out this remnant interiority.

The details of Orwell's life were interesting, but the above-quoted paragraph lands as highbrow prose that doesn't say much. I had to look up "bildungsroman": "A novel whose principal subject is the moral, psychological, and intellectual development of a usually youthful main character". Oh my. It was/is a proper novel. I disagree that it should be characterized as a bildungsroman, with all the caveats and flourishes. In fact, it's more about the protagonist's discovery of the true moral depravity of the culture he lives in. The creative rendering of the ugliness of the culture is made even more compelling for me by the fact that the storyline avoids the conventional "Shape of Stories" patterns described by Kurt Vonnegut. It ends with the protagonist having all creative life and individuality crushed out of him. Not an easy end to encompass. No happy ending here.


I think the tortured prose comes from an effort to acknowledge that 1984 isn't a conventional "good novel" while still avoiding saying that Orwell's popularity comes from his writing's continued relevance: the modern world can certainly seem "Orwellian".

This quote gives an idea of how much the author discounts the validity of Orwell's political insight: "The meeting had been ominous to Orwell: It placed in his head the idea of Roosevelt, Churchill, and Stalin divvying up the postwar world, leading to a global triopoly of super-states. The man can be forgiven for pouring every ounce of his grief, self-pity, paranoia (literary lore had it that he thought Stalin might have an ice pick with his name on it), and embittered egoism into the predicament of his latest protagonist, Winston Smith."

