Hacker News
The Utopian UI Architect (medium.com)
174 points by kawera on Nov 29, 2015 | 54 comments

I think Bret Victor is a living genius. So much of what is designed today implicitly accepts the layers of nonsense imposed on us as "the way to do things", convinced that nothing better is possible. It is somehow hard to see this when you are the person who prefers a horse cart to a car, accusing the Model T of being impractical ("if it was so great, why is everyone still using carriages?") and praising the carriage as a "well-debugged and understood system". He is attempting a rethink of the entire field from first principles, asking at every decision point whether we needlessly separate modalities. He is one of the few people who gets me excited about computing and its future.

Tangential, but it always irks me when people who do interesting things get exalted as geniuses. To me, it's just a roundabout way of allowing yourself to lower your expectations: "that guy/gal is a genius, so clearly I shouldn't even bother trying to accomplish things on their level".

Recently, I've been really impressed by the stuff Bret Victor, Vi Hart, and Nicky Case have been doing. But I don't put them on a pedestal. I dig into their source code, learn some lessons, and set my goals a little higher. That's how we make progress.

This isn't just a tech problem, either. The concept of "canonical" artists, writers, and musicians can really stifle creativity and make people aim way lower than they ought to.

I haven't thought of it this way before. Thanks for changing my perspective.

And I think he's heavily overrated because he presents well.

For one thing, his basic premise is not really much different from Engelbart and Kay.

For another, he doesn't propose specific processes or nuts-and-bolts design. The thinking trails off after "and then we make it more interactive..." Again, Engelbart did this. We've incorporated some pieces of it. We can still refine things. But the design problems always exist in the realm of compromise: if we automate more, we also have to specify more. General-purpose solutions tend to fall back towards the tried and true. When you put up a slick interactive demo, you have the benefit of being able to solve a very small problem, very precisely.

You know what UX problem needs the most solving? Documentation. You made a thing to automate a thing. But you didn't document it. So it's too non-obvious to bother with. So it doesn't get used. The best work that Victor, Case, et al. have done is fundamentally a form of documentation, not expressive tools. I've seen lots of folks try to take the lesson to heart and make their app more like that documentation. But it's a mess, because it tends to leave out some level of power or performance necessary to be a good tool. It's extremely expensive to add the interactivity in many real-world cases.

I have never followed him before, and the article makes little mention of his actual accomplishments. Do you think he is a living genius because of his talks/ideas, or has he actually implemented anything revolutionary to back up those ideas?

My opinion is based on his writings and talks, available on his personal homepage worrydream.com; he has prototyped a number of his ideas as interactive webpages, responsive development environments, demos as part of his talks, etc. The genius scale is not usually known to be an objective and absolute system of measurement :) Part of the magic, I believe, is that he was outside of any narrow corporate project and the confines of the paper-publishing and grant system, and was able to explore and develop some of his ideas with the free thinking and depth they required, on his own time. Unfortunately, these days true freedom lies outside of both industry and academia.

I also reject in more essential terms the line of thinking around "actually implemented anything revolutionary as opposed to talks/ideas". Don't underestimate the power of well-argued ideas. See the horse carriages above. You need to start somewhere, and that somewhere is ideas, sketches, equations, talks. If it doesn't make sense to you, fine. If it makes sense to some people, they'll go and reuse some of the ideas in a hundred different projects and implementations. The Turing machine was a mathematical device invented to solve a rather abstract mathematical/logical problem; it took some time to get from there to the iPhone. You need free space for the next Turing machine to come from. Those types of ideas are very rare.

Side note: it is curious to me that some of the most interesting projects start out as "side projects" of PhD students, since they have the free time (the true quantity of which is rarely admitted), usually not directly related to the narrow research focus that whatever P.I. they work for, below minimum wage, asked them to pursue.

I might be missing something, but a website that intentionally breaks scrolling and wreaks havoc with the standard page display doesn't really speak too highly of UX genius.

I'm not saying he's not a genius, I'm just frustrated that he's making such elementary mistakes on his own website for the sake of graphic design "beauty". It's almost ironic because he has articles up there that kinda rant about just this type of thing...

Everything about the website screams "scroll down" (Chrome, Win10), but the scroll wheel doesn't work and there is no middle-click option(!). On MS Edge the scroll does work, but veeeeeery veeeeeeery sloooooooowly (and still no middle click).

What you're missing is that Bret Victor isn't the author of that web page.

If you want to see some of his inspired UX, view https://vimeo.com/67076984

Ah thanks, I figured someone with that amount of work (just from that page) wouldn't make such an obvious blunder.

If you mean http://worrydream.com/ it is made by Bret Victor himself.

I think it'd be fair to refer to him as such once people use his ideas as inspiration to create something revolutionary, but it seems weird to do so preemptively.

You may not consider him a genius. But Bret does have some pretty innovative ideas on programming and design (and more). Good examples:

Inventing on Principle: https://www.youtube.com/watch?v=PUv66718DII

The Humane Representation of Thought: https://vimeo.com/115154289

This reminds me of a friend of mine who I worked with years ago.

His background was graphic design or photography or something but he had worked at a very high level at a revolutionary software company.

He then went to a small company that was acquired by Google. Once there, he was working with these freshly minted Stanford HCI PhDs.

They wanted to do studies and gather empirical data before making even the smallest decision. My friend would say "trust me, I've seen this before, this is the correct answer" but he'd get overruled and they'd go through all this effort and end up at the very same conclusion.

Some people just "get it". Pay attention to them.

How do you know who "gets it", though, and who is just a cult of personality selling you ideas that may be wrong? I'm sure once his ideas were consistently backed up by the data, they paid more and more attention to them.

This is a good question but I think you are exaggerating what OP meant.

What he says is that some people have very good intuition (built on experience and some perspective) which allows them to make good decisions without needing to conduct lengthy studies first.

This might not be good scientific inquiry, but it does make them very effective at moving things forward. Furthermore, much of that intuition is built around a fairly unbiased view of the world, which means they might be wrong, but they will be quick to adjust.

Compare that to those PhD students who have only learned a method and have no way forward other than asking others what the right decision is.

It's like reading a book and not understanding what it's about.

And keep in mind that getting it your way doesn't mean you're right: data can back you up repeatedly and you can still end up failing anyway, because you are measuring the wrong things.

I don't know why you're downvoted, as this is a very real problem in many sectors with self-proclaimed "gurus".

Anyone who "gets it" will be measurably right often enough that nobody should have to take things on faith for very long.

You could use this same argument to argue against testing software. If one of the people who "gets it" has written this piece of software then there should be no reason to test it because they _know_ it works. Anyone coming in for a job interview at a professional company with this attitude will not be taken seriously.

"They" are human and are prone to the exact same errors and mistakes as anyone else. Google is technologically successful because they try to base their decision on analyzed data instead of a gut feeling.

And yet Android UI is mediocre and isn't a qualitative improvement on the previous status quo.

I think the point of GP is that listening to the someone who "gets it" can speed up the development process. Indecision has costs, and you can run studies in parallel anyway. Sometimes the costs of having to backtrack every now and then are outweighed by the benefits of moving fast.

RE arguments against testing in code - testing is cool and all, but at some point you have to ask yourself whether you want to ship a product or a test suite.

BTW, the whole anecdote reminds me of a story from Microsoft about problems caused by a group of PhDs:


"Getting it" can also mean having already tried something before, or having tried enough things that are similar that you can triangulate on what something would actually be like.

I'm all for testing and empirical studies, but it can be taken too far. Zynga focuses so much on measuring and testing, that it sucks all the fun and personality out of their game designs.

Empirical data tends to find local maxima. You need someone with a vision of how it all fits together and which guiding principles to embrace.

The projects in which Bret Victor is involved have serious implications for HCI. Implementing them in a way that makes them impactful requires ubiquity, and that NECESSARILY takes hundreds of man-years across many companies/products/individuals.

It would be a waste of his time to do more "implementing" than quick sketches for the purpose of communicating the ideas. His ideas are his "product".

I associate the word genius with thinking and ideas rather than production. Which is to say, accomplishments can be ideas or new approaches (not just implementations). Wikipedia helpfully mentions "new advances in a domain of knowledge". Many people here are not convinced that an advance has been made without an implementation, without functioning proof. Maybe you would say you can only look back later, once ideas have been proven, to classify someone as a genius. Not everyone is like that. For me, genius is independent of proven success. It's more a general mode of operation, a type of creativity coupled with directed effort, and you know it when you see it. I think this dichotomy of people is interesting and helpful. Maybe the people calling Bret a genius are just in the latter category, and you're in the former.

You must have an idea to implement. Innovation begins with philosophers who think up thought provoking concepts, then scientists set out to discover/prove/disprove these concepts, and engineers implement. You can be a genius at each one of these stages.

Sure, but can these ideas be called genius until these concepts are tested and implemented? These ideas may sound great on paper (or on computers as his interface of choice), but the implementation details may not necessarily succeed and create this revolutionary new paradigm.

You are right that only time will tell if Bret's ideas are long-lasting, and it might be a bit premature to put him on a pedestal. But it can be said that he has already inspired a lot of people with his work, which to me is a sign that these ideas have been bubbling under in other people's minds too.

Yes, his work has directly influenced key projects at Apple (iPad, Swift) and by Chris Granger (Light Table, Ada), and it has inspired countless other projects, initiatives, and people interested in UX.

I agree wholeheartedly! The premise of his "Magic Ink" article [1], "Interactivity Considered Harmful", ruffled my feathers until I read his explanation.

He argues that interactivity is actually a failure state of software that doesn't figure out what the user wants from history and context, without demanding their time and attention.

>"I argue that interactivity is actually a curse for users and a crutch for designers, and users’ goals can be better satisfied through other means."

>"Information software design can be seen as the design of context-sensitive information graphics. I demonstrate the crucial role of information graphic design, and present three approaches to context-sensitivity, of which interactivity is the last resort."

Working at TomTom on GPS navigation really drove that one home, where an "interactive" user interface could cost people their lives. Imagine a popup that said "Did you really mean to miss that exit? [yes] [no]", instead of just recomputing the route without asking.

[1] http://worrydream.com/MagicInk/#interactivity_considered_har...

Bret is the real thing. His video talks are epic.

I'd like to put him and Loren Brichter (another genius) in a room and be a fly on the wall, and listen.

His ideas are inspiring. But I'm a "mechanical" guy, I want to know how it works. Stop me if I'm wrong, but there seems to be very little on that front.

Look, for instance, at Light Table, which sort of scratches the same visualization itch. The demos are awesome, but when you think about what it takes to make them work, you realize it's a lot of case-by-case work. Work people won't always be willing to put in to have a nicer visualization of a problem instance. It's dead in the water for general debugging, for instance. I can see it working for specialized frameworks (animation...) and scientific research, however.

So I'm interested, but not excited enough to jump on the bandwagon before I know where it's headed.

I fully agree. Does anyone have any insight into why he doesn't make the source code he uses in his presentations (e.g. the demo in 'Inventing on Principle') available? It would be hugely instructive and joyful to play with that code :)

Edit: typo

One of the most useful concepts in user interfaces is not visual. It comes from the original Macintosh User Interface Guidelines. "You should never have to tell the computer something it already knows."

Do your web site forms do form fill properly?

There's an RFC for that.


Economy, ~superconductivity, !resistance. Seek them everywhere.

Good that he works now for SAP. The software needs a new UI.

The SAP ERP software UI dates back to the 80s and is still basically all text-based; even the tables are still made out of ASCII chars.

Screenshot of a text-based table (SAP ERP): https://mysapbasis.files.wordpress.com/2015/02/screenshot1.j... ... the table borders are drawn over the all-text (= a lot of space and tab chars) table.

Text fields cannot be longer than 40 chars, multiline text fields don't exist (basically one text field per line), and it looks like a dinosaur UI in 2015, optimized for 14" CRT monitors. Their HTML4 NetWeaver UI and Java "experiments" failed.

The ABAP script language dates back to the 80s too and looks like COBOL. https://en.wikipedia.org/wiki/ABAP

I was under the impression he works indirectly for SAP via this company that SAP funded (search for similar situation with Vi Hart and/or Alan Kay).

Can't argue the old SAP ERP looks outdated (but ... somehow it works :)).

Recently SAP released a new product called Cloud for Analytics; here are some of the screenshots:





There is still plenty of room to improve ...

PS: I'm part of the Cloud for Analytics team.

(disclaimer: I'm a 23-year-old IT dude) See, it's interesting: I would rather use the old UI over what you show here. It could be that I'm not really seeing everything, but it looks like all you did was update the graphics and add more white space, without any real UX improvements. I shudder thinking about trying to explain this "improvement" to whichever dept (finance? marketing? analytics?) this software is meant for. The one thing all my colleagues hate is not being able to see all the data (well, really, not being able to see as much data as possible); give 'em cramped boxes over huge whitewashed screens any day of the week.

I worked on the table for a different SAP product (Lumira), and I think ours works pretty well. It's for business analysts and lets you stack hierarchies in a bunch of different ways. For a large overview of data, it's better to use a different visualization anyway, which was a click away in Lumira.


Ugh, this almost deserves an NSFW tag. Just thinking about the pile of legacy code that must be sitting behind this horror gives me nausea.

Weirdly enough, this goes even to the database:

- I've seen my fair share of columns typed as fixed-width NCHAR. Yeah, a "89.99" order total you see on the screen might be stored internally as `000000000089.99`.

- Booleans? Why not use `x`, ` ` and `#` instead of true, false and null?
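To make the pain concrete, here is a small sketch of the decoding glue this kind of schema forces on client code. The helper names and the `'#'`-means-null convention are illustrative assumptions based on the description above, not an actual SAP API:

```javascript
// Hypothetical helpers for decoding SAP-style column values as described above.

// Fixed-width NCHAR numeric: "000000000089.99" -> 89.99.
// parseFloat happily ignores the zero-padding.
const parseSapAmount = (raw) => parseFloat(raw);

// SAP-style "boolean": 'x' = true, ' ' = false, '#' = unknown/null.
const parseSapFlag = (raw) => {
  if (raw === 'x' || raw === 'X') return true;
  if (raw === ' ' || raw === '') return false;
  return null; // '#' or anything unexpected
};

console.log(parseSapAmount('000000000089.99')); // 89.99
console.log(parseSapFlag('x'), parseSapFlag(' '), parseSapFlag('#')); // true false null
```

Every consumer of the data has to carry this mapping around, which is exactly the kind of thing the database type system should have handled.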

I really enjoy Bret Victor's talks. Here is one where he discusses his driving principle: Creators need an immediate connection to what they create (e.g. immediate feedback)


Along these lines, this is why I love hot-loading code. For example, there's react-hot-loader [0] (which was actually superseded already by react-transform [1]). This enables you to iterate much more quickly while you're building an interface. Along that same line, the same guy created redux and redux-devtools [2], which also enables you to iterate quickly on application code.

I've been using hot-loading CSS, React, and Redux for the past few months, and it's an amazing experience. It enables you to iterate quickly and see changes immediately. (Admittedly, as a project grows, there's a small delay between saving and seeing the change. I'm not sure if that's avoidable or not... But it's still much faster than having to do a full-page reload.)

[0] https://gaearon.github.io/react-hot-loader/

[1] https://github.com/gaearon/react-transform-boilerplate

[2] https://github.com/gaearon/redux-devtools

Check out Elm [1], it had hot-swapping and a time travelling debugger even before React.

[1] http://www.elm-lang.org

If you like hot-loading code you should checkout this strangeloop video of Bruce Hauman talking about devcards


A small side track from the topic: I've been evaluating data querying and caching (or syncing) options for building modern JS apps. Public GraphQL solutions don't seem practical enough for fast prototyping yet, but redux looks like a promising middle ground. Have you done production apps with it? Any downsides?

I'm currently in process of rewriting our work application from AngularJS to Redux.

The downside with redux is that simpler things can take a bit more code than you might wish: you have to add an action handler/reducer, an action creator, etc. However, it's generally very easy to extend previously written code, and it's very easy to reason about the data/logic flow.
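For a sense of the boilerplate involved, here is a minimal sketch of that pattern: one action type, one action creator, and one reducer. The feature (`ADD_TODO`/`addTodo`) is a hypothetical example, and the store is simulated by calling the reducer directly so the snippet stands alone without the redux package:

```javascript
// Action type constant
const ADD_TODO = 'ADD_TODO';

// Action creator: builds a plain action object
const addTodo = (text) => ({ type: ADD_TODO, text });

// Reducer: pure function from (state, action) to the next state
const todos = (state = [], action) => {
  switch (action.type) {
    case ADD_TODO:
      return [...state, { text: action.text, done: false }];
    default:
      return state;
  }
};

// Simulate what a store would do: initialize, then apply an action
let state = todos(undefined, { type: '@@INIT' });
state = todos(state, addTodo('write docs'));
console.log(state); // [ { text: 'write docs', done: false } ]
```

Three pieces of code for one small feature is the cost; the payoff is that every state transition is an explicit, traceable function call.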

Another pitfall with redux is that it doesn't provide any help when you have to react to changes. A few days ago on reddit I wrote a small example explaining what I mean [0].

You might be interested in checking out relay-local-schema [1], it looks promising for experimenting with GraphQL and Relay.

[0] https://www.reddit.com/r/javascript/comments/3u0167/getting_...

[1] https://github.com/relay-tools/relay-local-schema

It's a pity the code isn't open but I'm sure there's a reason.

It's great that he questions everything and proposes radical new approaches, but at some point, you have to actually commit and go build something. Revolutionary ideas and spirit are not a revolution.

I also have a lot of highly-opinionated grandiose ideas. Perhaps I should put up a website and start giving talks.

I totally disagree. For example, Edward Tufte never built some grandiose project around his ideas, but he has influenced thousands of people with his books, projects, and talks. We might evaluate his legacy in 50 years, and it might be bigger than any single application of his ideas.

Same with Victor: I think it's better that he focuses on exploring the idea space and finding even better approaches. If his ideas are any good, there will be modern Henry Fords who will assemble teams and resources and build businesses around them. The Henry Fords will potentially reap millions of dollars in profits.

Bret Victor will have his legacy as a researcher and explorer of ideas. He will never get tons of money out of his ideas, but it seems that is the path he has deliberately chosen. His name might be mentioned in human-computer interface research 100 years from now, alongside Engelbart, Kay, etc.

You should do it and see if you have any original ideas others find truly interesting. It's hard to have genuinely new things to say. I think it's fine to build, it's fine to think and share, and it's fine to do both. Why so many "musts"? He has built very fine prototypes, btw.

If you inspire people to follow through and experiment with your ideas, go ahead.

Don't discount the difficulty, even after coming up with a brilliant idea, of communicating that idea effectively and in a way that influences people to see its potential.

Fascinating. Didn't realize how hungry I was for more of the human story behind Bret.

It certainly makes me wonder how much creative talent is locked up within big corporations. All you end up seeing is super-refined output, like a single polished grain of sake rice. Since leaving Apple, Bret has undoubtedly made much more far-reaching contributions, which can be attributed to his process of thinking aloud and publishing his thought process.

I find Bret Victor's talks quite inspiring and would love to collaborate with him on interfaces for collaborative mathematics. I think he is misguided, though, on the "Kill Math" angle. There are two different kinds of math: 1) The well-understood kind, for which there should indeed exist much better interfaces than the "freakish manipulation of abstract symbols". 2) The new kind, where you are not sure about your concepts yet. For the new kind it is ALSO (maybe even more so?) important to be able to approach it via (virtual) physical experiments/models, but in the end the new territory has to be mapped out by symbols. By logic. Only then can the full power of math be realized. But of course there is no reason why working with logic should NOT be supported by the computer; actually, I would think the computer is ideally suited to help with that task.

worrydream.com has a lot of very interesting material on it that is presented in a way that will appeal to students in the 16+ age range. Fortunately, the College network PCs do have Chrome available (not happening in Firefox on Linux).

I did notice that the OA is illustrated with really nice pictures of a studio/workspace furnished with books, notice boards, and desks alive with paper illustrations. When I read the tag line...

"An ex-Apple interface designer’s 40-year plan to redesign not just the way we use computers, but the way we think with them"

... I thought the article might be about Jef Raskin and the Humane Interface for a moment.



They could have quoted Jaron Lanier, he used to make people impersonate I-cant-recall-what-insect in VR suits, mapping various limbs and senses together. To move you had to remap your own senses of mobility to the animal. A great way to rethink and learn.

