Bret Victor: Seeing Spaces [video] (vimeo.com)
302 points by zindlerb on June 11, 2014 | 83 comments

Great presentation as usual. One fundamental tension I see in much of the work he does is between purpose-built and general-purpose tooling and environments.

The challenge, in both the maker space and much of the visual learning and programming material he has done previously, is that each is incredibly time-consuming to adapt to each new project. In the real world, even similar tasks within projects in the same domain often have enough subtle differences that reuse is impossible or very costly.

That isn't to say these challenges are insurmountable, but maybe much of the focus needs to be on meta-tooling that accelerates the work of experts building these purpose-built environments (as opposed to making generic tooling).

Inspiring stuff.

Especially for programming, the tools have to be narrowly tailored to the examples since you're constantly wrestling with the specter of Turing-completeness. Any given program is an instance of an infinite number of more general classes of programs, and it's the tool designer's job to choose which dimensions of the design space are meaningful and important enough to be worth simultaneously visualizing the consequences of possible alternatives. I think it's going to take some very judicious integration of recent work on modularity-enhancing programming paradigms into the design of reflective language implementations before it becomes tractable to build responsive special-purpose reflective tools on top of a generic infrastructure. Or at least that's the strategy I'm trying.

That strategy led to quite the run-on sentence. You have much work ahead of you, friend, from that appearance alone.

Agreed, I've been thinking too much in code, not enough in English.

> purpose-built and general-purpose tooling and environments

That seemed to me to be a big unresolved tension in his comparison between command centers and physical workshops. Command centers still just have lots of general-purpose workstations. TV stations do much better on this point, interestingly.

While they all allow for seeing, I wonder what kinds of improvements we could get with purpose-built HIDs.

I think software engineering and the fundamentals of coding have always had a bias towards those who can conceptualize ideas in the abstract, then build on the assumption that those concepts are happening regardless of their ability to see them.

This is fine, except that it limits those who need to tinker in order to find out how those concepts work. When the elements are visually recognizable and physically manipulable, you can tinker without having to hold the entire chain of concepts in your mind. It reduces the load and increases the likelihood of 'playing around'.

I hope some day more of Victor's ideas can be realized through the understanding that visualizing processes allows us to use more of our brain to design and develop our products -- not to mention stumble upon and explore unexpected outcomes.

I think there's an important misconception about Victor's work and ideas. Augmenting human intellect is not only about visualization; it's about gathering and merging symbolic, interactive, and visual representations in a single tool. He made this point in this conference†, quoting research from Jerome Bruner††. He shows the example of an electric circuit: increasing some value on a resistance and seeing the change reverberate across all the plots is equally symbolic, interactive, and visual.


†† https://en.wikipedia.org/wiki/Jerome_Bruner

This is also a huge concept in Mindstorms, a book that shares many of Victor's goals and is a stated influence of his. The idea there is to expose children to tools for thought regardless of their form. Papert's experience is part visual and part formal-linguistic -- he built LOGO.

I can't tell if everyone is like that (I assume not), but I am extremely reliant on visualization, and I suppose Bret is also. I believe profoundly in its power, partly because it's the only way I can really get things done properly, which is why those tools resonate so much with me. I have a friend however that shudders every time I mention making programming more visual.

I mean, if I'm not visualizing something, I can perhaps find some solution in a logical way by following guidelines -- much like following a recipe to solve an equation, or doing trial and error on an algebraic problem. But critically, I can't create this way. I may stumble onto something useful, but it's an entirely different process from creativity -- its efficiency is so much lower that it's qualitatively different.

But my difficulty there isn't fundamental, I think. I believe you can be creative purely 'algebraically', though I have no experience with it -- and I do believe some people have the same efficiency that way that I have when I think visually, and perhaps stumble a little more when trying to visualize things themselves.

I do think this diversity is quite good, but that's missing the point. My point is that Bret's tools shouldn't be universally essential, but are probably universally beneficial. And for some people, like me, they'd be simply enabling.

To give an example: when I see a system of linear equations, I don't think of a bunch of steps to solve it, or of the number of cases the solutions may fall into in terms of constraints on variables, and so on. I think of the image hyperplane, which can be pictured as a plane in 3D, and the kernel, which may be, for example, a line not on the image. Then I wonder whether the kernel is non-null, what funny things this matrix is doing to the vectors (are they rotating, contracting, and so on), or what the invariant subspaces are. I can answer most questions one could ask about such a system, but in a distinct way -- I'm not sure if it's more efficient or not (and that may depend on the task).
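That way of reading a system can be made concrete in code. A toy pure-Python sketch (the 3x3 matrices are hypothetical, chosen just for illustration): whether the kernel is trivial -- and hence whether Ax = b has a unique solution for every b -- falls out of one determinant check, with no row-reduction steps.

```python
def det3(m):
    """Determinant of a 3x3 matrix (cofactor expansion along the first row)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def kernel_is_trivial(m):
    """The kernel is just {0} exactly when the determinant is nonzero."""
    return det3(m) != 0

# A projection onto the xy-plane: it squashes the z-axis, so its kernel
# (the z-axis) is non-null and its image is a plane, not all of 3D.
projection = [[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]]

# A shear with det != 0: the kernel is {0} and the image is all of 3D,
# so Ax = b has exactly one solution for every b.
shear = [[1, 1, 0],
         [0, 1, 0],
         [0, 0, 1]]

print(kernel_is_trivial(projection))  # False
print(kernel_is_trivial(shear))       # True
```

The structural questions (is anything squashed to zero? what does the map do to space?) replace the step-by-step recipe entirely.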

A major problem I can see is that the more interesting a problem becomes, the harder it is to visualize. You might visualize a system of linear equations when it is 3-dimensional. But try visualizing a system with 1000 dimensions: it breaks down very easily. In the end, my opinion is that one is simply better off learning to think critically and in the abstract.

Again just my opinion - those who say they are better at visual problems often just need to practice more without the crutch.

A big part of abstraction, for me, is bringing problems into something I can visualize. This reduction itself works better if I have visual intuition. I suppose linear systems are a simple case, since their functional behavior in finite dimensions can pretty much all be mapped into 3D (I guess 4D would be a little richer with double complex-eigenvalue pairs, but that's just two 2D rotations). I don't claim to be a great problem solver, but I've gotten halfway through college so far with pretty good grades without changing how visually I think.

Learning how to deal with two dimensions, where visualizations are useful, is the first step to learning how to deal with 100 dimensions. Crutches are useful for getting started.

But maybe our intuition is wrong here:

"Hundhausen and colleagues found that how the visualizations were used matters a lot. For example, using visualizations in lecture demonstration had little impact on student learning. But having students build their own visualizations had significant impact on those students’ learning." p.118

Oram, Andrew, and Greg Wilson. 2011. Making Software: What Really Works, and Why We Believe It. Farnham; Cambridge: O’Reilly.

The Pharo guys are working on that:


Starts slow, but gets really interesting halfway in (disclaimer: I'm a biased interaction designer in this regard -- I love this stuff).

Pharo might actually become a very good fit for Victor's idea of a big-screen "seeing room" debugging environment, now that I think of it -- combined with some of Victor's ideas about drawing dynamic visualisations[1], it would probably be a great environment for creating tools on the fly, and the "everything is an object" model fits the tinkering mentality of the maker space.

[1] https://vimeo.com/66085662

Does Pharo support anything more than the standard hot code swapping and live object (but not code) manipulation?

If you're familiar with Smalltalk, skip the first twenty minutes of that first video I shared and then stick around for at least fifteen minutes; that's where the customizable views are explained. I think that answers your question -- although I don't know how innovative this is if you take into account all the alternative UIs that failed. However, the way it is being implemented in Pharo really appeals to me, and I can see it working really well in this "visual debugging seeing-room environment thingy" Bret Victor wants us to aim for.

> I have a friend however that shudders every time I mention making programming more visual.

Whenever someone mentions making _programming_ more visual, I just get this feeling that I don't really understand what they mean. The act of programming is just text to me. Doing simulations, measurements and whatnot can be visual, but the only way I can see "visual programming" is clunky and cumbersome.

I don't know if I'm just misunderstanding people, but the composition of code doesn't need to be anything else than text-based in my opinion.

Edit: I have seen some of Bret Victor's tools, and his tool for visualizing change in game programming is very cool and I'm sure very helpful, but that is a very, very specific thing, and whenever you take it out of that context it becomes much less interesting and helpful.

I'm not advocating just giving up on the whole idea, but I feel like people are trying to generalize things that they've only imagined for very specific purposes.

That bias came from necessity: engineering work is mostly invisible, and you can't completely generalize every concept to fit every project. You need to put in extra work to build a visualization that responds to the state of the system, and the concepts you are visualizing might only be relevant to the project at hand. Despite these challenges, we can visualize certain general concepts that apply to a wide group of systems -- but engineers only do a good-enough job, and visual designers or UX experts are rarely involved. At that point it's a cultural issue, and that's why I'm grateful Bret Victor is around to advocate better-designed tools for engineers.

There are UX designers who focus on high-investment tools... at Autodesk, for example. Also, high-investment tools are more difficult to change, given... well... the investments made in them by the user base. Also, be careful to distinguish between visual design and UX; those aren't often the same people outside of the web world. Heck, data visualization people often aren't visual designers.

It blows my mind that Bret keeps giving talks in public and sharing his ideas for free when pretty much each of them could have been used as a startup pitch in return for likely investment. But I guess he's more interested in inspiring others than just committing to one idea for years. I'm glad we have him around.

This idealism appears to be one reason why he left Apple:


That's… horrible. It feels like he's under a gag order. I wager this stuff could be useful to all of us, but Apple just clings to this "Intellectual Property" like, like… well, like any corporation.


He worked for a company out of his own will, and that company paid him in exchange for his work, with the full understanding that all ownership of the work would go to them. I mean, it's not even as if Apple hides that -- it's one thing they tell you OVER and OVER when you interview with them.

He knew exactly what he was getting into when he accepted to work for them (and did so for many years), and he definitely doesn't seem to mind what he got out of it (salary + the ability to call himself an "ex-Apple employee" and extract the social proof/appeal to authority that comes with it).

I love Bret's work, but this page on his website is very distasteful and comes across as fairly petty.

> He worked for a company out of his own will

Did he?

I understand a guy like Bret Victor has more options than most people, but think of the statistics for a second. Most of us work for corporations. The only real choice here is which corporation. (There are other ways, but they often amount to helping corporations, or building your own. That doesn't exactly solve the problem.)

At the end of the day, one's gotta eat. Are you willing to starve to save your own soul? Neither am I. But I'd like to have my cake and eat it too anyway.


Background: I side with Noam Chomsky and most classical anarchists here: corporations are a systemic problem: they just concentrate too much power. We should dissolve them. As for the problems they solve (like shiny graphic cards), I'm sure we can think of a better way than corporate capitalism (no, I'm not thinking of command economies that are often associated with "socialism" —yet are anything but).

Maybe he once had a hope more ideas would see the light of day?

Maybe that hope was crushed and he realized he made a mistake.

The contract theory of morality does not handle information asymmetry well. Most organizations preach about the impact one will have, not exclusively about compensation. Certainly Apple would have, and certainly Bret would have been looking for impact. Bret was at the informational disadvantage there.

In general, it seems only fair that if employers decide not to do anything with creative works, those works should at least have some life raft to escape on.

   he got out of it (salary ...
This is an argument I've heard many employers use that is factually incorrect.

Every month, employees freely give their time to a company. The company is in debt to the employee. At the end of that month there is a financial reckoning and the debt is repaid. The moment the employee comes to work the following day, the company is in debt to the employee again.

I cannot stress this enough - for the whole month it is the Company that is in debt and should call itself lucky, not the employee.

It is employees that keep the economy going -- not employers -- as, at the end of each month, billions have been lent to companies by those who work there.

Not necessarily; that depends on one's particular arrangements. I, for example, get paid approximately mid-month, so I start out indebted to the company each month and the balance gradually shifts over time.

I believe only the USA pays twice a month; elsewhere you usually only get paid monthly.

I'm an exempt employee in the US and get paid monthly. The classified employees at my organization get paid weekly.

Is this your own opinion, or are there books/articles/etc. backing it? I'm asking because the stance you depict seems really correct, and I'd like to find out more about it if possible.

There are exceptions, but most people are indeed paid like this, at the end of the month (or the end of the week). But this is more an outlook than a hard fact.

Another outlook is that the employee should count herself lucky to be allowed to work for a company. The company provides money to the employee, which means it feeds her. If it didn't, the employee would have to find another benevolent corporation, or starve. Now, there is still the occasional maverick who tries to start her own business, but mostly, employees rely on companies to eat.

Yet another outlook is that the company doesn't feed the employee; the customers of the company feed the employee. Which is why the customer is king: piss him off, and you won't see him again. Do that too much, and your employees will starve. It will be your fault, and maybe the fault of some of your employees. It's certainly not the fault of the customer, who has every right to choose what to consume.

I personally find these three views (the one you answered to, and the two I mentioned) a bit extreme. But they're not strawmen either; some people really do believe them. In my opinion, this is the sign of a deep problem. I don't know what exactly, but something is off with the current customer/employer/employee arrangement.

I don't think it's distasteful. He lays the blame where appropriate: it was his own "terrible mistake".

Nah, the companies pushing these contracts are the ones exhibiting distasteful behavior. That page is very tasteful.

> He knew exactly what he was getting into

> he definitely doesn't seem to mind what he got out of it

you might have glossed over the second entry in the FAQ, where he calls it a "mistake".

sounds to me like he made an error in judgement that, in hindsight, he would have liked to make differently.

IMO, in such circumstances it seems a bit cold to side with the corporation for the sole reason that they happen to be legally in the right. but maybe that's just me, being all compassionate and silly.

The really sad thing is how many great ideas must get forgotten because they were developed in secret and didn't align with the commercial interests of the company paying for it.

What blows my mind is how little of what he's put forth has led to others creating startups out of it.

...What blows my mind more, though, is that he keeps giving talks and hasn't open sourced any of his software on GitHub, like he said he would. I doubt he's working on some big product that puts all his software to use. It's all going to waste, and it could have helped propel the visions he's set forth. It makes zero sense.

Both the Light Table team (http://www.lighttable.com/2014/06/10/light-table-and-apples-...) and the Apple Swift lead (Playgrounds feature: http://nondot.org/sabre/) have explicitly cited Bret as an important influence — his ideas are certainly not going to waste.

It is not going to waste. Deep ideas are more durable than either code or startups, but they often take longer to have an impact.

As an example, "Inventing on Principle" has inspired many people and companies to think more deeply about development environments, and to try implementing more speculative ideas. A skeptic can point out that much of that consequent work is of mixed quality, but that's just Sturgeon's Law. What matters is that more people will think better thoughts about more important problems. And that's certainly occurring as a result of Victor's talks and essays.

I decided to quit my job and start Webflow.com on the night that I saw Bret's Inventing on Principle video [1] and read his Magic Ink paper [2]. Based on lots of conversations I've had with other entrepreneurs, I'd say that a lot more people are inspired by Bret's work than you might think...

[1] https://vimeo.com/36579366

[2] http://worrydream.com/MagicInk/

Tangle is really old, and it's his least innovative stuff at this point. It hardly counts, my friend. Bret needs to stop teasing us. At least deliver the code for the one he said he would put on GitHub.

I think it's very obvious what needs to happen to software development right now. I'm straight-up doing it -- my startup is bringing to life much of what Bret put forth, in JavaScript, where it should exist (on the web, not Objective-C and other technologies). He could help a lot by sharing his code, no matter what language it's in.

His stuff could change the entire landscape, and now. I get it -- we're all stupid developers doing the status quo who can't think for ourselves and invent something new. I get his message. And yes, that's a big part of his message, whether he knows it or not. But the point is that the live-coding and coding-observability stuff he's put forth needs to happen now, and a lot of it -- it will enhance coding by several orders of magnitude. So I don't need any more talks about how we must think. We simply need to make what he put forth. We need to get his gospel done already, so he himself can see what it actually looks like and move to the next level.

My guess is he doesn't want to share the code because of some sort of philosophy that we need to do it ourselves, and more people need to be true thinkers and inventors like him, thinking outside the box. Fuck the philosophy. It's kinda arrogant. Release the code if ur not making ur own company out of it. I used to think he was starting his own company, which is why I never spoke up. But my latest prediction (after seeing this maker spaces drivel) is that he certainly is not.

...And it's drivel because he's making the whole point about observability with regard to hardware startups, where it's way less of a big deal than in coding. Like, his breakthrough moment was when he pointed out how all us coders were so stupid for going along with how current coding tools work, which are totally unobservable. That is especially relevant in the abstract world of coding, whereas it's way less profound when it comes to concrete hardware objects, which are observable by nature. Not that it doesn't apply -- it's just not at the level of breakthrough described in all the essays on his site.

ps. ur talkin to someone who's read every single essay on his site, cherished his every word. this maker space stuff is crap. he's fallin off, side tracked, whatever. I'd like to see him truly help get some of his ideas executed. It would take zero energy on his part--release the code is basically all im asking.

Have you tried asking yourself why Bret would not want to open source his demos? Can you think of any other reasons that don't pre-suppose he is "arrogant"?

Yea, no good ones. He's OCD and not happy with his code. He has a lame philosophy that we must make it on our own and shouldn't have help, i.e. that providing the code will detract from it. ...He's the one saying he would. I get the sense he's just disorganized and all over the place, hyper-focused on his latest discovery du jour, i.e. there is no grand plan. I used to hope he was building a commercial company, but I just doubt he is. I want answers. If u people weren't so busy kissing his ass u would too. I love Bret Victor -- I'm holding him accountable to the greatness he is.

It seems your response says a lot more about your thinking than it does about Bret's. Perhaps you might try taking the view that Bret has really good reasons for doing what he's doing and try to figure out what they are, rather than presuming what he's doing is not for good reasons.

Yea fellas, I know this, and it's great. Love Light Table, etc., etc. You're talking to someone who's read every single character on his site several times over. He needs to release his damn code. He could be many times more helpful that way. Nobody's saying his talks and research aren't profoundly meaningful -- but he could be more so if he released the code too. It's very hard to do a lot of the things he's hinted at. Code would help majorly.

Yeah, I actually agree. Him just open sourcing the Stop Drawing Dead Fish tool for instance would move things forward a lot -- just to see how the internal data structures & algos work.

I love Bret Victor's talks, blog posts, etc. They're super inspiring.

2 things came to mind though.

1. It seems, possibly, the exact wrong time to make rooms with giant displays. With things like Google Glass and Oculus Rift as first gen (2nd?) VR/AR you could project all of that info virtually and cheaply and be able to have all the visualization he describes wherever you are, not just at a makerspace that only a few people can use at a time.

2. I'm always super inspired by Bret's visualizations, but when I actually try to figure out how they'd be implemented, I'm clearly not smart enough to figure out how that would happen.

In this example in particular, he shows a graph toward the end where the system tries every setting and graphs the results so it's easy to pick out the best setting. How would that happen? How does the system know what "good" is? It seems to me it can't know that. You'd have to program it, which in itself would be pretty hard. Worse, most systems have not just one adjustment but many. With just a few, there'd be tens of thousands of variations/combinations to try in order to figure out the "best".

I'm not saying we can't get there. Maybe the first step is building a framework that makes it easy to create systems like that, with various kinds of visualizers, analysers, time-recording, searching features, etc., and maybe somewhere along the way we'd figure out how to automate more of it.

I'd love to help work on such a system.

> the system tries every setting and graphs the results so it's easy to pick out the best setting. How would that happen? How does the system know what "good" is?

The research field of parameter tuning tries to answer that question. The field is more focused on optimizing algorithm parameters, but it could be applied to the physical world of robots etc., if it is feasible to automatically repeat the experiment a few dozen to around a hundred times.

If we did as Bret proposed and logged each experiment, we would already have done some "probing" of the parameter space. This, in turn, would allow us to build a statistical model of the phenomenon and then minimize/maximize on that. The resulting parameter configuration would then be evaluated, the model updated, and the process repeated until a satisfactory level of performance was reached.

See, for example, the recent work from Hutter et al. [1], where they use random forests with parameter tuning to make parameter "goodness" predictions (in order to reduce the actual experiments on the target algorithm/robot/whatever).

[1] http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
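A minimal sketch of that probe/model/optimize/evaluate loop, in pure Python. Everything here is a stand-in: `run_experiment` pretends to be one physical run of the robot (a hypothetical objective that peaks at gain 0.6), and the surrogate is a deliberately crude nearest-neighbor predictor where SMAC uses random forests.

```python
import random

def run_experiment(gain):
    # Hypothetical objective standing in for one real run of the robot:
    # performance peaks at gain = 0.6.
    return 1.0 - (gain - 0.6) ** 2

def predict(history, gain):
    # Crude surrogate model: predict the score of the nearest tried gain.
    nearest = min(history, key=lambda h: abs(h[0] - gain))
    return nearest[1]

def tune(budget=20, seed=0):
    rng = random.Random(seed)
    # Initial "probing" of the parameter space -- what the logged runs give us.
    history = [(g, run_experiment(g)) for g in (0.0, 0.5, 1.0)]
    for _ in range(budget):
        # Optimize on the model: propose candidates, keep the most promising.
        candidates = [rng.random() for _ in range(50)]
        best_guess = max(candidates, key=lambda g: predict(history, g))
        # Evaluate it for real, which also gives the model a new data point.
        history.append((best_guess, run_experiment(best_guess)))
    return max(history, key=lambda h: h[1])

best_gain, best_score = tune()
print(best_gain, best_score)  # should land near the optimum at gain = 0.6
```

The point is the shape of the loop, not the particular model: each real experiment both answers "how good was this setting?" and improves the model that proposes the next one.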

I just wanted to make a couple of comments. 1. I think that when something is new, like Google Glass or Oculus Rift, we (as nerds) tend to think it's the future because it hasn't been possible before -- before we've quite worked out when it's most applicable. Giant displays are awesome when you're brainstorming a topic with people in the room, which is what maker spaces seem to be about.

I can see glass being good for things like surgery, where you need to concentrate your attention on something, but would like some extra data to be easily available.

VR might be a substitute for live brainstorming sessions, and it might allow you to visualise in new ways. But latency is going to be killer across remote locations, and that's a big challenge in terms of infrastructure.

2. I'd imagine as these types of techniques become commonplace, we'll have to learn more stats.

You could also apply clustering and neural nets to a lot of data sets, but yeah - maybe we'll all have to become statisticians.

> 1. It seems, possibly, the exact wrong time to make rooms with giant displays. With things like Google Glass and Oculus Rift as first gen (2nd?) VR/AR you could project all of that info virtually and cheaply and be able to have all the visualization he describes wherever you are, not just at a makerspace that only a few people can use at a time.

I think the reason he wants giant displays is the reason we have giant shared displays in control rooms, too: so you can see what others are looking at. Don't forget maker spaces are supposed to be communal spaces where you share ideas. You can learn a lot just by watching someone else use a tool the right way. And then there's of course collaboration, where shared displays are even more important.

I think it would be some sort of teaching the system what "good" is. Let's say you are making his little robot. You want it to follow the light. So you pick a variable (the light sensor reading) and optimize for a certain setting or direction (this reading as high as possible). The tool tracks that reading over the course of each try and picks the "best" one. This will have complexities, of course: do you want the highest single reading, or the highest mean or median over the course of the test? Etc. But it's definitely solvable, as he states. We managed to build 3D printers and space shuttles; we can build tools like this.

I don't think he was saying that the computer itself decides what the best setting is, but rather that it tests a range of options (given by the user) and presents the results in aggregate (using reductions specified by the user) so that the user can more easily make the decision.

This specification of the ranges also solves the problem of optimizing multiple parameters simultaneously — people usually begin with hunches about what the optima are, which significantly limits the number of possibilities the system will have to test.
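Under that reading, the "try every setting" demo is just a sweep over user-given ranges plus a user-chosen reduction over repeats. A small sketch -- all names and the stand-in objective are hypothetical, with `run_trial` standing in for one actual run of the robot:

```python
import statistics

def run_trial(threshold, turn_rate):
    # Stand-in for one run of the robot; pretend higher is better,
    # with the true optimum at threshold=40, turn_rate=0.3.
    return 100 - (threshold - 40) ** 2 - 50 * (turn_rate - 0.3) ** 2

def sweep(thresholds, turn_rates, repeats=3):
    """Try every combination in the user-given ranges; aggregate the repeats."""
    results = {}
    for t in thresholds:
        for r in turn_rates:
            scores = [run_trial(t, r) for _ in range(repeats)]
            # The reduction (mean here) is something the user would choose.
            results[(t, r)] = statistics.mean(scores)
    return results

# The ranges come from the user's hunches, which keeps the sweep small.
results = sweep(thresholds=range(30, 51, 5), turn_rates=[0.1, 0.3, 0.5])
best = max(results, key=results.get)
print(best)  # (40, 0.3) for this stand-in objective
```

The system never decides what "good" means; it just runs the combinations and hands back the aggregate for a human (or a plot) to judge.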

There are two big problems that I see with the current generation of VR goggles for applications like this. The first is bandwidth: the Oculus's screen had too low a resolution for densely packed information and text; this may have been solved in the latest version, but I haven't seen it in person. Google Glass has the problem because of its really small size; trying to access and manipulate much information on that interface would be extremely slow and cumbersome.

The second is with full VR goggles: right now there's no really good way to use them and interact with anything outside the VR environment.

For your other point about the graph: the system doesn't have to pick out the best on its own. With the possible values and the results displayed, the user could pick which one achieves their goals best. For his light-following robot, though, there's a pretty easy way to evaluate the parameters -- how far is the robot from the light? That's a simple function and could easily be programmed.

The integration of the whole room and how you get the robot to automatically do many runs with different parameters, for the viewing across possibilities, is where I think a lot of the difficulty lies. To do that with a small robot you have to have a lot of things automated: robot repositioning, light movement, data collection on the robot, etc.

The new Oculus and the consumer version have high enough resolution for text rendering, though there are obviously artifacts from rendering text in 3D whenever it's anything but screen-aligned and pixel-snapped.

Additionally, there are already plans for a front-facing camera (that could do pass-through), both because you're otherwise blind while wearing it and for the potential for hand/peripheral tracking in addition to augmented reality.

Bonus: I hope they just go all out and make/purchase custom hardware for structure.io / Project Tango-style structured-light real-time 3D scanning. Would be so cool!

I think displays are, for now, the better option. AR and VR are still far too much in their infancy to be as intuitively useful as a room full of displays is to NASA or a power-grid operations team (per his example). That will change with time, of course, but for something we can do now and over the next few years, I think displays and projectors are the way to go. In a way, the projectors over the worktable he shows are (in effect) a limited form of augmented reality.

I guess the question is how many years? How many years until we have the libraries, servers, sensors, visualizers, and other systems in place to make this a reality?

Assuming it's 5 years (probably an overestimate), where will FB Oculus be? If it happens to have become FB Oculus Glass, something you wear that can augment the world, this system of Bret's will come to fruition just as these displays are no longer needed.

I guess that's irrelevant though as switching displays will be trivial. The real work is in the software

The bigger problem I wondered about is that he glossed over how to "reset" the robot to a set initial condition for each run.

He had some fade-to-black effect in his presentation, after which the robot magically appears at its starting point ... yeah. You're basically going to need a robot arm if you want to automate that bit.

Every time I realize I'm guessing about (rather than directly seeing) the behavior of my code I think of Bret's talks. I never actually improve my workflow, but at least now I'm angry about it!

Interesting. I frequently think back to his talks and use them as a framework to guide my approach, especially when debugging. I think about what I can see and what I can't, and try to address what I can't see. This process has broken through many bugs that seemed challenging until they became visible.


I think debuggers are underappreciated by open-source languages (as in, debugging is not THE priority). An approach I am forced to use is heavy log statements and a DEBUG flag, so that I can have the program show me what it's up to.
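A minimal sketch of that pattern in Python, using the standard `logging` module with a DEBUG environment variable (the `step` function and `robot` logger name are just illustrative):

```python
import logging
import os

# Toggle verbosity without touching the code, e.g. `DEBUG=1 python app.py`
DEBUG = os.environ.get("DEBUG") == "1"

logging.basicConfig(
    level=logging.DEBUG if DEBUG else logging.WARNING,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("robot")

def step(position, velocity):
    """One simulation step; the state is logged only when DEBUG is on."""
    log.debug("step: position=%s velocity=%s", position, velocity)
    return position + velocity

if __name__ == "__main__":
    pos = 0
    for _ in range(3):
        pos = step(pos, 2)
    print(pos)  # prints 6
```

The nice part is that the log statements stay in place permanently, so the program can always "show you what it's up to" on demand without a recompile.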

I think Bret's ideas are worth pondering. The only issue with the philosophical idea of time, I find, is that it's an easy rabbit hole to fall into.

An interesting paper tangentially related to this: http://www.vpri.org/pdf/tr2011001_final_worlds.pdf

In other words... a meatspace debugger? Cool idea, but I don't quite buy the comparison to "spaces laden with sensors and visualizations" like the NASA control center, Large Hadron Collider, etc. All of those spaces revolve around monitoring, not the design/making process. In a similar vein, in my field of computational science, heaps of money have been invested in spaces for data exploration/visualization [1]; unfortunately, they are essentially useless for the scientific process.

[1]: http://en.wikipedia.org/wiki/Cave_automatic_virtual_environm...

> All of those spaces revolve around monitoring

It's not only monitoring, because it's not passive: they interact with the system and make changes on the fly, so it's making in a sense too.

I agree. The big-screen control-center examples he gave are for monitoring a mission in real time. The Hadron Collider scientists analyze the data later on ordinary-size monitors. It's the systematic data collection that is important.

Some of this is extremely similar to Jun Kato's research.


More specifically see phybots:


Kato leverages the overhead camera trick in this system, though in a bit different way. See "A Toolkit for Easy Development of Mobile Robot Applications with Visual Markers and a Ceiling Camera:"


Thanks for the excellent link. I only took a quick look, so maybe the information is somewhere on the site, but do you know if this person (or others) are doing similar work for robots in 3d?

These kind of things would be really great for science labs also.

Let's say you're doing some medical research, growing some cell cultures and adding compounds to them to see what happens. Then something weird happens to some of the cell cultures, and you don't know exactly what caused it. Perhaps that thing was really an important scientific discovery waiting to happen, but you missed it, because you didn't have all the data.

The process is normally recorded with a lab diary, where you write down everything deemed important. The problem is, you're not going to notice everything, and there are also a lot of things that you can't see without more sensors than just your eyes.

The system Bret describes here is basically an automated lab diary. With enough sensors it could record much more data, much more accurately than a person, and it would have a way to query the actual data rather than having to either manually browse through pages of text or search through it with just a basic full-text search engine.

A problem with many scientific experiments is that you might have a lot of measuring equipment and sensors for the thing you are doing an experiment on, but you don't have the same thing for the experiment itself, to easily be able to debug the process and to see where something went right or wrong. Why was one lab able to reproduce an experiment, but another couldn't? These kinds of questions can be very difficult and time-consuming to answer.
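A toy sketch of the "automated lab diary" idea: timestamped sensor readings go into a queryable store, so that when something weird happens you can ask what every sensor saw around that moment. (The `record` and `readings_between` helpers and the incubator numbers are all made up for illustration; a real setup would stream from actual instruments.)

```python
import sqlite3
import time

# Every sensor reading is stored with a timestamp, so the experiment
# itself, not just its subject, can be queried after the fact.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (t REAL, sensor TEXT, value REAL)")

def record(sensor, value, t=None):
    """Append one timestamped reading to the diary."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (t if t is not None else time.time(), sensor, value))

def readings_between(sensor, t0, t1):
    """What did this sensor see around the time something weird happened?"""
    return db.execute(
        "SELECT t, value FROM readings "
        "WHERE sensor = ? AND t BETWEEN ? AND ? ORDER BY t",
        (sensor, t0, t1)).fetchall()

record("incubator_temp", 37.0, t=0.0)
record("incubator_temp", 39.5, t=1.0)  # the anomaly a paper diary would miss
record("incubator_temp", 37.1, t=2.0)
print(readings_between("incubator_temp", 0.5, 1.5))  # [(1.0, 39.5)]
```

Even this trivial version answers a question a written diary can't: "what was the temperature doing between t=0.5 and t=1.5?"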

BTW, for good reading material on control rooms, look on Google Scholar for papers by Christian Heath and Paul Luff. They're very thorough in their analysis of how people in control rooms communicate and "spontaneously" synchronise their actions.

I think spaces like this would do well to incorporate projected augmented reality à la CastAR: https://www.youtube.com/watch?v=GpmKq_qg3Tk

You could collaborate, sharing the same view, or each individual could project different views, or mix and match.

I agree with Bret Victor here. I already have something similar to what he is proposing. Not so great, but my prototype is real and works. You can make one of these using "inexpensive" TVs for most of the room: cheap cameras with HDMI and framegrabbers, a PC with CUDA/OpenCL cards. Arduino sensors work anywhere with all OSes and are super easy to use, albeit not very efficient.

My experience with years of embedded programming is that NO HUMAN BEING is made for working with the cold, brainless machine or metal if you don't visualize your data.

Even the person who tells you she likes doing it can't work on it for long periods of time without burning out.

It is like climbing above 7,000 meters of altitude. Humans can survive those conditions for some time, but they deplete internal resources fast.

Some of the software shown in the first minute: http://vimeo.com/66085662

Some pretty tools in there.

He just keeps knocking stuff out of the park.

no, he's not. this is his weakest presentation yet. he was on, but now he's fallen off. it makes no sense that he's gone in this direction and let last year's software ideas wither on the vine. He should propel last year's ideas forward by at least releasing the stuff he said he'd open source on GitHub. Instead he hasn't done that, nor has he made his own commercial company (which would be a perfectly understandable route). Instead he's giving lower-quality talks that regurgitate material about observability, material that was far more profound when he shared it in relation to software, which by nature, unlike hardware products, is unobservable.

You should hesitate for a second; your arguments favor exploitation rather than exploration. Bret's work aims to move beyond "the tiny rectangle" of computer screens, and companies are nearly decade-scale commitments. Why not continue to produce great research before deciding upon the idea that's "the one"?

I realize this is antithetical to the prevailing theory of lean startups, but, from a deep research perspective, that is exactly why committing to support an open source project or a company is so dangerous. Soon you would have people depending upon you, and the freedom to explore vanishes.

I think with things like Swift playgrounds and Light Table, which have been released due to a direct influence of Bret's work, maybe he's in exactly the right place? By discussing the broader concepts and getting people excited, he's probably getting more done than he could with a single company.

Bret is your typical genius visionary. He's bored by the thing that excited him last year.

nobody said don't do these talks. I'm saying just release the damn code along with it. like I said, it makes zero sense if he's trying to help us all. The devil is in the details with coding and creation; the details of his partial implementations could help us all.

I am totally in favor of good tools with good visual representations but those almost always have to be handcrafted for every specific problem. Which is probably why Bret has never delivered anything useful.

And if you're going to talk about ideas and inspiration: Light Table does nothing that Emacs didn't do 20 years ago, except a little more prettily.

A theory that I have been developing that might be a basis for understanding the possibility of Seeing Spaces is called Schemas Theory.

See http://SchemaTheory.net for a draft presentation that is still in work. Audios are still in production for the tutorial.

Other papers on Schemas Theory are at https://independent.academia.edu/KentPalmer and http://emergentdesign.net and http://archonic.net

A good book on Schemas is Umberto Eco Kant and the Platypus.

Basically, schemas theory tells us what it is possible to see, and also gives us the intelligible templates for our designs.


I think Oculus technology would allow people to do all of this virtually with the physical portion being merely props. This would be a lot cheaper than doing everything for real.

I think this is how the NSA became the NSA as we know it today. When your task is to prevent terrorism, you need to see. You need to see in time and detect patterns. So you need to store as much data as possible.

So it's good to stick to some boundaries. In the example of the robot: you could measure room temperature, because maybe the sensors are reacting to it. Or you could measure the number of people in the room, because the sensors could be reacting to that. Heck, maybe the sensors react differently to different people, so track their faces and store those. Well, maybe the sensors are sensitive to somebody's smell, so track that too.

There are limits to what is useful to track.

After a while of reading the replies, Iron Man came to mind.

Bret Victor is the Leonardo da Vinci of the age: a curator, assembler and presenter of the great ideas of our time.

Augmented Breality

