The challenge with both the maker space and much of the visual learning and programming material he has done previously is that each is incredibly time-consuming to adapt to each new project. In the real world, even similar tasks within projects in the same domain often have enough subtle differences that reuse is impossible or very costly.
That isn't to say these problems are insurmountable, but maybe much of the focus needs to be on meta-tooling that can accelerate the work of experts building these purpose-built environments (as opposed to making generic tooling).
That seemed to me to be a big unresolved tension in his comparison between command centers and physical workshops. Command centers still just have lots of general use multi-purpose workstations. TV stations do much better on this point, interestingly.
While they all allow for seeing, I wonder what kinds of improvements we could have with purpose-built HID.
This is fine, except that it limits those who need to tinker in order to find out how those concepts work. When the elements are visually recognizable and physically manipulable, you can tinker without having to hold the entire chain of concepts in your mind. It reduces the load and increases the likelihood of 'playing around'.
I hope some day more of Victor's ideas can be realized through the understanding that visualizing processes allows us to use more of our brain to design and develop our products -- not to mention stumble upon and explore unexpected outcomes.
I mean, if I'm not visualizing something, I can perhaps find some solution in a logical way by following some guidelines -- much like following a recipe to solve an equation, or doing trial and error to solve an algebraic problem. But critically, I can't create this way. I may stumble onto something useful, but it's an entirely different process from creativity -- its efficiency is so much lower that it's qualitatively different.
But my difficulty there is not fundamental, I think. I believe you can be creative purely 'algebraically', though I have no experience with it -- and I do believe that some people have, working that way, the same efficiency I have when I think visually, and perhaps stumble a little more when trying to visualize things themselves.
I do think this diversity is quite good, but that's missing the point. My point is that Bret's tools shouldn't be universally essential, but are probably universally beneficial. And for some people, like me, they'd be simply enabling.
To give an example: when I see a system of linear equations, I don't think of a bunch of steps to solve it, or of the number of cases the solutions may take in terms of constraints on variables and so on. I think of the Image hyperplane, which can be pictured as a plane in 3D, and the Kernel, which may be, for example, a line not on the Image. Then I wonder whether the Kernel is non-null, what funny things this matrix is doing to the vectors (are they rotating, contracting, and so on), or what the invariant subspaces are. I can answer most questions one could ask about such a system, but in a distinct way -- I'm not sure if it's more efficient or not (and that may depend on the task).
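That picture translates directly into computation, for what it's worth. A small sketch (the matrix is made up for illustration) computing the Kernel and the dimension of the Image with NumPy/SciPy:

```python
# Sketch of the "Image plane / Kernel line" picture, with an invented
# 3x3 matrix whose second row is a multiple of the first.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # dimension of the Image
kernel = null_space(A)            # orthonormal basis of the Kernel

# Rank-nullity: rank + kernel.shape[1] == 3.
# Here the Image is a plane (rank 2) and the Kernel is a line.
```

For this matrix the Kernel is the line spanned by (-1, -1, 1), which you can check directly: A @ [-1, -1, 1] is the zero vector.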
Again just my opinion - those who say they are better at visual problems often just need to practice more without the crutch.
"Hundhausen and colleagues found that how the visualizations were used matters a lot. For example, using visualizations in lecture demonstration had little impact on student learning. But having students build their own visualizations had significant impact on those students’ learning." p.118
Oram, Andrew, and Greg Wilson. 2011. Making Software: What Really Works, and Why We Believe It. Farnham; Cambridge: O’Reilly.
Starts slow, but gets really interesting (disclaimer: I'm a biased interaction designer in this regard, I love this stuff) halfway in.
Pharo might actually become a very good fit for Victor's idea of a big-screen "seeing room" debugging environment, now that I think of it. Combined with some of Victor's ideas about drawing dynamic visualisations, it would probably be a great environment for creating tools on the fly, and the "everything is an object" model fits the tinkering mentality of the maker space.
Whenever someone mentions making _programming_ more visual I just get this feeling I don't really understand what they mean. The act of programming is just text to me. Doing simulations, measurements and what not can be visual, but the only way I can see "visual programming" is clunky and cumbersome.
I don't know if I'm just misunderstanding people, but the composition of code doesn't need to be anything else than text-based in my opinion.
Edit: I have seen some of Bret Victor's tools and his tool for visualizing change in game programming is very cool and I'm sure very helpful, but that is a very, very specific thing and whenever you take it out of that context it becomes so much less interesting and helpful.
I'm not advocating just giving up on the whole idea, but I feel like people are trying to generalize things that they've only imagined for very specific purposes.
He worked for the company of his own free will, and that company paid him in exchange for his work, with the full understanding that all ownership of the work would go to them. I mean, it's not even as if Apple hides that -- it's one thing they tell you OVER and OVER when you interview with them.
He knew exactly what he was getting into when he accepted to work for them (and did so for many years), and he definitely doesn't seem to mind what he got out of it (salary + the ability to call himself an "ex-Apple employee" and extract the social proof/appeal to authority that comes with it).
I love Bret's work, but this page on his website is very distasteful and comes across as fairly petty.
I understand a guy like Bret Victor has more options than most people, but think of the statistics for a second. Most of us work for corporations. The only real choice here is which corporation. (There are other ways, but they often amount to helping corporations, or building your own. That doesn't exactly solve the problem.)
At the end of the day, one's gotta eat. Are you willing to starve to save your own soul? Neither am I. But I'd like to have my cake and eat it too anyway.
Background: I side with Noam Chomsky and most classical anarchists here: corporations are a systemic problem: they just concentrate too much power. We should dissolve them. As for the problems they solve (like shiny graphic cards), I'm sure we can think of a better way than corporate capitalism (no, I'm not thinking of command economies that are often associated with "socialism" —yet are anything but).
Maybe that hope was crushed and he realized he made a mistake.
The contract theory of morality does not deal well with asymmetry of information. Most organizations preach about the impact one will have, not exclusively about compensation. Certainly Apple would have, and certainly Bret would have been looking for impact. Bret was at the informational disadvantage there.
In general, it seems that if employers decide not to do anything with creative works, the creators should at least have some life raft they can escape on.
> he got out of it (salary ...
Every month, employees freely give their time to a company. The company is in debt to the employee. At the end of that month there is a financial reckoning and the debt is repaid. The moment the employee comes to work the following day, the company is in debt to the employee again.
I cannot stress this enough - for the whole month it is the Company that is in debt and should call itself lucky, not the employee.
It is employees that keep the economy going -- not employers -- as, at the end of each month, billions have been lent to companies by those who work there.
Another outlook is: the employee should count herself lucky to be authorized to work for a company. The company provides money to the employee, which means it feeds her. If it didn't, the employee would have to find another benevolent corporation, or starve. Now, there is still the occasional maverick who tries to start her own business, but mostly, employees rely on companies to eat.
Yet another outlook is: the company doesn't feed the employee. The customers of the company feed the employee. Which is why the customer is king: piss him off, and you won't see him again. Do that too much, and your employees will starve. It will be your fault, and maybe the fault of some of your employees. It's certainly not the fault of the customer, who has every right to choose what to consume.
I personally find these three views (the one you answered, and the two I mentioned) a bit extreme. But they're not strawmen either. Some people really do believe them. In my opinion, this is the sign of a deep problem. I don't know what exactly, but something is off with the current customer/employer/employee arrangement.
> he definitely doesn't seem to mind what he got out of it
you might have glossed over the second entry in the FAQ, where he calls it a "mistake".
sounds to me like he made an error in judgement that, in hindsight, he would have made differently.
IMO, in such circumstances it seems a bit cold to side with the corporation for the sole reason that they happen to be legally in the right. but maybe that's just me, being all compassionate and silly.
...what blows my mind more though is he keeps giving talks and hasn't open sourced any of his software on github, like he said he would. i doubt he's working on some big product that puts all his software to use. it's all going to waste and could have helped propel the visions he's set forth. it makes zero sense.
As an example, "Inventing on Principle" has inspired many people and companies to think more deeply about development environments, and to try implementing more speculative ideas. A skeptic can point out that much of that consequent work is of mixed quality, but that's just Sturgeon's Law. What matters is that more people will think better thoughts about more important problems. And that's certainly occurring as a result of Victor's talks and essays.
His stuff could change the entire landscape, and now. I get it -- we're all stupid developers doing the status quo who can't think for ourselves and invent something new. I get his message. And yes, that's a big part of his message whether he knows it or not. But the point is that the stuff he's put forth for all the live coding and coding-observability work needs to happen now, and like a lot -- it will enhance coding by several orders of magnitude. So, I don't need any more talks about how we must think. We simply need to build what he put forth. We need to get his gospel done already, so he himself can see what it actually looks like and move to the next level.
My guess is he doesn't want to share the code because of some sort of philosophy that we need to do it ourselves, and more people need to be true thinkers and inventors like him, thinking outside the box. Fuck the philosophy. It's kinda arrogant. Release the code if ur not making ur own company out of it. I used to think he was starting his own company, which is why I never spoke up. But my latest prediction (after seeing this maker spaces drivel) is that he certainly is not.
...And it's drivel because he's making the whole point about observability regarding hardware startups, where it's way less of a big deal than in coding. Like, his breakthrough moment was when he pointed out how all us coders were so stupid for going along with how current coding tools work, which are totally unobservable. That is especially relevant in the abstract world of coding, whereas it is way less profound when it comes to concrete hardware objects, which by nature are observable. Not that it doesn't apply -- it's just not at the level of breakthrough described in all his essays on his site.
ps. ur talkin to someone who's read every single essay on his site, cherished his every word. this maker space stuff is crap. he's fallin off, side tracked, whatever. I'd like to see him truly help get some of his ideas executed. It would take zero energy on his part--release the code is basically all im asking.
2 things came to mind though.
1. It seems, possibly, the exact wrong time to make rooms with giant displays. With things like Google Glass and Oculus Rift as first gen (2nd?) VR/AR you could project all of that info virtually and cheaply and be able to have all the visualization he describes wherever you are, not just at a makerspace that only a few people can use at a time.
2. I'm always super inspired by Bret's visualizations, but when I actually try to figure out how they'd be implemented, I'm clearly not smart enough to figure out how that would happen.
In this example in particular, he shows a graph toward the end where the system tries every setting and graphs the results so it's easy to pick out the best setting. How would that happen? How does the system know what "good" is? It seems to me it can't know that. You'd have to program it, which in itself would be pretty hard. Worse, most systems have not just one adjustment but many. Just a few and there'd be tens of thousands of variations/combinations to try in order to figure out "best".
I'm not saying we can't get there. Maybe the first step is building a framework that would make it easy to make systems like that with various kinds of visualizers, analysers, time-recording, searching features etc, and maybe somewhere along the way we'd figure out how to automate more of it.
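To make the "framework" idea concrete, here's a minimal sketch of the try-every-setting sweep, where the user supplies the notion of "good" as a scoring function. The parameter names and the score are invented stand-ins; a real system would run the robot or experiment instead of calling a function:

```python
# Exhaustive parameter sweep: the framework enumerates combinations,
# the user defines what "good" means via a scoring function.
from itertools import product

param_ranges = {
    "gain": [0.1, 0.2, 0.3, 0.4],
    "threshold": [10, 20, 30],
}

def score(gain, threshold):
    # Hypothetical stand-in for a real measured outcome (higher = better).
    return -(gain - 0.3) ** 2 - (threshold - 20) ** 2 / 1000

results = [
    (dict(zip(param_ranges, combo)), score(*combo))
    for combo in product(*param_ranges.values())
]
best_params, best_score = max(results, key=lambda r: r[1])
# 4 x 3 = 12 combinations here; a few more knobs and this explodes,
# which is exactly the combinatorial problem described above.
```

The `results` list is also exactly what you'd hand to a visualizer to draw the kind of graph shown in the talk.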
I'd love to help work on such a system.
There is a research field, Parameter Tuning, where we try to answer that question. The field is more focused on optimizing algorithm parameters, but it could be applied to the physical world of robots etc., if it is feasible to automatically repeat the experiment a few dozen to around a hundred times.
If we did as Bret proposed, logging each experiment, we would already have done some "probing" of the parameter space. This, in turn, would allow us to build a statistical model of the phenomenon and then minimize/maximize on that. The resulting parameter configuration would then be evaluated, the model updated, and the process repeated until a satisfactory level of performance was reached.
See for example the recent works from Hutter et al., where they use Random Forests with parameter tuning to make parameter "goodness" predictions (in order to reduce the actual experiments on the target algorithm/robot/whatever).
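The probe/model/optimize/evaluate loop above can be sketched in a few lines. This is an illustrative toy in the spirit of that work, not their actual code: the "experiment" is an invented cost function standing in for a real robot run, and all parameter names are made up:

```python
# Toy sequential model-based parameter tuning loop:
# probe -> fit model -> optimize on model -> evaluate for real -> repeat.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def run_experiment(params):
    # Hypothetical "robot run": lower cost is better, optimum near (0.3, 0.7).
    gain, threshold = params
    return (gain - 0.3) ** 2 + (threshold - 0.7) ** 2 + rng.normal(0, 0.01)

# 1. Initial probing of the parameter space (the logged experiments).
X = rng.uniform(0, 1, size=(10, 2))
y = np.array([run_experiment(p) for p in X])

for _ in range(20):
    # 2. Build a statistical model of the phenomenon.
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    # 3. Minimize on the model: pick the most promising candidate.
    candidates = rng.uniform(0, 1, size=(500, 2))
    best = candidates[np.argmin(model.predict(candidates))]
    # 4. Evaluate it for real, update the data, and repeat.
    X = np.vstack([X, best])
    y = np.append(y, run_experiment(best))

best_params = X[np.argmin(y)]
```

The payoff is that most of the "experiments" happen inside the cheap model, and the expensive real runs are spent only on promising configurations.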
I can see glass being good for things like surgery, where you need to concentrate your attention on something, but would like some extra data to be easily available.
VR might be a substitute for live brainstorming sessions, and it might allow you to visualise in new ways. But latency is going to be killer across remote locations, and that's a big challenge in terms of infrastructure.
2. I'd imagine as these types of techniques become commonplace, we'll have to learn more stats.
You could also apply clustering and neural nets to a lot of data sets, but yeah - maybe we'll all have to become statisticians.
I think the reason he wants giant displays is the reason we have giant shared displays in control rooms too: so you can see what others are looking at. Don't forget maker spaces are supposed to be communal spaces where you share ideas. You can learn a lot just by watching someone else use a tool the right way. And then there's of course the collaboration, where shared displays are even more important.
This specification of the ranges also solves the problem of optimizing multiple parameters simultaneously — people usually begin with hunches about what the optima are, which significantly limits the number of possibilities the system will have to test.
The second is with full VR goggles. Right now there's not a really good way to be able to use them and interact with anything outside the VR environment.
For your other point about the graph: the system doesn't have to pick out the best on its own. With just the possible values and the results displayed, the user could pick which one achieves their goals best. For his light-following robot, though, there's a pretty easy way to evaluate the parameters: how far the robot is from the light. That is a simple function and could easily be programmed.
The integration of the whole room and how you get the robot to automatically do many runs with different parameters, for the viewing across possibilities, is where I think a lot of the difficulty lies. To do that with a small robot you have to have a lot of things automated: robot repositioning, light movement, data collection on the robot, etc.
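The "simple function" part really is simple; assuming the room already logs robot positions per run (made-up example data below), scoring and comparing runs is a few lines:

```python
# Scoring a light-following run by the robot's final distance to the light.
# Positions and the light location are invented example data.
import math

def score_run(positions, light):
    """Lower is better: final distance from the robot to the light."""
    x, y = positions[-1]
    lx, ly = light
    return math.hypot(x - lx, y - ly)

run_a = [(0.0, 0.0), (0.5, 0.4), (0.9, 0.8)]   # logged (x, y) per timestep
run_b = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.3)]
light = (1.0, 1.0)

best = min([run_a, run_b], key=lambda r: score_run(r, light))
```

As the comment says, the hard part isn't this function; it's the room-scale automation that produces `run_a` and `run_b` in the first place.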
Additionally, there are already plans for a front-facing camera (that could do pass-through), because of the being-blind problem, plus the potential for hand/peripheral tracking in addition to augmented reality.
Bonus: I hope they just go all out and make/purchase custom hardware tech for structure.io/project tango structured light real time 3d scanning. Would be so cool!
Assuming it's 5 years (probably overestimating), where will fb oculus be? If it happens to have become fb oculus glass, something you wear that can augment the world, this system of Bret's will come to fruition just as these displays are no longer needed.
I guess that's irrelevant though, as switching displays will be trivial. The real work is in the software.
He had some fade-to-black effect in his presentation, after which the robot magically appears at its starting point ... yeah. You're basically going to need a robot arm if you want to automate that bit.
I think debuggers are underappreciated by open source languages (as in, they're not THE priority). An approach I am forced to use is heavy log statements and a DEBUG flag, so that I can feel the program show me what it's up to.
I think Bret's ideas are worth pondering on. The only issue when dealing with the philosophical idea of time, I find for myself, is that it's an easy rabbit hole to fall into.
Nice interesting paper tangentially related to this, http://www.vpri.org/pdf/tr2011001_final_worlds.pdf
It's not only monitoring because it's not passive, they interact with the system and make changes on the fly, so it's making in a sense too.
More specifically see phybots:
Kato leverages the overhead camera trick in this system, though in a bit different way. See "A Toolkit for Easy Development of Mobile Robot Applications with Visual Markers and a Ceiling Camera:"
Let's say you're doing some medical research, growing some cell cultures, and you add some compounds to the cultures to see what happens. Then something weird happens to some of the cultures, and you don't know exactly what caused it. Perhaps that thing was really an important scientific discovery waiting to happen, but you missed it, because you didn't have all the data.
The process is normally recorded with a lab diary, where you write down everything deemed important. The problem is, you're not going to notice everything, and there are also a lot of things that you can't see with just your eyes, without more sensors.
The system Bret describes here is basically an automated lab diary. With enough sensors it could record much more data, much more accurately than a person, and it would give you a way to query the actual data rather than having to manually browse through pages of text or search through it with just a basic full-text search engine.
A problem with many scientific experiments is that you might have a lot of measuring equipment and sensors for the thing you are experimenting on, but you don't have the same thing for the experiment itself -- something that lets you easily debug the process and see where something went right or wrong. Why was one lab able to reproduce an experiment while another couldn't? These kinds of questions can be very difficult and time-consuming to answer.
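A toy sketch of what "querying the diary" could look like, with timestamped sensor readings in SQLite; the sensor names, culture labels, and values are all invented for illustration:

```python
# Automated lab diary as a queryable store: readings go in with a
# timestamp, and questions about a run become SQL queries instead of
# a manual search through handwritten notes.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE diary (
    t REAL, culture TEXT, sensor TEXT, value REAL)""")

readings = [
    (0.0,    "A", "temperature_C", 37.0),
    (0.0,    "B", "temperature_C", 37.1),
    (3600.0, "A", "temperature_C", 39.4),  # the "something weird"
    (3600.0, "B", "temperature_C", 37.0),
]
db.executemany("INSERT INTO diary VALUES (?, ?, ?, ?)", readings)

# "Which cultures ever ran hot?" -- a query, not a page-flip.
hot = db.execute("""SELECT DISTINCT culture FROM diary
                    WHERE sensor = 'temperature_C' AND value > 38""").fetchall()
```

With enough sensors feeding a store like this, "why did lab A's run differ from lab B's?" becomes a diff over two query results rather than a forensic reconstruction.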
You could collaborate, sharing the same view, or each individual could project different views, or mix and match.
My experience from years of embedded programming is that NO HUMAN BEING is made for working with the cold, brainless machine of metal if you don't visualize your data.
Even the person who tells you she likes doing it can't work on it for long periods of time without burning out.
It is like climbing above 7,000 meters of altitude. Humans can survive in those conditions for some time, but they deplete their internal resources fast.
Some pretty tools in there.
I realize this is antithetical to the prevailing theory of lean startups, but, from a deep research perspective, that is exactly why committing to support an open source project or a company is so dangerous. Soon you would have people depending upon you, and the freedom to explore vanishes.
And if you're going to talk about ideas and inspiration: Lighttable does nothing that emacs didn't do 20 years ago except a little prettier.
See http://SchemaTheory.net for a draft presentation that is still in work. Audios are still in production for the tutorial.
Other papers on Schemas Theory are at https://independent.academia.edu/KentPalmer and http://emergentdesign.net and http://archonic.net
A good book on Schemas is Umberto Eco Kant and the Platypus.
Basically, schemas theory tells us what it is possible to see and also gives us the intelligible templates for our designs.
So it's good to stick to some boundaries. In the example of the robot: you could measure room temperature, because maybe the sensors are reacting to it. Or you could measure the number of people in the room, because the sensors could be reacting to that. Heck, maybe the sensors react differently to different people, so track their faces and store them. Well, maybe the sensors are sensitive to somebody's smell, so track that too.
There are limits to what is useful to track.