Ask HN: Has anyone fully attempted Bret Victor's vision?
276 points by youssefabdelm on Jan 23, 2023 | 153 comments
I love how Bret Victor outlined the importance of programming transforming into a paradigm of direct data manipulation.

I'm much more curious about a programming paradigm that no longer uses text to communicate with computers but instead just directly manipulates data, giving you past, present, and future feedback on how the data would change given your manipulations. Or, to put it a different way, "what if" feedback: "if you did this, the data would change in this way" is visualized across many different dimensions, allowing you to 'feel your way', through feedback, toward where you wish to go.

In other words, you give your computer your input data, and you modify dimensions which allow you to specify what you want the program to do.

To be clear, I'm not searching for specialized interpretations of this "Oh someone did this with typography" or "Oh someone did this with a game" but rather some more generalizable form like "Someone tried to replace Python with an idea like this"

I suppose the nearest thing I can think of is manually modifying the parameters of a neural net but that's perhaps too cumbersome because there are so many. Perhaps if you can put an autoencoder on top of that, and reduce the parameters down to a smaller "meta" set of parameters that you can manipulate which manipulate the population of parameters in the larger neural net?
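
To make that concrete, here is a rough sketch of the autoencoder idea in PyTorch. Everything here is hypothetical (the tiny network, the knob count, and training on noisy copies of the current weights as a stand-in for a real population of parameter sets); it is only meant to show the shape of the approach, not a working tool:

    # Sketch: compress a network's parameters into a few "meta" knobs via an
    # autoencoder, then decode knob tweaks back into the full parameter vector.
    import torch
    import torch.nn as nn
    from torch.nn.utils import parameters_to_vector, vector_to_parameters

    target = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
    flat = parameters_to_vector(target.parameters()).detach()  # all weights as one vector
    n, k = flat.numel(), 8                                      # k = number of "meta" knobs

    encoder, decoder = nn.Linear(n, k), nn.Linear(k, n)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    # Train the autoencoder to reconstruct parameter vectors; noisy copies of the
    # current weights stand in for a real population of good parameter sets.
    for _ in range(200):
        batch = flat.unsqueeze(0) + 0.01 * torch.randn(32, n)
        loss = ((decoder(encoder(batch)) - batch) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # "Direct manipulation": nudge one knob and the whole network changes at once.
    knobs = encoder(flat.unsqueeze(0)).detach()
    knobs[0, 0] += 0.5
    vector_to_parameters(decoder(knobs).detach().squeeze(0), target.parameters())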

I'm just really curious if there have been instantiations along these lines (as opposed to code live-running with results on the sides).

I realize this is all quite difficult, and may even seem 'impossible': some sort of generalizable system that does this for all sorts of programs. I've heard people say it can't be done, and that code is the ideal format. I hold that in abeyance; I don't really know, but I'm intrigued to discover those who have a counter perspective and have attempted to build something.

Also really curious if you know other similar people to Bret Victor I should check out!




I don't think it will ever happen except for toy projects. If you're manipulating some small list of 10-20 objects and those objects have some kind of useful visual representation, then for some small use case you can possibly, maybe, design a system that could do what's shown in the demos. I'm skeptical that it scales to more complex problems with more complex data.

Bret Victor himself has made zero headway. And no, Dynamicland is not it. Dynamicland is still coded in text with no visual representation itself.

Other examples always show the simplest stuff. A flappy bird demo. A simple recursive tree. A few houses made of 2-3 rectangles and a triangle. Etc...

To be even more pessimistic, AFAICT, if you find a single example you'll find that even the creators of the example have abandoned it. They aren't using it in their own projects. They made it, made some simple demos, realized it didn't really fit anything except simple demos, and went back to coding in text.

I'm not trying to be dismissive. I'd love to be proven wrong. I too was inspired when I first read his articles. But, the more I thought about it the more futile it seemed. The stuff I work on has too many moving parts to display any kind of useful representation in a reasonable amount of time.

What I can imagine is better debuggers with plugins for visualizers and manipulators. C# shipped with a property control that you could point at a class and it would magically make it editable. You could then write a custom UI for any type and it would show up in the property control (for example a color editor). I'd like to see more of that in debuggers, especially if one of the more popular languages made it a core feature of their most popular debugger, so that it became common for library writers to include debug-time visualizers.

Even then though, it's not clear to me how often it would be useful.


The seminal "No Silver Bullet" paper states that there is no universal representation for arbitrary programs. The alternative is making specialized tools for your use case.

I think Tudor Girba has the most usable and real-world implementation of a Victor-like vision; moldable development in Pharo

https://moldabledevelopment.com/

The idea is that you adjust your development environment in real-time with default and custom widgets, tools and visualizations.

I've never really understood Victor's examples to be "examples of applications you can create if you follow my way of thinking", but more "hey, this is some cool stuff you can do with computers that you probably never even considered."


It seems Victor's vision has a hard dependency on an environment that makes this kind of meta-manipulation very natural and easy. Basically it seems his vision is the UX extension of this lower-level DX vision:

https://www.youtube.com/watch?v=8Ab3ArE8W3s "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022)

Of course this kind of runtime freedom/power/access probably has a performance cost, aaand I have a hard time figuring out how it would work in a real-life setting.

So yeah, instead of this super-universal tinkerability what's really needed (at least in the short term) are better tools. A better strace/tcpdump, better frameworks (better batteries included in more places). More care for "post hoc" error investigation. (Yeah a 2000 line exception going back to EveFactory(AdamsRib) factory is great, but what was the incoming HTTP request? What was the value of the variables in the local scope? What was the git commit that produced this code? Etc.)


One big reason errors like that are usually unhelpful is for security. I would love it if my SQL errors printed the offending row out but that row also has PII so it can't be saved to a log. They'd need to be encrypted or stored in the database itself somehow to not ruin 10 layers of PII protections.


That's not the case or the issue for the vast majority of developers


Not everyone has to work with PII, but the general rule of not logging your data to generic logs or stack traces still applies to everyone. On top of that, tools like languages or frameworks don't know what the data they're working with means, so they default to the secure option of not writing data out on errors. If you know the data isn't sensitive and it's a common spot for errors, you can have the data logged by wrapping the pain point in your code with a try/catch.
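
For instance, a minimal Python sketch of that local opt-in (parse_and_save and the row fields are made up for illustration):

    import logging

    log = logging.getLogger(__name__)

    def parse_and_save(row: dict) -> None:
        # stand-in for the real logic that sometimes blows up
        if "email" not in row:
            raise ValueError("missing email")

    def import_row(row: dict) -> None:
        try:
            parse_and_save(row)
        except ValueError:
            # Deliberate, local decision to log the offending data here,
            # instead of letting it leak into a generic stack trace.
            log.error("import failed for row id=%s payload=%r", row.get("id"), row)
            raise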


If you write an application that makes money in a manner that involves transactions from people somehow, it will be the case for you. That is in fact the majority of developers.


I can't imagine directly translating any project I've worked on to a non-code representation. But, that's only because they've been developed not only in code, but for code. I can totally see a post-code development experience that mirrors how programs work much better than a big string.


>> I can totally see a post-code development experience that mirrors how programs work

I can't, because how programs work is a projection of how computers work and how computers work is by doing math. The whole reason why we have been clawing our way up the ladder of abstraction for so many decades is that it's really hard to express "Which aircrew do I need in Seattle tomorrow morning" in terms of adding 1 to a value in a register. We invented these cool little machines that do math really fast and then made them so cheap and affordable that of course we started simulating everything that could be represented mathematically. I've been programming since 1975 and when I think back I can recall dozens of these conversations over the years. How do we free programming from code? Personally, I don't think we can. The code is all that programming and computing is. Just because we have managed to do so many things with our ultrafast calculators doesn't mean they can somehow be elevated beyond what they fundamentally are. It's like we want to somehow lift them up to be like us, and on just a little reflection that seems absurd doesn't it? You might as well expect it from a toaster or a crescent wrench.


Hm, couldn't you make the same argument about punch cards? The abstractions you talk about translate to different mediums differently. I think text/code/string is a very universal and low tech medium, but I don't think there's anything about it that would make it ideal for working with those abstractions. And, let's not forget that there's a myriad of different abstractions, which to me suggests that there might be as many different ideal mediums.


Given that there isn't some superior visual representation of math, I think it is reasonable to say there won't be one for code - at least for a while.


There are superior visual representations of math as soon as you add specificity such that you can measure the difference between representations in terms of their impact or some other property. Equation coloring according to semantics stands out as an example of this. Interestingly, this example already has wider adoption in code than it does in math. As someone who has tried to format a LaTeX paper to use coloring, and has not had to do the same for colored code, I can understand why. Yet if you look at Khan Academy as an example, the technology lifts much of the burden of doing the coloring, so they do the coloring, because it helps to highlight the key ideas.

It can be a fun exercise and illuminating to go over equations you've written down and try to translate them to colored variants. The classification task forces your mind to more deeply engage with the equation and can improve understanding.
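
As a small illustration (plain LaTeX with the xcolor package; the colour-to-role mapping is just one possible scheme), colouring the parts of an equation by their semantic role takes only a few macros:

    \documentclass{article}
    \usepackage{xcolor}
    % one colour per semantic role
    \newcommand{\obs}[1]{{\color{blue}#1}}     % observed data
    \newcommand{\param}[1]{{\color{red}#1}}    % learned parameters
    \newcommand{\pred}[1]{{\color{teal}#1}}    % model prediction
    \begin{document}
    \[
      L(\param{\theta}) = \frac{1}{n}\sum_{i=1}^{n}
        \bigl(\obs{y_i} - \pred{f_{\param{\theta}}(\obs{x_i})}\bigr)^2
    \]
    \end{document}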


I don't know man. You could say that engineering and architecture are just applied math, but blueprints are not math notation (and neither is code, by the way).


If we had some good way to work with graphs, code could easily be represented as such. I mean still having code inside graph nodes, but you could switch views between program flow, data flow, data-structure dependencies, etc.

It could already be made better than what we have, but there are tons of little improvements on text already and editors are very optimized for how we work currently, so it seems like it would take enormous effort to match that.
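
As a rough sketch of the kind of graph view that could be derived from the text we already have (Python standard library only; the toy source and the static call-graph extraction are purely illustrative):

    # Derive one possible graph view (a static call graph) from plain source text.
    import ast
    import textwrap

    source = textwrap.dedent("""
        def load(path):
            return parse(read(path))

        def parse(text):
            return text.split()

        def read(path):
            return open(path).read()
    """)

    tree = ast.parse(source)
    edges = []
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            for node in ast.walk(func):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    edges.append((func.name, node.func.id))

    print(edges)  # [('load', 'parse'), ('load', 'read'), ('read', 'open')]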

But for example for web development, seeing visual changes live is already the norm, and it was not popular back when he was doing his presentations (well, the meta refresh tag in the 90s, good times). Plus, tests running in the loop only on things that changed also seems like a step in a good direction.

Most of all, all languages are optimized for text. Visual representations seem to have an inherent problem in that the most intuitive way to manipulate them is by touch, which won't get anywhere close to the efficiency of a keyboard. I don't think that's impossible to solve, though. We just haven't yet. We've come a long way from switches and punched cards, and there is no reason to think we will stop here.

As much as it pains me to say, it's possible that much code will be dictated to "AI" in the future, and then graph representations of what's going on will start to make much more sense.


We found that the key is to distinguish between writing and reading. Writing is tied to the computation, and there we want as much expression as possible, and text is often hard to replace. But whatever we write, it's just data, and data can be read in many ways, depending on the question we have about it. That is the basis of what we call moldable development. Not only are we using views extensively (in the range of thousands per system), we find it provides a significant competitive advantage, too.


> I'm not trying to be dismissive. I'd love to be proven wrong.

I don't see a world where simulating and visualizing n-steps can ever be as performant as just having one step. Even deploying immutable structures will lead to performance penalties.

The only place it is usable is in small/toy examples, where computing n steps can subjectively feel as fast as one step.


There's probably an argument for having the compiled/optimized and deployed version be the low-dimensional projection of the high-dimensional simulation, and when an error happens the developer should be able to restore (at least partially) that state in the simulation, which could help to understand the problem.


I think a Bret Victor-like dev tool would be a good companion to pure functional languages, and would help improve their standing.

Shared state code does not respond well to running functions in a tight loop while changing the code. Mutations accumulate and the results become meaningless very quickly.

There have been some minor attempts that may even predate Bret's work, especially around trying to run unit tests on every edit. Those tend to be slow, so some people have worked on using code coverage tools to figure out which code changes affect which tests, but I suspect they ran into problems since I haven't heard about any of those in some time now.


Please see his latest demo; you might need to update those priors.

(Bret Victor on stage at approx. 14:14) https://youtu.be/_gXiVOmaVSo


Is the iPhone’s Dynamic Island inspired by Magic Ink? Both the look and the name seem to suggest so.


The most remarkable thing about "Bret Victor's vision" is how different people have interpreted what his vision even is. His ideas are multi-faceted, so they inspire in several important directions all at once, e.g.:

- "what if" feedback loops (crudely, "live programming")

- direct manipulation (an old idea but beautifully captured in his projects)

- making logic feel more geometric / concrete

- visualizing and manipulating data, especially over time

- humane interfaces (putting "computing" into the world around us, but without AR)

- etc.

Bret Victor is very much Alan Kay's protege and has unfortunately inherited the curse of people cherry-picking particular ideas and missing the bigger picture.

So as others have pointed out, the only person who may be fully attempting Bret Victor's vision is Bret Victor with Dynamicland. You may also be curious to check out Humane [1] which is a hardware startup founded by ex-Apple people. They're rumored to be shipping a projection-based wearable device this year. This device could potentially be a platform for people to experiment more in the direction of Bret Victor's vision.

[1] http://hu.ma.ne


> Bret Victor is very much Alan Kay's protege and has unfortunately inherited the curse of people cherry-picking particular ideas and missing the bigger picture.

Maybe it's time we lay some of the blame for us idiots just not getting Alan Kay's ideas on Alan Kay. At this point he only has himself to blame if he's spent 50 years trying and failing to communicate his wonderful ideas.


Honest question: who in this field has a better track record of their visionary ideas leading to real-world implementations than Alan Kay?

Not everything ended up being adopted in the form he envisioned or with the semantics he proposed, but there have been a lot of right calls and influential designs in his 50+ year career.


Douglas Engelbart. Tim Berners-Lee. Smalltalk is pretty cool, though.


What kind of impact did Engelbart have after "the mother of all demos"? Much more limited than Kay, IMO. Kay’s Dynabook is arguably just as important, and he went on to do a lot of other stuff.

Same for Berners-Lee. Sure, he remains influential over the web's incremental progress on W3C, but anything more visionary seems to be a miss: XHTML, semantic web, the Solid project...


The mother of all demos is the world we live in now. He demonstrated a working model of 30 years into the future. It's like someone walking on stage and showing you what life will be in 2050.

Engelbart invented the mouse and the entire idea of pointing at things on the screen to interact with them. The results of that interaction is now what you call "the web" (hypertext).


Engelbart also substantially inspired Kay, but each of these people would probably not have had success without the network of inspirations. In fact I think we should talk about it like this: Bush->Engelbart->Kay->Berners-Lee (superficially), rather than the individuals on their own. It's also interesting that none of these people were at the head of big successful companies, though that's not surprising since it basically involves a lot of compromises.


I agree entirely with you but had a hard time ignoring the "yeah but what have you done for me lately?" tone above


Oh I'm with you. And I think (and hope) TBL isn't finished yet.


I think his experiment with DSLs (VPRI STEPS project) is very insightful, I always herpderp about how we need better tools to in situ express "the domain", but of course reality has a very strong bias for keeping apparent complexity down, keeping cognitive friction between components minimal, leaky abstractions are bad so encapsulation has to be total, so it's extra hard as an afterthought (hence ORMs and middlewares tend to be very clunky), etc.

Though there are a few examples of moving in the right (?) direction, for example styled components (for React), which move CSS into JS/TS code; there's a VS Code plugin for it, and thanks to tagged templates they can be validated.


I remember seeing a demo where A.K. used an experimental OS made with ~1k LOC using the STEPS approach to actually run his slides. Never found the link to it again (if someone has it I'd appreciate it), but even more importantly, I'd love to know what happened with that OS. It would seem like a great research OS going forward if it really had a GUI, networking and an FS expressed with such a low amount of user code. It also seems to me the project coming closest to Engelbart's vision (as their NLS also did everything by meta-programming their way up from an assembler to increasingly high levels of abstraction).


Alan Kay addresses Qualcomm https://vimeo.com/82301919


Thank you! Is he actually running their own OS here, or is it just a scripted slide application? What I saw was more of a smaller talk given to students if I remember correctly, where he goes into the technical details of his setup a bit.


I am one of three people who have this code running live. It is way more amazing than you think; it is not scripted at all. It's a full OS/GUI personal computer in 20 KLOC, no external libraries. The graphics, for example, are just 435 lines of code (versus millions for Cairo).


Have you considered creating e.g. a YouTube Series going through it? Or contacting e.g. Computerphile? This is way too awesome not to share with the world. How did you get involved, have you been working for Alan Kay?


I got involved when I was 17 years old, back in 1981, reading about the Alto and Smalltalk in Byte magazine. Alan Kay and Dan Ingalls at Xerox PARC had built this amazing GUI, programming language and virtual machine [4]. By 1985 I was building my first Smalltalk Transputer supercomputer and typing in the code listing of the Blue Book. Byte magazine even invited us to publish this supercomputer on their front page as a DIY construction kit for their readers.

Things got really interesting in 1996 when Alan and Dan released Squeak Smalltalk with Etoys as free and open source with this almost metacircular virtual machine.

In 2008 we had progressed to designing SiliconSqueak, a Smalltalk wafer scale integration: a 10,000-core manycore microprocessor with the late-bound, message-passing Squeak Smalltalk as its IDE and operating system, and the RoarVM virtual machine with adaptive compilation. We are still working on that; it costs $100K for the mask set that you send to the TSMC chip fab, and you get back a 180nm wafer with the 10 billion transistor supercomputer for $600 a piece. Getting funding for mask sets at smaller nodes, like $3 million for 28nm, or over $50 million for a million cores at the most advanced 3nm node, is a life's work.

We have not been directly working for Alan Kay, Dan Ingalls or David Ungar but we exchange emails, write scientific papers [2], give lectures [1] and meet in online video sessions [1] with the vibrant Smalltalk community.

When these researchers release the source code like the STEPS project, RoarVM or the Lively Kernel we try to port it to our SiliconSqueak supercomputer prototypes and of course we develop our own Smalltalk of the Future, parallel adaptive compilers, virtual machines and hardware X86 emulators.

So to answer your first question: yes, there are hundreds of lectures and talks on YouTube and we share all this work with the world. Bret Victor's, Dan's or Alan's lectures are just a small part of that.

The hard part of our research is getting $100K funding together for the 10,000-core supercomputer; a $2000 wafer scale integration (WSI) computer is a little too big an amount for a crowdfunding project.

So I still hope Y Combinator will fund me, but they have this silly 'no single founder' restriction. You seem to be a researcher at ETH Zurich; why don't you join me as cofounder?

We make a 3 cent Smalltalk microcontroller (an Alto on a chip) and a $1 version with 4 MB and gigabit ethernet; with Smalltalk, Etoys and Scratch built in, you get a superior Raspberry Pi/Arduino successor that 5-year-old children can program, because Smalltalk and Etoys were designed with children in mind.

Our Morphle WSI would be a great desktop supercomputer, but the real advance would be the 3nm wafer scale integration costing $20,000 (retail price). More than 40 trillion transistors, a runtime-reconfigurable 1 million cores, and the full Smalltalk language, IDE, GUI and OS in 10,000 lines of code, running at exaflops per second. Way more advanced than CUDA on a GPU. I gave a 2 hour talk on that:

[1] https://vimeo.com/731037615

[2] https://scholar.google.nl/citations?user=mWS92YsAAAAJ&hl=en&...

https://scholar.google.nl/citations?hl=en&user=6wa49gkAAAAJ

[3] https://web.archive.org/web/20140501222143/http://www.morphl...

[4] https://youtu.be/id1WShzzMCQ?t=519


Super interesting stuff, will go through it! Somewhat unfortunately I've mostly departed from research and have defected to the financial industry. I actually recently gave a talk about Engelbart and his ideas to my colleague, in case someone here finds this interesting:

https://www.youtube.com/watch?v=jIlzXEaOH1I


You seem to be in a perfect position to advise Bret Victor or us about financing options for this work, especially the non-research parts. For example, we apply our wafer scale technology and Smalltalk software to energy systems and energy production at 1 cent per kWh, around 60 times lower than European grid prices. That should interest the financing sector and asset management.


... holymoly.. that's certainly a kind of perseverance, fortitude and probably obsession :)

how are you going to cool the wafer? what's the TDP? :o

100K sounds very doable for crowdfunding - or maybe you need to find just one eccentric multi millionaire.


I cool a wafer scale integration with a liquid that boils at 43 C by immersing the wafer in this liquid. The bubbles (cavitation) of the boiling liquid should not damage the surface layers of the wafer, of course. This boiling liquid is further cooled by water and a sonic heat pump moving the heat into a water tank, where the stored heat is used for showers or cooking [1].

Given 45 trillion transistors (45x10e12) times 3 femtojoule (3x10e-15) to switch each transistor at 1 Ghz (10e9) you get 1.000.000 joules/sec = 1 megawatt. These are ball-park numbers, back of the envelope calculations. In reality I make full physics simulations and electrical SPICE simulations of the entire wafer on a supercomputer aka on the wafer scale integration FPGA prototypes and the wafer supercomputer itself.

The EDA (Electronic Design Automation) software tools we write ourselves in Smalltalk and OMeta, and these also need our own supercomputer to run on. Of course the feedback loops are Bret Victor-style visualizers [3][2]. Apple Silicon, or this small company [4], demonstrate that only with custom EDA tools can you design the ultra-low power transistors needed to keep our wafer from melting.

The FPGA prototype is a few thousand Cyclone 10 or PolarFire FPGAs with a few terabytes/sec memory bandwidth, or a cluster of Mac Studio Ultras networked together in a Slim Fly network that can double as a neighbourhood solar smart grid [5]. You need a dinosaur egg to design a dinosaur, or is it the other way around? [6]

A TDP (Thermal Design Power) of 1 megawatt from a 450 mm disk is huge, it will melt the silicon wafer. But then not all transistors are switching all the time and we have the cooling effect of the liquid.

We must power the wafer from a small distance inductively or capacitively, best with AC. So we need AC-DC inverters on the wafer, self-test circuits to make sure we find defects from dust and contamination and isolate those parts and reroute the network on the wafer.

[1]https://vimeo.com/731037615 at 21 minutes

[2] https://youtu.be/V9xCa4RNfCM?t=86

[3] https://youtu.be/oUaOucZRlmE?t=313

[4] https://bit-tech.net/news/tech/cpus/micro-magic-64-bit-risc-...

[5] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[6] Frighteningly Ambitious Startup Ideas (dinosaur egg)

https://youtu.be/R9ITLdmfdLI?t=360

http://www.paulgraham.com/ambitious.html


OK, one thing I don't understand. You're talking about a ~1MW supercomputer. With 100K funding you could just about pay for the cost of electricity of this thing for 3-4 weeks (using US electricity prices). Actually building it would be on the order of at least 10s, if not 100s of million. I gathered from one video that you're an independent researcher group - how is this all being funded?


I am an independent researcher, my funding is zero and I am therefore rather poor. I get paid for technical work on the side, like programming or building custom batteries, tiny off-grid houses or custom computer chips (to charge batteries better). I am for hire.

Solar electricity prices can be below 1 cent per kWh [1]. I generate 20kW solar in my own garden and store some in my own custom battery system with our own charger chips. The prototype supercomputer warms my room. I hope to move to my own design off-grid tiny house in a nature reserve in Spain or Arizona to get 2.5 times more energy yield and even lower cost of living and cheaper 10 Gbps internet.

If you only run the computation during daylight and then move the computation with the sun to two wafers in two other timezones when those locations have sunlight, you keep below 1 cent per kWh. Some supercomputers do this already. In contrast, running 24/7 from batteries raises the cost to almost 2 cents per kWh, still far below bulk electricity prices in datacenters. Batteries turn out to be more expensive than having three solar supercomputers in three time zones. You learn from all this that energy costs dominate the cost of compute, even with our cheapest transistors. Hence our ultra-low power transistors, not just to prevent melting of the wafer but mostly to make cheaper compute (for the cloud).

The wafer scale integration at 180nm costs around $600 per wafer to manufacture, only once it cost $100K to make the mask set, amortised over the $500 wafers you mass produce, this is how you get to $600 for 10000 cores at >1 Ghz.

These $600 wafer supercomputers use less than 100-700 watts with normal use, because not all transistors switch all the time at 1 GHz. They are asynchronous ultra-low power transistors, with no global clock wasting 60% of your energy and transistors, and you don't touch all SRAM locations all of the time. The larger 3nm wafer scale integrations won't use 1 MW either, just a few kW, less than a watt per core.

Actually building these supercomputers will cost $100k for 180nm, $3 million at 28nm or around $30 million at 3nm. The FPGA prototypes cost $10 per core, similar to GPU prices. This includes the cost to write the software, the IDE, compilers, etc.

You can run X86 virtual machines unchanged on our 10,000 - 1,000,000 manycore wafer scale integrations at 1 cent per kWh. This is by far the cheapest hyper-scale datacenter compute price ever and may come to outcompete current cloud datacenters, which consume more than 5% of all the world's electricity. And by locating our wafer supercomputers in your hot water storage tank at home [6], you'll monetise the waste heat, so the compute cost drops below 1 cent per X cores (dependent on the efficiency of your software [5]). Another place you need these ultra-low power wafer scale supercomputers is in self-driving cars, robots, space vehicles and satellites: you can't put racks of computers there and you need to be frugal with battery storage.

These CMOS wafer scale integration supercomputers are themselves prototypes for carbon solar cells and carbon transistors we will grow from sunlight and CO2 in a decade from now [2]. Then they will cost almost nothing and run on completely free solar energy.

Eventually we will build a Dyson Swarm around our sun and have infinite free compute [3], called Matrioshka Brains [4]. To paraphrase Arthur C. Clarke: if you take these plans too seriously you will go bankrupt; if your children do not take these plans seriously, they will go bankrupt.

[1] https://www.researchgate.net/profile/Merik-Voswinkel/publica...

[2] https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf

[3] https://en.wikipedia.org/wiki/Matrioshka_brain

[4] https://gwern.net/docs/ai/1999-bradbury-matrioshkabrains.pdf

[5] https://youtu.be/K3zpgoazRAM?t=1602

[6] https://tech.eu/features/7782/nerdalize-wants-to-heat-your-h...


The OS (in 20K lines of code) is called "Frank" and in the talks where Alan uses it for his slides at one point he zooms out and you can see a cartoon Frankenstein monster in the top left corner.

You might find this list of Kay's talks interesting:

https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay


Please see the comments on this Morphle HN account for those Alan Kay talks, or mail morphle &at& ziggo &dot& nl for links to all those Alan Kay student lectures you remember.


Alan Kay is the Tesla of programming. Beautiful design, genius implementation but utterly impractical in 90% cases.


He hasn't failed at all. What are you talking about? Dozens of programming languages are more sensible and ergonomic because of his influence.


I agree with this. It's hard to nail down why Victor's talks are so compelling, when each of these items separately are much more mundane but are still quite well explored areas.

* "What if" feedback loops/direct manipulation

Victor's vision abstractly seems to be trying to predict/explore the consequence of some action in programming, and in specific demonstration seems to be using small widgets to allow easy manipulation of inputs to get an intuitive understanding of outputs. This could be boiled down to different goals: "Allow a program to be more easily tweaked" and "Explore a concept to get intuition of a different viewpoint". The more cynical/pragmatic interpretations for these are "make a GUI for your program" and "use interactive demos when teaching certain topics".

The first interpretation is almost comical, but we can maybe expand it to "when you make a GUI, think about how your interface is being interpreted intuitively, and this can help make your app more usable". This can maybe be understood more easily when taken with the fact that Bret Victor helped design the interface for the first iPhone - famously intuitive to use. This also leads to its limitations - only concepts that have another, more intuitive viewpoint can be represented. I can add a colour wheel to my WYSIWYG editor rather than hex values, but I can't easily create a GUI that lets me express that I want to validate an email address, strip its whitespace and put it into lowercase.

The second interpretation leads to explorable explanations, which Victor has made a few of himself [0,1], but I would also cite Nicky Case [2] and unconed [3] as other good examples. Again, this is only afforded to specific topics that have scope for exploration.

* Making logic feel more geometric/concrete

This can be seen in things like LabVIEW (made in 1986) and Apache NiFi (made in 2006), among others, e.g. SAS. In a sense, this has existed in the form of UNIX pipelines and functional programming since the first LISP was made. There is a further point, which is "there currently aren't tools like this that are suitable for a non-programming audience", which is what 'Low Code' and 'No Code' are trying to achieve, but unfortunately in practice as soon as you hit a limitation of the framework you're back to needing an engineer again.

* Human Interfaces

Sort of addressed in the 'feedback loops' point above, but Dynamicland is an interesting demo of what he's trying to get to. I think this speaks more to me with the internet of things. I have friends who have set up full smart-home heating systems and can move music between rooms, and all of that is very much seen the same as adjusting a physical thermostat rather than as 'programming' or similar.

There is definitely a lot that can be explored here for certain applications, but there probably isn't direct utility in arranging pieces of paper with coloured dots on it in order to set the path of a robot. I can see this in a more consulting/capture sense of presenting certain input parameters in a more physical format, but again this is deviating from the OP's notion that this is a whole programming environment.

[0] http://worrydream.com/LadderOfAbstraction/

[1] http://worrydream.com/KillMath/

[2] https://ncase.me

[3] https://acko.net


I really wish I kept a list of related stuff somewhere…

Edit: This comment is a goldmine: https://news.ycombinator.com/item?id=34485994

——-

There are lots of hobbyists, academics, and even companies inspired by Bret Victor’s talks alone.

I know of at least 2 open source experimental programs that were inspired by specific demos:

https://github.com/laszlokorte/reform-swift

http://recursivedrawing.com/

I know there are more too but I can’t find them right now. You could probably find a lot of good stuff just searching GitHub for “Bret Victor”.

There are lots of people in academia experimenting with programming languages and environments. Try searching for papers that cite Bret Victor as well and I’m sure you’ll find plenty.

For a quick glimpse at the academic world without spending hours looking for papers worth reading, I recommend perusing the Strange Loop Conference YouTube channel. There are some interesting experimental programming languages and IDEs out there.


It's interesting to note that Webflow was also inspired by Bret Victor. I'd certainly say it's a successful product.


We are trying to apply his ideas to quantum computing interactions. Starting with linked, scrubbable visualizations for building intuitions.



I helped with the Eve language, which was a VC funded attempt down this path (https://witheve.com). This was cut short when the money ran out, so it wasn’t a full attempt.

After that project ended I started working on my own attempt called Mech, which specifically handles the time-travel and what-if features you mentioned (https://GitHub.com/mech-lang/mech; you can play with an early alpha version here: http://docs.mech-lang.org/#/examples/bouncing-balls.mec). I've made sure money running out won't kill this project, so hopefully it's a fuller attempt.

Someone else posted a link to futureofcoding.org, which is a community that works on these types of projects. You can find a lot more there.


> This was cut short when the money ran out, so it wasn’t a full attempt.

Hm, I'm not sure I would agree with that. It was a full attempt, but an unsuccessful one in my view. Why do you think it wasn't a full attempt?

You tried, it didn't work (which is fine) but money running out is usually not the cause of a failure but the reality check that the project is a failure, if it wasn't the money would not have run out!


That's false. You could have a project well underway toward fulfilling its goal, but not on a timeframe in accordance with what a VC would expect. It's not a full attempt because the developers didn't get to give it their all before they had to give up; they just didn't have enough money to both pursue it further and support themselves and/or their families at the same time.


If you can't communicate that you are on to an actual win before the cash runs out that's failure, pure and simple.

I've seen more than a few of these 'oh, if only we had more money we'd have surely made it' or even 'the shareholders killed our company by refusing to invest any more' but that's in all of the cases that I've seen simply a company that is on life support from day #1 but refusing to admit the reality of the situation: that it's time to move on.

Been there, done that, and I totally sympathize with how it feels but coming up with a realistic time-to-market and an actual working setup is part and parcel of success, the lack thereof is highly correlated with failure and/or acquihires.

VCs don't have a set timeframe for your success, you set that timeframe yourself when you go and ask for money in the first place. There are some exceptions to the rule (Cisco comes to mind) but in general this is just another form of failure. Getting funding is to get someone to write a lottery ticket, they are under no obligation to provide more funding than that particular time and failure to raise more money from existing investors or new ones is strongly correlated with an inability to admit to failure and/or pivot fast enough that you can still land on your feet. This sort of introspection is a massive advantage for founders that are capable of doing that, and a huge hindrance for those that believe so strongly in their vision that they can not depart from their tracks until they hit the wall.


Failure as a VC-funded enterprise != failure of the underlying 'thing' as a concept. A programming language seems to be a weird fit for the VC model in the first place, no?


Yes, that's true and that's why I state my position: the choice to bring in foreign capital changes the equation drastically. They were lucky to get funded, and burned through the cash at a higher rate than they should have to make it to the finish line.

I don't see how you can blame the VCs for that. I've seen a couple of VCs invest in language ecosystems and/or language-related products, and some of those are quite successful, but they had a working product and paying customers; they decided to use VC as an accelerator, which is perfectly OK.

If you need VC to do your basic research and you are pre-product with all kinds of unknowns then that's a bad match regardless of the field, unless you make it perfectly clear up front that you expect that follow on rounds are going to be required.

As for outcomes: there are successes, soft landings and failures, I have not seen a fourth case in practice across many, many companies. Not as many as YC but a fairly representative fraction of that number.


I interpreted the title of this submission to mean "Has anyone exhaustively tried out Bret Victor's ideas?" With respect to Eve, the answer is that although we tried many things he demoed, we didn't try them all, and therefore I characterized it as not being a full attempt at his ideas. And to be clear, I don't want to sell Chris short by implying that Eve was just some attempt at reifying Bret's ideas, because Eve did have a lot of ideas that didn't come from Bret (or that Bret himself got from others like Kay).

Whatever the failings of Eve as a business idea (admittedly many), it didn't prove that these ideas are a dead end. Far from it: we arrived at the best ideas toward the end of our runway. Early on we spent a lot of time trying things out that seemed cool in Bret's demos but didn't really scale to being an actually useful tool, either because the demo was too niche (e.g. the Mario jumping thing), or because it wasn't actually a useful thing (e.g. live coding by recompiling a program on every keystroke). That's what the 2mil actually bought, I think.

In the end, we abandoned the business but not the ideas. The Eve v0.6 runtime was forked into Mech, and although that's been largely replaced by now, it still maintains the spirit. Chris has been working on his own project for a while now based on the final Eve prototypes that we demoed right before we closed up shop.

> burned through the cash at a higher rate than they should have to make it to the finish line.

We were actually very frugal with the money, and set the burn rate as low as we reasonably could. Most of it went to our 5 salaries, which were far below market in SF; and our rent, which was pretty reasonable for SOMA. Like I said, Paul took a different more drastic approach with Dark to set their burn rate even lower, but Chris (and all of us really) didn't want to go that way because it was better working on Eve as a team. The best thing we could have done actually would have been to leave SF, but that wasn't really an option for many reasons.

The problem with determining a burn rate that gets you to the finish line is: how do you a priori figure out where the finish line is in such a nebulous space? We went home in the end, because the entire idea of Eve was essentially about trying to identify that line. How far did we need to push in order to get to a business that empowers a billion people to code? We identified several viable businesses along the way (you can see companies like Wix and Miro becoming large in this timeframe, and Eve could have been something like that), but they weren't the business, the one that would bring a billion other people into coding. Low-code isn't the way.

I'm sure we could have convinced some VC to fund a low-code SAAS at the time -- we had the team to build it. But it was a decision we made to actively not do that thing, and shoot for the moon. Can't blame VCs for not funding that kind of ambition in perpetuity, but I still can't characterize the effort, not having realized its full vision, as a full attempt.


Excellent comment, thank you for the additional insight. In my opinion - but feel free to ignore it - you either start such a project from within a university, a large company that then funds it all the way through (though the project can still get axed) or you bootstrap it whilst being extremely frugal. VC is an extremely bad match for such a project for many reasons, just when in the life cycle of a fund an investment is made can have a very large effect on the outcome.

Betty Blocks comes to mind, they are funded and Mendix eventually got acquired by Siemens so there are some examples of success stories in this space but also many failures and compared to those two this is a much more ambitious project and much more risky as well.

And I agree that you can't a-priori determine how much cash you will need but you can make sure that you either have plenty or nothing at all assuming that the finish line is achievable in the first place, which I'm not necessarily sold on.


Oh hey, I loved reading about Eve around the time of its release. Also isomorf (https://blog.isomorf.io/what-is-isomor%C6%92-bdc50ce597ee). Something about languages that try to shift the paradigm has always fascinated me. Looking forward to learning about mech!


Hi Nick, thanks for sharing that project, it hadn't been on my radar. I think it is a project squarely in the Eve ethos! Sadly I can't find any code on their GitHub accounts, so they must be doing development in private.


Thanks for sharing. Both Eve and Mech have me intrigued. I’m going to do a deep-dive into them later today to learn more and explore the concept.


I'd start here for a quick 10 minute overview: https://www.hytradboi.com/2022/i-tried-rubbing-a-database-on...


Mech looks really interesting! Does it handle cycles in the data dependencies? I find this usually to be the crux of the reactive model.


Yeah, it does. We can have cycles in data and do recursion (although not in functions yet), but it is admittedly a little strange compared to the imperative world; there's a recursion limit set that will short-circuit a computation if it nests too deeply. This isn't because of a danger of stack overflows, but because Mech programs are intended to reach some steady state at the end of each event. The problem with cycles is that we don't know whether something is in an infinite loop, or whether it's still churning before a steady state is eventually reached.

That said, one of the things I'm really interested in is seeing how far I can take the model on a DAG, because programs are much easier to reason about when they don't have cycles in the data. So far it seems you can write some very complicated programs (games, graphical user interfaces, robotics) as a DAG.
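
To make the "steady state or bail out" idea concrete, here is a tiny generic sketch in Python (this is not Mech, just an illustration of reactive re-evaluation with a pass limit so a cycle can't spin forever):

    MAX_PASSES = 100

    def settle(values, rules):
        """rules: {name: fn(values) -> new value}. Re-run until nothing changes."""
        for _ in range(MAX_PASSES):
            changed = False
            for name, fn in rules.items():
                new = fn(values)
                if values.get(name) != new:
                    values[name] = new
                    changed = True
            if not changed:
                return values  # steady state reached
        raise RuntimeError("no steady state within pass limit (possible infinite cycle)")

    # Acyclic example: settles after two passes.
    print(settle({"x": 2}, {"y": lambda v: v["x"] + 1,
                            "z": lambda v: v.get("y", 0) * 2}))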


Thanks for the detailed explanation!

I've been trying to write a game and I always keep running into cycles. You're right, a lot can be done as a DAG, but especially for games, cycles happen a lot.

I'd love some resources on how to deal with those cycles when they come up.


For the people asking who is Bret Victor: When "What are your favourite tech talks of all time?" is asked on HN, he gets more mentions than anyone else, usually.

Some of his talks:

Inventing on Principle https://www.youtube.com/watch?v=PUv66718DII

The Future of Programming https://www.youtube.com/watch?v=8pTEmbeENF4

The Humane Representation of Thought https://www.youtube.com/watch?v=agOdP2Bmieg

Media for Thinking the Unthinkable https://www.youtube.com/watch?v=oUaOucZRlmE

Seeing Spaces https://www.youtube.com/watch?v=klTjiXjqHrQ

Drawing Dynamic Visualizations https://www.youtube.com/watch?v=ef2jpjTEB5U

http://worrydream.com/


I mean, the fact that Bret Victor has himself been working on the problem for ten years (without popular success, yet) should count as an attempt at reifying his vision. I would guess that lack of attempts isn't the problem.

Reactivity has certainly become more popular, and is a standard part of web development now. And ipywidgets are an example of creating manipulatable abstractions in data science.


> Bret Victor has himself been working on the problem for ten years

tbf, he measures the timescale of the Dynamicland project in decades, and marks 2022 as the first time "Dynamicland meets the world" [1]. Seems like he's matching the pace he set for himself, so you really can't say he hasn't had popular success if he hasn't even tried to popularize it yet.

Also tbf, any concept involving "a place where a bunch of people congregate and touch things" has suffered since the pandemic, so I think we can still maintain a "wait-and-see" stance instead of declaring the ideas dead.

[1] https://dynamicland.org


> I'm much more curious about a programming paradigm that no longer uses text to communicate with computers but instead just directly manipulating data

I don't think there's any way to get away from this abstraction. At its lowest level, everything is encoded in binary. All abstractions on top of binary are just interpretations of the underlying stream, text being a relatively simple encoding (the ASCII table or UTF-8's multi-byte structure). Structured data is similar, just multiple pieces packed into one contiguous space. You will always have to build on top of this fundamental layer; there is nothing simpler.
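
For example (a quick Python illustration), the "text" is only one of several readings of the same byte stream:

    data = "héllo".encode("utf-8")   # what actually exists: a stream of bytes
    print(list(data))                # [104, 195, 169, 108, 108, 111]
    print(data.decode("utf-8"))      # héllo  (one interpretation of those bytes)
    print(data.decode("latin-1"))    # hÃ©llo (a different interpretation of the same bytes)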

That being said, I quite like:

* Datasette - https://datasette.io/ - I have a feeling you could connect a lot of these instances and truly make something interesting there

* Light Table IDE - https://www.youtube.com/watch?app=desktop&v=H58-n7uldoU

* I have at least 2 more but I can't find them in my favorites

I'm trying to make my own as well. Hardest thing is giving myself enough time to do it, but I'm currently starting to structure my life around it.


This is the latest thing I know of from him. It was recorded at the 2022 Foresight Designing Molecular Machines Workshop, in a talk called "Nanoscale Instruments for Visualizing Small Proteins"; Bret Victor comes in at about 14:14 [1].

It shows just how far the Dynamicland concept can be pushed into a hyperspecific feature set customized to fit a single domain -- because they can deconstruct the user experience to that level of detail.

Extending a tangible UI out to the actual OS itself has been the thing at Dynamicland from the get-go, but here we finally see it as physical 3D objects participating with real-time digital feedback in the built environment!

[1] https://youtu.be/_gXiVOmaVSo


In my opinion the idea is more than direct data manipulation. It is about how we get feedback. In drawing, the medium you draw in is the same medium you read from. In programming, there is often a mismatch: coding in a text file, running somewhere else, e.g. a terminal, browser, or remote server. If you count the activities surrounding programming, like versioning, debugging, metering and profiling, even more systems are involved. And we are not even touching the myriad of SaaS offerings, each trying to carve out a little piece of the programming life cycle.

Back to your question: from my naive understanding, Smalltalk seems to be an all-in-one environment. The Glamorous Toolkit [1] seems to be that environment on steroids. I have no useful experience to share though.

https://gtoolkit.com/


I think that's something inherent to programming. It's not even the only field to have this "problem" - screenplays are written on paper but watched on a stage. Movies are that, but with yet another level of indirection.


Slightly off-topic but I've often daydreamed about a kind of Lisp-driven 3D Movie Maker (https://en.wikipedia.org/wiki/3D_Movie_Maker) where you could build movies from a REPL/SLIME the same way you do ordinary Lisp code -- evaluate a form, see the scene play out. With additions like modern text-to-speech and ML models generating meshes/textures, it could be a very pliable medium for a lone filmmaker to put together a movie all on their own, or even adopt a GitOps-y approach where multiple people (e.g. a script supervisor, cinematographer, light rigger) can collaborate through pull requests and code reviews.


I've genuinely had the same general idea (minus the ML part), and I've started working on it once but abandoned it. I might get back on that some day.

Another thing I considered was that, instead of the Lisp, you'd have a math-y language and compose everything together. This has the added benefit that it compiles easily to GPU code. So, for example, a chroma key would be a function that chooses one buffer over the other based on pixel color. You then compose that with other functions to create a frame. An APL-like language could be amazing here.
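
A toy version of that composition idea with NumPy (purely a sketch; the chroma-key tolerance, the brightness pass and the compose helper are all made up for illustration):

    # Frames as arrays, effects as composable functions over frames.
    import numpy as np

    def chroma_key(fg, bg, key=(0, 255, 0), tol=60):
        """Choose bg pixels wherever fg is close to the key colour."""
        mask = np.linalg.norm(fg.astype(int) - np.array(key), axis=-1) < tol
        return np.where(mask[..., None], bg, fg)

    def brightness(frame, factor):
        return np.clip(frame * factor, 0, 255).astype(np.uint8)

    def compose(*fns):
        def run(frame):
            for fn in fns:
                frame = fn(frame)
            return frame
        return run

    fg = np.full((4, 4, 3), (0, 255, 0), dtype=np.uint8)   # an all-green-screen frame
    bg = np.full((4, 4, 3), 200, dtype=np.uint8)
    frame_fn = compose(lambda f: chroma_key(f, bg), lambda f: brightness(f, 0.9))
    print(frame_fn(fg)[0, 0])   # background shows through, slightly darkened: [180 180 180]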

I feel like there's _a lot_ of potential here.


Have you seen his Dynamicland? https://dynamicland.org


One of the first things I do when working on a new stack is to tighten the feedback loop as much as possible.

Usually a combination of hot reloading, good intellisense / autocomplete / quick docs, debugging.

I did a talk and built the animations with Manim (mathematical animation library) and at the time you had to render the clip and play it each time you wanted to preview it, causing a significant delay (10-15s) between each change and what it looked like rendered.

It was unbearable (but finished the project). Afterwards, I put together an environment using p5.js that allowed instant feedback, even at a specific point in time. I also threw in an in-browser editor so I could keep working on an animation on my phone as I was doing a lot of walking around that time (usable, but barely).

This was the result of that project:

https://github.com/jasonjmcghee/viz-studio

https://viz.intelligence.rocks/


I was really interested in BV's work for a while, but some of the major roadblocks you quickly run into once you start thinking about how to implement his ideas are that:

1. Whenever you zoom out of the code-level, you lose granularity and thus flexibility and power.

2. In order to gain expressiveness, you can constrain the domain, but again you lose flexibility to implement what you want and how you want.

3. It's difficult to avoid losing the ability to express things in general ways whenever you switch to visual or physical representation of code.

4. A lot of the ideas you might have end up being more simply represented by code, and more easily manipulated by way of text and keyboard.

5. A lot of things end up just being superficial wrappers over code. Superficial in the sense that they only hide surface-level complexity (e.g., reducing the visual volume of large code blocks).

6. Catering interfaces to novices often hampers experts.

There seem to be a lot of trade-offs. I don't know if these are laws per se, but they seem difficult to break.

What interests me particularly are new ways to create general-purpose programs using methods that are more efficient and more intuitive, but it seems like a really difficult task bound by near-inescapable trade-offs.


I think https://observablehq.com notebooks are a step towards it. Once you get into it it has the fastest feedback loop I have ever experienced. The concept of reactively changing code with partial recomputation of the dependency graph is amazing.


Hi

I want a lot of things that Bret Victor wants from computers.

I journal my ideas on GitHub in the open, see my profile for links.

I want a GUI that is self referential that tells you how it is built and allows the backend to be visualised. This is similar to React Component browser extensions that let you see the live tree of elements.

Observability of software is very difficult. I want to see train-track animations of software running.


Trying to follow the right people on this topic:

https://twitter.com/i/lists/1617421345121353733

Many people are doing really great and innovative work in the space. They are just mostly researchers and hard to find.

Hope to find more from threads like this.

Edit: downvotes?


I don't know how close this is to his ultimate vision, but there is a lot of that in the Clojure and ClojureScript world. This (old) video of Figwheel live-coding Flappy Bird is definitely influenced by his work: https://rigsomelight.com/2014/05/01/interactive-programming-...


I think if we divorce the idea of direct data manipulation from exclusively non-textual representation, we are slowly making progress in this direction through traditional notebooks (e.g. Jupyter) and notebook successors like Clerk (see Moldable Live Programming With Clerk[1]).

These are not the sweeping, fundamental changes that Bret Victor envisioned, but we are collectively moving toward more interactive programming. Imagine modern web development without hot reloading.

Clojure is the language where I see this happening most, and which is seeing the most expansion toward "visualization and interactivity as part of the backend dev experience".

[1]: https://www.youtube.com/watch?v=3bs3QX92kYA


I think it's still pretty rough, but it seems to be actively worked on and close to what you're talking about. It doesn't completely abandon text, but it has a neat dual representation. I didn't see it mentioned yet so I'll drop it here - https://enso.org/


Does Dan Ingalls' Lively Web count?

https://lively-web.org


For me the most practical way to realize such interactive programming, in ways that solve real-world problems for me, with visual interfaces, is to use Jupyter notebooks.

You just have to bind sliders to variables, and tie your outputs to graphical plots and there you go.

The big challenge is to come up with a large library of pre-built canonical graphical representations for different programming abstractions and being able to wire them seamlessly together.
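
A minimal notebook cell along those lines (assuming ipywidgets and matplotlib are installed in the Jupyter environment; the sine wave is just a stand-in for whatever your program computes):

    # Run in a Jupyter cell: sliders bound to variables, output tied to a plot.
    import numpy as np
    import matplotlib.pyplot as plt
    from ipywidgets import interact

    x = np.linspace(0, 2 * np.pi, 400)

    @interact(freq=(0.5, 10.0, 0.5), amp=(0.1, 2.0, 0.1))
    def plot_wave(freq=1.0, amp=1.0):
        plt.plot(x, amp * np.sin(freq * x))
        plt.ylim(-2, 2)
        plt.show()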


Observable HQ takes a step beyond that, being a reactive notebook.


Jupyter notebooks seem very close to what I would imagine a practical application of Victor's ideas to be. In fact, it's very close to a couple of demos of his I remember.


I think it's a great presentation of the general idea of visualization in computing, but I've not seen many practical applications for it. The biggest thing I wish we as programmers could get behind is the idea of actually having computer programs negotiate how to communicate with each other in a more hands-off approach, where APIs like he mentions in the Future of Programming presentation would be possible. Basically, the programs would structure their negotiation in a way that allows each one to 'know' what the other has in terms of functionality; the calling program would then figure out which facilities best fit its request, based on some complex set of rules, and fulfill it (e.g. converting a raw image file into a defined format like PNG or JPEG).

Essentially, I want programs to be less dependent on low-level constructs, much like we no longer depend on pointers or registers (assembly) to do our work these days. The idea that we can't have larger abstractions handled by compilers or runtimes seems silly to me.


Those ChatGPT posts where someone gets it to pretend it's a bash shell come to mind, but you are right: this is an incredibly difficult problem domain and you start hitting roadblocks very quickly as soon as you move from cool demo to tool intended for real people to use.

I was one of the backers of https://www.kickstarter.com/projects/ibdknox/light-table/des... and have wanted to see progress in this domain for a long time.

One thing I would add to the conversation is that one of the most potent ways to move this discussion forward is to create technical demonstrations of how this sort of interface could work, presented as video. It's completely unimportant if the functionality is actually working, so long as you disclose this up front.

The goal is to give people with less imagination and hopefully more technical acumen an opportunity to roll up their sleeves and maybe work on making it real.


Friends of mine are developing Enso (https://enso.org/), an interactive programming language with dual visual and textual representations.

Even well before Bret Victor's time, there were tools for visual programming. I have been using LabView to maintain data processing in an optical laboratory.


Data Rabbit was inspired by Bret Victor's Inventing on Principle talk. One of the strengths of Clojure is the interactive coding experience you get using the REPL. Data Rabbit adds visual controls and data visualisations.

https://www.datarabbit.com/


A lot of people did. To what others have said, I'm going to add this: https://www.youtube.com/watch?v=5V1ynVyud4M, which is an interesting talk about how these approaches fail in practice.


Bret Victor's vision was a major inspiration for my PhD research and thesis, mainly "Inventing on Principle" and "Up and Down the Ladder of Abstraction". The final concept and implementation that emerged is a sort of "Interactive What-If Machine" for 3D simulations -- however, I'm afraid it's just another small step in the hopefully right direction:

"Interactive Analysis and Optimization of Digital Twins": https://doi.org/10.18154/RWTH-2022-07066

Direct link to the PDF: https://publications.rwth-aachen.de/record/849852/files/8498... (94 MB). Unfortunately it's in German, but there are many, many illustrations and pictures, as well as some English quotes.

I am still amazed at how some of Victor's very basic principles (e.g. "Show the data, show comparisons!", immediate feedback loops, ...) are always so essential and fruitful in generating amazing solutions to certain problems...

Aside from my work, I think Processing's "Tweak Mode" is another very good "real-life" example you might want to check out: See e.g. http://galsasson.com/tweakmode/


Webflow is probably the best-known startup directly inspired by Bret Victor. Vlad has mentioned it multiple times in every origin-story interview.

What's funny is that Bret's message wasn't actually "you should go make direct-manipulation UIs". It was "you should have design principles", and direct manipulation happened to be his baby (to the point where he went off to do Dynamicland). I have heard he feels most people misunderstand his talk, taking away only the superficial wow moments.


ML systems approximate solutions to Bret's vision. They don't solve the particular things he demoed; rather, they go further: instead of enmeshing users in a specific interface to implement a specification, they jump from an incomplete specification to a probable solution.

Your program doesn't need time travelling debugging if it already works.


I'm doubtful it will ever work (but I'm glad someone like Bret is working on it). If I think about the technical things I've learned, the hard part is usually learning how to think about them, not telling a computer what to do. Even in a low-level language like C++ it is generally not too hard to express what you want once you know what that is.

Now, it might appear that the tools Bret is working on help you to think about your problem better, and I think that's true at the margin. But they don't seem to help that much since mostly people are lazy and don't want to think hard (myself included a lot of the time). So these tools slightly lower the activation energy and probably somewhat increase the number of people who are able to learn certain concepts, but they don't lower it that much and a motivated person can generally find a good explanation of anything that's not at the research frontier.


This VS Code extension might be relevant to what you're asking: https://dev.to/ko1/rstfilter-vscode-extension-for-your-new-r...

It prints the results of executing your code line by line, next to your source code.


I think this is the future of programming.

When we ask ourselves: "how should we structure our code?", the primary goal should be: so that it's easily visualizable and debuggable. Our guiding principles at the moment are all over the place. For a long time it was: make code testable. Then we had things like: eliminate side-effects, immutability, one-way data flow, static type-checkable, etc.

The problem is that by ignoring visualizability, when you do come to visualize it (which everyone inevitably needs to do when reading code and building a mental model), it's this huge tangled mess. People forget that the purpose of structuring code is to make it easy for HUMANS to understand. The computer only understands machine code at the end of the day, so anything beyond that is supposed to be for our benefit.


I used to read Bret Victor's magazine articles on CPUs in the early 1990s. Very inspiring.


Unison has potential to underpin a paradigm like that: https://www.unison-lang.org/learn/the-big-idea/


HyperCard and Macromedia Director remain the gold standard for this sort of visual-interactive design. It was all so early and new and ubiquitous that we never realized how special it was.

https://m.youtube.com/watch?v=TqISbaJ7qug

Iterating on this in a modern way remains TODO. As John Henry and his counterpart note in that video, the beauty of those tools was that people who didn’t care about code were able to create interactive experiences.

The whole “everyone learn to code or you’ll be poor” thing of the past 10 years has been a huge and unnecessary distraction.


Check out what the Clojure and Elm languages came up with nearly a decade ago, inspired by his ideas. Time travel debugging, declarative paradigm, many of his ideas have been successfully realized a long time ago.


The most commercial thing I can think of that feels like a step in this direction is Swift Playgrounds, the iPad app that's an intro to Swift. A lot of us here would look right past it and jump to learning Swift from somewhere else, but that app feels like it's got a bit of his influence in it, in the way you can toy around with things while writing code.

Bret's long-term goal was to reform society, and his stretch goal to fall in love. I'm pleased to say I've achieved one of those.


Can I get back to you in a couple years?


I created Call Stacking - you can easily see which methods call which, in a nice nested visual timeline.

The analysis is all done passively, as the methods are being called; no breakpoints needed. E.g. an engineer who is onboarding could easily see the most important methods for a given API endpoint.

It's not directly manipulable, but it does give the feeling of "ohhhh, THIS is what my program is doing."

I think Bret would be proud.

https://callstacking.com/
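For the curious, the general idea of a passively recorded, nested call timeline (not Call Stacking's actual implementation; everything below is a made-up illustration) can be sketched in a few lines of Python with a trace hook:

    # Rough sketch: record every call/return with indentation, no breakpoints.
    import sys

    depth = 0

    def tracer(frame, event, arg):
        global depth
        if event == "call":
            print("  " * depth + f"-> {frame.f_code.co_name}()")
            depth += 1
        elif event == "return":
            depth -= 1
            print("  " * depth + f"<- {frame.f_code.co_name} returned {arg!r}")
        return tracer

    # Hypothetical request-handling code to trace.
    def handle_request(user_id):
        user = load_user(user_id)
        return render(user)

    def load_user(user_id):
        return {"id": user_id, "name": "Ada"}

    def render(user):
        return f"<h1>{user['name']}</h1>"

    sys.settrace(tracer)    # every Python call/return is now logged
    handle_request(42)
    sys.settrace(None)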


To me, the most interesting part of Bret Victor's ideas lies in the re-embodiment of disembodied interactions. The computers choreograph us anyhow, but often in a very poor and limited way: only our fingertips and our eyes move a bit. Why not make the choreography of interaction richer? Why not create a computer-aided full-body choreography of interaction?

In a way, the interactions can be broken up in subject, verb and object. I edit text. I crop a photograph. The subject is the user, and all the representations that extend the user. The mouse pointer in that sense is part of the user. The verb is the action. To select text on screen, I put pressure on a touchpad, move my fingers and release. This is a learned interaction, there are no inherent affordances to a medium (only, at best, inherent affordances to a tool that extends you). The object is a representation. With computer interfaces, you hardly ever interact with something in an immediate way. I want a comment to appear on this site but instead I am writing this text in a white box and not where the comment would appear. All the computer interactions are mediated by these in-between steps. (An example for unmediated interaction would be cooking. What you chop is what you get.)

To get a rich feedback loop in such a mediated environment you need to try to make it as unmediated as possible. To make the subject less mediated, the whole body of the user and the quality of the movements could be integrated into the interactions. Here I have hopes for AI-supported movement recognition. In addition, the representations of the user (e.g. the mouse pointer in existing systems) could become less binary in their state (I don't have a good example for it, but a hammer can be used with a whole range of intensity, while a mouse pointer either performs a click or not).

To make the object less mediated, its representation should ideally be as transparent as possible. AR in this regard is much more promising than VR, since in VR the representation takes place within another representation.

To make the action less mediated, the action should be able to be embodied by the (technologically extended) user, as well as being inherent to the medium which represents the subject. Here we are building a bridge between the human body and a way to represent things that physically do not exist. It's never going to be ideal, but it could be better than what we have now.

So for such a task you'd need to be a UX person, a choreographer, a programmer, an AR/AI person, and ideally have some insights into media theory. It's just not an easy task.


We already did, it was called Lisp Machines and Smalltalk, and we are yet to fully replicate them.


Pretend Bret Victor's ideal IDE had been created. How much would you pay for that IDE? How much do you pay for JetBrains? Or Github Copilot? It's not that it's impossible, it's that there doesn't seem to be any money in it.


> it's that there doesn't seem to be any money in it

There never seems to be money in anything ahead of time, and everything seems "obviously needed" after it achieves the famous "market fit". Yet somehow radical stuff gets invented and the world changes.

I was not very familiar with his ideas, so this is a great thread to learn something new, but it's fair to say that when it comes to programming, our interface with computers has stagnated for a very long time.

Software development might be too anxious and anal-retentive a pursuit to be the testbed for new ideas. Activities like livecoding and computer-assisted performance arts might be where something practical develops.


I think things like Unity are getting close. It's not the same, but it's almost there.


ChatGPT will realize Bret's dream beyond his wildest imagination.


This is the path, I agree, using generative AI. I'm working on something like that.


If you want to see how far this idea can go in an artistic setting, take a look at Golan Levin's work.

As a side effect, you'll find out about people working on similar installations.



Just to add to this comment, one of the grad students who worked on sketch-n-sketch published a successor program for more general programs (not just drawings) called maniposynth: http://maniposynth.org/


Many people have tried.

It is very exciting to post the vision. It gets many views and spreads far and wide.

It is much less exciting to hear about the issues the vision encountered. And visions do encounter issues.

Humans can only hold so many ideas in their head at the same time, commonly expressed as somewhere around 7 or so. Our programming systems are intrinsically and deeply bent around this, trying to limit the necessary context for any given bit of code to fit within this constraint. We don't even see this because it is the water we swim in.

So when we imagine coding, we can hardly help but imagine manipulating something like 7 things. And 7 things fits on the screen great, and 7 things fits in our minds great, and it makes a great demo. What is much harder is realizing just how often we work with things that are much larger than that, and intrinsically so.

A good solid example that is larger than that, but still in the realm of things we can understand, is an HTTP request. A few dozen elements, each of which may have a few "things" in it, amounting to one or two hundred things. You've probably seen visual representations of such, with the header names on the left and the values on the right. Already any attempt at making this visual and live is straining, but it can be done.

And then you have a database table with, let's say, 150 columns and several million rows, and the visual metaphor is dead.

We have too many things like the latter. We encounter them all the time. The reason for this is not sloppiness in programming or a failure of vision, but the fact that our 7 +/- 2 values we can hold in our mind is really really small and simply inadequate to address the universe. We encounter things all the time that are very challenging to fit into that box. Any programming metaphor that requires that everything fit into that sized box is a non-starter. A total non-starter.

This is ultimately why "visual programming" in general has failed and will always fail.

If we could as easily hold, say, 50 things in our mind, we would have so many more options. We burn so many of our 7 +/- 2 values just holding references to the place we need to go to go get more values if we need them, e.g., dedicating a slot just to the fact we have a database connection. Further slots needed to handle what we're querying from that database, etc. If we had more registers we could spend a lot less time just managing the registers, even if we would still ultimately hit limits. But this stuff slams into a complexity wall with us humans so, soooooo quickly after the first pretty demos.

This is why you see a steady stream of such demos, which look awesome, and then they go nowhere, because you can't hook it up to a web server, or a non-trivial database, or just about any code you can imagine, really. Games are already an "easy mode" for this demo, most things are not games or graphical displays, and they fall over quickly too because again as soon as it's non-trivial in the game world it doesn't work anymore.

And this makes me sad. But it also makes me sad to see people jousting with the same windmill over and over. So my usual followup here is, if you are interested in trying yourself, I strongly suggest looking at the historical efforts, and at least coming up with a theory as to why whatever it is you are thinking about will do better and will solve the problems I lay out here. Maybe there is a solution. I can't guarantee there isn't. And precisely because I know it is hard, if you succeed, I will be that much more impressed. But I do want you to know it is a hard problem. The common characterization of it as being easy and obvious and my god how could everyone else be so stupid as to miss this is frankly offensive. We are not collectively stupid. We are not collectively idiots. We do what we do for good reasons. If you do not start from that understanding, if you don't understand the good reasons why the current textual paradigm is dominant, you're doomed from the beginning.


I think Observable-style notebooks like Pluto.jl are something like his vision, though not exactly. They're just more general and useful?


Never heard of him before, and I can't tell if he's a crackpot or going to invent the next big thing or both.

It seems like his whole thing is allowing people to access new thoughts that they couldn't before.

I'm not really familiar with that kind of almost psychonautics-inspired approach to programming, so I don't really have a good understanding of his vision. But it seems like a few tools have parts of his vision.

FORTH is famous for small programs that almost redefine the language itself and can say a lot in less than a page. Seems very relevant, but I don't know anything about it besides the basic outline and history.

LISP seems to have some of the same characteristics. Maybe a live code environment for FORTH would be of interest?

I'd argue Excel and co are probably the biggest success in live general purpose data manipulation, and probably the only ones out there I actually have any interest in.

Spreadsheets are also an amazing example of something uniquely digital, that's not quite just a paper emulator, and isn't just some crazy impractical experiment.

Spreadsheets are pretty amazing when you think about it.

Really the big thing that makes them special is the idea that you can put an =expression wherever you could put a value, embedded in a normal, consumer app that otherwise works like other software.
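A toy illustration of just that one idea, a formula allowed anywhere a value goes (my own sketch, not how any real spreadsheet engine works):

    # A cell holds either a literal value or a formula string over other cells.
    cells = {
        "A1": 10,
        "A2": 32,
        "A3": "=A1 + A2",    # formula cell
        "B1": "=A3 * 2",
    }

    def value(ref):
        v = cells[ref]
        if isinstance(v, str) and v.startswith("="):
            expr = v[1:]
            # crude: only resolve cells whose names appear in the formula,
            # with no cycle detection
            env = {name: value(name) for name in cells if name in expr}
            return eval(expr, {}, env)
        return v

    print(value("B1"))   # 84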

The other thing that makes it special is that it's highly constrained. You get a 2D grid of cells. Creativity loves constraint and they seem to have the perfect amount.

It seems average users are perfectly capable of cramming their use case into a set of high level primitives. That's different from almost every other "Code for everyone" system that just tries to make low level primitives accessible.

It's easier to use the wrong framework to do something than build the right thing with raw materials.

From that, people who are not programmers at all run half the financial system. And it works. The world has not exploded.

It might almost never be the ideal choice, but then again, I don't think any DIY programming is likely the ideal choice when off-the-shelf special-purpose apps exist; I'd never suggest anyone but a megacorp build their own booking and billing app or something.

I can't think of a single other tool that lets people who would never program, and hate programming... write programs. All the other tools are just things an average person could learn. But they won't, because they'll wonder why. And they'll be right, because they probably don't want to spend enough time with it to be able to do anything they couldn't do in a Google Sheet.

Not only that, it's a live environment that's truly practical, it's not just a tool for thinking, it's a tool for a subset of the same stuff Python might do.

It's something I use on occasion, and frequently use =expressions in other contexts.

It might not be truly general purpose, but it sure is impressive.


One of my crackpot ideas is a novel type of hypermedia, a bit like the semantic web but without the open world assumption. The mental model and user interface can be something like Mathematica. It should be a tool for advanced knowledge workers and not software engineers.


Another vote for LabView, it is inherently visual and data oriented.


sounds like a combination of reactivity / instrumentation / declarative programming

reminds me of Bush's memex "pathways"


Some related projects:

- Jupyter notebooks

- Dev Cards

- Storybook

- Dark Lang

- Excel!


- Blender

- AfterEffects


I'm not sure what "directly manipulating data" means. And - with respect to Bret Victor - I suspect no one does.

Before you can manipulate anything you have to define a set of affordances. If you have no affordances you have... nothing.

A lot of programming is really about manually creating affordances that can be applied to specific domains. The traditional medium for this is text, with dataflow diagrams a distant second.

People often forget that this is still symbolic programming. You could replace all the keywords in a language with emojis, different photos of Seattle, or hex colour codes, but we use text because it's mnemonic in a way that more abstract representations aren't.

Dataflow diagrams are good as far as they go, but it doesn't take much for a visual representation to become too complex to understand. With text you can at least take it in small chunks, and abstraction/encapsulation make it relatively easy to move between different chunk levels.

At the meta level you can imagine a magical affordance factory that somehow pre-digests data, intuits a comprehensible set of affordances, and then presents them to you. But how would that work without a model of what kinds of affordances humans find comprehensible and useful?

ML etc are the opposite of this. They pre-digest data and provide very crude access through text prompts, but they're like wearing power-gloves that can pick up cars but not small objects. You can't tell ML that a specific detail is wrong without retraining it. The affordances for that just aren't there.

And of course many domains require specialised expert skills. So a workable solution would require a form of AGI clever enough to understand the specific skills of an individual user, so that affordances could be tailored to their level of expertise.

I can't see generalised intuitive domain manipulation being possible until those problems are solved.


Just in case you, like me, wonder what "affordance" means: https://en.wikipedia.org/wiki/Affordance

It is a word that was coined in 1966.


If anyone is looking for book recommendations that dive deeper into affordances and related ideas, "The Design of Everyday Things" is a great book that is helpful for everyone building things, hardware or software.

That's where I came across the concept first I think.


> I'm not sure what "directly manipulating data" means. And - with respect to Bret Victor - I suspect no one does.

Well, the best approximation we have is spreadsheets, actually. That's why they're super popular.

Put the data in front, marginalize the code. Code is either "hidden" in cells where you get a live[1] preview of the data, or in modules others have built but that you can modify[1] if you want to.

Now, how do we take this approach to the next level, that's a problem on the scale of figuring out human genetic engineering or fixing climate change :-)

[1] Most of the time.


First let’s figure out how to scale spreadsheets to the complexity of running a medium size business.


The people who think everything would be great if only we had visual dataflow programming remind me of people who get into crypto. They have mystic beliefs about computers rather than being able to figure out how something will actually work.


From Bret Victor:

“In his influential essay No Silver Bullet, Fred Brooks makes the case that software is inherently "invisible and unvisualizable", and points out the universal failure of so-called "visual programming" environments. I don't fault Fred Brooks for his mistake -- the visual programming that he's thinking of indeed has little to offer. But that's because it visualizes the wrong thing.

Traditional visual environments visualize the code. They visualize static structure. But that's not what we need to understand. We need to understand what the code is doing.

Visualize data, not code. Dynamic behavior, not static structure.”

http://worrydream.com/#!/LearnableProgramming


When it comes to audio, most of the above is wrong (or at best, off-base).

Max, PureData and now a new generation of software modular synthesis applications visualize code-that-spews-data-at-other-code. They are used to build highly modifiable (though not dynamic) structures that behave in ways that are often hard to hold in a human mind. They are widely used, much loved, and insanely powerful. They use visualization to add visual memory to the cognitive toolset in ways that textual code does not.

Of course, such tools are unlikely to ever be used to build such tools. You don't implement/bootstrap a visual (audio) data flow language using a visual data flow language.


> You don't implement/bootstrap a visual (audio) data flow language using a visual data flow language.

I do hope someone gets nerd sniped by this challenge. Not me, but someone.


I hear the criticism, but sometimes we just can't know if an idea is useful without trying it out. And trying it out can be wildly expensive, in terms of time and money. So it's kinda great when someone else does that work on behalf of everyone else.

It's taken 60 years of improvements to finally build ChatGPT and Stable Diffusion. If we had collectively given up on AI years ago, we might never have known that.

We still don't know if VR will ever get good enough for mass market appeal. Facebook (Meta) is investing big. I hope their investment pays off, but we don't know yet! It might not work. But it might! I appreciate the risk their investors are taking.

And I think the same is true for visual dataflow programming. We have no idea until someone builds it out. And if their implementation is bad, we might still not know. Any implementation might just be a bad design.

Anyway, Bret Victor talks about ideas that go way beyond visual dataflow programming. There's a lot of interesting stuff in there, even if a lot of it might never come to pass.


Dataflow programming works well for many a visual artist (shader graphs), musician (Max/Pure Data) and CAD engineer (Rhino's Grasshopper).

Grasshopper is meant to parametrically generate 3D objects but it was so good at splitting and recombining tables of data that were being continuously updated that I started using it for general purpose text manipulation.

The ENIAC was a dataflow machine before it was a stored-program machine, btw.


Most data flow programming environments cannot represent lambdas, and this is why the graphs always end up turning into spaghetti: you don't actually have tools to reduce repetition in the graph's structure, using the graph itself.

They are successful in artist and music contexts because the graphs tend to be simple pipelines at heart. Having dealt with sufficiently complex grasshopper graphs, I disagree that it's good at arbitrary list processing, certainly compared to ordinary list operators and iterators in code.

My conclusion is that a dataflow environment that does allow for lambdas and proper n-way forking would necessarily have to be an effect system in the FP sense. It's a data flow graph that computes its own continuation and which has no fixed or preset topology. It can rewire itself based on the data flowing through it.


Do you know why debugging is so rarely considered when designing dataflow programming environments? In my experience it's not just the missing lambdas; the very practical matter of introspection (effectively reverse-engineering someone else's work) is so much harder in dataflow environments as they are currently designed.


It does, but my favorite dataflow-based video editor (AviSynth) is a text-based programming language with an unusual evaluation model; the programs themselves aren't visual.

I guess the most popular dataflow language in the world would be Verilog? But that's not visual. I don't know much about how it works professionally though.


I've never used Verilog but was curious about it and now that you mention it's a non visual dataflow language, a lightbulb went off. Thanks a million. That implies that somebody somewhere makes a dataflow visualizer for Verilog. I'll have to look for one.


Never heard of AVISynth, looks really useful thanks, I edit videos sparingly and painfully with ffmpeg

It's a good distinction to draw, that dataflow is separate from the editor interface. I suppose Excel is a visual dataflow editor as well.


Could you please elaborate how to use Grasshopper for text manipulation?


Computer programs are Turing-complete. How would you ever "modify dimensions [...] to specify what you want the program to do"? The space of possible programs grows exponentially as you linearly increase the length of the program.

It's trivial to visually explore the space of all possible outputs for a program that is one instruction long. But how about a million? The output is just going to be useless noise 99.99999% of the time.

What even is a dimension of a computer program? How many divisions it performs? How many gotos there are?


Computer programs needn't be Turing-complete. There are plenty of useful languages that are not, the Relational Algebra being example one; it is equivalent to First Order Logic and is, in a pretty strong sense, about as good as you can do without Turing completeness.

The entire industry’s understanding of the power of the relational model has been destroyed by SQL, which is the largest foot gun of so many in our industry.

In fact, much more of most business applications (hell, just about all applications) can and would profitably be expressible in First Order Logic. The resulting programs would be simpler, easier to modify, easier to get correct, and other things besides.

Much of what Bret Victor has shown could be expressed in First Order Logic.

Would you still want Turing Complete traditional programming languages? Of course. But both programmers and non-programmers would benefit from being able to express more of programs using the sorts of declarative idioms that Bret Victor has shown and that the relational model provides.
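To make the "useful without Turing completeness" point concrete, here is a toy sketch of a few relational-algebra operators over plain Python lists of dicts (the data and helper names are made up for illustration; the predicates are Python lambdas only for brevity, a real relational language would restrict those too):

    # Toy relational algebra: relations as lists of dicts.
    employees = [
        {"name": "Ada",   "dept": "eng",   "salary": 120},
        {"name": "Grace", "dept": "eng",   "salary": 130},
        {"name": "Joan",  "dept": "sales", "salary": 90},
    ]
    departments = [
        {"dept": "eng",   "floor": 3},
        {"dept": "sales", "floor": 1},
    ]

    def select(rel, pred):                  # sigma: keep matching rows
        return [row for row in rel if pred(row)]

    def project(rel, *cols):                # pi: keep only some columns
        return [{c: row[c] for c in cols} for row in rel]

    def join(r, s):                         # natural join on shared columns
        return [{**a, **b} for a in r for b in s
                if all(a[k] == b[k] for k in a.keys() & b.keys())]

    # "Names and floors of everyone earning over 100"
    result = project(
        join(select(employees, lambda row: row["salary"] > 100), departments),
        "name", "floor")
    print(result)  # [{'name': 'Ada', 'floor': 3}, {'name': 'Grace', 'floor': 3}]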


> The entire industry’s understanding of the power of the relational model has been destroyed by SQL, which is the largest foot gun of so many in our industry.

When people say stuff like this, I think of traditional post&beam construction guys complaining about the way that stud framing has destroyed the industry's understanding of the power of "real construction". Whether or not that's true, stud framing has been used to build homes for on the order of a million times more people than post&beam. Perhaps it is true that stud framing obscures a deeper understanding of the nature of wood, joinery, loads and so forth, but sometimes the point is just to build a shit ton of houses, cheaply, efficiently, effectively, knowing that changes in requirements will make most houses largely redundant before (most) stud framed houses are gone.

SQL: shallow, weak, wrong, incomplete, misleading, and how the world gets built.


Yes, I think we should build more DSL toolkits. Turing-complete languages are too expressive.

If we had a meta-language like Lisp or ML that had very good capabilities to quickly develop DSLs with very clean restricted semantics for every particular component of a big system, and tooling to automate proofs for said restricted DSLs, software would be more robust and easy to develop.

Alan Kay was pursuing parts of this vision at Viewpoints Research Institute. Racket is also very focused on DSLs. Any others?


90%[1] of popular DSLs turn into Turing-complete languages over time. And they usually end up as very badly designed Turing-complete languages.

[1] Or some other made up, but still very high number.


Are there languages today that embody first-order logic (perhaps Prolog)?


Datalog: https://en.m.wikipedia.org/wiki/Datalog

Prolog is actually Turing complete.


Indeed, sir.


Who is Bret Victor?



He is a highly influential computer scientist, with interesting talks and ideas on unique topics.


Google and Wikipedia are waiting for you: https://en.wikipedia.org/wiki/Bret_Victor



