Another tangent: I went on a trip to LA. The drive to the airport was noisy, the airport was noisy, the airplane was noisy, the city was noisy, the air conditioner in the room was noisy. This went on for the whole week. I didn't really notice.
But when I got dropped off at my car after the flight home, late at night in a small town, I noticed it. The silence. I stood there beside my car, keys in hand, for a good 5 minutes, just taking it in.
I had no idea how noisy it had been for the past week until then.
I love this. Probably because I've been thinking of similar ideas for what seems like the last decade; I just have coder's block when it comes to actually implementing them. Seeing it (well, something like it) actually implemented makes me happy.
My use-case would be a "real-time" debug tool of sorts, one that would allow viewing trees like they have here, but also modifying them and the values at the nodes (think the Hierarchy + Inspector of Unity3D, but as a remote tool).
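To make that concrete, here's a minimal sketch in Python of the kind of remote inspector I'm imagining (the message format, port, and helper names are all made up for illustration, not any existing tool's API): the process exposes its live tree over a socket, and a remote client can read it back or poke new values into nodes.

```python
import json
import socketserver

# the live tree being debugged; a remote client can read or modify it
tree = {"name": "root", "hp": 100,
        "children": [{"name": "enemy", "hp": 25, "children": []}]}

def find(node, path):
    """Walk a list of child indices like [0, 1] down from the root."""
    for i in path:
        node = node["children"][i]
    return node

class Inspector(socketserver.StreamRequestHandler):
    def handle(self):
        # one JSON request per line: {"op": "get"} or
        # {"op": "set", "path": [0], "key": "hp", "value": 10}
        for line in self.rfile:
            req = json.loads(line)
            if req["op"] == "set":
                find(tree, req["path"])[req["key"]] = req["value"]
            self.wfile.write((json.dumps(tree) + "\n").encode())

socketserver.TCPServer(("localhost", 5555), Inspector).serve_forever()
```

A real version would run the server on a background thread inside the game and give the client a proper hierarchy + inspector UI, but the wire protocol wouldn't need to be much fancier than this.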
Anyway this post is food for thought and I'm going to read more about how it was done. I like the API they've created for sending views, really interesting.
I think this is possibly related to something I was thinking about when dealing with graph traversal in Python: wanting a debug tool that let me snapshot each step and then diff the watched variables. Being able to move back and forth to see the state change, and the connections, so I could determine what was influencing the state.
This was because, no matter how hard I tried, I kept running into watched variables that appeared to be the same reference but had different values (so I knew I was referring to something wrong, but could not for the life of me figure out what). I kept thinking: if only I had a way to step/trace and do this visually, like an AST that highlights changed values, so you could trace back to the root where the value is actually being modified.
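Here's roughly what I was wishing for, as a minimal Python sketch (the `Recorder` class is hypothetical, my own invention for illustration): deep-copy the watched variables at every step of the traversal, then diff any two steps.

```python
import copy

class Recorder:
    """Record deep-copied snapshots of watched variables at each step,
    then diff any two steps to see exactly what changed."""

    def __init__(self):
        self.steps = []  # one {name: copied value} dict per step

    def snapshot(self, **watched):
        self.steps.append({k: copy.deepcopy(v) for k, v in watched.items()})

    def diff(self, i, j):
        """Return {name: (value_at_i, value_at_j)} for variables that changed."""
        before, after = self.steps[i], self.steps[j]
        return {k: (before.get(k), after.get(k))
                for k in after if before.get(k) != after.get(k)}

# usage: inside a graph traversal loop
graph = {0: [1, 2], 1: [2], 2: []}
rec = Recorder()
visited, frontier = set(), [0]
while frontier:
    node = frontier.pop()
    visited.add(node)
    frontier.extend(n for n in graph[node] if n not in visited)
    rec.snapshot(node=node, visited=visited, frontier=frontier)

print(rec.diff(0, 1))  # step "back and forth": see what changed between steps
```

The deep copies are the point: snapshots can't alias the live objects, so the "same reference, different values" confusion shows up as an honest diff instead of a mystery.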
I found out there's a thing called "deterministic debugging" (the biggest known example, AFAICT, is rr: https://rr-project.org/).
Apparently Microsoft has a time-travel debugger too... I pictured the AST being populated and repopulated via the steps, and the ability to diff changes over time.
Here's a wiki on various systems, though most of these seem to be typical text-based "trace" options.
I found a professor at one point doing some research on this, and he seemed to have given up without more funding. I thought I had it bookmarked, but I'm unable to find it right now. If I can remember (ha), I'll see if I can find it tomorrow. He had the most interesting approach, and it seemed closest to what I think you (and possibly I) are thinking, with propagation through the tree. (I don't think I was picturing editing the graph directly per se, but that would be a cool idea.)
Have you looked into Light Table or Bret Victor's work at all?
http://lighttable.com/ (somewhat tangentially related, I think?)
Sorry this is so scattered; I need to hit the hay LOL.
Yes, I'm a big fan of Bret Victor's work. I share his Inventing on Principle talk on a near yearly basis.
I saw Light Table years ago and thought it looked really neat, but I haven't looked into it much since it wasn't directly applicable to my environments. It's probably worth a look for ideas, though.
Ah thanks, I was trying to remember Light Table; there must be Emacs plugins for CL that do something like that, I guess? Some sort of addition to the REPL functionality, making it more automated?
Oh, sorry, I don't know if there's more functionality. I didn't mean to imply something like Light Table exists for it, but for some reason I thought it allowed you to step back and forward in the execution to watch things (still text-based, etc.). I have SBCL on my system but have never really played with it beyond "hello world", so I've never actually used the full power of the REPL.
AFAIK it doesn't have any sort of extension like this, though. But maybe someone with more CL experience can advise! Hopefully someone has a better idea for you :)
I see the stuff in the parent post, but I'm not sure exactly what it can do vs. what I envision or what you may be picturing...
Ishmael or My Ishmael touches on this subject. Thank you for reminding me.
I forget exactly, but the basic idea is that primitive people didn't have all these laws about what to do. They expected you to behave, and if you did not, the tribe did not necessarily punish you; they taught you and made it right somehow (justice).
And my description does not do this idea justice, so I need to go back and find the reference in the books.
Do you remember the guy who took hostages at the Discovery Channel offices in Washington DC, and tried to force them to promote his Ishmael-based manifesto on television? He was part of a MySpace group that I frequented where we discussed Quinn’s work. I remember having pretty strong disagreements with him in the forums, before he took up arms anyway…
I don't remember that happening; I think I wasn't watching the news much during that period in my life. But I did hear about it in the past year, after reading Quinn's books and following some mental threads afterwards. Wild that you had conversations with him!
It's unfortunate that people take ideas so far... we are so sure we are right.
That works at the scale of a tribe. We do the same thing with kids in a family: punishment only happens (or should) after multiple attempts at "teaching" have failed and it's clear that what's happening is disobedience.
I've used this technique quite a bit in the past, though often not enough. I work in games, so a lot of problems are actually visual 2D or 3D problems. Things like: finding the closest valid object I can target at my given heading. Simple enough, but as things get complicated, it's often the simple things that trip you up, and visualizations can often make the problem obvious.
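For example, the targeting problem above boils down to something like this 2D Python sketch (the function name, the cone angle, and the coordinate conventions are my own assumptions for illustration):

```python
import math

def closest_target(pos, heading_deg, targets, max_angle_deg=45.0):
    """Return the nearest target within max_angle_deg of the heading,
    or None. pos and targets are (x, y) tuples; angles are in degrees."""
    best, best_dist = None, float("inf")
    for t in targets:
        dx, dy = t[0] - pos[0], t[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        # signed angular difference between the heading and the target,
        # wrapped into [-180, 180)
        angle_to = math.degrees(math.atan2(dy, dx))
        delta = (angle_to - heading_deg + 180) % 360 - 180
        if abs(delta) <= max_angle_deg and dist < best_dist:
            best, best_dist = t, dist
    return best

# heading 90 = facing +y; (3, 1) is nearer but outside the 45-degree cone
print(closest_target((0, 0), 90.0, [(0, 5), (3, 1), (-1, 10)]))  # (0, 5)
```

Every term here, the wrap-around in `delta` especially, is exactly the kind of thing a quick scatter plot of candidates, cone, and chosen target makes obvious when it's wrong.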
That p5 editor looks nice for this sort of thing. It's important to be able to get the visualizations in quickly, otherwise you risk wasting time.
A tool I've dreamed up, but never seem to have time to implement, is one that sends debug information to some form of database and then lets you query and render that data as you like (from another client): bar/line charts of data, spatial visuals, more abstract graphs like in your examples, timeline scrubbers, etc. Maybe some day I'll get around to making it.
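Something like this minimal Python sketch is what I have in mind, using SQLite as the "some form of database" (the schema and the `debug_log` helper are illustrative assumptions, not an existing tool):

```python
import json
import sqlite3
import time

# one shared file, so a separate viewer client can query it independently
db = sqlite3.connect("debug_events.db")
db.execute("""CREATE TABLE IF NOT EXISTS events (
    ts REAL, frame INTEGER, tag TEXT, data TEXT)""")

def debug_log(frame, tag, **data):
    """Fire-and-forget debug event; data is an arbitrary JSON blob."""
    db.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
               (time.time(), frame, tag, json.dumps(data)))
    db.commit()

# in the game loop:
debug_log(frame=120, tag="targeting", heading=90.0, chosen=[0, 5])

# in the viewer: pull a time series for, say, a line chart or scrubber
rows = db.execute(
    "SELECT frame, data FROM events WHERE tag = ? ORDER BY frame",
    ("targeting",)).fetchall()
series = [(frame, json.loads(data)["heading"]) for frame, data in rows]
```

The nice part is that the game only ever calls `debug_log`; all the chart types, spatial views, and scrubbers live in the viewer and can be added without touching the game.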
Anyway, thanks for your comment. I'm interested in this kind of thing.
The eye is not a single frame snapshot camera. It is more like a video stream. The eye moves rapidly in small angular amounts and continually updates the image in one's brain to "paint" the detail. We also have two eyes, and our brains combine the signals to increase the resolution further. We also typically move our eyes around the scene to gather more information. Because of these factors, the eye plus brain assembles a higher resolution image than possible with the number of photoreceptors in the retina. So the megapixel equivalent numbers below refer to the spatial detail in an image that would be required to show what the human eye could see when you view a scene.
Based on the above data for the resolution of the human eye, let's try a "small" example first. Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be
90 degrees * 60 arc-minutes/degree * 1/0.3 pixels per arc-minute = 18,000 pixels on each side, so 18,000 * 18,000 = 324,000,000 pixels (324 megapixels).
At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let's be conservative and use 120 degrees for the field of view. Then we would see
120 * 60 * (1/0.3) = 24,000 pixels per side, so 24,000 * 24,000 = 576,000,000 pixels (576 megapixels).
The full angle of human vision would require even more megapixels. This kind of image detail requires a large format camera to record.
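If you want to play with the arithmetic, here's the same calculation as a tiny Python function (0.3 arc-minutes per pixel is the eye-resolution figure used above):

```python
ARCMIN_PER_PIXEL = 0.3  # eye-resolution figure used in the text

def eye_megapixels(fov_h_deg, fov_v_deg):
    """Pixels needed to match eye-level detail over a field of view."""
    px_h = fov_h_deg * 60 / ARCMIN_PER_PIXEL  # degrees -> arc-min -> pixels
    px_v = fov_v_deg * 60 / ARCMIN_PER_PIXEL
    return px_h * px_v / 1e6

print(eye_megapixels(90, 90))    # 324.0, the "open window" case
print(eye_megapixels(120, 120))  # 576.0, the conservative full field
```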
From the about page:
"This is an independent DIY search engine that focuses on non-commercial content, and attempts to show you sites you perhaps weren't aware of in favor of the sort of sites you probably already knew existed. "
I personally find it hard to put into words, but the old internet and old search engines had this feel to them that you never knew what you were going to get. Each site looked different. Each site had its own philosophy of content and design. Everybody was winging it. It just felt more personal and interesting. At the risk of hyperbole, now it seems search engines give back mostly SEO blogspam that all looks the same.
> now it seems search engines give back mostly SEO blogspam
And with technical questions, too many results are simply incorrect. It seems that Google search is really going downhill in this regard. I'd like a way to vote down results that are obviously SEO trash, but I'm pretty sure that if it were provided, it would be gamed too.