Further reading: 1972 - Mark Wells' A Review of Two-Dimensional Programming Languages
And I think not everybody is like this, though! I think there's a difference between "visual programmers" and "text programmers", and this is what causes the flamewars between people who want to "remove noise" by getting rid of parens, braces and semicolons, and people like me who use them as a sort of "signage".
The real problem is that I bet the "text programmers" also write more, well, normal text, and are overrepresented in blogs, articles, and whatnot. (Even though they're wrong XD)
I'm not sure this characterization appropriately divides "text" from "visual" programmers. The argument I usually see for removing symbols is that the general shape and flow of your program should be able to stand on its own and that minutiae like a misplaced semicolon shouldn't be allowed to play tricks on your eyes -- it's not that the visuals are unimportant, but that they _are_ important and that too many extra symbols lead to hard-to-parse visual clutter and more easily facilitate deceptive control flow.
Whether that argument has any merit or not is its own debate of course.
It’s sort of there in my mind and I can see how things are connected, grab stuff and move it around, etc. Super handy for dealing with systems complexity
EDIT: Ahh, I see from other comments that this is more of a block-based no-code kind of thing. I'm not looking to replace my program with purely visual constructs; I'm more interested in representing the file, object, and function layout in a 2D space. Nested folders of files seem so limiting when I want to put particular functions next to each other, and grouped over _here_, while another set of functions should be placed over _there_.
I don't think the greatest power comes from doing away with textual code, I would just prefer that the units which compose a program can be organized, described, and related to each other in a richer way.
For your file example, you could have a "File Open" block that takes a filename as input and outputs a "File Reference" datatype, and then wire that reference to various file blocks (File Read / File Seek / File Write / etc) to perform operations on the file. When needed you could wire a reference to an arbitrary code block, perform whatever operations you want on the file using textual programming, and then output the reference to continue putting it through other blocks. Objects (as in JSON objects) could be represented in the same way, twisting and turning throughout your program as they are manipulated. Ditto for databases, file storage (like S3 or hard drives), sensors and their values, etc. As for functions, you'd be able to build your own blocks from existing ones, so you could take a textual function and turn it into a visual block that integrates with the rest of your code.
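To make that a bit more concrete, here's a rough sketch in plain Python of how those file blocks might thread a reference through a program. Everything here (FileRef, file_open, etc.) is a hypothetical stand-in for the visual blocks, not an existing API:

```python
# Minimal sketch: each "block" is a function, and a FileRef is the handle
# that gets wired from one block to the next. All names are hypothetical.

class FileRef:
    def __init__(self, name):
        self.name = name
        self.data = []

def file_open(name):        # "File Open" block: filename -> FileRef
    return FileRef(name)

def file_write(ref, text):  # "File Write" block: FileRef, str -> FileRef
    ref.data.append(text)
    return ref

def file_read(ref):         # "File Read" block: FileRef -> str
    return "".join(ref.data)

# Wiring the blocks together, as the editor would do with drawn lines:
ref = file_open("notes.txt")
ref = file_write(ref, "hello ")
ref = file_write(ref, "world")
print(file_read(ref))       # -> hello world
```

An "arbitrary code block" would just be any function you write yourself that takes a FileRef and returns it, so it slots into the same wiring.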
You could group blocks together in the editor so that they are literally next to each other as you said, but one feature I'm particularly excited about that was inspired by SmallTalk is the Block Search: instead of searching by name, you describe the input and output types of the block, and the editor will search through the global database for any kind of block that matches your description. For example, you could describe a block that takes two strings and outputs one, and you could get a block back like "String Append." For our hypothetical file example, you would just search for blocks that take in File References, and you'd be able to find every block that does anything to a file, including your own.
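Here's a toy sketch of what that search-by-signature might look like, assuming blocks are indexed by their input/output types (the block names and the `find_blocks` helper are made up for illustration):

```python
# Sketch of "Block Search": index blocks by type signature and look them
# up by shape instead of by name. All entries here are hypothetical.

BLOCKS = {
    "String Append": ((str, str), (str,)),
    "String Length": ((str,), (int,)),
    "File Read":     (("FileRef",), (str,)),
    "File Seek":     (("FileRef", int), ("FileRef",)),
}

def find_blocks(inputs=None, outputs=None):
    """Return names of blocks whose signature matches the given shapes."""
    hits = []
    for name, (ins, outs) in BLOCKS.items():
        if inputs is not None and ins != tuple(inputs):
            continue
        if outputs is not None and outs != tuple(outputs):
            continue
        hits.append(name)
    return hits

# "Two strings in, one string out":
print(find_blocks(inputs=(str, str), outputs=(str,)))  # -> ['String Append']
# Every block that takes in a File Reference:
print([n for n, (ins, _) in BLOCKS.items() if "FileRef" in ins])
```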
I know it's hard to imagine, which is why I'm working on building a demo ASAP so I can show and not tell. I saw from your bio that you're interested in STEM edutech so if this doesn't scratch your itch it may still be helpful for students to get started with software a la Scratch! Let me know if you're still interested - no worries if not though!
My language is a lot simpler: it uses functional programming principles to transform input to output without storing state. The programmer would place down blocks on the screen and draw lines between them to indicate where they want data to flow. They'd start on the left (input side) and end on the right (output side). The cool benefit of this paradigm is that it's recursive; you can build blocks that contain blocks that contain blocks etc... all of which shuttle and transform data from the left to the right.
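The left-to-right, blocks-within-blocks idea boils down to function composition. A tiny sketch in plain Python, where `compose()` is a stand-in for drawing lines between blocks (not the actual language):

```python
# A "block" is a pure function; a group of wired blocks is itself a block,
# so composition nests. compose() is a hypothetical stand-in for wiring.

def compose(*blocks):
    def composed(x):
        for block in blocks:  # data flows left to right
            x = block(x)
        return x
    return composed

double = lambda n: n * 2
inc    = lambda n: n + 1

inner = compose(double, inc)    # a block built from smaller blocks
outer = compose(inner, double)  # ...nested inside yet another block
print(outer(3))                 # (3*2 + 1) * 2 -> 14
```

The recursion falls out for free: since a composed block has the same shape as a primitive one, it can be dropped anywhere a primitive block fits.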
The other part of this is that it won't just be a language, I'm planning on building an actual cloud environment where you can click a button to deploy your code onto a REST endpoint, so you can integrate it with your existing websites + scripts + apps.
(which is admittedly a terrible pitch, unless you're trying to hook an old demo scenester...)
Not so far-fetched, but I'm not sure you need a new language to support that.
You could press a button and get this view. But I think the idea is to always build with this view in sight? I have no idea.
The idea also rhymes a lot with George Lakoff's theory that we build concepts as towers of analogies and metaphors, whose bases are quite instinctive and frequently spatial (high is good, broad space gives you options, etc.)
It would be interesting to evolve a language from 2D or 3D to a more abstract, higher-dimensional representation of objects and topologies, like the cognitive spaces described in the article, which operate on sparse vectors in a multidimensional coordinate system (aka "grid cells").
I'm wondering if there is a language for defining network topologies like that: with abstractions like JOIN/LINK operators to form and evolve a network, FIND or FIND_SIMILAR operators for traversing that graph (looking for similar objects or clusters of objects), and something that lets you simulate the "small world network" effects, heuristics, and shortcuts we see in real-world graphs (including the neural connectivity and distributed communication we observe in the brain).
 Leslie Valiant, Memorization and Association on a Realistic Neural Model https://web.stanford.edu/class/cs379c/archive/2012/suggested...
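I don't know of such a language, but the operators above can be mocked up quickly. A toy sketch, assuming a naive shared-neighbor notion of similarity (Network, link, find_similar are all made-up names, not a real library; small-world shortcuts would just be extra long-range links):

```python
# Toy network with LINK and FIND_SIMILAR operators.
# Similarity = count of shared neighbors (crude, for illustration only).
from collections import defaultdict

class Network:
    def __init__(self):
        self.edges = defaultdict(set)

    def link(self, a, b):              # LINK: grow the network
        self.edges[a].add(b)
        self.edges[b].add(a)

    def find_similar(self, node):      # FIND_SIMILAR: rank by overlap
        mine = self.edges[node]
        best, best_score = None, 0
        for other, nbrs in self.edges.items():
            if other == node:
                continue
            score = len(mine & nbrs)   # shared-neighbor count
            if score > best_score:
                best, best_score = other, score
        return best

net = Network()
for a, b in [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C"), ("A", "E")]:
    net.link(a, b)
print(net.find_similar("A"))  # -> D (shares neighbors B and C with A)
```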
Not only are the data structures 2D/3D, but the language itself is homoiconic.
I'm thinking of things like 2D+ spreadsheets.
Like this from Alan Kay in 1984 (https://pbs.twimg.com/media/E1UAukcUYAAI9lS?format=png&name=...).
A more recent thing along these lines that I worked on: https://www.youtube.com/watch?v=0l2QWH-iV3k
At first I actually thought you meant human languages, which are already 3d to a large extent, because they're full of movement metaphors.
Python programs do not copy/paste well into spreadsheets (a simple test for "2Dness").
The possible downside however is that we will likely be able to quantify the capabilities of any particular brain based solely on an assessment of its physical attributes. That seems like a dark place.
Here is a fun rabbit hole: https://www.youtube.com/results?search_query=minicolumn&sp=E...
I agree both extremes are possible. I don't feel confident betting on either extreme. I would bet it's both at the same time (nature AND nurture) to the point where the individual influence of each is practically impossible to disentangle.
How you can infer the shape of an object by progressively touching it with your fingers even if you cannot see it. Try to implement an algorithm or ML solution that does this and you'll know how insanely hard this problem is.
My naive instinct is that you would detect and store points of contact between the sensor surface and the object while also positionally tracking the sensor in 3D space relative to an origin. You would certainly need a lot of precision in movement tracking to minimize error. Maybe that's where it gets tricky?
Also worth noting that the problem feels somewhat moot since there are easier solutions (e.g. lidar).
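For what it's worth, the naive accumulate-contact-points approach can be sketched in a few lines. This uses made-up integer toy data and models only a translation pose; real tracking involves rotation and drift, which is exactly where the error piles up:

```python
# Sketch: record each contact in the sensor's frame, shift it into a
# world frame using the tracked sensor position, then estimate shape
# from the accumulated point cloud. Toy data, translation-only pose.

def to_world(sensor_pos, contact_offset):
    return tuple(s + c for s, c in zip(sensor_pos, contact_offset))

# (sensor position, contact point relative to sensor) samples:
samples = [
    ((0, 0, 0), (1, 0, 0)),
    ((2, 0, 0), (1, 0, 0)),
    ((1, 2, 0), (0, 1, 0)),
]
cloud = [to_world(pose, contact) for pose, contact in samples]

# Crude shape estimate: axis-aligned bounding box of the cloud.
mins = tuple(min(p[i] for p in cloud) for i in range(3))
maxs = tuple(max(p[i] for p in cloud) for i in range(3))
print(mins, maxs)  # -> (1, 0, 0) (3, 3, 0)
```

A real solution would fit surfaces (or a convex hull) to the cloud rather than a box, and would have to fuse noisy pose estimates, which is the hard part.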
It's more likely we almost all share the same feature set. Doesn't mean exceptions or tendencies don't exist. You know... Like... Most of us have fingers, ears, skin, etc.
What you do is speak out loud what you see: "A purple blob in the middle, it's moving to the right, now it's morphing into a more green blob," etc.
If you do this for a couple of minutes what you will see in your closed eyes will become more shape-like "the blob is now more circular. It's back to a blob again, now it sort of resembles a weird pillow"
Somehow, the feedback loop of objectively describing out loud what you're seeing trains the brain to fantasize more real shapes and objects.
Try it out if this sounds interesting to you!
But it can get really dark as well. I have done this between 10 and 20 times on my own, and one time the visuals are about, like... fruit, and another time they're about really, really bad, super shameful and traumatic personal experiences.
It's not necessarily a game, although I can imagine it has the potential to be one. The danger is that this requires you to speak up about anything that comes to mind, and sometimes your mind will wander to very serious memories or pictures.
I can say that it works incredibly well though. The purpose of image streaming is to improve your ability to imagine shapes in your head, and it works _really, really well_. I personally stopped because I didn't feel comfortable speaking out loud many of the things I saw (even when I was alone); it involves "confronting your shadow", in Jungian terms.
Here's the inventor's article on it by the way: http://www.winwenger.com/imstream.htm
^1: the person who invented it says it has something to do with how the feedback loop is more real if there is real consequence to what you say iirc
To me a market isn't a space that can be occupied in the same sense of occupying a finite physical space.
Providing crackpot, pseudo-philosophical examples of why something can't be done can make me look smart but ultimately has no utility.
It is a lot like a constant word association game. Just sometimes without words. And no narrator voice at all.