There are still Visual Basic apps from the 1990s floating around in large corporations and in government.
Users who were experts in the process, but not expert developers, could easily code up tools to their exact specifications.
Admittedly, the deployment of web applications is much better, and they can run on different devices.
Hopefully, when WebAssembly takes off, easier-to-use tools will reappear.
Part of the problem is that ad code requires a messy environment, so that ad blockers and click generators have a hard time. Google, by policy, does not permit you to put their ads in an iframe, where they belong. You can't even put Google Hostile Code Loader ("tag manager") in an iframe sandbox.
I believe the proper term for this is "Well, there's your problem." That sort of antisocial behavior is what caused the issue in the first place, and rather than solve it by being civil enough to the user that they don't try to block everything, they double down on the untrustworthiness and act /even more/ malicious.
We had _simple_ drag and drop UIs decades ago.
Computers operated within much more limited constraints back then. UI windows didn't resize, or it was reasonable to expect that they were fixed. How did those drag-and-drop tools back then handle creating a UI for screens ranging from 400pt to 2560pt wide?
For simpler forms you'd just set the anchoring properties of the widgets in question (akRight/akBottom in Borland's VCL, or whatever the counterpart is called in WinForms or its predecessor). Nowadays it's even easier with things like, say, GTK's HBox/VBox.
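The mechanics are simple enough to fit in a few lines. A toy Python sketch of what akLeft/akRight-style anchors do on a parent resize (the names and shapes here are mine, not any toolkit's API):

    def resize_anchored(x, y, w, h, dw, dh, anchors):
        """Toy model of anchor-based resizing: the parent grew by
        (dw, dh); return the child's new rectangle."""
        if {'left', 'right'} <= anchors:
            w += dw        # pinned to both edges: stretch with the parent
        elif 'right' in anchors:
            x += dw        # pinned only to the right edge: slide along
        if {'top', 'bottom'} <= anchors:
            h += dh
        elif 'bottom' in anchors:
            y += dh
        return x, y, w, h

    # an OK button anchored bottom-right follows the window's corner:
    print(resize_anchored(300, 200, 80, 24, dw=100, dh=50, anchors={'right', 'bottom'}))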
> Computers operated within much more limited constraints back then.
Resource limits didn't stop web browsers of the era from rendering complicated tables in a variety of sizes.
WinForms also had Dock (Left, Right, Top, Bottom, or Fill), which basically forced the control against that edge of its parent, taking up the full width or height in one dimension and using the explicitly specified size in the other. With multiple docked controls, they'd gradually fill the available space in z-order, stacking up against each other; then whatever control had Dock=Fill would take the remaining space.
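For the curious, that docking pass is just a small loop; a Python sketch of the algorithm as described above (not the WinForms API itself):

    def apply_docks(parent_w, parent_h, docked):
        """Controls claim space from the remaining rectangle in z-order;
        'fill' takes whatever is left. docked: list of (dock, size)."""
        x, y, w, h = 0, 0, parent_w, parent_h
        rects = []
        for dock, size in docked:
            if dock == 'left':
                rects.append((x, y, size, h)); x += size; w -= size
            elif dock == 'right':
                rects.append((x + w - size, y, size, h)); w -= size
            elif dock == 'top':
                rects.append((x, y, w, size)); y += size; h -= size
            elif dock == 'bottom':
                rects.append((x, y + h - size, w, size)); h -= size
            else:  # 'fill'
                rects.append((x, y, w, h))
        return rects

    # toolbar docked top, status bar docked bottom, editor fills the rest:
    print(apply_docks(800, 600, [('top', 32), ('bottom', 24), ('fill', 0)]))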
So yeah, resizable windows were common, and easy to deal with. The real problem is dynamically sized controls themselves, e.g. due to DPI changes, or because the control contains dynamic content of unknown size. With those, anchors no longer work well, and you need some kind of box and grid layouts. Which are available - but it's kinda hard to edit them visually in a way that's more convenient than just writing the corresponding code or markup.
The closest I've seen to visually editing UI that can accommodate dynamically sized controls was actually in NetBeans, which had its own layout manager for Swing. That thing allowed you to anchor controls relative to other controls, not just to the parent container. Thus, as controls resized according to their context, other controls would get moved as needed.
Still, you needed to be very careful in the UI form editor, to make sure that controls are anchored against other controls in ways that make sense.
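The relative-anchoring idea is easy to model, too; a hypothetical Python sketch (the GAP/MARGIN constants and dependency scheme are my own, not Matisse's actual layout model):

    GAP, MARGIN = 6, 12  # illustrative spacing constants

    def resolve_lefts(widths, follows):
        """Each control's left edge trails the right edge of another control,
        or the container margin; growing a control pushes its followers."""
        placed = {}
        def place(name):
            if name not in placed:
                anchor = follows.get(name)
                left = place(anchor)[1] + GAP if anchor else MARGIN
                placed[name] = (left, left + widths[name])
            return placed[name]
        for name in widths:
            place(name)
        return placed

    # if the label grows (e.g. after translation), the field slides right:
    print(resolve_lefts({'label': 40, 'field': 120}, {'field': 'label'}))
    print(resolve_lefts({'label': 90, 'field': 120}, {'field': 'label'}))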
Fundamentally, I think the biggest problem with UI designers today is that they can't really be WYSIWYG, the way they used to be in the RAD era. The DPI of the user machine might be different, fonts might be different, on some platforms (Linux) themes can be different in ways that drastically affect dimensions etc.
WPF and UWP can deal with it perfectly fine.
If you choose to use layouts, it can display them, but editing it with a mouse is no longer convenient. It's easier to just drop into XAML and hack on it there. On every project I worked on that used WPF (which is quite a few by now), nobody on the team actually used the UI designer, and everybody would switch the default mode for .xaml files in VS to open the markup directly, rather than the usual split view.
As for devs preferring to type by hand - well, their loss, I guess.
There are two major strategies for responding to such a difference:
1. Vector scaling. This makes the difference irrelevant; the resulting UI will look approximately the same at whatever resolution.
2. Capacity scaling. I.e., keep the same font size, letting more information fit in the control without scrolling.
It may be a little hard to formalize, but these two can be combined intelligently. It is also important to know which one the user prefers (e.g. I mostly prefer the second, while many prefer the first).
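As a sketch, the two strategies are two different responses to the same scale factor (a toy model, not any real toolkit's policy):

    def rescale(base_font_px, base_rows, scale, prefer='capacity'):
        """'vector' keeps the layout and scales the pixels;
        'capacity' keeps the font and fits more content."""
        if prefer == 'vector':
            return {'font_px': round(base_font_px * scale), 'rows': base_rows}
        return {'font_px': base_font_px, 'rows': round(base_rows * scale)}

    print(rescale(14, 40, 1.5, prefer='vector'))    # same layout, bigger pixels
    print(rescale(14, 40, 1.5, prefer='capacity'))  # same font, 60 visible rows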
Why? What else is there? I'm actually curious. I personally have never dealt with displays bigger than 1280x1024.
This isn't true at all. The original Mac in 1984 had resizable windows for most apps. Most other WIMP GUIs followed suit.
It's got a drag'n'drop UI creator, you use Python to build the front-end and the back-end (with proper autocomplete, VB-style), and it even has a built-in database if you need one.
Don't create your own icons; use already-known ones. Ab and Ab-but-with-underline don't clearly mean "Text" and "Link", but the word "Text" and the famous anchor icon do.
Also, as far as I can tell, it won't let you export apps so they can run on their own outside of your servers - this limits the usage too.
I liked the tree view that NetObjects Fusion had years ago; it made bigger sites easier to work with.
Still trying to carve out time to try Pinegrow, to see if it's got enough drag-and-drop and resizing to make me no longer miss NetObjects Fusion.
I used to use those 90s GUI tools - VB's, VC's, and Symantec's and Borland's Java GUI tools. Although they worked well for fixed UIs (absolute positioning), getting non-fixed UIs working with them was rather hard.
IMHO Bootstrap's grid system was a real leap in this regard and (to me) still a pleasure to work with.
But all that may be totally wishful thinking.
And while it's technically possible to, say, just compile Qt to wasm and give it a canvas tag as a dumb drawing surface, you've just broken accessibility, and that's a deal breaker.
By the way, Qt-on-wasm-on-canvas is already a thing.
And while accessibility is a big issue, I must sadly acknowledge that I have yet to have a project require compliance and validate its implementation; as such, it never gets done.
But is it really, if you take into account all the incompatibilities between browsers and the different UX paradigms between classes of devices?
- Uses a statically typed language (Object Pascal) compiled to JS; code compression is a simple build option, and the compiler is very fast.
- Layout management is universal (not various competing forms of layout) and easy to use.
- You can create your own non-visual components and controls and install them into the IDE or share them with others.
(the source code to the entire runtime and component library is included with the product)
- The look and feel of every control in an EWB application can be customized.
- Uses icon fonts for icons (you can also use raster images, if you want), so applications look crisp on any device.
- You can use any existing external JS code by writing an external interface to it (FFI), so even external JS references will be type-checked by the compiler.
- The code editor is rudimentary and requires some more polish (code-completion, etc.)
- Debugging isn't supported via the IDE, yet
- There are still a few missing controls like a treeview and charts, but you can interface with JS products like HighCharts without issue
- The compiler still needs some work in the areas of interfaces, generics, and set support
We have a new Elevate Web Builder 3 coming out soon, with a new IDE and a web/application server featuring built-in TLS, authentication, session management, role-based access control, a database API, remote application deployment, and event logging. You can, effectively, manage and monitor any EWB 3 web server remotely from within the new IDE.
Initially EWB 3's web server will be available for Windows only, but we will be offering a Linux/Mac daemon version in early 2019.
The ultimate goal for the product is to provide a single-language solution for both front-end and back-end applications, with one-click deployment of applications and server instances.
(Sorry for the "advertisement" - I just see this sentiment come up a lot here, and I think it's important for people to know that there are companies working on solutions.)
Borland really screwed up that attempt; that was shortly before the whole Inprise affair anyway. :\
Python: slow, useless for threaded programming, full of obvious mistakes (does anybody like one-line lambdas?)
Clojure: a flaccid, viagra-less Lisp.
C#: basically the same as Java, the yardstick for "boring" languages.
How can they be exciting?!
However, I agree that Rust is exciting, very exciting, because it has fearless concurrency, zero-cost abstractions, move semantics, trait-based generics and efficient C bindings.
I think if someone finds the right sweet spot between extensibility and ease-of-use for web apps they could make a killing. Something similar to what VB did for the desktop.
Rule-based means I give instructions like "this button must be on top of that button, horizontally centered", "this label must fit that text", "this image must be between this and that", etc., and let the layout engine deal with it. UIs are usually not paintings: window sizes vary, text length changes with localization, decorations change depending on the environment, etc. Approaching a UI like a canvas will certainly yield good results on the designer's machine, but it will look out of place everywhere else, if it is usable at all.
I think it is the basis of what they call "responsive web design".
I find it quite pleasant once you get used to it.
CSS isn't even rule-based. If it were a constraint system, like "A must be to the left of B" and "C and D must have the same height", extended with "A and B must be at least this big, and drop C if it won't fit", it would be rule-based.
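Constraint layout along exactly those lines does exist as a library; a minimal Python sketch using kiwisolver, a Cassowary-style solver (the variable names are mine):

    from kiwisolver import Variable, Solver  # pip install kiwisolver

    a_left, a_width, b_left = Variable('a_left'), Variable('a_width'), Variable('b_left')

    solver = Solver()
    solver.addConstraint(a_left >= 0)
    solver.addConstraint(a_width >= 50)                   # "A must be at least this big"
    solver.addConstraint(b_left >= a_left + a_width + 8)  # "A must be to the left of B"
    solver.updateVariables()
    print(a_left.value(), a_width.value(), b_left.value())

Constraint strengths ("weak"/"strong"/"required") are how such solvers let less important rules give way, which gets you partway to "drop C if it won't fit".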
Nothing says you need to let users do absolute positioning rather than rule-based layout when they drag stuff around. You can help this along with multiple common views at different resolutions, so users don't try to force a pixel-perfect version.
Even this software is actually not simpler - on the contrary, it is far more complex, because you don't know how to create a given widget [assuming you have already learned what widgets there are, because the software doesn't tell you]. You have to learn what the software recognizes and how you need to draw it in multiple strokes. Since the recognition is ML-based, it is difficult to tweak and a black box to both user and developer ("why doesn't it recognize this...?").
Contrast with 90s form designers and their simple drag-and-drop palette. (They didn't have layouts just yet; that came a bit later.) You can immediately see what widgets are offered, and to instantiate them you simply drag them from their "reservoir" to the active area. Simplicity itself.
WebFlow and OutSystems are two examples that come to my mind.
I hope that WebComponents will make it easier to adopt such tooling.
But the moment you wanted something like two buttons, one following the other, in the bottom-right corner of the window, with a certain fixed spacing between them but otherwise dynamically sized to content, it all broke down. And this just happens to be one of the simplest scenarios: a basic dialog box with "OK" and "Cancel"!
How did it work in practice? We just made widgets "wide enough" to fit anything that could conceivably be thrown at them. If that assumption was later proven wrong - e.g. because translators came up with a very long string for a label - then the developers would have to go back and redo the UI.
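For contrast, that same two-buttons-in-the-corner dialog is a few declarative lines in a modern box-layout toolkit; a minimal sketch with GTK 3 via PyGObject (assuming it's installed):

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    win = Gtk.Window(title="Demo", default_width=320, default_height=160)
    row = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=8)  # fixed spacing
    row.set_halign(Gtk.Align.END)   # hug the right edge
    row.set_valign(Gtk.Align.END)   # hug the bottom edge
    for label in ("OK", "Cancel"):
        row.pack_start(Gtk.Button(label=label), False, False, 0)  # size to content
    win.add(row)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()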
Keep in mind it was almost always the case that you could set widget sizes (or do anything else you wanted) "in code", too.
It's not as though you were ever forced to always use the visual designer for absolutely everything.
In general, there are no significant differences whatsoever between the way something like React actually works and the way something like WinForms works.
Likewise, Windows Forms table layout managers and Swing layouts, while not as powerful, did the job.
(The WinForms designer technically supports them. But it's less "drag and drop" and more "drag and... um, what the hell is this thing doing there now?")
The only problem is how buggy the VS designer tends to be in some releases, forcing you to restart VS from time to time.
But that affects all kinds of stuff, including apps not using layout managers.
And regarding Motif, I do recall the GUI designers being relatively good.
Likewise with Java Swing and designers like Netbeans Matisse.
Matisse is the only UI designer that I know that does true drag and drop (letting you position widgets exactly where you want them) while also producing flexible layout. And IIRC they had to write a custom Swing layout manager for that.
Sun prototyped it in NetBeans, made it open source, and then, when everyone was happy, it became yet another layout manager available on any compliant Java platform.
That is the whole point of a layout engine: it is extensible.
Which is why Project Houdini includes layout engine APIs as well.
* There are no instructions on how to actually run the thing.
* There is no requirements.txt or similar, so I have no idea which version of dependencies I'd need.
* The repository is strewn with unnecessary files (.pyc/.DS_Store/.so...), random-looking images with names like "plswork.png", an HTML file from some "starter kit"...
* I can't seem to find the React frontend that is mentioned in the readme -- on the other hand, it looks like `server2.py` is looking for them outside the repository (`".././reactExperiments"`).
It's a pretty cool proof of concept ;) Go easy!
"This is not a prodution worthy piece of software,it is only meant for demo purposes"
I’m also not a Pythonista and I’ve only been working with Python for about a year, but including required packages in the requirements.txt is like Python 102.
Certainly there's a lot to be improved in terms of git hygiene and publishing an easy-to-try-out project, but it seems a bit excessive to say that the author shouldn't have posted it at all.
And it seems the creator plans to do just that. So, kudos.
This might make more sense for a designer, but they shouldn't be so close to production anyway.
edit: another thought is that this concept could encourage people in your org who struggle with wireframe technology to express their ideas. Generationally and across cultures, smartphone use is now accepted. People also know how to draw with pencil and paper. Now all you are asking of them is a final DSL to express their thoughts. Lower barrier?
edit2: there is also something to be said for having someone step through their wireframe and flow control by taking pictures. It may take the abstract and create something tangible, as they can logically piece their work together with actual pieces of paper.
Unless you mean just sketch using graph paper and translating coordinates. Not sure why I need a camera for that. :(
Also, many people just think that business logic for sanitization and validation "just happens." The barrier to wireframing, for them, is too high, so they don't do it. But with this idea, I could see someone submitting a wireframe to me and my response being "well, what happens when a phone number is international?" I'm educating stakeholders on the functional cost of producing their idea.
This would theoretically create a feedback loop for future ideas and initiatives as now, they've begun to be educated on the process. They have direct experience.
Anyway, anything to lower that barrier in order to partner with and teach my executives and their supporting staff would be a huge win. At least for me.
Maybe the barrier is where it should be. Or maybe it should be even higher! People who can't understand the logic of an interface have no business creating or suggesting interfaces. A UI is meant to be used, not looked at like a pretty picture in a frame. It should feel good, feel smooth, and increase productivity... not look good. Some of the best-looking UIs I've ever seen were also the most utterly user-hostile, unintuitive and productivity-lowering.
Sure, if you can afford to pay someone 500/hour or something "outrageous" like that (hint: you need a world-class artist, with advanced knowledge of user psychology, who also has the brain of a business logic analyst or of a programmer involved in product design), you could get something that both looks gorgeous and feels smooth and increases user productivity 10x. But usually you need to make sacrifices, and the ones the user will hate you for are those that make their life harder despite seeming nice and slick at first.
> Maybe the barrier is where it should be. Or maybe it should be even higher!
Across the industry, people in leadership positions assume that UI/UX is easy. Those same people are usually the owners or major stakeholders of the project. Any avenue to put more functional ownership back onto that group, to empower and educate, is a worthy endeavor.
In this example, unless not handling international phone numbers leads to failure of the project, that can be handled later, say once the project is approved and time estimation is being done. If I'm building a notes app, and someone is proposing a new sign up form to increase conversions, and it has a phone number field, handling international numbers is the last thing to worry about at this stage (unless international numbers are a significant problem with the old form leading to abandonment).
We shouldn't doom good ideas with irrelevant details - details which are absolutely relevant later, but not now. Product development happens in phases of increasing fidelity, and issues need to be brought up at the appropriate time: not too early, not too late.
Imo this is one of those things best left unhandled, e.g. "just use a plain, mostly unvalidated text field, and throw an error only when you want to use that data via another system, like for a text-message campaign". In real life, if you want to target the entire freaking planet (not just 99% of phone-using people, but 99.999%), you'll come to realize that no validation is enough and that some phone numbers need to contain arbitrary letters and symbols (better don't ask... the world is big and weird :P), and that, yeah, those numbers will not be processable by things like Twilio, but human users with local knowledge will know how to actually "dial" them...
But it needs to be a conscious decision: consciously choose not to validate, and understand that you give up the ability to target 100% of phone numbers for things like two-factor auth later on.
Not "forget that phone numbers need to be validated" and then go and say "oh, let's make phone-based 2FA mandatory" or some user-interaction mess-up like that.
It seems that people are coming to you to help estimate how long an idea takes to implement. If that's the case, I agree with everything you've said.
But if they're proposing an idea, say a new sign up form to increase conversions, phone number validation is an irrelevant detail to worry about at this point (unless that was a significant problem with the old sign up form).
Whatever it may be, would a tool that educates and puts some of the cost back on the "idea person" or stakeholder be a good thing? I think it would.
So would I, as I mentioned in my reply to you. It's easy to propose ideas without regard to cost, like a "minor enhancement" that takes 6 person-months.
There is no easy way for them to reveal to the user what gestures are possible (short of showing a palette of commands, including animations of gestures, which are directionally sensitive), and no clear and wide separation of distinct gestures, so they're difficult to learn and remember, and their ambiguity leads to a high error rate. And they're not suitable for applications where it's not easy and inconsequential to undo mistakes (like real-time games, nuclear power plant control, etc.).
For example, handwriting recognition has a hard time distinguishing between "h", "n", and "u", or "2" and "Z", so systems like Graffiti avoid lower-case characters entirely, and force you to write upper-case characters in specially contrived, non-standard ways, in order to make them distinct from each other (widely separated in gesture space). It's important for there to be a lot of "gesture space" between symbols, or else gesture recognition has a high error rate.
Graffiti is an essentially single-stroke shorthand handwriting recognition system used in PDAs based on the Palm OS.
The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.
OLPC Sugar Discussion about Pie Menus: Excerpt About Gesture Space
I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing”, because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of “Reselection”, which means that as you’re making a gesture, you can change it in flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compared to typical gesture recognition systems, like Palm’s Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and the recognizer only accepts well-formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of the gesture space is wasted. There’s a simple, intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), which gives you the ability to refine your selection by moving out further (to get more leverage), to return to the center to cancel, and to move around to correct or change the selection.
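The direction-to-slice mapping really is tiny, which is part of the appeal; a Python sketch (the slice-0-at-north, clockwise convention is my own choice):

    import math

    def pie_select(dx, dy, n_items, dead_zone=10):
        """Map the press-to-release displacement onto one of n pie slices.
        Only the angle matters, never the path; the center cancels."""
        if math.hypot(dx, dy) < dead_zone:
            return None                              # back at center: cancel
        angle = math.atan2(dx, -dy) % (2 * math.pi)  # 0 = up, clockwise (screen coords)
        step = 2 * math.pi / n_items
        return int((angle + step / 2) // step) % n_items

    print(pie_select(0, -40, 8))  # straight up -> slice 0
    print(pie_select(40, 0, 8))   # right -> slice 2
    print(pie_select(3, 2, 8))    # barely moved -> None; no syntax errors possible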
The thing that this demo shows in minutes could've been typed up as text in seconds. It is not a productive way to do things. There's a reason these systems never caught on: they are unnecessary.
You may say, but "non-programmers" will use it! No, they won't. Designers will use real design tools to create (non-functional) visual designs. Programmers will bring those visual designs to functionality. That procedure works. It'll keep working. These systems are diversions, not improvements. Worthy of investigation, but not practical.
Our take was that we really do design on paper or whiteboard first & foremost, which is why our project emphasized the webcam + sharpie thing rather than drawing in-browser etc.
Here's a related thing I wrote about the need for design tools to design the real thing, rather than facsimiles of the thing: https://jon.gold/2017/08/dragging-rectangles/ - so so so much process waste is because developers have to re-implement static pictures of designs.
In our case, we didn't get buy-in to keep developing the project, but I'm kinda jazzed that so many people are running with the idea
Okay, but did you attack that problem in a way that actually is more efficient than established UI paradigms?
Let's split the problem in two parts:
- "Semantic" Design (Checkbox, ImageView, TextInput...)
- Visual Design (fonts, colors, margins, ratios)
Your solution covers only the "semantic" part. Just look at the data: it's basically a simple component tree. It would be more efficient to just type it up, as shown below. It would also be more efficient to drag rectangles from a tool shelf instead of defining the type of a rectangle by drawing extra hints.
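To make that concrete, the entire "semantic" output of such a demo is something you could type in seconds (the structure below is hypothetical, just mirroring what a sketch recognizer would emit):

    ui = ('column', [
        ('image',      {'src': 'logo.png'}),
        ('text_input', {'placeholder': 'email'}),
        ('checkbox',   {'label': 'remember me'}),
        ('button',     {'label': 'Sign up'}),
    ])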
As for the visual design, that's where you use a design tool like Illustrator or Photoshop. Typing that up (e.g. in CSS) is surely a pain, but sketching it all up is out of the question. I certainly do see room for improvement in the workflow here, but a sketchy interface isn't helping.
You have to question a lot of assumptions here, but also consider how designers are most efficient with the tools they already know and have used for years. Don't mistake something that you want to create for something that users will actually want to use.
"The things this demo shows could have been typed up in text in seconds. Designers will use real design tools to create (rough) visual designs. Engineers will bring those visual designs to blueprints."
And yet half a century later, CAD is firmly in the domain of visual designers, where it seems so obvious that you would have to be crazy to think people would be designing in code. But hindsight is 20/20!
The way forward to visual programming might not be super clear, but we'll get there. If you don't think text-based REPL-style programming is limiting, I encourage you to check out Bret Victor's explorations of abstraction and direct manipulation. http://worrydream.com/
> The way forward to visual programming might not be super clear, but we'll get there.
This isn't even visual programming, nor is it a step in the right direction. My text editor has all kinds of visual tools. The data I edit however is textual, which has a lot of benefits.
> If you don't think text-based REPL-style programming is limiting...
I don't think REPLs are very useful for programming either.
> ...I encourage you to check out Bret Victor's explorations of abstraction and direct manipulation.
I'm aware of this stuff, it looks nice, but I don't think you need an entire visual programming language to get that benefit. If I need visualization, there are lots of tools to use.
I do agree that it's a pretty high bar in this case, though - it's changing the flow, not just improving it. So it'd have to get very polished to be able to compete, which I just don't think it will.
Still, maybe someday.
My whole point is that this is not an improvement; it's actually a worse way to enter a simple data structure into the computer. It's even worse than using the already-established UI paradigm of programs like Paint. Picking a tool and dragging out a box is faster, because you don't have to learn the visual language of how to draw these widgets.
It does look cool, because it makes the computer appear smart, but it's just not a good interface for actual use.
Yet looking around I see plenty of people still building systems by writing code.
Maybe one day the surf will come in for these ideas.
I’d be interested to see new things like this though!
It’s fast and expressive, and always there and always on, just a single tool / interface (pen to paper), which is a huge advantage when just trying to get concepts down visually. The clincher in this decade though, is the necessity to think responsively while sketching, understanding that there are all sorts of device sizes now. Then either mock it up in a design tool later or straight code it up once I have the general concepts down. I’ve tried all sorts of things (was really disappointed when SubForm shut down, that one was kind of interesting). From concept to product, starting off with paper and pen is still the quickest route for me.
Drawing where to put the widgets (and not using constraints or grids or automatic layout or adaptive rules or responsive design, or user testing and performance measurement and empirical evaluation) isn't the hard or important part of user interface design.
Who is supposed to benefit from this? A company who refuses to hire a competent user interface designer and wants to crank something out really quick regardless of quality? Users spend much more time using an interface than you spend designing and implementing it, so optimizing the time and amount of mental effort you have to put into making a user interface isn't worth it if it doesn't result in a better, easier to use interface.
We are already there with Framer X and competitors pending launch in the near future. There is a learning curve that most designers are not super comfortable with yet, but I expect that will improve quickly. We also are limited currently to React for Framer X but I think opening it up to other front-end frameworks is on the horizon. Exciting times!
Regardless, it's a cool idea that made me laugh a bit at first (seems almost absurd at first glance) but then got me thinking about possibilities. Good job!
It would be great if you could get it in a state where people could really try it out (even in an unpolished state), either locally or ideally with a web-based demo.
Again, thanks :)
After just watching the video - it seems like adding a plugin system for targeting various UI libraries (e.g. Bootstrap) would be really cool (I didn't read all of the text - maybe they suggested this or already have it...)
It may not seem like much right now, but the fundamental idea behind this sort of stuff is the future, especially for front-end code.
Do code your UI.
We're in this obnoxious age of confusing useful criticism with any reaction one can come up with on the fly, no matter how superficial. Like it's their destiny to weigh in on something as rapidly as they can, and as if they're doing some critical service for the universe.
The comment above exemplifies this when they say "what, we're supposed to pat them on the head and say attaboy?" No, the problem is that you think you need to fire off some undigested response at all. If you have nothing meaningful to say, then just say nothing. It's okay.
- add a requirements.txt to list dependencies
- create a .gitignore file.
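For the latter, even a four-line .gitignore would cover the stray files mentioned above (contents suggested, adjust to taste):

    __pycache__/
    *.pyc
    *.so
    .DS_Store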
Reinforces my personal rule-of-thumb that any comment HN has about UX can be safely ignored.
You could automate the entire software design and development process! Couldn't be any worse than what we have now, amirite?
A few billion dollars must have been flushed down this toilet bowl already; I wonder how much is yet to come...