This might make more sense for a designer, but they shouldn’t be so close to production anyways.
edit: another thought is that this concept could encourage people in your org who struggle with wireframe technology to express their ideas. Generationally and across cultures, smartphone use is now accepted. People also know how to draw with pencil and paper. Now all you're asking of them is a final DSL to express their thoughts. Lower barrier?
edit2: there is also something to be said for having someone step through their wireframe and flow control by taking pictures. It may take the abstract and create something tangible as they can logically piece their work together with actual pieces of paper?
Unless you mean just sketch using graph paper and translating coordinates. Not sure why I need a camera for that. :(
Also, many people just think that business logic for sanitization and validation "just happens." The barrier to wireframing, for them, is too high so they don't. But in this idea, I could see someone submitting a wireframe to me and my response being "well what happens when a phone number is international?" I'm educating stakeholders on the functional cost of producing their idea.
This would theoretically create a feedback loop for future ideas and initiatives as now, they've begun to be educated on the process. They have direct experience.
Anyway, anything to lower that barrier in order to partner with and teach my executives and their supporting staff would be a huge win. At least for me.
Maybe the barrier is where it should be. Or maybe it should be even higher! People who can't understand the logic of an interface have no business creating or suggesting interfaces. A UI is meant to be used, not looked at like a pretty picture in a frame. It should feel good and feel smooth and increase productivity... not look good. Some of the best looking UIs I've ever seen were also the most utterly user-hostile, unintuitive and productivity lowering.
Sure, if you can afford to pay someone 500/hour or something "outrageous" like that (hint: you need a world-class artist, with advanced knowledge of user psychology, who also has the brain of a business logic analyst or of a programmer involved in product design) you could get something that both looks gorgeous and feels smooth and increases user productivity 10x. But usually you need to make sacrifices, and the ones the user will hate you for are those that make his life harder despite seeming nice and slick at first.
> Maybe the barrier is where it should be. Or maybe it should be even higher!
Across the industry, people in leadership positions assume that making UI/UX is easy. Those same people are usually the owners or major stakeholders of the project. Any avenue to put more functional ownership back onto that group, to empower and educate them, is a worthwhile endeavor.
In this example, unless not handling international phone numbers leads to failure of the project, that can be handled later, say once the project is approved and time estimation is being done. If I'm building a notes app, and someone is proposing a new sign up form to increase conversions, and it has a phone number field, handling international numbers is the last thing to worry about at this stage (unless international numbers are a significant problem with the old form leading to abandonment).
We shouldn't doom good ideas with irrelevant details, which are absolutely relevant later, but not now. Product development happens in phases of increasing fidelity, and issues need to be brought up at the appropriate time, not too early, not too late.
Imo this is one of those things best left unhandled, eg. "just use a plain mostly unvalidated textfield, and throw an error only when you want to use that data via another system, like for a text message campaign". In real life, if you want to target the entire freaking planet (not just 99% of phone-using people, but 99.999%), you'll come to realize that any validation is not enough and that some phone numbers need to contain arbitrary letters and symbols (better don't ask... the world is big and weird :P), and that yeah, those numbers will not be processable by things like Twilio, but human users with local knowledge will know how to actually "dial" them...
But it needs to be a conscious decision, to consciously choose to not-validate and to understand that you give up the ability to 100% target phone numbers for things like 2-factor-auth later on.
Not "forget that phone numbers need to be validated" and then later go and say, "oh, let's make phone-based 2FA mandatory" or some similar user-interaction mess-up.
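That "consciously choose to not-validate" approach can be sketched in a few lines. This is a minimal, hypothetical example (the function name and the 7-15 digit heuristic are my own assumptions, not anyone's production rules): keep the raw input verbatim, derive a best-effort normalized form, and only *flag* whether the number looks machine-dialable, deferring any hard rejection to the system that actually needs it (SMS campaigns, 2FA):

```python
import re

def normalize_phone(raw: str) -> dict:
    """Deliberately lenient phone handling: never reject, only flag.

    The raw input is preserved verbatim (it may contain letters and
    symbols that local humans know how to dial); the normalized form
    and the machine_dialable flag are best-effort heuristics.
    """
    # Keep only digits and a leading '+' for a machine-dialable form.
    digits = re.sub(r"[^\d+]", "", raw)
    # Assumed heuristic: optional '+', then 7-15 digits (roughly E.164).
    machine_dialable = bool(re.fullmatch(r"\+?\d{7,15}", digits))
    return {
        "raw": raw,                    # always preserved as entered
        "normalized": digits,          # best-effort, may be unusable
        "machine_dialable": machine_dialable,  # gate SMS/2FA on this later
    }

print(normalize_phone("+1 (555) 010-4477"))  # machine_dialable: True
print(normalize_phone("0800-CALL-ME"))       # machine_dialable: False
```

The point is that the decision is recorded in the data model: a later "let's make SMS 2FA mandatory" conversation can start from how many stored numbers are actually machine-dialable, instead of discovering the problem in production.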
It seems that people are coming to you to help estimate how long an idea takes to implement. If that's the case, I agree with everything you've said.
But if they're proposing an idea, say a new sign up form to increase conversions, phone number validation is an irrelevant detail to worry about at this point (unless that was a significant problem with the old sign up form).
Whatever it may be, would a tool that educates and puts some of the cost back on the "idea person" or stakeholder be a good thing? I think it would.
So would I, as I mentioned in my reply to you. It's easy to propose ideas without regard to cost like a "minor enhancement" that takes 6 person-months.
There is no easy way for them to reveal to the user what gestures are possible (short of showing a palette of commands, including animations of gestures which are directionally sensitive), and no clear and wide separation of distinct gestures, so they're difficult to learn and remember, and their ambiguity leads to a high error rate. And they're not suitable for applications where it's not easy and inconsequential to undo mistakes (like real time games, nuclear power plant control, etc).
For example, handwriting recognition has a hard time distinguishing between "h", "n" and "u", or "2" and "Z", so systems like Graffiti avoid lower case characters entirely, and force you to write upper case characters in specially contrived, distinct, non-standard ways, in order to make them distinct from each other (widely separated in gesture space). It's important for there to be a lot of "gesture space" between each symbol, or else gesture recognition has a high error rate.
Graffiti is essentially a single-stroke shorthand handwriting recognition system used in PDAs based on the Palm OS.
Gesture space is the space of all possible gestures between touching the screen / pressing the button, moving along an arbitrary path (or none, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.
OLPC Sugar Discussion about Pie Menus: Excerpt About Gesture Space
I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing”, because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of “Reselection”, which means that as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compared to typical gesture recognition systems, like Palm’s Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and such systems only recognize well-formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), to return to the center to cancel, and to move around to correct or change the selection.
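That direction-to-selection mapping is simple enough to sketch in a few lines. This is a minimal illustrative model, not any particular toolkit's API (the function name, dead-zone radius, and "first item centered on north" layout are all my own assumptions): only the instantaneous direction from the press point matters, every angle maps to some item, and returning to the center cancels:

```python
import math

def pie_selection(cx, cy, x, y, items, dead_zone=10.0):
    """Map the direction from the press point (cx, cy) to the current
    pointer (x, y) onto one of `items`. Only the current direction
    matters, never the path taken, so the user can browse and
    reselect freely before releasing."""
    dx, dy = x - cx, y - cy             # screen coordinates, y grows down
    if math.hypot(dx, dy) < dead_zone:
        return None                     # back at the center: cancel
    # Angle measured clockwise from "north" (straight up on screen).
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    slice_size = 360.0 / len(items)
    # Shift by half a slice so item 0 is centered on north. Every angle
    # maps to some item: syntax errors are impossible.
    return items[int(((angle + slice_size / 2) % 360.0) // slice_size)]

menu = ["Cut", "Copy", "Paste", "Delete"]       # four 90-degree slices
print(pie_selection(100, 100, 100, 40, menu))   # straight up -> "Cut"
print(pie_selection(100, 100, 160, 100, menu))  # right -> "Copy"
print(pie_selection(100, 100, 103, 102, menu))  # inside dead zone -> None
```

Calling this on every pointer-move event gives you reselection and browsing for free: the highlighted item simply tracks the current direction until the button is released, and moving further out only increases angular leverage without changing the result.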