It's not a company. It's not a product. You're not being asked to buy it or buy into it, just to discuss the concept if you'd like.
This website is a portfolio piece for a 21-year-old university student hoping to find an internship. In my opinion, it's an impressive demonstration of his design and technical skills. It certainly says a lot more than the average 21-year-old's resume listing what courses they've taken so far.
A lot of things get "stuck" and for good reason. Steering wheel, gas, and brake. Sight, trigger, stock. Underwear, pants, socks, etc.
Personally, I think the NT team really nailed the concept of the desktop back in 1993 with 3.1. The start button which hid programs and allowed access to other functions was incredibly handy, a bit like how a desk has a desk drawer within reach at all times. The desktop as a dumping ground for whatever you choose and the system tray for misc stuff really, really works well, and with minimal cruft. Windows applications, unlike MacOS applications, integrated the menu bar directly into the application itself instead of making it a detached accessory of the OS. Right-click support also allowed a lot of dynamic functionality.
I understand that almost none of these specific features were their innovations, but they took a lot of half-cooked ideas and made something very, very usable.
It's a shame the NT team is never brought up like the Amiga or MacOS teams are. I don't think they've gotten their proper due. It's also interesting that MS ran back to the start button/menu only three years after letting it go. It's incredible how powerful the NT way of doing things still is.
One of us is remembering things incorrectly ... I am quite certain that the start menu interface debuted with Windows 95 and was then later adopted by Windows NT 4.0.
NT 3.1, 3.5 and 3.51 all used the old Windows 3.1 user interface.
The buttons on the left open windows containing applications and files, like the Resources window that is already open. On the right are running applications. The menu on the text editor has been opened with the middle mouse button; to complete the save action (for a first time save, or new destination) the icon can be dragged to a file window.
The desktop can have files, folders or applications pinned to it, but there are none in this screenshot.
The screenshot is as the OS was released in 1991. The first released version, from 1987, had similar functionality but lacked icons on the desktop ("pinboard").
(The other thing I liked about this OS was how applications were packaged. They were simply directories beginning with "!" containing, at minimum, an executable file called "!Run". This made it nice to explore, especially as !Run was often written in BASIC, and there was a BASIC editor and interpreter in ROM.)
That said, one of the most elegant and wonderful UX experiences I ever had was with Softimage, starting on SGI in 1993.
To this day I still feel they nailed user interaction wonderfully back then.
When you watch someone interact with a deeply complex application and see them just swim through using it - it's really great.
Like professional animators and both sound and video editors/producers etc.
I don't have any opinion at the moment on this, as I need to read all the comments and watch his video, but it's a well put together site.
Agreed...feedback can make all the difference...if you're the affected party crowd-source the feedback on your own...what's valuable?, what's simply old-school thought thinly disguising envy?, and what's in-between...lean toward the "valuable" and pay close attention to the "in-between" if you want to have a reasonable chance of moving forward...
This is a brilliant resume and he ought to be looking for gainful employment, rather than just an internship.
Presumably he wants to finish college, in which case an internship totally makes sense.
Could you elaborate on this? I don't feel that way at all when using any modern OS/device, but maybe I just haven't thought about it in the same way.
Using Apple devices requires a great deal of memorisation. For power users this is a relatively minor burden, but it is a serious issue for inexperienced users, or those with problems in cognition or memory.
People spend hours a day interacting with a computer in one form or another, they will memorize things.
PS: Remember getting annoyed when Microsoft went gaga over the 'ribbon' concept? I bet you don't think about that one much anymore.
Many of Apple's UI interactions can only be discovered through guesswork or reading the docs. If you open Launchpad often, nothing alerts you to the fact that there's a touchpad gesture for that. Unlabeled icons are rife in iOS and there is no discovery mechanism equivalent to a tooltip.
You can. And I do. And given your post, I assume you do too.
But did you know that most people don't?
I teach people computer skills (a lot of children, ages 8-12, but other ages as well). Only the very clever ones (those we can expect to welcome on HN in a couple of years ;-) ) figure it out by themselves. You'd be surprised by the in-your-face stuff people simply do not read on the screen. I then point it out, but it's about 50/50 whether they'll pick up the habit.
And it's the same thing with this interface the author has designed. It's all intuitions about how he uses computers, how people like him use computers, and how the self-selected group of feedback-providers use computers. Nothing about the general public. No research, no tests, no reference to classic UX textbooks and theory.
Now if he had stated upfront that those three categories are actually the only intended target audience, then at least he would have acknowledged that this is a possible problem, up-front. Not doing so, it appears he hasn't given much thought to it.
So, sorry but what I see is not a user interaction designer, but a graphics designer with a cute hobby.
If you think that's harsh, imagine he'd redesigned the user interface of your car. Would you trust the ideas after reading only mockups and "intuitive" justifications? Wouldn't you think: mm-mm, yes, nice ideas, but until you actually test them on a focus group they could go anywhere, and further effort is pretty much wasted until you do. And what if his driving style is rather different from yours?
Settings > Keyboard > Shortcuts > Keyboard
Settings > Trackpad > ...
OS X actually allows a lot of customization. You can assign a keyboard shortcut to any menu item in System Preferences -> Keyboard -> Shortcuts -> App Shortcuts.
This was invaluable when working on complex documents. Post-ribbon, all that's forcibly broken up across multiple tabs. Instead I get giant buttons for functions I don't use permanently taking up space, rather than an efficient array of those I do.
The shortcuts are consistent across all Mac apps, are discoverable (and completely modifiable) from System Preferences -> Keyboard -> Shortcuts, or from Help -> Search in any app.
Trackpad and mouse gestures are shown, with videos in fact, in System -> Trackpad and System -> Mouse.
When I look at Apple's UI design, a lot of it seems to be about interacting with the device and not interacting with my task. Partly this is due, I think, to the amount and types of integration of apps into the environment (via Cocoa.) And partly, it's the fact that Apple's UIs/APIs aren't as generalized and engineered as Microsoft's -- which is to say, as an end user, you can do what you need to do, but there's generally fewer ways to do it and when you do it's much more defined.
In many ways, it owes more to the Metro interface than it does OS X or iOS, for better or worse.
Yes, the webpage is beautiful and there are nice animations, but that's it; there is nothing else. It's a nice portfolio, though, but there is nothing to discuss when it comes to the UX/UI itself.
If he wants a job building marketing pages and videos for software, sure.
But if he wants a job as a UX designer, then going by the info on his sites, his design process consists of mockups and talking about them with his professors.
Seriously, if these are his skills, how the hell would he do UX design for an assignment when the target audience does not include himself?
Is your criticism that we shouldn't discuss things which aren't in the physical world yet, or that we can't discuss unimplemented ideas?
If you feel some need to be defensive about that, I'm sure that has a parallel in those discussions, too.
I'm actually a bit baffled. He's not just a student, he actually studies "interface design"!
"I am 21 years old and study interface design at the University of Applied Sciences Potsdam."
Of course just a list of what courses you've taken so far isn't very sexy or interesting. You don't need to list them explicitly. But if you do claim to study a particular field or industry, you should at least demonstrate that you have taken some courses (in particular on the topics you're passionate about), did interesting projects to complete them (there must be, even tiny ones), what/that you learned from them, why they matter.
Doesn't need to be very explicit, but there's literally nothing on this page, nor on his personal website indicating what he learned studying interface design. Drop some names, terminology, textbooks, works, important research. Show us how "I study interface design" means more than "I'm working to obtain a piece of paper that lets me claim I studied interface design".
1) reused assets from other companies (big no no, even for just a mockup) that are inconsistent with the overall visual style of the project
2) very implausible interactions that were clearly not prototyped (eg the 6 finger pinch... what?!)
3) work heavily inspired from other research projects and designs (10 GUI, various tiling window managers, etc) without showing a clear unifying interaction model. This is more of a patchwork of loosely related ideas, and to me doesn't show clear, deep understanding of prior work (both academic and commercial) in the covered areas.
4) awkward copy that jumps from marketing-style speak to explaining interactions from an analytical point of view. This is the most minor of all points but just hurts the polish of the piece.
So this tells me that the candidate is not a strong graphic designer, not a strong prototyper, nor a strong researcher. He's looking for an internship, so one of the three with promise in the other two would be sufficient, but that's not apparent enough to me here.
Hope that's constructive - I see a fair number of such portfolios/resumes every month.
EDIT: lots of hate on my comment below. The main point seems to be "well you're not totally wrong, but he's a 21 year old intern". Sure, although that doesn't change any of my feedback. I am assessing the work as it stands on its own merits, independently of the designer's age. There is certain work to which 21 year old interns don't have a lot to contribute, such as redesigning entire OS user interfaces. When it comes to assessing a designer's skills, I prefer to see a series of small, focused pieces rather than one large, sprawling one (the latter is much harder to pull off well unless you have tons of experience). I'm looking forward to seeing how his work has evolved a few years from now.
The concept and presentation of it are definitely impressive, especially for someone who's only 21 and still looking for internship-level work. He's easily in the top 5%, probably top 1% of designers, in terms of being able to see a complicated concept through from ideation to execution.
#1 — There's nothing wrong with using assets from other companies like Amazon, Facebook, Wikipedia, etc. in a concept mockup. If anything, it's nice to see realistic assets being used instead of stock junk that won't actually work in real-world situations.
#2 — Yup, the 6-finger-pinch would probably be too convoluted, I agree. But that's easily replaced with a regular pinch. Easy to get little details like this wrong in the scheme of things, especially for the more minor gestures.
#3 — Who cares? If anything this shows that he knows about what's out there in terms of prior art, and is able to build on it in an inspiring way. That's a good sign in my book, not a deal breaker.
#4 — The fact that he was able to put together this page, with all of the marketing copy, and the well-designed screenshots (with zooming), etc. already shows that for a visual and interaction designer he's way ahead of the curve in terms of marketing.
I'm not saying the concept is perfect, but it's definitely well presented, and well thought out. Add in the fact that the designer is 21 and looking for an internship, and he is an absolutely obvious candidate for a phone screen. I'd be surprised if he isn't able to land internships with any of the big tech companies (Apple, Google, Facebook, etc.) with this piece as part of a larger portfolio. He's clearly good at coming up with, polishing, and communicating ideas.
You may have a different assessment of the candidate's potential value as an intern. The parent clearly stated "Here's feedback as to why I'm not sending this website to my studio lead to schedule a phone screen with the designer".
Making assumptions about the parent's intent provides no value to the discussion.
The former adds nothing to the conversation and is pretty close to being a personal attack, the latter provides feedback and (may) prompt constructive debate about how to provide critique.
Just a list of things that are "wrong," according to the poster.
Parent comment supplied criticism – I'm not arguing that. But s/he did not at all help to reveal a path towards a better outcome. That's what would make it constructive.
Sure, but perhaps I'd be more surprised if that internship had to do with UX/interaction design. Maybe marketing, or webdesign.
If I were to make a drawing of a house, even a detailed rendering, and made a website about why I would like such a house, why it would be a nice house, that wouldn't make me an architect.
Your point about the larger portfolio is an important one. Where is it? At 21 he must have been studying this field for 2 maybe 3 years?
The site does look very neat and well-polished. That is impressive, no matter what. Clear indicator of talent. Whether that talent is UX design (people keep talking about "design" in this thread, as if it's some general thing), he does not demonstrate: no tests, no design documents, no research. Except he does claim to have focused on research, yet no word about what this research entails, the process, how it went, what he found out.
So I'm going to say, he's got great talent, for marketing shiny things to the general public. And if you think about it, it makes a lot more sense that someone great at marketing reaches the top of HN, than someone great at UX design? ;-)
I'm not a graphic designer nor do I hire them, but your conclusion seems overly dramatic. Perhaps this prototype does not meet your high bar, but that does not mean that the candidate overall is not qualified. It just means that this particular work did not demonstrate these skills to your satisfaction. The candidate might have these skills and dismissing them after one prototype seems to be setting yourself up for many false negatives.
4 critiques is not bad for a student/internship piece. Most students simultaneously lack deep knowledge of prior work and are not yet a strong designer, prototyper or researcher, because those things simply take time/experience to develop.
However that's not an issue, since he's a student. Experience and further learning will make him absolutely exceptional in a single area or multiple ones.
However that's speaking in comparison to everyone else in the world.
In comparison to other students, he's done a fairly good job.
He's just being a jerk to be a jerk.
*not all design people, just a certain flavor
2. "concept art" for a tiling wm makes no sense, because as you point out, it is first and foremost something that is work focused. This is why you build functional prototypes.
Concept art for a tiling window manager certainly does make sense. Especially if you want to define the appearance of one.
DesktopNeo has some more promising ideas (there's some really good thought iterating on full-height window workflows) and some less promising (some of the more awkward gestures), but it's a strong showing overall.
Just curious ... how many of the folks that you have sent to your studio lead have had their project at the #1 spot on the front page of HN with 100+ comments ?
The candidate is 21 years old and looking for an internship. Would you really expect them to be strong at any of those. It seems he/she does have some strengths doesn't it? Would it not be useful to focus on those and see how the others could be developed?
Your reply really doesn't come off as an attempt to be constructive. It comes off as someone with a bit of knowledge about the field wanting to demonstrate that for strangers on the internet by tearing someone's work down.
There's obviously a strong foundation for the hard skills (like visual design or prototyping or copywriting) that can be easily built upon in the field. That's the whole point of an internship: find a hard worker with a good foundation, teach them how to succeed in your company.
Not sure what the hell parent comment's problem is.
Undergrad final year projects demand a similar investment of time, effort and view of the big picture. It's something we do in the UK - not sure about the US and others.
I am by no means a designer, but from an outsider's perspective (after all, designs are consumed by regular folk like myself, not necessarily other designers) Lennart's work seems pretty well done. Especially considering that he is at the very start of his career.
Oh please, he's certainly a strong graphic designer, by the looks of the website he certainly has web development abilities, and if you want people with so much previous experience, I don't see why you even feel the need to rip this guy apart.
I have my own complaints with parts of the design, but overall it shows a lot of hard work, a lot of great ideas, and a hell of a lot of creativity. He'd be incredibly valuable for any company lucky enough to pick him up.
Even snuck in an edit to try and disarm future criticism by preempting it. Bravo.
It basically boils down to: you are not good enough. That's hardly constructive criticism at all; that's just criticism.
I will gladly forward his portfolio to anyone looking for an intern. It should be fairly easy to assess in an interview whether this guy is the real deal or not. And if he is, he's got a great future in front of him.
Whatever this place is, they must have standards even higher than Apple's or Google's.
This is important, and often separates the good from the bad...can't count the number of websites I've visited, or products I've considered buying, where I was stopped short because of marketing that appears to have been hastily constructed, or that undervalued consistency in presentation...
That always puts my head on a swivel--are the ideas being presented, the product being sold, etc., of actual value, when the front-end appears thrown together, or carelessly edited?
You don't do an admitted novice a favor by pulling punches...they deserve honest feedback...they can aggregate opinions, and come to their own conclusions as to how to move forward...that's crowd-sourcing in a nutshell, right?
This is the nature of Hacker News recently: people post as soon as possible to demonstrate their knowledge on a related subject. I've twice recently had articles posted on HN where a commenter loudly ranted about the article missing something that's covered explicitly in the first 3 paragraphs.
dang how about adding 'RTFA' to https://news.ycombinator.com/newsguidelines.html ?
Why is everyone out for the blood of the desktop/windows paradigm? Why can't the simplified-tablet market and the desktop-power-user markets coexist? Windows and taskbars and start menus are wonderful and my favorite way of interacting with computers, why must it be taken away?
But with i3, I have keybindings to "open the last browser window i used" or "open the last gvim window i opened." I can tile them in different workspaces as I please extremely efficiently and quickly, and move them around as needed. I can move much faster using the keyboard than when I'm in my Windows machine. Having this tiling ability on a tablet seems odd to me.
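As a rough sketch, bindings like those look something like this in an i3 config (the key choices and window class names here are illustrative, not the commenter's actual setup — check real classes with `xprop WM_CLASS`):

```
# jump straight to a window by class, wherever it lives
bindsym $mod+b [class="(?i)firefox"] focus
bindsym $mod+v [class="Gvim"] focus

# move the focused window to another workspace and follow it there
bindsym $mod+Shift+2 move container to workspace number 2; workspace number 2
```

The criteria syntax (`[class="..."]`) is what makes "focus the last gvim window" a single keystroke rather than an alt-tab hunt.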
I am kind of getting tired of the "attacks" on the traditional desktop interface. People just don't want to admit that the mere existence of tablets doesn't make them a superior interface. Just a different one.
The interface they propose is beautiful but it would really have to walk a thin, thin line between usability and screen space.
I do like the idea of a tag-based filesystem though. Obviously then you get into the problem of managing your tags (a tree-based filesystem solves this problem naturally).
In fact, a main reason I use i3 is because it is literally the least ideological tiling window manager when it comes to mouse use. The productivity gains I get out of tiling don't come from using the keyboard, they come from not having to deal with the (imo, especially in the post-1024x768 era) very messy overlapping window model. Things are where I expect. I can move them from one screen to another and sane things happen. Plugging in and unplugging a display doesn't make hash of my organization. Things are never buried under incomprehensible layers. And so on.
I agree, it was one of the things I was missing back when I was using i3 + Ubuntu as my daily machine. Now, I'm using OS X with Amethyst, which while it's less powerful than i3, combined with OS X's window management features I honestly feel like I get the best of both worlds. It's quite great, to be honest.
This sounds interesting. Would you care to elaborate?
I think the broader problem is that there is this awkward question of where tags belong in the application hierarchy. Right now most media files embed tags while text files do not. Binary files certainly do not, and folders are not true files and thus are not on-disk things to put tags on; they are usually filesystem constructs in a data tree somewhere.
Your file indexer could maintain the tags database, but then they are not file-portable. They could be part of the extended file traits in your filesystem, but then filesystems lacking support (including online uploads) would drop them. Thus we are at a point where tagging is done on a per-format basis, and thus some files are just not taggable but instead require indexing.
You would probably want to throw a semantic relevancy engine on top of that to associate vocabulary with similar terms, so if i search for food i can get things like cookie as a related search term.
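A minimal sketch of what such an indexer-side tag database with naive synonym expansion could look like (the synonym map and file paths are invented for illustration; a real engine would use something smarter than a hand-written dictionary):

```python
from collections import defaultdict

class TagIndex:
    """Toy tag database: maps tags to file paths, with naive synonym expansion."""

    def __init__(self, synonyms=None):
        self.by_tag = defaultdict(set)   # tag -> set of file paths
        self.synonyms = synonyms or {}   # query term -> related tags

    def tag(self, path, *tags):
        for t in tags:
            self.by_tag[t].add(path)

    def search(self, term):
        # expand the query with related vocabulary before looking up
        terms = {term} | set(self.synonyms.get(term, ()))
        hits = set()
        for t in terms:
            hits |= self.by_tag.get(t, set())
        return sorted(hits)

idx = TagIndex(synonyms={"food": ["cookie", "recipe"]})
idx.tag("/docs/choc-chip.txt", "cookie")
idx.tag("/docs/lasagna.txt", "recipe")
idx.tag("/docs/tax-2015.pdf", "finance")
print(idx.search("food"))  # finds both food-related files via expansion
```

Searching for "food" returns the cookie and recipe files even though neither is literally tagged "food" — which is the relevancy behavior described above.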
I will try it soon with ranger; I think managing tags with it won't be a problem, as you can call the shell where you want.
Ditto, I use tags heavily for web content and photos and would love to do the same for local files.
Seems that it might be possible to overload soft-links to do the job - create a hierarchy of tags and then put a soft link for each file in the relevant place. That way you could use current filesystems and filemanagers whilst you develop the concept and apps.
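A quick sketch of that overloading idea, assuming a `tags/` tree of soft links alongside the real files (all paths here are throwaway temp directories for illustration):

```python
import os
import tempfile

# Real file lives in its normal place in the hierarchy.
root = tempfile.mkdtemp()
real = os.path.join(root, "documents", "report.pdf")
os.makedirs(os.path.dirname(real))
open(real, "w").close()

def tag_file(path, tag, tagroot):
    """File a path under a (possibly hierarchical) tag as a symlink."""
    tagdir = os.path.join(tagroot, tag)
    os.makedirs(tagdir, exist_ok=True)
    link = os.path.join(tagdir, os.path.basename(path))
    os.symlink(path, link)   # the tag entry is just a soft link
    return link

# Hierarchical tag "work/2016" becomes a nested directory of links.
link = tag_file(real, "work/2016", os.path.join(root, "tags"))
print(os.path.realpath(link) == os.path.realpath(real))  # True
```

Because the tag tree is just directories and symlinks, any current file manager can browse it while the concept is being developed.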
Someone mentioned file searches in Ubuntu (using Baloo), but the default file manager needs an interface to apply tags as an MVP of file tagging.
The problem with traditional hierarchical organization is that objects often belong to multiple categories. With hierarchical tags, that problem disappears because you can easily assign multiple tags to the same object.
Actually, we can already do this with hierarchical filesystems and hard (not soft) links.
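For example, with hard links the same file can sit under two category directories with no "primary" copy; both names point at one inode (paths below are illustrative temp dirs):

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "invoices"))
os.makedirs(os.path.join(root, "2016"))

original = os.path.join(root, "invoices", "acme.pdf")
open(original, "w").close()

# Hard link (not a symlink): a second directory entry for the same inode.
alias = os.path.join(root, "2016", "acme.pdf")
os.link(original, alias)

same = os.stat(original).st_ino == os.stat(alias).st_ino
print(same)  # True: one file, two directory entries
```

The usual caveats apply: hard links can't span filesystems and (on most systems) can't point at directories, which is partly why soft links get used for this instead.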
I already do this to some extent with my 11,000+ bookmarks on Pinboard. Entering the Usenet-style tags is really easy thanks to autocomplete, which also helps me avoid ending up with misspelled tags. Pinboard currently lacks searching by tag prefix, but it's trivial to script it locally using the API.
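The local scripting part can be as small as a filter over an exported bookmark dump. The sketch below assumes Pinboard's v1 `posts/all` JSON export, where tags come back as one space-separated string per bookmark (worth re-checking against the current API docs); the sample data is made up:

```python
import json
# from urllib.request import urlopen   # uncomment for a live query

def by_tag_prefix(bookmarks, prefix):
    """Return bookmarks having at least one tag starting with `prefix`."""
    return [b for b in bookmarks
            if any(t.startswith(prefix) for t in b.get("tags", "").split())]

# A live fetch would look roughly like (auth token elided):
# raw = urlopen("https://api.pinboard.in/v1/posts/all?format=json"
#               "&auth_token=USER:TOKEN").read()
# bookmarks = json.loads(raw)

sample = [
    {"href": "https://example.com/a", "tags": "usenet comp.lang.c"},
    {"href": "https://example.com/b", "tags": "cooking"},
]
print([b["href"] for b in by_tag_prefix(sample, "comp.")])
```

Prefix search then works for free on the Usenet-style hierarchical tags: `comp.` matches `comp.lang.c`, `comp.os.*` and so on.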
I don't want to replace the complexity of the desktop with interfaces that are easier. I want them to be better, to be more efficient for professionals.
Concepts like panels, tags and touch input are probably way harder to understand than windows, folders and mouse input. But it doesn’t matter, because professionals spend so much time on desktop computers that it's worth the effort to master the interface.
This is a good goal. But gestures and multitouch will not lead you on the path to that goal. Consider tiling window managers like awesome or i3, that already today help programmers (myself included) be more efficient. They achieve this by reducing time-consuming dragging, dropping, resizing and clicking actions either by replacing them with keyboard shortcuts or simply by automating them away. A UI where you have to move your hands even further (distance keyboard-screen > distance keyboard-mouse) will actually decrease productivity (and likely cause RSI problems as well).
I realise this is a mockup/concept design. But if you want to pursue the idea further, I very much recommend using a tiling window manager for an extended period of time, to learn from that paradigm.
This is a very powerful paradigm to pursue: An intuitive version of tiling WMs that appeals to everyone. Go, ziburski, go!
Back when I went to Gmail from a folder type email representation, it was hard at first. But search proved superior. Basically, a lot of organizational time investment had no return. Getting better at search always has a return, and you save all the laborious folder management.
I've taken to one or two big directories filled with reference material, docs, pictures, etc... search generally pays off.
Smarter tagging is likely to work in similar ways.
But when I'm not trying to write code I can see this working.
There are a lot of attempts to redesign the desktop or do some clever thing to increase productivity, but all of them tend to forget the fundamentals of interacting with the computer. They forget why what we have is successful, and seem unclear as to what they are trying to move towards as an improvement.
That probably seems like a bold statement, so let's look at a few things.
Let's look at why something like paper is still around. Do we care that one piece of paper might hide behind another in a stack of papers on a desk? Is that a terminal failure of the medium? I'd say no. Just like windows hiding others, it's not a fundamental issue that needs solving.
Why else might paper be successful? I'd suggest it's not the paper itself; it's the way we interact with paper. We don't use our hands to mark the paper, we use tools: pens, brushes, pencils, chalks and all manner of things. Paper is a tool that allows us to get the best from a range of our 'marking' tools.
Likewise, if we are to interact with the desktop via touch, it stands to reason that to get the best from the experience we need a range of tactile tools to do so. We need our keyboards, our mice, our touchpads, touchscreens and styluses. Concomitantly, we need our desktops to allow us to get the most out of these interaction tools.
To my mind, the problem with the desktop isn't that it needs a fundamentally more restrictive use pattern, it needs a better use of the available control mechanisms.
Let's think about why Vim is so enduring and successful. Vim does only one thing, manipulating text, and it has a mode of operation totally alien to many who've used a text editor. Yet for decades now it's what many gravitate towards as the pinnacle of interacting with text on a computer (shout out to Emacs folks, this is just an example!).
Vim's method of operation seeks to ruthlessly release the power of the keyboard, offering huge benefits in terms of productivity and manipulation.
What is your design seeking to ruthlessly release from the power of our interactive devices? Why are our existing tools better for interacting with your design than with a traditional desktop? What power are you releasing from them?
All of this is to say that whilst you may redesign the desktop and have clever (and useful) schemes to increase productivity, these designs don't pay enough attention to the things we use to interact with the computer, and so don't offer true lasting benefit and therefore don't catch on or endure.
If you redesign the way the desktop works, you need to either re-imagine the tools we use to interact with it or take measures, without pity or care for convention, to release the power of the existing ones.
Apple's done an interesting job in this area in its move to bring touch gestures into the control of the desktop, but I don't think the opportunity has been fully realised.
- having to use a mouse (or, God help us, a touchpad) is annoying when you have to switch between it and the keyboard
- using the mouse is slow
- windows on traditional desktops take way too much space (compare with how much better they look on tiling WMs)
- ALT+TAB (and equivalents) suck - you can only switch back and forth between two applications; anything else requires paying attention
- window locations and sizes are ephemeral, which again is annoying when you want to set up a good working environment
- wheel menus are awesome and the fact that pretty much nobody is doing them baffles me
Tablets suck for creative work, I agree, but the way I see this project is not as a nod towards touch interfaces. On the contrary, it's about using sane(r) window management, combined with tags, eye-tracking, voice control and wheel menus to minimize the amount of effort you need to make to issue commands or search for things. It sounds like a perfect addition to keyboard-driven work style.
Existing desktop workflows may not be improved, but existing custom-control-surface workflows can be made vastly cheaper and more flexible, consolidated onto relatively commoditized hardware, and in some niches this is happening.
I think there are definitely strengths and weaknesses to the use of touch-oriented control surfaces. On the negative side, they require you to look at the control surface a lot more often than a setup featuring jog wheels and sliders does. But on the positive side, you can customize the control surface and get a more "analog" level of control compared to pointing and clicking or typing.
There are a lot of less expensive options that use more common USB interfaces to map hardware controls to software functions which can really help if your only other option is that pro (read: expensive and proprietary) gear you mentioned. Still, you're always limited by the number and layout of pads, buttons, sliders, and dials that can be mapped to software functions.
I've played around a bit with software like TouchDesigner which is meant for creating custom touch interfaces and programming the ways that each control affects parameters in software. It's like a more open and flexible alternative to the old Crestron and similar systems where you could define the control surface in software but were limited to their processors and peripherals.
It's something I don't really do much in my work (since we use those aforementioned Crestron-type systems) but in my spare time, I like playing around with using software-defined interfaces to make interactive audiovisual projects.
It maps well to the sorts of things you want to do with a tiny device in your pocket, but it's not suitable for producing anything other than selfies.
This is the hole in the "mobile is the future" argument. Mobile (as it exists today) is only the future if nobody has anything substantial to say or create and we are all just passive consumers.
This is really great desktop UI work and I do hope someone takes a look and gets inspired. It would really be nice if Linux desktop efforts stopped trying to deliver Windows 95 or early versions of MacOSX in 2016 and actually innovated.
A lot of tiling problems on the desktop would disappear if monitors were two or three times (or more...) as large on each side and wall-mounted.
Being able to see most of your windows at the same time and work in any one without moving or switching would be revolutionary.
I'd never heard of 10/GUI. I don't particularly like the windowing scheme, but I think a ten-way touch interface which flipped between being a multi-pointer and a text keyboard - possibly with a touch point in one corner to switch modes, and not that show/hide thing tablets do now - would be a very interesting thing.
It wasn't practical when 10/GUI was first discussed. It's getting more and more practical now.
>It would really be nice if Linux desktop efforts stopped trying to deliver Windows 95 or early versions of MacOSX in 2016 and actually innovated.
Any innovation has to be much better than current practice. If it's "interesting, but..." it's not enough to get people to switch.
There are window managers that can do this, but imho none as simple as Blender.
The OP had a similar thing with the radial marking menu, though I've shied away from that approach since it doesn't scale with lots of icons/options unless you do hierarchies - also I simply wanted to try something different. One random thing I stumbled upon yesterday is an old Autodesk Research video testing these against other methods: https://www.youtube.com/watch?v=YHZB0d20640
The answer to designing complex, professional-level applications in that paradigm is "make it fullscreen, and use a keyboard and mouse". Tablets just don't give you the precision and screen space to work with extremely complex tools.
I think Windows 10 actually manages this well. I'm using a Surface Book, and despite its various issues, this tablet/notebook hybrid actually works quite well. Plug the screen into the base and I have a full blown, high res screen with a keyboard and dedicated GPU for my coding/Photoshop/Blender. But if I just want to consume, I can pull the screen off and sit on the couch.
Honestly, I think a large part of it is that Microsoft dominates the desktop market and is failing in the tablet market.
Google quickly became an obviously superior approach.
Back when Yahoo! was a categorized directory of sites, my computer had probably 1GB of storage. I didn't have a digital camera or many thousands of email messages. I didn't have as many documents that I either created myself or was given by someone else. I didn't have ebooks. Or mp3 files.
The hierarchy that we have is not necessarily a given and even if it exists on disk, the UI metaphor does not need to be tied to that paradigm. Perhaps there is a better way for the things we do today?
a. Hard to access the data outside the single UI.
b. Performance of data import/export trails raw folders/file systems.
c. Hard to figure out how to atomically back up the content in an incremental way.
d. No programmatic access.
For example, "Panels use screen space more efficiently and are a more elegant way to multitask than normal windows."
Says who? You? It just drives me crazy when I see statements like these. It's not an effective way to get your point across. You want to make your case for something like that? Actually make your case. Present some evidence and your conclusions. Not everything has to be a Jony Ive marketing video.
edit: Last thing I'll say - I just re-watched the video, and caught the last line - "it rethinks desktop computing to help you get work done". My advice to the author - go work for various companies for 5-10 years, and then come back and see if that statement holds up. My ability to get work done would be crippled with this; in fact, my work would come to a grinding halt. "Work" just doesn't work in the kinds of idealistic ways these types of marketing-like videos always seem to show.
And don't get me wrong, I like the author's ambition. If I was in the internship-givin' business, hell, I'd probably consider him. I think this is a good way to get your name out there, even if it attracts criticism (like mine).
Most of the time, design rationales seem like they are based on the subjective POV of the designer rather than conclusion based on recorded feedback.
That's how he knows, I'm sure. He wouldn't just be making that stuff up out of thin air, and surely the "research" and "design process" did not consist exclusively of the previous iterations of mockups we can see on that page.
Certainly he's also written the technical design document that describes a clearly outlined goal at the start, works that out referring to well-known theories, studies and previous research done in the field of UX design.
Obviously the statements about efficiency have been corroborated by hard numbers obtained from use cases and user studies--done on people other than himself or his professors--about discoverability, efficiency, and user tasks and scripts (as described in the literature he must be familiar with). He's a student of interaction design at a university. So he must certainly know that research implies science, that interaction design involves actual social studies, and that building mockups, animations, and a gorgeous website with a demo video is only a tiny and not even that important part of the field.
Certainly it can't be that this guy is in fact quite talented at marketing and that's why he's currently at the top of HN.
Right now, switching between projects means setting all that up every single time.
On top of that, I really don't want all of my software always open in a separate desktop. There is no need for me to keep Photoshop open, using resources, while I have transitioned from work to play time. I'd much rather have a "working" session saved and a "gaming" session saved where-in different programs are auto opened when I start a session.
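To make the idea concrete, here's a minimal sketch of how such saved sessions might work: a named session maps to the programs to launch when it starts. All names here (the session labels and program commands) are invented for illustration; the launcher is injectable so it can be tested without spawning anything.

```python
import subprocess

# Hypothetical session definitions: each named session lists the
# programs to launch when that session starts.
SESSIONS = {
    "working": ["photoshop", "code", "slack"],
    "gaming": ["steam", "discord"],
}

def start_session(name, launcher=subprocess.Popen):
    """Launch every program registered for the named session.

    `launcher` is injectable so the logic can be exercised without
    actually spawning processes.
    """
    return [launcher([program]) for program in SESSIONS.get(name, [])]
```

A real implementation would also need to restore window positions and document state, which is the hard part; launching the right apps is just the first step.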
I think this is a problem with Notepad++, because I can move around single instances of all other apps I tried (Chrome, VS, etc).
>On top of that, I really don't want all of my software always open in a separate desktop.
Ideally the user should not need to worry about "open" apps and the resources they are taking up just by being open (not doing any work). It is the app's and the OS's job to ensure that background apps do not use CPU for doing nothing and the memory management is good enough that you always have enough memory for what you wanna run NOW. And I think that is quite good in Windows 10 and OSX right now. I can have multiple VS projects, a couple of VMs, multiple SSH sessions, Outlook, couple of Office docs, ~30 tabs across several Chrome windows open and I can still create a new Desktop and play some Need for Speed when I reach home.
Being able to specify a workspace state and return to it is hugely significant. But that requires deep stateful awareness of pretty much every app, including terminals, and the command / console-mode tools running within them.
But at the higher level, I'm liking what I'm seeing here.
If you were to use one VM per project, though, one obvious drawback is that you would have to maintain a whole bunch of OS installations. It's also a bit tricky to be disciplined and use the correct instance of an application if it exists in both the host OS and the guest OS (even more so if you are using seamless windowing mode). Firefox, for example, I prefer to use in the host, because that one has all my extensions installed, but that means all the project-related tabs I open will spill over into the next session when I close the project and switch to play mode.
Last time I checked they were neither virtual desktops nor IDE workspaces; it was hard to think of an activity as a container that holds together all the instances of data related to a project.
They interacted with open applications in weird ways, rather than having one instance of the application associated always to the same activity as you'd expect.
You should check out atlas.co and sign up for the beta for an invite. I'm currently working on this project, and it is exactly aimed at the problem you're describing.
While feeling the shortcomings of a window system, I never felt like I could productively work with a tiling manager, since I frequently overlap windows as a mechanism for easy access ... and there are lots of popup windows I don't want to cover the main application.
Oh yes, and I purposefully keep my Mac apps out of Full screen mode (and use a keyboard shortcut to maximize when needed, keeping out of full screen).
Part of the idea of most tiling window managers is that you use the keyboard to control them, not the mouse. As such, you no longer need to overlap windows to quickly switch to the one you want. You have keybinds to do so (which actually exist in the default Windows and OSX window manager as well.)
I personally hate the basic tile schemes, and full screen in many of my use cases. Yes, when I'm using one app hard, full screen is good.
When I'm using multiple ones, I often want to copy paste info, or just refer to it. Auto pop to foreground is toxic, it forces a change in state when there is no need for that change most of the time.
On IRIX, focus follows mouse, and middle button copy paste were awesome. Still prefer that over other schemes.
Many people get a second screen to accomplish what can very easily be done with one a lot of the time.
Tablets are fine consumption devices. Some creation happens there, like graphics, but most creation has UX requirements that exceed the simple touch sweet spots.
I don't use nor like tiling much, due to overlaps and how I find myself using them.
Some here are saying more iteration could improve on that. Maybe so. I'm sure not opposed to giving new things a go.
EDIT I might be confused here. I'm reading that you are assuming that one has to use a tiling WM to get one of the features you want.
I do prefer overlap with those features, among others, present.
My hunch is that it's just lacking a good amount of iteration. Our modern windowed desktop didn't come out ready either. So give this some love, and I'm sure something really cool and usable can bootstrap from this.
With today's technology and established interface patterns from mobile, I believe it's feasible to take the ideas of tiling window managers beyond a niche market.
Second: less-than-full-height windows can be quite useful. Being able to specify a "stack" into which, say, I place a video (or video-chat) app, various A/V controls, and possibly some monitors or other smaller windows, perhaps a shorter terminal window or three, is easily something I'd like to do.
With WindowMaker I can _arrange_ windows in a pseudo-tiled state. Unfortunately it's too easy to accidentally drag one off somewhere. Generally, though, this can be quite handy.
What I'd really like to be able to do is to specify roles for certain window locations. Say, a browser, image editor, and audio editor all occupying the same larger window space, which I can tab between.
And I think you're quite right about applicability of tiling metaphors.
To me, an exquisitely blind-accessible GUI should function like a fancy keyboard-navigable/editable graph data structure that echoes the hierarchies and relationships represented on the sighted displays. The biggest boon to such a GUI would be that the blind-accessible controls would function as an "expert" mode of navigating the GUI - one would never have to touch the mouse or trackpad to get stuff done.
Sighted users would benefit from learning these keyboard controls, and we'd inject some hyper-productivity back into our apps to counter the Fisher-Price-ification that has been creeping into GUIs over the past 10 years.
A three finger tap or swipe is not easy for everyone. I'd like to see more (digital) navigation based on actual (physical) navigation -- waypoint labels and directional paths.
"We now use smartphones and tablets most of the time, since they are much easier to use."
No. Just no. I don't want to use a touchscreen and closed ecosystem to develop software. That would be a nightmare.
The rest of the design seems to be taken straight from 10/GUI.
And yep, the first part about using panels was very much inspired by 10/GUI. I am linking to that concept on the website, and also talked to Clayton Miller before publishing Desktop Neo.
Maybe your article needs to immediately clarify who you are and the intent of the design, and maybe differentiate between a design intended for producers vs. one for consumers.
Thanks for sharing!
Does the kind of large multitouch surface with flyover support needed by 10/GUI actually exist? Because I think I want one.
Interesting. I have the opposite impression. I think _customization_ is the key for real productivity. The more customizable the system the more productive I can be with it.
Phones and Tablets are easier to use as long as we deal not with more than two apps at a time. The Neo GUI is an interesting idea but what productivity for? As for me, I prefer a window pager where I can address any window instantly with just one mouse click. Swapping between several pages all the time would confuse me.
> "Windows are now inefficient and incompatible with modern productivity interfaces. ..." and "Window Management is Outdated".
No, surely not. I would always prefer a PC with a customizable GUI (KDE and OpenBox for instance). I barely use my phone and pad because they are only useful for basic things.
Isn't this the kind of thinking behind Windows 8?
I have seen plenty of consultants hired, but nearly every startup I've seen or been at/around (~20) prototyped their designs in house. As an example bu.mp hired an industrial design firm to help them make physical prototypes of a POS competitor to NFC technology (which ultimately failed), but had their own engineers and designers actually create and test the working prototypes.
Shortcomings notwithstanding, I think this is an example of excellent work and a creative way to find an internship. Given the opportunity, I'd most certainly offer this guy an internship if I could.
Nice job. I wish you lots of luck.
I'd like to try something like this in action; but the problem, as always, is going to be bootstrapping. Look how badly Ubuntu manages something as simple as putting the application's menu bar in a non-standard place.
...I worked once with a desktop environment for the PC, GEOS. It had a feature where your application's UI was described in logical terms and this was then mapped to a physical UI when the app loaded. It allowed pluggable look-and-feels to drastically modify the look and behaviour of the application as they saw fit.
If we had something like that, this would be easy. Shame we don't, really.
With respect to eye tracking, I had a similar idea the other day. Imagine holding a key, then moving your gaze to see an on-screen target follow where you're looking. You could use this as a really quick way to scroll or highlight/copy text without leaving your home row.
I really don't like leaving my home row.
I may be crazy.
A few thoughts there:
1. What happens when my mental schemas change in a year? The word I use to look something up changes, and I suddenly can no longer find it.
2. What happens when I get a little lazy in obediently tagging everything I create? Imagining the 2 or 3 words I'll want to use in the future to look something up (see the first thought) is really tough. Mentally taxing = a barrier to adoption.
3. Folders can get unnecessarily deep, stale, etc... but having the structure available to browse can be an extremely useful trigger in reestablishing the hallways of my desktop-stored "mind palace."
4. Having many ways to discover information I'm looking for > having a few ways. Search, browse, categorize, all have a purpose depending on the way a file or piece of information imprinted on my memory.
Currently, Firefox bookmarks are the main place I use tags these days. Bookmarks so quickly become quite a mess! I really like tags for the ability to have a mess, quickly add things without having to think too hard, and of course still be able to find stuff afterwards.
I had a quick excursion using Chromium for a few weeks--does it even have tags in the bookmark DB? They weren't picked up when I exported from Firefox. I went back to Firefox because Chromium's address bar doesn't quite search my bookmarks and history the way Firefox does and only displays the top five results or so. Not something I can depend on to find things again.
I really want to get back to having a load of tags on my music collection. Used to have that in Amarok, but I lost the DB many years ago. So useful, for custom playlists, weird personal microgenres that make sense only to you, tagging tracks you almost certainly don't want to hear if you put some album into a larger playlist (Daft Punk - Touch :-P) etc.
Anyway. Your thoughts:
1. Yes. For that, I imagine a powerful and snappy interface to organize tagged stuff. A bit like those automated MP3 ID3v2 tagger tools, but a bit more general. Should allow for mass re-tagging operations like: add tag #newthing to all objects tagged #onething + #otherthing but not #notthisthing. With the title matching "* about things". Only for objects (created) older than 1 year (because my mental schema also changed the way I used the #otherthing tag).
That's only power-users I'm afraid. I have no idea how to solve this for regular users. Although I've seen motivated/determined "regular" users structuring years of digital photos with folder systems in ways that maybe they wouldn't shy from such a tool as long as it's intuitive to use.
2. This is a big shortcoming of Firefox's tagged bookmarks. Back when I used del.icio.us (also many years ago), there was a bookmarklet I could use to add a site to my del.icio.us bookmarks. The greatest thing about it was that it would predict/suggest tags for your bookmarks, I miss this so much. It did so in two visually distinct ways: First, predictions on your own bookmarks, afaik it was just a set of tags that you commonly used in conjunction with the one or two tags you've typed so far. Maybe it also matched keywords in the title and url. If I were to implement this I'd sort them by Bayesian P(#newtag|title-url-keywords-tags-typed-so-far). The second set of tags were suggestions based on what other del.icio.us users had tagged that url (less useful and less privacy).
3. I think you could have both? But most importantly, what helps me a lot to navigate and orient myself in this tagged mind-palace (nice analogy btw), is the ability to not just sort and slice your database by tags and combinations of tags, but to also just be able to browse everything sorted by time. I find this in the photo gallery on my phone, which is an utter mess, but if I really need to find something, I switch to by-date view (groups by month). Even if I don't exactly recall what month a photo was taken (or it could be a picture that I saved from Twitter, or maybe that someone sent me by IM), scrolling through I see pictures through time and usually I quickly get a familiar feeling "wait it was before then, and after ... yea .. when the thing .. ah! got it!".
It depends on how your memory works of course. But I imagine it would work because even though you can tag and re-tag and restructure your documents, the ordered slice of time-line of say 1.5yr ago hardly changes if at all.
4. Yes exactly. You might notice the "solutions" or ideas in the points above are all based in some way or another in this observation.
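The mass re-tagging operation described in point 1 could be sketched roughly like this. This is only an illustration under assumed structure: objects are plain dicts with a "title" string and a "tags" set, and all the tag names come from the example above.

```python
import fnmatch

def mass_retag(objects, add_tag, require, exclude=frozenset(),
               title_glob="*"):
    """Add `add_tag` to every object carrying all tags in `require`,
    none in `exclude`, and whose title matches the glob pattern."""
    for obj in objects:
        if (set(require) <= obj["tags"]
                and not (set(exclude) & obj["tags"])
                and fnmatch.fnmatch(obj["title"], title_glob)):
            obj["tags"].add(add_tag)
    return objects
```

A real tool would add the date filter from the example ("only objects older than 1 year") as just another predicate in the same conjunction, plus an undo log so a bad mass re-tag is recoverable.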
For instance, if you can track focus, you can make the monitor seem larger than it really is. Just scale everything that isn't being looked at down a bit. As the gaze shifts towards other objects, shift things slowly around and overlap the enlarged window over other background windows.
You can make focus-dependent shortcuts. Imagine vim with nouns and motions that can refer to and act on the focus point. `ytF`: yank-to-focus, where F is a motion from cursor to focus. Lots of rich possibilities there.
The only major problem I see is that you can't really share work easily (unless you open it up to multiple focus points somehow.) Also, it would be frustrating to have to look where you're typing. I'll often look at something else while typing just before switching tasks. I'm going to start paying attention to my focus more to see if there are other potential pitfalls with the computer knowing about it and changing modes in response.
In general I like the possibilities opened up by having focus. Heck even with a traditional mouse+keyboard, the extra data would help the computer understand us better. It might be more suited to a VR desktop, where the 'multiple-viewers' problem does not exist.
I have to say, I really like tagging-as-filesystem concept (where you can also meta-tag something), and the gaze/touch interaction proposed would be awesome to have in my opinion.
The current interaction with tagging (basing off OS X) is still pretty clunky. It's a separate field, the new vs previous tag selection is aggravating with a keyboard, and there's no easy way to browse or select multiple tags to filter content.
Having to move to and from the mouse a lot is a bit of a pain, and even with the best touchpads on the market, interacting with them can still be annoying when trying to do things like click on an HN upvote arrow. Being able to start the mouse from the point you're looking at, or even forgo the mouse entirely when starting to type into the field you're looking at - those would be fantastic additions.
I'm slightly more dubious about the voice interaction, though there are times where "Hey Siri, set a timer for 10 minutes" is a great way to interact with an otherwise over-complicated device.
When KDE 4 was released it included Nepomuk (https://userbase.kde.org/Nepomuk) as a universal tagging-and-search mechanism to do precisely this kind of thing. Unfortunately, not very many people use it and even those that do only use it in a very limited fashion. Partly because it only works in KDE apps and also because tagging all your files after-the-fact is a very time consuming process.
Then there's this problem: What if I never had any trouble finding my documents? What will I gain by tagging everything?
Tagging makes a lot of sense for things like photos and videos where search-by-content just doesn't work. For everything else you can just index your files and search normally.
Another problem with tagging is that--if you're going to be disciplined about it--you're not really getting much benefit over just "staying organized" with directories/folders. What's the this-helps-me difference between a directory structure like, "Company X/Clients" or a hierarchical tagging structure such as "#company/#clients"? Sure, with tags you're not limited to the parent->child concept but ultimately--if you don't stay organized with a hierarchy--you'll end up with a huge mess that can only be navigated with an intelligent tag-based search tool.
Unfortunately, there's a lot more than just photos and videos which are unsearchable - but even with just photos and videos, there's a lot to be won by implementing this.
> What's the this-helps-me difference between a directory structure
Simply put: any time you have a logical statement (and, or, not) between two criteria, the folder hierarchy falls apart. Files typically live only as a single leaf node under a hierarchical file structure, but the contents of a file could potentially live under multiple branches.
As an example, if I have around 9,000 photos and videos, and I want to view family photos from Christmas 2015, that have my wife and niece in them... without tagging, I'm SOL.
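That kind of and/or/not query is trivial once tags exist. A minimal sketch, assuming a photo library is just a mapping from path to a set of tags (the tag names are from the example above):

```python
def query(photos, all_of=(), none_of=()):
    """Return paths whose tag sets contain every tag in `all_of`
    and no tag in `none_of`."""
    return [path for path, tags in photos.items()
            if set(all_of) <= tags and not (set(none_of) & tags)]
```

With a folder hierarchy, the same question forces you to pick one dimension (by year? by event? by person?) as the tree, and every other dimension becomes a manual scan.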
That's probably a reason for 40+ years of success.
It's interesting how IT steps backwards in recent years. First the "cloud" wave came to convince us to leave our autonomous PCs and to turn back to centralized IT servers. The current wave (Pads, Neo etc) looks like turning back to single screen terminals. What comes next? Shall we get rid of the mouse?
Tablet applications literally do less, so they don't need as complex a UI. You can't take that simplicity and apply it to more complex applications.
This is really important. It's 2016, search is easy, and I shouldn't have to hunt around because I don't know whether you stuck your options dialog under File, Edit, or Tools. I appreciate Ubuntu for implementing this OS-wide with Unity's HUD feature.
Although, Office 2016 apps have this on top of every window. You can type and it will search every option available.
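The core of that kind of "search every option" feature (a command palette) is simple: match the query's words against command names and menu paths. A rough sketch, with made-up command strings:

```python
def search_commands(commands, query):
    """Return commands whose name or menu path contains every word
    of the query, case-insensitively."""
    words = query.lower().split()
    return [cmd for cmd in commands
            if all(w in cmd.lower() for w in words)]
```

Real implementations usually add fuzzy matching and rank by usage frequency, but even this naive filter solves the "is it under File, Edit, or Tools?" problem.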
"Neo was designed to inspire and provoke discussions about the future of productive computing. It is not going to be a real working operating system interface, it is just a concept. I am not saying that these ideas would definitely work and that this is the future of computing. However, there is large potential in rethinking the core interfaces of desktop computing for modern needs, and somebody has to try."
Design fiction tends to be some combination of portfolio piece/job application, argument about the direction of whatever is being designed (here, desktop-ish UI/UX), wish fulfillment, conversation piece, and sometime social critique.
Taking a look at the gestures...
1) Scroll through panels - Alt+Tab is unquestionably faster (< 1 sec)
2) Open App Control - Win key (< 1 sec)
3) Open Apps menu - keyboard shortcuts (< 1 sec)
4) Open Finder - Win+E (< 1 sec)
5) Close Panel - Alt+F4 (< 1 sec)
6, 7, 8) Resize - Win+Up/Down/Left/Right (< 1 sec)
Now, sure I see some people claiming that "nobody" knows these shortcuts or that they are "not intuitive" (itself a loaded term), etc.
Proposal 1 (Teach people these shortcuts)
Proposal 2 (Invent completely new gestures - teach people these new gestures, which will then slow them down because a keyboard is just crazy fast compared to touch.)
Now, I'm going off of the whole "desktop" phrasing. Maybe on mobile, all of these would make more sense.
Conceptually, I'm really liking the approach. I could see it being a viable refreshing of desktop paradigms. There is an interesting mix of OS X, Windows, Linux WMs, and some other goodies from other apps here.
Visually, the biggest drawback is I could not tell while watching the video which panel has focus. I am assuming the idea is this would be handled by tracking eye focus. Perhaps I could get used to that, but it'd have to be instantaneous switching. As I'm typing this in a half-width browser window that takes up all vertical space, I have the Desktop Neo site in another half-width browser window taking up full vertical space by its side. I'm bouncing my eyes back and forth between the site and this textbox I'm typing in. I'm currently staring at the Neo site while I'm typing, without any looking back. Such a desktop paradigm would have to remain very intelligent about recognizing that I'm currently typing in a panel while looking at, and perhaps scrolling through another panel, without wanting my current action to lose focus or be interrupted in any way. I work this way all the time.
Okay, buddy. Tell me how you're going to navigate a complex project built with multiple apps with your cool new tag-only scheme.
How would you re-organize the myriad number of source code files, library sources, images, image source files, readmes, and other things? Hierarchical structures are good for that kind of thing. If you're going to advocate throwing them out then I think it is imperative for you to tell me how you will organize real projects rather than just a handwave about search and adding tags to tags. How would I reorganize the directory full of 193 Illustrator art files and a ton of subdirectories, many with multiple layers of their own subdirectories, that make up the graphic novel I finished last year? It's got a total of ~2.7k files in it but all of that complexity is hidden behind a bunch of subdirectories so I can quickly find what I need at any moment.
(And also: holy crap those sample images are so much WHITE, using this will be like staring into a spotlight. And so impersonal - the user-set desktop picture is a wonderful thing that makes the computer feel like it's theirs and I kinda feel like this proposal completely drops affordances like that in favor of a blown-up iPad UI.)
I would have bought this more if you were talking about code, but even then an IDE could simply make the project a single file (like OS X '.apps's) to contain the whole project.
This is actually similar to how Xcode works where how you organize your project doesn't always relate to how it is on disk.
It's also important to note that these two gestures perform features (fullscreen / minimize) that can also be done by just resizing the panel (either with 3 fingers, or by going to App Control).
If you try to actually use this on a normal-sized, normal-resolution screen, the text is too small to read. To solve this you have to increase the text size, but then each element becomes so big that nothing fits; to solve that you have to reduce the padding of each element, but then it doesn't look as good anymore. The contrast of the text is also too faint, and if you make it clearer the "wow" look of the concept is gone. I think this is why you hardly ever see designs like this in the wild.
What I mean is: open this picture and resize it to a comfortable reading level; after doing so, I can only see slightly more than half of it vertically. Things that previously looked like pretty design elements now just seem to waste space. Why would I need a thumbnail of a map and a movie cover on my start screen?
It's just like architects designing a city block: they always show fancy renders of the place from above (?!) in the best weather imaginable. But that's not how people will actually perceive it when they're there, because then you see things from street level, where it doesn't look the same at all.
Windows + Mouse with hotkeys is still the most comfortable set up for most people including me. I don't need an optimized experience.
I've tried Metro, I've tried i3, I've tried bspwm, I use Vim with split screen, and I still prefer the concept of draggable windows at the end of the day. It feels more free to be able to drag windows and take ownership of the layout than having a computer dictate what my layout should be.
At the end of the day, I find myself more and more preferring simple UIs over these multitouch-optimized experiences.
But he doesn't explain HOW he selected the text. Since there's no mouse cursor, I have no idea how he just did that. A glance doesn't seem to be sufficient since it requires movement and intent.
Place one finger on the touchpad to adjust a cursor at the gaze position. Use a second finger to set the end of the selection.
Think about a music program you want to write. You want to find all artists that music files on this computer have and when the user clicks on an artist find all the music that belongs to this artist.
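That artist view is essentially a group-by over file metadata rather than a walk of the directory tree. A minimal sketch, assuming each track is a (path, metadata-dict) pair with a hypothetical "artist" field:

```python
from collections import defaultdict

def index_by_artist(tracks):
    """Build an artist -> list-of-paths index from
    (path, metadata) pairs."""
    index = defaultdict(list)
    for path, meta in tracks:
        index[meta.get("artist", "Unknown")].append(path)
    return index
```

`sorted(index)` gives the list of all artists for the first screen; `index[artist]` gives that artist's files for the click-through, regardless of where those files live on disk.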
One general insight about GUIs is that easily undone mistakes aren't so bad. That's the Amazon one-click purchase insight. The innovation is not that you can buy with one click. It's that you can cancel for the next 10 minutes or so. Most shopping systems had a "we have your money now, muahahaha!" attitude on cancellation until Amazon came along.
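The mechanism behind that pattern is just a delayed commit: the action only becomes final after a grace window, so "undo" is cheap until then. A toy sketch (all names invented, clock injectable for testing):

```python
import time

GRACE_SECONDS = 600  # ten-minute cancellation window

class PendingOrder:
    """An action that commits only after a grace period elapses."""

    def __init__(self, item, now=time.time):
        self.item = item
        self.now = now
        self.placed_at = now()
        self.cancelled = False

    def cancel(self):
        # Cancellation succeeds only inside the grace window.
        if not self.cancelled and self.now() - self.placed_at < GRACE_SECONDS:
            self.cancelled = True
            return True
        return False

    def committed(self):
        return not self.cancelled and self.now() - self.placed_at >= GRACE_SECONDS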
I completely disagree with that.
If you gave two people (one on a PC, the other on a tablet or phone) some task, e.g. moving a file from here to there, then assuming they were both equally conversant in their chosen system, the PC user would be able to do it in a fraction of the time of the tablet or phone user.
- 6 fingers on the trackpad means two hands on the trackpad, which means moving one of them out of its working position. Just like you are avoiding moving the mouse around with the look-and-tap thing, you want to avoid users getting their hands off the keyboard, onto the trackpad, and back to the keyboard;
- the context menu is really nice.
It is a very impressive piece of work for someone who is looking to show off his skills while still in college. I made projects myself before graduating, but they were always amateur-y and only for myself; I never got around to actually sharing them with the world. He actually did something and put it out there for others to see and judge.
Kudos to the guy, especially if the whole design process he claims to have gone through (research, target groups, user flows) is as thorough as it sounds.
The first is the consideration of running this on current hardware. Obviously Neo is meant to be run on hardware designed for these interactions (as shown with the voice key on the keyboard and the gaze tracking camera) but I'd like more detail on how this could run on systems that do not have this hardware.
The second is in professional applications. I did not see any screenshots that deal with Photoshop or Final Cut Pro or After Effects or Solidworks or really any of the crucially important applications for the desktop. These are exactly what keep the desktop around, and what keep people from accepting Metro.
It'd be great if we could solve these two issues, or at least discuss them. I think these ideas are really great and the presentation was amazing. I'm seriously impressed, but it's a little held back from widespread adoption until we can figure a few things out.
P.S. Also I'd like to be able to split my terminal windows horizontally please :)
I found that Neo really resonated with my residential use, but not so much for my work.
From the author's referenced blog post "The Desktop is Outdated": "We interact with a lot of different content today, and a large part is outside of files". Not in my work environment, where the majority of content is inside files. But sitting at home - yeah, this is true for me.
It appears to me that the mobile interface cart is trying to drive the productivity desktop horse here. I don't know to what extent I buy it, but I like the way Neo challenges current desktop design.
I'm going to go back and read it again.
What does this even mean? ALL my data is in files and folders. Is there a filesystem that doesn't use the concept of files and directories? If not, then isn't it best to model the system closely in the UI?
I also read the "Window management is outdated" and it completely failed to detail how windows are bad. It seems to me that windows are the most flexible UI paradigm, allowing you to decide how exactly you want to use your screen space. The challenge is on the devs to make apps with a reactive UI that changes according to the size of the window.
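That "reactive UI" responsibility is essentially breakpoint logic: the app picks a layout variant from its current window size. A trivial sketch (the breakpoints and variant names are arbitrary):

```python
def layout_for(width_px: int) -> str:
    """Pick a layout variant from the window's current width."""
    if width_px < 600:
        return "single-column"      # narrow, phone-sized window
    if width_px < 1200:
        return "sidebar-collapsed"  # medium window
    return "full"                   # wide desktop window

# An app would re-run this on every resize event
modes = [layout_for(w) for w in (480, 800, 1600)]
```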
I would much prefer all of the things I do to be organised by the work I'm doing. Here is an example.
If I switch to the "personal project" I get a completely clean workspace (or whatever I left it as), with email filtered to show only things related to my personal project. I want only the apps I use for this project (browser, PyCharm, Photoshop, terminal), each on its own screen, and each showing only the work and paths I've associated with that project.
Basically build GTD into all my apps and the desktop and allow me to filter said desktop by tags, projects, people etc.
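A crude model of that filter (item and tag names invented) is just attaching tags to every window, file, and email, and intersecting them with the active context:

```python
# Every window, file, and email carries a set of tags
items = [
    {"name": "pycharm",       "tags": {"personal-project"}},
    {"name": "work-inbox",    "tags": {"dayjob", "email"}},
    {"name": "project-inbox", "tags": {"personal-project", "email"}},
    {"name": "photoshop",     "tags": {"personal-project", "dayjob"}},
]

def workspace(active_tag):
    """Everything the desktop shows when you switch to a given context."""
    return [i["name"] for i in items if active_tag in i["tags"]]

personal = workspace("personal-project")
```

The hard part, of course, is not the filter but getting every app to expose and respect the tags.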
And I think I'll take this opportunity to turn on no procrast again...
AKA the "goatse" gesture.
And here I am, with my desktop computer, comfy keyboard & screens, fully hackable. I've had it for almost 12 years now, and I just replaced parts every now and then to keep it somewhat current. That will never go away. Then I got an old laptop for working away from home, but I hardly ever use it.
So when I hear that touch screens will become ubiquitous and that their necessarily minimalistic UX will drive the consumer computer industry, I have my doubts.
Never gonna happen.
Something being useful does not mean it will necessarily eat everyone's lunch. Nor that it has to.
He will see how the organization into panels rather than windows was called "con10uum" (min 3:30). I wonder if some Microsoft designer/VP/exec saw this and got the idea to put "Continuum" as one of the most important features of Windows 10.
Aside from that, great work, congratulations @ziburski. Like many others I miss shortcuts in a productivity environment, but this really could work.
For example, ctrl-alt-command is what I use for most shortcuts (and I don't use most of the ones that come out of the box):
C-M-S + Up Arrow: maximize fully (horizontal + vertical)
C-M-S + Left or Right: maximize vertically, constrain to the left or right side. This is neat in that it cycles between making your window 1/2, 3/4, or 1/4 of your screen wide, so you can easily have one app take up 1/4 of your screen and another take 3/4.
shift-command-D: change which display something is on.
All of these play nicely with the dock, even when it is on the left side of the screen instead of the bottom.
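The cycling behaviour described above can be sketched as stepping through a fixed list of width fractions each time the hotkey fires (the fractions are taken from the comment; everything else is a guess at how such a tool works):

```python
FRACTIONS = [0.5, 0.75, 0.25]  # 1/2 -> 3/4 -> 1/4, then wraps around

def next_width(current_px, screen_px, eps=2):
    """Return the window width after one more press of the hotkey."""
    widths = [round(screen_px * f) for f in FRACTIONS]
    for i, w in enumerate(widths):
        if abs(current_px - w) <= eps:        # already at a known stop
            return widths[(i + 1) % len(widths)]
    return widths[0]                          # unknown size: snap to 1/2

# On a 1920px-wide screen: 960 -> 1440 -> 480 -> back to 960
seq = [next_width(960, 1920), next_width(1440, 1920), next_width(480, 1920)]
```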