Hacker News
Desktop Neo – rethinking the desktop interface for productivity (desktopneo.com)
798 points by ziburski on Jan 19, 2016 | 314 comments

It seems like a lot of people skimmed without reading the top or bottom of this site:

It's not a company. It's not a product. You're not being asked to buy it or buy into it, just to discuss the concept if you'd like.

This website is a portfolio piece for a 21-year-old university student hoping to find an internship. In my opinion, it's an impressive demonstration of his design and technical skills. It certainly says a lot more than the average 21-year-old's resume listing what courses they've taken so far.

This guy has the chops to come up with a well-designed concept and should have the opportunity to defend & iterate on it based on constructive feedback. Some of these comments provide that feedback, while others flatly have an issue with the idea of a concept altogether. It's unfortunate that desktop UI has remained stuck for quite a while, and only MS & Apple can really provide a new paradigm for people to adopt. Both companies are basically thinking mobile-first, which makes the compromises in productive desktop UIs inevitable. Forcing ourselves to treat productivity as a priority brings about concepts like this, which give us a glimmer of hope that technology can offer more than innovation in consumption alone.

He's interested in the feedback and is here participating in the discussion (@ziburski). I just felt compelled to write something after reading a couple silly "you lost me as a customer" and "what's the point" comments.

I actually wanted to thank you for your parent comment. It needed to be said and gave me a chance to say what was on my mind. Hope you didn't take my comment negatively.

>It's unfortunate that desktop UI has remained stuck for quite awhile

A lot of things get "stuck" and for good reason. Steering wheel, gas, and brake. Sight, trigger, stock. Underwear, pants, socks, etc.

Personally, I think the NT team really nailed the concept of the desktop back in 1993 with 3.1. The start button which hid programs and allowed access to other functions was incredibly handy, a bit like how a desk has a desk drawer within reach at all times. The desktop as a dumping ground for whatever you choose and the system tray for misc stuff really, really works well and with minimal cruft. Unlike MacOS applications, Windows applications integrated the menu bar directly into the application itself instead of making it a detached accessory of the OS. Right-click support also allowed a lot of dynamic functionality.

I understand that almost none of these specifics were their innovations, but they took a lot of half-cooked ideas and made something very, very usable.

It's a shame the NT team is never brought up like the Amiga or MacOS teams. I don't think they've gotten their proper due. It's also interesting that MS ran back to the start button/menu only three years after letting it go. It's incredible how powerful the NT way of doing things still is.

"Personally, I think the NT team really nailed the concept of the desktop back in 1993 with 3.1. The start button which hid programs and allowed access to other functions was incredibly handy, a bit like how a desk has a desk drawer within reach at all times."

One of us is remembering things incorrectly ... I am quite certain that the start menu interface debuted with Windows 95 and was then later adopted by Windows NT 4.0.

NT 3.1, 3.5 and 3.51 all used the old Windows 3.1 user interface.

Right ?

You're correct. Windows NT 3.1 was visually nearly identical to Windows 3.1. This was part of the reason for releasing a 1.0 product with a 3.1 version number.

I think part of it, too, is that on the desktop we generally stay in one or two applications for long stretches of time. For example, we'll spend hours inside 1 or 2 browser windows. All that changes is the content within the browser; the positioning will stay static for hours. Or an IDE, spreadsheet, word processor, etc. - all are long-running apps, so constant opening, closing, or resizing isn't needed. Current DEs also get app launching right, you can have workspaces, etc. The practical difference between using OSX, Gnome Shell, KDE Plasma 5 or KDE 4, Ubuntu Unity, even Windows - isn't all that big. You can open your app with search, resize it to half the screen or maximize it easily, and you can switch through apps easily. They've all converged on a similar set of features for everyday use; the main differences are simply little widgets here and there.

I see those concepts in RISC OS¹, developed by Acorn (as in ARM, originally Acorn RISC Machines).

The buttons on the left open windows containing applications and files, like the Resources window that is already open. On the right are running applications. The menu on the text editor has been opened with the middle mouse button; to complete the save action (for a first time save, or new destination) the icon can be dragged to a file window.

The desktop can have files, folders or applications pinned to it, but there are none in this screenshot.

The screenshot is as the OS was released in 1991. The first released version, from 1987, had similar functionality but lacked icons on the desktop ("pinboard").

(The other thing I liked about this OS was how applications were packaged. They were simply directories beginning with "!" containing, at minimum, an executable file called "!Run". This made it nice to explore, especially as !Run was often written in BASIC, and there was a BASIC editor and interpreter in ROM.)
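That packaging convention is simple enough to express in a few lines. Here is a minimal, purely illustrative sketch in Python (the `is_riscos_app` helper is hypothetical, not part of any RISC OS tooling) of the layout rule described above:

```python
from pathlib import Path


def is_riscos_app(path: Path) -> bool:
    """Heuristic check for a RISC OS-style application:
    a directory whose name starts with '!' and which contains
    a '!Run' file, the application's entry point."""
    return (
        path.is_dir()
        and path.name.startswith("!")
        and (path / "!Run").is_file()
    )
```

On a real RISC OS filing system there are further conventions (e.g. !Boot, !Sprites), but the "!" directory prefix and the !Run entry point are the essentials the comment describes.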

[1] https://en.m.wikipedia.org/wiki/History_of_RISC_OS#/media/Fi...

First I think this site and idea presented is brilliantly built and communicated - while I don't like gestures myself, they are useful for many things...

That said, one of the most elegant and wonderful UXs I ever had was with Softimage, starting on SGI in 1993.

Still to this day I felt they nailed user interaction so wonderfully back then.

When you watch someone interact with a deeply complex application and see them just swim through using it - it's really great.

Like professional animators and both sound and video editors/producers etc.

I don't have any opinion at the moment on this, as I need to read all the comments and watch his video, but it's a well put-together site.

They actually took the "Start menu" from OS/2, which was probably inspired by some of the Unix WMs back then.

>Some of these comments provide that feedback while others flatly have an issue with the idea of a concept altogether.

Agreed... feedback can make all the difference... if you're the affected party, crowd-source the feedback on your own... what's valuable, what's simply old-school thought thinly disguising envy, and what's in-between... lean toward the "valuable" and pay close attention to the "in-between" if you want a reasonable chance of moving forward...

I agree with everything you said.

This is a brilliant resume and he ought to be looking for gainful employment, rather than just an internship.

> he ought to be looking for gainful employment, rather than just an internship.

Presumably he wants to finish college, in which case an internship totally makes sense.

Besides, tech internships are very gainful.

Agreed, he's much more skilled than many designers I know.

To be fair, it's also a good, cohesive amalgamation of UI interaction concepts as well. If someone did realize this design, I'd be on-board for trying it out.

I wouldn't. It seems like a very Apple style of interaction, which emphasizes that you are interacting with an Apple UI. By which I mean to say, using a Mac or an iPhone is at least 50% about using a Mac or an iPhone, and maybe 50% about doing whatever it is you are trying to do. In an ideal world, you shouldn't be thinking about the tool.

> By which I mean to say, using a Mac or an iPhone is at least 50% about using a Mac or an iPhone, and maybe 50% about doing whatever it is you are trying to do.

Could you elaborate on this? I don't feel that way at all when using any modern OS/device, but maybe I just haven't thought about it in the same way.

Apple's focus on minimalist aesthetics has come at the cost of discoverability. Interactions are concealed behind gestural input and incomprehensible icons without text labels. This issue is getting worse rather than better, e.g. force touch on the new Macbooks.

Using Apple devices requires a great deal of memorisation. For power users this is a relatively minor burden, but it is a serious issue for inexperienced users, or those with problems in cognition or memory.


Apple is just using another approach to letting power users and novices use the same UI. Many people started by using the mouse for Edit → Copy, then the [Alt]+[E]dit menu accelerator, then Ctrl-C all in one step. Yes, you need to memorize Ctrl-C, but that's not a long-term issue. The same is true of touch interfaces: you can use them knowing few shortcuts and slowly add more over time.

People spend hours a day interacting with a computer in one form or another, they will memorize things.

PS: Remember getting annoyed when Microsoft went gaga over the 'ribbon' concept? I bet you don't think about that one much anymore.

Keyboard shortcuts have good discoverability in most applications, because they're listed alongside the drop-down menu options. If you use Edit->Copy a lot, you can see in the menu that there's a keyboard shortcut.

Many of Apple's UI interactions can only be discovered through guesswork or reading the docs. If you open Launchpad often, nothing alerts you to the fact that there's a touchpad gesture for that. Unlabeled icons are rife in iOS and there is no discovery mechanism equivalent to a tooltip.

Windows is full of shortcuts that are non-obvious, e.g. Windows + E or even the classic Alt + Tab.

Exactly. GP has a good point, but it's a problem not exclusive to Apple.

> If you use Edit->Copy a lot, you can see in the menu that there's a keyboard shortcut.

You can. And I do. And given your post, I assume you do too.

But did you know that most people don't?

I teach people computer skills (a lot of children, ages 8-12, but other ages as well). Only the very clever ones (those we can expect to welcome on HN in a couple of years ;-) ) figure it out by themselves. You'd be surprised by the in-your-face stuff people simply do not read on the screen. I then point it out, but it's about 50/50 whether they'll pick up the habit.

And it's the same thing with this interface the author has designed. It's all intuitions about how he uses computers, how people like him use computers, and how the self-selected group of feedback-providers use computers. Nothing about the general public. No research, no tests, no reference to classic UX textbooks and theory.

Now if he had stated upfront that those three categories are actually the only intended target audience, then at least he would have acknowledged that this is a possible problem, up-front. Not doing so, it appears he hasn't given much thought to it.

So, sorry but what I see is not a user interaction designer, but a graphics designer with a cute hobby.

If you think that's harsh, imagine he'd have redesigned the user interface of your car. Would you trust the ideas reading only about mockups and "intuitive" justifications? Wouldn't you think, mm-mm yes nice ideas but until you actually tested them on a focus group they could go anywhere and further effort is pretty much wasted until you do. What if you assume his driving style is rather different from yours?

The lack of discoverability extends to OSX's keyboard shortcuts as well. I completely changed how I work after I discovered 'Command Tab' and 'Command ~'

The discoverability is there, but not in the wild of the OS; it's in the Settings panel. Better yet, it's configurable (and powerful).

    Settings > Keyboard > Shortcuts > Keyboard
Ditto that for the touchpad (including animated examples)

    Settings > Trackpad > ...
Perhaps we as power users are just used to not having to visit these locations?

Those are actually precisely where I discovered a bunch of OSX's shortcuts. I think this was around the time I configured the two finger right click to make playing Minecraft without an external mouse feasible.

To this day, I have not figured out how to switch to another desktop on my Macbook Air using a keyboard shortcut. To be fair, I use it very rarely.

I'm pretty sure the default is Ctrl+Left and Ctrl+Right, but I've customized it so many times I can't be completely sure.

OS X actually allows a lot of customization. You can assign a keyboard shortcut to any menu item in System Preferences -> Keyboard -> Shortcuts -> App Shortcuts.

Control-Left|Right Arrow.

Gotta try it when I bring it out next. Thanks!

It's amazing how buried some of these are. Nothing tops print screen for me - Cmd + shift + 3 for the whole screen or Cmd + shift + 4 for the selection option...

My absolute favorite is cmd + shift + 3 - release - space to take a shot of a single window. Maybe it's the release part between the 3 buttons and pressing space, but that shortcut always feels weird to use for me.

I didn't know this one, it's awesome, but it's with 4 instead of 3 :)

I think about that one constantly, in the sense that it motivated me to ditch Office as much as possible (I mostly use LibreOffice these days).

Sure, if you avoid a product then you're going to get stuck in the 'novice user' trap. IMO, the problem with a lot of these designs is that they were created by and for frequent users. I remember Photoshop as being incredibly opaque to start, but you can also get used to it surprisingly quickly while regularly learning new things.

Not sure what you're getting at? The ribbon breaks Office for frequent users. Around Office 2003 I could have files, fonts, formatting, and review all in the same amount of screen real estate as the ribbon.

This was invaluable when working on complex documents. Post-ribbon, all that's forcibly broken up across multiple tabs. Instead I get giant buttons for functions I don't use permanently taking up space, rather than an efficient array of those I do.

You can still create custom ribbons. Screen real estate should be at slightly less of a premium now, so even if it's slightly worse I don't see that as a big deal.


Alan Kay's criticisms are much more damning.

It drives me mad when people talk about touch interfaces as being "intuitive". Once you get past "touch icon to activate" and "touch and drag to scroll", gestures aren't intuitive, they're a secret handshake.

I agree with you, and I constantly get hammered for having this same opinion: gestures are inscrutable, difficult to discover, hard to memorise, and just generally an off-putting deviation from the "touch/click items/icons/runes to do things" metaphor.

The OS X UX has demonstrably more discoverability than any other contemporary OS, and certainly much less "mystery meat" than Windows.

The shortcuts are consistent across all Mac apps, are discoverable (and completely modifiable) from System Preferences -> Keyboard -> Shortcuts, or from Help -> Search in any app.

Trackpad and mouse gestures are shown, with videos in fact, in System -> Trackpad and System -> Mouse.

I think this design handles those criticisms well. The application menu looks great for discoverability, and the context radial includes clear labels.


When I look at Apple's UI design, a lot of it seems to be about interacting with the device and not interacting with my task. Partly this is due, I think, to the amount and types of integration of apps into the environment (via Cocoa.) And partly, it's the fact that Apple's UIs/APIs aren't as generalized and engineered as Microsoft's -- which is to say, as an end user, you can do what you need to do, but there's generally fewer ways to do it and when you do it's much more defined.

I'll agree that using the OS X window manager is a serious pain sometimes, however there is little in this which reminds me of either OS X or iOS (aside from the icon set).

In many ways, it owes more to the Metro interface than it does OS X or iOS, for better or worse.

I thought it was much more metro than iOS.

Agree completely. I like my mouse. This doesn't work with a mouse because its design ideals say mice shouldn't exist. Hmm.

I think some of the core concepts would work well as a Gnome 3 shell extension.

You might try building on top of (or researching) ShellShape: https://github.com/timbertson/shellshape

The problem is style over substance. The presentation, the design of the website, is great, but the content feels like a bunch of random incoherent stuff put together with a catchy name.

Yes the webpage is beautiful, there are nice animations, but that's it; there is nothing else. It's a nice portfolio though, but there is nothing to discuss when it comes to UX/UI itself.


Because that's been done before; go look up 10/GUI, and it will look remarkably similar. It's different from the accepted standard, but not actually a new concept.

How is it random incoherent stuff? Did you even watch the video? It seems pretty straightforward and refreshing.

There's nothing about UX design in the video either.

If he wants a job building marketing pages and videos for software, sure.

But if he wants a job as a UX designer, and I would go by the info on his sites, his design process consists of mockups and talking about it with his professors.

Seriously, if these are his skills, how the hell would he do UX design for an assignment when the target audience does not include himself?

I'm reminded of that run of "brilliant new design for <physical machine>" linkbait posts on tech sites a few years back, touting the incomparable benefits of some colorful, curvy piece of hardware over anything out there. The original source was always some design student's portfolio, and the "brilliant" design was never more than a colorful, curvy housing that didn't exist outside a rendering.

Yes, and?

Is your criticism that we shouldn't discuss things which aren't in the physical world yet, or that we can't discuss unimplemented ideas?

No, my observation was that those stories (and discussions) were similar. And they are, down to the spectrum of reactions in people talking about this design piece as if it were a real product.

If you feel some need to be defensive about that, I'm sure that has a parallel in those discussions, too.

> This website is a portfolio piece for a 21-year-old university student hoping to find an internship. In my opinion, it's an impressive demonstration of his design and technical skills. It certainly says a lot more than the average 21-year-old's resume listing what courses they've taken so far.

I'm actually a bit baffled. He's not just a student, he actually studies "interface design"!

"I am 21 years old and study interface design at the University of Applied Sciences Potsdam."

Of course just a list of what courses you've taken so far isn't very sexy or interesting. You don't need to list them explicitly. But if you do claim to study a particular field or industry, you should at least demonstrate that you have taken some courses (in particular on the topics you're passionate about), did interesting projects to complete them (there must be, even tiny ones), what/that you learned from them, why they matter.

Doesn't need to be very explicit, but there's literally nothing on this page, nor on his personal website indicating what he learned studying interface design. Drop some names, terminology, textbooks, works, important research. Show us how "I study interface design" means more than "I'm working to obtain a piece of paper that lets me claim I studied interface design".

Microsoft should hire him yesterday.

He also knocked marketing out of the park.

I work at a company where we hire people for exactly that kind of gig, with a heavy focus on building prototypes of your designs that are actually usable because that's the only viable way to design interfaces. Here's feedback as to why I'm not sending this website to my studio lead to schedule a phone screen with the designer:

1) reused assets from other companies (big no no, even for just a mockup) that are inconsistent with the overall visual style of the project

2) very implausible interactions that were clearly not prototyped (eg the 6 finger pinch... what?!)

3) work heavily inspired from other research projects and designs (10 GUI, various tiling window managers, etc) without showing a clear unifying interaction model. This is more of a patchwork of loosely related ideas, and to me doesn't show clear, deep understanding of prior work (both academic and commercial) in the covered areas.

4) awkward copy that jumps from marketing-style speak to explaining interactions from an analytical point of view. This is the most minor of all points but just hurts the polish of the piece.

So this tells me that the candidate is not a strong graphic designer, not a strong prototyper, nor a strong researcher. He's looking for an internship so only one of the three with promises in the other two would be sufficient, but that's not apparent enough to me here.

Hope that's constructive - I see a fair number of such portfolios/resumes every month.

EDIT: lots of hate on my comment below. The main point seems to be "well you're not totally wrong, but he's a 21 year old intern". Sure, although that doesn't change any of my feedback. I am assessing the work as it stands on its own merits, independently of the designer's age. There is certain work to which 21 year old interns don't have a lot to contribute, such as redesigning entire OS user interfaces. When it comes to assessing a designer's skills, I prefer to see a series of small, focused pieces rather than one large, sprawling one (the latter is much harder to pull off well unless you have tons of experience). I'm looking forward to seeing how his work has evolved a few years from now.

I don't believe your comment was intended to be constructive at all. It sounds much more like jealousy and looking for an excuse to tear down someone who's put in great effort.

The concept and presentation of it are definitely impressive, especially for someone who's only 21 and still looking for internship-level work. He's easily in the top 5%, probably top 1% of designers, in terms of being able to see a complicated concept through from ideation to execution.

#1 — There's nothing wrong with using assets from other companies like Amazon, Facebook, Wikipedia, etc. in a concept mockup. If anything, it's nice to see realistic assets being used instead of stock junk that won't actually work in real-world situations.

#2 — Yup, the 6-finger-pinch would probably be too convoluted, I agree. But that's easily replaced with a regular pinch. Easy to get little details like this wrong in the scheme of things, especially for the more minor gestures.

#3 — Who cares? If anything this shows that he knows about what's out there in terms of prior art, and is able to build on it in an inspiring way. That's a good sign in my book, not a deal breaker.

#4 — The fact that he was able to put together this page, with all of the marketing copy, the well-designed screenshots (with zooming), etc. already shows that for a visual and interaction designer he's way ahead of the curve in terms of marketing.

I'm not saying the concept is perfect, but it's definitely well presented, and well thought out. Add in the fact that the designer is 21 and looking for an internship, and he is an absolutely obvious candidate for a phone screen. I'd be surprised if he isn't able to land internships with any of the big tech companies (Apple, Google, Facebook, etc.) with this piece as part of a larger portfolio. He's clearly good at coming up with, polishing, and communicating ideas.

> I don't believe your comment was intended to be constructive at all.

You may have a different assessment of the candidate's potential value as an intern. The parent clearly stated "Here's feedback as to why I'm not sending this website to my studio lead to schedule a phone screen with the designer".

Making assumptions about the parent's intent provides no value to the discussion.

It's not an assumption; the commenter also clearly stated, "I hope that's constructive". To some of us, it sounds belittling and dismissive, regardless of how much better he knows his company than we do.

There is a big difference between "I don't believe you were trying to be constructive" (which makes assumptions about intent) and "I found your critique to be un-constructive; here's why:"

The former adds nothing to the conversation and is pretty close to being a personal attack, the latter provides feedback and (may) prompt constructive debate about how to provide critique.

Sure, here's why it's not constructive: not once in the parent comment is an alternative suggested or a question raised.

Just a list of things that are "wrong," according to the poster.

It's a resume piece and the parent works for a design company. He's not looking to start a discussion on the merits of the design and how to improve it, only his problems with it from a hiring perspective (polish, consistency, lack of research). It feels very constructive to me.

Then what separates constructive criticism from just criticism?

Parent comment supplied criticism – I'm not arguing that. But s/he did not at all help to reveal a path towards a better outcome. That's what would make it constructive.

Judgements on motive, while critical for investigations and character analysis, are obviously irrelevant to online tech discussions, famously impervious to baggage, ideology, and petty warble.

> I'd be surprised if he isn't able to land internships with any of the big tech companies (Apple, Google, Facebook, etc.) with this piece as part of a larger portfolio.

Sure, but perhaps I'd be more surprised if that internship had to do with UX/interaction design. Maybe marketing, or webdesign.

If I were to make a drawing of a house, even a detailed rendering, and made a website about why I would like such a house, why it would be a nice house, that wouldn't make me an architect.

Your point about the larger portfolio is an important one. Where is it? At 21 he must have been studying this field for 2 maybe 3 years?

The site does look very neat and well-polished. That is impressive, no matter what. Clear indicator of talent. Whether that talent is UX design (people keep talking about "design" in this thread, as if it's some general thing), he does not demonstrate: no tests, no design documents, no research. Except he does claim to have focused on research, yet no word about what this research entails, the process, how it went, what he found out.

So I'm going to say, he's got great talent, for marketing shiny things to the general public. And if you think about it, it makes a lot more sense that someone great at marketing reaches the top of HN, than someone great at UX design? ;-)

>So this tells me that the candidate is not a strong graphic designer, not a strong prototyper, nor a strong researcher.

I'm not a graphic designer nor do I hire them, but your conclusion seems overly dramatic. Perhaps this prototype does not meet your high bar, but that does not mean that the candidate overall is not qualified. It just means that this particular work did not demonstrate these skills to your satisfaction. The candidate might have these skills and dismissing them after one prototype seems to be setting yourself up for many false negatives.

If he's in charge of some super high level design boutique charging millions to customers that everyone is murdering kittens to get into, then I can understand his position.

4 critiques is not bad for a student/internship piece. Most students simultaneously lack deep knowledge of prior work and are not (yet) strong designers, prototypers, or researchers, because those things simply take time/experience to get.

His bar to me seems like "entry-level position, 12 years JavaScript experience required" but for UX/UI

I think he's qualified to be an intern, I don't think he's strong in any particular area.

However that's not an issue, since he's a student. Experience and further learning will make him absolutely exceptional in a single area or multiple ones.

However that's speaking in comparison to everyone else in the world.

In comparison to other students, he's done a fairly good job.

As the top-level reply to that person's comment pointed out, the post stems from insecurity, plain and simple.

I have no experience in your field, but I'm very surprised to hear that this isn't worth a phone screen for a 21 year old. Tough industry!

Yeah this is ludicrous. There are thousands of designers making a killing in tech who don't show this level of refinement, effort, or robustness in their thought.

He's just being a jerk to be a jerk.

+1 I thought it was quite impressive until reading that comment from someone who works in the design field!

Don't believe that guy, it's still impressive.

Of course design people* don't like good concept art for a tiling wm. It's not "modern", it emphasizes work instead of pointless high-res stock photos, and it acknowledges the desktop.

*not all design people, just a certain flavor

1. I'm actually a big fan of tiling WMs, I use xmonad on my personal archlinux laptop.

2. "concept art" for a tiling wm makes no sense, because as you point out, it is first and foremost something that is work focused. This is why you build functional prototypes.

I see you're also a big fan of numbered lists.

Concept art for a tiling window manager certainly does make sense. Especially if you want to define the appearance of one.

It is, he's just finding reasons to be an asshole. It's only tough because people like him make it needlessly tough due to some inferiority complex.

You have some fair points. Personally, I would not discount the conceptual sprawl of the piece – I think it's in some ways an unavoidable part of a solo project to re-think something as broad as a general-purpose computing UI. When I was working on 10/GUI, I don't think it quite occurred to me just how disconnected the hardware / pointing part and the window management part were. At the time, they seemed largely inseparable because I had this big idea of how the whole system should work, and it wasn't until I started reading feedback that I realized just how readily the window manager part could be adapted to stand on its own.

DesktopNeo has some more promising ideas (there's some really good thought iterating on full-height window workflows) and some less promising (some of the more awkward gestures), but it's a strong showing overall.

Touch seems great on a phone or tablet, but I absolutely don't want it on my desktop or even laptop computer (I have a Thinkpad with a touch screen). Small gestures work well on a touchpad (well on Apple touchpads at least) and bigger gestures could be picked up by the camera. But I have no desire to drag my fingers across the screen. Fingerprints on a monitor are the worst.

So he can quote the creator of 10/GUI, "DesktopNeo has some more promising ideas (there's some really good thought iterating on full-height window workflows)... it's a strong showing overall."

"Here's feedback as to why I'm not sending this website to my studio lead to schedule a phone screen with the designer:"

Just curious ... how many of the folks that you have sent to your studio lead have had their project at the #1 spot on the front page of HN with 100+ comments ?

I've had personal work reach HN with lots of votes and comments that went nearly nowhere. I would have killed (metaphorically) to get concrete criticism like this; that's way more valuable than the votes.

> the candidate is not a strong graphic designer, not a strong prototyper, nor a strong researcher.

The candidate is 21 years old and looking for an internship. Would you really expect them to be strong in any of those areas? It seems they do have some strengths, doesn't it? Would it not be more useful to focus on those and see how the others could be developed?

Your reply really doesn't come off as an attempt to be constructive. It comes off as someone with a bit of knowledge about the field wanting to demonstrate that for strangers on the internet by tearing someone's work down.

The ability to just pull a project of this scale to completion is something that virtually no 21 year old has. That alone is worth something to a company with some sense.

There's obviously a strong foundation for the hard skills (like visual design or prototyping or copywriting) that can be easily built upon in the field. That's the whole point of an internship: find a hard worker with a good foundation, teach them how to succeed in your company.

Not sure what the hell parent comment's problem is.

> The ability to just pull a project of this scale to completion is something that virtually no 21 year old has.

Undergrad final year projects demand a similar investment of time, effort and view of the big picture. It's something we do in the UK - not sure about the US and others.

Yes, you are also paying/being graded on the work. That's a whole lot of direct incentive that isn't present for a project like this.

Seems like you have the bar set pretty high for what you consider to be a good designer. Would you mind sharing some of the work your studio does?

I am by no means a designer, but from an outsider's perspective (after all, designs are consumed by regular folk like myself, not necessarily other designers) Lennart's work seems pretty well done. Especially considering that he is at the very start of his career.

>So this tells me that the candidate is not a strong graphic designer, not a strong prototyper, nor a strong researcher.

Oh please, he's certainly a strong graphic designer, by the looks of the website he certainly has web development abilities, and if you want people with so much previous experience, I don't see why you even feel the need to rip this guy apart.

I have my own complaints with parts of the design, but overall it shows a lot of hard work, a lot of great ideas, and a hell of a lot of creativity. He'd be incredibly valuable for any company lucky enough to pick him up.

The ol' HN trope where someone tries to wow the rest of us by pointing out just how unimpressed they are.

Even snuck in an edit to try and disarm future criticism by preempting it. Bravo.

As someone who has also seen my fair share of portfolios (I used to run a fairly large design agency in Denmark), I am trying to understand what part of your critique is constructive.

It basically boils down to: you are not good enough. That is hardly constructive criticism at all; it's just criticism.

I will gladly forward his portfolio to anyone looking for an intern. It should be fairly easy to assess in an interview whether this guy is the real deal or not. And if he is, he's got a great future in front of him.

+1 for #2. 6 fingers for what will likely be a fairly regular action is clunky at best. The way it's illustrated makes me think it was more along the lines of a board you stand at and not a desktop.

I'd love to know who you are working for.

Some place with such incredibly awesome design that this isn't even worth a quick phone chat about a lousy unpaid internship.

Whatever this place is, they must have standards even higher than Apple's or Google's.

Looks like those companies where you need 10 years of experience in Sketch/Node/SCSS for a junior position.

>4) awkward copy that jumps from marketing-style speak to explaining interactions from an analytical point of view. This is the most minor of all points but just hurts the polish of the piece.

This is important, and often separates the good from the bad...can't count the number of websites I've visited, or products I've considered buying, where I was stopped short because of marketing that appears to have been hastily constructed, or that undervalued consistency in presentation...

That always puts my head on a swivel--are the ideas being presented, the product being sold, etc., of actual value, when the front-end appears thrown together, or carelessly edited?

You don't do an admitted novice a favor by pulling punches...they deserve honest feedback...they can aggregate opinions, and come to their own conclusions as to how to move forward...that's crowd-sourcing in a nutshell, right?

would be funny if this guy became the next Jonathan Ive so we could see this feedback in a different light 10 years from now.

I'm perhaps in a minority here, but Jon Ive isn't that great. I mean, my iPhone should have a notification/indicator light on the front. It wouldn't have been that hard to put an LED behind the glass, so that if I leave my phone in the other room and receive a message or something, then simply walking past my phone, the blinking light would catch my attention.

One of the most widely respected and awarded designers in the world... not that great. Perhaps you should re-evaluate your measure of greatness?

I had this feature in my old blackberry. It sounds like a good idea but it ends up making me tense every time I see the thing blink. I much rather like checking my notifications on my terms by turning on the screen.

"Let me put it this way - ever heard of Plato? Socrates? Morons."

I think you are also in a minority wanting that blinking light. I don't even want email notifications when I am logged into my phone.

Right, but this way to check your messages, you pick your phone up and interact with it constantly. That's good for Apple.

Who is a good designer then ?

What's the name of your company?

Your company is probably doing it wrong if this guy isn't worth an interview.

> It seems like a lot of people skimmed without reading the top or bottom of this site:

This is the nature of Hacker News recently: people post as soon as possible to demonstrate their knowledge on a related subject. I've twice recently had articles posted on HN where a commenter loudly ranted about the article missing something that's covered explicitly in the first 3 paragraphs.

dang how about adding 'RTFA' to https://news.ycombinator.com/newsguidelines.html ?



The problem with the tablet analogy is nobody seems to know how to design complex, professional-level (like Photoshop, or Blender) applications in that paradigm. Tablet apps are overwhelmingly consumption-oriented, and those that aren't all seem to be oriented around popular (and populist) appeal.

Why is everyone out for the blood of the desktop/windows paradigm? Why can't the simplified-tablet market and the desktop-power-user markets coexist? Windows and taskbars and start menus are wonderful and my favorite way of interacting with computers, why must it be taken away?

Right, this seems to me to be a tiling window manager with a touch interface. This is kind of an odd combination too, because most tiling WMs are keyboard oriented, whereas most non-tiling WMs are pointer oriented. Maybe that's just historical and doesn't really make much of a difference though.

But with i3, I have keybindings to "open the last browser window I used" or "open the last gvim window I opened." I can tile them in different workspaces as I please, extremely efficiently and quickly, and move them around as needed. I can move much faster using the keyboard than on my Windows machine. Having this tiling ability on a tablet seems odd to me.
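For anyone curious what those bindings look like, here's a minimal i3 config sketch (the key combos and window class names are my own invented examples; the bracketed criteria syntax is i3's):

```
# ~/.config/i3/config (sketch)
bindsym $mod+b [class="Firefox"] focus              # jump to the last-used browser window
bindsym $mod+v [class="Gvim"] focus                 # jump to the last gvim window
bindsym $mod+Shift+2 move container to workspace 2  # retile into another workspace
```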

I am kind of getting tired of the "attacks" on the traditional desktop interface. People just don't want to admit that the mere existence of tablets doesn't make them a superior interface. Just a different one.

The interface they propose is beautiful but it would really have to walk a thin, thin line between usability and screen space.

I do like the idea of a tag-based filesystem though. Obviously then you get into the problem of managing your tags (a tree-based filesystem solves this problem naturally).

I would personally kill for better mouse integration in i3 (which I use on all my daily-use machines), and I would be extremely happy to augment it with touch screen support (which I would largely want to use for tile manipulation).

In fact, a main reason I use i3 is because it is literally the least ideological tiling window manager when it comes to mouse use. The productivity gains I get out of tiling don't come from using the keyboard, they come from not having to deal with the (imo, especially in the post-1024x768 era) very messy overlapping window model. Things are where I expect. I can move them from one screen to another and sane things happen. Plugging in and unplugging a display doesn't make hash of my organization. Things are never buried under incomprehensible layers. And so on.

> I would personally kill for better mouse integration in i3

I agree, it was one of the things I was missing back when I was using i3 + Ubuntu as my daily machine. Now, I'm using OS X with Amethyst[1], which while it's less powerful than i3, combined with OS X's window management features I honestly feel like I get the best of both worlds. It's quite great, to be honest.

[1] https://github.com/ianyh/Amethyst

> I would personally kill for better mouse integration in i3 (which I use on all my daily-use machines), and I would be extremely happy to augment it with touch screen support (which I would largely want to use for tile manipulation).

This sounds interesting. Would you care to elaborate?

I know at least KDE's Baloo indexer uses tags on files when available. If I search for the word "cookie" I get every file named cookie, anything tagged cookie, contacts, bookmarks, and any parsable text document containing cookie, including emails and IM logs.

I think the broader problem is that there is this awkward question of where tags belong in the application hierarchy. Right now most media files embed tags while text files do not. Binary files certainly do not, and folders are not true files and thus are not on-disk things to put tags on; they are usually filesystem constructs in a data tree somewhere.

Your file indexer could maintain the tags database, but then they are not file-portable. They could be part of the extended file traits in your filesystem, but then filesystems lacking support (including online uploads) would drop them. Thus we are at a point where tagging is done on a per-format basis, and thus some files are just not taggable but instead require indexing.

You would probably want to throw a semantic relevancy engine on top of that to associate vocabulary with similar terms, so if I search for food I can get things like cookie as a related search term.

In that case, for a tag-based filesystem, there is tmsu [1].

I will try it soon with ranger [2]; I don't think managing tags with it will be a problem, as you can drop to the shell wherever you want.

[1] http://tmsu.org/ [2] http://ranger.nongnu.org/

Looks great but what happens when you move files from one system to another? Can one keep the tags?

It looks like it's "hard" to do correctly, so TMSU doesn't do any magic but will try its best through the "repair" command:


>I do like the idea of a tag-based filesystem though. //

Ditto, I use tags heavily for web content and photos and would love to do the same for local files.

Seems that it might be possible to overload soft-links to do the job - create a hierarchy of tags and then put a soft link for each file in the relevant place. That way you could use current filesystems and filemanagers whilst you develop the concept and apps.

Someone mentioned file searches in Ubuntu (using Baloo), but the default file manager needs an interface for applying tags as an MVP of file tagging.
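Here's a toy sketch of that soft-link approach: each tag is just a directory, and tagging a file drops a symlink to it there, so any existing file manager can browse tags as ordinary folders. All paths and names are invented for illustration.

```python
import os
import tempfile

root = tempfile.mkdtemp()
tags_dir = os.path.join(root, "tags")
doc = os.path.join(root, "report.txt")
with open(doc, "w") as f:
    f.write("hello")

def tag(path, *tags):
    """Tag a file by symlinking it into one directory per tag."""
    for t in tags:
        d = os.path.join(tags_dir, *t.split("/"))  # "work/2016" nests naturally
        os.makedirs(d, exist_ok=True)
        os.symlink(path, os.path.join(d, os.path.basename(path)))

tag(doc, "work", "work/2016")
print(sorted(os.listdir(os.path.join(tags_dir, "work"))))  # → ['2016', 'report.txt']
```

The nice part is exactly what the comment above suggests: no new filesystem needed, and nested tag paths give you the hierarchy for free.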

I want hierarchical tags, e.g. "comp.lang.ruby.rails" instead of just "ruby" and "rails". And of course, when I search "comp.lang.ruby", I want to see results from all of its children.

The problem with traditional hierarchical organization is that objects often belong to multiple categories. With hierarchical tags, that problem disappears because you can easily assign multiple tags to the same object.
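The lookup rule being described is simple to sketch (tag names and files here are invented): a query matches a tag when it equals it or is a dotted-prefix ancestor of it, so searching a parent automatically pulls in its children.

```python
# A query matches a tag if it is the tag itself or a dotted-prefix ancestor.
def matches(query: str, tag: str) -> bool:
    return tag == query or tag.startswith(query + ".")

# Collect every object whose tag matches the query.
def search(index: dict, query: str) -> set:
    return {obj for tag, objs in index.items()
            if matches(query, tag) for obj in objs}

index = {
    "comp.lang.ruby.rails": {"guide.md"},
    "comp.lang.ruby": {"notes.txt"},
    "comp.lang.python": {"script.py"},
}

print(sorted(search(index, "comp.lang.ruby")))  # → ['guide.md', 'notes.txt']
```

Note the `query + "."` check: it keeps "comp.lang.ruby" from accidentally matching an unrelated tag like "comp.lang.rubyonrails".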

Actually, we can already do this with hierarchical filesystems and hard (not soft) links.

Do they really have to be hierarchical? Your example would work with AND.

What if I have two dozen ruby-related tags, not just rails? In order to search all of them without hierarchical tags, I would have to either add the "ruby" tag to all of them, or run a search with a lot of ANDs.

I already do this to some extent with my 11,000+ bookmarks on Pinboard. Entering the Usenet-style tags is really easy thanks to autocomplete, which also helps me avoid ending up with misspelled tags. Pinboard currently lacks searching by tag prefix, but it's trivial to script it locally using the API.

Talk of tag/DB based file systems always reminds me of WinFS[1]. Microsoft were trying to build a tag (well relational database) backed file system since 1990... eventually abandoned - presumably due to back-compat issues.

1. https://en.wikipedia.org/wiki/WinFS

Lots of reading material about the conception and death of WinFS: http://www.metafilter.com/134799/WinFS-what-it-could-have-be...

This is exactly the idea of Desktop Neo. If tablets and phones are used for consumption, the desktop can be focused on productivity.

I don't want to replace the complexity of the desktop with interfaces that are easier. I want them to be better, to be more efficient for professionals.

Concepts like panels, tags and touch input are probably way harder to understand than windows, folders and mouse input. But it doesn’t matter, because professionals spend so much time on desktop computers that it's worth the effort to master the interface.

> I want them to be better, to be more efficient for professionals.

This is a good goal. But gestures and multitouch will not lead you on the path to that goal. Consider tiling window managers like awesome or i3, that already today help programmers (myself included) be more efficient. They achieve this by reducing time-consuming dragging, dropping, resizing and clicking actions either by replacing them with keyboard shortcuts or simply by automating them away. A UI where you have to move your hands even further (distance keyboard-screen > distance keyboard-mouse) will actually decrease productivity (and likely cause RSI problems as well).

I realise this is a mockup/concept design. But if you want to pursue the idea further, I very much recommend using a tiling window manager for an extended period of time, to learn from that paradigm.

The mockup looks like i3 + a graphical rofi[1] + touch/mouse integration.

This is a very powerful paradigm to pursue: An intuitive version of tiling WMs that appeals to everyone. Go, ziburski, go!

[1]: https://davedavenport.github.io/rofi/#window-switcher

I do think you are on a good path with tags.

Back when I went to Gmail from a folder type email representation, it was hard at first. But search proved superior. Basically, a lot of organizational time investment had no return. Getting better at search always has a return, and you save all the laborious folder management.

I've taken to one or two big directories filled with reference material, docs, pictures, etc... search generally pays off.

Smarter tagging is likely to work in similar ways.

My beef with search-based, tag-based UIs are they are not great at browsing content. Sometimes I want to just review and look at older content, not search for one particular thing.

I will plug in a date, but yes, agreed.

As a developer, I unfortunately have to live in a fundamentally different paradigm where file systems are inescapable (unless OSes change dramatically under the hood), keyboards are essential, etc. etc.

But when I'm not trying to write code I can see this working.

This is a long response, but I think it might be useful to read.

There are a lot of attempts to redesign the desktop or do some clever thing to increase productivity, but all of them tend to forget the fundamentals of interacting with the computer. They forget why what we have is successful and seem unclear as to what they are trying to move towards as an improvement.

That probably seems like a bold statement, so let's look at a few things.

Let's look at why something like paper is still around. Do we care that one piece of paper might hide behind another in a stack of papers on a desk? Is that a terminal failure of the medium? I'd say no. Just like windows hiding other windows, it's not a fundamental issue that needs solving.

Why else might paper be successful? I'd suggest it's not the paper itself, it's the way we interact with paper. We don't use our hands to mark the paper; we use tools: pens, brushes, pencils, chalks and all manor of things. Paper is a tool that allows us to get the best from a range of our 'marking' tools.

Likewise, if we are to interact via touch with the desktop, it stands to reason that to get the best from the experience we need a range of tactile tools to do so. We need our keyboards, our mice, our touchpads, touch screens and styluses. Concomitantly, we need our desktops to allow us to get the most out of these interaction tools.

To my mind, the problem with the desktop isn't that it needs a fundamentally more restrictive use pattern, it needs a better use of the available control mechanisms.

Let's think about why Vim is so enduring and successful. Vim does only one thing, manipulate text, and it has a mode of operation totally alien to many who've used a text editor. Yet for decades now it's what many gravitate towards as the pinnacle of interacting with text on a computer (shout out to Emacs folks, this is just an example!).

Vim's method of operation seeks to ruthlessly release the power of the keyboard to offer huge benefits in terms of productivity and manipulation.

What is your design seeking to ruthlessly release from the power of our interactive devices? Why are our existing tools better for interacting with your design than with a traditional desktop? What power are you releasing from them?

All of this is to say that whilst you may redesign the desktop and have clever (and useful) schemes to increase productivity, these designs don't pay enough attention to the things we use to interact with the computer, and so don't offer true lasting benefit and therefore don't catch on or endure.

If you redesign the way the desktop works, you need to either re-imagine the tools we use to interact with it or take measures, without pity or care for convention, to release the power of the existing ones.

Apple has done an interesting job in this area with its move to bring touch gestures into the control of the desktop, but I don't think it's been a fully realised opportunity.

polite spelling correction: it's "all manner of things", not "all manor of things".


I can see the reason from the power user's point of view. I've noticed that quite a lot of power users don't use desktops (or window affordances) at all! Their only use is to provide a place for a cool background picture. I see a couple of reasons:

- having to use a mouse (or, God help us, a touchpad) is annoying when you have to switch between it and the keyboard

- using the mouse is slow

- windows on traditional desktops take way too much space (compare with how much better they look on tiling WMs)

- ALT+TAB (and equivalents) suck - you can only switch back and forth between two applications; anything else requires paying attention

- window locations and sizes are ephemeral, which again is annoying when you want to set up a good working environment

- wheel menus are awesome and the fact that pretty much nobody is doing them baffles me

Tablets suck for creative work, I agree, but the way I see this project is not as a nod towards touch interfaces. On the contrary, it's about using sane(r) window management, combined with tags, eye-tracking, voice control and wheel menus to minimize the amount of effort you need to make to issue commands or search for things. It sounds like a perfect addition to keyboard-driven work style.

In my world (music, concert, theater production) we depend on a lot of equipment which is basically a computer and a custom physical control surface, retailing for 10x the price of the computer. The keyboard and mouse interface is simply not good enough to replicate faders and knobs. The tablet interface, on the other hand, is. This has important implications for light boards, sound boards, and DAW controllers. For many of them the domain-specific IO by itself isn't actually that expensive and the software can run on commodity computers. We were only locked into custom hardware because we needed its control surfaces to work efficiently.

Existing desktop workflows may not be improved, but existing custom-control-surface workflows can be made vastly cheaper and more flexible, consolidated onto relatively commoditized hardware, and in some niches this is happening.

There's a lot of this sort of thing showing up in the DJ/VJ space and with visual and audio performance in general.

I think there are definitely strengths and weaknesses to touch-oriented control surfaces. On the negative side, they require you to look at the control surface far more often than a setup with jog wheels and sliders does. But on the positive side, you can customize the control surface and get a more "analog" level of control than pointing and clicking or typing allows.

There are a lot of less expensive options that use more common USB interfaces to map hardware controls to software functions which can really help if your only other option is that pro (read: expensive and proprietary) gear you mentioned. Still, you're always limited by the number and layout of pads, buttons, sliders, and dials that can be mapped to software functions.

I've played around a bit with software like TouchDesigner which is meant for creating custom touch interfaces and programming the ways that each control affects parameters in software. It's like a more open and flexible alternative to the old Crestron and similar systems where you could define the control surface in software but were limited to their processors and peripherals.

It's something I don't really do much in my work (since we use those aforementioned Crestron-type systems) but in my spare time, I like playing around with using software-defined interfaces to make interactive audiovisual projects.

I'm surprised that no one's made a small add-on control with faders/knobs to work with Photoshop for font-sizing, opacity, and all the other things usually done with the tiny palette interface.

Someone pointed me towards this: http://palettegear.com/how_it_works

I think some of that results from the limitations of mobile OSes: everything is tied to an app store, jailed, etc. A mobile device is not a "real computer" not because of any hardware limitation but because the OS is designed for it to be a dumb terminal to access the cloud and consume content and rapid-interaction services.

It maps well to the sorts of things you want to do with a tiny device in your pocket, but it's not suitable for producing anything other than selfies.

This is the hole in the "mobile is the future" argument. Mobile (as it exists today) is only the future if nobody has anything substantial to say or create and we are all just passive consumers.

This is really great desktop UI work and I do hope someone takes a look and gets inspired. It would really be nice if Linux desktop efforts stopped trying to deliver Windows 95 or early versions of MacOSX in 2016 and actually innovated.

Also, the screen is smaller.

A lot of tiling problems on the desktop would disappear if monitors were two or three times (or more...) as large on each side and wall-mounted.

Being able to see most of your windows at the same time and work in any one without moving or switching would be revolutionary.

I'd never heard of 10/GUI. I don't particularly like the windowing scheme, but I think a ten-way touch interface which flipped between being a multi-pointer and a text keyboard - possibly with a touch point in one corner to switch modes, and not that show/hide thing tablets do now - would be a very interesting thing.

It wasn't practical when 10/GUI was first discussed. It's getting more and more practical now.

>It would really be nice if Linux desktop efforts stopped trying to deliver Windows 95 or early versions of MacOSX in 2016 and actually innovated.

Any innovation has to be much better than current practice. If it's "interesting, but..." it's not enough to get people to switch.

Talking about Blender: I think that the way windows work in Blender could make a great OS window manager. Divide the panels the way you like and select a program, or part of a program, to show inside each panel.

There are window managers that can do this, but IMHO not as simply as Blender.

It would definitely take some changes to make it work at an OS level, but Blender's modular window organization system would be awesome for more uses! It definitely is power user only, such a UI style would overwhelm most users with the raw number of choices.

What do you think generates the most money for Apple and Google... You being reduced to a consumer that buys movies, music and apps in their stores — or you fiddling around in Logic or Photoshop?

This is a shameless plug, but I've been developing something involving 'scrolling with a resting thumb' to bring up a quasi-modal menu, intended for applications where one is creating content in a spatial context… (i.e. here's a gif demo with Photoshop: https://twitter.com/vivekgani/status/659437589896630272 ) The intent, though, is for folks who want some middle ground between having to remember a bunch of keyboard shortcuts and mousing out to click something. This is an actual working app that works with Sketch & PS, and more apps soon - if you'd like to try it, email vivek@thimbleup.com and I'd be happy to share it.

The OP had a similar thing with the radial marking menu, though I've shied away from that approach since it doesn't scale with lots of icons/options unless you do hierarchies - also I simply wanted to try something different. One random thing I stumbled upon yesterday is an old Autodesk Research video testing these against other methods: https://www.youtube.com/watch?v=YHZB0d20640

I think this is the best objection I've seen to what could otherwise be a pretty cool idea to try and refine.

The answer to designing complex, professional-level applications in that paradigm is "make it fullscreen, and use a keyboard and mouse". Tablets just don't give you the precision and screen space to work with extremely complex tools.

I think Windows 10 actually manages this well. I'm using a Surface Book, and despite its various issues, this tablet/notebook hybrid actually works quite well. Plug the screen into the base and I have a full blown, high res screen with a keyboard and dedicated GPU for my coding/Photoshop/Blender. But if I just want to consume, I can pull the screen off and sit on the couch.

> Why is everyone out for the blood of the desktop/windows paradigm?

Honestly, I think a large part is that Microsoft dominates the desktop market and is failing in the tablet market.

Suggest looking into the work that the Penny Arcade guys have done with Microsoft to make the Surface Pro good for illustrators.

Agreed, the folder metaphor works as it's a direct visualization of the hierarchy of the computer.

I remember what Yahoo! was originally like: a nice, hierarchical categorization of websites. That was great when there were 1,000 websites. Maybe even okay for 10,000.

Google quickly became an obviously superior approach.

Back when Yahoo! was a categorized directory of sites, my computer had probably 1GB of storage. I didn't have a digital camera or many thousands of email messages. I didn't have as many documents that I either created myself or was given by someone else. I didn't have ebooks. Or mp3 files.

The hierarchy that we have is not necessarily a given and even if it exists on disk, the UI metaphor does not need to be tied to that paradigm. Perhaps there is a better way for the things we do today?

I think the central problem is that few userspace programs get as much engineering and tooling thrown at them as filesystems do. iTunes and similar apps, which implement their own abstractions for organizing content, have always trailed the filesystem for me in the following aspects:

a. Hard to access the data outside the single ui

b. Performance of data import / export trails raw folders / file systems

c. Hard to figure out how to atomically back up the content in an incremental way.

d. No programmatic access.

Perhaps, but I think the answer lies more in backend tweaks than a UI/UX overhaul. File/folder tagging, for example, could integrate easily into existing UI paradigms without having to overhaul (or remove) the files/folders analogy. I agree that dealing with folders gets too complicated above a certain threshold, but to me that signals a need for additional (thoughtful) tooling.
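As a sketch of that "backend tweak" idea, tagging can live in a small sidecar database that layers on top of the existing files/folders model rather than replacing it. The table layout and file paths below are invented for illustration.

```python
import sqlite3

# Sidecar tag store: files stay where they are; tags live in a tiny DB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tags (path TEXT, tag TEXT)")

def tag(path, *names):
    """Attach one or more tags to a file path."""
    db.executemany("INSERT INTO tags VALUES (?, ?)", [(path, n) for n in names])

def files_with(name):
    """Return all paths carrying a given tag."""
    return [row[0] for row in db.execute(
        "SELECT path FROM tags WHERE tag = ? ORDER BY path", (name,))]

tag("/home/me/report.pdf", "work", "2016")
tag("/home/me/photo.jpg", "2016")
print(files_with("2016"))  # → ['/home/me/photo.jpg', '/home/me/report.pdf']
```

The trade-off is the one raised upthread: tags stored this way aren't file-portable, so moving a file to another machine loses them unless the DB travels too.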

There are a few things I like in this (particularly the Finder bit), but so much of this is just a bunch of extraordinary statements with absolutely no universally-accepted justification, spoken as if they have some sort of universally accepted justification.

For example, "Panels use screen space more efficiently and are a more elegant way to multitask than normal windows."

Says who? You? It just drives me crazy when I see statements like these. It's not an effective way to get your point across. You want to make your case for something like that? Actually make your case. Present some evidence and your conclusions. Not everything has to be a Jony Ive marketing video.

edit: Last thing I'll say - I just re-watched the video, and caught the last line - "it rethinks desktop computing to help you get work done". My advice to the author - go work for various companies for 5-10 years, and then come back and see if that statement holds up. My ability to get work done would be crippled with this; in fact, my work would come to a grinding halt. "Work" just doesn't work in the kinds of idealistic ways these types of marketing-like videos always seem to show.

And don't get me wrong, I like the author's ambition. If I was in the internship-givin' business, hell, I'd probably consider him. I think this is a good way to get your name out there, even if it attracts criticism (like mine).

Even as a designer, I think you hit the nail on one of my biggest criticisms of the profession: rationales without evidence.

Most of the time, design rationales seem to be based on the subjective POV of the designer rather than conclusions drawn from recorded feedback.

No, no, he's done research, right from the start. Says so right on the page titled "design process".

That's how he knows, I'm sure. He wouldn't just be making that stuff up out of thin air, and surely the "research" and "design process" did not exclusively consist of the previous iterations of mockups we can see on that page.

Certainly he's also written the technical design document that describes a clearly outlined goal at the start, works that out referring to well-known theories, studies and previous research done in the field of UX design.

Obviously the statements about efficiency have been corroborated by hard numbers obtained from use cases and user studies--done on people other than himself or his professors--about discoverability and efficiency, employing user tasks and scripts (as described in the literature he must be familiar with). He's a student of interaction design at a university. So he must certainly know that research implies science, that interaction design involves actual social studies, and that building mockups, animations and a gorgeous website with demo video is only a tiny and not even that important part of the field.

Certainly it can't be that this guy is in fact quite talented at marketing and that's why he's currently at the top of HN.

The biggest thing I find lacking in desktop UI is the ability to save and switch between states. We need the idea of project 'sessions' where I can open up the software I need to use, pull up the files I need, set my favorite folders, etc and then hit 'save' and be able to restore back to that setup at the click of a button.

Right now, switching between projects means setting all that up every single time.
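For what it's worth, the bookkeeping side of such "sessions" is the easy part; the hard part is getting every app to reopen its documents and window positions. A minimal sketch of just the session store, purely illustrative (the function names and JSON layout are mine, not any real OS API):

```python
import json
import subprocess
from pathlib import Path

def save_session(session_dir, name, commands):
    """Persist a named session as a JSON list of launch commands (argv lists)."""
    session_dir = Path(session_dir)
    session_dir.mkdir(parents=True, exist_ok=True)
    (session_dir / f"{name}.json").write_text(json.dumps(commands))

def load_session(session_dir, name):
    """Read back the launch commands recorded for a named session."""
    return json.loads((Path(session_dir) / f"{name}.json").read_text())

def restore_session(session_dir, name):
    """Relaunch every application recorded in the session."""
    return [subprocess.Popen(cmd) for cmd in load_session(session_dir, name)]
```

A real implementation would also need per-app state (open files, scroll positions), which is exactly the "deep stateful awareness" problem raised further down the thread.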

I agree whole-heartedly. I was excited when I heard Windows 10 was adding multiple desktops, however I find they are quite lacking in use. For example if I want to move a single window of Notepad++ to a different desktop it moves all of the Notepad++ instances.

On top of that, I really don't want all of my software always open in a separate desktop. There is no need for me to keep Photoshop open, using resources, while I have transitioned from work to play time. I'd much rather have a "working" session saved and a "gaming" session saved, wherein different programs are auto-opened when I start a session.

>For example if I want to move a single window of Notepad++ to a different desktop it moves all of the Notepad++ instances.

I think this is a problem with Notepad++, because I can move around single instances of all other apps I tried (Chrome, VS, etc).

>On top of that, I really don't want all of my software always open in a separate desktop.

Ideally the user should not need to worry about "open" apps and the resources they are taking up just by being open (not doing any work). It is the app's and the OS's job to ensure that background apps do not use CPU for doing nothing and the memory management is good enough that you always have enough memory for what you wanna run NOW. And I think that is quite good in Windows 10 and OSX right now. I can have multiple VS projects, a couple of VMs, multiple SSH sessions, Outlook, couple of Office docs, ~30 tabs across several Chrome windows open and I can still create a new Desktop and play some Need for Speed when I reach home.

State preservation, generally, is a huge failing of computing. From losing my desktop arrangements in Linux between reboots (or even logins), to Web tools that lose comments and posts, etc.

Being able to specify a workspace state and return to it is hugely significant. But that requires deep stateful awareness of pretty much every app, including terminals, and the command / console-mode tools running within them.

But at the higher level, I'm liking what I'm seeing here.

Agreed, where this would be tough to implement would be with all the individual programs and their states/settings. I guess you could sort of get around that by running everything in a virtual environment potentially?

Eventually you've got to restart something. Virtualisation actually adds a layer of complexity. Strikes me that's going the wrong way.

I found virtual machines to handle this quite neatly. I use Windows as a host OS and have to use desktop Linux in a VM for some tasks. What I noticed was that once I don't have to do those tasks anymore, I just hibernate the VM, and when I start working again I resume the whole VM exactly where I left it, with all the required applications still running inside.

If you were to use one VM per project, though, one obvious drawback is that you would have to maintain a whole bunch of OS installations. It's also a bit tricky to be disciplined and use the correct instance of an application if it exists in both the host OS and the guest OS (even more so if you are using seamless windowing mode). Firefox, for example, I prefer to use in the host, because that one has all my extensions installed, but that means all the tabs I open that are related to the project will spill over to the next session when I close the project and switch to play mode.

I believe that is what KDE is trying to achieve with "activities".

Certainly, but KDE activities are overly complicated and lack a unifying metaphor.

Last time I checked they were neither virtual desktops nor IDE workspaces; it was hard to think of an activity as a container that holds together all the instances of data related to a project.

They interacted with open applications in weird ways, rather than having one instance of the application associated always to the same activity as you'd expect.

Hey Trevor,

You should check out atlas.co and sign up for the beta for an invite. I'm currently working on this project, and it is exactly aimed at the problem you're describing.

More details:



Thank you for the reply! I will check this out. Interested in hearing more about this, seems like a challenging problem to solve!

Absolutely, big problem and no existing solution. Would love to hear your thoughts once you get access :P

I believe KDE has had support for sessions like that where it saves all your open applications, documents, folders and desktop widgets since at least KDE 4. You can switch between those sessions using the cashew thingy.

Hmm, isn't that "just" a tiling window manager at the bottom of a few other nice things?


While feeling the shortcomings of a window system, I never felt like I could productively work with a tiling manager, since I frequently overlap windows as a mechanism for easy access ... and there are lots of popup windows I don't want to cover the main application.

Oh yes, and I purposefully keep my Mac apps out of Full screen mode (and use a keyboard shortcut to maximize when needed, keeping out of full screen).

> I never felt like I could productively work with a tiling manager, since I frequently overlap windows as a mechanism for easy access

Part of the idea of most tiling window managers is that you use the keyboard to control them, not the mouse. As such, you no longer need to overlap windows to quickly switch to the one you want. You have keybinds to do so (which actually exist in the default Windows and OSX window manager as well.)

Seeing more than one at a time is high value.

I personally hate the basic tile schemes, and full screen in many of my use cases. Yes, when I'm using one app hard, full screen is good.

When I'm using multiple ones, I often want to copy paste info, or just refer to it. Auto pop to foreground is toxic, it forces a change in state when there is no need for that change most of the time.

On IRIX, focus follows mouse, and middle button copy paste were awesome. Still prefer that over other schemes.

Many people get a second screen to accomplish what can very easily be done with one a lot of the time.

Tablets are fine consumption devices. Some creative work is there, like graphics, but most creative work has UX requirements that exceed the simple touch sweet spots.

Linux uses middle click for one of its clipboards. Most window managers have focus-follows-mouse as an option.

Yes. I'm aware of and will use those features on Linux.

I don't use nor like tiling much, due to overlaps and how I find myself using them.

Some here are saying more iteration could improve on that. Maybe so. I'm sure not opposed to giving new things a go.

Just so you know, xfce's window manager (or at least the xubuntu version of it -- there may be patches not present upstream) allows focus-follows-mouse and has overlapping windows...

EDIT I might be confused here. I'm reading that you are assuming that one has to use a tiling WM to get one of the features you want.

Not at all. It's all good. I know about xfce, etc...

I do prefer overlap with those features, among others, present.

i3 defaults to focus following the mouse and middle pasting

I think this is just a question of somebody doing it right. We just don't use this right now, because what we have now is too crude. But give it the right number of iteration trials, and I'm pretty sure a tiling window manager can be competitive, and more so in a more touch-oriented desktop.

My hunch is that it's just lacking a good amount of iterations. Our modern windowed desktop didn't come out ready either. So give this some love, and I'm sure something really cool and usable can bootstrap from this.

Yes, panels follow similar ideas and have similar advantages to tiling windows. However, I don’t think that the benefit of being able to work with windows that aren't full-height outweighs their added complexity compared to using only full-height panels.

With today's technology and established interface patterns from mobile, I believe it's feasible to take the ideas of tiling window managers beyond a niche market.

First: full-height maximisation is sufficiently useful to me that I gravitate strongly toward environments which allow me to hotkey this. WindowMaker's remained my go-to largely for this reason, along with a handful of others.

Second: less-than-full-height windows can be quite useful. Being able to specify a "stack" into which, say, I place a video (or video-chat) app, various A/V controls, and possibly some monitors or other smaller windows, perhaps a shorter terminal window or three, is easily something I'd like to do.

With WindowMaker I can _arrange_ windows in a pseudo-tiled state. Unfortunately it's too easy to accidentally drag one off somewhere. Generally, though, this can be quite handy.

What I'd really like to be able to do is to specify roles for certain window locations. Say, a browser, image editor, and audio editor all occupying the same larger window space, which I can tab between.

And I think you're quite right about applicability of tiling metaphors.

I've sometimes thought the most egalitarian way to design a GUI would be to make it exquisitely accessible to the blind. This would force designers to organize their GUI with sensible, navigable hierarchies of data and controls.

To me, an exquisitely blind-accessible GUI should function like a fancy keyboard-navigable/editable graph data structure that echoes the hierarchies and relationships represented on the sighted display. The biggest boon of such a GUI would be that the blind-accessible controls would function as an "expert" mode of navigating it - one would never have to touch the mouse or trackpad to get stuff done.

Sighted users would benefit from learning these keyboard controls, and we'd inject some hyper-productivity back into our apps to counter the Fisher-Price'ification that has been creeping into GUIs over the past 10 years.

This is a very good point. Much of this UI is based on visual clues and accessibility seems to be an afterthought.

A three finger tap or swipe is not easy for everyone. I'd like to see more (digital) navigation based on actual (physical) navigation -- waypoint labels and directional paths.

Any swipe is virtually impossible in an office situation. Not everyone works hunched over a MacBook. Reaching up to my monitor to swipe would destroy my shoulder.

Touch screens obviously aren't the answer here, but a good trackpad setup might be okay. I used to use Apple's "Magic Trackpad" when I had a Macintosh desktop, and it did gesture inputs really well, though I always seemed to run into discomfort/strain issues earlier than I did with a mouse (which is part of why I switched back). That could be unique to me, but I'd guess that the need to hover your hand above the pad is not good for anyone. I'd be up for trying that setup again if someone had a good idea to solve that problem.

First paragraph in and I'm already not their customer:

"We now use smartphones and tablets most of the time, since they are much easier to use."

No. Just no. I don't want to use a touchscreen and closed ecosystem to develop software. That would be a nightmare.

The rest of the design seems to be taken straight from 10/GUI.

I agree! Desktop Neo isn't about using touchscreen or tablets to replace the desktop. The opposite, actually. It tries to move the desktop even further away from phones by rethinking it for professional users.

And yep, the first part about using panels was very much inspired by 10/GUI. I am linking to that concept on the website, and also talked to Clayton Miller before publishing Desktop Neo.

I can see that most comments here didn't read your entire article. I work at my two-monitor desktop 90% of the time. I really like your ideas. Most of the time I only need to see two application windows at a time.

Maybe your article needs to immediately clarify who you are and the intent of the design, and maybe differentiate between your design being intended for producers vs. consumers.

I wasn't aware of 10/GUI

Thanks for sharing!


It seems that the tiling windows [0] of Desktop Neo or 10/GUI would not work on very large monitors like a UHD 40". E.g. the Philips [1] has such a large viewing angle for the lower two corners that they are less usable. Personally I use the middle for active windows and move others to the left & right. [0] https://en.m.wikipedia.org/wiki/Tiling_window_manager [1] https://pcmonitors.info/reviews/philips-bdm4065uc/

As an aside, I own the Philips linked and it is absolutely fantastic for SWD. My Windows dev environment is essentially a poor clone of a tiled window manager, as I have zero overlap between my windows due to the insane amount of real estate.

I haven't seen that. That's really cool, and current applications should drop right into it, too. Disconnecting the touch surface from the mouse solves a lot of the current problems with touch screens.

Does the kind of large multitouch surface with flyover support needed by 10/GUI actually exist? Because I think I want one.

> "We now use smartphones and tablets most of the time, since they are much easier to use."

Interesting. I have the opposite impression. I think _customization_ is the key for real productivity. The more customizable the system the more productive I can be with it.

Phones and tablets are easier to use as long as we don't deal with more than two apps at a time. The Neo GUI is an interesting idea, but productivity for what? As for me, I prefer a window pager where I can address any window instantly with just one mouse click. Swapping between several pages all the time would confuse me.

> "Windows are now inefficient and incompatible with modern productivity interfaces. ... "Window Management is Outdated"."

No, surely not. I would always prefer a PC with a customizable GUI (KDE and OpenBox for instance). I barely use my phone and pad because they are only useful for basic things.

"We now use smartphones and tablets most of the time, since they are much easier to use."

Isn't this the kind of thinking behind Windows 8?

>Not their customer

They're not selling (or offering, for that matter) you anything. This is a design concept, not a product.

This is a local bit of English slang. Essentially it means "this isn't for me" not that we are actually not going to buy the product or service, literally.

This is still hacker news, right? So hypothetically we're talking silicon valley startups. How many of these kinds of places are heavily reliant upon outside companies like those mentioned by @GuiA to build prototypes of their designs?

I have seen plenty of consultants hired, but nearly every startup I've seen or been at/around (~20) prototyped their designs in house. As an example bu.mp hired an industrial design firm to help them make physical prototypes of a POS competitor to NFC technology (which ultimately failed), but had their own engineers and designers actually create and test the working prototypes.

Shortcomings notwithstanding, I think this is an example of excellent work and a creative way to find an internship. Given the opportunity, I'd most certainly offer this guy an internship if I could.

Nice job. Ich wünsche dir viel Glück.

That's weird --- I looked at some of the sample animations, and I could swear I heard a tiny voice shouting 'Oberon! Plan-9! Rio!' in my ear...

I'd like to try something like this in action; but the problem, as always, is going to be bootstrapping. Look how badly Ubuntu manages something as simple as putting the application's menu bar in a non-standard place.

...I worked once with a desktop environment for the PC, GEOS. It had a feature where your application's UI was described in logical terms and this was then mapped to a physical UI when the app loaded. It allowed pluggable look-and-feels to drastically modify the look and behaviour of the application as they saw fit.

If we had something like that, this would be easy. Shame we don't, really.

So basically xmonad plus a tagged file indexer. Although fzf + https://github.com/rupa/z works for me most of the time.

With respect to eye tracking, I had a similar idea the other day. Imagine holding a key, then moving your gaze to see an on-screen target following where you're looking at. You could use this as a really quick way to scroll or highlight/copy text without leaving your home row.

I really don't like leaving my home row.

I may be crazy.

Yeah, that's exactly what I was thinking. Fzf is great, it's like anything.el/helm but works outside of emacs. Thanks!

Tags are really tough. Unstructured categorization comes with its own basket of issues -- some distinct from structured taxonomies, but not necessarily less tricky.

A few thoughts there:

1. What happens when my mental schemas change in a year? The word I use to look something up changes, and I suddenly can no longer find it.

2. What happens when I get a little lazy in obediently tagging everything I create? Imagining the 2 or 3 words I'll want to use in the future to look something up (see the first thought) is really tough. Mentally taxing = a barrier to adoption.

3. Folders can get unnecessarily deep, stale, etc... but having the structure available to browse can be an extremely useful trigger in reestablishing the hallways of my desktop-stored "mind palace."

4. Having many ways to discover information I'm looking for > having a few ways. Search, browse, categorize, all have a purpose depending on the way a file or piece of information imprinted on my memory.

Great points. I've thought about some of these.

Currently, Firefox's bookmarks are the main place where I use tags these days. Bookmarks so quickly become quite a mess! I really like tags for the ability to have a mess, quickly add things without having to think too hard, and ultimately still be able to find stuff, of course.

I had a quick excursion using Chromium for a few weeks--does it even have tags in the bookmark db? They weren't picked up when I exported from Firefox. I went back to Firefox because Chromium's address bar doesn't quite search my bookmarks and history the way Firefox does and only displays the top five results or so. Not something I can depend on to be able to find something back.

I really want to get back to having a load of tags on my music collection. Used to have that in Amarok, but I lost the DB many years ago. So useful, for custom playlists, weird personal microgenres that make sense only to you, tagging tracks you almost certainly don't want to hear if you put some album into a larger playlist (Daft Punk - Touch :-P) etc.

Anyway. Your thoughts:

1. Yes. For that, I imagine a powerful and snappy interface to organize tagged stuff. A bit like those automated MP3 ID3v2 tagger tools, but a bit more general. Should allow for mass re-tagging operations like: add tag #newthing to all objects tagged #onething + #otherthing but not #notthisthing. With the title matching "* about things". Only for objects (created) older than 1 year (because my mental schema also changed the way I used the #otherthing tag).
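To make that concrete, here's a rough sketch of what such a mass re-tagging operation could look like over an in-memory collection. The object shape and function name are mine, purely illustrative, not any existing tool's API:

```python
from datetime import datetime, timedelta
from fnmatch import fnmatch

def mass_retag(objects, add_tag, require, exclude,
               title_glob=None, older_than_days=None):
    """Add `add_tag` to every object matching the tag/title/age criteria.

    Each object is a dict with 'title' (str), 'tags' (set) and 'created'
    (datetime). Returns the list of objects that were changed.
    """
    cutoff = None
    if older_than_days is not None:
        cutoff = datetime.now() - timedelta(days=older_than_days)
    changed = []
    for obj in objects:
        if not require <= obj["tags"]:        # must have ALL required tags
            continue
        if exclude & obj["tags"]:             # must have NONE of the excluded tags
            continue
        if title_glob and not fnmatch(obj["title"], title_glob):
            continue
        if cutoff and obj["created"] > cutoff:  # only objects older than the cutoff
            continue
        obj["tags"].add(add_tag)
        changed.append(obj)
    return changed
```

The snappy-UI part is the actual hard bit; the query logic itself is just set algebra plus a glob match.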

That's only power-users I'm afraid. I have no idea how to solve this for regular users. Although I've seen motivated/determined "regular" users structuring years of digital photos with folder systems in ways that maybe they wouldn't shy from such a tool as long as it's intuitive to use.

2. This is a big shortcoming of Firefox's tagged bookmarks. Back when I used del.icio.us (also many years ago), there was a bookmarklet I could use to add a site to my del.icio.us bookmarks. The greatest thing about it was that it would predict/suggest tags for your bookmarks, I miss this so much. It did so in two visually distinct ways: First, predictions on your own bookmarks, afaik it was just a set of tags that you commonly used in conjunction with the one or two tags you've typed so far. Maybe it also matched keywords in the title and url. If I were to implement this I'd sort them by Bayesian P(#newtag|title-url-keywords-tags-typed-so-far). The second set of tags were suggestions based on what other del.icio.us users had tagged that url (less useful and less privacy).
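That first kind of prediction can be approximated with plain co-occurrence counts: rank candidate tags by how often they appear together with the tags typed so far. A toy version (certainly not del.icio.us's actual algorithm, and far cruder than a proper Bayesian model):

```python
from collections import Counter

def suggest_tags(bookmarks, typed_so_far, top_n=3):
    """Suggest tags that most often co-occur with the tags already typed.

    `bookmarks` is a list of tag sets. This is a crude stand-in for
    P(new_tag | tags typed so far).
    """
    typed = set(typed_so_far)
    counts = Counter()
    for tags in bookmarks:
        if typed <= tags:              # bookmark contains all typed tags
            counts.update(tags - typed)
    return [tag for tag, _ in counts.most_common(top_n)]
```

Matching keywords in the title and URL, as you describe, would just mean feeding those tokens into the same counting scheme.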

3. I think you could have both? But most importantly, what helps me a lot to navigate and orient myself in this tagged mind-palace (nice analogy btw), is the ability to not just sort and slice your database by tags and combinations of tags, but to also just be able to browse everything sorted by time. I find this in the photo gallery on my phone, which is an utter mess, but if I really need to find something, I switch to by-date view (groups by month). Even if I don't exactly recall what month a photo was taken (or it could be a picture that I saved from Twitter, or maybe that someone sent me by IM), scrolling through I see pictures through time and usually I quickly get a familiar feeling "wait it was before then, and after ... yea .. when the thing .. ah! got it!".

It depends on how your memory works of course. But I imagine it would work because even though you can tag and re-tag and restructure your documents, the ordered slice of time-line of say 1.5yr ago hardly changes if at all.

4. Yes exactly. You might notice the "solutions" or ideas in the points above are all based in some way or another in this observation.

Computers knowing about focus is quite a compelling future direction. There are lots of really good possibilities. But also a few problems.

For instance, If you can track focus, you can make the monitor seem larger than it really is. Just scale everything that isn't being looked at down a bit. As the gaze shifts towards other objects, shift things slowly around and overlap the enlarged window over other background windows.

You can make focus-dependent shortcuts. Imagine vim with nouns and motions that can refer to and act on the focus point. `ytF`: yank-to-focus, where F is a motion from cursor to focus. Lots of rich possibilities there.

The only major problem I see is that you can't really share work easily (unless you open it up to multiple focus points somehow.) Also, it would be frustrating to have to look where you're typing. I'll often look at something else while typing just before switching tasks. I'm going to start paying attention to my focus more to see if there are other potential pitfalls with the computer knowing about it and changing modes in response.

In general I like the possibilities opened up by having focus. Heck even with a traditional mouse+keyboard, the extra data would help the computer understand us better. It might be more suited to a VR desktop, where the 'multiple-viewers' problem does not exist.

Tiling window manager, with easy back-and-forth? Works for me.

I have to say, I really like tagging-as-filesystem concept (where you can also meta-tag something), and the gaze/touch interaction proposed would be awesome to have in my opinion.

The current interaction with tagging (basing off OS X) is still pretty clunky. It's a separate field, the new vs previous tag selection is aggravating with a keyboard, and there's no easy way to browse or select multiple tags to filter content.

Having to move to and from the mouse a lot is a bit of a pain, and even with the best touchpads on the market, the interaction with them can still be annoying when trying to do things like click on an HN upvote arrow. Being able to start the mouse from the point you're looking at, or even forgo the mouse entirely when starting to type into the field you're looking at - those would be fantastic additions.

I'm slightly more dubious about the voice interaction, though there are times where "Hey Siri, set a timer for 10 minutes" is a great way to interact with an otherwise over-complicated device.

Hierarchical tagging is a good idea for a filesystem. The only problem with it is "where do you store that metadata?" In the individual files (requiring format-specific support)? In the filesystem (requiring applications that know how to read and write it)?

When KDE 4 was released it included Nepomuk (https://userbase.kde.org/Nepomuk) as a universal tagging-and-search mechanism to do precisely this kind of thing. Unfortunately, not very many people use it and even those that do only use it in a very limited fashion. Partly because it only works in KDE apps and also because tagging all your files after-the-fact is a very time consuming process.

Then there's this problem: What if I never had any trouble finding my documents? What will I gain by tagging everything?

Tagging makes a lot of sense for things like photos and videos where search-by-content just doesn't work. For everything else you can just index your files and search normally.

Another problem with tagging is that--if you're going to be disciplined about it--you're not really getting much benefit over just "staying organized" with directories/folders. What's the this-helps-me difference between a directory structure like, "Company X/Clients" or a hierarchical tagging structure such as "#company/#clients"? Sure, with tags you're not limited to the parent->child concept but ultimately--if you don't stay organized with a hierarchy--you'll end up with a huge mess that can only be navigated with an intelligent tag-based search tool.

> Tagging makes a lot of sense for things like photos and videos where search-by-content just doesn't work. For everything else you can just index your files and search normally.

Unfortunately, there's a lot more than just photos and videos which are unsearchable - but even with just photos and videos, there's a lot to be won by implementing this.

> What's the this-helps-me difference between a directory structure

Simply put: any time you have a logical statement (and, or, not) between two criteria, the folder hierarchy falls apart. Files typically live only as a single leaf node under a hierarchical file structure, but the contents of a file could potentially live under multiple branches.

As an example, if I have around 9,000 photos and videos, and I want to view family photos from Christmas 2015, that have my wife and niece in them... without tagging, I'm SOL.
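That query is trivial once tags are sets; a sketch of the AND/OR/NOT filter (the photo dict shape is just an assumption for illustration):

```python
def find_photos(photos, all_of=(), any_of=(), none_of=()):
    """Filter a tagged photo collection with AND / OR / NOT over tag sets."""
    all_of, any_of, none_of = set(all_of), set(any_of), set(none_of)
    return [p for p in photos
            if all_of <= p["tags"]                    # AND: all required tags
            and (not any_of or any_of & p["tags"])    # OR: at least one, if given
            and not none_of & p["tags"]]              # NOT: none of these
```

So `find_photos(photos, all_of={"family", "christmas-2015", "wife", "niece"})` expresses exactly the query above, which no single folder path can, since each file lives at only one leaf.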

A traditional desktop does everything I need. This just doesn't seem like enough of an improvement to make me jump ship. This whole "rethinking the desktop" buzz just seems like another fix looking for a problem.

> A traditional desktop does everything I need

That's probably a reason for 40+ years of success.

It's interesting how IT steps backwards in recent years. First the "cloud" wave came to convince us to leave our autonomous PCs and to turn back to centralized IT servers. The current wave (Pads, Neo etc) looks like turning back to single screen terminals. What comes next? Shall we get rid of the mouse?

I also think the simplicity of tablet/phone UI's is not applicable. There's always a trade off in design; you cannot make one UI good for everything. As it stands, the desktop UI is a bit too complicated for simple app/game/media consumption. But similarly the tablet/phone UI is way too simple for most desktop applications.

Tablet applications literally do less so they don't need as complex of a UI. You can't take that simplicity and apply to more complex applications.

> The [menu] is easily scannable and you can search for options just by typing.

This is really important. It's 2016, search is easy, and I shouldn't have to hunt around because I don't know whether you stuck your options dialog under File, Edit, or Tools. I appreciate Ubuntu for implementing this OS-wide with Unity's HUD feature.
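Mechanically, this is just a match over flattened menu paths; a toy sketch of the idea (not how Unity's HUD is actually implemented):

```python
def search_menu(menu_paths, query):
    """Case-insensitive substring match over flattened menu paths.

    `menu_paths` is a list of strings like "Tools > Options"; a real HUD
    would use fuzzy matching and frecency ranking on top of this.
    """
    q = query.lower()
    return [path for path in menu_paths if q in path.lower()]
```

The point stands: once the menu tree is indexed as a flat list, where an option "lives" stops mattering.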

The OS X Help menu does this as well.

Ditto for Windows. This is hardly new.

Where on Windows?

I have misunderstood your comment. This is certainly not on Windows, and it would be a great help.

Although, Office 2016 apps have this on top of every window. You can type and it will search every option available.

"About this Concept

Neo was designed to inspire and provoke discussions about the future of productive computing. It is not going to be a real working operating system interface, it is just a concept. I am not saying that these ideas would definitely work and that this is the future of computing. However, there is large potential in rethinking the core interfaces of desktop computing for modern needs, and somebody has to try."

Soooo sad. I want this now!

Argh! What's the point of developing a fully fleshed-out piece of work like this that doesn't and won't exist! Well, other than self-promotion and a fear of it actually totally sucking after it hits the real world of people.

Bruce Sterling has a phrase for this: "design fiction".

Design fiction tends to be some combination of portfolio piece/job application, argument about the direction of whatever is being designed (here, desktop-ish UI/UX), wish fulfillment, conversation piece, and sometime social critique.

Well, since the person who did this is looking for an internship, the point is self-promotion and, I would guess, to flesh out the idea. Someone else might take the ball a bit farther and from there it might become real.

I feel like these sorts of presentations are not very honest. If your claim is improved productivity, why not do a side by side? A cursory glance shows that all of these features would completely destroy my productivity.

Taking a look at the gestures...

1) Scroll through panels - Alt+TAB is unquestionably faster (< 1sec)

2) Open App Control - Win key? (< 1sec)

3) Open Apps menu - Keyboard Shortcuts (< 1sec)

4) Open Finder - Win + E (< 1sec)

5) Close Panel - Alt+F4 (< 1sec)

6,7,8) Resize - Win + UP/DOWN/LEFT/RIGHT (< 1sec)

Now, sure I see some people claiming that "nobody" knows these shortcuts or that they are "not intuitive" (itself a loaded term), etc.


Proposal 1 (Teach people these shortcuts)

Proposal 2 (Invent completely new gestures - teach people these new gestures, which will then slow them down because a keyboard is just crazy fast compared to touch.)

Now, I'm going off of the whole Desktop phrasing. Maybe on a mobile, all of these might make more sense..

First impression:

Conceptually, I'm really liking the approach. I could see it being a viable refreshing of desktop paradigms. There is an interesting mix of OS X, Windows, Linux WMs, and some other goodies from other apps here.

Visually, the biggest drawback is I could not tell while watching the video which panel has focus. I am assuming the idea is this would be handled by tracking eye focus. Perhaps I could get used to that, but it'd have to be instantaneous switching. As I'm typing this in a half-width browser window that takes up all vertical space, I have the Desktop Neo site in another half-width browser window taking up full vertical space by its side. I'm bouncing my eyes back and forth between the site and this textbox I'm typing in. I'm currently staring at the Neo site while I'm typing, without any looking back. Such a desktop paradigm would have to remain very intelligent about recognizing that I'm currently typing in a panel while looking at, and perhaps scrolling through another panel, without wanting my current action to lose focus or be interrupted in any way. I work this way all the time.

The gestures for fullscreen and minimize operations need 6 fingers. Using both hands to do anything on a touch screen seems very impractical (except for typing with both the thumbs on a reasonably sized screen or a split keyboard).

Yeah, the two 6 finger gestures are certainly not ideal. I imagined a much larger touchpad, between the size of the Apple Keyboard and Apple Trackpad, so that might make it a little easier.

It's also important to note that these two gestures perform features (fullscreen / minimize) that can also be done by just resizing the panel (either with 3 fingers, or by going to App Control).

Why not put touch sensitivity into the keys so we can make gestures while our fingers are on the keyboard?

For what it's worth, this feels very similar to the 10/GUI concept, which made some waves in 2009. (http://10gui.com/video/)

It is cited as a source of inspiration on the webpage.

> Folders were a great metaphor when our files were a handful of office documents. But today, complex hierarchies make it hard to organize and find what we are looking for. The concept of a file being located in a single place seems outdated. Content is stored, synced, backed up and shared among many different devices. And our most important content lives in services or apps, not in folders and files. While mobile devices are trying to kill file management completely, it’s more important today than ever. With the enormous amount and complexity of content, we need a new solution.

Okay, buddy. Tell me how you're going to navigate a complex project built with multiple apps with your cool new tag-only scheme.

How would you re-organize the myriad number of source code files, library sources, images, image source files, readmes, and other things? Hierarchical structures are good for that kind of thing. If you're going to advocate throwing them out then I think it is imperative for you to tell me how you will organize real projects rather than just a handwave about search and adding tags to tags. How would I reorganize the directory full of 193 Illustrator art files and a ton of subdirectories, many with multiple layers of their own subdirectories, that make up the graphic novel I finished last year? It's got a total of ~2.7k files in it but all of that complexity is hidden behind a bunch of subdirectories so I can quickly find what I need at any moment.

(And also: holy crap, those sample images are so much WHITE; using this will be like staring into a spotlight. And so impersonal - the user-set desktop picture is a wonderful thing that makes the computer feel like it's theirs, and I kinda feel like this proposal completely drops affordances like that in favor of a blown-up iPad UI.)
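One way to frame the hierarchy-vs-tags gap in the comment above: a directory path is an ordered, nested list of labels, while a tag set is flat, so distinct paths can collapse to the same tags and the browsing structure has to be rebuilt by querying. A minimal sketch (all file names hypothetical):

```python
# Sketch: a path encodes ordered, nested context; a flat tag set does not.
# File names below are hypothetical illustrations.

path = "graphic-novel/chapter-03/page-12/page-12-lineart.ai"

# Converting the hierarchy to tags keeps the labels...
tags = set(path.split("/")[:-1])
print(tags)  # {'graphic-novel', 'chapter-03', 'page-12'} (in some order)

# ...but "show me everything under chapter-03" must now be answered
# by scanning a query index, rather than by walking a subtree.
index = {
    "graphic-novel/chapter-03/page-12/page-12-lineart.ai":
        {"graphic-novel", "chapter-03", "page-12"},
    "graphic-novel/chapter-03/page-13/page-13-lineart.ai":
        {"graphic-novel", "chapter-03", "page-13"},
    "graphic-novel/chapter-04/page-01/page-01-lineart.ai":
        {"graphic-novel", "chapter-04", "page-01"},
}

def files_with_tag(index, tag):
    """Return all files carrying the given tag, sorted for stable output."""
    return sorted(f for f, t in index.items() if tag in t)

print(files_with_tag(index, "chapter-03"))  # the two chapter-03 files
```

With ~2.7k files, the scan still works, but the nesting that hid the complexity is gone unless the tag system also records parent/child relationships, at which point it has reinvented folders.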

Well, for your example: tag them with your project and maybe their purpose.

I would have bought this more if you were talking about code, but even then an IDE could simply make the project a single file (like OS X '.app's) to contain the whole project.

This is actually similar to how Xcode works where how you organize your project doesn't always relate to how it is on disk.

Seems like too many unnecessary features, to be honest.

Windows + mouse with hotkeys is still the most comfortable setup for most people, including me. I don't need an optimized experience.

I've tried Metro, I've tried i3, I've tried bspwm, I use Vim with split screen, and I still prefer the concept of draggable windows at the end of the day. It feels more free to be able to drag windows and take ownership of the layout than having a computer dictate what my layout should be.

At the end of the day, I feel like I'm more and more preferring simple UIs rather than these multitouch-optimized experiences.

I'd like to hear what Bret Victor would have to say about these ideas.

At 1:00 of the video, he says "In my document I want to make a sentence bold, so I just select it, and then just click..."

But he doesn't explain HOW he selected the text. Since there's no mouse cursor, I have no idea how he just did that. A glance doesn't seem to be sufficient since it requires movement and intent.


You are right. I left that out of the video to focus on the more interesting features. There is a small note on the website about how text selection could work with gaze and touch:


Text Selection

Place one finger on the touchpad to adjust a cursor at the gaze position. Use a second finger to set the end of the selection.
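That two-finger gaze-and-touch interaction could be modeled as a tiny state machine. Here's a hedged sketch (class and method names are hypothetical, not from the Neo concept) of the behavior described above:

```python
# Hypothetical sketch of the gaze + touch text-selection model described
# above: the first finger anchors a cursor at the current gaze position,
# a second finger sets the other end of the selection.

class GazeSelection:
    def __init__(self):
        self.anchor = None  # character offset where the selection starts
        self.end = None     # character offset where the selection ends

    def finger_down(self, finger_id, gaze_offset):
        if finger_id == 1:
            # First finger: place the cursor at the gaze position.
            self.anchor = gaze_offset
            self.end = gaze_offset
        elif finger_id == 2 and self.anchor is not None:
            # Second finger: extend the selection to the new gaze position.
            self.end = gaze_offset

    def selected_range(self):
        if self.anchor is None or self.end is None:
            return None
        return (min(self.anchor, self.end), max(self.anchor, self.end))

sel = GazeSelection()
sel.finger_down(1, gaze_offset=42)  # look at the start of the sentence, touch
sel.finger_down(2, gaze_offset=97)  # second finger marks the end
print(sel.selected_range())         # (42, 97)
```

Even in this toy form, the open question from the video remains: the precision of "gaze_offset" depends entirely on how accurately the eye tracker can resolve a character position.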

It looks really nice. That said, I don't see how this interface is better for productivity -- unless your work consists entirely of reading mail and surfing the web. Also, I don't get why UI designers hate "folders" and want to replace them with "tags". Personally I don't think you can beat a good old file system with a bunch of tags. Of course the file system requires some effort on the part of the user to keep things organised, but then again, so do tags.

I don't think tags are good enough to categorize data. Tags lack information about what is being tagged; they work when it's clear what's tagged. On Twitter, for instance, it's the topic you're talking about. What you really want is a property system like Wikidata's.

Think about a music program you want to write. You want to find all the artists across the music files on this computer, and when the user clicks an artist, find all the music that belongs to that artist.
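To make that distinction concrete, here's a minimal sketch (all file names and values hypothetical) contrasting flat tags with typed key/value properties for the music example:

```python
# Flat tags: you know a file is associated with "Radiohead", but not
# whether that's the artist, an album, or a playlist name.
tagged_files = {
    "ok_computer_01.flac": {"Radiohead", "rock", "1997"},
    "kid_a_03.flac": {"Radiohead", "electronic"},
    "discovery_02.flac": {"Daft Punk", "electronic"},
}

# Typed properties: the relation is explicit, so "find all artists" and
# "find all tracks by this artist" become trivial queries.
file_properties = {
    "ok_computer_01.flac": {"artist": "Radiohead", "genre": "rock"},
    "kid_a_03.flac": {"artist": "Radiohead", "genre": "electronic"},
    "discovery_02.flac": {"artist": "Daft Punk", "genre": "electronic"},
}

def all_artists(props):
    """Collect the distinct values of the 'artist' property."""
    return sorted({p["artist"] for p in props.values() if "artist" in p})

def tracks_by(props, artist):
    """Find every file whose 'artist' property matches."""
    return sorted(f for f, p in props.items() if p.get("artist") == artist)

print(all_artists(file_properties))            # ['Daft Punk', 'Radiohead']
print(tracks_by(file_properties, "Radiohead"))
```

With plain tags, the same queries would have to guess which tag in each set is the artist, which is exactly the missing information the comment points at.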

I have a big gripe with the design of the finder window, and with all similar good-looking interface mockups from many designers. It only looks good and practical at the same time when you've taken a screenshot, scaled it down 60%, and don't try to read the text.

If you try to actually use this on a normal-sized, normal-resolution screen, the text would be too small to read. To solve that, you have to increase the text size, but then each element becomes too big and nothing fits; to solve that, you have to reduce the padding of each element, but then it doesn't look as good anymore. The contrast of the text is also too faint, and if you make it clearer, the "wow" look of the concept is gone. I think this is why you hardly ever see designs like this in the wild.

What I mean is: open this picture[1] and resize it to a comfortable reading level. After doing so, I can only see slightly more than half of it vertically. Things that previously looked like pretty design elements now just seem to waste space; why would I need a thumbnail of a map and a movie cover on my start screen?

It's just like architects designing a city block: they always show fancy renders of the place from above (?!) in the best weather imaginable. But that's not how people will actually perceive it when they're there, because then you see things from street level, where it doesn't look the same at all.

[1] https://www.desktopneo.com/assets/images/finder_2234.jpg

The eye-tracking GUI concept is interesting. That needs to be tried again; it's been proposed before.[1][2] Dismissing notifications by looking at them is an interesting concept. It's not clear whether users would like it or hate it. That idea probably needs to come with an equally easy way to get a dismissed notification back; maybe you want to do something about it.

One general insight about GUIs is that easily undone mistakes aren't so bad. That's the Amazon one-click purchase insight. The innovation is not that you can buy with one click. It's that you can cancel for the next 10 minutes or so. Most shopping systems had a "we have your money now, muahahaha!" attitude on cancellation until Amazon came along.

[1] http://hci.stanford.edu/research/GUIDe/ [2] http://www.cs.tufts.edu/~jacob/papers/barfield.pdf

"We now use smartphones and tablets most of the time, since they are much easier to use."

I completely disagree with that. If you gave two people (one on a PC, the other on a tablet or phone) some task, e.g. move a file from here to there, assuming they were both equally conversant in their chosen system, the PC user would be able to do it in a fraction of the time of the tablet or phone user.

I assume he means more physically, and in that sense he is correct. You can use your smartphone or tablet basically anywhere and start using it on short notice. Something that you cannot really do with a desktop or laptop.

I think this is fantastic. There are only two problems, as I see it.

The first is the consideration of running this on current hardware. Obviously Neo is meant to be run on hardware designed for these interactions (as shown with the voice key on the keyboard and the gaze tracking camera) but I'd like more detail on how this could run on systems that do not have this hardware.

The second is in professional applications. I did not see any screenshots that deal with Photoshop or Final Cut Pro or After Effects or Solidworks or really any of the crucially important applications for the desktop. These are exactly what keep the desktop around, and what keep people from accepting Metro.

It'd be great if we could solve these two issues, or at least discuss them. I think these ideas are really great and the presentation was amazing. I'm seriously impressed, but it's a little held back from widespread adoption until we can figure a few things out.

P.S. Also I'd like to be able to split my terminal windows horizontally please :)

This is a great example of how to take elements from mobile design (hamburger button, left sliding menu) and adapt them to the desktop, which Apple has been slowly doing in past versions.

My 2¢:

- 6 fingers on the trackpad means two hands on the trackpad, which means moving one of them out of its working position. Just like you are avoiding moving the mouse around with the look-and-tap thing, you want to avoid users getting their hands off the keyboard, onto the trackpad, and back to the keyboard.

- The context menu is really nice.

It is a very impressive piece of work for someone who is looking to show off his skills while still in college. I made projects myself before graduating, but they were always amateurish and only for myself; I never actually shared them with the world. He did something and put it out there for others to see and judge.

Kudos to the guy, especially if the whole design process he claims to have gone through (research, target groups, user flows) is as thorough as it sounds.

I found the repetitive use of "outdated" to be off-putting. It implies that much of the value of a new interface (including this one) lies in its novelty. Except for a minority, the precise opposite is true: most people aren't interested in learning how to use their computer again.

Okay this is fine... as far as it goes. I think I still want windows sometimes.

I would much prefer all of the things I do to be organised by the work I'm doing. Here is an example.

If I switch to the "personal project" workspace, I get a completely clean workspace (or whatever I left it as), with email filtered to show only things related to my personal project. I want only the apps I use for this project (browser, PyCharm, Photoshop, terminal), each on different screens, all showing only the work and paths I have associated with that specific project.

Basically, build GTD into all my apps and the desktop, and allow me to filter said desktop by tags, projects, people, etc.

And I think I'll take this opportunity to turn on no procrast again...

I think this is very interesting content, and that is reflected in the volume of dialogue it has generated.

I found that Neo really resonated with my residential use, but not so much for my work.

From the Author's referenced blog post 'The Desktop is Outdated': "We interact with a lot of different content today, and a large part is outside of files". Not in my work environment where the majority of content is inside of files. But sitting at home - yeah, this is true for me.

It appears to me that the mobile interface cart is trying to drive the productivity desktop horse here. I don't know to what extent I buy it, but I like the way Neo challenges current desktop design.

I'm going to go back and read it again.

I keep hearing doom-saying like "laptops will kill desktops" and "tablets will kill laptops." What's next? Are we going to work on smartwatches?

And here I am, with my desktop computer, comfy keyboard & screens, fully hackable. I've had it for almost 12 years now, and I just replaced parts every now and then to keep it somewhat current. That will never go away. Then I got an old laptop for working away from home, but I hardly ever use it.

So when I hear touch screens will become ubiquitous and its necessary minimalistic UX will drive the consumer computer industry, I have my doubts.

Never gonna happen.

Something being useful does not mean it will necessarily eat everyone's lunch. Nor that it has to.

"Fullscreen - click and drag outwards on a panel with 6 fingers"

AKA the "goatse" gesture.

I like this direction. Now that my primary personal laptop is a MacBook Air 11" (and my wife's is a 13"), I find we both use full-screen apps more frequently than on larger monitors. But it doesn't feel like the OS has kept up, and the full-screen experience is less than ideal. Apps continue to open in windowed mode, and I have to resize or full-screen them. Swiping moves me from desktop to desktop, not app to app. Take some of the app-switching work that is now present in iOS and roll it into OS X (or Windows). I haven't put a lot of thought into this - I just know the OS "desktop" no longer feels like it works.

Have you tried Spectacle [0]? It has keyboard shortcuts (which you can redefine) for things like maximization, tiling on sides, and so forth. If you haven't tried it, consider doing so. It's really made my workflow much easier, and is almost a tiling window manager. It's easy to open a new window, drag it to the right workspace, and then size it with a keystroke.

For example, ctrl-alt-command is what I use for most shortcuts (and I don't use most of the ones that come out of the box):

C-M-S + Up Arrow: maximize fully (horizontal + vertical)

C-M-S + Left or Right: maximize vertically, constrain to the left or right side. This is neat in that it cycles between making your window 1/2, 3/4, or 1/4 of your screen wide, so you can easily have one app take up 1/4 of your screen and another take 3/4.

shift-command-D: change which display something is on.

All of these play nicely with the dock, even when it is on the left side of the screen instead of the bottom.

0: https://www.spectacleapp.com/

The very first thing I noticed in the first image, with two panels side by side, is the example with Wikipedia on the right. Wikipedia has sidebars I don't care about or need to see. This is exactly the situation where I'd have the left window overlapping the right one to cover them up, giving me more space on the left. And Chrome still allows me to scroll the window behind, without clicking or bringing focus to it. Many applications also have sidebars or other whitespace that I might normally cover, sometimes something as simple as a calculator app. I am still open to this idea, but until I see any tiling manager address this well, it's a non-starter for me.

Wikipedia's mobile site (and Android app) dispense with the sidebars.

More generally: I find myself preferring to override sites' own styling more and more, with very few exceptions. I'll reach for the "Reader Mode" button in my browser as the page loads (which, of course, often doesn't work), and wish that were the default instead.

Which is another area where Desktop Neo might care to innovate: treating documents uniformly regardless of where they originate.
