smacktoward's comments | Hacker News

> Mods are claiming they're the backbone of the whole site, which is complete nonsense. It's the internet; one person steps down, there are thousands to replace them... just like reddit and other sites have always worked.

Perhaps we're seeing people slowly become aware of this fact. In any other context, having given long hours of uncompensated labor to a for-profit entity that views them as completely disposable is not something that most people would feel great about.

I had the privilege of being personally flamed by Derek Smart on Usenet in the mid-'90s.

If achievements had been a thing back then, and had Usenet had them, that would definitely have been one.

But did he ever apologize to the Coke machine?

It's probably time (past time, actually) for Mozilla to start looking into putting XUL/XBL out to pasture for the Firefox UI and using HTML/CSS/JS instead, since the Web platform has become sufficiently capable that the arguments for having a separate stack of technologies for building UI don't really hold anymore.

Still makes me a bit sad to see it go, though; I'm old enough to remember when XUL seemed like an exciting potential platform for general-purpose app development. Which never really panned out, alas, but was fascinating at the time.

Yep, I remember actually building a toy 2D tile-mapped RPG engine with it maybe a decade ago. It definitely wasn't capable of anything fancy then, so I didn't do much more than let you move around the map, with a simple UI around it. Now HTML/CSS/JS can completely blow it out of the water with what you can do.

> Windows are gone

Ehhh, yes and no; as mobile OSes have gotten more multitasking-friendly they've moved towards the "card" metaphor, in which each app runs in a card and there's some method for shuffling between them. And a card is really just a window that's maximized 100% of the time. (Which is how most general users use windows on the desktop, so they're not losing much.)

> And a card is really just a window that's maximized 100% of the time.

Ah, but that's not a 'window' at all, in fact. That's how things worked _before_ the innovation of the 'window' UI: one 'screen' on the monitor at a time, though perhaps you could switch between them. The whole point of "windows" as a UI element is that it's not that.

> (Which is how most general users use windows on the desktop, so they're not losing much.)

It may indeed be that windows have not been a particularly successful UI pattern after all. :) Apple seems to be tentatively trying to see if it can move away from them on the desktop too, making it more like iOS, with OS X's full-screen mode.

Hopefully the reason for the lack of windows on mobile devices is simply that their screens are too small to manipulate them effectively with a finger.

> Which is how most general users use windows on the desktop, so they're not losing much.

I've noticed this too, and it's puzzling - I've seen people maximise browsers on large monitors and end up with a narrow column of text surrounded by tons of blank space, or the more difficult-to-read extremely wide lines of text. Why don't they resize their windows to a comfortable width, and also gain the advantage of being able to interact with the other things outside the window?

I very rarely maximise any windows, but usually have several of them arranged such that I can see the important bits of each one and switch between them easily. One thing I do a lot is comparing the information in different windows, so perhaps it's entirely natural that I do this. It's definitely better than the alternative of switching repeatedly between full-screen windows and memorising their contents...

> I've seen people maximise browsers on large monitors

I am guilty of this, but what's odd is that it's something I only do in a Windows environment. When I'm in GNOME or another Linux environment I don't tend to ever maximise windows, but when I'm in a Windows environment I almost always maximise them.

I don't know what it is, whether it's an aesthetic thing or some other reason; I've not yet worked out why I have such different modes of working.

I think it's a consequence of one of the points TFA brought up - WIMP systems were originally designed to approach modelessness, but users want modal interfaces. They want to go into the context of an application and then switch into a "mode" where they only need to know about the keyboard shortcuts, UI idioms, etc. of that one application.

Sometimes you can't accomplish a task like this, but on the frequent occasions when that happens I'm reminded of exactly how annoying it is to, for example, look at a screen that's split between browser and terminal and switch back and forth between the very different interaction paradigms the two require.

It depends a lot on your screen size. Below ~19" I just keep everything maximized and use hotkeys to rapidly switch between windows. I'm usually flipping between a text editor and web browser. On even a rather large laptop, 50% of the screen width is too small for web pages. Unless you're doing data entry I doubt you're memorizing the contents of one window to input it in another, and if you're doing that copy/paste exists.

I maximise windows to eliminate distraction. I'd rather finish reading the article I'm reading, for example, and then Cmd-Tab to something else, rather than obsessively glancing at a Gmail window a dozen times through the article.

The more I multitask, the more stressed I get. One thing at a time is relaxing, and more productive. Feeling frantic doesn't result in any more work, or higher-quality work, getting done, after all.

I maximise windows even on my 30-inch monitor. Recently, I've switched to full-screen. I wish I could tell OS X to launch apps full-screen by default. Windows should, for me, be the exception, not the norm. This is one aspect where mobile OSs are superior, for me.

I seem to recall an old article that compared the usage of the Unix shell with that of one's daily life.

It could have been written by someone who trained older people in how to use computers.

It did this by showing how the shell's job controls (ctrl-z, bg, fg) mapped to real-life tasks.

Say you have a long-running task: ctrl-z and bg make it continue in the background. That, I believe, he likened to putting the kettle on in the morning. When either is done there will be some sort of notification, and until then you don't really have to care about it.

Similarly, you might find something in the mail that you want to deal with later. With physical mail you put it somewhere you can see as you move around the home. With the shell you get the jobs command, and if you try to log out without ending those jobs you get a warning.

This is all going by memory.

I have a triple monitor setup, so almost everything is maximized and I do mobile like card flipping between them with alt-tab. I have a stack of browsers on one, a stack of IDEs on another, etc.

If things get out of hand I throw a bunch of windows in a separate activity.

Sometimes I turn on the KDE card flip effect to feel silly.

  > as mobile OSes have gotten more multitasking-friendly they've moved towards
  > the "card" metaphor, in which each app runs in a card and there's some
  > method for shuffling between them. And a card is really just a window
  > that's maximized 100% of the time. (Which is how most general users use
  > windows on the desktop, so they're not losing much.)
Meet the new boss, same as the old boss.

“Switcher,” written by Andy Hertzfeld, October 1984-1985.


And they're not even always maximised now either (thank God).

It was, but the problem is that the term has been picked up by the clueless masses, who are unable to realize that searching for a unicorn is by definition a doomed process, much like how "Born in the U.S.A." is now obliviously played at rallies by political candidates too clueless to realize it's a song about how ordinary people get screwed over by politicians.

This is like arguing that every brand of cereal should be labeled "Box of Cereal" because it is a box that contains cereal. It's possible for a label to be completely true and completely uninformative at the same time.


They don't have half their country driving around with swastikas painted on their pickup trucks, so they would certainly seem to be doing better than we are...

Your comment implies that half the people in the US fly that flag and that everyone who flies the flag is a racist and that the fact that so many people fly it hasn't made it lose much of its meaning.

Your comment makes a lot of assumptions and appears to me to be painting half the country (probably the South) as racists. That's a bigoted, narrow-minded attitude.

Even Kanye West has a line of clothing with the Confederate flag; do you think he's part of the KKK?

Neither does the US...

A better place to look is their video ads, which have gotten quite a bit of viral traction: https://www.youtube.com/user/DollarShaveClub

The DSC business model is really simple: take a no-name commodity product (Dorco razor blades: see http://lifehacker.com/5903771/forget-dollar-shave-clubbuy-th...), wrap it in very-social-sharing-friendly marketing, charge three times as much. Boom, you make enough money to cover the costs of the marketing and then some.

