On most computers nowadays you cannot code (tablets and smartphones). Are computers doomed to become an expensive tool for a few "nerds"? What will be the impact on computer literacy?
...and RMS predicted this almost 20 years ago:
I think the rise of P2P, file sharing, and the openness of the Internet in the last decade significantly narrowed the developer-user gap; but it's been widening again since then, driven by corporations' desire to maintain control over their users.
I think that's only one factor, and not a majority one.
Most users don't want to have to deal with "how it works". They want a simple, easy to use tool that works reliably... And they want to call someone to "fix it" when it "breaks". That's how it works with plumbing, cars, landline phones, stereo components, televisions, and all the electronics they've ever used.
The exceptions are computers and some smartphones, which can present cryptic error messages, have weird things in their settings, and generally make a "dumb user" feel out of their element. Think about the confusion users feel when confronted with a funny noise in their car. "I'm not a mechanic, what does that noise mean?" is no different from "I'm not a computer person, what does that error mean?" What's more, the meaning of the question is not "what, mechanically/electrically, is at fault?" It is "how much time/money will it cost to get it fixed?"
It's not just a small preference, either - the height of luxury is "push button" services that "just work". Go to a high-end hotel, and your room phone has just one button. Top-end consumer products of all sorts strive to be an easy-to-use "appliance". A dumbed-down user interface without developer tools is user preference, status, customer comfort, and pride, all tied into one.
So 99% of companies end up designing their interfaces like that hotel phone: http://salestores.com/stores/images/images_747/IPN330091.jpg
IMO the most impressive thing about OSX is how well it supports both audiences: it feels like a push-button, high luxury, comfortable, easy device to my mother. But under the hood there are great logs and a solid BSD-based operating system model. It comes prepackaged with a lot of developer tools, hidden in a place where I would look right away, but my mother would never notice.
Sure, some companies use software to limit and control their customers (cough cough Sony), usually with sharp legal/lobbyist teeth to enforce that control. But 99% of companies out there just want to make their users feel comfortable, high status, and competent to use their device.
While I agree with RMS that this split is inevitable, I don't believe it's about control. It's about two distinct market segments: auto enthusiasts who want control over the torque settings in their high end car, and people who just want a car that fucking works. Chefs who want sector-by-sector control over their oven's heating profile, and people who just want to be able to cook a fucking roast without burning it.
Tablets and phones are consumption. You can't do any serious work on them - development included.
This is why laptops and computers have stuck around in spite of the proliferation of cheap, tiny, elegant consumption devices.
So no, I don't think laptops and computers will go away for non-nerds, just for people who don't produce anything.
And a lot of music creation apps exist for tablets/phones.
This production/consumption divide is too rigid.
Now don't get me wrong - while I only use the two a little, I think they're fine. It's communication, an important part of human experience. But, at least in my mind, Instagram and Snapchat fall firmly into the same group as browsing Facebook or 9gag, as opposed to e.g. making a let's play video or a comic strip.
Yes, there are ways to take photos and create music on tablets and phones. You can do some basic editing on them, even. But the "professional" tools for photography and music, with all the bells and whistles you can think of, are still dominated by laptop / desktop computer programs. (The dominant programs being Photoshop for images, and DAWs like Logic, Ableton Live, and Pro Tools for music.)
The distinction between "production" and "consumption" devices is indeed kind of too rigid, in the sense that, of course, professionals will utilize the creative tools that come on tablets and phones, even if the desktop / laptop programs are the primary tool. Tablets can also shine as an extended interface for desktop programs. (E.g., Logic Pro (and others) has apps that turn an iPad into a remote controller for the main DAW, and there are programs like Astropad that turn your iPad into a Wacom-like tablet for Photoshop, etc.)
The obstacle is interface. The fine-tuned control of a tablet or (especially) a phone is much poorer than using a mouse and keyboard with a large screen. Until that gets resolved, I doubt desktops / laptops will go anywhere.
Development isn't done on tablets because the input devices we have to make code are limited to a keyboard, and most people think text files are code, rather than a serialisation/deserialisation format for an AST.
You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.
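For what it's worth, Python's standard ast module makes that view of text concrete. A minimal round-trip sketch (ast.unparse needs Python 3.9+):

    import ast

    # The file on disk is just one serialisation of a tree: text -> AST -> text.
    source = "def greet(name):\n    return 'hello ' + name\n"

    tree = ast.parse(source)          # deserialise: text -> AST
    print(ast.dump(tree, indent=2))   # the structure a gesture-based editor would manipulate
    print(ast.unparse(tree))          # serialise: AST -> text again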
I doubt it. Perhaps we'll be making ASTs by writing (i.e. drawing symbols with styli or pens), but I don't think we'll be doing it via gestures and speech. There's a reason that we don't teach math via interpretive dance.
But text is a pretty fine form of communication, and I find myself using it very often at work (and at home I often talk this way to people not in the same room, but in the same flat). It's fast, it's convenient, it's less disruptive, and the only reasons to avoid it are some silly preconceptions that digital communication is somehow "worse" than spoken words.
Also, you never passed notes to your friends while in school? That's the pre-smartphone equivalent of IM.
Edit: And it's easier to search, remember, and read again - plus the less nice variant of that: "you never said that to me" "I did:" copy/paste.
Regarding being easier to search and read it again, it seems like there are potential technical solutions to that problem, but I would agree that we're not there yet.
Recorded speech is also searchable, so I'm not sure that's relevant.
It is relevant; recorded speech is not very searchable, especially if you are talking in a group at a conference where people may be from different countries with different dialects (which is the normal situation for our group talks). Also, it is not convenient, and sometimes not possible, to record every (conference) meeting (too much noise, etc.). With text, it's automatically recorded and perfectly searchable...
Also, some of my colleagues are not good at listening to English but are very good technically; if I type what I mean, they understand, while if I/we tell them, everything has to be translated and/or repeated many times.
I think the tech is not there yet for us to say it's not relevant.
A more specialized scenario: I was copy/pasting stuff to a colleague in the same room yesterday.
I will bet you £100 that we won't be programming by speech and gestures in even 25 years time as the disadvantages are enormous.
One other advantage of directly manipulating the AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. The same thing is possible with plain text code, but not quite as common.
I think there are ports of paredit-like features to those languages in Emacs too, and all the other semantic features of Emacs itself work with those. As long as the language's major mode properly defines what is e.g. a function, a symbol, etc. you can use semantic navigation and editing.
> One other advantage of directly manipulating the AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. The same thing is possible with plain text code, but not quite as common.
I don't think this is something that an AST gives you. The AST is just a more machine-friendly representation of what you typed in the source code. Portability between different platforms depends on what bytecode/machine code gets generated from that AST. And since the AST is generated from the source anyway, as one of the first steps in compilation, getting it to emit the right set of platform-specific instructions means you could compile the original source there too.
And the AST doesn't solve the problem of calling platform-specific functions and libraries anyway.
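To make the "first step in compilation" point concrete, here's a small standard-library Python sketch in which the same parsed tree is handed to CPython's own back end. A JVM or V8 back end would walk the identical tree and emit different instructions:

    import ast, dis

    source = "x = 2 + 3\nprint(x * 10)\n"

    tree = ast.parse(source)               # front end: source text -> AST
    code = compile(tree, "<ast>", "exec")  # back end: AST -> CPython bytecode
    dis.dis(code)                          # inspect the emitted instructions
    exec(code)                             # prints 50

    # A different back end (JVM, V8, native...) would walk the same tree and
    # emit its own instruction set; the tree itself is target-agnostic.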
Data structures are shapes. A shape is better drawn than described in text.
> Data structures are shapes. A shape is better drawn than described in text.
Draw me a linked list. Tell me how much faster it is than typing:
(list 1 2 (foobar) (make-hash-table) (list "a" "b" "c") 6)
Unless you can find a completely different way of designing UX, then a tablet won't be a suitable device for creation. None of the currently existing solutions come close to beating a physical keyboard and a mouse.
I don't normally use linked lists, but here's an array:
"list joe (subtle gesture) mary (subtle gesture) dave end
If I wanted to delete dave from the list I could grab it and slide it away or say "list delete last".
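Just to make the proposal concrete, here's a toy sketch in Python of how such a command stream might be interpreted. Everything in it is made up for illustration; "(gesture)" stands in for whatever token a recogniser would emit:

    # Toy interpreter for a spoken/gestured list-building command stream.
    def interpret(tokens):
        items, building = [], False
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok == "list":
                if tokens[i + 1 : i + 3] == ["delete", "last"]:
                    items.pop()            # "list delete last"
                    i += 3
                    continue
                building = True            # "list ... end" starts collecting
            elif tok == "end":
                building = False
            elif tok != "(gesture)" and building:
                items.append(tok)          # "(gesture)" only separates items
            i += 1
        return items

    print(interpret(["list", "joe", "(gesture)", "mary", "(gesture)", "dave", "end"]))
    # ['joe', 'mary', 'dave']
    print(interpret(["list", "joe", "(gesture)", "mary", "(gesture)", "dave", "end",
                     "list", "delete", "last"]))
    # ['joe', 'mary']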
> Tell me how much faster it is than typing
Everyone in the room I'm in now can talk at 200 words per minute and use their hands. Very few of them could type that fast.
How will you go about drawing "joe" and "mary"? Is it faster than typing? Note that you can't always select stuff from dropdowns - you often have to create new symbols and values.
> Everyone in the room I'm in now can naturally talk at 200 words per minute.
How fast can they track back and correct a mistake made three words before? Or take the last sentence and make it a subnode of the one before that? Speech is not flexible enough for the task unless you go full AI and have software that understands what you mean.
> How will you go about drawing "joe" and "mary"?
I'll just say it, it's easier. As I said at the top of the thread, gestures and speech.
> How fast can they track back and correct a mistake made three words before?
I gave an example of opening an existing structure and modifying it in the comment you're replying to.
> Or take the last sentence and make it a subnode of the one before that?
Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.
Sorry, I misunderstood what you meant by "subtle gesture" there.
Anyway, in the original comment you said:
I'll grant you that speaking + gestures may not be a bad way of entering and manipulating small data structures and performing simple operations. But until we have technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and hang for half a second at random), physical keyboards will still be much faster and much less annoying.
But I still doubt you could extend that to more complex editing and navigating tasks. Take a brief look at the things you can do in Paredit:
Consider the last three or four subsections and ask yourself how to solve them with touch, gestures, and speech. Are you going to drag some kind of symbolic representation of "tree node" to move a bunch of elements into a sublevel? How about splitting a node into two at a particular point? Joining them together? Repeating this action (or a more complex transformation) 20 times in a row (that's what a decent editor has keyboard macros for)? Searching in code for a particular substring?
Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction. There are stories on the Internet of blind programmers using Emacs who can achieve comparable speed to sighted ones. This usually involves using voice pitch and style as a modifier, and also using short sounds for more complex operations. Like "ugh" for "function" and "barph" for "public class", etc. So yeah, with enough trickery it can be done. But the question is - unless you can't use the screen and the keyboard, why do it?
> Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.
DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.
Totally agreed. Theoretically, you should just be able to gesture a list with your hands and say "joe mary dave" and the software knows from your tone that's three items and not one.
I don't know that much about Lisp and s-expressions, aside from the fact that it can edit its own AST. That's not a way of avoiding the question, it's just my own lack of experience.
> Are you going to drag some kind of symbolic representation of "tree node" to move a bunch of elements into a sublevel?
Yes - I already think of a tree of blocks/scopes when editing code with a keyboard; visualising that seems reasonable.
> Repeating this action (or a more complex transformation) 20 times in a row (that's what a decent editor has keyboard macros for).
Here's the kind of stuff I use an AST for: finding function declarations and making them function expressions. I imagine that would be (something to switch modes) "find function declarations and make them function expressions". Likewise "rename all instances of 'res' to 'result'" with either tone or placement to indicate the variable names. More complex operations on the doc would be very similar to complex operations in the doc.
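My examples above are JavaScript, but for a concrete taste of "operate on the tree, not the text", here's roughly what that rename looks like with Python's standard ast module (3.9+ for ast.unparse). A hedged sketch, not how any shipping refactoring tool necessarily does it:

    import ast

    class Rename(ast.NodeTransformer):
        # Rewrite every identifier `old` to `new` by walking the tree.
        def __init__(self, old, new):
            self.old, self.new = old, new

        def visit_Name(self, node):
            if node.id == self.old:
                node.id = self.new
            return node

    source = "res = fetch()\nprint(res, 'res')\n"
    tree = Rename("res", "result").visit(ast.parse(source))
    print(ast.unparse(tree))
    # result = fetch()
    # print(result, 'res')   <- the string 'res' survives; we never touched raw text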
> Searching in code for a particular substring?
Easy. Have a gesture or tone that makes 'search' a word for operating on the document, not in it.
> Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction.
Yep, I don't think it would still resemble normal speech and interaction either, the same way reading code aloud doesn't. It would however be easier to learn, removing the need to type efficiently as well as the (somewhat orthogonal) current unnecessary ability to create syntax errors.
> DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.
Not sure if I'm reading you correctly here: typing DOM methods on a keyboard in DevTools is obviously slower than a single drag-and-drop operation. Using your hands to do it directly would obviously be even faster than the mouse.
Stepping back a little: I guess some people assume speech and gestures won't get significantly better, I assume they will.
favouritePeople is Person list, name Joe age 32, Mary 23, Steve 64, end
Using tone to separate entries, but you could use a secondary gesture for that instead. Also some pattern matching.
Unless AI advances considerably. For years I've imagined myself talking to the small specialized AI living in my computer, giving it instructions that it would translate to code...
Writing software is about telling a blazingly fast, literal moron what to do. The ambiguity inherent in natural language is not a good way of telling such a thing what to do.
I think I have discovered the source of your disagreement.
No. Because of the Glorious PC Master Race - mods, trainers, hacks, overlays, etc. - these all need dev and root access.
Btw, game modding, cracking, save-game editing, etc. are the best gateway drugs toward a full-blown IT career.
Hell, even the lack of window management in iOS/Android is making the UX much easier to understand for the majority of users I know. My granddad, who was an excellent mechanical engineer, has been using computers for the last 20 years, and he still struggles with the click/double-click distinction.
Have you tried teaching him that? I highly doubt an old person, especially one with an engineering background, will have trouble understanding the distinction if someone bothers explaining it to them.
Or in general - it's surprising how much non-tech people can understand about technology if someone bothers to sit down with them and explain the concepts to them. Usually the reason they don't learn this stuff themselves is the typical human impulse of "if I haven't figured it out in 3 seconds flat, it's too difficult and I won't understand it".
Only if you want to keep them illiterate, which companies are more than happy to do since it means they can be more easily persuaded and dependent consumers.
Somehow nobody complains that cars or microwave ovens are too complicated. Everybody knows they have to learn how to use them - either through a training course or just by reading a manual.
Are my parents or family interested in password managers? Heck no... why should they be? The browser will remember stuff for them.
Permissions? You have to be joking... they want to read their email or draw a picture.
Computers are there to make life easy - they're convenience tools (for the mass market). If people have to understand them beyond switching them on and pressing a few buttons, they've failed.
It's not the IT world... for years, we were outcast as geeks and nerds (they were insults in the past). It's that the average person doesn't want (or need) to know about this.
How many people service their own car?
True, but there is still some learning to do. The only way you can reduce it (barring solving general AI and making a system that actually knows what you mean) is by reducing the things a device/piece of software can do. That's what the industry is doing - cutting out features, turning software into shiny toys. Because from the market's perspective, it's enough that people sign up / buy the product - it doesn't have to be actually useful.
That's why software for professionals looks complicated - because there the company actually has to make a useful tool. This state of things is sadly a big loss for humanity - if the only way to make stuff "sexy" is to make it barely useful, then the general population is in fact missing out on all the amazing things technology could allow.
(And the tech people are missing out too, because they're too small a niche. It's more profitable to target the masses instead. That's why all mobile devices are getting dumber.)
> It's not the IT world... for years, we were outcast as geeks and nerds (they were insults in the past). It's that the average person doesn't want (or need) to know about this.
Oh but it is the IT world. We've been invaded by the "normal people" and we've lost the battle. Most programmers employed nowadays are not much different from your average non-tech person, and have nowhere near the technical expertise you'd associate with the "geeks and nerds" of the past.
> How many people service their own car?
I'm not talking about servicing, but about driving. You have to spend 30+ hours in training to be allowed to drive on a public road. Nobody complains, because people understand that to use a car well, you have to learn how to do it.
If I had to read a manual to operate my microwave, toaster, coffee machine, sandwich maker, oven, games console, etc etc, I'd just get rid of them.
I say class, because most toasters work the same, most microwaves work the same, most smartphones work the same, and most 3D modelling programs work the same too. But you have to get that first little bit of knowledge about a class of tools from somewhere, even if from your own experimentation. Humans aren't born with the knowledge of how to use technology.
You sound like a guy who teaches his kid to swim by throwing him in the stormy sea.
I don't think anyone ever has.
So here's just a few ways you can code on Android:
AIDE (Java): https://play.google.com/store/apps/details?id=com.aide.ui&hl...
Terminal IDE: https://play.google.com/store/apps/details?id=com.spartacusr...
If all else fails, just deploy debian with Linux Deploy: https://play.google.com/store/apps/details?id=ru.meefik.linu...
If desktops become more expensive, it'll just mean people are more motivated to make tools like this. Android phones and tablets are basically treated as cheap commodities and there's an extremely competitive market for them; if anything, the entry price has gone down.
Now, admittedly I'm not sure how this situation is on iOS, but maybe someone could link similar tools on there?
For one thing, a Raspberry Pi is more powerful than the Sinclair ZX-81, Apple IIe, or Atari 400/800 I had access to back then, and much cheaper.
While being able to play around with Project Euler can be fun, it amounts to "I can run a Turing-machine simulator" and doesn't represent anything more than a tiny fraction of what people want to do with computers when they say they want to "code". You may as well be playing one of the numerous puzzle games that involve much of the same concepts.
To use your iPod Touch as an example, if it were more like a traditional desktop computer, you would also be able to do things like write an app to manage your music playlists.
Not surprising if it's a Windows tablet based on the PC architecture - those are far closer to the traditional desktop than iDevices and Androids. If by C# compiler you're referring to the one that comes with the .NET framework, that's been there since the first versions; pity it's not so well known with MS trying to push VS as hard as possible...
Yes, you can. See https://play.google.com/store/apps/details?id=com.aide.ui
During one weekend in which my only options were android devices, I was pleasantly surprised by the packages available in termux. With tmux, git, and ssh installed, I mounted the tablet at the right height and connected a quality keyboard via usb. I actually forgot that I was coding on a tablet!
The phone experience was far more sensitive to maintaining good posture throughout, but being strongly incentivized to keep good posture actually made the experience more pleasant in a way. However, this particular phone was around 1280x720 I believe - seeing individual pixels again, and being pixel-limited (not physical size limited) in the use of panes in tmux were the only facets I found truly unpleasant.
I'm eager to try coding with a high res VR headset.
It seems like the vast majority of software developers, consciously or not, do not wish for software development to improve beyond a certain point, as they fear it would become too accessible and therefore lower the value of their skills. The truth is that we actively make programming as difficult as possible, and everybody loses. I can understand that writing code as text made sense 50 years ago, but there is no excuse for this today.
Consumer UI is now reaching the 3rd dimension with AR and VR, while software development is stuck in the 1st dimension. A long linear piece of string. It is difficult to believe that those who have the power to create great consumer UX are completely blind to improving their own. Software development has some of the worst UX ever.
The solution to all of those issues has been known for a while, and is dead simple to understand. We need to create a new communication platform, powered by ideas from logic programming and the semantic web. Think of it as 2 huge semantic knowledge graphs, the first describing the real state of the world, the second describing the ideal state of the world. Build a UI on top of it (which should feel more like a graph-oriented Excel than RDF/Prolog) to let people, agents and IoT devices communicate "what is" and "what should be". Then, all it takes is an inference algorithm that can match providers with seekers, get them to commit to some set of world changes (through some sort of contract), and let people manage and track the commitments/tasks they're expected to get done. That's it, that replaces 80% of software needs. Thank you very much.
Knowledge Graph -> Semantic Marketplace -> Smart Contracts -> Task Management
Perhaps I should take this opportunity to make that happen.
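To be clear about the shape of the thing, here's a deliberately naive sketch in Python. Every name and triple in it is invented, and a real system would need shared vocabularies, trust, and the contract layer on top:

    # Two graphs of (subject, relation, object) triples: "what is" and
    # "what should be". A trivial matcher pairs seekers with providers.
    is_graph = {
        ("alice", "offers", "bike-repair"),
        ("bob", "offers", "tutoring"),
    }
    should_be_graph = {
        ("carol", "needs", "bike-repair"),
        ("dave", "needs", "plumbing"),
    }

    def match(is_g, should_g):
        offer_index = {}
        for who, rel, svc in is_g:
            if rel == "offers":
                offer_index.setdefault(svc, []).append(who)
        for who, rel, svc in should_g:
            if rel == "needs":
                yield who, svc, offer_index.get(svc)  # None = unmet need

    for seeker, service, providers in match(is_graph, should_be_graph):
        print(seeker, "needs", service, "->", providers or "no provider yet")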
Given that Microsoft supported Oracle's view that the structure, sequence, and organization of the Java programming interfaces were covered by copyright law, then surely they would also agree that the same holds true for the Linux kernel system call interfaces.
I don't like the APIs-are-copyrightable decision, but given that's the current state, why aren't we talking about how this is a violation of the Linux kernel copyright license -- the GPL?
One legal thing that I'm also wondering about is the "Linux" trademark. I thought the Linux Foundation kept close tabs on how you were allowed to use the trademark, and one requirement was that the Linux kernel was actually involved?
This probably explains why they never talk about Linux (at least I never saw it), but always about Ubuntu. I guess they have an agreement with Canonical.
The whole "they're a big company...don't you think they've thought of this!" argument (and its many "do you really think they'll lose?" variations) is always a fallacy. That doesn't make the argument about the copyright of ABIs valid, but at the same time the notion that Microsoft is big therefore they must be right is absurd.
But the rest of those have nothing to do with their legal team. They wouldn't implement a copy of another OS into this OS without making sure it was legal to do so.
Ozweiller is quite right. Big companies copy other people's stuff, breach trademarks (Metro?) and generally mess up all the time.
I doubt the ABI emulation is actually a problem, but calling it "Windows Subsystem for Linux" might well be a trademark violation, as it doesn't involve Linux itself. Imagine if Wine called itself "Linux Subsystem for Windows". I think Microsoft would be deploying their legal team right quick.
It might not have been simple to do, but still - hard not to see the outcome.
The very fact that they had to pull the plug seems to suggest that it was not desired, and as such, it should have been safeguarded against.
An example safeguard: limit what it can say. If it has racist/etc. stuff in it, literally don't send it to Twitter. The bot still learns, the algos don't change, and Microsoft still gets to see how the given AI behaves in full public. And above all else, the bot isn't a Microsoft-branded "Heil Hitler" AI.
It sounds like you believe what happened is perfectly within reason - if that's the case, why do you believe they pulled the plug?
All in all, this is a lesson that some high-profile person/group eventually had to learn on our behalf. Now, when an unknowing manager asks why your chat bot needs to avoid certain offensive phrases because "our clientele aren't a bunch of racists", you can just point him to this story. The actual racists are tame compared to what trolls will do to your software.
https://github.com/shutterstock/List-of-Dirty-Naughty-Obscen...
However, the legal department of every company on the planet makes a risk/benefit analysis, especially in fuzzy areas like copyright law (which we've seen with the Java case... an API isn't copyrightable, then it is, then it isn't, then it is). The assumption that because Microsoft did it, it must be without risk is folly.
Microsoft's lawyers likely decided that the move is "worth the risk". But they wouldn't be able to be 100% sure that it's either legal or illegal anyway. You can only be 100% sure after someone challenges you in Court, and then judges decide a certain way.
but legally speaking, they seem to have adopted that culture.
Not sure your understanding is correct, but in any case, is that not precisely what Wine does on Linux when running Windows apps? Are you worried about Windows copyright violations with Wine? From the Wine webpage: "Wine translates Windows API calls into POSIX calls on-the-fly..."
Linux, on the other hand, provides exactly that, and this wrapper makes it so that you can actually run "movl $1, %eax; movl $0, %ebx; int $0x80" and it will actually call the equivalent of exit(0).
Would be very interesting to see a Wine based on this concept instead...
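You can poke at the "numbered entry point" idea from userland on a real Linux x86-64 box with Python's ctypes. Note the numbers are architecture-specific: getpid is 39 in the x86-64 table, while the 32-bit int $0x80 table mentioned above numbers exit as 1:

    import ctypes, os

    # libc's syscall(2) wrapper invokes a system call by raw number.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    SYS_getpid = 39  # x86-64 syscall table only

    pid = libc.syscall(SYS_getpid)
    assert pid == os.getpid()  # the same entry point os.getpid() ultimately hits
    print(pid)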
* you're linking to a URL with leetspeak in it, and not MSDN.
* the codes change so frequently.
* some of them disappear or are renumbered, even in service packs.
If we wanted to do this for Wine, then it would also require emulation, or something like OS X's Hypervisor.framework to catch system calls, which seems heavy-handed when we have working code already.
It could also be done as a kernel-level loadable module, in the sort of style this Linux subsystem is being done, which is more what I was thinking. Changing a few numbers for every Windows service pack (really, not even necessary if you only support certain versions of ntdll, for example) might not be so bad compared to re-implementing bug-level compatibility with every Windows API.
Calling Wine "working" in its current state is a bit of an overstatement.
What's interesting to me right now is whether or not Microsoft is saying it's OK in one context (Linux interfaces on Windows) and not the other (Java interfaces on Android), and why they are different.
A system call is a function entry point. The code executed when those functions are called is GPL licensed, and Microsoft wrote that from scratch.
The whole point of open source licenses is to make it explicitly clear what you're allowed to do with the (yes, still copyrighted) material. They too provide strict terms and require you to honor the license.
It's not like Linux has a published, license-free spec. Unless they reverse-engineered the system calls (possible, but omg, I think that would have been hard), I'd be willing to bet that this could easily be considered a derived work of the GPL'd code.
This is not going to happen. And this is not entirely bad news, because it might give us leverage (by estoppel) if MS ever wants to litigate against free software on the theory that reimplementing an API propagates copyright.
The GPL is not that expansive; it only extends to programs built upon GPL-licensed stuff, not to programs that just happen to have a GPL application running on them.
A compilation of a covered work with other separate and independent works,
which are not by their nature extensions of the covered work, and which
are not combined with it such as to form a larger program, in or on a
volume of a storage or distribution medium, is called an “aggregate”
if the compilation and its resulting copyright are not used to limit
the access or legal rights of the compilation's users beyond what the
individual works permit. Inclusion of a covered work in an aggregate
does not cause this License to apply to the other parts of the aggregate.
(1) core NT kernel
(2) Linux subsystem
The option to run unmodified executables is nice if you have closed-source Linux binaries, but those are rare, and this is directed toward developers, not deployment (where it might be a useful feature) anyway.
When I heard "Linux subsystem", I was hoping for a fuller integration: mapping Linux users to Windows users, Linux processes to Windows processes, etc. I want to run "top" in a cmd.exe window and see Windows and Linux processes. Or, for a more useful example, I want to use bash scripts to automate Windows tools, e.g. hairy VC++ builds. And I thought it would be possible to throw a dlopen into a Linux program and load Windows DLLs. Since I don't need to run unmodified Linux binaries, I don't see what this brings me over Cygwin.
I am hoping though that this might be a bit more stable (due to ubuntu packages) and faster than Cygwin, and that it might push improvements of the native Windows "console" window.
... would you, too, agree with a call for its resurrection?
Most Linux programmers I know aren't Windows devs as much as the MS shill team would like everyone on social media to believe.
You don't have to be part of the FOSS crowd to support FOSS. I'd wager the majority of the programmers you know would be ecstatic for Windows or OS X to go open source, and if they use OS X/iOS they probably do care about their privacy.
I don't know a single developer who uses a Windows phone or a Windows workstation purely out of choice; most devs I know who are ingrained in Windows are using it because they have to.
Stack overflow statistics show that programmers disproportionately choose OS X and Linux over Windows when compared to typical desktop usage (Linux use skyrockets among programmers compared to desktop).
These "Linux programmers who want Windows" only exist on the internet as far as I can tell. No one actually wants to use Windows.
The few coders that I know who are part of the FOSS crowd have their ThinkPads or Dell XPS Developer Editions with Gentoo, Ubuntu or Arch.
This isn't for them, it's for the ones already on Windows or OS X.
OS X and Linux with X11, or even Wayland, are easily 20 years behind Windows in terms of UI responsiveness and snappiness. The constant input lag of Unix desktops always drives me back to Windows for my workstations, although I am a wholehearted Windows Server hater and despise everything that comes with it. Windows is stable, extremely fast, and when tweaked/hacked right, also privacy-aware.
I can't really describe it, but it bothers me, and I would put my hand in a fire betting that it's somehow measurable. It feels like the input lag of a cheap IPS monitor. Every click, every slide, every window resize has this minimum lag of maybe 10 to 50 ms, worse on Linux. It's hardware-independent, because it was already bothering me on a MacBook Pro and my old iMac. I haven't booted OS X in years to do production work because of that, tbh (every piece of Apple hardware I own is booted 99% of the time into Arch or W10E), but I did it just now and compared them side by side. It's still there. It's driving me crazy. I know I am hypersensitive to lag and stuttering because of my former Quake career.
On my beast Win10 machine, everything is also instant; it's just that randomly throughout the day it will lock up for 10 seconds while tabbing to an app or something... which is just so weird and annoying. The CPU shows it is pegging a thread for the opening app, doing who knows what.
My mac is a 2013 MBP with a intel 5200hd.
Mapping the users is possible and "SFU" did this, with a couple of caveats (Windows requires group and user names to be different, while UNIX systems often have groups with the same name as users).
I don't think this is a Linux or GNOME killer, but it might put a dent in Cygwin and git-bash.
I think Microsoft can do something similar.
Performs the default action as if it were a Linux process. Mostly terminate or ignore.
What about more basic things, like moving files around, etc.?
I'd be happy if I never had to write a (Windows) batch script again...
No. Execution mode incompatible - see e.g. https://github.com/wishstudio/flinux/wiki/Difference-between... for details.
What would really interest me: how was fork() implemented by MS here? The same method as http://stackoverflow.com/questions/985281/what-is-the-closes... or have different interfaces been created?
Can you check what happens after you wake from sleep/hibernation? Are those apps still fully functioning?
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 256 bits)
OpenGL version string: 2.1 Mesa 10.1.3
OpenGL shading language version string: 1.30
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server "localhost:0.0"
after 732 requests (732 known processed) with 0 events remaining.
What's illustrative of the dominance of *NIXes in development is the number of projects on GitHub that contain only *NIX installation instructions and no Windows instructions (again, anecdata).
So if Windows wants to remain competitive, it needs to retain developers. And as the *nix way of developing seems to be dominant now in quite a number of fields, Microsoft needs to adapt.
Why, you're asking, do I think that the *NIX way of development is dominant today? In a nutshell: Web -> Unix servers -> POSIX shells -> languages that work best with POSIX -> OSes that are POSIX-compliant.
That being said, I'm typing this comment on my workstation running Linux, and I for one am getting very tired of the "year of the Linux desktop" joke.
What OS you run is an individual choice, stop trying to declare a single winner.
Realistically, Linux did hit it big, but as a phone OS. It's now one of the most installed kernels in the world, but its brand is hidden. Linux is also incredibly important in the server space, and everyone knows this.
Linux will never have its year on the desktop in my opinion, but it will still be all over the place in the server/phone space. It just won out in other areas than the desktop.
They were not actually, this is a myth. A few tech "journalists" wrote such articles which people started making fun of. But no, regular Linux users never claimed that, or at least not in any significant number that I know of.
You != masses.
There might not be a clear single winner, but there is a clear single loser. Statistically speaking.
Myself and a lot of other people are using GNU/Linux and other libre operating systems with great pleasure and, finally, growing hardware support. I could not care less if 90% of desktops are Windows systems or if an additional 9% are OS X machines or whatever.
tl;dr: Just use what works for you. If it supports your ethical values, it's even better!
They lost me with all the rewriteritis and monodaemonisation that followed. I switched to MacOS (hackintosh) and was very happy for a while, since it could run all the Unix stuff, most of the productivity stuff (MS Office), and many games. It was for a long time the most plain, conservative OS (while Windows was going crazy with 8).
But recently, I've found Windows to be the OS that "just works" and gets out of my way - which was pretty surprising to me.
If anybody killed alternative desktops, it is not MS, but the desktops themselves.
I've had the opposite experience. Windows does not "just work" and it certainly does not "stay out of the way".
I have USB headphones I can't use in Windows because they connect but Windows doesn't let me switch to them. When I plug in an external monitor my OS comes to a crawl and it doesn't speed back up until I restart the whole thing. When I unplug a monitor it loses my windows.
And did you hear the story about the guy who lost his job because Windows decided to update the .NET framework right before he was scheduled to do a presentation at a business meeting? Doesn't sound like Windows stays out of the way to me.
I wish Windows "just worked" but it doesn't. It breaks all the time unless you're a power user. Giving my parents Linux was the best thing I ever did for them because it turned their laptops from a source of constant frustration to an always-on communication machine. We went from hundreds of ads and dozens of toolbars on windows to a Linux machine that just works.
Now I'm just trying to get my dad to switch to Linux for work so he doesn't have to reinstall his printer drivers every time he wants to print something. All he uses for work is Chrome anyway.
For most people, this would be a problem with the USB headphones, not with Windows. On the other hand, if the USB headphones work well in Windows but not in Ubuntu, then it's a problem with Ubuntu, not the USB headphones.
This is why it's impossible to have a rational debate about the state of the "Linux desktop".
All hail Winux though. (That's the name for this mix I came up with.)
Before you downvote this without thinking ... consider, for example, KDE is severely understaffed and this will deplete them further. Who will bother with X.org bugs and drivers now? What's the point? Who is your target audience? You need to drink a real big dose of Stallman kool-aid to continue with Linux if this thing on Windows works as promised.
I have been using Linux solely on my laptop since 2004. I am sick of the constant driver problems. Yes, yes, you can connect to your home router or the router in the cafe. Now go and try and connect to an enterprise network. Perhaps with VPN.
But it has been improving all the time with every single release.
The problem is that you (and millions of people who were looking for Linux desktop to "win") are just not excited anymore since new form factors (phones, tablets) arrived.
But I actually think the Linux desktop is a winner. There are several high-quality desktop environments suitable for all kinds of use cases.
Yeah, we are not dominating the world. That was a short, naive dream in the early 2000s. But we have awesome desktops, and that's what matters.
Disclaimer: minor KDE contributor, but these are my thoughts, not KDE's.
Btw, give KDE a try, it's so good these days :)
It becomes even more approachable via "Winux". Let people learn the basics of the CLI and get comfy with more open source tools - then reinstalling your computer with a Linux distro (and putting your Win-only apps in a VM or on Wine) is a small move.
Same for some IT professionals who use Windows (either because their job demands it, or out of preference). They might install Winux at some point to get some aspect of their work done faster. Again, a lower barrier to getting your CLI skills up and getting comfy with common open source tools.
I believe there is a lot of value in "CLI skills and common open source tools" that Windows users are currently missing out on.
I've deemed it "Frankenstein OS", because they've sewn a whole bunch of parts together to make an unwieldy monster that doesn't quite work as well as the individual pieces did on their own.
The main advantages over a VM are no resource partitioning (on a 4GB RAM tablet with 64GB eMMC, you can't allocate more than 2GB of RAM to a VM without trouble, and putting 20GB of disk aside for it is also a pain) and much improved power efficiency (even an idle VM drastically reduces battery life, while Ubuntu for Windows doesn't).
Compared to Cygwin: a lot more packages are available, a lot more just works out of the box, and you can painlessly use online tutorials for Linux, which often assume Ubuntu and don't consider Cygwin a target platform.
compaudit:105: wait failed: invalid argument
compdef:95: wait failed: invalid argument
zsh: you have running jobs.
Will have to try again after the next build.
I haven't tried zsh, but I'm pretty sure you can install it. I installed a bunch of applications, including from third-party repos and PPAs. I don't see why zsh would not work.
Perhaps zsh uses some unsupported escape sequences (for instance, screen doesn't seem to work), but you can readily work around that by using another terminal in Windows (mintty), or by launching a VNC server from Linux and connecting a VNC client to your localhost, I assume.
It's readily apparent from the error messages earlier in this sub-thread that the zsh problem isn't to do with escape sequences. And of course the screen problem (at least the one known so far) is not escape sequences, either.
Don't get me wrong, I've been fine using a "remote" VM locally for Linux, and a lot of my work the past few years has been done that way (CIFS in the VM, to run a GUI editor on the desktop), but being able to run closer to native is a good thing IMHO... hopefully it stays well supported.
The native Windows versions of git and CMake can be awfully slow.
I got tired of running Linux directly on my desktop. Compiz crashes (worked in 14.04, not in 14.10, started working again in 15.04, not in 15.10...), it can't keep my selected sound output, terrible font rendering, awful HiDPI support, graphics drivers are a mess, and on and on...
Since I basically just need the non-graphical parts of Linux (Bash + Tmux + Vim) I'm very happy with this setup.
PS: Forgot to say that I have a very beefy machine (Skylake Core i7 4Ghz, 32GB DDR4, fastest consumer NVMe SSD from Samsung) but I've found that this setup works well on basically any machine.
For this use case, "Ubuntu on Windows" seems to be a nice improvement.
It's a very convenient arrangement for developing web apps/servers. I have the servers running on FBSD which can be accessed from a browser on the Windows host. This perfectly replicates connecting to a remote server, so when the app is working properly and committed to the repo on the remote host, it's almost guaranteed to work as intended!
When I acquired the SP2 I originally thought I'd dual boot. Turns out that's difficult to accomplish, but in a way using Hyper-V is better because I have Windows and FBSD running at the same time. Of course the VM imposes limits so not optimum for every purpose but good enough for my uses.
I should mention that Cygwin is installed as well. Runs Bash nicely in a terminal, and a good way to ssh to remote servers as well as FBSD in Hyper-V.
msysGit is also handy when using a Windows host.
Rsync is straightforward and has fewer permissions issues to work out when sharing with Windows, but is uni-directional. NFS is a true share rather than a sync, but restricts what you can do to the directories on the guest in terms of permissions. There are other options too, like bindfs and unison, but I haven't personally explored those in depth.
I sound like a Vagrant fanboy or shareholder but I'm just a very happy dev since I started using this setup.
edit: Specifically, I want to understand to what extent - if any - it will make some of the horror problems you have working with certain Python libraries (compiling Numpy on Windows is like pulling teeth) a thing of the past. I'd be more than happy to work in WinBash for Python if it means having the easy Linux install processes available for some of the more scientific packages.
Python on Windows is painful mostly because of the number of binary packages that have to be compiled, since distributing prebuilt binaries has only recently come into vogue with Python. You can save a ton of trouble using something like Anaconda, or honestly just run a Linux VM. If you're compiling numpy yourself, you're doing something wrong IMHO - use a prebuilt version that's optimized for your processor (ideally built with Intel's commercial compiler with full SSE, etc. optimizations).
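If you do go the prebuilt route (Anaconda or otherwise), a quick sanity check is to ask NumPy which BLAS/LAPACK it was linked against; a reference/unoptimised BLAS here explains slow linear algebra:

    import numpy

    # Prints the BLAS/LAPACK libraries this NumPy build was compiled against.
    numpy.show_config()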
You can't just use /home/chx/todo.txt as a path from any Windows application, but you can find that file through some other path.
The opposite is also true: the Linux subsystem files are mounted under a regular directory in Windows, so you can see all the files, but from the normal Windows subsystem you can't execute the Linux binaries.
It means there is a big wall between the two systems, and you can't really automate Windows things with bash instead of PowerShell, even if you wanted to. At this point, though, I find that to be a benefit - it would be fantastically confusing if you typed "find" or "python" and had to wonder whether a Linux program or a Windows program would actually execute.
Moreover, getting stuff like OpenCV to work is a pain, and I find that the deep learning packages (e.g. Theano) are even worse.