It still sucks for the average user. The GUI is lipstick on a pig. As soon as anything goes wrong, it's back to the command line. You look up the problem, and there will be a dozen articles about it, all contradicting each other. Look up what it takes to create a desktop icon for something. (Hint: the top result in Google won't work.)
I'm currently restoring a Ubuntu system after a hard drive replacement. I've been at this for a day now. Typical little stuff:
- OpenGL frame rate is terrible. The default install turns out to be running OpenGL in the Mesa emulator. Time to install a driver. There's lots of advice on that. Get it from the NVidia site? From Ubuntu? From a driver-aggregation site? (Scary). Want the open source version or the closed source binary? Need CUDA support? Five choices of driver. Finally pick the latest open source version and it works.
- File restoration isn't working properly. I used iDrive for backup, and now it's time to restore. I've restored one or two files from iDrive before, but now I have to restore a lot of them. The interface is a bunch of Perl scripts, with a command line menu system from the 1970s. After much struggling ("Restore From" means the name of the machine that was backed up), I get the restore running. It runs, but about 8,000 files don't restore out of a few hundred thousand. The message is just "FAILED". Files with "&" in the name don't restore. Files with "__" in the name sometimes don't restore. Working on that. Looks like most of what I lost was part of old builds of a big program, but there are a few files I'd like back.
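If it helps anyone in the same spot: before retrying, you can at least enumerate the names that match the troublesome patterns in whatever tree or listing you still have. A minimal sketch; the patterns come from the failures described above, but the output file name is my own invention:

```shell
#!/bin/sh
# List files whose names contain "&" or "__" -- the patterns the
# restore choked on -- so they can be retried or copied by hand.
# Scans the current directory; retry-list.txt is an invented name.
find . -type f \( -name '*&*' -o -name '*__*' \) | sort > retry-list.txt
wc -l < retry-list.txt   # how many candidates were found
```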
There are also many occasions where some .exe has simply "stopped working", or some system process takes over the CPU, with me none the wiser. If I install a 3rd-party driver, there's some chance Windows will at some point attempt to replace it with an inferior version from its own driver database.
Then there's the whole update situation, which effectively takes control away from you, the computer owner, not to mention it could delete user data.
And yet, despite OS updates being forced upon you, application installation and updating is often still a mostly manual, annoying process: browsing to some site, doing some sanity checks to make sure the site's legit (which your 'average user' is not doing), downloading the .exe, and trying to avoid accidentally agreeing to have some toolbar installed in the process.
Add to that the telemetry, ads, nagging to use Edge before you switch, etc., and I'm pretty sure that's not even most of it, as I luckily don't have to use Windows often.
This reasoning... doesn't make sense. Say Linux freezes 1% of the time, and half that time you can go into the TTY and fix it; the other half of the time you can't do anything. Now say Windows freezes 0.001% of the time, but never provides any way to recover. Therefore you conclude Windows sucks and you prefer to be on Linux because Windows requires a hard reset whenever it freezes...?!
You're attacking a strawman: you specified the numbers yourself and now point out that they don't make sense. I'd say Windows freezes are actually more frequent for me than Linux ones, though maybe it's about even (general slowdowns, i.e. "Not Responding", are certainly more frequent on Windows).
Of course it does. But you presumed that Windows freezes much less frequently, which is not accurate in my experience and indeed I don't specify numbers precisely because the difference isn't significant.
Perhaps. I assumed it was implied.
And then just try getting the built-in OpenSSH in Windows 10 installed and playing nicely with Linux – I'm still not sure it was worth the effort over just e-mailing files back and forth. I've had at least as much trouble with Windows as with Linux lately (but I don't do anything involving graphics).
The way I see it, the main reasons for using Windows over Linux are simply that there are still lots of programs written for Windows that aren't there on Linux, and although there may be good alternatives, they won't have the same community around them or training materials or third party tools and integrations. E.g. if you use Adobe CS, you can get lots of howtos on youtube, filters and plugins, help from friends etc. – if you try to switch to GIMP/Krita/Darktable, you're on your own and when you come across a cool-looking plugin you can no longer even try it out. And that feels bad.
By the way, what is iDrive? I've used various linuces for nearly two decades and never heard of it.
It has come so far over the years (back from when the kernel / Xorg / drivers / whatever had to be compiled to get things to work). Now it mostly just works, although some effort is always required.
Over the years I have reduced my GUI usage to a minimum (it is still very useful for some types of operations) and it has made life simpler, because there is a lot less variation in the CLI than in GUIs.
It can still be a pain (I have an Nvidia card in my most recent laptop and it took some effort to get it working), but I have learned enough over the years that I was able to do it with only minimal Googling.
Once everything is set up it works and keeps working. It doesn’t slowly build up crap over the years, it doesn’t gradually work slower and slower. And if I took out my hard drive and put it in another machine it will probably just work, or at least get me to a shell where I can tweak a few things.
I can run the same OS on my desktop as I run in my servers and routers.
Being fluent in Linux has done wonders for my career in IT.
If I need some Windows apps for work I just run them in a VM with zero problems.
A big part of why things work so well of course is Debian - it is really good. As good a balance of freedom as you get in the Linux world.
So my experience is: if you are a basic user Linux is great, if you are an advanced user it is great. If you are an average user, not so great, but it’s hard to please everybody.
Sid is quite stable but things will occasionally break (once or twice a year). Never so bad that it needed a rebuild (touch wood) but enough to be scary if you don't know enough to keep a cool head about it.
Latest example is some nice person breaking udev around Christmas causing it not to mount LVM partitions during boot. Luckily Linux does the right thing and drops you to an emergency shell and it's very easy to mount things by hand and resume boot. It did make my heart beat faster for a few minutes though! ;) This was the worst breakage for several years to give some perspective.
Also, best not to upgrade the (Sid) system for a few months after a new Stable release - that's when the floodgates (from Experimental) open and issues are often seen.
Overall, it is easily worth it for me to run Sid and get the latest software versions. I always run Stable on servers though.
Installing a program (not from the Ubuntu Software Center) and getting a launcher icon into the start-menu-equivalent took way too much time and Googling.
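For the record, once you finally dig up the right answer, it comes down to dropping a hand-written freedesktop `.desktop` file into the right directory. A sketch, with every app name, path, and icon a placeholder:

```shell
#!/bin/sh
# Create a launcher entry by hand; most desktops pick up files dropped
# into this directory. All names and paths below are placeholders for
# whatever you actually installed.
app_dir="${XDG_DATA_HOME:-$HOME/.local/share}/applications"
mkdir -p "$app_dir"
cat > "$app_dir/myapp.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=My App
Comment=Launcher created by hand
Exec=/opt/myapp/myapp
Icon=/opt/myapp/icon.png
Terminal=false
Categories=Utility;
EOF
chmod +x "$app_dir/myapp.desktop"  # some desktops require this to trust it
```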
This is not true. Repos can be, and have been, hijacked. Packages can have, and have had, malicious code inserted. Hell, projects can and have accidentally accepted stealth backdoors in the guise of bug fixes or feature commits.
The repo system is not a panacea for security.
However, in addition to e.g. hacking the actual developer/distributor, on Windows you have to be wary of rogue versions on e.g. download.com that rank higher than the genuine one, and of Google ads for trojaned VLCs and the like.
It is no panacea, but it is very significantly better than the Windows world (and the Mac world outside of the app store)
Furthermore, modern snaps have limited privileges and file system views, so a backdoor is much less effective; I assume appimages and flatpaks have similar mechanisms (and if not, they should).
The situation is far from perfect, but it is about a billion times better than Windows whichever way you measure it.
I don't understand this. You can open the "App Store" and search and download programs. If, for some inexplicable reason, you don't want to use the Software Center, you can open a terminal and type "apt install [name]".
Surely this is easier than installing a program on Windows. What exactly is your issue?
- Download installer
- execute installer by double clicking on it
- Download .exe
- (Optional) Drag'n'drop .exe to some random folder
- Right click .exe and select "Create desktop shortcut"
And there are the dreadful download pages with ads that look like download buttons and are more prominent than the real one.
Also, don't forget to inspect every option of the installer to ensure you're not allowing the installation of bundled adware, toolbars, and search page hijackers. And these are sometimes bundled by the distributor.
Every mainstream Linux distro has an "app store"-like GUI application, where you click install and it installs. Click uninstall, and it uninstalls. You should try it - you'll be impressed.
I'm not saying the approach is superior in all points, I'm pointing out how much more intuitive it is for the average person.
And I was specifically talking about software that's not in a pre-installed repository. The different GUI app stores are pretty much equally easy to use.
- Open browser
- Type program name
- Navigate maze of adware, crapware, and ad-ridden websites, download the stuff you want from "softpedia" or a similar dodgy website; somehow miraculously download the actual software instead of a virus.
- Open .exe
- Navigate installer
While in Ubuntu/Linux Mint:
- Open the software center
- Type program name
- Click Install
You've presented the worst case scenario for windows and the best case scenario for linux.
A recent example of one of my installation experiences for a certain application:
- Download tar.gz file from official website
- Move to appropriate installation folder
- Extract files from archive
- Look at readme contained within for further instructions
- Give execute permissions to the install script
- Execute install script
- Copy location of app launch script and create shortcut
- Download icon from google image search and add that to the shortcut so it looks right
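The steps above, as a terminal session. I've wrapped them in a function; the app name, version, and the /opt prefix are all placeholders, not the actual software:

```shell
#!/bin/sh
# The manual install steps from the list above, as literal commands.
install_tarball() {
  prefix="$1"; tarball="$2"
  tar xzf "$tarball" -C "$prefix"   # move + extract in one step
  cd "$prefix/someapp-1.2"          # placeholder name
  cat README                        # look for further instructions
  chmod +x install.sh               # give the installer execute permission
  ./install.sh
}
# e.g.: install_tarball /opt "$HOME/Downloads/someapp-1.2.tar.gz"
```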
A lot of the steps required googling how to do them and/or using the command line. Would normal users go through this? Would they even be able to in some cases? It's been claimed that anybody who needs special software is already advanced enough to know how to do this. OK, maybe, but I also don't want to have to; it's a pain.
Admittedly what I outlined is not the normal case of installing an app. Usually it is very easy with the software center. You can't hand wave away the shitty experience for anything that's not there though. Even with the technical knowledge to get it done, who wants to? Installing the same program for Windows would have required downloading the .exe from the official site and running it.
If you're at the point where you're downloading software that isn't in the repo, and their packages don't create a shortcut, then you're at the level where you can figure out how to do it yourself.
- [Super Key]
- First three letters of the program name
Now, I have a love-hate relationship with Ubuntu. Things kind of work, but they feel a pathological need to change most of their tools every other version. The sad thing is that those changes don't provide any improvement; it's just the same but different.
I can think of different things that are easier in Linux than in any other os. But would you try to create a website in C++?
"The right tool for the right job" is only part of the story. Proficiency can go a long way.
- Unity has one of the cleanest UIs out there when you open no more than 5 applications
- he can't do any serious damage (he doesn't have admin permissions)
- that laptop has an Nvidia graphic card that runs smoothly with (unfortunately, Nvidia fuck you) proprietary drivers
Also, I have a desktop PC running Ubuntu with a 780ti and I can play Dota at max details with 120 fps (capped)
Ubuntu LTS is not a desktop-oriented option, so of course it's quite behind in desktop usability. Especially when you aren't even using the latest one, but one version behind the latest LTS. That's really old. Get some up-to-date modern distro, select KDE, and enjoy.
TL;DR: don't use server distros for the desktop use case.
This isn't simply about "latest and greatest", but about timely bug fixes as well. Thinking that LTS provides stability can backfire, if you'll be stuck with the same bug for months and in the end won't have stability either.
Proper desktop oriented LTS updates some things frequently (backports idea).
Ubuntu with a Mate GUI "just works", has all the software I need and lacks BS like the Windows 10 interface.
I don't really think there's any objective barrier to using Linux for a significant percentage of people - people who don't need Adobe Creative Suite or similar monopolies. But this is the same population who probably won't bother doing something unusual given they have no strong motivation.
For the last year and change, I've carried a particularly fast thumb drive with Linux Mint installed on it (it's an actual Mint install, rather than an installer/preview image). I'm constantly surprised by how many machines it "just works" with. Laptops from many different brands, workstations and desktops, AMD, Intel, Nvidia... Everything I plug it into seems to work fine.
With that in mind, I want to echo the sentiments in other comments. I'm still using Windows, but only because of the software.
At home I run windows only because of the software I use (Photoshop/Lightroom, games, and Equalizer APO).
At work I still need to use Windows simply because the toolchains I use are often exclusive to windows (Visual Studio and Atmel Studio in particular).
All they want to do is browse the web, and not have to fight with their computer at random points. I do pity these people who spend a lot of money on their computers for it to not even be fit for purpose.. and spend more money on Norton snake oil, which only makes it worse.
I believe this is why smartphones and tablets took off so well, they just work.
That's really the key to it all. Frankly, Windows, macOS, and just about any Linux are all good enough and have been for a while now. For most people, it all comes down to what software they want to run.
Valve for linux very much exists, but definitely not everything is ported.
There are a few other things I need it for (like FPGA/circuit tools, home theater programming, etc.) but those can be easily handled by a VM.
That said, I am thinking of switching back from Mac to Windows for a daily driver, or to Linux, because these new MBPs are so bad. I need another keyboard replacement and I do not want to be without my computer for a week. This time double keypresses instead of none. What a joke, and the TouchBar makes VI hard (and no, after 31 years of VI I am not switching to caps lock).
How do you figure? Every part of Ubuntu, from keybindings to menus to window management to user interface expectations, is directly copied from Windows.
But seriously, who in their right mind would tolerate advertisements in software that they paid good money for?
Ubuntu tried to pull some weird netbook phone hybrid that didn't work well on any type of device. 8 years we had to live with Unity.
I'd say that the UI of Windows 10 is about ten years ahead of anything in the linux world (and I doubt linux will ever be as polished). I say that as someone who is possibly going to make the switch to linux this year, but I'm in for a rough transition.
Windows 10 is really really nice these days...
Darn, pardon me. I should have said the Windows 10 interface. Yeah, Mate is copying Windows 7/Vista task bar stuff, with perhaps less of that system's unneeded flash.
Linux is usable for daily tasks, but only barely, and it gets further behind state of the art software every year. I wouldn’t wish it on anyone who isn’t fluent with critical debugging skills (bash, knowledge of logs and how to search them, the ability to manipulate system tasks, X11 and graphics driver configurations, etc. etc.).
Until the FOSS community decides to unify behind a single experience and actually invest in the experience, it’s always going to be a shitty clone of the user interfaces people actually want to use.
I use linux every day but the idea it is usable for the average person without paying someone to support them through issues is an utter pipe dream.
People always say this whenever Linux is under discussion. No one actually wants this. If you want "one-size fits all" use MacOS or Windows and see how that fits you.
People who trumpet a unified Linux experience expect the river to somehow flow backwards.
looks at user counts for each OS
reads parent comment again
Would you say the same for Chromebooks? Chromebooks are Linux. If not that proves that it's at least possible for a user-friendly experience to exist on Linux. And I would further suggest that running Ubuntu on well-supported hardware closely approximates the Chromebook experience for most users. I personally know several ordinary users who find Ubuntu as easy as Windows to use, if not easier.
I think your statement that Linux gets "further behind state of the art" as time goes on is probably the oddest claim you make here. It runs counter to everything I've heard from anyone running Linux for a long period of time. Most reasonable hardware configurations just work right out of the box. Remember that the average user never installs an operating system. Either the OS comes with the computer or someone competent is doing the installation for them; in any case the end user isn't expected to fix any problems that arise. Whenever I've installed Windows I've encountered driver problems that had to be fixed to get a working experience, but that's not something I count against Windows when talking about the end-user experience.
Recently I am unhappy about the stability, and also about the RAM requirements of KDE, since I only have 8GB of RAM. I am pretty sure my next installed Linux system will be Bodhi Linux on my laptop.
Someone mentions restoring a Ubuntu system after a hard drive replacement. I used a live CD, to run tar with --numeric-owner and other options I do not commonly use, and gzip, to make backups onto three DVDs, and then to set up the partitioning and boot loader on the new hard drive, and then to restore all of the files. And then the new system just worked fine when booted, with no problem (actually the first time I misconfigured the boot loader, but after I corrected that, it booted fine).
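For anyone curious, that workflow is roughly the following. The comment only names --numeric-owner; the mount points, archive path, and remaining flags are my guesses at typical usage:

```shell
#!/bin/sh
# Backup/restore helpers in the spirit of the description above.
# --numeric-owner stores raw uid/gid so ownership survives even if the
# restore environment maps user names differently. Paths are placeholders.
OLD="${OLD:-/mnt/old}" NEW="${NEW:-/mnt/new}" ARCHIVE="${ARCHIVE:-/backup/root.tar.gz}"
backup()  { tar --numeric-owner -cpzf "$ARCHIVE" -C "$OLD" . ; }
restore() { tar --numeric-owner -xpzf "$ARCHIVE" -C "$NEW" ; }
# After restoring, set up partitioning and the boot loader before rebooting.
```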
Some people are disheartened to see how much you still need the command line to do some very basic things. I see it rather as a good thing that this can all be done with command-line programs; you don't need the GUI for so many things the way Windows does.
But if you don't like Linux, you do not need to use it; there are other systems. But I find Linux is good.
I am more surprised how someone can do serious IT work with windows. A nightmare for me.
There was so much clicking involved that it made my left index finger start hurting. Adding a MIME type to IIS seemed like a chore.
Every once in a while I would try a new distro as live CD, just for kicks, and to see how things are different. I learned how to use my own window manager and terminal program. I found that the necessary elements needed for working Linux are actually quite small, and most everything beyond that is for convenience.
When you learn to use the terminal, your entire perspective on how to use a computer changes. No longer are you wasting time swishing your mouse around, clicking on folders and icons. Instead, you live on the keyboard, using hotkeys, personal aliases, tab completion, STDIO piping, dot files, terminal based text editors... There's a lot to master.
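A small taste of the STDIO-piping workflow, for anyone wondering what the fuss is about: counting the most frequent words in some text, built entirely from stock tools (the input here is generated inline so the pipeline is self-contained):

```shell
# Five stock tools glued together: split words onto lines, sort them,
# count duplicates, sort by count, keep the top entries.
printf 'a b a c a b\n' | tr ' ' '\n' | sort | uniq -c | sort -rn | head -3
# top line is "3 a" (with uniq's leading spaces)
```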
But don't feel put off by this, we actually have a lot of fun in the terminal. We can play music with MPD/Ncmpcpp, play Tetris, retrieve HN articles and view comments, retrieve 3-day local weather with animations, we can even browse webpages. It's actually a fun world in the terminal, and we can eliminate the need for otherwise fatty software.
If you're looking for something that Just Werks as a Windows replacement, any Ubuntu derivative will be fine. If you're looking for something that requires more work but might spur interest (i.e. hobby), you might try Arch or Gentoo. And if you're looking for rice examples, check out Reddit/unixporn.
I believe you. I grew up on a C64 and later DOS and really, I don't miss it. It's nice to have the console just in case but I don't want it to become something I have to use again. It always was that "waste of time" you're describing there and I've never seen it the other way around. I can do that swishing with one hand. One finger even or my nose if I have to. A double click does it. A GUI where I can see all the options, select them, apply and run. So...human.
Every time I use my Linux laptop I find myself googling up commands or god knows what weird solutions to problems that should be easy and are in their equivalent on Win. It's such a waste of time and I feel lost if I can't go online to do it.
There is a long way for Linux to get there and I see a solution where you can have both. It does not contradict itself. But the biggest problem out there I see is the aversion (or even hate) in relevant parts of the Linux community towards everything GUI.
I like my GUIs for a lot of things. Music playing for me almost requires album art. Instant messaging as well, I need avatars/pfps (I've always disliked IRC because of that). Checking log messages though? File management? Quick edits? Shell all the way.
The terminal definitely requires mastery, there's no argument about that, however the ability gained is just tremendous. Even the basic tools like ls, cat, grep, less, man... Are very easy to use, and are very rich with functionality.
Blech. Do you remember when games like Leisure Suit Larry went from text input to "click the right pixel"? All sorts of exploration and hilarity was lost.
You can see all the options because there aren't as many.
About games, I agree with you. But when doing work, or even non-gaming fun things on a computer, "exploration and hilarity" are damn near the last things I want to be expected of me. I want to get things done in a cognitively efficient way, even if that's not the most temporally efficient way.
That means: if an application wants me to learn new skills, they should be limited enough as to be relevant to the immediate task, discoverable, and with a clear, immediate path towards competence.
No "Well, you want to do $x in a GUI, but have you considered learning the CLI? it's much more flexible, and the next 10 things you might want to do with $x will be so easy (once you've mastered the CLI)!" I may never need to do 10 more things with a given program/task; I may be happy to do what I need inefficiently but in a familiar way; if I do need to do more in the future, I'll set aside time to learn how rather than getting interrupted on my way to a specific goal unrelated to leveling up my proficiency at something.
The attitude of many people and ecosystems in the Linux community seems to be like that. The tools themselves are fine once you learn to use them, but the people and conventions that proliferate those tools seem to often lose sight of their (desktop, not power/dev/sysadmin) users' needs for immediate specific task-accessibility, discoverability, and incremental increases in proficiency rather than "exploratory" or deep-end-first learning styles.
I say this as someone who spends just as much time obsessively, wastefully over-customizing my development workflow/shell/etc. as most programmers I know. I do that for fun, but while I find it fun, I recognize that most of my users do not, and likely never will.
Perhaps so but it also got rid of the tiresome "Guess which exact words or phrasing the developer wanted" aspect of the early Sierra games as well so that you could focus on the actual puzzles. (King's Quest, I'm looking at you.) Not entirely a bad thing.
Do you remember all those games that came with the mouse?
And they are still coming.
Yes there are problems, but for me the reasons to use linux (and firefox instead of chrome) are trust and politics.
Maybe it's just me, but I really dislike software preventing or forcing random functionality because google, apple, ms or even canonical want me to. In this respect linux is in my opinion the best option.
As for the "things should just work" argument some people use - part of it is probably still valid, but I think linux is in some respects (handling of updates, security, installation process,...) way ahead of windows for example.
Some anecdata on that:
It takes ~5min when I need to do something on a Windows or Mac machine for me to get frustrated, due to me not knowing UI details or the OS simply getting in my way (forcing updates on reboot, for example). Maybe it's just me, but I've been working in web development for a while, so I suspect I'm of at least average intelligence.
The availability of professional software is a limiting factor in some areas (design and gaming come to my mind). But for development work I can't complain. The trend for web-based software helps too.
Think what you want about linux, all software has flaws - maybe we can get rid of some of them :)
Honestly, these things do take a lot of time. People think using Windows effectively is a trivial thing they already know, apparently because it's a GUI and it looks like it should be easy. It very much is not. I think the most important thing is, regardless of what OS you choose for your tasks, it should be based on your knowledge of how to use them properly, not lack thereof. Because whatever you end up preferring (I prefer Windows), there really are tasks in which your life is easier in one OS compared to another (I have Linuxes handy for this reason), and if you aren't aware of where each one shines, you're going to miss out and make your own life harder than it needs to be.
- GUI performance is bad. I have a reasonably high end system, but switching between desktops is choppy unless I turn my resolution down to 1600x900.
- Trackpad support is bad. It works but feels clunky. You really feel the difference when you use a Macbook for the first time in a while.
- Hibernate doesn't work. I've tried and tried to get this to work but haven't succeeded. So now I either need to keep my laptop plugged in all the time or shut it down (which is annoying, because I encrypt my hard drive so I have to enter two passwords to start).
- Bluetooth doesn't work well. I have to run a command to restart my bluetooth service every time I connect, and still sometimes have to re-pair my headphones. I use a wired connection now to avoid dealing with it.
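That workaround presumably looks something like the following (the service name assumes a systemd-based distro, which the comment doesn't actually state):

```shell
#!/bin/sh
# Wrap the restart in a tiny script so the workaround is one command.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/bt-fix" <<'EOF'
#!/bin/sh
# Restart the Bluetooth stack, then re-pair/connect the headphones.
sudo systemctl restart bluetooth
EOF
chmod +x "$HOME/bin/bt-fix"
```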
I bought a Linux-native laptop (System76 Galago) thinking things would work out of the box, but this hasn't been the case. (Also, the Galago's battery is awful so I wouldn't recommend it even if everything worked well.)
The next time I'm ready to drop $1500+ on a laptop, I'll probably just get a Macbook.
This depends heavily on the GPU drivers used, which in itself is probably a problem, but is far from universal.
I haven't had any issues with Bluetooth & hibernate for years now. I am surprised you'd have hibernate problems on a System76 laptop, as I don't even have a 'Linux-friendly' laptop per se and hibernate works.
> The next time I'm ready to drop $1500+ on a laptop, I'll probably just get a Macbook.
I use MacBooks sometimes at work and I'd say that past 2015 they've not been on the right track. The keyboard and cooling especially. Software-wise, macOS is indeed more 'uniform' in terms of hardware, so it's easier to make sure everything 'works', (which is not really a fair comparison to Linux, which works well on a much wider range of hardware, including the 2015 MBP).
As for macOS itself, you don't really have much in terms of customization options, which you may prefer, but it also means that if you don't like something you're stuck. Be prepared, however, to pay $20-30 for every little utility, from a decent file manager to window snapping (that's right, macOS doesn't ship with a proper file manager and doesn't really support window snapping the way you'd expect out of the box).
I ran Arch Linux for about 7 years. After systemd I spent years justifying why I used it even though it would constantly break and require a reformat more often than Win7 did. I went to Ubuntu, and it was marginally more stable. But even there, I was bitten too many times. Libicu upgrades breaking everything, updating packages breaks the package manager or ruins systemctl settings. Really, the whole problem that pushed me back to Windows is a combination of constant systemd nightmares and bad package maintenance.
I've... Kind of been loving Win10. Sure, it's marginally more evil than Linux, (but less so than Google and Apple products these days), but at least I don't have to constantly wipe my machine or keep it out of date to keep using it. Weird how the tables have turned.
I have used various distros on and off over the years, but today I can run Windows 10 on a Surface Book 2, and the non-development portions of my experience are spectacular, for the low, low price of selling my soul: lazy file syncing with OneDrive, pen support combined with OneNote is spectacular. Unfortunately, WSL only gets me so far when I want to use more than vim. For better or worse, I don't get to spend all of my time coding, so I'd like Linux with a real, modern notebook experience that can let me get my work done without praying to the gods that my external monitors come back on when I plug in.
I don't have a touchscreen so can't comment on that, but pretty sure KDE has support for that stuff - their art/drawing/painting app Krita works with pens/tablets, and there do seem to be some touch gesture related options in the settings app.
Docking just works. Monitors automatically remember your config and switch when you dock/undock. Also it's very easy to make scripts to swap monitor layouts using xrandr (of course, you can do this with the settings GUI, but with scripts you have all the layouts you use just a few keystrokes away if you want to switch to a special layout).
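A sketch of such a script; the output names eDP-1 and HDMI-1 and the mode are assumptions, not from the comment, so check yours with `xrandr --query` first:

```shell
#!/bin/sh
# Save a "docked" layout as a one-command script: external monitor
# becomes primary, laptop panel turns off. Names/modes are placeholders.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/layout-docked" <<'EOF'
#!/bin/sh
xrandr --output HDMI-1 --primary --mode 3840x2160 --output eDP-1 --off
EOF
chmod +x "$HOME/bin/layout-docked"
```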
I use 1.5x DPI scaling with my 4k external monitor. The laptop screen is 1080p, so no dpi scaling. It works well, with the one issue that you need to log out and log back in to switch the scale factor - this seems to be the same as on Windows, but not as good as macOS. One option I just started using a few days ago is font scaling. I set the font DPI to 144 (the settings GUI lets you do this), so all text appears at a normal size. Some icons are still tiny - mainly things like the checkmark in an OK button. But it's good enough that I don't notice any egregious issues and don't mind using it. The advantage of this is that you only need to restart a program for it to use a new font DPI. So instead of logging out/in, I just set the font DPI and restart any programs if I need to.
There are a few minor graphical glitches left if you use fractional (not 2x or 3x) DPI scaling, mainly related to 1 px lines sometimes appearing between lines of text in the KDE terminal and text editor. But they'll hopefully be fixed soon, and I haven't noticed any other issues.
KDE has had a bad reputation in the past, but these days it is a very polished, fast, and feature-rich environment that just works and lets you get your work done. They have KDE Neon, which is the 'official' (I think) Ubuntu-based KDE distro, if you want to try it out.
It's a bit janky but no more janky than actually running Linux on a laptop. No worries about my HDMI projector connection working, no fighting with wifi, games on Steam work, and I get a full Linux for development.
The magic comes from proxying my development servers (like Django's) through the guest machine into Windows. I've got it set up so that if I type `localhost:8000` into any Windows browser, it hits my virtual machine's localhost.
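The comment doesn't say how the proxying is done; one common way to get that effect is an SSH local port forward from the host side into the guest (the user and IP below are invented):

```shell
#!/bin/sh
# Save the tunnel as a script: traffic to localhost:8000 on the host
# is forwarded to port 8000 inside the VM, where Django listens.
cat > tunnel.sh <<'EOF'
#!/bin/sh
ssh -N -L 8000:localhost:8000 dev@192.168.56.10
EOF
chmod +x tunnel.sh
```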
So yeah, complicated, but from what I can see it gives me the best setup I know of with the least amount of fighting with the operating system.
I think I will try this on one of my Windows computers. I can't switch from Mac to Linux because I use too much non-Linux software, but most of it is on Windows: the Adobe suite, Ableton, Native Instruments, etc.
Does anyone know a good way to virtualize Linux like the setup described above?
If you can set up Linux, you can set this up too.
Works without any issues for me.
Apple seems to be the only manufacturer that has so far escaped the race to the bottom and still produces somewhat premium hardware. I want: a HiDPI screen with lots of nits and good contrast, a trackpoint or a huge touchpad, good battery life, and somewhat light weight.
In the past Thinkpads worked really well for that, but recently I had multiple issues with Lenovo hardware. I'm currently using a X1C3 and originally considered upgrading to a X1C6 or T480s, but given there are known issues such as the throttling bug (https://github.com/erpalma/throttled) that Lenovo hasn't fixed in over half a year I'm very close to giving up on Thinkpads.
I think it would be interesting if Canonical offered an equivalent product for home users, understanding that the $/year would be higher. That would give all the people moving to Ubuntu an option to support the people making the OS.
I do see that donations are accepted (the prompt is on the _Your download will begin shortly…_ page), so at least there's that!
It's $7,500, plus $150 per seat over 50.
$150 for a single seat would be significantly less.
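For concreteness, here's what that works out to under my reading of the quoted structure ($7,500 flat, plus $150 for each seat beyond 50 — an assumption on my part, not Canonical's published formula):

```shell
# Annual cost for N seats under the assumed "$7,500 + $150/seat over 50" model.
cost() {
  seats=$1
  if [ "$seats" -gt 50 ]; then
    extra=$((seats - 50))
  else
    extra=0
  fi
  echo $((7500 + 150 * extra))
}

cost 50    # prints 7500
cost 100   # prints 15000 (7500 + 150 * 50)
```

So the per-seat marginal price is reasonable for an organization, but the flat fee makes it a non-starter for a single home user.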
It was useless, and I don't really blame Canonical: Consider how much of a competent person's time the annual price back then, 190 euros, could buy. Not much.
If you want to support them monetarily, the donation thing covers that. If you want to pay for them to provide support for you, what do you expect "support" to mean and at what cost? The trope that if you pay for Open Source support, you get to affect bug fixing priorities doesn't work at the money levels that individuals (except maybe ones who have the kind of money Shuttleworth has) are willing to pay.
But even for just advice on how to perform a task or resolve a problem, if you can't get the answer right away using a Web search engine, chances are that the solution needs a competent person to pay attention for a longer period of time than your expected annual per-seat price pays for.
I don't like LaTeX slides, and I give presentations frequently for work. The only thing that has proved consistently annoying is that LibreOffice Impress is nowhere near as good as Keynote.
That's moot. Linux gaming today is better than it has ever been, considering advances in graphics APIs (Vulkan), a growing number of native Linux games, and rapidly improving support for Windows games through Wine, dxvk, and vkd3d, including projects like Proton integrated into the Steam client.
Many former Windows gamers find it acceptable to switch. So at the very least, give Linux gaming a try and see for yourself.
Many gamers are actually very much into tweaking things as you say, from custom PC builds to game mods and other customizations. They see configurability as a benefit, not as a downside. So Linux is a natural fit.
Besides, Linux gaming is quite accessible today without any extensive manual tweaking (while you can always do that if you want to, a lot more than on Windows).
The kinds of people I think of when you say "serious gamers" are not going to be very happy with the idea that they're locked out of the best kit. These are people for whom their computer's primary purpose is probably playing games. Why would they sacrifice needlessly?
> So Linux is a natural fit.
If it were a natural fit, they'd be interested in using it. Data suggests they aren't.
Again, generalizations like "a serious gamer always does that" are not helpful. Lots of people are not satisfied with Windows for many reasons. Would they switch to Linux if they could play their games? Many would and already do, and I wouldn't call them not serious.
Performance variations are not uniform, i.e. it's not a given that Linux always reduces performance. Properly made games can actually perform better on Linux. Of course, if you're using translation layers like Wine, some performance hit is expected. That's not necessarily a bad trade-off for those who want to switch, as long as the result is perfectly playable.
> If it were a natural fit, they'd be interested in using it.
They are, increasingly.
Then why did you make a generalization in the first place and keep making it? I didn't introduce the term to the conversation you know.
Let's ask a simple question: would your conception of a "serious gamer" include literally anyone who plays PC games at all? You have to have some line you're drawing somewhere right? Well, I can't read your mind so I'm using my own line, which is drawn where someone considers gaming to be a very high priority to them, and it seems unlikely to me that that kind of person is willing to make that trade off for some nebulous concept of "privacy".
I didn't introduce it either, I simply referenced what the article used, to disagree with the point that Linux is not suitable for "serious gamer", as to say that there are gamers today who switch to Linux and who aren't any less serious than others.
> Let's ask a simple question: would your conception of a "serious gamer" include literally anyone who plays PC games at all?
I'd say someone who prefers to have good hardware to be able to play more demanding games is serious enough about it. Many Linux gamers do that as well. What's the reason to claim they aren't serious?
That's just not the group I think of when someone uses the term "serious" in this manner. Sure, they're on the spectrum of seriousness, but if I were to tell you that someone was a "serious cigar smoker", do you honestly imagine a guy who smokes machine rolled every few months?
Linux itself is an excellent operating system that makes life a lot easier for end users, but I suggest the applications-first approach since it is not for everyone. Even with this approach, and testing on live media first, you're bound to hit a few hiccups, yet it's a lot better than starting out by learning everything from scratch or by troubleshooting major issues.
(Pardon me if I seem a little skeptical, I've heard this song before...)
Do you expect Windows users to run to install Linux to replace Windows? Most probably don't care or don't know how to.
So usually people who switch to Linux at least know how to install an OS on their computer.
methinks if LibreOffice gets a bit leaner and more fully featured it might be
I ran into so many compatibility problems I could barely believe it. Couldn't edit each other's headers, no maths support worked between the two, images and other objects would get randomly shuffled in the documents (one image showing up where another used to be). To say nothing of the minor niggling display differences reminiscent of the old IE6 box model problems.
I had to start sending documents in PDF, but then they couldn't edit them. That alone was enough to move me back to Windows.
For better or worse, I don't think LibreOffice can move forward until interoperability can be presumed.
Github Link: https://github.com/ONLYOFFICE
One question though: What desktop UI API are they using? Is it using the native APIs, or using Electron under the hood?
For example the recommendation to learn about partitioning and stuff is "funny". Just install Linux Mint 19, accept the suggested partitions and be done already.
>2. Put the distro on a thumb drive or DVD so you can boot to it from there.
>3. Create a partition big enough for the Linux distro.
>4. Install the Linux distro in the partition.
>5. Configure Linux so you can use it on a daily basis.
I think there's a critical step missing from this list, which applies to the 'non-techies' reading the post.
(At first I wasn't going to post this, thinking that the article was only for technically-minded people, but the author calls out notes for non-techies at various points.)
The critical step is: Tell your support person, and your other users!
Many people have a support person. At work, this is your IT person. At home, this may often be a family member. For example, I am the support person for my father.
If you have a support person, let them know what you would like to do. Point them to the article, and to the other pages that you're looking at.
Be respectful of your support person's time. It will take at least an hour to go through all of the above steps; longer if something doesn't go exactly right. If you do this out of the blue, run into a problem, and have to lean on your support person unexpectedly, understand that you will be taking them away from something else unexpectedly.
Be prepared for your support person to say "If you do this, I won't be able to help you." If they say that, then accept it. If you need to go back to them for help, don't be surprised if they say "You're going to have to wipe everything and reinstall".
Back up your stuff! Do this before Step 3. Make sure those backups are good.
You may also have other users. If your family shares a machine, then your family members are other users.
Talk to those people. Let them know what is going to happen. Even if you are just adding Ubuntu as a new partition, you must assume there will be a time when you have to leave the computer unexpectedly, another user will come in, and be presented with a weird lock screen or login screen. Walk people through at least the Ubuntu login and lock screens.
Let your other users know that if the machine is locked, restarting into Windows may mean anything still open under Linux loses data.
Again, be respectful of everyone's time. If you're the parent, then you can certainly say "I'm doing this tomorrow at 3 PM; come to me first before you try to use the computer after that time.", but if your child then comes back with "I had planned on working on $PROJECT at 4 PM tomorrow", responding with "Well, you should've planned your time better." is BS.
If you're interested in running Linux on your desktop, then you can definitely do it! But please, recognize that (most of) you have people in your life who fulfill either the 'supporter' or 'co-user' role, and they deserve to be brought into the loop.
(Source of rant: I once had my Dad upgrade several versions of macOS in one single jump, on some random weekday. Of course the upgrade took longer than expected, and also the jump of several versions caused lots of UI things to change. That led to multiple unplanned hour-plus-long phone calls in the middle of my workday, as this was my parents' only computer.)
My support person was my 12-year-old son. ;-)
- The Linux kernel is not just for the desktop; it is also used in near-real-time systems as well as mobile. It is the 'distribution' that makes the difference to their audience.
- Multi-user environments are very real in workplaces. At least one additional employee account is needed for using a system without modifying it.
- There is no obligation to understand car design to drive a car. But what if one day you wanted to check the engine? That, at least, is something you can't do with proprietary systems.
SMP is garbage and multi-core IS trash, whether you like it or not.
>The Linux kernel is not just for the desktop; it is also used in near-real-time systems as well as mobile. It is the 'distribution' that makes the difference to their audience.
yeah i mean for servers it's ok, on mobile it's a JVM anyway so it wouldn't matter what you swapped the kernel with.
>Multi-user environments are very real in workplaces. At least one additional employee account is needed for using a system without modifying it.
yeah not on the consumer desktop though, families don't need multi-user systems, laptops don't need them either.
>There is no obligation to understand car design to drive a car. But what if one day you wanted to check the engine? That, at least, is something you can't do with proprietary systems.
maybe not, but understanding a car's design is possible and within reason; understanding 15 million lines of code, on the other hand, is not. also I don't recall when I said you could understand proprietary systems either?
Comparing Facebook, Microsoft, and Apple is apples to oranges. What did Apple do that threatens customer privacy and free speech?
You don’t need to switch to Linux to stop using Facebook and Twitter...
Apple affects many things negatively by being extremely pro lock-in and patent aggressive. So I'd say it's a good thing when someone stops using Apple's products and switches to Linux instead.
Lock-in (i.e. refusal to support interoperability), patent aggression, forbidding competing technologies in their store and so on and so forth.
Apple is happy to learn a lot about you. Hence the feds can learn a lot about you through them.
But most important to me is this wretched speech by Tim Cook: https://www.macrumors.com/2018/12/03/tim-cook-adl-keynote-sp...
What a censorious tool. Soured me on Apple for, probably, ever.
I wonder if this can apply to the majority of people. If the app is available for Windows, use the Windows version (e.g. Chrome, at least until Edge switches to the Chromium engine); otherwise use the app under WSL.
Microsoft has made it easy to run GNU tools under Windows such that you don’t need to be constrained to Linux installs for that.
I certainly don't have the time anymore to sysadmin any Windows machines.
It's an awful lot of work to maintain this, and of dubious value with the end result being running a second-rate OS.
I'm sure there are good bits of Windows - Microsoft has a lot of very smart people working for them. But I've never experienced Windows as a first rate OS - it's always had bullshit on top (and beneath) it.
> And be fair, maintaining a Linux configuration just the way you like it is a lot of work too. If it weren't, there wouldn't be hundreds of slightly-different distros.
Maybe "just the way you like it", but "functional and not sending your private information off to a dubious corporation" is pretty easy.
When there are issues with a Linux install, it's usually transparent enough to debug. Windows? Opaque all the way down. Ask for help online? They say "check your anti-virus; if that doesn't work, reinstall your OS". In what world is that sane?
Sure, and that's great if that's all you want out of a personal computer.
> When there are issues with a Linux install, it's usually transparent enough to debug.
Bull. I've been using Linux for decades and it's always the same story. You have a problem with X, you do a search for your problem with X, you find a dozen solutions that don't apply to you because you're on a different distro or they're from 2013 when half the software stack was different. Of course the error reporting is vague and the documentation is out of date, referring you to the source for details. If you dare go somewhere to ask about your problem, you most likely get told you're using the wrong distro. It really isn't all that different from the situation on Windows.
> Bull. I've been using Linux for decades and it's always the same story. You have a problem with X, you do a search for your problem with X, you find a dozen solutions that don't apply to you because you're on a different distro or they're from 2013 when half the software stack was different.
Ok, so in some cases there can be complications. But once you've found (a/the) solution, it makes a sort of sense. In contrast, in the cases I've successfully found a solution to a problem on Windows, the reasons usually remain opaque. This, surely, is a side-effect of running a proprietary OS, and one I no longer have patience for.
For example, when I was dual-booting Windows 10 on two desktops a couple of years ago, I ran into major problems just getting Windows installed (taking far longer to ultimately install than it ever took me to install any Linux distro, from Ubuntu to Arch to Void to GuixSD).
I had downloaded the official Windows 10 image from Microsoft onto a USB thumb drive and went to install. During the installation, it failed with an obscure message about missing drivers. I try on the other differently-configured desktop (different motherboard etc). Same thing. I think perhaps it's a chicken-and-egg issue with trying to install via a USB 3 port (i.e. maybe it's missing USB 3 drivers or something). Seems unlikely, but I try booting from a USB 2 port. No luck. For good measure, I even try different USB ports. But it doesn't matter; it's always the same error message during install. Seems like something that would be easy to debug, since there are in fact lots of Windows users, but I can't find anything similar described.
I finally find a post on some obscure forum which describes a similar problem. The suggested solution: wait until you get the 'drivers missing' error, and then unplug the installation usb thumb and plug it into a different usb port. (So it doesn't matter which usb port you start with, it will always 'miss' the drivers until you unplug it and plug it into another usb port.)
I thought: what a stupid solution, surely it won't work, but what the hell, let's try it for shits and giggles. Of course, that was the solution. That's completely opaque. The solution might as well have been sprinkling the machine with chicken blood while playing the Beatles' White Album.
I don't know how many hours were wasted trying to resolve that. After a year or so, I ended up wiping all of the Windows partitions. Too much headache.
It sounds like you have had better experiences with Windows. Perhaps you have the right magic touch that's needed.
One real historical criticism I have of Windows is that it took them until Win10 to make bash available. They really ought to have, at minimum, aliased a lot more bash commands to Powershell 'cmdlets'. It used to be so frustrating, having to look up all their weird, verbose commands.
> They really ought to have, at minimum, aliased a lot more bash commands to Powershell 'cmdlets'.
First of all, I am of the opinion that PowerShell is significantly superior to bash+coreutils in many ways. If you took a step back from your familiarity with the UNIX stack and looked at it objectively, you might come to the same conclusion.
Anyway, I think aliasing bash commands to cmdlets at all was a mistake, because it creates the impression that they are compatible in some way, when they often function very very differently.
> It used to be so frustrating, having to look up all their weird, verbose commands.
They're verbose because they are descriptive. When you write scripts, this is an advantage. When you're not writing scripts, PowerShell has a lot of built-in short aliases you can use (gci => Get-ChildItem), and you're free to define your own at will. This is in addition to tab completion of cmdlets, variables, and argument names.
Presumably it should be, given that PowerShell came out in 2006, while Bash appeared in 1989 and is tied to the model of the Bourne shell, which appeared in 1977 and was itself tied to the model of the Thompson shell of 1971. That means traditional UNIX terminal environments carry certain artificial limitations tied to these earlier paradigms.
(For instance, I prefer using eshell to a traditional terminal because, in addition to giving the user access to Lisp as an extra scripting/interface language, it's not only the 'command line' part of the buffer that accepts input but potentially other parts of the buffer; you can use Plan 9 style terminal features; cd to multiple directories simultaneously; etc.)
I am not sure why people can't just give Ubuntu a go on a brand new machine rather than running it in VirtualBox. It is like cars and bicycles: people will spend all their money on a car but insist on some second-hand stolen junk when it comes to buying a bicycle. In so doing they never get into cycling and spend even more money on their car. Then one day they hear about climate change and maybe decide to get the bike out of the shed.
I feel this is the same story here. Since when has there been any information privacy with these commercial operating systems? DOS 6.22 with a locked floppy drive was probably the last time. So to suddenly wake up to the 'scandal' of Facebook et al. selling your every mouse click is a bit special.
These are just observations; obviously Ubuntu Linux is the best operating system there is, and I would not change to a paid-for OS if you paid me.
Whether I’m capable or not is irrelevant. I consider the extra effort required to configure and maintain a Linux desktop to be a waste of time, because I’m perfectly satisfied with my MacBook, which doesn’t require all that extra effort (for what I consider to be an inferior user experience to boot).
Well it doesn't help that Linux desktop is backed by one of the most condescending cultures that ever stalked the internet.
Have you considered that maybe, just maybe, the way Linux works just doesn't jibe with a lot of people? Like, everything has a middleman involved. Drivers for your hardware? Have to be merged into the kernel. Applications? Should go through (all of) the distro package repo(s) unless you want it to be a pain in the ass.
We get it, you find value in the way these things work, some of us don't.
…? This is a really strange perspective, since I've found Linux to remove middlemen. No more searching a manufacturer's website for some garbage closed-source driver that has a dozen vulnerabilities and no support. No more waiting for Microsoft to roll out fixes; you can apply your own changes yourself.
And now you have to, forever, since you're no longer using a package manager to get updates to your custom buggy horror show.
> No more searching a manufacturer's website for some garbage closed-source driver
Please point me to the source for the drivers for my rtx2080. That would be super helpful!
There are cases where good hardware has no drivers for the latest Windows. For example, I had a Wacom tablet for which the vendor would not release drivers for newer Windows versions, but which continued to work out of the box on Linux. The fact that NVIDIA are jerks and don't provide at least documentation for the community to make drivers is not the fault of the Linux community, or of us, the NVIDIA customers.
The proprietary driver works well enough for my 970, but my next card will be an AMD one.
> No more waiting for Microsoft to roll out fixes; you can apply your own changes yourself.
You don't have to wait, since vendors can roll out new drivers. And what's the equivalent on Linux? Pollute your system with an unnecessary kernel dev environment and compile from source. Personally, I think having to resort to that for such a simple operation when it is almost 2020 is a gigantic failure.
Installing software shouldn't have to be a formal scientific discipline. It isn't rocket science unless you make it rocket science so you have an excuse to invent package managers.
> All supported distros will provide binary updates.
Eventually. On their own schedule. If you want (or need) the latest driver as soon as the vendor says it is ready, you'll be compiling. Otherwise you have to wait for the kernel maintainers, then the distributors.
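On Ubuntu specifically, the two routes look roughly like this (a sketch; `ubuntu-drivers` is Ubuntu's own tooling, and the installer file name below is hypothetical):

```shell
# Distro route: packaged binary drivers, updated on Ubuntu's schedule.
ubuntu-drivers devices            # list detected hardware and recommended drivers
sudo ubuntu-drivers autoinstall   # install the recommended packaged driver

# Vendor route: newer than the archive carries, but now it's yours to
# maintain, and it may need re-running after kernel updates.
sudo sh ./NVIDIA-Linux-x86_64-430.50.run
```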
If that is true, then why do developers keep telling Linux people that distribution to Linux Desktop is a problem? What issue do AppImage and Snap and FlatPak exist to try and solve?
> and these are things people definitely want - see the App Store, the Play Store, Homebrew, Chocolatey, etc.
Chocolatey has a paltry install base, Homebrew is only used by developers, and you don't really get a choice about the mobile stores.
It's a trade-off. For that, you're giving up control as a user over what can and cannot be installed on your system; you're giving up portable applications, being able to keep applications on different media, having different versions of the same application, etc.
I'm an end user too, and I don't want to make that trade off. I imagine there are many others who agree.
Try it some time. See how much of your Linux Desktop software you can actually acquire that way without a whole lot of work.
Oh, Rust community would like a chance to defend their title! ;)
But for home use, there are simply too many cool programs on Windows to give it up. The Mint partition on my home dual boot sits largely unused. Emulation does an acceptable job for some things but it's often a sub-par experience. It's also quite tiring to have to constantly futz around with services and configurations just to get programs working, only to have that configuration break because of software updates.
In the end I find a Win10 PC with some of the guts ripped out (via something like Blackbird) produces the least amount of headaches and generally stays out of my way. I hate Microsoft's Windows-as-a-Service model, but most things about it can be ignored or switched off. (And yes, one shouldn't have to, but I find it better than the alternatives.)