OK, well I wrote my doctoral thesis* on digital literacies, so this is a hot topic for me as a parent of an almost 15 year-old boy and almost 11 year-old girl.
TL;DR: we're way stricter than other parents I know both in the UK and other western countries.
For example: we allow them WhatsApp, but no other social media apps on their phone (including YouTube). Devices automatically switch on at 08:00 and off at 20:00. Certain apps, like browsers, have maximum time limits of two hours.
Like me, they're both gamers, but aren't allowed games on their phone - only on tablets and consoles.
This sounds harsh when I write it down in black and white, but as a consequence they both read a lot and are really into sport (both represent the county at football and athletics).
The reason this stuff is so hard is that we're the first generation of parents having to deal with all this. And there are no accepted rules in wider society yet...
Why no social media in particular? Is it a blanket ban because there isn't time to talk to them about it or is it simply, in your opinion, too toxic for teenagers, or something else?
Why no games on their phones? Is this a way to control the time spent on games or to prevent the time wasted on games when out and about?
As a knowledge worker, I've found that there is no 'best' way of organising information. It depends on what it is that you're doing.
Having said that, the things I come back to are Trello (Kanban-style boards), Pinboard.in, a personal wiki, and Google calendar. I like everything web-based so that I can access things wherever I am and whatever device I choose to use.
Over and above that, I use a paper-based daily planner that I've iterated over time. I pull everything to do that day on to it, then bin it at the end of the day.
I've taken cold showers every weekday for the last few years. The way I get around the problems raised (washing hair, etc.) is that I have a really hot shower just before bed. This helps me sleep by lowering my core body temperature.
Cold showers are great and, as someone who's suffered from depression in the past, I go into each day thinking, "what's the worst that could happen"?*
I've been doing this (both hot, though) most of my life. My skin is fine, but my hair used to take a beating -- now I only wash it every other day, rinsing in between, and it seems to work well.
I think the Chromebook Pixel 2015 suffers from the same HiDPI problem with Linux distros that I've got with the 2013 version. I've got GalliumOS (https://galliumos.org) working, but the HiDPI issues make it somewhat unusable.
Linux Mint supports HDPI out-of-the-box, so that's what I'm going to try next...
I'm running Arch + Cinnamon on a Pixel 2 and HiDPI is not a problem at all. The last problem application for me was the Arduino IDE, but that has recently been fixed.
I'm running vanilla Ubuntu and not a single issue with HiDPI screen - it's razor sharp and after setting the right scale settings every component is the right size. HiDPI is not the future, it's present, and if some applications still haven't caught up with that - let's not blame the hardware.
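For anyone hunting for "the right scale settings" mentioned above, this is a minimal sketch of the two common approaches on a 2x screen. The GNOME key and the 192 DPI value are typical defaults, not something from this thread - adjust for your own desktop and panel.

```shell
# Integer scaling on GNOME (vanilla Ubuntu's default desktop);
# 2 means "draw everything at 2x" for a HiDPI panel:
gsettings set org.gnome.desktop.interface scaling-factor 2

# On plain X11, tell the server the real DPI so toolkits size things sanely.
# 192 = 2x the conventional 96 DPI; pick the value that matches your screen:
xrandr --dpi 192
```

Fractional scaling (e.g. 1.5x) is a different, messier story and varies by desktop environment.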
As a new literacies researcher, this fascinates me - especially the 'forced informality' through artificial constraints.
"so accustomed is the public to telegraphic brevity, that their use often produces amusement rather than the expression of formality which the sender desired."
In this case the brevity was due to cost, but in this age of abundance we choose such constraints (and complain when they're removed - c.f. Twitter).
I first met Stallman at the Indie Tech Summit (where I also met the CloudFleet guys). He's a little eccentric, but he's one of the reasons I'm now on a Linux machine instead of a Mac!
The battle for desktops and laptops is still not even close to won. Proprietary microcode in everything is on the rise, motherboard firmware gets less open with every generation of x86 hardware, and now we are seeing SSD microcontrollers being exploited and otherwise untrustworthy.
Gains have been made: GPUs from AMD and Intel are much more open now than they were 10 years ago, and AMD has done a good job supporting its chipsets in coreboot for the last several generations of hardware. But it is still practically impossible to buy off-the-shelf parts and build a computer you actually control unless you accept AMD's microcode blobs for GPUs. And it's not something to ham-fist with a "don't worry about it": there have been multiple instances where bugs in the microcode the GPU boots with broke the graphics stack on Linux. Freedom is valuable everywhere, even if you don't practically see yourself taking advantage of it.
I know. I'm not saying we should ignore desktops. But things are on the right path there, I think. I can order a brand new (and inexpensive) Rockchip Chromebook tomorrow and get it going on a fully open stack - not even CPU microcode blobs, though I don't know about its SSD microcontroller. I think this is remarkable.
Now, if we talk about mobile, there are only a few niche projects like Neo900 which might release extremely expensive, outdated and half free hardware. On the software side of things, Mer is pretty much dead, and Replicant offers very limited functionality only on cherrypicked old hardware.
Bunnie Huang (of MIT fame, original cracker of the Xbox, proponent of open hardware [see: Chumby, Novena, etc.]) gave a really interesting talk at the Media Lab in '15 at his alma mater[1]. It's about 1:15 but fully worth the watch, including the Q&A, if you have any interest in 'open hardware'. That Rockchip runs on a fairly complicated 4-core ARM. While the instruction set may be open, so is Intel's - they'll happily ship you their three-volume, multi-thousand-page ISA. ('Open' is when you get to see the final Altium/Cadence files that were used for the tape-out to make production wafers.) At best you might get a few block diagrams; good luck getting anything more than that. The closest you'll get to something open might be collaborative open standards a la OpenPOWER (still not that open) or RISC-V (which is probably still fab'ing at what, 65nm? 45?).
Anyway, that Rockchip is far from open. Watch the talk to see his rationale for choosing Freescale as his microprocessor of choice. He remarks on how $2.25 SoCs picked up on the streets of Shenzhen have the full capability to run on GSM, which is really pretty remarkable[2]. Obviously they're stealing IP and probably using grey-market tech/seconds that failed QA, but when he talks about the innovation _within_ that grey market, it's fascinating. Even with his PhD in EE from MIT, experience taking consumer products to market multiple times, and fluency in Mandarin to aid his ability to navigate the markets of Shenzhen, he'd have a real difficult time making a low-unit production run to compete with half the functionality of a low-end Android at twice the price, because of the massive barriers to market. (Starting with easy things like UL certs and proper EMI shielding, up to more difficult political issues in getting a phone like that through the FCC and to the consumer.) There are no economies of scale to help when Johnny doesn't know why he needs OSS.
Tangentially - if the N900 was still around I'd pay retail for it.
[1]http://www.bunniestudios.com/
[2]The reason it's so cheap is because they're clearly not paying any licensing fees to the appropriate IP owners. Every time a phone gets pumped out in a market where companies have litigious recourse, you have compliance with FTDI and pay your taxes to use USB, rent your little carved out spectrum from the FCC, etc.
Are there any non-license encumbered protocols similar to USB? I know DisplayPort is royalty free. How does that even work when you can run USB over Displayport?
I know there is no open-band 4G standard, and while WiMAX is close, it's only advertised for support on restricted radio bands, which is not good enough. But radio spectrum allocation is already an extremely fucked-up mess that needs to be dissolved all on its own.
There's only a license for USB if you want to use the USB trademark, or if you want your own Vendor ID. Pretty much every chip with USB has already paid the money for a Vendor ID because it makes headaches due to collisions, etc., much, much less likely. However, if you need an ID, http://pid.codes/ has PIDs for your use for free. As for USB over DisplayPort: the company has almost certainly paid for a USB Vendor ID, for much the same reason. If you're making a few million chips, the amortized cost of the license becomes less than a penny. And with DisplayPort, if you don't want to carry USB, there's no requirement to - the DisplayPort connectors on my video cards certainly don't carry USB.
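The "less than a penny" amortization claim checks out with some back-of-the-envelope arithmetic. The fee figure below is an assumption for illustration (check the current USB-IF fee schedule), not a number from this thread:

```python
# Amortizing a one-off USB vendor-ID fee over a production run.
vendor_id_fee = 6_000          # USD, one-time fee (assumed figure)
units_shipped = 2_000_000      # "a few million chips"

cost_per_unit = vendor_id_fee / units_shipped
print(f"${cost_per_unit:.4f} per chip")  # $0.0030 per chip -- well under a penny
```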
Regarding similar protocols, there really isn't anything. One of the lessons from the Bad Old Days is that not having a centralized registry gets you frustrating conflicts and huge interoperability problems.
Radio spectrum allocation is fucked up for a number of reasons, and dissolving it would cause even more problems. Radio astronomy needs clear areas of bandwidth so astronomers can survey the sky and analyze it; emergency responders need the same so people can be directed quickly; local broadcasters need it so that people can actually broadcast; cell phones need it to communicate with towers. Radio technology is /complicated/, and at higher frequencies it only gets more so. That's why devices need licensing - if someone screws up the radio hardware/software, it can easily impact more than themselves. Abolishing any sort of spectrum allocation would only lead to a commons that would quickly degrade and send us right back to corded devices.
I just want to point out there is a distinction between allocating spectrum for certain purposes and selling it to private companies. I would never recommend not having a frequency band restricted to emergency communications, though I admit I'm not informed enough about astronomy to know what is optimal there, but I still feel outright saying "you cannot produce radio waves of <these> frequencies, ever" is a very blunt solution.
My larger point is that we have observed over the last twenty years that while signal congestion can be real, we can now build radios sophisticated enough to deal with it much better than in the past. Things you cannot have interfered with, like air traffic control and emergency broadcasts, should absolutely be given their own channels, but technology has improved enough that we could certainly shrink the band allocations they are given today without compromising integrity, assuming the use of more capable radios.
But even then, I'm not strongly arguing to do any of that - I think it is possible, but I also don't think it's particularly necessary. We could simply de-privatize most of the spectrum sold off to private companies and use it for public communications and the common interest. Think of the Internet bandwidth over the air we could get if we had gigahertz of available channels between the 500 and 5000 MHz bands.
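To put "gigahertz of available channels" in perspective, here is a rough sketch of the aggregate capacity such a range could carry. The spectral-efficiency figure is an assumption (roughly what modern cellular achieves under good conditions); real-world numbers vary widely with distance, interference, and modulation:

```python
# Rough aggregate throughput if the 500-5000 MHz range were open for public use.
bandwidth_hz = (5_000 - 500) * 1_000_000   # 4.5 GHz of spectrum
spectral_efficiency = 5                     # bit/s/Hz (assumed, LTE-class)

throughput_bps = bandwidth_hz * spectral_efficiency
print(f"{throughput_bps / 1e9:.1f} Gbit/s aggregate")  # 22.5 Gbit/s
```

Even divided among many users and cells, that is an enormous amount of shared capacity.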
Another frontier that the Free Software community should set their sights on is virtual reality. The potential impact of VR is enormous, and one of the leading VR development and content creation tools (Unreal Engine) recently open-sourced their product (though I'm not sure if it was done with an FSF-compatible license). Unity, the real giant in the VR arena, and Oculus' own code are closed, if I'm not mistaken. I'm not sure how open Oculus' competitors are (HTC Vive, PlayStation Morpheus, and Microsoft's HoloLens), but I wouldn't be surprised if they're all closed. Making a dent in the openness of the ecosystem at this relatively early stage in its development could have a big impact.
Google is working on an Android based virtual reality OS [0], hopefully that will be free. Google Cardboard has a repo but it's just binaries. Valve/HTC are ostensibly working on opening their Vive headset through OpenVR, but the repo [1] is a bunch of binaries as well. Oculus freed the DK1 firmware [2] (not the sdk though), Linux support is coming after launch (probably not free though). Slight aside, Palmer said in an interview that they've noticed that people spend more time watching movies than gaming on the GearVR, so there's definitely tracking code in there. Finally there's OSVR, which seems the most promising, though the actual hardware maybe not so much[4].
Ultimately, I think there's enough to be optimistic about. Someone definitely should work on extending X11 or Wayland/Mir to VR, though I'm not sure how doable that is at the moment. Plus things are moving quickly right now, which complicates things even more. It will probably be best to wait for standardization, which Palmer has said will happen eventually, or build a mobile OS for VR.
> One of the leading VR development and content creation tools (Unreal Engine) recently open-sourced their product (though I'm not sure if it was done with an FSF compatible license).
Nope, sadly absolutely non-free. A nice example of how the term 'open-source' gets abused.
In addition to phones, it's obvious hardware should be tackled, as most mobile hardware has proprietary firmware. Plus, some hardware has features that are hard or impossible to turn off.
* https://dougbelshaw.com/thesis/