I find this comment mildly irritating. The Alto apparently cost about $40k around 1973, which was equivalent to roughly $95k in 1984 (or $232k today). Compare that with the Mac, which cost $2.5k in 1984.
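For what it's worth, the conversion is just multiplication by a price-index ratio. A minimal sketch in Python, where the multipliers are back-derived from the figures quoted above rather than taken from official CPI tables:

```python
# Sketch of the inflation conversion above. The multipliers are assumptions
# back-derived from the quoted figures ($40k -> $95k in 1984, $232k today),
# not official CPI data.
ALTO_1973 = 40_000
TO_1984 = 95 / 40     # implied 1973 -> 1984 multiplier
TO_TODAY = 232 / 40   # implied 1973 -> today multiplier

alto_in_1984_dollars = round(ALTO_1973 * TO_1984)    # 95000, vs. the Mac's 2500
alto_in_todays_dollars = round(ALTO_1973 * TO_TODAY) # 232000
```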
Sure, the Alto may have been more powerful, but if it was financially out of reach of almost everybody, it's no wonder it had a limited direct impact.
I wonder if it's a trend that Alan underestimates the importance of economics for technology to actually have an impact.
(Worth pondering this way of going about things.)
What was disappointing is that the market couldn't value personal computing, especially the general market. For example, the Lisa with its hard disk was really the better machine to be the flagship for Apple. In the early 80s it was priced at less than an average car -- ~$10K -- a mass market price if you could see that this would be your "information and speculation vehicle".
Instead, PCs could only be sold at consumer price levels, more or less as novelties and in business as spreadsheet machines -- and this forced the Mac to be much smaller (both in RAM and disk) so that it was more like a toy than a vehicle.
So, no, we (not I - I was part of a research community) did not underestimate the importance of economics. Note that my original Dynabook paper in 1972 said these would eventually be sold for the same price as color TV sets. But we did underestimate the ability of the early market to value computing.
A car is a major purchase and people keep them for years or get significant resale value out of them. Moore's law would mean Lisa would be rapidly devalued. For businesses that could justify it generating $x amount of revenue in that time it could make sense. For individuals at the time I'm not sure.
That's not to say people don't spend extravagant amounts on cars beyond the base level of needed functionality. I just can't think of too many purchases at that level that get obsolete so quickly.
So for the majority of the consumer market, computers just didn't really offer $10,000 of value.
The business market was maybe a little bit different, but still, businesses suffered badly through the early 80s too, and a computer would be an R&D expense more so than a business expense. It would be a very long time before computers could do things beneficial enough to businesses to justify high upfront costs (plus the costs of finding or training anybody to operate the things). And computers represented quite a big change in operations, and as anyone who's done any kind of business consulting has learned first-hand, businesses hate that.
With all due respect to the people who are much smarter than I am, the folks who invent and pioneer technology so often view it through a very different lens than the people who would use it, and they really struggle to bring the technology to larger society. Dean Kamen is my favorite example of a brilliant guy creating amazing technology that nobody can afford or find the right use for. Someday somebody else might come along and give the world the same thing, but much cheaper and in a different package, and it'll change a lot of lives -- and there will, as usual, be people who say, "But Dean Kamen invented it first and did it better." Well, sure, but nobody knew about it.
Despite the flaws that his detractors are always eager to bring up, that was something I respected about Jobs (and a few other industrialists) -- the combined ability to understand new technology, visualize how it could be brought to a broad market, and then do it successfully.
It's kinda fun to imagine how things might've worked out if Apple had never visited Xerox. Maybe SGI would do it instead and Apple would be that company that made the boxes we all played Oregon Trail on. But, I'm pretty sure we wouldn't be using a Xerox OS.
Jobs participated in a Q&A at the MIT Sloan School of Management in 1992, and one of the topics he discussed was the marketing pivots needed for the Mac and NeXT once he figured out what real-world tasks those platforms excelled at that their competition did not (desktop publishing and enterprise rapid application development, respectively).
Video of the talk has recently been made available.
But I don't think the Lisa was right for the corporate market either - since it didn't have connectivity to the firm's mainframe or midrange. On the Lisa, you had a mouse port, parallel port, and two serial ports. The serial ports could have worked, but IIRC the SNA network stack needed wasn't on the Lisa (and Apple would have been quite hostile to IBM if they'd tried to add it). So firms would have had to share data via sneakernet, and that was the same thing they'd been doing with their Apple ]['s for several years now -- so the 10x price difference wasn't worth it, even if the UI was gorgeous.
There was also the matter of the installed base of machines the company might have had. The Apple ][ used standard Shugart 5-1/4" mechanisms, but the Lisa (originally) used their new FileWare "Twiggy" drives which placed the top/bottom heads on opposite ends of the diskette. So information couldn't be shared between Lisa and Apple ][ users.
Also - this may be my only chance to say Thank You for your work. So .. Thank you very much.
With the C64 in particular there was no real distinction between the operating system and the programming language, the programming language actually WAS the operating system. The machine booted into an empty canvas for you to create something with.
I still find that idea fascinating. Absurdly enough, the closest we've come to this again afterwards is Microsoft Excel. A spreadsheet application today is the closest general purpose computing equivalent to 'an empty canvas everyone can use to create something with right away'.
The difference is one of visibility and discoverability. The 70s and 80s were the time when user equated to programmer, mostly by technical necessity. The separation between those two concepts came later, and it's evidenced by the way modern Windows hides the programming languages it bundles.
We had to wait for HyperCard on the Mac and I'm not sure such a thing even exists on modern phones.
Not understanding HyperCard was one of Apple's largest mistakes in the world of end-users. It was a real breakthrough: something end-users could really handle and usefully program themselves. Besides not understanding its significance on the Mac, we (old PARC hands) pleaded until we were blue in the face to make HC the basis of a really good web browser (it was a great model of a symmetric author-consumer media tool). Missing the latter was a tragedy.
In the light of the first comment, we could then contemplate an end-user system that combined what was great about Hypercard, Smalltalk, and some other experience from the 80s (e.g. the use-cases from Ashton-Tate "Framework", etc.).
>Who Should Manage the Windows, X11 or NeWS?
>This is a discussion of ICCCM Window Management for X11/NeWS. One of the horrible problems of X11/NeWS was window management. The X people wanted to wrap NeWS windows up in X frames (that is, OLWM). The NeWS people wanted to do it the other way around, and prototyped an ICCCM window manager in NeWS (mostly object oriented PostScript, and a tiny bit of C), that wrapped X windows up in NeWS window frames.
>Why wrap X windows in NeWS frames? Because NeWS is much better at window management than X. On the surface, it was easy to implement lots of cool features. But deeper, NeWS is capable of synchronizing input events much more reliably than X11, so it can manage the input focus perfectly, where asynchronous X11 window managers fall flat on their face by definition.
>Our next step (if you'll pardon the allusion) was to use HyperNeWS (renamed HyperLook, a graphical user interface system like HyperCard with PostScript) to implement a totally customizable X window manager!
>What is HyperLook? It's a user interface environment for NeWS, that was designed by Arthur van Hoff, at the Turing Institute in Glasgow. HyperLook was previously known as HyperNeWS, and GoodNeWS before that.
>Open windows with HyperLook
>HyperLook is an interactive application design system that lets you develop advanced multimedia systems via simple direct manipulation, property sheets, and object-oriented programming. It releases the full power of OpenWindows to the whole spectrum of users, ranging from casual users who want a configurable desktop and handy presentation tools, to professional programmers who want to push the limits in interactive multimedia.
>You design interfaces by taking fully functional components from an object warehouse. You lay them out in your own window, configure them with menus and property sheets, define their appearance in colorful PostScript fonts and graphics, and write scripts to customize their behavior.
>You can write applications in C or other languages, that communicate with HyperLook by sending high level messages across the network. They need not worry about details like layout, look and feel, or fonts and colors. You can edit HyperLook applications while they're running, or deliver them in an uneditable runtime form.
>HyperLook is totally extensible and open ended. It comes with a toolkit of user interface classes, property sheets, and warehouses of pre-configured components with useful behavior.
I find this especially telling after Apple's recent education-focused announcement, where Apple continues trying to push the iPad as an educational tool. Is this Tim Cook trying to continue Steve Jobs' vision of Alan Kay's vision? Or do Apple seriously believe the iPad as it is has a business chance in the cash-strapped education market?
He was talking today about playing games on the PCs in the computer lab, and my wife asked about the iPads, and you know what he said? "Those are for doing work."
(The second-most common place is as a distraction device given to three-year-olds.)
The percentage of users that use computing devices primarily for something other than content consumption is pretty small. I actually think an argument could be made that phones are used more for productive tasks than tablets.
Computing has augmented human intellect, but right now almost exclusively for scientists, engineers, the medical profession, and other professionals. The real computer revolution will happen when the general public are able to boost their own intellects internally and to boost their reach externally with the help of computers.
Getting fixated on a poor problem can mask the problems that need to be worked on. A big one is "real education". Another is "real governance and democracy".
An analogy is to the Kennedy moonshot, which set back space travel more than 50 years. Real space travel has to be done quite differently than just relying on chemical rocketry, and the better ways to do it were swamped by the "stunt" to this day.
Isn't this already well under way, considering people's day-to-day use of phones and tablets for google searches, wikipedia, calendars, maps, and various 'helper' apps (for meditation, productivity, project planning, etc.)?
> And he says, “Well, people lose their pens.”
> And I said, “Well, have a place to put it.”
It does. It just has to sell on hype instead of any actual utility. It's a very common sales strategy.
(This is sad, really. As I understand, e-Ink tech is patented to death, so there's little progress there.)
Still, I'm not saying iPads are useless in general. The topic is relative merit of iPad vs. other possibilities, in education. My claims are as follows:
- the iPad is a bad idea in education, partly because of the low capabilities of the device, and partly because there aren't many apps that make full use of the device's capabilities to encourage learning (this is kind of similar to "edutainment" or "educational games" for PC - utter crap that only fools parents)
- given that, the iPad loses to a plethora of other tools available for teachers, most of which are low-tech
- neither of the two points above will prevent the iPad from succeeding in this market, as the hype is strong enough to overcome rational obstacles
Incidentally, I feel like it's 2015 all over again. I thought iPads had already succeeded in getting sold to schools en masse, and it had already turned out to be a colossal waste of money. I'm pretty sure we already had those discussions on HN.
- e-ink doesn't have color, color is helpful in educational material to make it friendly and accessible
- there are outlets everywhere, so unless you're going camping, the battery life is not that big of an advantage, more of a slight convenience thing
all of these points are why tablets aren't going anywhere
The first thing I did after trying to read on a tablet was to buy a Kindle.
Reading on a tablet lasts minutes; reading on an e-ink device, hours.
An e-reader has some disadvantages compared to a tablet. For example, it does not have a capacitive screen, the ability to draw (due to the low refresh rate), or a stylus as input. It is a great tool for reading, and it does that better than any other device (if you accept its advantage over a paper book, which is weight vs. amount of content), but not for, say, practice and exams.
I gave all my nieces and nephews tablets, and they all used them, I hope I didn’t destroy their lives.
Seriously, I have a hard time believing you can quote any evidence that a paper book is much better than a tablet for everyone. Which appears to be your assertion, although it’s hard to tell.
My assertion is in context of education. If you're going to use a tablet as an electronic version of paper books, then paper books are better, because:
- they're much faster to skim through and search through
- print has better resolution than screen
- you can actually make notes on them - I mean free-form notes, made in an efficient way
- you can pull out multiple books and compare them
- they're much nicer on your eyes
Note again, I'm talking about education. That's different from recreational reading of fiction, and more akin to reading technical books - you need the ability to skim, skip, quickly move around and cross-reference stuff separated from each other by many pages.
Now tablets could be better if electronic textbooks actually exploited the medium they're using - the way Bret Victor has been talking about for ages now. See e.g. http://worrydream.com/KillMath/ or http://worrydream.com/ScientificCommunicationAsSequentialArt... (just got on HN frontpage, https://news.ycombinator.com/item?id=16836530). However, I've yet to see that happen in the wild.
Ultimately, my point isn't that tablets are bad. Hell, I'm a happy user of such devices, and they definitely help me read on the go (they beat the Kindle for technical books, due to color and a better workflow with PDFs). My point is that tablets get dumped on schools and are used to view textbooks, which is something at which they're worse than regular textbooks. Both teachers and publishers have little clue how to use these devices to their full potential, and there doesn't even seem to be a movement to help them get that clue.
We mostly think of paper as a poor archival medium because of a few decades, mainly at the end of the 19th century and the first half of the 20th, when high-acid wood-pulp paper was very widely used. (Newspapers are also sometimes printed on cheap acidic newsprint.) Acidic paper can become brittle within 20 years or so - in any case much less than a century - although this also depends on many other factors. But both before and after this period, many books were printed on acid-free paper, and these are likely to survive for centuries, even with occasional use. The observations about paper's longevity in, for instance, Nicholson Baker's polemical Double Fold tally well with my experience as the son of a bookdealer. My father routinely sells books that are over a century old to people who fully expect to read them.
And I remember visiting an antiquarian book fair where a first folio Shakespeare was for sale for millions of dollars. It was still possible to turn the pages with no damage and all the printing was completely legible, even though the book was already 380 years old or so.
Books on low-acid paper are really a very durable technology when treated with respect.
Even poor quality books are mostly readable after decades.
I can't say for certain that a single one of my disks has survived since the 1990s, although I can hope.
A book can be usable even if it has a few loose pages. Also you can fix it.
Disks break more often and when they break you are lucky if you get anything out of them without paying (or being) a specialist.
I have a big shelf of books.
It includes my mother's chemistry book from before I was born, as well as a few books bought second-hand.
I have maybe one disk from the time I was a teenager.
Yes, fire will destroy a book (although parts will often be recoverable using low tech methods).
The same fire will certainly destroy disks and make them unrecoverable for everyone but recovery labs (if they are recoverable at all.)
Summary: books fail slower than disks. When they fail they often fail gracefully and you can still use them and even repair them using low tech tools.
Disks fail more often. In fact, you rarely see working disks from the last decade. They fail for a number of reasons, and other times seemingly for no reason at all. When they fail, they might have tried to warn you, but often they just fail with no warning - and unless you work in IT, you'll rarely see those warnings anyway. Also, after they've actually failed, you more likely than not need a recovery lab to get anything back.
I am suggesting that claiming that one thing is best for everything and everyone is not true.
I was physics/math and was lucky that my textbooks didn't change that rapidly, but still, it's a bit odd to claim that paper is the obvious choice for everyone based on widely varying circumstances.
And you can search through the content
And you can carry thousands or millions of them in a single device
And you can add a new one to your collection without having to physically go anywhere to buy
And you can draw and make marks without damaging the content forever
And you can create bookmarks in multiple ebooks and have them all available at the same location, also with search functionality
And the device you use to read can be lighter than an actual book, at least it will always have the same weight regardless of how many ebooks you carry
There is probably more.
I assume your family doesn't use their tablets with candycrush-like apps unlike some of my acquaintances. Which apps do they use to make it worth it?
I tend to assume that the burden of the proof of value is on the new tool, otherwise we just keep changing and nothing gets ever battle-tested. Paper, pen and other boring educational tools have drawbacks and benefits that are well-known; what about tablets?
Have we regressed to the point where the simplest statement requires third-party ("scientific") references?
A tablet offers 100x the functionality of a paper book.
Even just in reading mode, it can carry 1000s of books; one can take notes, highlight, search them, and have it remember one's position in each; it has its own light (adjustable to match ambient levels or go beyond them); it can increase the font size (ever thought about those without perfect eyesight? Sucks to be them with paper books, huh?); it can even speak to the reader through TTS.
Not to mention that ebooks are much cheaper than paper books to boot (and one can even pirate them, if so inclined). Or that there are also 100,000s of FREE ebooks (classics and such), but seldom (if ever, except for charity or so) free paper books.
I do agree with your statement that a tablet has more functionalities than any given book. Indeed, as you point out, a tablet can contain thousands of them.
What I tried and failed to express was that, in an educational setting, these functionalities are sometimes poorly used. This obviously depends on the teacher, if there is one.
It is not clear to me how teachers are supposed to use a tablet to enhance their classes, and in my (admittedly limited) experience, they are left to their own devices or with poor guidance to say the least. It could be that teachers more familiar with technology (beyond just using it) know what to do with it while teaching, but to which extent? I would have assumed that using tablets instead of books is at least as expensive; is this cost worth it in terms of gains?
In the Alan Kay interview which has already been linked in this thread:
I said, “If we’re gonna do a personal computer”—and that’s what I wanted PARC to do and that’s what we wound up doing [with the Alto]—”the children have to be completely full-fledged users of this thing.”
Think about what this means in the context of say, a Mac, an iPhone, an iPad. They aren’t full-fledged users. They’re just television watchers of different kinds.
That doesn't sound so promising in terms of education, unless I am missing something. From time to time, I see parents leaving their phone or tablet to their kid, who invariably plays some game like candy crush. How do you transform a device that was made for entertainment into an educational tool?
There have also been stories in the past about big tech leaders sending their children to "boring" schools without much in terms of technology, and mostly pen and paper, which suggests that better technology is not necessarily better for education.
Regarding the more limited scope of "just reading", I read a lot on my phone, but it is easier for me to get distracted since I can do so many things with a phone or a tablet. Granted it's the user's fault, but you don't need any self-discipline when using a simple book, which is at least one advantage.
Edit: I should have seen that TeMPOraL's comment already says it all: https://news.ycombinator.com/item?id=16836634
And it would have to be a lot better to be the obvious choice for 100% of readers that some folks in this thread appear to think that it is.
But of course this is just based on my limited personal experience, I could be totally wrong.
My nieces and nephews were k12 when I gave them tablets.
Apparently I missed the news that tablets were terrible 100% of the time! Which appears to be what you're saying, but I can't really tell, what with all the names you called me. "Out of touch" or "ignorant", what a choice!
Sorry, but what is really irritating is the comparison you are making. 1973 component prices are in no way comparable to 1984 component prices, and Xerox's target market and desired margins were completely different from Apple's. If you actually look at the commercial machines that Xerox developed after the Alto prototypes, you find things like the Star, that came out for $16,000 in 1981. Compare that to the Apple Lisa, which came out for $10,000 in 1983. And it is not really a fair comparison, because the Star had much better hardware in every respect. What would be a fair comparison to the Lisa in terms of hardware capabilities would be the 1978 NoteTaker, built with off-the-shelf components for about $15,000 (my estimate: https://oneofus.la/have-emacs-will-hack/2012-01-11-the-perso...). The people at Xerox were perfectly capable of designing and building low-cost computers with third party components. Xerox never entered the low-cost personal computer market. In the early 1980s Xerox sold high-margin workstations to large corporate clients, and in terms of price/performance their offerings were very competitive.
If you want to say that not selling cheap mass-market products has "limited direct impact," that is true, but is also not a very insightful observation.
This was a rather plain CP/M computer released almost at the same time as the Star. Pretty sad when you think how much better it would have been for them to just ship the 3-year-old NoteTaker with no changes instead.
It's not made very gracefully, but the point is that the Mac wasn't a big leap forward in capability, it was the consumer version of the thing they had working a decade earlier.
Of course it's an open question what sort of computers Apple would have made if Jobs hadn't visited PARC.
Possibly the best look we'll ever get at the type of human-computer interface he wanted to build is written down in "The Humane Interface", although I believe he did develop a system for Canon that implemented some of his designs (correct me if I'm wrong).
Based on other writings of his, which I've also enjoyed, he doesn't seem shy to throw in these jabs.
Earlier iterations of the machines supported by this community utilized ARM-based CPUs that had smaller cores onboard that could be repurposed for other things -- audio and video codecs, 3D engines and so on. Maybe you have yet to see the Pyra/Pandora classes of devices?
The software nature of the device also needs to match up to human abilities and needs: so I always ask questions about the GUI, how does it help in learning itself and other things, what kinds of things can I see and contrast and compare, what kinds of things can I do by programming, what kinds of things can kids do by programming.
And so forth. What do you think?
1. The Memfractor framework: http://slideplayer.com/slide/3869533/
2. knowm's memristor based Hebbian Learning technology:
https://vimeo.com/125361591 (Their CEO is on reddit, btw: https://www.reddit.com/user/010011000111/)
3. Some intentionally not-turing complete components with some recursivity in them, which is apparently possible:
"We further showed that certain variants of DOT’s recursive self types can be integrated successfully while keeping the calculus strongly normalizing. This result is surprising, as traditional recursive types are known to make a language Turing-complete."
"What is the minimum required change to achieve Turing-completeness? Consistent with our expectations from traditional models of recursive types, we demonstrate that recursive type values are enough to encode diverging computation."
"But surprisingly, with only non-recursive type values via rule (TTYP), we can still add recursive self types to the calculus and maintain strong normalization."
"Can we still do anything useful with recursive self types if the creation of proper recursive type values is prohibited? Even in this setting, recursive self types enable a certain degree of F-bounded quantification, ..."
+ other langsec-related hijinks; see also http://bootstrappable.org/ and http://langsec.org/occupy/
4. Software (UX) & hardware (again: UX) elements/modular components that scale using morphic numbers, i.e. the golden ratio and the plastic number (whose square Donald E. Knuth likes to refer to as "High Phi". He even suggested a special mark for it, which nobody ever bothered to include in Unicode): https://en.wikipedia.org/wiki/Plastic_number
5. Balanced ternary. We've recently made significant advances in solving the issues with ternary hardware. For an example, see: http://www.ingentaconnect.com/content/asp/jolpe/2017/0000001... Mirzaee, Reza Faghih; Farahani, Niloofar - Design of a Ternary Edge-Sensitive D Flip-Flop for Multiple-Valued Sequential Logic (preprint here: https://arxiv.org/abs/1609.03897)
6. Spatial Analytic Interfaces: https://mspace.lib.umanitoba.ca/handle/1993/31595 Just take a look at the figures to see where this goes. Incidentally, this kind of user interface lends itself well for Dataflow programming...
7. Projective Programming: http://www.reddit.com/r/nosyntax. See also the Racket Medic Debugger https://docs.racket-lang.org/medic/index.html / https://www.cs.utah.edu/plt/publications/fpw15-lf.pdf, but far more importantly the sequel to it, called 'Ripple': https://dl.acm.org/citation.cfm?id=3136019 Li, Xiangqi; Flatt, Matthew - Debugging with Domain-Specific Events via Macros
8. GNUnet and related hijinks, see https://pdfs.semanticscholar.org/153d/578869863450766a870727... and https://news.ycombinator.com/item?id=15877908. See also P3KI, who FINALLY released a whitepaper: https://p3ki.com/docs/Decentralized_IPsec_VPN_using_P3KI_Cor...
9. Xanadu, Zettelkasten, Smalltalk, Lisp Machine, Plan 9, Inferno etc. related hyperlink crowd/mesh/cloud/grid/whatever hijinks, both for social and for programmatic cooperation. I'll not link anything on that here, since, well, kind of pointless to feed Alan Kay the kool-aid he in part came up with long ago. ;)
And then there exist a million other things that should go here, too, but for which I lack the time to type them out - I already consider this far, far too terse to make any sense to someone without A LOT of preexisting knowledge.
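On point 4 above, for concreteness: the golden ratio and the plastic number are the real solutions of x^2 = x + 1 and x^3 = x + 1 respectively, and (as I understand it) the only two "morphic numbers". A minimal numerical sketch:

```python
# Sketch: compute the golden ratio (x^2 = x + 1) and the plastic number
# (x^3 = x + 1) by fixed-point iteration; both iterations are contractions
# near the root, so they converge from x = 1.
def fixed_point(f, x=1.0, iters=200):
    for _ in range(iters):
        x = f(x)
    return x

golden = fixed_point(lambda x: (x + 1) ** 0.5)       # ~1.6180339887
plastic = fixed_point(lambda x: (x + 1) ** (1 / 3))  # ~1.3247179572

# "High Phi", as described above, is the square of the plastic number:
high_phi = plastic ** 2  # ~1.7548776662
```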
Edit: Below is the interview. He "had an idea of being able to type on a multi touch glass display"
https://www.youtube.com/watch?v=i5f8bqYYwps @ 37:00
And let's not forget that a Microsoft guy provoked him into creating a tablet called the iPad.
Jobs went into the office the next day, gathered his team, and said, "I want to make a tablet, and it can’t have a keyboard or a stylus."
This is another example of the old internet trope "Do people still watch TV? I haven't had a TV in 10 years".
Now, at the old netbook price range, you can only find hybrid/detachable-keyboard laptops.
The market seems to be saying: if I need something that I have to lug around, I might as well have a keyboard. i.e. a laptop.
Another effect is that people are doing everything they can on a phone, because of the convenience. This leads to slightly larger phones every year; now a 5.5" phone is no longer considered a "phablet", and Apple is rumoured to have a 6.5" phone next (their smallest tablet is 7").
So... tablets will rule the world, but they'll be "phones". The barrier to adoption was changing consumer behaviour... as always.
These days I'm not using Windows much for desktop (HN took over the time I had for games), but I do sincerely hope this OS will live on for at least a couple more years, because Microsoft is the only company that knows how to do tablets. Seriously. A proper operating system, with real capabilities and a proper user interface that can make use of the tablet form factor, makes a world of a difference. And I say that as someone who owned a decent Android tablet in the past, and whose friends and family currently own decent Android tablets. My little Windows 10 el-cheapo tablet/netbook, picked up second-hand for ~$130, is much more useful than the top-of-the-line Samsung tablet offering. Hell, it has already earned back many times its own value, as it's good enough to do some meaningful dev work on the bus/train.
I realise I'm making a slightly different point as I'm purely talking about netbooks/laptops. But having the full power of an OS on a tablet is definitely the way to go.
Certainly that's one of the bad points about iOS and Android: they are so locked down that you have to jump through hoops if you want to use them as a work machine.
For my current job I bought myself a $350 refurbished Thinkpad (T430, 8GB RAM, SSD, Core i5), and it brings in all my income. Compare this to my nephew, who is paying $1000 for an iPhone X because he got bored of his iPhone 8.
Further to that, I bought exactly the same spec Thinkpad for my 5-year-old daughter. The Thinkpad T-series are great because you can pour a litre of liquid over them without problem, plus they're built like a brick, so basically perfect for kids. My daughter immediately covered the grey brick with shiny stickers and gave it a name (she has an iPad, but that's always just called "iPad"). It has the full capability to do everything she'll ever need, in theory for the rest of her life/career. Further to that, it's got Ubuntu installed, so I can then install Sugar for her to use (the same software used for One Laptop Per Child).
The possible drawback is that it doesn't have a touchscreen. But from my experiment of buying a laptop with a touchscreen, I found I pretty much never wanted to use it - it's a slower interface than keyboard and mouse. You want the screen in front of you at arm's length, but then you have to reach out to touch it.
I can now teach her over the years what it means to have real freedom with your software and hardware.
Do you know of termux.com, for Linux without rooting? It's basically a Linux distribution, adjusted for Android's slightly odd filesystem. The app itself is tiny, and you install everything else from its repository: e.g. there's LaTeX, youtube-dl, curl, vim, clang, ecj, python, ruby, perl, lua, erlang etc.
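For anyone who hasn't tried it, a minimal Termux setup sketch might look like the following (this assumes Termux's `pkg` wrapper and default repositories; the exact package set shown is just an illustration, and `user@host` is a placeholder for your own server):

```shell
# Refresh the package index and upgrade what's already installed
pkg update && pkg upgrade

# Pull a few common tools from the Termux repository
pkg install openssh python vim curl

# With openssh installed, you can work on a remote machine
# from the phone (replace user@host with a real server)
ssh user@host
```

Under the hood, `pkg` is a thin wrapper around apt, so the workflow feels much like a Debian-style distribution.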
A Bluetooth keyboard completes the DIY Linux netbook... sadly, Bluetooth keyboards are also almost gone, presumably as people adjust themselves to "touch" typing.
For the occasional typing, I use my same mini Bluetooth keyboard for my computer and my iPad.
I had an original iPad bought in mid 2011 but it didn't get much use - it was no more than a "big iPhone".
But a lot has changed since then.
- Cellular connectivity is a lot cheaper. I pay $20 a month for unlimited data with T-Mobile.
- I got rid of cable and use Netflix/Hulu/DirecTv and Plex.
- PluralSight is available and I use that for watching technical videos.
- The screen resolution is a lot better for ebooks.
- Better productivity software from Apple, Microsoft and Google.
- iOS has made iPad specific improvements - split screen, picture in picture, and drag and drop.
I've been using an iPad mini with Cellular as my phone-replacement mobile device since the day iPad mini came out. It fits in my pocket (not as comfortably as a phone, but well enough).
My only other device is a 15" MBP.
To be more accurate, the iPad mini is 7.9", not 7" though. And 4:3 aspect ratio. So in area, it's still much bigger than a 6.5" 16+:9 phone would be.
They do when they're trying to create something more complicated than a social media post.
If you really believe that the only thing “younger generations” do is post on social media, I suggest you spend more time around high schoolers (for example), particularly those with limited financial means.
Reminds me of a friend who, a few years ago, was a teenager of limited financial means. He earned his first real $50 through a phone. His laptop broke, and he couldn't borrow one from anyone in time, so he ssh'd into a remote machine from his phone and did the work (some data-wrangling task) that way. Where there's a will, there's a way.
But still, that in no way suggests phones are good at text editing or productive work. They're capable enough, if you have lots of will and no other option.
It's awesome that those with limited funds can still get stuff done, and phones are great for consuming information. But I don't think there's much argument against the efficiency gain that hardware keyboards, mice, and larger screens with multiple windows give once you get into things that require fine tuning and multitasking, like drawing, coding and video editing.
(sorry I know posts like this don't make interesting reading, I just know how frustrating it can be when you put effort into writing something to avoid being misunderstood but still get taken the wrong way... so wanted to point out that at least for this reader the negative connotation was not there)
It's sized for creating things, but running the OS of a device made for consuming things.
In Facebook's case, it was famously harvard.edu. A lot of people want to belong to or participate in exclusive clubs. Something other people are widely excluded from. The commons is Answers.com or Yahoo Answers. The people writing junk and spam questions & answers on Yahoo Answers, are not participating on Quora.
In Quora's case, it was that it originated / spread out of prestigious tech circles. A small group of connected, rather elite, techies out of Silicon Valley seeded the site in its early days. Those people were connected to other prestigious tech people, and so it went.
Adam D'Angelo, one of Quora's two founders, is the former CTO of Facebook and a near-billionaire who went to Exeter with Zuckerberg and then to Caltech. Elite + connected.
Then last but not least, Quora enforces fairly high standards on quality, similar to why Stack Overflow isn't overflowing with trash despite its immense scale.
Oh boy. You haven't checked out the newest questions page on SO, have you? There are oh so many new low-quality questions.
That said, they're mostly so badly written that they rarely show up on Google. But if you're actually trying to answer questions, then good luck finding any worth answering.
I simply gave up answering questions unless I stumble upon them while googling.
"Think about this. How stupid is this? It’s about as stupid as you can get. But how successful is the iPhone? It’s about as successful as you can get, so that matches you up with something that is the logical equivalent of television in our time."
"Yeah. We can eliminate the learning curve for reading by getting rid of reading and going to recordings. That’s basically what they’re doing: Basically, let’s revert back to a pre-tool time."
Can someone explain the joke?
Also does anyone know how long Alan was with Apple?