We have these things we carry around with us, called "phones"; they are actually general-purpose computing devices and making (what we still call) phone calls is a small part of their purpose. And to an increasing extent these can fulfill almost all our computing needs.
So, fast-forward a couple of decades, assuming no huge technological shifts. Everyone is used to having one of these things that they carry around in their pocket, and scarcely anyone really needs any other general-purpose computing device. We still call them "phones" because we always did -- so now this is the usual term for a general-purpose computing device. And, yeah, there are some people who, because of their unusual needs, have a bigger more powerful one. Well, what are you going to call something like a phone but sized to sit on your desk while you work? Obviously it's a desktop phone, right? And somewhere out there In The Cloud there are whole farms of server phones :-).
And there will be conversations like the ones we now have about how "computer" used to mean a human being who did calculations. "Hey, did you ever look up where the word 'phone' actually comes from? It turns out that back in the 20th century they mostly used them for talking to one another, and it comes from some Greek thing meaning 'sound at a distance'. Weird, huh?"
 That's a pretty big assumption, of course.
 For sufficiently small values of "we".
Unless they want to read something, or type more than a paragraph, or do anything collaborative. Because it still fills a niche role, not a general computing one, the ability to connect to service providers is its dominating characteristic.
If anything, I see "phone" moving to be a description of a service, and devices just getting device-specific names. People have already started dropping the "smart" part and just assume that a "phone" at LEAST can call. I now distinguish between "phones", "flip phones", and "land lines". If I don't need to talk, why not just get an iPod and a laptop? Each device is much better when giving up the domain of the other.
Unusual needs like being able to see a lot of text at once, type things longer than a paragraph, multitask, use more than one app in a work flow, and run real software?
I doubt it... unless phones develop dockable desktop capability, in which case they are now dual-purpose converged devices. Right now, to believe that mobile is "the future" of everything requires one to believe that most people have nothing to say beyond one-liners and selfies and that the only purpose of computing is to interact with canned services. That might be true for a subset of the market, but I think it's a smaller subset than some people believe.
Phones already have dockable desktop capability (that is, support for desktop-style output via appropriate cables that support HDMI, and support for desktop-style input via Bluetooth keyboard/mouse.)
Already here, and it's only going to get better.
In tech-land and teen-land you can specify platforms to each of your relationships. But in the average American's day, doctors, daycare providers, clients, etc are all going to be contacting you by phone.
Without being required to carry around the phone part, a pocket-sized tablet would have 10x less adoption than it does today. "smart phone" it still is.
So if that device is uniquely identifiable by a phone number, and you are the only possible user of that number at any given time (again unlike social network accounts), you might as well call it a phone.
Not necessarily. There are phones which can hold two SIM cards.
I can also use these devices without a SIM card and never make a phone call. The evolution of the device definitely led to the name, but it's clear that that's just another peripheral use, like taking photos, listening to music, or getting turn-by-turn driving directions. If any device had all those capabilities, we'd call it a computer of some kind.
You're not going to get the media to use the name "palmtop," but if the tech community adopted it I think we'd do and see things a little differently.
Heh. My eye doctor just confirmed my upcoming appointment via text message. I'm sure they would be calling if anything outside of the most common case needed to be resolved, however.
This is usually used as an argument for why the Japanese decided to colloquially shorten 携帯電話 ("keitai denwa", hand-held telephone) into just 携帯(hand-held, or handy) instead of just calling it 電話(phone) like so many western countries do or 携電("kei-den"), which follows a more typical pattern for abbreviations in Japanese.
However, with the success of the iPhone, which was incompatible with all the feature-phone services, a new segment arose consisting of iPhones and non-backward-compatible Android handsets, and these are denoted as "smartphones", shortened to スマホ (sma-pho).
I think it is interesting that they chose to move closer towards the "phone" moniker rather than staying with the focus on hand-held.
Same idea: recently, the name "iPhone" has become quite popular, and people use it to distinguish between phones made by Apple and the rest.
It was a predictable outcome though. Japanese has a tendency to abbreviate common words, and the term smartphone (the unwieldy スマートフォン/sumātofon) was imported with the first generations of smartphones.
The German Handy on the other hand (as ccozan mentions)… What a lovely word.
I'm not sure I get what word "Gara" is a homonym of; to me it seems like a fairly normal word.
I've heard people calling them ガラクタ (crap) phones before. It's obviously just slang...
If you look at where the battery goes, that's usually an accurate distinction. If your screen-on time is less than 4 hours per day, you may spend 75% of battery on the cellular antenna. Everything else takes a lower priority.
Calling it a phone also efficiently distinguishes between smartphones and tablets or devices like the iPod Touch, which are identical except for the lack of a cellular antenna.
> They would have been sued into oblivion, lost half their customers, suffered one hell of a bad press. They could have sunk. Yet when Apple did exactly that to their new computer, the iPhone, few objected and customers flocked.
This just tells me that execution matters more than the initial idea. There's plenty of competition in the mobile space. If users didn't want Apple's locked-down ecosystem, they wouldn't buy iPhones. The vast majority of users don't care about side-loading apps or installing another operating system. They are (quite rationally) willing to sacrifice customization and app choices to avoid malware.
As for people, people didn't sue Microsoft, companies did. People mostly didn't care. Companies like Sun and Netscape cared. They're the ones that lobbied to have the government declare MS a monopoly. In other words, it had nothing to do with execution and perceived awesomeness by customers. At the time MS was sued, IE was the best browser by pretty much every measure, so customers wanted it. It was companies that were upset.
Also malware infested vs walled garden is a false dichotomy
Meh. IE was the browser with the best website compatibility, because Microsoft went out of its way to reimplement half of the HTML standard in its own way, and could use the prevalence of IE to make web developers follow "their" flavour of HTML.
Mozilla's first Firefox release came with the slogan "take back the web". That motto had nothing to do with the excellence of IE.
You can argue that IE was that good due to Microsoft being able to leverage its monopoly to develop something for free they could never have afforded to develop otherwise, since that argument basically won in a court of law give or take some nuances. But, in the meantime, IE really was a better browser for a good long time.
> As for people, people didn't sue Microsoft, companies did.
People did sue Microsoft, via the US Federal Government (United States v. Microsoft Corp.) and the European Commission (Microsoft Corp v Commission), even if those cases may have been prompted by company complaints (that was the case for Microsoft Corp v Commission; I'm not sure about United States v. Microsoft Corp.).
> Also malware infested vs walled garden is a false dichotomy
How so? The vast majority of users can't tell if an application is malware. Better for them to put their trust in an authority who can tell (and who can punish authors of any malware that gets through).
Short term, sure, give in to the walled garden. Long term though? Learning what a computer is might be a good idea.
When I download stuff from the Debian repository, I trust the Debian team to do the security scanning for me. When I download The Witness, I trust Jonathan Blow, his team, and the distribution channel not to put malware in there. When I download a crack for some game I might or might not have bought, I trust the popularity of the corresponding torrent to correlate with a lack of malware.
And when a pop-up bugs me, I just close it.
I won't deny my computer is at risk. It is. Unlike most people however, I can use Windows for 2 years and have almost zero malware on it. Many people on the other hand let enough malware in to lag their computer to a crawl after a few months.
Education doesn't solve all problems, but it does solve many —maybe even most.
Personally, I feel that this is a disgraceful failing on the part of the global computer industry et al., but that is beside the main issue of, "What the heck are we going to do about this!?"
To sell stuff, the promise of usefulness is more important than genuine usefulness. We only care about the substance insofar as it reinforces the appeal (fortunately, the stuff has to be genuinely useful, or the scam would be exposed). The industry could explain how computers work, but that would be far less effective than selling a magic wand.
So we end up with computers that hide their internals, so the user doesn't have to deal with them. The language-based interface (the command line) is put aside in favour of the point & click "caveman interface" (Bret Victor showed us some amazing GUIs, but we have rarely seen anything like them since Sketchpad). And we pile up abstractions on top of abstractions without stopping for a minute to consider the sheer madness of the distributed cognitive load implied by a personal computing system that would take over 10,000 thick books to write down.
(P.S. I'm a fan of your Earley Parser Explained post. Kudos, great job!)
It's just as much one as the other; let's not forget Symbian.
> At the time MS was sued, IE was the best browser by pretty much every measure, so customers wanted it

People used it because it was preinstalled.
And yeah, Netscape Navigator/Communicator 4 was pretty crappy, as was the first Netscape 6/Mozilla stuff for a few years pre-Phoenix/Firebird/Firefox.
But that was all pre-"web 1.0". Once computers started becoming routinely networked, we ended up with the malware problem which has driven us here. The user is in no position to accurately assess the safety of software, so as you say it's not such a bad choice to pick a locked platform to avoid malware.
The situation is an uncomfortable duopoly between the semi-open Android and locked-down Apple ecosystems. How long will this remain stable? I don't know.
Hate to say it, but I think it is much worse: the "average user" is in no position to assess which software he is clicking on.
When a pop-up appears, many users don't know whether it comes from the browser, another application, the OS, or some malware, or whether it's a pop-up at all (it could be a clickbait GIF).
I don't think he really gets how CPUs work -- it's already that way, and has been for quite a while. The published instruction sets are the 'virtual' instructions, and translation of these instructions (be they ARM, x86, or PowerPC) is baked into each CPU's microcode. We actually have no visibility into the 'real' instructions that CPUs execute ('micro-ops'), because they're proprietary.
We need to go back and overhaul the CPU instruction set like Vulkan, DX12, and Mantle overhauled the GPU APIs. We need to reflect on what CPUs can do, what they can do fast, and what they can do with low power. Then we need an instruction set that would act as an API to these subsystems. Something orthogonal, that doesn't take too much energy to decode, and could be decoded in parallel if need be (for crazy desktop speed-ups).
While we're at it, it might be nice to have explicit support for things like pointer tags, to speed up dynamic stuff like garbage collection and runtime type information.
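For what it's worth, here is a minimal C sketch of how pointer tagging is typically done in software today, with no hardware help; the tag names and layout are made up for illustration, not any real runtime's scheme. Hardware support would essentially mean the masking below happens for free inside loads, stores, and compares.

    /* Software pointer tagging sketch: heap allocations are at least
     * 8-byte aligned on common platforms, so the low 3 bits of a pointer
     * are zero and can carry a small type tag instead. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_MASK ((uintptr_t)0x7)            /* low 3 bits hold the tag */

    enum tag { TAG_INT = 1, TAG_CONS = 2, TAG_STRING = 3 };

    /* Pack a tag into the low bits of an aligned pointer. */
    static void *tag_ptr(void *p, enum tag t) {
        return (void *)(((uintptr_t)p & ~TAG_MASK) | (uintptr_t)t);
    }

    /* Recover the tag, or the original pointer, by masking. */
    static enum tag get_tag(void *p)   { return (enum tag)((uintptr_t)p & TAG_MASK); }
    static void    *untag_ptr(void *p) { return (void *)((uintptr_t)p & ~TAG_MASK); }

    int main(void) {
        long *obj = malloc(sizeof *obj);         /* sufficiently aligned heap object */
        *obj = 42;

        void *tagged = tag_ptr(obj, TAG_INT);    /* every access now pays a mask */
        printf("tag = %d, value = %ld\n", get_tag(tagged), *(long *)untag_ptr(tagged));

        free(obj);
        return 0;
    }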
You're right, I don't really get how CPUs work. But I did pick up a few things that lead me to trust instruction set design is not over. We can do better.
1) The percentage of die used for decoding is actually already quite low. Per Anandtech 2014, it was at 10% for x86 and decreasing:
Since there is a floor to the number of transistors needed for decoding, there might not be a whole lot to gain there.
2) Progress has already been made towards updating instruction sets to reflect what CPUs do quickly, well etc. It started with MMX back in the stone age, and has progressed through a plethora of SIMD and media acceleration instructions.
3) Instruction sets are already not designed as if they were supposed to be run directly. Quite to the contrary, they are abstracted -- the instruction set is the API, and the micro-operations are the instructions. Designing them as if they were to run directly would mean exposing the micro-ops, which would require backwards-compatibility-breaking changes to the CPU each generation whenever the micro-ops changed.
4) The current system already pretty well levels energy consumption between competing ISAs. See: http://www.extremetech.com/extreme/188396-the-final-isa-show...
You're describing something that sounds like a bytecode VM, but in essence, that's what modern processors already are. Unfortunately, x86 (at least 32 bit) assembly is pretty unpleasant as an 'API', but ARM and PowerPC are both pretty good.
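To make the analogy concrete, here is a toy dispatch loop, a minimal sketch of a bytecode VM in C; the opcodes and encoding are invented for illustration. The visible opcode stream plays the role of the published ISA, and the interpreter body underneath plays the role of the micro-ops.

    /* Toy bytecode interpreter: a switch-based dispatch loop over a
     * fabricated instruction encoding. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code) {
        int64_t stack[64];
        int sp = 0;                               /* stack pointer */
        for (size_t pc = 0; ; ) {                 /* program counter */
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];                   break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];           break;
            case OP_PRINT: printf("%lld\n", (long long)stack[sp - 1]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);                             /* prints 5 */
        return 0;
    }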
As far as pointer tagging, that's something that's probably mostly limited by memory bandwidth (it's not a particularly compute heavy thing to do), so unless the HW support came in the form of a dedicated on-chip cache, it probably wouldn't get you very much ... and then, if you're going to the expense of adding a dedicated on-chip cache, it's probably going to be more effective as a general purpose cache -- if the code is accessing the pointer metadata often, it will be in cache, and therefore accelerated.
Not trying to rain on your parade, I like seeing creative ideas, and I have no idea why people are downvoting you. Have an upvote on me :).
EDIT: Also, ARM has NEON.
I think one reason behind the word "mobile" is that "ordinateur portable" (or "portable" for short) is already used to talk about laptops. Desktops are, in rare cases, called "ordinateur de bureau". Generally, we just say "PC" —unless it's from Apple.
I am not aware of what the Académie Française may or may not have decided.
What I love about the English language here, is that we have 3 words that neatly apply to the three form factors: desktop, laptop, and palmtop. It's a bit of a bummer we can't exploit such regularity in French. If I had to settle on a term, I'd probably try "ordinateur mobile", or "mobile" for short. Unlike "palmtop", it wouldn't scream "computer", so the best I can hope for is that we just stop using "téléphone" to describe those things.
I find it perfectly vague yet easily understandable.
Alas, we often just say "tablet", so it's still possible to make an artificial distinction between them and "real" computers such as laptops and desktops.
The idea behind "palmtop" is to appeal to the intuitions behind laptops and desktops, and suggest it should be subjected to the same rules (when possible).
I want to hang on to the sense of awe and amazement at the reality of having a powerful computer and all of the internet at our disposal at all times.
But there's absolutely nothing wrong with "Smartphone". There are two reasons:
1 - There's history there. Most words aren't precisely crafted by linguists when society needs them; we simply build on what already exists. The so-called smartphone came from the cellphone, which came from the telephone; the name itself hints at its origins. It's fascinating what you can learn about a word when you study its etymology.
2 - Meaning-wise, I wouldn't worry so much about the connotations the author mentions (I wouldn't worry at all). I'm with Wittgenstein on this one: the word means whatever we decide it means.
The first thing I did when opening this thread was Ctrl+F to see if anyone was suggesting it should be called a "personal computer", because that's what I think best describes what the smartphone has really become.
Now, to suggest even trying to use that to refer to smartphones these days would be adding a much heavier dose of confusion. But the insight seems fundamentally right to me.
Interesting article on the subject that brings up this point: http://ben-evans.com/benedictevans/2015/11/7/mobile-ecosyste...
For the average consumer this semantic discussion won't change their perception of those devices though. The masses for now have settled on the three categories of smartphone, tablet, and laptop/desktop computer.
(Even more frustrating is that every piece of software, service, and website you use on those devices, and in some countries even a text message, is simply called an "app".)
Maybe when we start to see more android devices that are in a laptop configuration we'll start to struggle more with that issue.
The Newton was not a revolutionary music player, and no Apple music player before the iPhone was a "mobile computer" in a sense that would make this portrayal meaningful and accurate.
> It evolved into a sophisticated media computer with color video, wireless and computer utilities in the last iTouch before the iPhone.
iPod Touch (sometimes nicknamed "iTouch") was introduced after the iPhone, running a later version of iPhone OS (the OS which later became iOS) than the first iPhone. There was no "iTouch before the iPhone."
There was the old-style wheel-controlled iPod before the iPhone, and that was a revolutionary (at least in terms of commercial impact) music player, but it wasn't a "mobile computer" in the same sense as the iPod Touch, modern smartphones, or even earlier PDAs or the Apple Newton.
If so, where are you? I'm curious because all I hear in the UK is "phone" or "mobile" - both of which we always used. But I'm acutely aware that I hear or read "cell" in American sources significantly less often than, I think, I used to.
True. My beef isn't about "smart" however, it's about "phone". I believe my point stands even more acutely in this light.
Although actually a contraction of 'mobile phone', it doesn't have the same 'problem' going forward, since we can just decide or assume that it refers to a mobile computing device.
Possibly though things will get even more blurred than that - everything a computing device and almost all of them mobile... Time will tell!
Edit: Actually, I meant that the term "Feature phone" is really an even worse name for feature phones than "Smartphone" is for smartphones. I didn't suggest that it was a better name, but the downvote and comments suggest that I didn't make that clear.
You couldn't do much beyond the basic functionality. And there was a very limited number of apps available, if any at all. They were as cheap as today's entry-level Android smartphones.
Point is, I think it's arbitrary that you can't use an iPhone to write apps for the iPhone.
Are you kidding me? All of those functions are 10x easier to do on the "unfit touchscreen" than on what we had before (tiny non-touch screens, LCDs, 80's-style arrays of buttons, etc).
And when it comes to actually taking the picture, changing track or volume etc, smartphones even offer physical buttons on the sides.
Remember trying to get to the 10th track of the 4th folder in your "physical" CD mp3 player or minidisk? Setting anything more advanced than zoom level and picture mode on a typical 2002-2005 compact camera?
I remember putting my music on MP3 players for the last 12 years, having non-MP3 CDs before that, and being able to perform basic operations on them blind and one-handed. I remember using my non-touch camera being much quicker to adjust than the app in my phone, especially when I'm adjusting the more advanced settings.
Touchscreens are optimized for flexibility and portability, and they're great at those. They're terrible at being fit for specific purposes.
You remember through rose-colored glasses, then.
Try using that non-touch early 00's compact camera again (and no cheating with DSLRs with dedicated aperture and speed dials). As for the "blind and one-handed" operation with the non-mp3 CDs, did that include changing through 200+ albums and 2000+ tracks?
I don't have a touch camera at all. The one that I do have isn't a DSLR, but includes various dials. That's my point: dedicated controls will always be better. A camera that primarily relies on a direction pad and menus is only slightly better than using a touchscreen (but it is better).
> As for the "blind and one-handed" operation with the non-mp3 CDs, did that include changing through 200+ albums and 2000+ tracks?
For the CDs? Of course not. I'm not going to claim that a single CD could hold that many, whether or not they were in MP3 format. As for the MP3 player, I tend to set playback to a particular artist or set the player to randomly play songs from the whole library (by sight, of course), then throw it in my pocket. From that point, basic playback control doesn't require me to take it out of my pocket, while I'd have to at least take out my phone and wake up the screen to start doing the same thing.
I've got to assume that the way we tend to use our devices differs enough that something that feels like a huge improvement to me is a piddling detail in your use-case. On the other hand, some of the ways that I liked using my devices have disappeared, or at least been de-emphasized in more recent models, and that's what I'm frustrated with.
Side picture buttons are increasingly rare if not defunct, volume rockers are still here though.
A lot of things that could be done quickly are now fragile and subtle. It's a side effect of translating desktop UX to IRL handheld gadgets, driven by the promise that software will be smart enough to make everything one button away.
PS/edit: as noted below, I indeed never realized the volume rocker was bound to taking a snapshot. My rant is half void now u_u;
Thank you for the double-tap tip, though. It works in Marshmallow too, apparently.
The old Blackberry devices were UI marvels. The physical scroll wheel was great.
That said, Nokia did a few physical-keyboard phones not long ago. I bought a C5 for my mom as she just wanted a simple thing. Unfortunately the software layer was one of the worst I have had to experience in my whole life: four random and generic submenus on average to get to any function, menus full of ellipses... gosh.
Anyway, shoutout to Blackberry.
There's also the old (~1990s) joke that it's actually Bavarian and short for "Hän die ken Schnür?" ("Haben die kein Kabel?" / "Do these not have a cord?").
i.e. a means to communicate.
You do not call an automobile a "wheeled combustion chamber".