“Smartphone” is the wrong name (loup-vaillant.fr)
75 points by loup-vaillant on Feb 22, 2016 | 112 comments



I have for some time had a crackpot theory that language will actually evolve the other way.

We have these things we carry around with us, called "phones"; they are actually general-purpose computing devices and making (what we still call) phone calls is a small part of their purpose. And to an increasing extent these can fulfill almost all our computing needs.

So, fast-forward a couple of decades, assuming no huge technological shifts[1]. Everyone is used to having one of these things that they carry around in their pocket, and scarcely anyone really needs any other general-purpose computing device. We still call them "phones" because we always did -- so now this is the usual term for a general-purpose computing device. And, yeah, there are some people who, because of their unusual needs, have a bigger more powerful one. Well, what are you going to call something like a phone but sized to sit on your desk while you work? Obviously it's a desktop phone, right? And somewhere out there In The Cloud there are whole farms of server phones :-).

And there will be conversations like the ones we[2] now have about how "computer" used to mean a human being who did calculations. "Hey, did you ever look up where the word 'phone' actually comes from? It turns out that back in the 20th century they mostly used them for talking to one another, and it comes from some Greek thing meaning 'sound at a distance'. Weird, huh?"

[1] That's a pretty big assumption, of course.

[2] For sufficiently small values of "we".


> Everyone is used to having one of these things that they carry around in their pocket, and scarcely anyone really needs any other general-purpose computing device.

Unless they want to read something, type more than a paragraph, or do anything collaborative. Because it still fills a niche role rather than a general computing one, the ability to connect to service providers remains the dominant characteristic.

If anything, I see "phone" moving to be a description of service, and devices are just named device-specific things. People have already started dropping the "smart" part and just assume that a "phone" at LEAST can call. I now distinguish between "phones", "flip phones", and "land lines". If I don't need to talk, why not just get an ipod and a laptop? Each device is much better when giving up the domain of the other.


> there are some people who, because of their unusual needs, have a bigger more powerful one.

Unusual needs like being able to see a lot of text at once, type things longer than a paragraph, multitask, use more than one app in a work flow, and run real software?

I doubt it... unless phones develop dockable desktop capability in which case they are now dual-purpose converged devices. Right now to believe that mobile is "the future" of everything requires one to believe that most people have nothing to say beyond one-liners and selfies and that the only purpose of computing is to interact with canned services. That might be true for a subset of the market but I think it's a smaller subset than some do.


> unless phones develop dockable desktop capability in which case they are now dual-purpose converged devices.

Phones already have dockable desktop capability (that is, support for desktop-style output via appropriate cables that support HDMI, and support for desktop-style input via Bluetooth keyboard/mouse.)


> I doubt it... unless phones develop dockable desktop capability in which case they are now dual-purpose converged devices.

Already here, it's only going to get better.

http://bgr.com/2016/02/21/hp-elite-x3-review-pt-1-preview/



Someone has already predicted that it will fail ...

http://techcrunch.com/2016/02/21/hp-announces-yet-another-mo...


The average adult is required to be available by phone. Not by snapchat, not by skype, not even by text message. By. Phone.

In tech-land and teen-land you can specify platforms to each of your relationships. But in the average American's day, doctors, daycare providers, clients, etc are all going to be contacting you by phone.

Without being required to carry around the phone part, a pocket-sized tablet would have 10x less adoption than it does today. "smart phone" it still is.


Not to mention that you can have multiple accounts on any social network, and switch between them at any time, but you can have only one phone number on that mobile device, whatever you want to call it.

So if that device is uniquely identifiable by a phone number, and you can be the only user of that number at any time (again unlike social network accounts), we might as well call it a phone.


> but you can have only one phone number on that mobile device

Not necessarily. There are phones which can hold two SIM cards.


It doesn't make the point invalid. There are laptops that connect to cell networks. I can make calls to landlines from my desktop. That doesn't change the name, because it does a lot more.

I can also use these devices without a SIM card and never make a phone call. The evolution of the device definitely led to the name, but it's clear that that's just another peripheral use, like taking photos, listening to music, or getting turn-by-turn driving directions. If any device had all those capabilities, we'd call it a computer of some kind.

You're not going to get the media to use the name "palmtop," but if the tech community adopted it I think we'd do and see things a little differently.


> But in the average American's day, doctors, daycare providers, clients, etc are all going to be contacting you by phone.

Heh. My eye doctor just confirmed my upcoming appointment via text message. I'm sure they would be calling if anything outside of the most common case needed to be resolved, however.


Japanese cellphones had a lot of the same use-cases that modern smartphones have. You could buy stuff online, use GPS, surf the mobile web, send pictures via email, and purchase movies (Disney even produced direct-to-featurephone cartoons in Japan!).

This is usually used as an argument for why the Japanese decided to colloquially shorten 携帯電話 ("keitai denwa", hand-held telephone) into just 携帯(hand-held, or handy) instead of just calling it 電話(phone) like so many western countries do or 携電("kei-den"), which follows a more typical pattern for abbreviations in Japanese.

However, with the success of the iPhone, which was incompatible with all the featurephone services, a new segment arose consisting of iPhones and non-backward-compatible Android handsets, and these are denoted as "smartphones", shortened to スマホ (sma-pho).

I think it is interesting that they chose to move closer towards the "phone" moniker, rather than staying with the focus on hand-held.


Germans also call their smartphones "Handy", even though the name comes from the age of bricks/clamshells, when having a colour screen with 256 colours was a selling point.

Same idea; recently the name "iPhone" has become quite popular, and people use it to distinguish between phones made by Apple and the rest.


In that respect, I miss the word 携帯. スマホ (sumaho, ugh!) feels so uncultivated in such a pleasant language.

It was a predictable outcome though. Japanese has a tendency to abbreviate common words, and the term smartphone (the unwieldy スマートフォン/sumātofon) was imported with the first generations of smartphones.

The German Handy on the other hand (as ccozan mentions)… What a lovely word.


I only hear people using スマホ to differentiate them from feature phones ガラ携 (Galapagos phone, also a homonym for "junk phone"). 携帯 is still the general term for "cellphone".


You see cellphone stores advertise feature phones as Galapagos phones, and I believe it was Panasonic or Sharp that even had a line of phones called Galapagos.

I'm not sure I get what word "Gara" is a homonym with, to me it seems like a fairly normal word.


Yes, the Galapagos phone is where ガラ comes from.

I've heard people calling them ガラクタ (crap) phones before. It's obviously just slang...


You don't need to miss it, as 携帯 is still an acceptable term to describe a smartphone, and frankly I'm not sure スマホ is on track to being a fixture of the Japanese language. Adoption of the term seems a bit spotty. Which I think is nice, because I agree with you that スマホ is a distinctly stupid word.


In Hong Kong, cell phones were always referred to as "手機", which roughly translates to "hand machine". Yes, even the 80s and 90s Nokia dumbphones.


> "Smartphone" has the wrong connotation. It suggests your expensive brick is a phone first, and can do smart things second.

If you look at where the battery goes, that's usually an accurate distinction. If your screen-on time is less than 4 hours per day, you may spend 75% of battery on the cellular antenna. Everything else takes a lower priority.

Calling it a phone also efficiently distinguishes between smartphones and tablets or devices like the iPod Touch which are identical devices except for the lack of cellular antenna.


There are plenty of tablets with cellular antennas though (not all of them have voice call functionality, but that's mostly just a software limitation). And with VoIP and instant messaging apps, even WiFi-only devices can fulfill "phone" roles.


> For instance, picture Microsoft in 2001, after it got sued for bundling Internet Explorer with Windows. Imagine what would have happened if they sold Windows XP with an exclusive application store. Imagine that every program must be approved by Microsoft to run on XP, and they take a 30% cut. Oh, and no interpreter allowed.

> They would have been sued into oblivion, lost half their customers, suffered one hell of a bad press. They could have sunk. Yet when Apple did exactly that to their new computer, the iPhone, few objected and customers flocked.

This just tells me that execution matters more than the initial idea. There's plenty of competition in the mobile space. If users didn't want Apple's locked-down ecosystem, they wouldn't buy iPhones. The vast majority of users don't care about side-loading apps or installing another operating system. They are (quite rationally) willing to sacrifice customization and app choices to avoid malware.


Microsoft was considered to have a monopoly (whether or not you or I agree). As such they were subject to different rules. Apple, especially when the iPhone launched, did not have any kind of monopoly on cellphones, nor did they have one on PDAs (which is really where smartphones come from, not cellphones).

As for people, people didn't sue Microsoft, companies did. People mostly didn't care. Companies like Sun and Netscape cared. They're the ones that lobbied to have the government declare MS a monopoly. In other words, it had nothing to do with execution and perceived awesomeness by customers. At the time MS was sued, IE was the best browser by pretty much every measure, so customers wanted it. It was companies that were upset.

Also, malware-infested vs. walled garden is a false dichotomy.


> At the time MS was sued IE was the best browser by pretty much every measure

Meh. IE was the browser with the best website compatibility, because Microsoft went out of its way to reimplement half of the HTML standard in its own way, and could use the prevalence of IE to make web developers follow "their" flavour of HTML.

Mozilla's first Firefox release came with the slogan "take back the web". That motto had nothing to do with the excellence of IE.


IE4 was head and shoulders above Netscape 4. IE4 is recognizably a modern browser (by introducing .innerHTML), if a very poor one by modern standards; Netscape 4 was hackery on an engine taken way beyond what it could sustain. Netscape required a long time to catch up, plus the helping hand Microsoft gave by slowing down dev on IE round about IE6. There was a long window where IE was the best browser by pretty much every measure, and I'm pretty sure it overlapped this lawsuit, though my brain and the calendar don't always get along perfectly.

You can argue that IE was that good due to Microsoft being able to leverage its monopoly to develop something for free they could never have afforded to develop otherwise, since that argument basically won in a court of law give or take some nuances. But, in the meantime, IE really was a better browser for a good long time.


Yep. The issue of MSIE wasn't so much bundling a browser, as leveraging an existing monopoly into a second one and unfairly driving out competition.

> As for people, people didn't sue Microsoft, companies did.

People did sue Microsoft as the US Federal Government (United States v. Microsoft Corp.) and as the European Commission (Microsoft Corp v Commission), even if those cases may have been prompted by company complaints (that was the case for Microsoft Corp v Commission, not sure for United States v. Microsoft Corp.)


In the paragraphs I quoted, the point was basically: If Microsoft had tried what Apple actually did, it would have been a disaster for Microsoft. I think that's accurate regardless of whether they were a monopoly or not.

> Also malware infested vs walled garden is a false dichotomy

How so? The vast majority of users can't tell if an application is malware. Better for them to put their trust in an authority who can tell (and who can punish authors of any malware that gets through).


No uneducated user is safe. And I believe malware on smartphones is not unheard of.

Short term, sure, give in to the walled garden. Long term though? Learning what a computer is might be a good idea.


Do you decompile every Android app you install? Are you doing anti-malware analysis of their code or binaries? If not then being "educated user" doesn't really help you much. And in iPhone's case Apple is doing that security threat scan for you.


I don't, and I don't. On the other hand, I do know what I am clicking on, and I don't execute a program I have not explicitly trusted. (Mostly. The OS itself trusts many programs, which trust programs…)

When I download stuff from the Debian repository, I trust the Debian team to do the security scanning for me. When I download The Witness, I trust Jonathan Blow, his team, and the distribution channel not to put malware in there. When I download a crack for some game I might or might not have bought, I trust the popularity of the corresponding torrent to correlate with a lack of malware.

And when a pop-up bugs me, I just close it.

I won't deny my computer is at risk. It is. Unlike most people however, I can use Windows for 2 years and have almost zero malware on it. Many people on the other hand let enough malware in to lag their computer to a crawl after a few months.

Education doesn't solve all problems, but it does solve many —maybe even most.


Sad that this is getting downvotes, it's one of the most important points about this vast machine we are building around ourselves: You cannot secure [people who suffer from] ignorance. "No uneducated user is safe." Very much so.

Personally, I feel that this is a disgraceful failing on the part of the global computer industry et. al., but that is beside the main issue of, "What the heck are we going to do about this!?"


My take on it is, the computer industry, because they're capitalists like any industry, care about money. The engineers working there often care about the users of course, but the shareholders don't give a damn. They just want money. Ultimately, that greed determines the output of the engineers far more effectively than the ideals of the engineers themselves. (This could be generalised to basically any public company, and any of their employees. Employment is an amazing crowd control tool.)

To sell stuff, the promise of usefulness is more important than genuine usefulness. We only care about the substance insofar as it reinforces the appeal (fortunately, the stuff has to be genuinely useful, or the scam would be exposed). The industry could explain how computers work, but that would be far less effective than selling a magic wand.

So we end up with computers that hide their internals, so the user doesn't have to deal with them. Language-based interfaces (the command line) are put aside for the point & click "caveman interface" (Bret Victor showed us some amazing GUIs, but we have rarely seen anything like them since Sketchpad). And we pile up abstractions on top of abstractions without stopping for a minute to consider the sheer madness of the distributed cognitive load implied by a personal computing system that requires over 10,000 thick books to write.


Yes, all of the above. This wasn't so bad when our reach was less, but today with Siri-like things being built into Barbie dolls it seems we should be spending more time thinking deeply before we chase that dollar, eh?

(P.S. I'm a fan of your Earley Parser Explained post. Kudos, great job!)


> PDAs (which is really where smartphones come from, not cellphones).

It's just as much one as the other; let's not forget Symbian?

> At the time MS was sued IE was the best browser by pretty much every measure so customers wanted it

[citation needed]! People used it because it was preinstalled.


Symbian also came from a PDA OS.

And yeah, Netscape Navigator/Communicator 4 was pretty crappy, as was the first Netscape 6/Mozilla stuff for a few years pre-Phoenix/Firebird/Firefox.


The openness of the PC platform was something of an accident. The home microcomputers and the early Macs had their OS in ROM. There were competing PC operating systems (DR-DOS, DESQview, OS/2, etc) but Microsoft managed to kill off all the commercial competition.

But that was all pre-"web 1.0". Once computers started becoming routinely networked, we ended up with the malware problem which has driven us here. The user is in no position to accurately assess the safety of software, so as you say it's not such a bad choice to pick a locked platform to avoid malware.

The situation is an uncomfortable duopoly between the semi-open Android and locked-down Apple ecosystems. How long will this remain stable? I don't know.


> The user is in no position to accurately assess the safety of software

Hate to say it, but I think it is much worse: the "average user" is in no position to assess which software he is clicking on.

When a pop-up appears, many users don't know if it comes from the browser, another application, the OS, some malware, or if it's a pop up at all (could be a Gif clickbait).


> Personally, I have more faith in a third alternative: a virtual instruction set. Like bytecode, though not managed like Java, and not meant to be interpreted or JIT compiled either. It could run on the bare metal, or be translated into something that is —like the Mill CPU. That way you can keep the illusion of having a single instruction set, while sacrificing virtually nothing. Moreover, future CPUs don't even need backward compatibility, as long as you can translate (and optimise!) the virtual assembly for them.

I don't think he really gets how CPUs work -- it's already that way, and has been for quite a while. The published instruction sets are the 'virtual' instructions, and translation of these instructions (be they ARM, x86, or PowerPC) is baked into each CPU's microcode. We actually have no visibility into the 'real' instructions that CPUs execute ('micro-ops'), because they're proprietary.


The idea is to own up to this reality, and stop designing instruction sets as if they were meant to be run directly. x86 in particular has a legacy of simple CPUs that didn't have much of a decoding unit. But now it has gone too far in the other direction: it takes a significant amount of chip surface and energy to decode, making it unsuitable for low-power situations. ARM on the other hand is probably lacking in the SIMD department (I haven't checked).

We need to go back and overhaul the CPU instruction set like Vulkan, Dx12 and Mantle overhauled the GPU APIs. We need to reflect on what CPUs can do, what they can do fast, and what they can do with low power. Then we need an instruction set that would act as an API to these subsystems. Something orthogonal, that doesn't take too much energy to decode, and could be decoded in parallel if need be (for crazy desktop speed ups).

While we're at it, it might be nice to have explicit support for things like pointer tags, to speed up dynamic stuff like garbage collection and runtime type information.
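
To make the pointer-tag idea concrete, here is a rough sketch in C of the kind of tagging that dynamic-language runtimes already do in software (the 3-bit tag layout and the names are purely illustrative). Today every access pays an extra mask instruction; a tag-aware instruction set could fold that masking into the load itself.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical 3-bit type tag stored in the low bits of a pointer.
       This works because malloc'd pointers are at least 8-byte aligned,
       so the low 3 bits are always zero and free to reuse. */
    enum tag { TAG_INT = 1, TAG_CONS = 2, TAG_BOXED_DOUBLE = 3 };

    #define TAG_MASK ((uintptr_t)0x7)

    static uintptr_t tag_ptr(void *p, enum tag t) {
        assert(((uintptr_t)p & TAG_MASK) == 0);   /* alignment gives us the spare bits */
        return (uintptr_t)p | (uintptr_t)t;
    }

    static enum tag get_tag(uintptr_t v)   { return (enum tag)(v & TAG_MASK); }
    static void    *untag_ptr(uintptr_t v) { return (void *)(v & ~TAG_MASK); }

    int main(void) {
        double *boxed = malloc(sizeof *boxed);
        *boxed = 3.14;

        uintptr_t tagged = tag_ptr(boxed, TAG_BOXED_DOUBLE);

        /* A GC or dynamic dispatcher checks the tag, then masks it off
           before dereferencing; on current ISAs that is an extra AND. */
        if (get_tag(tagged) == TAG_BOXED_DOUBLE)
            printf("%f\n", *(double *)untag_ptr(tagged));

        free(untag_ptr(tagged));
        return 0;
    }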

You're right, I don't really get how CPUs work. But I did pick up a few things that lead me to trust instruction set design is not over. We can do better.


When I make a long comment, I like to see explanations about what warranted a downvote. So I can learn…


I didn't personally downvote you, but I'll give you my best guesses:

1) The percentage of die used for decoding is actually already quite low. Per Anandtech 2014, it was at 10% for x86 and decreasing:

http://www.anandtech.com/show/8776/arm-challinging-intel-in-...

Since there is a floor to the number of transistors needed for decoding, there might not be a whole lot to gain there.

2) Progress has already been made towards updating instruction sets to reflect what CPUs do quickly, well etc. It started with MMX back in the stone age, and has progressed through a plethora of SIMD and media acceleration instructions.

3) Instruction sets are already not designed as if they were supposed to be run directly. Quite to the contrary, they are abstracted -- the instruction set is the API, and the microoperations are the instructions. Designing them as if they were to run directly would mean exposing the micro-ops, which would require backwards compatibility breaking changes to the CPU each generation when there were changes to the micro ops.

4) The current system already pretty well levels energy consumption between competing ISAs. See: http://www.extremetech.com/extreme/188396-the-final-isa-show...

You're describing something that sounds like a bytecode VM, but in essence, that's what modern processors already are. Unfortunately, x86 (at least 32 bit) assembly is pretty unpleasant as an 'API', but ARM and PowerPC are both pretty good.

As far as pointer tagging, that's something that's probably mostly limited by memory bandwidth (it's not a particularly compute heavy thing to do), so unless the HW support came in the form of a dedicated on-chip cache, it probably wouldn't get you very much ... and then, if you're going to the expense of adding a dedicated on-chip cache, it's probably going to be more effective as a general purpose cache -- if the code is accessing the pointer metadata often, it will be in cache, and therefore accelerated.

Not trying to rain on your parade, I like seeing creative ideas, and I have no idea why people are downvoting you. Have an upvote on me :).

EDIT: Also, ARM has NEON.


Whoa, I was hopelessly out of date. Thanks.


This article is from France. What's the French term? Has the Académie Française decided on one yet? (Unlike English, French has an official standards body.)


The most common term here is "portable", or "téléphone portable", or just "téléphone". "Portable" means what you think it means: you can carry it. Sometimes, (especially in the commercials), we see "mobile" instead of "portable". It also means what you think it means: you can move it.

I think one reason behind the word "mobile" is because "ordinateur portable" (or "portable" for short) is already used to talk about laptops. Desktops are in rare cases called "ordinateur de bureau". Generally, we just say "PC" —unless it's from Apple.

I am not aware of what the Académie Française may or may not have decided.

What I love about the English language here, is that we have 3 words that neatly apply to the three form factors: desktop, laptop, and palmtop. It's a bit of a bummer we can't exploit such regularity in French. If I had to settle on a term, I'd probably try "ordinateur mobile", or "mobile" for short. Unlike "palmtop", it wouldn't scream "computer", so the best I can hope for is that we just stop using "téléphone" to describe those things.


In many countries they're mostly referred to as "mobiles" (as an abbreviation of "mobile phone"), especially now that nobody has feature phones anymore and there's no reason to make the distinction.

I find it perfectly vague yet easily understandable.


In Italy many people refer to mobiles as telefonino (lil' phone). I guess that doesn't really apply to phablets though :)


I agree that naming is important, yet I don't think calling them "palmtops" would have changed anything. Tablets were and often are still called "tablet computers", yet they usually have the same limitations.


True. On the other hand, the name makes it a bit easier to protest: they can't say those things are not computers, since the name leaves no room for anything else.

Alas, we often just say "tablet", so it's still possible to make an artificial distinction between them and "real" computers such as laptops and desktops.

The idea behind "palmtop" is to appeal to the intuitions behind laptops and desktops, and suggest it should be subjected to the same rules (when possible).


I can't be the only one who has made a habit of casually referring to them as "space computers." Whenever my wife asks some easily google-able question I reply "well, just ask your pocket space computer."

I want to hang on to the sense of awe and amazement at the reality of having a powerful computer and all of the internet at our disposal at all times.


If I had to rename "Smartphone" I would call it "Personal Computer". My phone is way more "personal" than my desktop and laptop.

But there's absolutely nothing wrong with "Smartphone". There are two reasons:

1 - There's history there. Most of the words aren't precisely crafted by linguists when society needs them, we simply build on what already exists. The so called smartphone came from the cellphone that came from the telephone, the name itself hints you about its origins. It's fascinating what you can learn about a word when you study its etymology.

http://etymonline.com

2 - Meaning wise I wouldn't worry so much about the connotations that the author mentions (I wouldn't worry at all). I'm with Wittgenstein in this one: the word means whatever we decide it means.

https://en.wikipedia.org/wiki/Philosophical_Investigations#M...


I'm pretty sure that Wittgenstein, while agreeing with your point about meaning being defined by use, would still argue that there's a lot of unspoken confusion that comes from calling the smartphone a smartphone.

The first thing I did when opening this thread was Ctrl+F to see if anyone was suggesting it should be called a "personal computer", because that's what I've thought best described what the smartphone really has become.

Now, to suggest even trying to use that to refer to smartphones these days would be adding a much heavier dose of confusion. But the insight seems fundamentally right to me.

Interesting article on the subject that brings up this point: http://ben-evans.com/benedictevans/2015/11/7/mobile-ecosyste...


We call them "buttons" on interface windows because they look and act like mechanical buttons in the real world. But mechanical buttons have that name only because they look like shirt buttons.


Bill Gates called it the "Wallet PC", back in 1994. Later Pen-based computing. Later "Pocket PC".


Bill's description was pretty spot on too. I read The Road Ahead again recently and he was right about a lot of things. Though even with all that prescience, Microsoft still missed the boat on a number of things they saw coming a mile away.


Pocket PC still sounds nice, but Apple's marketing efforts caused the meaning of the word to become unclear. Pocket computer is perhaps the most neutral term available.

For the average consumer this semantic discussion won't change their perception of those devices though. The masses for now have settled on the three categories of smartphone, tablet, and laptop/desktop computer.

(Even more frustrating is that every piece of software, service, and website you use on those devices, and in some countries even a text message, is simply called an "app".)


By 1994, "PC" had already shifted from being platform neutral to referring to the IBM-compatibles with Microsoft OSs. So a name with "PC" in it wasn't likely to stick as the platform-neutral label for the mobile segment.


I doubt anyone is going to change what they call them (other than dropping the "smart" now that such features are becoming the default), but the point about not being able to install apps except through an app store with a monopoly is very true. Of course it tends to be true with tablets as well.

Maybe when we start to see more android devices that are in a laptop configuration we'll start to struggle more with that issue.


Why? Are you under the impression that you can't install Android apps except through an app store?


Articles like this gloss over how small tweaks to a product put it in a different category, with large commercial consequences. In the case of smartphones, having a mobile network radio is a very large difference in capabilities, price, operating cost and channel. Smartphones are the largest business on the planet and growing, and tablets have stumbled for not having met their potential in displacing enough PCs in office productivity use cases. Smaller distinctions are important within the smartphone and tablet markets, like "phablets" and various size and price categories for tablets, and for not-quite-tablets like "convertibles." Microsoft tried and failed in multiple product generations to turn PCs into tablets, while in Windows CE and NETCF they had the basic formula for a modern smartphone but they treated NETCF like a red headed stepchild. Small differences, big results.


Well, to be honest the best name would be just "PC". Think about it: PC stands for personal computer, and a smartphone is in fact a small computer that is as personal as it gets.


Note Apple came to this market from a different direction than most other cellphone makers. Their first mobile computer was a revolutionary music player (innovative UI and music store). It evolved into a sophisticated media computer with color video, wireless and computer utilities in the last iTouch before the iPhone. (Apple still ships iTouches, which are the iPhone device without the cellular phone in it, or a micro iPad.) So they pretty much had a full-fledged mobile computer before grafting phone technology into it.


> Their first mobile computer was a revolutionary music player

The Newton was not a revolutionary music player, and no Apple music player before the iPhone was a "mobile computer" in a sense that would make this portrayal meaningful and accurate.

> It evolved into a sophisticated media computer with color video, wireless and computer utilities in the last iTouch before the iPhone.

iPod Touch (sometimes nicknamed "iTouch") was introduced after the iPhone, running a later version of iPhone OS (the OS which later became iOS) than the first iPhone. There was no "iTouch before the iPhone."

There was the old-style wheel-controlled iPod before the iPhone, and that was a revolutionary (at least in terms of commercial impact) music player, but it wasn't a "mobile computer" in the same sense as the iPod Touch, modern smartphones, or even earlier PDAs or the Apple Newton.


Does anyone actually say "smartphone" except (vanishingly rarely) when necessary to describe the device in contradistinction to "dumbphones"? Is there a difference whether spoken or written?

If so, where are you? I'm curious because all I hear in the UK is "phone" or "mobile" - both of which we always used. But I'm acutely aware that I hear or read "cell" in American sources significantly less often than, I think, I used to.


> Does anyone actually say "smartphone" except (vanishingly rarely) when necessary to describe the device in contradistinction to "dumbphones"?

True. My beef isn't about "smart" however, it's about "phone". I believe my point stands even more acutely in this light.


Sure, it was something of a side-point, but the more relevant point (which I may have forgotten to make..) was that in the UK 'mobile' is still common.

Although actually a contraction of 'mobile phone', it doesn't have the same 'problem' going forward, since we can just decide or assume that it refers to a mobile computing device.

Possibly though things will get even more blurred than that - everything a computing device and almost all of them mobile... Time will tell!


Yeah, "cell" usage is going down and "phone" is going up. Personally, I like to be needlessly formal and call mine a "telephone", and some of my friends are catching that.


They won't stop carrying the "phone" connotation until the service providers stop pitching themselves as primarily telephony companies.


I usually refer to those as "tracking devices"


I was thinking more "self-funding prole monitoring device program".


Now that's a political statement!


"Smartphone" is the wrong name? What about "Feature phone"?

Edit: Actually, I meant that the term "Feature phone" is really even a worse name for feature phones, than "Smartphone" is for smartphones. I didn't suggest that it was a better name, but the downvote and comments suggest that I didn't make that clear.


'feature phone' is already an industry term for (mainly) bottom-tier Android phones that lack the hardware to be considered among 'smart phones' but have the basic abilities to access the internet, and play basic media files. Basically the phones that come with the pay-as-you-go wal-mart phone plans.

https://en.wikipedia.org/wiki/Feature_phone


Nokia's Symbian-based phones, various China-based look-alike phones and WinPhone 7 were called "feature phones".

You couldn't do much beyond the basic functionality. And there was a very limited amount of apps available, if any at all. They were as cheap as today's entry level Android smartphones.


In my opinion, it'd only be a truly smart phone if it shipped with a compiler onboard so you could build apps for it ..


DroidDevelop and AIDE are available on Android devices. As far as I know, no portable devices ship with a compiler installed, but since it's easily available in the app store, I'm not convinced that it matters.


OpenPandora, a portable device (which I consider to be an utterly arbitrary class of computing), ships with usable compilers..

Point is, I think it's arbitrary that you can't use an iPhone to write apps for the iPhone.


Right, OpenPandora, its predecessors, and its successors. I considered buying one for a time, but never quite convinced myself, so I've never gotten to play around with one.


There's also QPython and, while I haven't tried them, there appear to be various IDEs/compilers/interpreters for JavaScript, BASIC, C++, Go, etc.


Microsoft has been exploring the smartphone / laptop gap with Continuum - http://windows.microsoft.com/en-us/windows-10/getstarted-con...


I've long thought that "hand computer" was a good term for it, but that has rather a lot to do with nostalgia:

https://app.box.com/s/i6tw2gc8avr9r1hevn0trf4w41nt9r1b


They're not even phones; they're not handheld devices anymore. The lack of a physical interface is very sad. Taking pictures, listening to music, even calling someone, all require high-attention interaction over an unfit touchscreen. It's a small slate computer.


>Taking pictures, listening to music, even calling someone, all require high attention interaction over an unfit touchscreen.

Are you kidding me? All of those functions are 10x easier to do on the "unfit touchscreen" than on what we had before that (tiny non-touch screens, LCDs, 80's style arrays of buttons, etc).

And when it comes to actually taking the picture, changing track or volume etc, smartphones even offer physical buttons on the sides.

Remember trying to get to the 10th track of the 4th folder in your "physical" CD mp3 player or minidisk? Setting anything more advanced than zoom level and picture mode on a typical 2002-2005 compact camera?


CDs usually only held about 10-15 tracks, so it was hardly a problem getting lost in folders. Obscure functions on cameras are more likely to be obscure because of menu layouts rather than a touchscreen (it's still difficult to find some obscure settings buried in menus whether you have a touchscreen or buttons).


In the latter years, portable CD players often supported CD-Rs with MP3 tracks, so you could have 100+ tracks on a single disc. Have fun pressing "next" until you find the track you need! :)


> Remember trying to get to the 10th track of the 4th folder in your "physical" CD mp3 player or minidisk? Setting anything more advanced than zoom level and picture mode on a typical 2002-2005 compact camera?

I remember putting my music on MP3 players for the last 12 years, having non-MP3 CDs before that, and being able to perform basic operations on them blind and one-handed. I remember using my non-touch camera being much quicker to adjust than the app in my phone, especially when I'm adjusting the more advanced settings.

Touchscreens are optimized for flexibility and portability, and they're great at those. They're terrible at being fit for specific purposes.


> I remember putting my music on MP3 players for the last 12 years, having non-MP3 CDs before that, and being able to perform basic operations on them blind and one-handed. I remember using my non-touch camera being much quicker to adjust than the app in my phone, especially when I'm adjusting the more advanced settings.

You remember through rose colored glasses then.

Try using that non-touch early 00's compact camera again (and no cheating with DSLRs with dedicated aperture and speed dials). As for the "blind and one-handed" operation with the non-mp3 CDs, did that include changing through 200+ albums and 2000+ tracks?


> Try using that non-touch early 00's compact camera again

I don't have a touch camera at all. The one that I do have isn't a DSLR, but includes various dials. That's my point: dedicated controls will always be better. A camera that primarily relies on a direction pad and menus is only slightly better than using a touchscreen (but it is better).

> As for the "blind and one-handed" operation with the non-mp3 CDs, did that include changing through 200+ albums and 2000+ tracks?

For the CDs? Of course not. I'm not going to claim that a single CD could hold that many, whether or not they were in MP3 format. As for the MP3 player, I tend to set playback to a particular artist or set the player to randomly play songs from the whole library (by sight, of course), then throw it in my pocket. From that point, basic playback control doesn't require me to take it out of my pocket, while I'd have to at least take out my phone and wake up the screen to start doing the same thing.

I've got to assume that the way we tend to use our devices differs enough that something that feels like a huge improvement to me is a piddling detail in your use-case. On the other hand, some of the ways that I liked using my devices have disappeared, or at least been de-emphasized in more recent models, and that's what I'm frustrated with.


90% of the time I need direct blind access to simple functions. Now I have to carefully swipe, carefully otherwise the app goes into gallery mode, or the picture is taken, and since I was pushing on the screen, it's blurry and I need two hands to do so.

Side picture buttons are increasingly rare if not defunct, volume rockers are still here though.

A lot of things that could be done quickly are now fragile and subtle. A side effect of translating desktop UX to IRL handheld gadgets, by the promise that software will be smart enough to make it one button away.

PS edit: as noted below, I indeed never realized the volume rocker was bound to snapshot. My rant is half void now u_u;


This can all be "fixed" in software. In fact, on my HTC One M8 which is running Cyanogenmod, if I double press the power button at anytime, it switches to camera app and I can then take photos by pressing either volume buttons. I don't need to (or do) look at the screen let alone interact with it.


How often do you take a picture without needing to set the focus, or at least watching the screen to see when the app has finished its autofocus? More generally, beyond changing the volume or initiating a voice query, how much can you really do without pulling the device out to use the screen? Lack of hardware buttons and tactile feedback can't be fixed in software.


Even with a purpose built DSLR, you still have to frame your shot. I meant I don't need to look at the screen to make the phone ready to shoot. I believe the software can go a long way. It will never be as UX friendly as a camera, sure.


I agree that there's room for UX improvement in the software, and as a software guy myself, it's the first place I'm inclined to look at as a source of improvement. It's just that not all the problems have a software fix, in my opinion.

Thank you for the double-tap tip, though. It works in Marshmallow too, apparently.


To me the issue is the lack of context and pressure. Physical interfaces tapped into deep human perception: sub-millimeter movements, skin sensitivity, changes of texture, response curves. None of that is taken into account in the desktop design world. At least before, software was designed for high throughput and underpowered machines (I often find AS400 UX as ugly as it is efficient, and in reality nobody cares about software being pretty). Now you get Material Design. 80% eye candy. Whenever I have to use KitKat I feel so relieved, because it has centered, static, squarish input menus. I don't have to avoid overlapping free-floating (+) buttons. Every new market causes a regression, until it learns lessons from the past. The learning phase is still going, indeed.


The volume rocker is usually a picture button, no?


> All of those functions are 10x easier to do on the "unfit touchscreen" than on what we had before that...

The old Blackberry devices were UI marvels. The physical scroll wheel was great.


You're not going to find a lot of people agreeing with this point of view. First, touch UI is generally agreed to be a step forward in simplicity vs pointing devices and keyboards. Babies, old people, and others flummoxed by the mouse seem to agree. Second, the results show that mobility and immediate 24/7 availability are valued over any productivity and flexibility advantage of other form factors. Lastly, it looks like touch and personalization are dissolving physical interfaces. Your personalized interface is in your pocket.


Indeed this is a niche rant. For a kid playing at home a tablet is very convenient and I'm not a UI fetishist. I don't think mouse or keyboards are that important. But for out of home, on the go, computer-secondary interactions they are subpar. If I may, it's like the old analog appliances that start right away versus smart TV that takes a minute to load an OS.


I have a Blackberry Passport. It has a nice tactile keyboard, play/pause button and volume control. The dream isn't dead (yet).


Makes me wanna spend money on BB and do free advertisement for them.

That said, Nokia did a few physical-keyboard phones not long ago. Bought a C5 for my mom as she just wanted a simple thing. Unfortunately the software layer was one of the worst I have had to experience in my whole life. Four random and generic submenus on average to get to any function. Menus full of ellipses... gosh.

Anyway, shout-out to Blackberry.


This. When smartphones first came up, I figured I would wait until they improved and calling someone got as easy as on my regular phone. Years later, calling people and doing regular phone things keeps getting harder and harder.


It's very common when tech allows for too much, ideas that seem to make things simpler just make them shinier and potentially harder. Until the next wave of fauxgress comes in to attract the buyers. I consider our whole society a massive form of that, food, health .. it's all a backfire fest.


Dude, just give it a minute and they'll be called "tricorders" as they should be.


We're getting closer. Caterpillar just announced their S60 smartphone with thermal imaging.[1]

[1] http://www.catphones.com/en-us/phones/s60-smartphone


Better than what we call them in Switzerland. 'Handy', pronounced the English way.


Germany as well. I always heard it dates back to an early marketing thing.

There's also the old (~1990s) joke that it's actually Bavarian and short for "Hän die ken Schnür?" ("Haben die kein Kabel?" / "Do these not have a cord?").


In some countries, 'handy' has a very different meaning :-o


I prefer "tricorder."


from the Greek phon: voice, sound

i.e. a means to communicate.

You do not call an automobile a "wheeled combustion chamber".


The device is still a phone, because it needs a SIM card to operate. Without it, it's pretty much useless. You can't even start the iPhone without it; the first thing it needs is a SIM card.


What? If you boot an iPhone without a SIM it'll show you a warning/error that the SIM is missing, but it still turns on and works.


Handcuff computer was rejected by the marketing department.



