What’s Next in Computing? (medium.com)
300 points by MichaelAO on Feb 21, 2016 | 148 comments

Coming up next:

- If your job involves sitting at a desk, and your inputs and outputs come in via phone or display, expect to be automated.

- Automatic driving.

- AIs which sell. These will be annoying but effective.

- Big Brother will be much more effective.

- Within ten years, an AI running some investment will fire a CEO.

Probably not important:

- Virtual reality. Other than for games, it won't be big.

- Internet of Things for the home. Home remote control is a niche product. It's been available since the 1980s and never got much traction.

Not yet:

- Robots for routine unstructured tasks. Still a hard problem from hardware, software, and cost perspectives.

- Nanotechnology (excluding surface chemistry stuff)

On VR: Medical uses, though niche, have deep pockets and there is a desperate need for VR there.

75% of children who experience an ICU stay still have PTSD (among other problems) 90 days after discharge. The DoD-funded Bravemind project (an Oculus Rift with a clinical 'wizard of Oz'-style iPad interface) has shown real progress for PTSD treatment. Maybe a Bravemind for children stuck in the ICU could reduce psychosis, the need for antidepressants with comorbidities, and general depression in these children. Meningitis patients and those about to undergo organ donation are isolated in-hospital for long periods; boredom does not help their recovery at all. Imagine a HoloLens in a long-term ICU situation. Suddenly this room filled with wires, beeping machines, and stuff that is scary even to adults can become fun; it can become 'their' ICU. Think of a boggart from Harry Potter: what was scary is now funny. Mario or Iron Man is now dancing on the ventilator. Imagine the healing that can come when bedridden patients can take a walk in their backyards again, can play with their dogs again.

The applications for surgeons are great as well. With VR they can see with more than just small holes and the tips of endoscopes: the vascular system comes out at them in real time and at real dimensions. Nurses can see what is going on with a patient's womb as labor progresses, from across the hospital and behind walls. The educational value is also great. A patient herself can better understand what is going on with her body: she can see what the doctor is telling her, rather than being confused by jargon or a typo on a take-home sheet. And all of this can be personalized to the individual patient.

Children's Hospital Colorado serves patients from the Canadian border to the Mexican border; its Flight For Life air ambulances are the best in the world. A little birdy has mentioned to me that in late April there may or may not be a 'push' for VR in medicine announced there. Hopefully we can help heal these children better in this new future.

How do you think VR can be used in medicine? I would love to hear ideas from HN.

>Suddenly this room filled with wires, beeping machines, and stuff that is scary even to adults can become fun; it can become 'their' ICU. Think of a boggart from Harry Potter: what was scary is now funny. Mario or Iron Man is now dancing on the ventilator. Imagine the healing that can come when bedridden patients can take a walk in their backyards again, can play with their dogs again.

On the one hand, I did once calm down in a hospital waiting room (literally a separate room, because reasons) from watching James and the Giant Peach and/or Pokemon the Movie (I can't remember if those are the same or different occasions at a hospital). On the other hand, if someone tried to fill my hospital room with marketing-ridden branded "content" anytime into or past my teens, I would probably pitch a fit and go psychotic.

Surgical navigation and planning is the term commonly used for augmented reality for surgeons. I've seen similar systems for spinal surgery. Here's a project (cirle.com) I helped build a while back for eye surgery:


> Virtual reality. Other than for games, it won't be big.

There are other niche markets for VR: Interior Design, Real Estate (especially commercial), Education, Health Care, Industrial Design ...

Every major sports network is working to deliver VR for sporting events.

Sports and concerts seem like pretty compelling VR apps. If the music people handle their business right, I could see VR front-row tickets at home selling at a premium to mid-grade tickets. Same for sports: a couple of prime vantage points that the user can toggle between? I could see people paying more than for a live ticket.

I tried William Hill's VR horse racing app. It uses computer game style horses linked to live position detectors on the real ones. It was quite good. Wouldn't use it myself but I could see it working. They hope to make money from the betting in the usual manner.


Yeah, I can imagine how much (and how many) court-side VR tickets to Kobe's last game at the Staples Center would go for.

Or strap a VR rig to the head of one of the towel boys and sit on the end of the bench as the 16th man.

I think this is actually a bigger $$$ market than VR games because of the repeatable sales.

Escapism will be a new/enhanced gaming category with VR, IMO.

> Every major sports network is working to deliver VR for sporting events.

Advertising in those will be powerful.

Probably more like a premium service like MLB Premium.

> Virtual reality. Other than for games, it won't be big.

Why? Effective and realistic VR has reach far beyond games: it could nearly eliminate most business and recreational travel. No need for a real office or a commute when a company can have a virtual office that feels real enough for workers to collaborate with presence. No need to blow large amounts of money travelling to real places that are either perfectly imitated in VR or blown away by cooler places that only exist within VR.

VR ain't just for gaming.

I don't understand the idea that VR will make business travel unnecessary. What problem does VR solve that video conferencing doesn't? The ability to turn your head?

The reason business travel, and in general non-remote work, is worthwhile isn't necessarily the pre-planned meetings, it's also the familiarity developed by meeting somebody in person, shooting the shit over lunch, and being able to casually mention an idea or anecdote without the overhead of a "call."

It adds presence, which is the main reason business types currently don't like video conferencing and still choose to travel when they really don't need to. People want to be around other people, in their actual presence, they don't get that from video conferencing. Lack of presence is why telecommuting hasn't really taken over and why companies still bother with the overhead of a real office. People are social animals, they want to be around other people, not around screens with other people's faces on them. Presence is far more than just turning your head.

> it's also the familiarity developed by meeting somebody in person, shooting the shit over lunch, and being able to casually mention an idea or anecdote without the overhead of a "call."

Yes, that's called presence, and that's what VR brings that video conferencing doesn't.

I see this sentiment echoed every now and then and it completely baffles me. How on earth do you see VR ever working as an evolution of videoconferencing, no less a substitute for in-person meeting? HMDs obscure the most important part of face-to-face interaction, the myriad nonverbal cues and visceral connection we get from the eyes.

Talking to a person wearing a brick over their face in VR with perfect presence is still talking to a person wearing a brick over their face.

That depends entirely on the quality of the virtual avatar. Facial expression capture is already used heavily in character animation for movies, such technology will be a core part of the VR experience in the future. It is quite inevitable that the VR experience will continue to improve until it's basically the Matrix and being in VR will be a perfectly fine substitute for being in the real world. For business, it won't need to be nearly that good to kill most business travel.

My company has remote workers in 5 states and offices in 2. When VR is good enough, we can eliminate lots of unnecessary business travel that exists only to satisfy the human social urge for presence. Nothing is accomplished by this travel that couldn't be done over Google video, other than making people feel better. I suspect this is true of a large amount of business travel.

Does it matter how well the system analyzes and recreates facial gestures when anyone can just > load poker_face.vrscript?

Is that what you do at work, load up your poker face and try and hide all your emotions? I don't think that's what most coworkers would want to do so I don't see that being a problem. People will know if you're not hooked up for real, no bot will fool a human unless there's real AI behind it. You can tell when you're up against another person vs any bot.

Yes it does matter. It's presence, and the interactive responsiveness between participants. Loading a script / playing a script won't have that. What you're describing will be the new bot, and identifying them will be like having street smarts.

What I've read is that even without facial expressions, the sense of the other person being there is surprisingly strong because of the subtleties of seeing how a person moves and reacts to you. I can't find any of the articles I've read saying that, but I'll transcribe a bit of this audio interview with two people right after they tried Oculus's Toybox demo:

"For me the biggest takeaway was how much is added when you're with another human being in VR and you're able to intuitively interact with them."

"I felt like this person that I'd never met before -- I felt at the end of it like we were bros. ... You're sharing this VR space but at the end it feels surprisingly intimate."

That's from here: http://voicesofvr.com/228-candid-reactions-to-the-oculus-toy...

Note that it's already that good even without any representation of the other person's facial expression. All it would take to add facial expression are some sensors on the inside of the headset. There are even companies working on inferring what your eyes look like from the expression on the more-visible bottom half of your face.

Companies will come up with smart ways of maximising the strengths and minimising the weaknesses of VR. They'll find that efficiency just the way office chat systems (Slack, etc) have done.

In the short term, they'll be saving money and time spent on offices and commutes or long-distance travel, so they'll have an advantage in sales.

If VR was going to be a big success, there would be a killer demo for the Oculus Rift by now. There isn't. "The Chair"[1] is considered one of the best demos. Whatever. GTA V in VR is pretty good, but that's because GTA V is pretty good without VR.

[1] https://www.youtube.com/watch?v=-8c3dAWZ7No

Oculus hasn't even shipped the consumer version of the Rift yet. Isn't it a bit premature to write off the success of VR?

I've not used a VR headset before, but I've heard the games that provide the most immersive VR experience (at the moment at least) are 'cockpit'-based games such as Elite Dangerous. There seems to be some recognition of this from Oculus, as they're bundling EVE: Valkyrie with the CV1: https://www.oculus.com/en-us/rift/#eve-valkyrie

You clearly haven't tried very many demos (if any). To give just one example, Windlands is incredible and is not anywhere near as good outside of VR.

We aren't talking about today, we're talking about the near future. Nor are we talking about games, but about uses beyond gaming. The Oculus is akin to the iPhone 1: in a few years it'll look clunky and outdated, but it's certainly the first step in consumer-quality VR.

Yes. VR is for porn.

AR will also be for porn.

lol, that too.

Internet of Things on the actuator side has been a continuous failure for decades (because most of the time you need to be nearby anyway). But on the sensor and data-collection side, there are plenty of opportunities (insurers and commodity providers, for example, are very interested, from what I hear). Of course, we get back to the "who will own your data" question.

Have you tried an experience on a CV1 Rift or a HTC Vive? Before making judgements on VR I'd encourage you to try one of the modern VR platforms that will be coming out in April.

It's not a question of technology - the technology is impressive. I think the argument is that there aren't legitimate use cases outside of gaming.

I would argue that education could benefit immensely from VR. Why explain it when you can show it. Imagine something like Fantastic Voyage in high school biology.

Yet certain parts of education like abstract mathematics do not translate well into VR.

Nobody said everything has to translate into VR. Even then, showing those abstract concepts in a boundary free VR world could do a lot towards understanding those concepts.

Unfortunately, what you said doesn't make much sense to me; in fact, it almost completely undermines your own point. So much of education is teaching/explaining these abstract concepts -- otherwise every day at school would be "movie day" -- so most of education probably cannot benefit from VR (unless you're talking about some sort of telepresence where you're in a virtual 3D MOOC, democratizing access to a Harvard professor's lectures).

I need an operational definition here. How do you "show" an abstract concept? What is a "boundary free VR world"? The biology one is a good example but it's low hanging fruit because it's so tangible.

For example, if I crack open Unity right now, what do I build to teach someone what, say, Hilbert space is using VR? How do I teach someone the intermediate value theorem? How does VR add anything to the explanation more than a whiteboard would?

Even still, how do you demonstrate that someone will learn more/better with the VR? How convinced are you that it will be the case?

Lastly, given that VR is just a game with stereoscopic display and more limited UI/UX possibilities, why doesn't there yet exist the non-stereoscopic panacea computer game of education? Does depth perception REALLY add all that much?

You are really underestimating VR. You are not thinking of the applications that do not yet exist and that no one has thought of yet. These will be the applications that sell VR like donuts.

This is a purely speculative non-argument though, and so the "space" of non-existent applications can be as small or as large as anyone wants (might as well choose the size that benefits one's argument). The subspace of "donut-selling" non-existent applications is even tougher to define.

Would you count programming as being in the first item? A lot of what keeps it from being automated is the boundary between the human specification and the machine implementation.

Just wondering, have you tried a VR headset of this generation? I feel a lot of people discount VR simply because they don't realise the possibilities.

> Virtual reality.

I'm interested in VR as an alternative to purchasing a 4k monitor and/or dual monitors for programming. I don't know how I'm going to do that; pretty much all Google results for "VR programming" are about programming video games, not programming using VR as a monitor alternative. I could probably do a better job googling.

Too dependent on the resolution of the VR screens. I think once they're high-quality enough, it will be trivial for people to write freeware apps that spin-up displays and/or overlaid text editors. Even with the Rift now there are virtual cinemas and text displays that work fine but are just limited by resolution.

There's a solid-angle analogue to pixel density in VR: the headset's pixels have to cover your entire field of view, a much larger solid angle than a screen that occupies only a small portion of it.

In order to represent a 4k resolution display in VR at a "natural" viewing distance in perspective-projected 3D space, you'll need a headset with probably two or three orders of magnitude more pixels covering your field of view.

The stereoscopic nature of the VR headset itself effectively halves your pixel count, to say nothing of the lossy distortion of the rectangle-shaped backing display by the lenses into both eyes. Your "solid angle" is going to vary depending on distance from the center of each lens given that display manufacturers only produce rectangular screens with uniform pixel densities.
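Back-of-envelope arithmetic makes the gap concrete (the monitor size, viewing distance, and headset figures below are rough assumptions, not measured specs):

```python
import math

def pixels_per_degree(h_pixels, fov_deg):
    """Approximate horizontal pixel density across a field of view."""
    return h_pixels / fov_deg

# A 4k monitor (3840 px, ~0.6 m wide) viewed at ~0.6 m spans about 53 degrees:
monitor_fov = 2 * math.degrees(math.atan(0.3 / 0.6))
monitor_ppd = pixels_per_degree(3840, monitor_fov)   # ~72 px/deg

# A first-generation consumer headset: ~1080 px per eye over a ~94-degree FOV:
headset_ppd = pixels_per_degree(1080, 94)            # ~11 px/deg

# Linear ratio is roughly 6x, so matching the monitor's density across the
# headset's full FOV needs on the order of 6^2 = ~40x the pixels per eye,
# before accounting for lens distortion losses.
print(round(monitor_ppd), round(headset_ppd))
```

The exact multiplier depends heavily on the assumed field of view and viewing distance, but the direction of the estimate holds: today's headsets are nowhere near monitor-replacement density.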

Put VR headset in a football helmet. Then, a soldier. Then a civilian. Then the matrix.

Whenever I see a headline of the form "What's next in X", I immediately think to myself, "probably not what this article thinks it is."

That said, yeah, some of the writing is on the wall. I think IoT will be the next big hit, not because people want it or will use any of it, but because that's all anyone is going to be selling. Samsung, LG, GE, et al. aren't going to give us a choice.

It won't be a big dramatic change like self-driving cars. It will be a slow trickle of toaster fridges that we don't notice until we are out trying to replace a washing machine one day and can't find one without a self-ironing board attached that wants to connect to our bed to know when to wake up the coffee pot that makes your egg-white substitute omelet and makes sure your shirt is wrinkle free at exactly the moment your shower dries you off and combs your hair.

And while all of us nerds are busy disabling all that crap so we can just clean some underwear, Wall Street will be declaring that IoT is here and winning! What they won't mention is that it's winning because there's nothing else left to buy.

I don't agree. There are plenty of things in my life that could be hooked up to some online AI to make them smarter.

Anything with a lock: just figure out that I'm me and open. If I leave, lock up. Open up for anyone on my authorised list.

More cameras pointed at stuff. Our local club (100 members) purchased a weather station and decent camera and pointed it at the sea so we can all check the wind and surf. Cool. More of these cheaper please.

We have Woo devices that we attach to kiteboards; these measure jump height, duration, and G-force. They are then synced to a phone and the internet, so not quite IoT but close. More of these please. Attach them to anything that moves (shoes, kites, surfboards, swim fins, soccer balls, bikes); more data is fun.
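For flavor, jump height can be estimated from a motion sensor using hang time alone with simple ballistics (this is a sketch of the common hang-time method; I don't know what Woo's actual algorithm is):

```python
def jump_height_from_hang_time(hang_time_s, g=9.81):
    """Ballistic estimate: the rider is in free fall for hang_time seconds,
    half going up and half coming down, so peak height is g * t^2 / 8."""
    return g * hang_time_s ** 2 / 8

# The accelerometer detects free fall as a stretch of near-zero acceleration;
# a 2-second hang time corresponds to roughly a 4.9 m jump:
print(round(jump_height_from_hang_time(2.0), 1))  # -> 4.9
```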

GoPros, most people I know have these. They need to be better hooked up to the internet.

I think there are a ton of IoT that will make sense and people will buy. Just because you don't see the value in a toaster hooked up to the 'net, doesn't mean the IoT is dumb. I think it's great.

The big problem I have with IoT is that none of this requires online AI. What part of having an authorized list or identifying you requires internet access?

The IoT will take off when services are run on-site. When the processing power is available from a small appliance box, and companies advertise the security and reliability that come with local processing, it will catch on.

What happens to your smart lock if your internet connection goes out? What about if the server that holds your whitelist is compromised?

Exactly. If there's one thing I hate about the direction of tech these days, it's this headlong rush to gratuitously shackle all of our previously-independent devices to a mesh of online services. My bike lock (to use this example) should NOT require an internet connection, and should not depend on some web service (run by a super trendoid startup which might not exist in six months time) to fulfill its function.

And don't get me started on phone apps which shift some trivial processing off into "the cloud" as a thinly veiled excuse to upload all of your personal data to the company's servers so they can then flog your details off to some advertising company.


I can see value in internet access. The lock having internet access means I can be on vacation in Thailand and grant a friend access to my house to drop off the rent check I left on the counter. And later, I can make sure my friend didn't forget. Or, make sure my dog-sitter is showing up. I also can't think of a more convenient way to grant access and identify people off-hand.

As for the connection going out, there are solutions. Redundant cellular connections, maybe? And if you really can't get in, it's not the end of the world. We already have solutions for that: locksmiths. Might be expensive though, and end up destroying the device, or maybe your door.

The data security thing is really bad. Unfortunately, that's a much larger problem though, not really related just to IoT.

Got a nice start reading that comment. A nice amount of good sense. And then...

> We already have solutions for that:

Yes, finally somebody will say "local cache"!

> locksmiths.

Ok. Not yet.

That's a good point!

I guess more broadly, I was thinking about a scenario in which the lock device dies, and making the point that conventional devices aren't foolproof either. In this case, the fool being me, and the proof being locking my keys in the house.

So, any lock can fail, but I'm not really concerned as long as it has a reasonably low failure rate. We've tolerated conventional lock systems failing (via user error mostly) for a long time.

That's why Vanadium exists (and is being developed): http://v.io

> I don't agree. There are plenty of things in my life that could be hooked up to some online AI to make them smarter.

But "online AI" is just a code-word for "some company, probably the manufacturer, that owns the IP rights". I don't want my household stuff to phone home and report all my actions to Some Company. I think that, for instance, my sandwich toaster thingy should toast sandwiches without reporting the weight or density of them.

>more data is fun.

More data is not fun. More data is Big Brother, particularly when the device itself has a closed-source hardware design, closed-source firmware, closed-source software, and a closed-off "shiny" user interface. More programmability with less phoning home is fun.


Do you know what weather station and camera was used?

I can't remember. Whatever comes top in google I expect.

I think some are genuinely useful.

Smart water sprinkler controllers seem generally useful. I used to forget to use our non-smart controller's "rain delay" all the time.

We also have an alert system setup to let us know when the outside temperature would meet the heating or cooling requirements of the thermostat. This is pretty useful in southern California.

A single switch to kill all your lights when you leave/go to bed is useful. But at $35 per smart switch you're going to shell out a lot of cash for this convenience.

My biggest issue with IoT as it is currently evolving is how reliant on the manufacturer they are to keep working. I would be surprised to see some of these devices working in 5 years, much less 20 or 30.

I'm surprised we haven't seen more brainstorming about how we can make these devices (now and future) less reliant on individual companies whether they're start-ups or giants.

Maybe an open standard that is provided as part of your ISP package but can be moved from one to another?

If no one wants or will use IoT-connected devices, what motivation do companies have to sell only IoT devices? I don't understand the logic behind this prediction (only IoT products available, but no one wants them). It seems a bit like predicting that in the future people will still prefer to write on paper, but no companies will sell paper anymore, only tablets. Maybe there's a key motivating factor I'm missing?

Because it'll let those same companies better understand you and thus better target you with advertisements for more products (or sell the data to others who want to sell to you).

Don't think for a second that that smart fridge that knows when you're running low on milk knows just so that it can alert you and be done with it. Most people will buy the thing and agree to an unreadable set of terms that allow them to use your data for advertising purposes without much thought. Then the fridge notices that you're low on milk and tells you "Hey, you're low on milk! Lucky for you, Target's got your favorite brand for 10% off with this coupon." You pay Target and Target pays FridgeCo.

They're motivated to sell this stuff whether or not you want it because it lets them build revenue streams in markets that they wouldn't otherwise have access to. The buyers of the information are motivated to pay for it because it means that they can learn far more about the individuals, not just the groups of individuals, who buy their products.

Because with IoT, companies move consumers from owning the devices they buy to renting them. The company retains control of how the device is used, what content it displays, even whether the device works at all. Consumers who buy the device become wholly dependent on staying in the company's good graces for their expensive device not to become a brick.

A big executive hears about it, thinks it's "the next big thing", and pours money into it. A second big executive hears that the first company is doing it and has to stay competitive. Their next flagship product has it and coincidentally does well. They attribute that to IoT, and within a year all of their products have it. Then it spreads across the industry and no one has a choice anymore.

I don't think multinational corporations are going to change their entire product line entirely on the basis of gossip about "the next big thing". Maybe some would, but surely not all. A decision like that would have to be made after doing some serious market research and testing sales of products with that feature against sales of products without it. If corporation A switched to only IoT devices and customers didn't want that, surely Corporation B which hasn't made the switch would gain more market share, then A would take note and switch back. There's too much money to be made by being the one company that still offers what consumers want. I don't understand this fear that profit-driven corporations will suddenly decide to add costly features in a world where no one wants them.

The history of consumer computing - especially the existence of the word "crapware" - suggests the opposite happens.

Corps tend to offer what execs think consumers want, based on a consumer model which is irrationally similar to an idealised version of the exec - who inevitably seems to think more is better, for very noisy values of "more".

Which is why printer driver installers, smartphones, operating systems, and hardware products all acquire barnacle encrustations of crapware that literally no consumer wants.

IoT could easily be more susceptible to this than desktop and mobile computing.

Corps that truly understand their customers are incredibly rare. Many corps seem to get along by throwing crap at the wall and hoping some of it sticks.

Genuine customer insight is practically a superpower.

We already have a great example. Look at all the money car companies are now pouring into self-driving cars. In reality, it's all marketing hype. Google's self-driving cars are a decade from practicality; they're far more dangerous than human drivers if left to their own devices. We're actually not much further along in automated driving than we were five years ago. But now that it's a topic of hype, everyone's dumping tens of millions into it.

Sure, we'll make some progress, but we might also be looking at this generation's version of the flying car: the magical leap forward in technology we keep believing is just around the corner.

Do you really think all the recent progress in autonomous vehicles is marketing hype? I disagree that we haven't come a long way in the last 5 years. Self driving cars are a lot more practical than the flying car.


Almost every excited "self-driving cars are coming soon!" post quotes a single man: Chris Urmson, the lead of Google's project. He avoids mentioning the real limitations and roadblocks self-driving cars have, and continually talks about how he never wants his son to have a driver's license. He talks more fluff than substance, and he's misrepresented Google's progress pretty heavily. He's a great marketing voice for Google, but I have found very little merit in what he has to say.

When you actually look at the hard numbers, or other companies' more realistic progress reports, it paints a very different picture.


I've found that to be true as well. Driverless cars can work reliably in a strictly controlled environment such as the Google campus gated community, but have very little chance of reliable functioning in the real, gritty world full of variables and uncertainties.

Worse yet, driverless cars will be forced to make tough ethical choices. Imagine a driverless car's brakes fail and it is headed for a pregnant woman. Should a car steer off the road and possibly kill the passenger(s) in order to save the pregnant woman? Who gets to decide what is the correct choice?

"Flying cars" (personal-sized VTOL vehicles) do exist. They're just expensive and unsafe for a normal person to drive.

Similarly, self-driving cars do exist. They're expensive, and unsafe for normal people to drive. :)

What you've ironically described is home automation facilitated by IoT.

Cheap IoT devices can be installed in just about any manufacturing or agricultural device. Heck, you'll be able to monitor each individual plant's health. Your car will be able to notify you of failing components before they actually fail (via FFT and irregular-frequency monitoring).
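A minimal sketch of what that FFT-based monitoring could look like, assuming a vibration signal sampled from a rotating part (the signal, threshold, and 120 Hz fault component are all made up for illustration):

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Frequency bin carrying the most energy, ignoring the DC term."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[1:][np.argmax(spectrum[1:])]

rate = 1000                                          # samples per second
t = np.arange(0, 1, 1 / rate)

healthy = np.sin(2 * np.pi * 50 * t)                 # 50 Hz: normal rotation
worn = healthy + 0.5 * np.sin(2 * np.pi * 120 * t)   # new 120 Hz component

baseline = dominant_frequency(healthy, rate)

# Flag a fault when significant energy shows up away from the baseline:
spectrum = np.abs(np.fft.rfft(worn))
freqs = np.fft.rfftfreq(len(worn), d=1.0 / rate)
anomalous = spectrum[freqs > baseline + 10].max() > 0.25 * spectrum.max()
print(baseline, bool(anomalous))                     # -> 50.0 True
```

A real system would track the spectrum over time and alert on drift, but the core idea is this simple: bearings and gears announce wear as new frequency peaks long before they fail outright.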

Well, that's the dream IoT is selling. But the reality is that most of it won't work. The big brands that the vast majority of consumers recognize and are willing to buy and can afford are simply not competent to deliver the dream.

They will make the devices work just well enough to gather your personal data and stop there. For most of the rest of our lifetimes, IoT will be the same shitshow of incompatible nightmares as plug-and-play was for Windows throughout most of the 90s.

But it will win, and it will be the next big thing for a similar reason: not because it's good or because it works well, but because that's all anyone is selling.

You can't spell "idiot" without "iot".

Intentionally Deficient...

Yup. I found when I took on doing home automation for myself, it involved a lot more electrical work, and a lot more code than the consumer is up for, to make it do what I actually want. The one-off consumer devices inspire the dream but don't deliver on it.

> (FFT and irregular frequency monitoring)

Got any links for this? It's a topic I'm interested in.


IIRC Australian railways are implementing this.

Yes, we recently talked about IoT devices here at work, but the thing is, the conversation wasn't started by people who had even heard of the "IoT" moniker. It's hard for me to tell what will be a hit, since I'm so deep into IT that I often lack perspective, but that was a good test of how the idea is already entering consumer mindsets.

I think people need to pay more attention to the current state of AI. It is rapidly maturing and offering surprises every year. Most of the progress doesn't make the news the way AlphaGo did, but it is going to have a bigger impact on our lives, even our jobs.

like: http://arxiv.org/abs/1512.00965

It would be nice if there were a searchable blog about AI and its applications. That way, even people like me who are not so interested in the algorithms behind deep learning, can still see where this technology is heading.

DataTau (http://www.datatau.com/news) is a clone of HN dedicated to Data Science.

Maybe something similar for AI?

DataTau seems somewhat abandoned. I would suggest the subreddits https://www.reddit.com/r/machinelearning or https://www.reddit.com/r/thisisthewayitwillbe

Reddit has an excellent machine learning sub. Worth a look.

I don't think it's excellent at all. It's mostly for and by novices in their first weeks or months of really learning about it, plus many day-zero beginners who think it sounds cool, post the nth deep learning news article, and ask how they should start learning about this stuff. There is very little advanced discussion there, and the advanced questions and topics usually get very few comments.

Somewhat true, but it should be good enough for beginners.

I would also love to hear about other good machine learning discussion forums outside of Reddit as well :)

I had a Windows Mobile phone in 2005 (One of the earlier O2 XDA models, from memory) and it certainly fell into the gestation phase. Email was passable if you were connected to an Exchange server, but tapping out long responses with the stylus was a chore. It was definitely possible to browse the web, but the prevailing Windows-style point and click interface combined with a low res screen made it more trouble than it was worth. At the time, I couldn't imagine how to make mobile easier to use, but once I experienced an iPhone it was very obvious that the Touch UI fundamentally changed things.

I say this because I got an Oculus DK2 a couple of years ago, and after a month of enthusiastic use it got cast aside. I know it doesn't affect everyone, but I suffered from some nausea which soured me quickly. Queasiness aside though, there was a bigger problem. When I tried a demo, there was an obvious wow factor at the start, but it wore off quickly. Just attaching VR to a standard 3D game was not as exciting as I had hoped.

The first television programs were essentially radio shows with a camera put in front of the presenter, until someone figured out how to harness video's storytelling possibilities. The first web sites were glorified electronic newspapers, until someone figured out how to integrate interactivity that was impossible in print. All new media forms initially imitate the old. As McLuhan said, the medium is the message.

I think it will take an as-yet unknown paradigm shift to make VR compelling.

I am actually looking for people who are susceptible to VR motion sickness. I worked on a VR project where we tried to make it as accessible as possible, and it would be really useful for me to see how it plays for people who are particularly affected by it. You can grab it from https://share.oculus.com/app/super-turbo-atomic-ninja-rabbit... (I definitely agree that a lot more work is needed to figure out how to tell stories effectively in VR).

That is a really good point. I suspect in 20 years we will look back and laugh at how today's VR apps kind of miss the point of what you can do with it.

I think VR has tons of potential for office/enterprise apps because the richer spatial experience can enable new ways of visualizing and working with information.

I wrote a hypothesis [1] on the cause and effect of the cycles Chris mentions. With those assumptions about the separation of user and their data in mind, I make the following near term technology predictions:

- a swing back to private cloud will be kicked off by container/microkernel technologies, starting the largest cycle we've seen to date in terms of growth and value

- public cloud computing growth will slow slightly and will have to refocus on lower privacy needs use-cases (or die trying)

- IoT and cloud computing will start to merge as a market, where the compute resources of your IoT devices are used to operate on the data other IoT devices nearby

- cryptocurrencies and the blockchain will finally find a home in securing the processes behind cloud provisioning of what was known previously as SaaS software

- cryptocurrency deployments make open source project's revenue models sustainable

- open source hardware, including circuitry, becomes ubiquitously available via physical printing processes, which then drives the IoT and cloud computing markets even higher.

- the digital nomad lifestyle explodes as a result of the changes to the open source model

- startups and the VC culture in Silicon Valley will be forced to retool their financial strategies and software business models for accessing decentralized markets

- a meltdown of global financial markets will be the only thing which enables bringing positive, decentralized change to those financial markets.

As John Chambers, chairman of Cisco, said a few years back, "You're going to see a brutal, brutal consolidation of the IT industry". Better buckle up.

[1] https://medium.com/@kordless/an-ecology-of-cloud-1cedfa326b8...

BTW, there is only one thing that will make me stop coming to HN and participating in the community, and that's seeing people's opinions downvoted. Downvote if you must, but keep in mind that downvoting a post that is simply someone's opinion or view is a game of blame, and an opinion itself is blameless.

All of the points subsequent to your first one conflict with that conjecture, that we'll see a huge swing back toward private cloud & on-premise DCs.

Speaking from where I saw the industry [when I was a part of it] and now from what kinds of CIO/CTO conversations I see the GCP AMs/SEs having, I don't think we're remotely close to a swing back. Enterprise capex budgets move VERY slowly most of the time, and it will take multiple years to alter current momentum, which has finally reached the point where it's almost a foregone conclusion for many industries that on-prem DCs no longer make sense.

This opinion is driven almost as much by the increasingly compelling portfolio of enterprisy SaaS products like Netsuite, plus a plethora of workforce management, CRM, analytics/visualization, and so much more of what has historically been internally developed business productivity software.

There are a lot of people who think public cloud is the bee's knees and is going to continue to grow without bounds. The fact is, private cloud revenue is 30X as big as public cloud. It's this big because of previous growth cycles. Also, there's the consumer market to consider, just like in previous cycles.

Public cloud is simply multi tenant computing on data managed by someone else away from where the customer created it. I'm predicting data will come home to roost on single tenant hardware (which comes in by way of IoT) and the code that has been called SaaS will follow it there. We'll still have managed services, they'll just run near us, instead of on compute owned by someone else.

Regardless of whether you agree with that or not, it's a HARD fact that bandwidth isn't growing as fast as storage and compute. That fact alone suggests heavy decentralization of service architectures in the coming years.
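The shape of that argument can be put in back-of-envelope form (the growth rates below are purely hypothetical placeholders, not measured figures, just to show how the gap compounds):

```python
# Hypothetical rates: local storage/compute grows ~40%/year while
# per-user bandwidth grows ~25%/year. Compounding favors processing
# data where it is created rather than shipping it to a central cloud.
storage_growth, bandwidth_growth = 1.40, 1.25
years = 10
gap = (storage_growth / bandwidth_growth) ** years
print(f"after {years} years, the data-vs-bandwidth gap grows ~{gap:.1f}x")
```

Whatever the exact rates, any sustained difference in exponents produces the same conclusion: over a decade, the data you can generate and store locally outruns the data you can economically move.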

I think the author is right about cars being next. There is a lot of work by a lot of people in this area, and one of them is going to get it very right (my personal bet is on Tesla or Apple).

There are more people working on mobile, there's a lot more revenue, and a lot more opportunity for iteration. Mobile and the desktop converge:


You're choosing Apple over all the existing car manufacturers? What have they done in this space other than CarPlay?

> You're choosing Apple over all the existing car manufacturers?

Yes. Much like smartphones and tablets, they hadn't done much in the space until they exploded on the scene after sitting back and watching everyone else get things wrong. That's what they're really good at -- seeing what others do wrong and not repeating those mistakes.

Unlike tablets though, it's a lot harder to hide when you're making a car. They've already secured a test track and they're hiring vehicle engineers.

Apple has always been involved in that space. The Newton, the eMate, the collaboration with Motorola, and (obviously) the iPod were all the same category of device. The latter even had scheduling, contacts, and downloadable games which you could buy from the iTunes Store before the iPhone was released.

What did they do in the music player space before the iPod?

What did they do in the phone space before iPhone?

What did they do in the tablet space before the iPad?

How about the watch?

If Apple makes a car--and I'm not totally convinced they will--the "they aren't just going to come in here and do this thing they've never done before" response is sounding awfully stupid these days.

That's sort of their thing.

Apple's Newton platform ran on multiple devices with touchscreens, including phones and PDAs which could connect to the Internet and send/receive text messages.

In fact, the current VP of iPad Product Marketing was one of the Newton's marketing product champions.

Apple are very good at the shiny brand thing. Most cars over $20k work just fine these days and a lot of why people pay 2 or 3x that is for status. I can see that working for Apple.

I suspect we will finally reach the point where enough is enough on proprietary cloud, and we'll start to see decentralization taking a strong forefront again.

A lot of big companies are driving this AI talk, but by and large it helps them more than it helps the rest of us.

>but by and large it helps them more than it helps the rest of us.

Correct. If you have 10,000 employees and you could replace 5,000 of them with 'computerized smarts' you shift a huge amount of liabilities. They don't require health insurance that seems to go up in cost every year. No retirement fund is needed. In theory the costs of using AI will go down over time per unit of work accomplished.

As for an individual it is difficult to see how AI will decrease our workload, increase our happiness, or increase our wealth.

> As for an individual it is difficult to see how AI will decrease our workload, increase our happiness, or increase our wealth.

You have to be the guy or gal creating the AI. :)

Something that has been on my mind lately: are we near the end of viable startups in apps? I mean this in a multi-platform sense, not just mobile. For apps in general, I am starting to feel that what's possible has been done, and we are nearing a point where the big guys are already doing it and you might as well just join FANG instead of building something on your own.

Maybe I'm getting jaded though.

There are always new niches coming out... And there are reinvented cycles. Like, how many big dating apps have there been since the 90s? Quite a few, actually. Can there really be another dating app by current internet app standards, between desktop and mobile? Probably not.

Maybe the next dating app will be in VR and AR. In fact I guarantee it.

But I'm feeling that we are getting tapped out on app ideas before the next flood.

Makes me a little sad because I'm more entrepreneurial than anything and not feeling many problems to solve lately. Maybe it's just me. Anyone else feel similar?

More platforms = more opportunities. The Kim Kardashian game on Android/iOS made $200 million in 2014. You don't have to come up with a new niche to be wildly successful; you can innovate on execution.

That being said, I believe there is huge potential for new types of voice-only apps on the Amazon Alexa platform.

Yeah, there's a huge problem with discovery for new apps, though, which Kardashian has a huge leg up on.

Other challenges are that people tend to spend 80% of their time on 3 apps, and that people are likely experimenting with new apps much less than they were a few years ago, as they've already got their goto apps at this point.

I think this is true at the current tech level. That is, if your product idea is a website or a phone app (as opposed to using these things to solve a problem in some other domain), it's unlikely to be worth doing. I think the next big opportunities will come with the next technological breakthrough (Internet of things, robotics, AI, nanotech, whatever) and that's the direction to aim for.

If AR, VR, and IoT kick off simultaneously, I wonder what the malicious exploit, microtransaction heavy crapware, and indie game scenes are going to be like.

What if there's an upper limit on what can be created and what if we are mostly there. What if we never really progress much beyond where we are other than gradually getting faster, longer battery life, and more storage. Games and movies and TV shows are now mostly sequels and rehashed versions. A 2006 car vs a 2016... Basically the same.

There were no self-driving cars in 2006, and electric cars seemed like a far off dream.

Moreover: We're talking about a decade here. It's not a very long time. For more than 99.9% of human history, you would expect one decade to be pretty much identical to the previous.

But you could make a pretty long list of the changes in life from each decade to the next since 1900, so at this point I think the onus is on the people who want to prove that the next decade will not be materially different from the current. I'd certainly bet on change, and probably accelerating change.

https://en.wikipedia.org/wiki/General_Motors_EV1 - Available to consumers (for lease, and they were recalled)

https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_%282005%... - Competitions of self-driving cars for over a decade

We really haven't taken the magical leaps some people believe we have. We just learned to market them better. :)

So since the "internet" existed in the 70's, nothing exciting happened in the 90's?

Dashboards in 2006 cars are much different from today's. Today most cars have at least one pixel-based LCD/LED display, a CD player, aux input, Bluetooth, USB, etc. My '07 car was the last in its series not to have those; it has only a CD player. In 1996 they had a radio and maybe a cassette deck. In 2026 cars will be self-driving and/or SaaSS.

Self replicating robots? We're not there on that one but it seems possible. Then you can have 10,000 of them build a Taj for you. That would be different.

I think the general trend is that computers start to understand and be aware of the physical world, rather than just numbers entered by humans. And not just understand, but manipulate it, which means that you could go to a forest and run forest.forEach(tree => { if (tree.height > 5) tree.cut(); }). I.e., the world becomes computable (or rather, manipulatable).
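A toy version of that idea, spelled out in Python (the Tree class and its methods are made up purely to illustrate the programming model, not any real API):

```python
# Hypothetical object model for a "computable" forest.
class Tree:
    def __init__(self, height):
        self.height = height
        self.standing = True

    def cut(self):
        self.standing = False

forest = [Tree(h) for h in (3, 6, 8, 4)]

# The "program the physical world" loop: cut every tree taller than 5.
for tree in forest:
    if tree.height > 5:
        tree.cut()

print(sum(t.standing for t in forest))  # the two short trees are left standing
```

The hard part, of course, is not the loop but the actuators and sensing that would make `tree.cut()` mean something in the real world.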

The trend seems to be more to insert humans into an automation process - e.g. mechanical turk, amazon package fulfillment, uber, etc.

The world is already this way, just not accessible in such a concise and precise manner. When you ask a lumberjack to cut down all trees greater than 5 meters tall, you're effectively doing the same thing as in your example. The execution model, runtime, etc are not the same, but the point still stands.

Yup, that's true, but computers also used to be people who did computing.

Indeed, and that was my main point. "Computing" in a general sense is as fundamental as any physical law (if one wants to make a distinction between those). We have reduced computing to its essence, and are now building up complex systems in a "rational" way. The Universe already "knew" about the essence of computing, and just let things happen.

I am biased as an AR developer, but I think Chris underplays the impact of AR as a computing and communications platform.

AI - and eventually AGI - is an application of computation. It's revolutionary and I think will transform everything but it's not a computing platform. That is, AI itself doesn't have an interface, so it still needs a platform to run on. AR and eventually BCI and wetware are the platforms for interfacing with it.

Cars aren't a computing platform either - they will certainly utilize and be transformed by new waves of computation capabilities, but cars aren't a replacement for a personal computer.

Same with drones and IoT. I think wearables will fall to AR as well, or rather integrate with it as a peripheral.

So when asking what is next for computing, in the context of the evolution of interface/platform from Mainframe to Micro computer to Smartphone, AR is unquestionably "next" as a platform.

What's the killer app for AR? spreadsheets:computing, email:internet, ____:AR?

I'm not quite sure what to call it, other than operations research.

People focus a lot on consumer/entertainment applications, but I think heavy and high-tech industry will rapidly adopt AR. Imagine a mechanic working their way through an interactive checklist while completing some maintenance task on a jet engine or other complex machinery. Even with hardware as rudimentary as Google Glass that's useful. But then consider Boeing in the moment that said engine unexpectedly catches fire. Imagine that they can go back and review footage of every time folks touched some component on that engine, or use basic machine vision techniques to confirm the position or state of some part. Now imagine they have that kind of visibility into the majority of what people's hands do on the shop floor or in hangars.

There will be social issues and debate over this, which we're seeing with police body cameras, but I think ultimately safety will trump people's reluctance to have their every task recorded. Certainly in industries with high safety/risk implications, we'll see similar strong arguments for going there.

Slightly less complex, consumer-focused possibilities:

- Cook like an expert: with AR, you don't have to glance at a screen or cookbook; all the steps are visible in your view or whispered in your ear as you need them, with reminders to take the pasta off the boil at the correct moment, etc.

- Repair or perform maintenance on a car - when you open the engine bay, the steps to change the air filter are available for your model of car.

"Cook like an expert": There's much more to cooking than just following steps. AR doesn't help much in preventing you from slicing your fingers instead of the onions. The wrong temperature, an egg that is larger or smaller... it's timing and knowledge/experience, not a rule book, that makes a great dish.

Same for the car maintenance: perhaps you could do it yourself and save a few bucks. But would you repair your brakes yourself with AR but no knowledge and understanding of cars and maintenance, if the lives of your children depend on those brakes?

Both scenarios might work for an expert or at least someone who knows the fundamentals, but not for an unprepared consumer.

I think he's talking about experts.

But for cooking, the AR can find out the temperature (you can do this with some tools already minus the AR), determine sizes of food and how long to cook them for, maybe overlay a line on food you're cutting to help you cut it better... etc

One part of this that will need a lot more work is the visualization of all this data. All the seminal dataviz works (Tufte, etc) emphasize the 2D world.

Not sure if there is a single one, just like there is not one killer app for the smartphone - though arguably it was the integrated camera that was the breakthrough for the iPhone.

We are betting the killer app for mobile AR is for e-commerce/shopping.

Killer 'app' for the smartphone is the pocket-sized PC. It has enough compute power (coupled with the internet) to take over mapping, entertainment, and entertainment planning (let's find a restaurant near us, make reservations, then see if there are any shows or movies nearby). It changed everything. I just wish the damn things didn't have a phone built in.

What's the shopping angle for AR you are envisioning? We can take photos of QR codes or products, or just talk to the thing and it will bring up whatever I'm interested in (I do this all the time in a store to figure out which of N products I want to buy).

>Killer 'app' for the smartphone is pocket sized PC.

Yea but that's like I was saying, it's not one thing. So there isn't really one killer app, the killer app is whatever it is for any given user.

>What's the shopping angle for AR you are envisioning?

Well our startup Pair lets you put products into your environment like they are real, walk around them, take pictures or video and then purchase.


One of my hopes is facial recognition, to help me remember who this person is who seems to remember me so well. Maybe a couple of notes on the person that I keep on the side as well...

I often totally fail to recognize people outside of the contexts and places where I usually see them. So a fix would be amazing.

That use case is very intermittent though.

A killer app would be something that you use 5-10x a day like maps, camera or email. I think it will be contextual based on what you are doing. At home it will be assistance (search, instructional) and entertainment based. At work it will replace monitors. In transit it would replace phone/dashboard GPS.

Whatever it is, I hope they have a solution for the issue of different focus for overlay and reality. Basically I won't use it if I get a headache.


The big problem I foresee with AR is that, frankly, it needs to be always on to be effective.

But have you tried keeping a smartphone out of sleep for a day? You'll be lucky to get through a full work day without needing a charge. And that's on a device you can put down while doing something else.

It seems like the problem you are describing is battery life, not the use case of being always on.

You are correct though that AR is best when it's persistent. Consider though that for parts of the day (at work for example) you could plug it in while in use.

That depends on the type of work. AR at the desk is superfluous.

AR at the assembly line, on the shop floor, or any other job where standing and walking is done for most of the day is a very different thing. But then plugging in becomes a real problem.

That said, perhaps we will see some development in hot-swappable batteries. I recall seeing a video some years back about a no-brand tablet someone found in a Chinese market. It was powered by two Nokia batteries, allowing either to be hot-swapped while the other remained in place.

> AR at the desk is superfluous.

Hardly. Maximize and customize your space how you like. You can literally change everything about it. For example, if you work in an open plan office, you can put a virtual divider up between you and the persons next to you temporarily so you can focus. You can also take your whole "desk" with you everywhere with the layout you like - like moving your laptop - except everything else comes with it.

AGI? Definition?

My own prediction is that in terms of personal computing, the hardware will start to fade into the background, along with the OS. AI will take over consumer mindshare. Windows, iOS, and Android will give way to Cortana, Siri, whatever Google comes up with, and Amazon's Alexa. Probably something similar from Facebook as well. The consistent user experience with any of them will be available everywhere. It won't be important to carry a particular device, the hardware around you will recognize you. I already catch myself trying to ask Alexa for something in my car. I'm sure Amazon will make that a reality sooner rather than later.

The scene is from Terminator 2 (1991), not The Terminator (1984).

How did he miss robots? He talked all about AI and self driving cars, but missed robots.

Robots have been pretty good for decades, but waiting for AI to get good enough to take advantage of it. This is just starting to happen.

> Of particular importance are GPUs (graphics processors), the best of which are made by Nvidia.

Don't let any gaming forums see this, they'll be arguing for weeks afterward

Processors in memory like Micron's Automata http://www.micronautomata.com

Von Neumann architecture is dead for a lot of simple compute tasks. A CISC central CPU will only be used for very specific workloads that need custom circuitry like floating-point units. Also, every CPU will come with a small FPGA as stock, not just the on-die GPU that AMD offers.

Predicting the future is a thoroughly thankless task.

I just wish the author of this article elaborated more on the financial side of the AI revolution, rather than just mentioning it at the start and then dropping it completely.

"Post hoc ergo propter hoc" (Latin)

DARPA's SyNAPSE and then the self powered brain of an ant. That's computing. Everything else could find place on what's next in consumerist garbage.

The 'consumerist garbage' is what's paying to build the factories that make the chips that power the exciting research stuff.

Plastic analog neural networks. Here is a bit: www.billpct.org

All front-end everywhere without programming imho.

