Ask HN: What tech that's right around the corner are you most excited about?
211 points by Kevin_S 6 months ago | 346 comments

New, exciting tech making its way into boring, old industries. And I mean boring, old industries.

There's an unbelievable amount of backward business process still out there. Unless you've experienced it firsthand, I really don't think you can fully appreciate how manual the "business world" still is.

For the past year I've been working with an intermodal trucking company building an app for owner-operator truck drivers so they can accept/reject deliveries, turn in paperwork, and update delivery statuses via a mobile app. If that sounds dead simple, it's because it is. But the change it brings is amazing.

While deploying the app I'd often ask when so-and-so truck driver came in to the office. The answer was usually something like "every day at 5:00pm to drop off his paperwork". A week after they start using the app, the answer suddenly turns into "Oh, he never comes in to the office. You'll have to call his cell."

Dispatchers that were tearing their hair out trying to get updates from their drivers so they can in turn update their customers now feel like they can manage double the trucks. They're asking if they can get a similar app on their phone so they can manage their drivers on the go. Managers are asking when they'll be able to ditch the office space they're renting and let everyone work from home.

When I tell people "It's like Uber for intermodal trucking", nobody cares. If they pretend to care, I have to explain what intermodal trucking is in the first place -- then they stop pretending. It doesn't sound "sexy". It's a boring industry.

I think there's a lot of boring industry out there that hasn't fully embraced technology, and I think when it finally does we'll see a cultural change in the way we view work.

I'm interning at a law firm. Two days ago I spent 4+ hours searching through public announcements on a government website. A colleague and I searched through the announcements of 500+ companies from the last 3 years. We Ctrl+F'ed for certain words; if they were present, we noted the decision date. It was that simple. But the government website had a flashy UI with Ajax that required multiple clicks to get a list of announcements for a single year.

If the government had provided an API, the same task could be accomplished in a matter of minutes.
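The manual Ctrl+F pass described above is trivial to script once the announcement texts are machine-readable. As a rough sketch (the keyword matching and the ISO date format are assumptions for illustration; a real site would need its own parser), the core filtering step might look like:

```python
import re

def find_decisions(announcements, keywords):
    """Keep announcements mentioning any keyword and pull out a decision date.

    `announcements` is a list of (company, text) pairs. The YYYY-MM-DD date
    pattern is an assumption for illustration only.
    """
    hits = []
    for company, text in announcements:
        if any(kw.lower() in text.lower() for kw in keywords):
            match = re.search(r"\d{4}-\d{2}-\d{2}", text)
            hits.append((company, match.group(0) if match else None))
    return hits
```

With an API returning the raw texts, running something like this over 500+ companies would indeed take minutes rather than hours.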

Likewise, most law offices have file stores or DMSes (Document Management Systems). These technologies are old and slow. If they used something like Algolia or Apache Lucene on a beefy server, productivity would increase many times over.
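Engines like Lucene and Algolia are built around inverted indexes: a map from each term to the documents containing it, so lookups never scan the full corpus. A toy sketch of the idea (not the actual Lucene API):

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Map each lowercased token to the ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND-search: intersect the posting sets of every query token."""
    postings = [index.get(token, set()) for token in query.lower().split()]
    return set.intersection(*postings) if postings else set()
```

Real engines add stemming, ranking, and on-disk segment files, but lookups stay near-instant regardless of corpus size, which is exactly what a slow DMS search is missing.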

Another thing I want to mention is automated translation software. Most contracts we write are bilingual. People usually slightly modify terms from earlier contracts. If there were software that could identify identical or similar blocks and find their existing translations, the productivity increase would be substantial.
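What's described here is essentially a translation memory with fuzzy matching. A minimal sketch of the lookup step using Python's stdlib similarity matcher (the clause texts and the 0.8 cutoff are illustrative assumptions, not a production tool):

```python
import difflib

def match_translation(clause, memory, cutoff=0.8):
    """Return the stored translation of the clause most similar to `clause`.

    `memory` maps source-language clauses to their translations. A clause only
    slightly modified from an earlier contract scores well above the cutoff;
    a genuinely new clause returns None and goes to a human translator.
    """
    candidates = difflib.get_close_matches(clause, list(memory), n=1, cutoff=cutoff)
    return memory[candidates[0]] if candidates else None
```

For example, changing "30 days" to "45 days" in a delivery clause still matches the stored original, so its existing translation can be reused and hand-patched.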

I wonder why these firms do not invest in IT. Being a corporate lawyer in the 21st century is not that far from being a coder: you primarily work with your computer, only instead of writing computer code, you write rules in natural language. Programmers have IDEs; lawyers still use MS Office products with lots of add-ons.

> But the government website had a flashy UI with Ajax that required multiple clicks to get a list of announcements for a single year.

That's, unfortunately, the counterproductive effect of current tech fads: UX designed around "engagement" (i.e. sucking money or attention out of people) and not around effectiveness. I believe it's important for responsible engineers to work to counter that trend.

I remember someone on HN once posted a comment about why lawyers don't want to improve productivity. It's because lawyers usually charge clients per hour; the more hours the lawyer spends on the work, the more he or she gets paid. There's a negative incentive to reduce work hours this way.

Lawyers bill by the hour, that's right. However, there is almost always a race against time. You can always do the work slower and write more hours, but you can't speed up the work if you don't have the tools.

In large offices, IMHO, clients get billed according to their capacity to pay. It has not much to do with the hours lawyers put in: if the client has the money, lawyers put in more hours. Every piece of paper lawyers print gets billed to you. Every cab fare, every x and y you can imagine. On top of all this, lawyers need to bill hours so they don't get fired. But I'd rather bill hours doing serious research than manual labor.

You've got a lot of competition. Transflo is the 800 pound gorilla in this space. But there are quite a few small providers working very hard to get into it.

And integration is the challenge: integration into sometimes archaic dispatch systems. TMW Innovative is still an RPG green-screen system on IBM i Power systems, and they have thousands of customers. They have a "web" version that is just a green-screen scraper, which isn't bad, but it is slow.

All that to say: this industry is ripe for innovation, especially for small carriers that have to implement e-logs by December. It's an interesting year in trucking.

This. A company in my town has an RFID (active and passive) material tracking system for large construction projects that does some pretty cool stuff. Imagine having a $10b construction site with thousands of parts, materials, and pieces of equipment lying around in a stock yard, and being able to know where every single one is, know if someone takes the wrong part out of the yard, know if someone takes a part to an area of the jobsite it doesn't need to be in, etc. Snowfall covered your entire stock yard? No problem, it still works.

Construction management is another similar field. People routinely have secretaries print their emails so they can write responses, or print drawings, mark them up, then scan them back in. A YC startup, net30.io, is working on construction payment tracking.

This. There are so many "boring" companies out there that could become so much more competitive if they brought in a couple of full-stack devs to write a basic domain-specific CRUD monolith running on AWS hooked up to a basic mobile app.

I built exactly the same system just over 13 years ago. I'd love to be doing that now, as the technology wasn't really there back then; GPRS and Windows Mobile were all there was.

You're right. It sounds boring but it was great fun!

We're building a similar product at https://loaddocs.co and hiring, if you are interested.

Thanks for the info. I'm working on a hardware startup at the moment so I need to concentrate on that first :)

I work in industrial data collection and communications. I'd caution that if you're going to go this route (introducing tech where there wasn't any), you need to focus on the low hanging fruit and solve actual pain points. I see a lot of companies pushing data collection without a use for the data or an understanding of the real problems companies need to solve. Sounds like the parent avoided these pitfalls.

My whole business is based on targeting older businesses. We've done some really simple CRUD apps with accounts for each employee, at quite good margins. So much work to be done; just gotta meet enough businesses.

This is so true. I have been trying to get into this space but leads are so hard to come by.

Can I ask what company you work for? Sounds like it's interesting work that's actually augmenting people's workflow to be both faster and easier. Seems like it's satisfying work.

What's the app?

But how do you find jobs like that?

TL;DR: this is what we attend/do to get clients that aren't already into tech:

- lots of general network events where only C-level staff attend (often paid networks)

- side by side with consultants and business developers that haven't got technical staff

- get references from companies we've befriended by giving them leads before

You need to find a way into non-technical companies. We find them by going to network meetings with companies outside the IT sector. I have a large network of tech companies, but more non-tech companies. Sometimes the CEO of a marketing firm has a good lead he wants in the long term, and then he brings us in to deliver something else for their business, such as an internal app for tracking deliveries. His potential client gets hyped up, I suggest potential marketing angles for the app, and in doing so direct the business back to him. Other times I have a few drinks with CEOs from different companies at network events; they get to know me on a more personal level and then pass me/us along as a reference when they hear about jobs. We also work side by side with business developers, letting them add "digitalization" as part of their brand.


Apple has solved a few real-world AR problems that were usually hard for an average app developer to get started with. With ARKit, Apple is "trying" to do the heavy lifting in terms of plane and object recognition, etc.

Another thing is the distribution platform. People will use the AR apps that hit the App Store after September like crazy. And these same people will be the primary audience for an Apple headset.

It's like, before releasing a headset, Apple is proving to people that you really need a headset to overcome the pain of "continuously" moving your phone through space.

Also, other OSes and vendors will follow the trend and release AR-compatible phones/hardware early. The only potential pitfall I see is battery usage. If it's properly optimised, I think a large AR wave on smartphones is about to hit.

Just my 2 cents!

Yeah. It's ARKit. A thousand times over.

For some time I thought the convergence of modern computer vision (including server-side machine learning based object discrimination) and mobile cameras had the potential to create tech indistinguishable from magic. And with Pokemon Go there was a glimpse at the possible consumer demand for the right combination of user interface and compelling content or use case.

But now consider something like "Medical Mixed Reality" following an exponential law of growth into the coming decades. Imagine travelling virtually inside the human body to excise the moribund sections of a liver in vivo with the help of a robotic laser operating on a patient on another continent.

Multiply the potential across verticals. And across platforms like Microsoft HoloLens, Google Tango, MagicLeap and you begin to see how the future will be "mixed"!



I knew nothing of ARKit until a few days ago when it started popping up on my twitter feed like mad. I have to say the tech literally sells itself because of the eye candy [1]. It finally feels like we are living in the future.

[1] https://twitter.com/madewithARKit

Really cool demos of ARKit there! Makes me pumped for the kind of clever apps that will be built for iOS and optimized for the upcoming iPhone. I especially liked the kitchen measurement demo.

Yes, ARKit.

With 3D hand tracking and gesture recognition.[1] When the new Clay SDK releases (RSN?)[2], I expect a new wave of ARKit demos.

[1] https://www.youtube.com/watch?v=Fa9wgHkIucs&start=43 [2] http://claytheworld.com/news/

ARKit is impressive, but AR is a solution looking for a problem. I don't foresee it becoming really useful anytime soon.

Why do you think AR will become a "thing"?

Do people really want to run around wearing glasses/headsets all the time?

If you ask me, this idea could just as well end the same way as 3D TV.

> Do people really want to run around wearing glasses/headsets all the time?

Absolutely! If I could wear some regular looking glasses that could identify objects, give me directions, etc, I would totally wear them.

I'm really good with faces, in that I can see someone after a long time and know that I know them, but I'm really bad with names, so I can never remember their name. If I could have glasses that looked in my contacts or Facebook or whatever and told me the name of the person I'm looking at, that would help me be less awkward at social gatherings.

If the glasses can name people from your contacts, it's only one more step to naming and recognizing everyone they can, including strangers, and showing any related info: reddit aliases (known and suspected), reputation in some reputation tracker or even criminal history, favorite alcoholic beverages and other interests to help with dating... Enough people don't want that that they'll ban AR devices and punch out people they catch wearing them. This stuff is already going on to varying degrees, but it's mostly businesses watching people. Getting people watching people, and really kicking off AR, needs more than the tech; it needs assurances that no one can tell you're watching. I agree there wouldn't be much of an adoption barrier in having to wear hardware, since so many people wear glasses, sunglasses, and hats all the time anyway. The barrier is just being inconspicuous.

At one point there was an Android app that could identify strangers. They nerfed it so that it couldn't do that anymore. I'm sure they could do the same for the glasses.

But yeah, that's why it would have to be real subtle.

Wasn't it an app by a Russian guy that looked up people's names on a social network? I think it served as a brilliant demonstration of the current state of privacy.

This sounds like an amazing idea for a Black Mirror episode

> that would help me be less awkward at social gatherings

And how do you think the people you are "talking" to feel when you're constantly occupied with your AR glasses? :)

Hopefully they would be subtle enough not to be distracting. Like my smart watch. :)

It's worth pondering why Google Glass totally failed as a consumer product before we get too excited about this.

Google Glass wasn't really AR the same way that ARKit is.

Glass was more of a novelty camera you wore on your face. It was extremely unfashionable, and quickly became associated with "people trying to record you" and invasion of privacy in general.

I think if Apple releases some kind of headset, it will look more like regular eyewear. It will also unquestionably be for augmented reality. Society probably won't conflate it with someone trying to snap candid shots like Glass.

ARKit, which these guys are talking about, isn't even that. It's just holding up your phone. This functionality has been around forever, from Word Lens to Yelp to the Ikea app. I don't know why all of a sudden it's considered "around the corner".

> I don't know why all of a sudden it's considered "around the corner"

Because at the end of the summer, hundreds of millions of users will upgrade to an OS that has the ARKit API built in, and developers will have a standard solution that works well. That's with the hardware out now; enhanced AR-friendly sensors will probably start shipping on the new iPhone, and in subsequent model years it's anyone's guess.

Ubiquity and accessibility. Sure, giant corporations like Ikea and Yelp made AR apps, now every iOS developer can too and there'll be buckets of documentation and tutorials to help them along the way.

Layar was an API anyone could have built AR apps with.

And Android 2.2 also had an API. But don't try to argue with people who just watched an Apple announcement.

They're different things, but don't try and argue with people on hacker news.

Not markerless, and not coupled with a relatively easy-to-use 3D API.

I already wear glasses all the time. Lots of tradespeople wear eye protection all the time.

I don't think I've seen the killer app yet, but if it's awesome, wearing glasses won't be too much of a problem once they become slim and light.

I don't want to carry a big flat phone in my pocket, but it's worth it.

I think it's all about marketing when it comes to convincing people. Nobody imagined computer phones, and the idea would have seemed weird then: like, why put a computer in your phone?

I think ARKit has potential. Pokemon Go is still big, and Snapchat glasses have generated hype among people who don't identify the glasses with AR.

I actually think ARKit will be big precisely because it's not a headset. You can just hold it in your hand.

Imagine the Metaverse exists, fully fledged, and there are two products which can access it:

1) A headset that you strap on your head, and you're "inside"

2) A picture frame that you can hold up and look through, and the Metaverse appears on the other side. You can reach behind the frame with your other hand, and it's then inside, and can touch stuff inside

Which will be more popular?

Now, I know a lot of VR enthusiasts are going to say #1, but I suspect a lot of people will prefer #2. It has several benefits:

- No nausea

- If freaky shit starts happening, you can just look away or put the device down

- You can keep talking with real world people

- You can look back and forth between the Metaverse and the real world without taking your headset off or performing any kind of switch

- You can still make eye contact with people in the real world

- A bajillion people already have an iPhone

- There's nothing really to learn, the Metaverse is just a new app on your phone, not a whole new interaction paradigm

There are two downsides:

- Less immersive

- One of your hands is taken up

I can see the uses for both devices, but I predict the picture frame gets at least 50% of Metaverse interactions over the next decade, maybe more.

Maybe I am misunderstanding your point, but I believe what you are describing is what is referred to as VR (Virtual Reality), not AR (Augmented Reality).

From what I understand, I believe other commenters in this thread refer to AR as a combination of 'real' reality and computer generated overlays. This is opposed to VR where all of reality is generated. So when we talk about AR headsets we're talking about things akin to Google Glass or Microsoft Hololens if you're familiar with these products. In that sense you are never 'inside' AR (with a headset or phone), it simply augments what you currently experience with extra data. With a true AR headset it is still possible to interact with people in the real world, you can turn on/off the extra data easily (since you are still immersed in the real world) and the computer generated overlays fit in with your real surroundings in a way that is simple and intuitive. Granted such a headset does not yet exist, but I believe that is the end goal.

What you describe as 'stepping inside the Metaverse' with no possibility of interacting with the real world is exactly what VR is all about (full immersion). (As an aside, the Metaverse as described in Snow Crash is exactly this: a Virtual Reality.)

As for the headset vs. phone debate, personally I believe that the applications of AR in a non-headset configuration will be limited. There are definitely applications for one-off utilities which don't require constant use (i.e. don't require gorilla arm), but having it available at all times and transparently via some sort of headset is where I believe it will really be useful.

What is the Metaverse and why do I want to go there?

I think there are a few different opinions on this. One is a total-immersion world with streets and a landscape of its own; see Snow Crash https://en.wikipedia.org/wiki/Snow_Crash

Another version would be something like the "darknet" in the book Daemon/Freedom by Daniel Suarez https://en.wikipedia.org/wiki/Freedom%E2%84%A2

The AR in Pokemon Go is already kind of a thing. 750m app downloads to date.

Does anyone know if there is something similar to this in Android? What are Google's plans to explore the AR space?

Wasn't Google Tango their solution? (But it requires custom Android hardware for a depth sensor.)


I haven't heard anything about them for a few years now.

Visual-inertial SLAM has been working for a few years on mobile phones: https://www.youtube.com/watch?v=e7bjsIqlbS0 So there should be plenty of third-party libraries you can use.

The beauty of it is that ARKit also requires somewhat specialized hardware. But since Apple controls the hardware as well, they can pull it off much faster than the Android ecosystem ever will.

They solve the chicken and egg problem, having phones with the needed hardware before the apps/demand even exists.

Apple already has the first mover advantage on this. Will be interesting to see how Google/Android will overcome it especially given they don't entirely control the hardware. They should at the very least add the necessary hardware in the next Pixel phone.

As far as I can tell this is not open source. It seems to be a toolkit/framework and is supposed to be the basis of actual applications with focus on augmented reality. I don't think it will become widely used if the developers are not able to inspect its code, especially when the code comes from Apple. They do have a track record of adding hidden "features" one would like to deactivate.

ARKit seems like one of those technology breakthroughs that actually has gotten way less hype than you would think, but is legitimately a huge advancement.

All the hype I've seen has been about retail applications that consumers are not going to tolerate after the (minimal) novelty wears off, which is why I think it's being mostly ignored. I think the killer apps will be ones that make people better and faster at things they are strongly motivated to be better and faster at and/or are getting paid to do by someone who can buy the hardware and order them to use it.

Why do you say this? https://techcrunch.com/2010/12/16/world-lens-translates-word...

World Lens has been around since 2010.

Layar since 2009.


Yelp Monocle in 2009.


ARKit is still the same hold-up-your-phone AR. Why is it different now?

> Why is it different now?

Because it doesn't require hardware and it's incredibly accurate...

It never required hardware; it was the same exact implementation. You hold up your phone as a magic window.

Surely you realize that there's a tremendous difference between, on the one hand, using solid APIs and tools that are baked into the OS (and thus 'installed' on a huge number of people's devices), and, on the other hand, either building it yourself or relying on yet another app that needs to be installed?

Not to mention execution: my experience with Layar is that it was pretty shit for the most part. Word Lens was cool, but it's not like I can just build an app that uses Word Lens' tech, and even if I could, it would be, well, Word Lens. Not AR in the broader sense.

Everything I've seen from ARKit seems much more solid and much more practically useful than the 'floaty overlay that roughly stays in place' glorified tech demos I mostly see.

We'll have to agree to disagree. These ARKit demos are still apps. When it comes to usability, users are still holding up their phones and looking through a magic window. Apple users have surprised me before, so I'm not discounting it. If you try the Ikea app, everything you state is certainly doable without ARKit. I just don't see it as around the corner or necessarily new and emerging.

I just wish Apple allowed devs to build apps into the built-in camera app vs. having to open tons of separate apps.

The camera is the most-used app on any smartphone; why not make it even more useful and fun to use?

Also, wearing any AR or VR headset is a horrible experience. AR needs to be created where it's seen and enjoyed without a headset! It just needs to blend in with daily life.

Agreed. AR, like so many other computing technologies, didn't properly exist until Apple invented it for us.

Textblade: https://waytools.com/

Single-row keyboard with minimal finger movement. It was to be delivered in March 2015 or so, but it has yet to be released. They have testers who rave about it and it looks incredible, but it is perpetually around the corner.

Originally designed as an ultra-portable phone keyboard, those who have used it tend to use it for all their machines. It has jump slots to quickly switch from device to device.

When I load it in Firefox for Android, that link pops up with an alert that reads "Please view with Chrome, Firefox, or Safari," and then just presents a blank white page. Using "Request desktop site" seems to fool it.

I have been using Minuum on my phone for about three years now, which seems to be a digital version of a similar concept.

Couldn't be happier.

Wow, that is brilliant! But it's not on the market, you say? Well, it looks like something I can draft and print. I won't be competing with them, that would be rude, but I must have this for myself.

They should remove the unresponsive Buy button, that just looks bad.

You can take a look at some forum posts to get a sense of where they are at: https://forum.waytools.com/t/tech-update-la-la-state-drill-d...

They keep tweaking and iterating and keep saying another couple of months.

I saw the videos and it looks really innovative and great.

Although, my concern is that it will not catch on, because people don't usually like to learn "two" keyboards, even though Textblade is essentially a QWERTY keyboard.

The same happened with Dvorak and other variations of the keyboard. While it's easier to type on Dvorak (my own experience), you're going to have to type in QWERTY on other computers. Sometimes you'll get confused about where keys are.

This is also true when switching from Mac to PC and vice-versa. I spend an hour on a PC and when I switch back to Mac, I automatically press Fn instead of CMD.

And worse: learning another keyboard that can only be used on a desk, where space is not limited.

This product is a flop. Why do people even know about it?!

Wake me up when I can use it as a real phone keyboard, like the Priv already does.

I don't know if this is the solution, but I agree that improvements in human-computer interaction are one of the big things holding back tech for a lot of problems. Obviously a non-invasive brain interface would be the holy grail.

Do you really want to hook your brain directly up to the internet? I'd have serious reservations about that.

That makes it sound like it'd be a two-way street - like an Ono-Sendai from Neuromancer, and not just an input device. To offer an analogy, are Alexa, Siri, Google Assistant, or Cortana "hooking your voice directly to the internet?" In a sense, yes, but that isn't going to take over your voice...

I wouldn't necessarily hook it up to the internet but I'd consider hooking it up to a non-network connected computer.

Some people would do it in a heartbeat.

No doubt they would. Just like some people will base jump and put magnets under their skin.

Hi! Very strange juxtaposition in your examples of the "irrational" or "stupid without established merit".

Subdermal magnets aside, BASE jumping comes in many forms. You may think of it as guys in wing suits zooming past the ground at 5' clearance and killing themselves in droves, and you could be forgiven for this perspective, as it's the one most focused on by media/movies/etc.

It's a sport, performed on the overwhelming whole by highly trained, cognizant people, in a legal context, in situations well within tolerance, and that's that. Instead of jumping from a plane, it's from a fixed point.

I'm one of the testers, and yes it is awesome. They severely underestimated the development time required for it, but considering where they're at, I'd expect it to be released first quarter of 2018.

Not available? I didn't go through the full check-out process; but what happens when you go to buy it? I don't see any reference that it's unavailable.

I just tested it; eventually you get to the end of the shopping cart that asks for credit card details. The submit button says "Pre-order Now".

Yes, you can pre-order it. I have. They will charge you right away and then you will probably wait months, maybe years, until they are ready to release it.

Given how amazing it seems, it was worth it to me to take this gamble for $100.

Unlike a kickstarter, they do offer a full refund if you ever get tired of waiting. I went through that process once when I bought an iPad Pro with the keyboard case; this was before people were testing the Textblade. They did promptly refund my money. I then decided I really did not like the Apple keyboard case, returned it to Apple, and re-pre-ordered the Textblade. That was over a year ago.

The really annoying thing is that they do not do any push notifications, such as emailing status updates. To get their randomly timed updates, one has to be on their forums.

Gene editing, particularly on living people. I'm looking forward to cancer treatments being no more involved than an antibiotics regimen.

I don't know if it's around the corner, but considering the human genome was completed circa 2003, I'm pretty enthusiastic that it isn't too far away.

It's a ways off, but I'm looking forward to the day genetic engineering lets people see colors they aren't able to see today. That and other sensory enhancements could revolutionize art, culture, and society.

You may be excited about a technology from the sixties called "acid" that allows you to do just this. (Somebody had to say it)

I'm not sure gene editing is really the quickest way to see more colors; I would assume a brain-computer interface solution would happen sooner.

I'm looking for a CRISPR (or equivalent gene-tech) loading system to be implanted, so we can easily install upgrades to our genome as a species: patches that fix common flaws that dramatically increase disease risk or shorten life expectancy, along with improvement patches that make us better in various ways. It's not coming anytime soon, but it's inevitable, and it will be more beneficial than the vaccination programs we've spread globally, which work similarly in how they benefit the species as a whole as a sort of upgrade. I don't know whether it'll be more efficient/ideal to have it function as an implanted system that checks and regulates constantly, or as something that is performed every so many years (you'd get your first patch in the womb).

It probably seems like sci-fi now, but some early concepts in this space will be possible within 30 years.

I'm a little bit concerned about the improvement patches. We should get rid of devastating diseases, but we should also be really careful about how we decide to change the species. We'll be deciding that for future generations.

> We'll be deciding that for future generations.

I think this is only true for genetic modifications to germ cells or embryos, which is why the recent embryonic gene editing story [1] is so controversial.

As far as I know, gene editing in somatic cells only affects the individual and not their offspring, unless that change somehow propagated itself to the genetic material in the gametes.

(Disclaimer: this is in my field of interest but way outside of my field of expertise)

[1] https://www.scientificamerican.com/article/embryo-gene-editi...

> We'll be deciding that for future generations.

Isn't the premise that they can decide for themselves?

There is the possibility of editing the germline so that all future generations will have the improvement.

Now imagine parents deciding this for their future kids, grandkids, etc.

Every decision we make affects future generations; this one is only more direct.

Point is your kids can revert your changes.

I think "around the corner" is a little optimistic, but I totally get you and I think with more computer scientists joining the research in this area we will see tremendous advancement in the near future.

Much of what people have mentioned here will be really great. I think self-driving cars have the most potential to impact people's lives day to day, followed by AR. I guess infinite, cheap, clean energy could also spark another industrial revolution.

But all these things aren't likely to impact the happiness and life satisfaction of those living in the developed world. The internet has been huge, but it really hasn't made us happier as a whole.

I would like to see someone create something that will make people's lives happier. That probably means doing something that will foster good human relationships and real world experiences.

I guess there's a lot of potential for driverless cars to help with that, but they could do the opposite as well. I think we need better tools for connecting with each other, understanding each other, forming social organizations and communities, and maybe changing the geography of cities to bring us closer together, rather than making it possible for us to be further apart. It's likely that new technology isn't needed, we just need to use what we already have in a new way.

>but it really hasn't made us happier as a whole.

Serious question. Has any technology done this?

We don't appear to be happier than any other animal. Even in the developed world plenty of people are downright miserable.

Unless primitive humans were just far sadder than the average animal, I can't see how technological advancement has made us happier as a whole, because we're not all that happy.

If antibiotics, electricity, and the steam engine didn't noticeably move the needle, it seems absurd to expect phone tech to do the deed. Whatever technology is solving "for the whole", it's not happiness.

The frame of thought I use to think about this is that evolution simply gave humans a brain that is far too big. We cannot help but tinker with things and think about things like science or philosophy. It's not whether the steam engine made us happier; it's that the steam engine is the result of a species bored out of its wits and needing something intellectually engaging to do.

Yes. Alcohol and various drugs, for example.

I jest, but only a little bit. The thing is, happiness is not really a good goal for technology. Humans are good at noticing what is not OK to them and focusing on that - while also growing resistant to persistent problems. So they keep the baseline.

Things have changed a lot, however - we've been moving up on the Maslow's hierarchy of needs. Thanks to technology, most Westerners no longer really worry about food, shelter, personal security and forming relationships. We've moved upwards - that's why we have time and will to bitch about abstract topics on the Internet. The very controversies our society has is the evidence just how far our technology has lifted us. Starving people don't fret about identity politics and gender imbalances in hot industries.

This is a realistic goal for technology - lifting us up and solving the problems we identify. Feeling happy is something we either need to learn individually, or brainwash (or engineer) ourselves into.

>> The thing is, happiness is not really a good goal for technology.

>> Thanks to technology, most Westerners no longer really worry about food, shelter, personal security and forming relationships.

In the Buddhist tradition, one dedicated to happiness, one barrier to happiness (or happiness training) was the need to take care of all those things. So happiness seems like a decent goal for technology, but maybe not the best ideology to win in the tech dev market.

I think one of the underreported aspects of self-driving cars is what they might do for urban income inequality. The prices of cars and housing are two big things that keep poor people poor. If you can't afford the upfront or ongoing costs of a car, you need to live along mass transit lines to get to your job, or endure a much longer commute, which costs extra time and money. Housing along mass transit lines can be much more expensive than housing a good distance away.

Self-driving cars can help this in two ways. First, the cost of transit should decrease: human drivers will no longer be needed, and the vehicles can be used more efficiently instead of sitting idle most of the day. Bus, taxi, and ride-sharing costs should drop dramatically. Second, commutes by car should become quicker and less of a burden for the driver. This should increase the "acceptable commute" radius for the average person, which adds choices in housing and more uniform demand that isn't as closely tied to proximity to urban areas, mass transit systems, and highways. The end result should hopefully be cheaper housing.

I agree. Driverless cars will be awesome. I hope they will make people happier, but people tend to quickly adjust to changes in their environment and revert back to their "steady-state" level of happiness. Good relationships and experiences are the only things (other than maybe drugs) that can boost long term happiness.

Having more material wealth or free time can definitely make you happier, but generally only if you use it to spend time with family and friends. If people use self-driving cars to ferry themselves back and forth between their empty McMansions out in the exurbs, and work, then they're probably not going to be any happier.

I think self-driving cars might not change anything here unfortunately :( If we take silicon valley as an example, the wages going up has just driven rents and housing costs up. Essentially housing is a good with inelastic demand, so it gets priced more like a tax at "x% of income". Same seems to be true for restaurants, grocery stores - they just raise the prices to match the income level of an area.

>Essentially housing is a good with inelastic demand

I apparently wasn't clear in my post, I think this technology can alter the elasticity of that demand in urban environments. Yes, everyone will still need housing so the demand as a country will not change. However people are going to be willing to live further away from their jobs because commutes will be quicker and easier. Demand will shift from urban areas to suburbs and suburbs to exurbs, getting cheaper at every step of the way.

Indeed. One thing we tend to overlook when we get excited about technological progress towards some not-too-distant happier/easier life is the commercial framework these technologies will reside in. Yes, predictions of cheap ride sharing autonomous vehicle pools may become a reality; but cost vs price will still be an issue. All this will be great for those who own the cars in the pool, but not so much for those who must pay to use it.

If you want to bring people physically closer together, just lean into NIMBYism and keep housing supply small. As current trends continue and require more people to live in city centers, we'll see more and more instances of people forced to have roommates when they'd rather not, multiple families forced to share a single apartment when they'd rather have their own, etc.

But somehow I think you will actually create conflict, tension, and claustrophobia, not happiness.

I disagree. Living in a dense city does not necessitate living in some sort of tenement and being at each other's throats. Living costs are high in cities because people who bought property early like it that way. They want their properties to appreciate, so they prevent others from increasing supply. Governments seem happy to object out of some mistaken sense of social justice or historical preservation.

Still, people in dense cities who endure higher living costs tend to be happier and healthier than those outside of them (even when controlling for income). People are willing to pay crazy amounts of money to live there for a reason, after all.

I wish we could build new cities. That would provide an opportunity to do it right, and with a design that is appropriate for the 21st century.

> Still, people in dense cities who endure higher living costs tend to be happier

Most of the studies I've seen have said the exact opposite, that those living in urban environments are actually less happy: http://bit.ly/2v5cQni

In some sense upzoning is still living further apart, just on the vertical rather than horizontal plane.

Driverless cars could mean fewer parking lots... Hallelujah!

I am excited about both WebGL & Web assembly.

Although both of these have been around for a few years, we are yet to see general adoption of either (might be due to inconsistent browser support).

Now with the rise of VR, 3D printing, and powerful GPUs, these two technologies are bound to open new avenues for an immersive browsing experience. I imagine that in the next few years we'll have: 1) web stores that show a virtual 3D shopping mall, 2) 3D virtual try-on of garments, and 3) VR coaching of physical activities like tennis, judo, taekwondo, and dance.

I've been interested in learning WebGL but have been hesitant to take the time to learn it because it didn't seem like a skill companies would hire for. Do you think it will become a marketable skill in the future?

WebGL is very similar to OpenGL ES. In general, having basic knowledge of OpenGL and the rendering pipeline is always a good thing. But that alone won't provide a lot of value to a company.

Start out with some tutorials and learn the basics, but keep in mind that there are frameworks like three.js [1], which abstract away all the nitty-gritty details, like setting up a render context, initializing texture buffers, or loading 3D models from known formats. By using a framework you will save a lot of time and you can concentrate on the fun stuff.

But even three.js can be kind of low-level, because depending on what you want to achieve, you still need more 'boilerplate'. If you want to write a game, you need stuff like a UI framework within the render context, a physics engine, particle generators, pathfinding, etc. If you need this, maybe take a look at Unity [2] or Unreal Engine [3], which provide said extensions and a lot more. Those engines can build applications for different targets like Direct3D, OpenGL, or WebGL, so you can basically cover browsers, PCs, and mobile platforms with the same codebase, which is truly awesome. But be prepared for a steep learning curve (it's fun, though!).

When you start working on stuff like this, you realize, that those frameworks and game engines are just tools. In the end, you need to combine those tools with some kind of domain knowledge, like art, game design, architecture, interface design or something similar to increase chances finding a job in such a sector.

It's a lot to ask for, but you need to start somewhere. And if it doesn't work out, you may find a new cool hobby and gain programming and design experience along the way.

[1] https://threejs.org/ [2] https://unity3d.com [3] https://www.unrealengine.com

We were excited about VRML in the 90s...

In the 90s we were also excited about the internet and mobile phones. We had just stopped being excited about neural networks.

Things mature at different rates.

Two things have changed since the 90s:

- hardware acceleration is everywhere, we all have an SGI Onyx in our pocket

- good positional tracking gives us a natural 3D input technique

Without those, the tech doesn't work.

Don't forget:

- always connected

- population increase

- boredom increase

- social networking (ie. highly targeted ads)

- mobile payment

- global map / elevation / city map / traffic data free online

Fully & Quickly Reusable Rockets (refuel & refly, like jets)

Obviously both SpaceX and Blue Origin are the leaders here, but once they do it the other majors will either have to build the same thing or drop out of the industry.

There are so many things about space that we just assume are true, but are actually only true because access to orbit has always been so expensive. If we can get the cost to reach orbit down to a multiple of the fuel cost, then so many more things are possible.

We finally get large satellite constellations for low-latency Internet all over Earth. We get space stations and O'Neill cylinders. Moon bases and fuel depots on Titan.

At the same time, firms like Made In Space are working on in-space construction so you can build radio telescopes in space with arbitrarily large dishes (10 km, maybe?). Eventually we build mirrors that size too.

Basically just those two things are the only barriers between us and a solar-system-wide civilization like in The Expanse.

Arguably the third thing is ISRU (In-Space Resource Utilization), aka, mining asteroids for fuel and materials. But that's not immediately necessary and doesn't count as "around the corner".

My understanding is that "ISRU" normally stands for "In Situ Resource Utilization", which can include mining asteroids (if your locale is 'asteroid belt'), but also includes using fuel that can be generated from resources found on a planet or moon.

RISC-V. We should see the first hardware running a real Linux distro in 2018, and it should proliferate from there. RV32 should also start showing up in microcontrollers.

Still need an open GPU, but I think a bunch of risc-v cores with vector extensions running LLVMpipe would be reasonable for running a Wayland desktop.

Mass adoption of electric vehicles. Battery prices are plummeting and the advantages of EV are so great that the moment they break that $20k barrier with zero subsidies (which should be within the next five years) the switch will be rapid. I suspect at least 1 in 3 personal vehicles in use in major metro areas will be an EV by 2025.

I'd rather have adoption of high density urban areas, public transit and walkable/cycle-able neighborhoods.

Traffic jammed highways will still be traffic jammed highways with EVs.

I would absolutely own an electric vehicle if I was part of a multi-car family. But until electric is as easy and quick to refill as gas (long road trips, etc.), I would not own one as my only vehicle.

I think battery price and availability are the main bottlenecks right now, so improvements on that front are what's got me excited.

Ethereum's Casper

It aims for more economically secure public blockchains with shorter confirmation times and less cost (electricity/hardware/inflation). I haven't delved deep enough into it to be fully convinced, but what I've gotten through so far is promising. AFAIK it's the only proof-of-stake algorithm that's been formally documented.

For me it is Raiden. I believe Ethereum should be able to beat Bitcoin's Lightning Network with its own implementation, and for the cryptocurrency of the future, Raiden is very important for Ethereum.

PS: So you don't support Ethereum Classic anymore?

Also more excited about Raiden. Sad there isn't much more information about it.

From my understanding, the thing I like about payment channels is that the capacity of the payment channel depends on how much of the cryptocurrency is deposited/locked away under it.

So with everyone competing for payment channel tolls, they have taken large amounts of crypto off the market, constricting the supply, while spreading the marketing out across their own use cases, some of which will be successful at increasing demand. Any limited-supply commodity performs the same under these circumstances: up. And that is exciting because the capacity can also scale with the new attention, while the "centralized" payment channels stay optional ways to transact on the network.

FYI you can go use working state channels right now at https://showcase.funfair.io/

I prefer the laissez-faire philosophy of classic, but the innovation is happening on the main chain.

I don't know much about Raiden, is there anything preventing a side chain of bitcoin from implementing it?

Bitcoin-style blockchains have Lightning, same thing

Crystal & Julia 1.0. Crystal because it's a blazing fast, compiled, and statically typed version of Ruby (or 80% of one, anyway), and its web server is awesome. Now it just needs full concurrency.

Julia because it offers a nice, performant alternative to Python & R in data science, while avoiding Java & C++. It has some really nice features like multiple dispatch and the ability to run R, Python, Fortran and C code inside of it, so you can use libraries like Numpy in Julia.

Not trying to be a troll, but...

I was on the Julia train full-force until I started using it more heavily. My sense is that the benchmarks are a little misleading.

These benchmarks give a little flavor of what I'm talking about: https://github.com/kostya/benchmarks

If you look at them, you'll see that Julia is indeed crazy fast in some cases. But in other cases, the performance is kind of middling. The native threads implementation of matmul is obscenely fast, but the "pure Julia" implementation is pretty slow--faster than R or Python for sure, but not in the same ballpark as C.

You could argue "well, why would you not use the native threads implementation in that case?" However, for someone implementing a routine in Julia, the implementation would be the pure one. That's the point.

With something like Rust, or Crystal, or Nim, what is promised is basically what you get: something in the ballpark of C. But with Julia, I feel like it's kind of unpredictable.

The problem is that the slow parts end up being a bottleneck that slows down the rest of the code, and that slow code is slow enough that it's not worth it relative to C or Python/R. That is, if you're writing heavy numerics, when you get to 20x slower than C, you might as well write it in C (or Rust or Crystal or Nim), and wrap Python or R around it, because that 20x is enough to kill it. At that point you might as well go with the established language even though it's 100x slower as an interface, and use something fast for the heavy lifting.

Crystal I can't say much about. It seems great, but the "everything is an object" approach makes me nervous--I tend to get anxious about inheritance because it drives me nuts. Nim seems more attractive to me. But who knows--it's also not like you can't have more than one language out there.

I see, thanks for that info on Julia's performance.

As for OOP, I feel like that's more a matter of preference. I don't buy that functional is better (generally speaking, not in specific situations where it might be). But I could be wrong; I just don't see evidence, only anecdotal claims by fans of functional programming (which doesn't mean they're wrong, only that I'm not convinced). To be fair, they have arguments against OOP in favor of functional, but I'm not personally sold. Or maybe I just like Ruby and its approach to OOP.

Ruby did/does have a nice OOP approach. I remember when Python and Ruby were first kind of being touted as alternatives to Perl, and Ruby always seemed more well-thought out to me--the Python 2-3 split was something I sort of saw coming.

I wouldn't say I'm a functional purist (at least so far), but inheritance in particular has caused me some headaches in the past. I like Rust's approach to all of that, but it's much lower level.

Honestly, if Crystal ended up taking over the programming world, I wouldn't mind. It's a very nice language with a lot of advantages over what people are using now.

Yes, can't wait for Crystal 1.0 release, awesome language.

From crystal-lang.org, Crystal is a compiled language

I never understood the phrase compiled language.

A compiled language uses a compiler to create an executable binary, as opposed to a runtime that interprets or just-in-time compiles the language as it runs. Ruby is interpreted; C is compiled. Generally speaking, compiled languages are more performant but lack some of the flexibility of having a runtime available.

Crystal sacrifices some of Ruby's metaprogramming capabilities for performance and static type assurances.

There are many C interpreters.

I'm aware of PicoC, Cling (nee CINT), and Ch (commercial), but that's only three. What others are there? I'm curious about canonical C interpreters as well as things that aren't quite interpreting C but are somehow related.

Here is another http://compcert.inria.fr

[08/2011] Release of version 1.9 of the Compcert C compiler. Novelties include a reference C interpreter and stronger semantic preservation results.

Oh, another C compiler! I like collecting these - thanks! :)

Question for anyone handing out unofficial legal opinions:

This one is dual-licensed. There are a small kerfuffle of custom licenses that say "no commercial use", but everything is also variously covered by the GPL (v2 or later), LGPL v3, and BSD 3-clause.

Ref: https://github.com/AbsInt/CompCert/blob/master/LICENSE

I'm curious if I could use this in a commercial setting if I perfectly comply with the various GPL and BSD licenses.

If I could, that could be a problem for this group, because the repo readme (at https://github.com/AbsInt/CompCert) explicitly states "this is not free software."

tcc ?

https://bellard.org/tcc/ has:

C script supported : just add '#!/usr/local/bin/tcc -run' at the first line of your C source, and execute it directly from the command line.

Ah, I forgot tcc. Such an amazing system, so sad that development stalled.

FWIW, I think tcc compiles into RAM, then jmp-s to the result. Pretty much just eliminates the creation of an output file.

Do you genuinely not understand what they mean (that it produces binaries and isn't run in an interpreter or VM)?

I think their aim was to highlight the distinction between a language and an implementation of that language and possibly argue that only a particular implementation should be referred to as compiled/interpreted.

What I understand is that they invented a language called Crystal. And they made a compiler for it. The phrase "compiled language" is not very meaningful. It is quite possible for someone to make a Crystal interpreter.

Sure, but the goal of the language creator is to create a compiled Ruby-like language. As such, Crystal is by design meant to be compiled. Which is also the case for languages like C, C++, Go, etc.

What properties of C indicate that it is meant to be compiled? The word "compiler" does not appear in the ISO/IEC 9899:1999 standard (aka C99) in any meaningful way (there is just one sentence in a footnote).

That it's fairly close to assembly for a high level language, and most C that has ever been written has been compiled.

Okay, but it still seems strange to claim compiled-ness as some sort of intrinsic property of a language.

It's more a general descriptor for languages designed with compilation in mind.

I can't wait to never use MATLAB again.

What's keeping you on Matlab? In my experience it's mostly the domain-specific libraries or Simulink.

The core matrix-y stuff can pretty much all be ported to Python with similar or better performance, especially using Numba.

Part of it is me inheriting a legacy code base and part of it is the fact that it is easier to collaborate with members of my field using MATLAB as it is common knowledge.

Python is fine, but I do like having matrices as first-party language members -- that's just a nitpick though. I'm starting a new project soon that I intend to do in Julia because it seems to have everything I like from Python and MATLAB combined.

Don't forget the core matrix-y stuff can be directly run in Octave. But that's just a cost issue, it doesn't get away from the weird language.

I was choosing between Crystal and Elixir for my personal edification, and I picked Elixir.

Elixir looks really interesting as well. I just hope it leaves some Rubyists behind for Crystal. The reason I prefer Crystal is that I really like Ruby, and Elixir is a much different language.

And then there is Ruby 3.0, which will have concurrency and (optional) static types according to Matz. But that's slated for 2020.

Safe choice. I like both, and Crystal is definitely faster, but it's not as developed as Elixir in terms of ecosystem maturity or community.

ARKit finally pushing AR to the masses. From a developer perspective, having an SDK that simplifies "environment detection/reasoning" is huge. Previously this required pretty specialized hardware, and now it's turning into an AI/ML/software problem.

A usable AR SDK has been available from Qualcomm/Vuforia for years on more devices than ARKit supports. The only thing that came out of it was marketing gimmicks.

I thought Vuforia was mostly just marker-based AR, which never really took off in the US market. ARKit is markerless, and their VIO system is actually pretty good.

ARKit's ease of use is way ahead of Vuforia's. ARKit also has the advantage of being OS-integrated.

Yep - I have been using Vuforia since it came out. Fantastic for scanning in models, and the usability is great. That said, I keep coming back to the fact that the install base of iOS 11 is what intrigues me the most. By no means is ARKit the magic bullet, but it's enough, and it can tempt developers all the same.

Does it have OCR built in?

I wouldn't hold my breath on it. Real-world OCR is hard. According to my tests, even Microsoft, Google or IBM's APIs get it wrong very, very often.

Not that I've seen, though with some logical combination of object dimensions, environment variables (depth of field, etc.), and decent image recognition/ML, you could probably get decent OCR. Note that this is for visual recognition; flat text is decently solved (exhibit A: Google Translate).

There are rumors of phones with batteries lasting longer than 5 hours. I'm still not sure if it's true or not, but when it happens it will revolutionize what you use your phone for. It may even replace the watch completely some day.

Obviously I insist they still need to be as thin as they are now. That is much, much more important than batteries.

Getting a solid 32 hours (on average) out of my low-end Samsung J3 with 3G and location always on. Had to optimize a few things to get there, though.

The Chinese phone "Elephone Fighter" has, as its selling point, 20-day battery life (5,000 mAh battery). It's not released yet though, so for now it's mostly marketing.

Why would it replace the watch?

Phones and watches are either two entirely separate things, or indeed, once miniaturization and battery tech improve sufficiently, the watch, being much more convenient to carry around and being instantly to-hand, glanceable etc, will replace the phone for everything except large-format display, just as with pocket-watches.

? I get at least 10 hours out of my Pixel on a daily basis, what rumor are you talking about?

It might be a lark, or a very high performance phone. :'))

Try again in 6 months.

can't tell if you're being serious or not...

Are you guys for real? This is ridiculous! I've been on the old Nexus 5 for ages. I think, if we aren't going to get Android 7, then maybe it will be time for me to switch to one of the fringe mobile operating systems.

I get about 20 - 40 hours between charges, depending on whether or not I need to use Google Maps that day, and I often stay in airplane mode when I am working.

Airplane mode is cheating. My Nokia 920 lasted for a literal week during my honeymoon. It sat in the room in airplane mode and zero usage other than an alarm clock and a few dozen pictures. No one cares how long a smartphone lasts when you're not doing smartphone things with it.

Judging from the responses, it seems like the addition of an /s might be necessary.

If by "tech" you mean mostly computers and software, then WebAssembly is exciting. We've had 5 or more years now of companies inventing languages that compile back to Javascript, even though Javascript is a terrible target for compilation. WebAssembly was designed to be a true compilation target, and will allow an endless number of languages to be used on the browser.

WebAssembly helps create more space between the kind of languages that developers want to use, and any particular GUI output, such as HTML. In a different thread, I just wrote about what is wrong with HTML:


If by "tech" you don't mean computers/software, then CRISPR is clearly going to be a huge thing going forward.

AR. Being able to superimpose software on top of real objects is amazing. It has so much economic potential that it hurts not being in the space already (working on that though). I feel AR will be the app craze 2.0.

> It has so much economic potential that it hurts not being in the space already (working on that though)

As someone not familiar with the area, could you elaborate on some examples of what you think the economic potential is? To an outsider like me, being able to overlay virtual objects on a scene seems like a nice curiosity, not particularly an engine for commerce.

Edit: specifically, I was curious about areas where AR offers a (financially) meaningful advantage over traditional HUDs (for example).

To me the benefit of AR is not just overlaying virtual objects on the physical world but specifically highlighting real objects in your field of view. Granted, this does just come down to placing a virtual object on the physical world. I can give a specific example that is perhaps not of great financial advantage, but I think this kind of thing will be useful - using AR to give you directions for building IKEA furniture. You are sitting in front of a pile of materials, and the AR tells you to put _this_ bolt (highlighted) in _this_ hole in _this_ piece of wood. The directions wouldn't be on a piece of paper but rendered on the actual objects.

I feel the key AR advantage will be that we don't have to spend half our time glancing down on tiny rectangles anymore.

The Oculus is already less than an order of magnitude away from competing with regular LCD panels. Google glass and maybe that magic leap thing give us hints that it will eventually happen for mobile.

No more displays, now that's something I would pay for.

Problem is that not everyone will want to wear something over their eyes on a regular basis if they don't need to.

It'll be glasses then contacts.

I would rather no-more-displays by way of voice command.

If Siri advanced enough, I would love to leave my phone at home and bring only my Apple watch.

Do you think that's plausible with ar kit?

I think you're missing a more important use case, namely, being able to layer contextual information over objects in your field of vision.

Use cases in industry abound. On a trip to the doctor's office, your physician can overlay your medical file and get pertinent information about you. If you are a politician holding a fundraising dinner, AR + facial recognition will supply you with the biographical information, social media profiles, voting records, donation history, etc. of the people in the room, allowing you to schmooze ever so effortlessly and ever more efficiently. AR tailor-made for athletes could take a number of physiological measurements like heart rate, VO2, etc. (with wearables or implanted sensors) that could combine to produce some sort of stamina bar, allowing for more effective substitution patterns. An activity like paintball will feel closer to an actual military engagement or a video game if a minimap showing your teammates' locations is in the corner of your vision.

The Terminator franchise did a great job of showing its utility by giving Arnold "Terminator vision" in shots from his POV. Video games also make extensive use of OSD. A first-person shooter's OSD really shows the advantages of AR.

Advances in computer vision and networked IoT devices will be a multiplier for AR's utility. Imagine an app that overlays your vision with the mathematical patterns of nature: illustrating the Fibonacci sequence in the leaves, petals, and seeds of plants, or in the way tree branches form and split. It could highlight all of the phenomena that exhibit golden ratios. It could show the equations of motion for objects moving or spinning in your field of view, or detail the biomechanics of the elegant dancers of the Bolshoi ballet. In a way, you get to experience what the mind of a Leonardo da Vinci or John Nash might be like, as if you were able to get an insight into the mind of a genius.

That was an insightful comment, thank you. Like a few others in the thread, I was skeptical of AR's practical potential - the demos/apps I've seen so far felt like "a solution in search of a problem" - but your examples and concise summary helped me to imagine the range of possibilities.

The ability to "layer contextual information over objects in your field of vision"... I see, that is the crux of it: to further integrate the "real world", the sensory experience of a person, with the global web of information and computing. Great examples you gave of industries that could utilize this: healthcare, the political/social/interpersonal sphere, and athletic or intellectual/educational/artistic uses.

It conjures a vision of a near future where people enhanced with AR - or maybe "augmented senses" - would have distinct advantages over people without access to such layers of contextual information.

One of the financial uses for AR is the ability to show a user how a product will fit in their life before they've purchased it. This has a lot of potential for any large purchases that need to fit in your home (furniture, appliances, renovations, TVs, etc.), and once we have access to high quality full body tracking it could be applied to fashion as well. Lowe's innovation lab is doing a lot of this already on a variety of different devices for both at home and in store (https://blogs.windows.com/devices/2016/03/18/microsoft-holol...).

Beyond that, with AR you can make certain classes of digital goods and marketing "exist" in the real world. The margins for AR merchandise will be significantly higher than real physical merch, although I'm guessing the prices will be a lot lower. Selling virtual items that you can see in real life opens up a lot of cool and fantastic options as well as more mundane stuff or a mix between the two.

If AR becomes ubiquitous it gives big tech companies a lot of data that no one currently has. Specifically, you would have a detailed 3D map/point cloud of most of the real world at various points in time. You could build an awesome developer platform on this, use it for VR tourism or simulation, license it to film or game producers, etc.

There are already many use cases for industry and enterprise customers when it comes to training and data overlays for workers.

A boring but lucrative use case arrives when resolution and tracking get good enough, assuming the form factor shrinks to eyeglass size: AR could potentially replace all conventional screens and be more comfortable to use than smartphones.

Note I'm using AR broadly, including Google Glass/HUD-style stuff, ARKit/Vuforia smartphone AR, and Hololens- or Magic Leap-style MR or "holograms".

It's to the smartphone what television was to radio: it adds a whole new dimension of information delivery. That's the financial advantage.

Huh? The smartphone added portability and (most importantly) two-way communication. AR adds... distraction, I guess?

Iron Man style AR has so much more consumer potential than VR imo. Being able to get real-time info about the world around me without having to wave my phone around would be killer.

Isn't the big problem there holograms and not AR?

If you are looking through glasses, holograms become relatively easy.

Beat me to it. Industrial AR is going to be a multi billion dollar industry very soon, and for a very good reason.

Still waiting for the day when I can buy a computer with a full-color e-paper display and sit out in the sunlight to work.

Mirasol [1] was very promising in this regard. A reflective, bistable display is also one of my dreams.

[1] http://www.the-ebook-reader.com/mirasol.html

So, fast refresh e-paper displays?

I'd like a very large, performant-enough display solely for editing in vi. Even that is still just a dream, as far as I can tell.

Have you taken a look at the Dasung Paperlike yet? http://www.dasung.com/english/

This reminds me of reMarkable (https://remarkable.com/).

Is this more advanced than just an Indiegogo pre-order?

have you? how's real world usage? is ghosting as bad as every other eink?

Honestly I'd be fine with a slowish refresh, I'm not gonna be playing games, I'd be running Illustrator.

Without doubt, GraphQL is the next technology I will learn. The idea is so brilliant compared to REST. If you are convinced by SQL and in general by domain-specific languages, I think you should give GraphQL a try.

I'm intrigued ... can you elaborate on the wow factor beyond REST that you see in GraphQL? When I looked at it in the past, it seems to make sense for things that could be represented by graph (e.g. FB's social graph), and graphql allowed you to run complex operations in the backend without fetching each individual node on the client. I clearly missed something?

This frontend code.

  const ProfileImage = ({ user }) => <img src={user.avatarUrl} alt={user.name} />;

  const query = gql`{
    user(id: 3) {
      name
      avatarUrl
    }
  }`;

  export const ProfileImageWithData = graphql(query)(ProfileImage);
You can ask for data in the structure the UI expects and only get exactly that data, letting the GQL client handle batching network requests for needs distributed across the whole app, caching, UI updating, etc under the hood. Also does a great job of decoupling product and data management code.

And if you can also elaborate on why SPARQL and RDF were a major failure, and how GraphQL is different/better?

> it seems to make sense for things that could be represented by graph

False: even if there is "graph" in its name, it's not restricted to graphs, and that's what is great about it compared to things like RDF and SPARQL. Backend data can be stored in a key/value store, documents, files, an RDBMS, or whatever, and you will still be able to make it work. AFAIK you write a "translation" layer that interprets a pseudo-JSON query against your data.

AFAIK GraphQL is not good at querying recursive data structures; it's better at "neighborhood"-style queries, so you can traverse foreign keys, but not "indefinitely".

> graphql allowed you to run complex operations in the backend without fetching each individual node on the client.


> I clearly missed something?

IDK. For me, GraphQL is basically an RPC interface particularly suited to "reading" data.

> And if you can also elaborate on why SPARQL and RDF were a major failure, and how GraphQL is different/better?

GraphQL is not tied to particular data layout. I don't know much of SPARQL and RDF actually and don't know why they failed. I am just guessing.

That said, GraphQL is "just" another query language, "just" another DSL targeted at querying data structures, albeit one that was thought through over a long time and focused on solving a particular issue.

> AFAIK GraphQL is not good at querying recursive datastructure, it's better at "neighborhood" kind of query, so you can traverse Foreign Keys, but not "indefinitely".

That isn't true. You can request data in any shape you want and could theoretically traverse through child nodes indefinitely.

We have many queries which go 5-6 levels deep.

How do you deal with IDs misalignment between data sources?

Could you clarify what you mean? Got an example?

Let's say you want to query Wikipedia for village descriptions, and get a map of matched villages from OpenStreetMap. Would that be a use case where GraphQL could help? Then it would need a "translation" layer for both systems, and joins between datasets would be based (for example) on the pair "country+postalCode". Is that a possible use case of GraphQL?

Could you show me a quick example of what a request and response would look like with REST? I'm still not quite sure what you mean.

Ok. And the join strategy (i.e IDs alignment) between data sources is defined (declaratively? Programmatically?) in the "translation" layer. Sounds very interesting. I still think the URIs of the semantic web are a better idea, but given the fact that nobody (re)uses them, then why not a "translation" layer.

Great resource for anyone wanting to learn more: https://www.howtographql.com/

I'm encouraged to hear your opinion on it. A friend of mine who turned me on to elixir is a fan of GraphQL as well. So you have pushed me to spend the day reading up on it and experimenting with it. Thank you.

I was really not sold by GraphQL. I'm totally down to not use REST, as I think that's daft, but GraphQL seems less powerful than SQL and only really useful if the thing you're querying really is a graph, not lots of rows of relational data. I work with data all the time, so for me it's not actually that useful. I wish someone would invent something similar but with a more logical foundation (joins, aggregations and the like, not only through ugly plugins). Maybe something like Datalog, although I never learned it well enough to find the payoffs.

Personally, I believe that someone needs to build some kind of a scalable, security-hardened SQL interface that can be queried directly from the browser using actual SQL. Part of that need involves federated data from multiple services/data sources. We already have a query language that can even handle nested values (see ISO SQL:2016 standard), why not just adapt it for use in the browser?

I don't really like SQL either; it's hard to generate and doesn't support any abstractions. It might be an OK target format for some amazing query generator.

I'm genuinely surprised by the number of people on HN who seem to know very little about what has been safe to call the successor to REST for quite some time.

Forgive me if this sounds hyperbolic, but unless you have unusual or strict requirements, building a new app in 2017 REST-first is most likely a terrible mistake.

Let me clear up some misconceptions I see in almost every HN comment thread on GraphQL:

1. GraphQL isn't more suited to Graph databases. Your data sources can be a mix of relational DB, NoSQL, key value, a REST API, or anything else.

2. n+1 has always been a solved problem in GraphQL thanks to DataLoader[1], a query batching utility which coalesces calls to your data sources from different parts of your app, specifically to avoid n+1.
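To illustrate point 2, here is a minimal sketch of the batching pattern behind DataLoader (this is not DataLoader's real implementation or API, just the coalescing idea it uses): loads requested during the same tick are queued and fulfilled by a single batched call to the data source.

```javascript
// Toy batching loader (illustrative only, not the real DataLoader API).
// Loads requested in the same tick are coalesced into one batch call,
// which is how GraphQL resolvers avoid n+1 queries per field.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => Promise of values, same order as keys
    this.queue = [];
  }
  load(key) {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // Flush after every resolver in this tick has enqueued its key.
        process.nextTick(() => this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }
  flush() {
    const batch = this.queue.splice(0);
    this.batchFn(batch.map((e) => e.key)).then((values) =>
      batch.forEach((e, i) => e.resolve(values[i]))
    );
  }
}

// Three separate load() calls from different "resolvers", one batch query.
let calls = 0;
const userLoader = new TinyLoader(async (ids) => {
  calls += 1; // e.g. SELECT ... WHERE id IN (ids)
  return ids.map((id) => ({ id, name: `user-${id}` }));
});

Promise.all([userLoader.load(1), userLoader.load(2), userLoader.load(3)]).then(
  (users) => console.log(calls, users.map((u) => u.name).join(','))
);
```

The key design choice is deferring the flush to the next tick, so every resolver that runs during the current tick gets its key into the same batch.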

3. It's unquestionably production-ready, battle-tested, has a real spec and official reference implementation, and it's probably the safest bet you can make at this moment in an industry as fickle as this.

With GraphQL you simply write your schema, define types and relationships, and you’re then able to request data in almost any shape you wish, with very little extra work. This is invaluable during development.

If you have a list of recent comments with author names, and later decide to show an avatar alongside the name, you don’t need to write any extra code on the server. You add an extra line to your query (or component’s fragment) on the client.

The same goes for any field, any relationship, no matter how complex the resulting shape. If you wanted a comment author’s follower’s comments’ likeCounts, you still don’t need to write another line of server code.

This makes it stupidly simple to rapidly prototype new features, try out new layouts, and means you can share a single API endpoint between mobile apps and desktop site without sending useless data to one or both.
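A toy sketch of why that works (plain JavaScript, not the real graphql library; the data and field names are invented): the server defines each field's resolver once, and the client supplies whatever response shape it wants, so asking for an extra field like avatarUrl requires no server change.

```javascript
// Toy illustration of client-chosen response shapes (not the graphql library).
const db = {
  comments: [{ id: 1, text: 'Nice post', authorId: 10 }],
  users: { 10: { name: 'Ada', avatarUrl: '/ada.png' } },
};

// "Schema": each field is a resolver function of the parent object.
const resolvers = {
  Comment: {
    text: (c) => c.text,
    author: (c) => ({ type: 'User', ...db.users[c.authorId] }),
  },
  User: {
    name: (u) => u.name,
    avatarUrl: (u) => u.avatarUrl,
  },
};

// Walk a requested shape, calling the resolver for each requested field.
function execute(type, obj, shape) {
  const out = {};
  for (const [field, sub] of Object.entries(shape)) {
    const value = resolvers[type][field](obj);
    out[field] = sub === true ? value : execute(value.type, value, sub);
  }
  return out;
}

// Yesterday's query asked only for text; adding the avatar is a client-side change:
const result = execute('Comment', db.comments[0], {
  text: true,
  author: { name: true, avatarUrl: true },
});
console.log(JSON.stringify(result));
// → {"text":"Nice post","author":{"name":"Ada","avatarUrl":"/ada.png"}}
```

The real execution engine adds validation, type checking, batching and much more, but the division of labor is the same: relationships are defined once on the server, shapes are chosen per query on the client.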

There have been many occasions when we simply wouldn't have had the time to implement a feature correctly with REST, particularly when we might not even know what data we'll want until we begin developing the feature and get a feel for how it works.

It doesn't just save server dev time either. On the client side there are libraries like Apollo[2] and Relay which take care of fetching data, caching, normalization, and you should almost certainly use one (I recommend Apollo) unless you have a good reason not to. Writing fetch calls and managing your store manually is just going to be a huge waste of time.

And the spec is more than queries and mutations. It's subscriptions, live queries, and more [3]. Real-time data is a first-class citizen of GraphQL, and the two most popular front-end libraries have official implementations of subscriptions (with live queries in progress).

GraphQL is elegant, has a well-designed official spec, great DX and just plain makes sense. But it's really something you need to try out for yourself (preferably on a real project) to see just how great it is.

If you’re planning to build something new with REST, seriously, reconsider. There's a slightly higher upfront cost to using GraphQL (particularly if you're new to it), but once you settle into it you'll be glad you did.

Useful tools and resources:

- GraphiQL[4] - an incredibly useful tool for running queries on your GraphQL API

- Graph.cool[5] - BaaS for quickly prototyping a GraphQL API

- Apollo Launchpad[6] - Try out GraphQL server code in your browser

[1] https://github.com/facebook/dataloader

[2] https://github.com/apollographql/apollo-client

[3] https://dev-blog.apollodata.com/new-features-in-graphql-batc...

[4] https://github.com/graphql/graphiql

[5] https://www.graph.cool

[6] http://launchpad.graphql.com

OData appeared before GraphQL with similar ideas.

Cool, which ones? I think the concept of requesting a response shape was inevitable. It's one of those "obvious in hindsight" ideas.

You see many "GraphQL is similar to x, and x failed" comments on here, but I'm not sure x ever came close to where GraphQL is at the moment.

Good ideas fail sometimes. E4X failed, but JSX and its variations are game-changers.

> Cool, which ones?

Forming data requests on the client/consumer side.

> JSX and its variations are game-changers.

don't be so sure

GraphQL has been a breath of fresh air -- you can tell a lot of thought went into its design and implementation.

I built a GraphQL API earlier this year. The learning curve can be a little awkward coming from REST, but once you get the hang of it GraphQL is super fun. Can't recommend it enough. Enjoy!

By what factor does using GraphQL accelerate development time?

I couldn't put a number on it, but it greatly accelerates development on the front-end. The API call gets you back all the data you need, only the data you need, in the format or structure that you need.

If there were a GraphQL API for Spotify and you wanted an artist's discography, you wouldn't have to make multiple calls to endpoints and work through huge objects where 90-95% of the data is not needed.
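To make that concrete, a single request against a hypothetical Spotify GraphQL schema (the field names here are invented for illustration; Spotify offers no such API) might look like:

```graphql
{
  artist(name: "Radiohead") {
    albums {
      title
      releaseYear
      tracks {
        title
        durationMs
      }
    }
  }
}
```

One round trip, and the response mirrors exactly this shape with no extra fields.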

For the front-end, it's a no-brainer. I've read that it is a bit more involved on the server-side but Apollo seems to be making great tooling for it.


Lenovo ThinkPad Retro! That's certainly right around the corner: two months and one day until pre-order opens.

Hadn't heard of this until now! Thanks!


I am glad you have heard of it now! I am typing this on a T420s because I refuse to use anything but the old seven-row keyboard, and I hope I can move to the Retro in October :)

I'm on an x220. Didn't really know what I was going to do for my next computer, but there might be hope yet...

Self-driving cars that really work and are safe. Google/Waymo is just about there.

Automatic language translation everywhere.

Big Brother everywhere. (Excited about, yes; happy about, no.)

Batteries + solar as the predominant energy source.

Electric cars getting some real market share.

I'm curious as to what exactly excites you about Big Brother being everywhere?

> Google/Waymo is just about there

Those videos by Cruise (GM-owned) make me think they are here now. We're just waiting on laws to catch up.

Generative Adversarial Networks (GAN)

GAN is a type of Deep Learning Network which can generate data after training.


- Text to image synthesis (Scripts to Movie ?) (https://github.com/reedscot/icml2016)

- Generation of Game Environments

- Image to Image conversion (https://github.com/junyanz/CycleGAN)

- 2d to 3d conversion

I think in the future we will have highly creative deep learning systems, which will make AR/VR/movie/game creation faster and cheaper.

better search, always

Find me:

- an aggregation of everything I have to know to run a porcelain store in my country (taxes, suppliers, how to find staff or better yet: showing candidates directly, best location in my town, etc.)

- Fuzzy stuff like: the pic of that tree I took when I was on holidays in XY a few years ago; or the note I took a few days ago about that band with some Greek name

- a ready to paste, non-ancient js-script for XY

- a cafe where nobody cares how long I sit with my laptop, with a not-too-modern ambience

- the lesser-known types of optical illusions

Honestly, it may not be at the level you are describing now, but I have been amazed by my ability to find small fragments of memories with Google search. For example, I had a faint memory of a movie I watched as a 4-year-old and could only remember a few generic nouns describing it, yet after a few minutes of searching I was able to pinpoint it. This wasn't possible years ago, either because the search engine itself wasn't ready or because the information simply wasn't on the web. Now both are true, and only becoming more so; they are improving in tandem.

I'm pretty confident I can find something on the web by only knowing a few keys words.

Indeed already amazing.

"Internet has become a wasteland of unfiltered data. […] I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them"

from 1995 http://www.newsweek.com/clifford-stoll-why-web-wont-be-nirva...

today google shows it right away: https://encrypted.google.com/search?hl=en&q=date%20of%20the%... (powered by the now slightly more semantic wikipedia I think)

Would you need the above info instantly, or would you be fine waiting ~10 minutes for the results to arrive? Also, would you be willing to pay for it?

It depends.

The 10 minute wait is only ok if the result is really really good. If I use google I do maybe 15 searches to crack a mildly hard nut, if each search would take more than 10 seconds I'd be too annoyed.

I would pay, but again it would depend on the quality and I would probably use it just once a week. If I could do it myself with google in 20min, I would not pay more than 2 bucks. If you can do the china-shop thing, I might pay 15 or even 40 if I knew it was good.

When I search for myself I get a feeling for whether there is more information out there or not. I'd have to trust the service to be good enough not to miss much.

I think you can already find that tree of yours today with Google Photos

Excited may be too strong a word, but I'm interested in seeing how the world of decentralised apps evolves. I like the idea of decentralising the web using plug-and-play home servers. There are a few projects in this space already (such as Sandstorm), and there are some existing projects that aren't strictly limited to dapps but could play a key role in the future (a few unikernel and meshnet projects spring to mind).

IMHO the real bottleneck here is NAT, so I'd throw universal static IPv6 consumer adoption in with this.

Another bottleneck is the asymmetric up/download speed of most ISPs. Often upload is more than 10x slower than download.

Yeah, greater IPv6 rollout would be very useful.

I'm looking for a company to successfully implement what I call on-behalf AI. AI legally allowed to take actions on behalf of someone, in a large number of ways in relation to daily life. This may be 5, 10, or 20 years out yet, hard to say. It'll be very hard to pull off, and whichever company does it first at a high function + comprehensive level will be another Airbnb or Uber, as the legal/regulatory hurdles will be similarly challenging, and exactly as with those cases it'll be ideal to move first and apologize later (which will cause the typical uproar among people that find that approach appalling). This type of AI is where you get deep into real time savings for consumers, by significantly reducing dozens of mundane & routine tasks (most of which repeat from person to person and can be modeled very accurately accordingly).

Why do you think it's "ideal to move first and apologize later" when rolling out a socially transformative technology in a democratic society?

Because I occasionally don't respect the restraints that tend to exist in democratic societies when it comes to entrenched regulation & interests vs innovation (or really anything new or different), which frequently dramatically slow down progress (see: zoning law abuse, or the FDA's countless abuses or otherwise slow & meandering approach (what I call backwards), or the present US healthcare system (vast entrenched interests, fear of change, etc)). That is, it's my opinion that that restraint is what always must be pushed back against to generate any positive change in any culture or society at any point in history. Failure to push in this regard, guarantees stagnation.

The vast majority - in my opinion - overwhelmingly tend toward being either aggressively anti-change, or they're at best very cautious about it. That has been demonstrated non-stop throughout history with practically every step forward in technology for example. It may even be a beneficial evolutionary attribute for the survival of humanity. I don't belong to the dislikes change group, my personality type is to push. That's the honest answer.

Am I biased because of my own world-view or personality type? No doubt to one extent or another. I'm not advocating for chaos or anarchy though; rather, I'm advocating for constantly testing assumed or entrenched notions/boundaries, as a means to find out if there is new progress to be found there. My opinion is that there's almost always progress to be found in challenging such, with some areas blatantly worth focusing on more than others (due to favorable upside vs downside risk ratio). If you ask permission first every time you attempt to find progress at the edge of what a society presently finds comfortable, you're not likely to get very far.

Maybe, just maybe, a reference to Chesterton's Fence is needed:


For me personally, currently my absolute top reason I'm dying of excitement is http://www.luna-lang.org/ Only I'm not sure if they're gonna manage to get a release out the door; I have the impression that they understand it's their top priority and main focus now, but I'm not sure if they really realize how far they need to go in sacrificing features and cutting it down to absolute bare essentials to avoid faltering into endless development hell.

That said, I have some tiny glimmer of hope that even if they go vaporware, maybe someone e.g. from around the Lambda the Ultimate community might possibly try to revive their ideas and ignite some F/OSS clone.

The second coming of WebTV. I am sick of small devices, I want a huge 100" display managed by a touch remote where I can play and work, watch videos, movies, track my cryptos, even code my own apps from it, all from the comfort of a fat-ass recliner.

Fusion, of course. It actually is right around the corner now, this time.

If you mean using tokamaks, their own estimates put even a single net-positive-energy reactor decades away.

Yeah. Fusion will happen in the year of the linux desktop.

I wish I had a fusion reactor that worked as well as my desktop Linux computer. Far from perfect, but it's still a net positive.

I am surprised applications of blockchain outside cryptocurrency are not higher on this list. For example:

* Election fraud and recounts can become a thing of the past

* Everything that requires a contract could become completely electronic (the mortgage industry alone is probably a multi-billion dollar opportunity)

I think it's a crypto-utopian view that election fraud can be fixed with a better record. Election fraud is a fairly small problem in countries like the US and the problems are around voters not voting in the right place, having a new address, losing their proof of identity, etc.

The security community thinks a paper trail is essential - fewer computers, not more - not a public database. I'm not an expert, but I tend to trust them.

> I'm not an expert, but I tend to trust them.

And blockchain proponents want an infallible system auditable by the public, instead of having to trust anyone. They haven't figured it out completely, but they're pushing towards it.

In any case, blockchain networks can function without computers. Very cumbersome, though.

Voting systems are hard because most democracies expect ballots to be confidential. Is there a blockchain-based voting system that allows a voter to have high confidence that their vote was counted, but doesn't allow someone else to be able to know how that voter voted?

Not only that; to ensure that all of the other anonymous votes correspond to other real, eligible voters!

Apparently, someone has it figured out: https://www.ted.com/talks/david_bismark_e_voting_without_fra...

His solution doesn't seem to solve the hard problem of "stuffing the ballot box." Any voter can verify their own vote is correct, but they can't verify that each vote is cast by a real citizen.

That's not a blockchain solution, though. It's using paper ballots plus some additional cryptography.

For me, I'd like online IDEs to be standard (and therefore equally good) and graph wikipedias for arguments (like Argumans but mainstream).

I can see the advantages of both of these and imagine (and have seen people) build them so I assume someone will fully crack this in the next few years and we'll all be using this.

What might be around the corner that I'd love: someone makes a mainstream general purpose visual programming language (or tools in IDEs using languages that are indistinguishable - revenge of smalltalk)

I wonder if Apple will go to online iPhone development sometime soon. Write code, test in emulators, push to Apple App Store, all through web browser. No XCode, no Macbook required.

Would open up iPhone dev to a whole new class of people who can't afford the pricey Apple desktop/laptop.

As to the language, that's my hope too, specifically I'm crossing fingers super hard for http://luna-lang.org

VR Gaming. We're a few years away from some incredible experiences in the space.

Amazing VR gaming experiences have already arrived.

The last two months have seen a flood of incredible VR gaming experiences for the Oculus Rift. I highly suggest you give the current generation of consumer VR another try:

* Lone Echo - Amazing space story line, one of the highest ranked PC games on Metacritic right now: http://www.metacritic.com/game/pc/lone-echo

* Echo Arena - Basically the Enders Game zero-g arena in space, multiplayer. Really addictive: http://www.metacritic.com/game/pc/echo-arena

* The Unspoken - You're a wizard, with magic, and you use your hands in different combinations with the Oculus Touch controllers to battle other mages, including real people via multiplayer gaming: https://www.youtube.com/watch?v=UVD1O853aSw

* Robo Recall - Battle robots, grab them with your hands, tear them apart, grab bullets from the air. Epic: https://www.youtube.com/watch?v=MIK4D0kVlIs

* Star Trek: Bridge Crew - You are literally on the bridge of a Star Trek starship.

* Mages Tale - VR RPG from the creator of the classic Bard's Tale; makes me feel like I'm embedded in a classic AD&D dungeon crawl.

Anyway, that's just a small selection! I picked up an Oculus Rift + Touch Controllers the last two months and have been blown away at the developments lately.

Agreed. Owned a Vive since last October and still have a "wow!" moment at least once a week.

And that's even without the continual joy of demoing to people who've never tried proper VR. It's genuinely rewarding every time you pop a headset on someone new.

Elite Dangerous already exists.

USB-C being so common that it's cheap.

WebGPU. Granted, "right around the corner" is a bit of a stretch.


Do you have a 15-second explanation of how that's going to be much better than WebGL?

Sure. WebGL is a sandboxed subset of OpenGL ES 2.0, and WebGL 2 is similarly a derivative of OpenGL ES 3.0.

While both of those APIs offer nearly full programmability, they do so with an API structure that retains a ton of legacy cruft from the days of fixed-function pipelines. Neither are low-level graphics APIs; many assumptions are made. This makes doing general purpose compute tasks on WebGL extremely hacky at best.

Think of WebGPU as Vulkan or Metal for the web. Lower level with much more work, but ultimately cleaner with superior performance and capability.

ELI7: WebGL crunches numbers fast to end up as pixels, WebGPU crunches numbers fast to end up as numbers again.

Got it, thanks for the clear explanation.

So this means that we'll soon have web ads mining bitcoins using our GPUs, right? :)

That was already a thing, for a while. https://bitcointalk.org/index.php?topic=9042.0

Can't tell if they are "honest" people who closed shop when they realized how bad their idea was, or if they just threw smoke and went dark.

Well, that's one thing. Google "Large Scale Distributed Deep Networks" for another.

Already happening! This came up a few days ago: Tensorflow implemented with webgl: https://news.ycombinator.com/item?id=14894653

IRV voting in Maine, automatic districting to prevent gerrymandering, and reliable voting schemes like Scratch and Vote.

Last I heard, ranked choice voting had been ruled unconstitutional in Maine: http://www.pressherald.com/2017/05/23/maine-high-court-says-...

Writing React apps in OCaml using Reason: https://reasonml.github.io/

Edge computing.

Basically shifting off of cloud onto separate peer-to-peer connections. Faster, more secure, more distributed, and no middleman. Think 1990s/mid 2000s but no servers, just client to client.

Soon everyone will want a home server.

How would we implement a search-engine using P2P?

Obviously, edge computing won't work for every use case. But it could get us off this obsession with tying everything into a centralized server that you have no control over. /rant

MY_PC: "Who is hosting a webpage with the following information?" My 1000 connected peers: "Here are 500 more peers who say they do"

Proceed to witness extreme bandwidth usage every query.

That only works if the computer you're connecting to has an index of the entire web.

It will probably be more like this:

- So you're looking for "kitten videos"? Well, you can ask computer X for words starting with "k", then ask computer Y for words starting with "v", and then I'll perform a logical AND operation to get your final results.

But how we make this as fast as Google is beyond me.

And also don't forget that we have to factor in the problem of security (some nodes may be sending in wrong results, e.g. spam).
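A tiny sketch of the scheme described above (toy partitioning and invented data, ignoring the speed and trust problems): each peer holds the posting lists for part of the keyspace, and the querying node performs the AND-merge locally.

```javascript
// Sketch of a partitioned inverted index: each "peer" holds posting lists
// (term -> set of document IDs) for part of the keyspace.
const peers = {
  X: { kitten: new Set([2, 5, 9]) }, // peer X: terms a-u
  Y: { videos: new Set([5, 7, 9]) }, // peer Y: terms v-z
};

// Route a term to the peer responsible for its first letter (toy partitioning;
// a real network would use a DHT).
function lookup(term) {
  const peer = term[0] <= 'u' ? peers.X : peers.Y;
  return peer[term] || new Set();
}

// The querying node intersects the posting lists it received (logical AND).
function search(terms) {
  return terms
    .map(lookup)
    .reduce((a, b) => new Set([...a].filter((id) => b.has(id))));
}

console.log([...search(['kitten', 'videos'])]); // documents matching both terms
```

Each lookup is one network round trip in a real system, and the posting lists can be large, which is exactly the bandwidth and latency problem raised above.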

For example there is the YaCy project


That inflection point in a couple of years where SSDs are just cheaper than spinning HDDs. A whole class of problems kind of goes away, low-end consumer performance goes way up, and archival platforms get faster "for free."

I'm also excited about NVMe and on-the-horizon, faster-than-flash solid state technologies like 3d xpoint, etc.

Allam cycle power plants. I love the idea of turbines that continue to burn in the middle of a giant fire extinguisher.

Major applications and/or operating systems written in Rust, or languages that offer a similar degree of type safety. We need to be ruthlessly eliminating undefined behavior from our software stacks, especially the lower layers, so that there are fewer places for security bugs to hide.

Electric car adoption.

Solar isn't new or particularly exciting, but it's become a good alternative to burning fossil fuels and it isn't used widely yet.

I've been looking forward to real-time global illumination via ray tracing with photon mapping or path tracing or some other good algorithm to become mainstream. It doesn't seem like there's much enthusiasm for the idea from the game industry, though.

Genome editing has already been mentioned, along with reusable rockets.

Another javascript framework, of course.

This may be facetious but in fact it's true.

JavaScript needs a fresh wave of tools and improvements in the JavaScript language.

This time the priority has to be zero configuration and simplicity.

Renewables in general. Many problems can be solved if you throw enough juice at them.

e.g. desalination plants.

Affordable triple-digit-core desktop computers. The cloud is fine, but until Google Fiber reaches every city, local hardware will need to suffice. The application is HTC workloads in industrial design.

A core for every browser tab!

With how bad most web pages are these days, that's almost not even hyperbolic. NoScript really woke me up to how much bullshit most pages try to load.

Large-scale, low-cost metal 3D printing. We're _very_ close, and it could be a game changer for a number of industries, since the price for one unit is very close to the mass-production per-unit cost. It could allow custom-fitting or per-customer modification of a design for things that in the past would have required significant outlay.

If I need a new part for something that's no longer supported, no problem. If I want to test an idea, fine. etc.

Will be interesting for the manufacturing space when multiplied by torrents.

WebRTC for all real time communications/broadcast.

Self-driving cars; they will majorly change how we travel.

People in 100 years will look back at manually driven cars the way we look back at horse-drawn carriages.

And after, say, 200 years, what's your prediction?


The advent of AI-based search. I'm not sold on the siri/cortana et al conversational agent approach. AFAIK it's mostly UI/UX layered atop the "keyword->10 links" paradigm. Search is missing the key concept of iteration and exploration and (hopefully) someone somewhere is working on it.

I'm working on it.

I believe the future of UI will be like Tinder or Akinator, where you browse through results and apply constraints to the search space by answering simple yes/no questions.
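A minimal sketch of that idea, with invented items and attributes: each yes/no answer filters the remaining candidates, so a few questions can narrow a large space quickly.

```python
# Hypothetical search space; the items and attributes are made up.
items = [
    {"name": "kitten video", "short": True,  "animal": True},
    {"name": "lecture",      "short": False, "animal": False},
    {"name": "nature doc",   "short": False, "animal": True},
]

def narrow(candidates, attribute, answer):
    """Keep only the candidates consistent with one yes/no answer."""
    return [c for c in candidates if c[attribute] == answer]

# "Is it about animals?" -> yes; "Is it short?" -> yes
step1 = narrow(items, "animal", True)
step2 = narrow(step1, "short", True)
print([c["name"] for c in step2])  # ['kitten video']
```

If each question splits the candidates roughly in half, this is a binary search over the result space, which is where the Akinator comparison comes from.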

Nice! Can you share some details on your work? Do you have a write-up somewhere? Also, is this academic research or a commercial enterprise?

I'm trying to reduce suffering (the delta between the real and the ideal). For that, we need to communicate better.

Today, we generally communicate through natural language ("Hi, can I get a taxi at XYZ?") or software (Uber's "REQUEST UBERX" button).

I'm taking the best of natural language (ubiquity, flexibility, generality) and software (unambiguity, conciseness, guidance, automation) to create a better communication paradigm.

First, I'm getting rid of action verbs. Everything is the description of a state. Instead of "I want to go to the park", think "I am at the park" or better "me.location = park".

Second, I'm reversing the flow. Instead of manually requesting things, you get things suggested to you. Most communication is done by accepting or rejecting suggestions, as with Akinator or Tinder.

Third, I'm making the real world interactive by allowing physical things to act as triggers for suggestions. With the press of a button, you can capture your current location, a QR code, an RFID tag, a picture of a book, some Chinese text, the song currently playing, etc. The captured entity will be processed, meaning will be extracted, yes/no questions will be asked, and you'll be offered ways to interact with it.

Basically, I want to be able to point at something, get information about it, and act upon it.

This is neither academic research nor a commercial enterprise. I'm currently doing this on my own, but anyone is free to join.

Next week, I will publish a blog post explaining the relationship between GTD (Getting Things Done methodology) and the future of UI. This should cover some of the ideas I have mentioned here.

Volumetric displays are pretty fun, e.g.:

https://voxon.co/ https://www.youtube.com/watch?v=NKTfP56rpDA

A replacement for Javascript in the browser, I'm sure it can't be far off.

FDA-approved closed loop artificial pancreas.

The decentralized web built on top of Ethereum/Swarm/Whisper

Truly and honestly, the automated food preparation and retail systems we're building at http://infinite-food.com/ ... constantly available, broader choice of higher quality food for more people with less supply chain waste and at a lower cost in time and money.

this looks amazing, how many different meals can it provide? how many different ingredients can it process and cook? will the ingredients need to be bought from you or can we supply anything we have?

thanks for working on this, i have been dreaming this for a long while.

Initially we are focused on a broad subset of popular foods in Asia. The intended model is a wholly owned and operated network including the logistics system for restock and servicing, in order to keep costs down. Ingredients will be relatively numerous, with variety significantly upwards of the average restaurant kitchen.

Millisecond instance boot and teardown times on the major clouds - AWS, Google, Microsoft and Digital Ocean.

Along with suitable pricing.

Hopefully WebUSB will arrive sometime and be useful. I thought it would be really cool to write a basic SDR web app/browser extension, but unfortunately it's not really possible right now.

There was a project called Radio Receiver which used an RTL-SDR, but it is a Chrome app, and Chrome apps are pretty much dead.

Eye tracking - mouse and keyboard control using eyes. Microsoft has already started working on this.

I read a novel published in 1999 which predicted that VR headsets would be followed by eye tracking, eye boards, etc., which would then be followed by direct brain control using electrodes.

I think voice-controlled apps (Google Home, Siri, Alexa, etc.) will be the next big thing.


I can imagine using voice when driving perhaps; but it's not really usable when other people are around, since voice commands would disturb them.

I think voice-controlled apps (on Alexa, Cortana, Siri, Google Home) will be pretty big.

Certainly the voice recognition is impressive.

All the regular things I asked Google Assistant were recognised almost perfectly. Andrew Ng has talked about how the jump from 95% accuracy to 97% accuracy makes a massive difference. Google Assistant is at 95% [2], but feels very impressive already.
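As a back-of-envelope illustration of why those two points matter, assume (unrealistically) that per-word errors are independent; then the chance of transcribing a ten-word command perfectly differs substantially between the two accuracy levels:

```python
# Probability an n-word utterance is transcribed with zero errors,
# assuming independent per-word errors (a deliberate simplification).
def perfect_sentence_prob(word_accuracy, n_words=10):
    return word_accuracy ** n_words

print(round(perfect_sentence_prob(0.95), 3))  # 0.599
print(round(perfect_sentence_prob(0.97), 3))  # 0.737
```

So a 2-point gain in word accuracy moves a ten-word command from failing about two times in five to failing about one time in four.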

It felt like going back a decade when trying out the Firefox Voice Fill [0], where repeatedly saying the word you want doesn't work (I was trying to search for 'React Router Redirect', and it kept interpreting 'React' as 'Really').

However, I think the personalities of the assistants are lacking. It doesn't feel like there is any real interaction. Google Assistant feels like little more than a better version of Voice Fill.

I'm waiting for something much along the lines of Iron Man's JARVIS (or KITT). But what I really, really want is a full-blown version of Jeeves voiced by Stephen Fry. I'll have to settle for the TomTom Stephen Fry voice for now [1].

Is there a Turing test for butlers?

[0]: https://testpilot.firefox.com/experiments/voice-fill

[1]: https://www.tomtom.com/en_us/drive/maps-services/shop/naviga...

[2]: https://www.recode.net/2017/5/31/15720118/google-understand-...

Global warming and this coming mass extinction thing are gonna be awesome. And IPv6.

Secure Production Identity Framework For Everyone (SPIFFE).


GPGPU in the browser. Some would say it's already here but not all browsers have a standardized API that facilitates it.

So I guess the responses on this thread raise another question: VR or AR? Which is going to take off better?

VR will dominate entertainment, AR will dominate industry. Although I can see AR taking up some of the entertainment space as well (Pokemon Go for example).

I really want a VR Hollywood film experience. One I can watch over and over and notice different details each time.

AR, but with high enough quality to offer a superset of VR's features.

I think AR will get usable and mainstream much sooner, because it can work well on current smartphones.

VR will be cool later, when Oculus and others gain more performance and fix more of the current problems.

Smartdust. (And the opening of geospatial database tech that can support all the data...)

What is smartdust?


Tesla's travel network: autonomous driving, Boring Company tunnels, and the Hyperloop network.

Entangled pair quantum routers. Oh wait, that's 50 years away.

It's how we break the increasingly centralized nature of the internet, given that almost all countries are becoming more authoritarian.

Racetrack Memory.

Unikernels: single-purpose operating systems for anything you want.

Hydrogen fusion power.



Violating the guidelines like this will get your account banned. We detached this subthread from https://news.ycombinator.com/item?id=14930928 and marked it off-topic.

I agree it seems like a silly thing to do (block browsers without a valid reason, of which there are very few), but to blame JavaScript or modern web development in general is ridiculous.

The user agent string is not a JavaScript concept.

There may be some legitimate concerns about the modern state of web development (though I'd argue it's a positive thing on balance) but you've certainly added nothing new or worthwhile to that discussion.

Your attitude is incredibly counterproductive, toxic even. And you're not doing yourself or anyone else any favors.

Put simply you need to get over yourself.

> but to blame JavaScript or modern web development in general is ridiculous.

This is what I call toxicity:

6000 lines (up to 1.3 million characters long)

170,000 words

2.4 MB

Great, some meaningless numbers without context.

My own!

Should I ask HN to help with the beta test?

