Extrapolating scientifically from a sample of 1, it would seem that a small loss of excitement over trying new services and apps, spread across the larger population, would be enough to prevent the fabled hockey-stick growth. No hockey sticks -> no unicorns -> less VC money for new startups -> articles about the death of startups -> comments on HN about articles about the death of startups.
I used to try every new app, every new service. I used to run Linux on my desktop. Doing those things was fun and challenging, but took time.
I have kids now. I value spending time with them over fiddling with the latest app or spending an hour tuning my kernel so I can run my window manager at max resolution. I use a Mac and it just does that for me now, albeit with far less customizability and fun; that is the tradeoff I'm willing to make.
I'm not saying you're wrong, but I'm saying your sample group has an age bias. :)
Also, if you really haven't updated in three years, that means that you're still running kernel 3.16, which has 81 known vulnerabilities, 12 of which are critical. That's the kind of stuff I don't really want to worry about anymore.
So now? Linux and *bsd for paid work. Windows for the desktop. Cygwin and MobaXterm ftw. Everything old is new again.
What operating system allows you to "not worry about" OS security updates, and how is it different from Linux?
In my experience, on Linux such installations are smooth for about a year and a half, and then start failing for cryptic reasons, even if you've done everything possible to pick a mainstream distro, ideal hardware, and keep the machine as vanilla as possible.
(And I say this as someone who's been using Linux since the 1990's.)
I also take care of my parents' computer with Ubuntu, and I have done two major release upgrades and experienced no errors.
The latest update introduced many problems, though.
What you've said may have been true several years ago, but it isn't anymore.
You need to add value to others no matter what you do; that's the definition of "building".
Value is what is extracted from work.
"Buildings" are the expression of physical labor put into something OF value.
It's literally defined in our lexicon.
After trying a lot of apps I have found that a smart phone has only a few functions:
1. Talking and texting.
2. Taking pictures and video.
3. Maps and directions.
4. Music, movies, ebooks.
5. Brief interactions with services like Uber, Lyft, AirBnB, Los Angeles metro, etc.
6. Casual web browsing. (I want a bigger screen and a keyboard for anything in depth.)
7. edit: Casual gaming.
Looking at how most other people use phones I don't think I am alone.
These devices are limited and I don't think it took us long to exhaust their potential. There just isn't much else a phone can do well (and that locked down vendor fiefdoms will allow).
The PC era on the other hand gave us a long, wide, and deep trench of innovation that is still not exhausted. PCs are more open, more extensible, and have a much wider IO path to the human in the chair. The breadth of what a PC can do is incredible.
(The main problem with PCs is antiquated, insecure, bloated operating systems. Fix that and I think you'd see even more innovation.)
Of course, I have been a mobile skeptic since the iPhone: I just saw it as the next incarnation of the feature phone. Since it was still locked down by carriers and vendors, and since its bandwidth to the user is poor, I knew it would not deliver lasting innovation.
It would be a lie, though, since it really would be a Turing-complete phone with lots of programs on it. That was true even of the old Symbian phones, and items 3-6 go well beyond what they did.
Nonetheless, I think there is benefit to be gained by trading flexibility for reliability. One business difficulty/opportunity is how you pick out services like Uber, AirBnB, etc. I suspect the solution is profitable but user-hostile: lock users into a vendor when they buy their phone.
In a way, it's a feature, not a bug. The security of mobile devices is created by the very same thing that limits their usefulness - by the OS being vendor-controlled, fully sandboxed, and locked down, so that users (via programs acting on their behalf) can't break it. I'm not sure if it is even possible to have a secure system that allows you to do anything interesting on it.
I loathe using my phone for most things. Any opportunity I get it is hotspot + laptop.
The only upside is that it's easy to track you with these devices by those who care about you the most: big $$ and big brother.
Used it for years. Just works.
Integrates with my Hue lights.
All kinds of settings, including one that refuses to turn off the alarm unless I scan the NFC code in the bathroom (you can use QR codes or plain barcodes as well; just hide them out of reach of your bed).
Keep is almost there with a combination of text and drawings. Missing:
- links to other notes
- checkboxes in between text (currently notes in Keep are either all checkboxes or no checkboxes)
- tapping on tags should show all notes with that tag
OneNote is also almost there.
Samsung Notes is also almost there.
Maybe what I wish most is some way to link from my calendar to Keep and back etc etc.
However, seeing how I still cannot even link to an email in desktop Outlook, I'm afraid this is only a dream.
In order to fix it, Google should open up the "Play Store" to other providers. You could then end up with the equivalent of both high-end stores and corner shops, selling products of a quality that matches their brand, rather than a mishmash of everything.
If you think about it, Apple suffers from a similar problem. In their case they allow in only high-quality, impeccable software, and there's no place for corner shops. Still - "the app economy" is state-owned.
And that's just me, someone who tries to stay somewhat up to date with the ecosystem and generally understands tech. How is my mother going to discover which alarm clock app is useful and performant, and doesn't mine bitcoin or exfiltrate her photos to the cloud? I have no idea.
The app market is now a market in which a couple dozen apps account for most of the revenue.
For small developers, there is too little chance of being seen at all.
Actually, the whole article makes sense. Nowadays, we all see that it's difficult for small startups to promote themselves. The privilege on major platforms is given to those who pay more (e.g., Google). Even if one company gains enough traction and begins to become successful, a larger corporation buys it (in the best case) or does everything it can to keep the small competitor from gaining market share. That's a very big problem which, if not solved, will result in a slower rate of progress.
Objectively speaking there are numbers to prove we are not in the minority.
Quick facts from that article:
- 50% of users download zero apps per month
- 13% of users account for more than 50% of the downloads
For all practical purposes, the early startup riches are now gone. There is also a degree of low-hanging-fruit value that gets taken early in the life of any industry; first movers have had that advantage. After that you can have a lot of small companies providing value in small niche places, but the big-money value is now taken.
Every once in a while a nice idea will come along and get big, but those will be the exception rather than the rule.
I feel like this is the public's idea of "IoT", while I've learned after a year of working for an IoT stack provider that the real "internet of things" explosion is found in places nobody looks except the procurement guys - the stadium lights, the refrigerators, the factory doors, etc. Small, single-minded devices sending one or two bytes of data, spread en masse through an industrial area.
Here in Shenzhen, I would say no. They are cheap as chips. I have multiple friends doing consumer IoT devices (lots of BLE) with single-dimensional backgrounds (often software) who have been able to iterate through multiple prototypes and produce concepts within months. The chief problems for IoT products IMHO are marketing and replicability (low security for initial investment).
The fact that hardware is expensive to tackle in SV just means SV is badly positioned in this sector, not that it's inherently difficult or expensive. BTW, we just had the Hacker Trip to China (by Noisebridge founder Mitch Altman) roll through town here.
Living next door to a few factories probably reduces your iteration times from weeks to days, but IMHO - and maybe this is the 'H' part, I'm still learning - the time that you as an individual wind up spending on design and figuring out software issues is pretty sizable, even compared to a fortnight for a nice 2-4 layer PCB.
Either way, we are certainly living in a wonderful time for rapid prototyping. Dropping the 'IoT' tag, embedded systems are super super exciting in all kinds of areas; telecommunications are just one of those.
Not the kind of revolution I fought for.
Sounds like the author has been living in the land of hand-waved 9-figure growth estimates for a bit too long.
You need to know how to make and fire clay, what sorts of additives can help, how to package and ship them en masse...
There are a lot of problems involved in bricks. OSHA once categorized bricks as hazardous materials (don't breathe the dust when grinding them). We've just gotten really good at dealing with those problems because of how useful bricks are and how easy it is to teach people about making them.
Anecdotally, I once took an archaeology course in college, and we had a guest speaker come in who knew how to do flint-knapping. Like, prehistoric toolmaking out of volcanic rocks. I'll bet you'd run into some real hurdles trying to market an obsidian knife today if you didn't have experience in that area.
The article's author wants to point out that the hardware part has become so cheap that you can't make any profit in that area any more. But you still need to get it right; otherwise you can't sell anything.
Komali doesn't disagree with that point, but adds another: most of IoT is actually not in the consumer area but in industry, where people don't see it but where the scale, in terms of item numbers, might be much bigger. Also a good point.
Then leggomylibro points out that you could replace "IoT" with "physical product", which may or may not imply that hardware is the hard part in IoT.
On the literal interpretation, I disagree with the equals sign there, because a brick is not "IoT". There shouldn't be much to say about this. IoT _is_ about the communication, right?
Your interpretation of his statement - that the hardware is the hard part in IoT - makes me wonder how you define "hard part". The physically important parts, like chips, are quite well understood, I believe (not an expert though); we churn these out in the millions each year and they mostly do their job. But the points where these chips actually need to be connected to physical I/O and to software - those are horribly underdeveloped areas: few skilled people, poor debugging, few standards. If that's the area you are talking about, I agree.
However, if we talk business, we also need to talk money. And while you have one or two suicidal hardware architects on such a project, you probably have 50-100 software developers, who will use open-source plugins, libraries, and operating systems, which again are each developed by 50-100 developers. So the whole cost of software is orders of magnitude higher than the cost of developing the hardware, and that factor is growing towards the software side on a daily basis. The hardware will converge more and more on standardized processing units (GPUs, SoCs, touch displays for I/O), and the main activity will happen more and more in software.
So considering that part I have to say nope, software is the harder part (to finance, to develop, to finish) here.
You can make a lot of changes to software within a single day, for free (sans programmer time), and deploy them to all your customers. With physical items, you usually need to fab a new version for each significant change to test it, which is a time-consuming and expensive process. This is not something electronics-specific, this applies just as much to a new brick design.
This is a qualitative difference that makes physical product design an expensive process - you need to get all the things right before you start shipping; you can't just patch things after you deploy. But, as the article correctly points out, ultimately the manufacturing of a correctly designed batch of a physical item is cheap. Which means it's hard to make profit, especially if you spent all that money iterating in your lab, and then someone just reverse-engineers your final design and starts pumping out copies.
In case of IoT - common processing and communication chips are cheap and easy to get. Electrical engineering is hard and expensive, product design is hard and expensive, and - compared to that - software is cheap, because all IoT companies are doing is bog standard cloud-based CRUD.
(If they make the software part needlessly complicated for themselves, that's another problem, but it's endemic to this industry anyway.)
But your argument still holds true in that iterations are quicker and easier to achieve, which is also why this factor will only increase.
"We’re already seeing this. Consider Y Combinator, by all accounts the gold standard of startup accelerators, famously harder to get into than Harvard. Then consider its alumni. Five years ago, in 2012, its three poster children were clearly poised to dominate their markets and become huge companies: AirBnB, Dropbox, and Stripe. And so it came to pass."
"Fast forward to today, and Y Combinator’s three poster children are… unchanged. In the last six years YC have funded more than twice as many startups as they did in their first six — but I challenge you to name any of their post-2011 alumni as well-positioned today as their Big Three were in 2012. The only one that might have qualified, for a time, was Instacart. But Amazon broke into that game with Amazon Fresh, and, especially, their purchase of Whole Foods."
Look at the list of Y Combinator companies from the 2012 batch. Where are they now?
"The web boom of 1997-2006 brought us Amazon, Facebook, Google, Salesforce, Airbnb"
Wikipedia AirBnB page: "Founded August 2008; 9 years ago"
Now, both Gusto and Zenefits are trying to play the same game and both are basically parasitic companies feeding off lock-in and transaction fees.
As a teenager I got enamoured by the idea of the future of the tech being driven by small fast-moving hacker friendly companies rather than big corp, as portrayed in PG's writings and this site.
I remember how I increasingly became disillusioned as I gradually realized that many (most?) startups were optimizing for the acquisition by the same big corp (investors gotta cash out, right?) instead of building long-term sustainable businesses.
This seems to have attracted the kind of personality that just wants to play the game, cash out and get rich.
This model requires projects that have long term stability upon creation but also a recognized lifespan due to attention attrition. Like movies.
A lot of bad code nowadays exists because of pressure from "creators" who don't actually create. They are middle managers appeasing their boss, who is appeasing their boss, and so on. There is nothing tying it to reality, so all sense of quality is lost.
The main things that prevent this now IMO is the friction of setting up these groups for value capture and the maintenance of the subsequent results. To me, the most exciting part of block chains is that they can create a non-human central authority that can theoretically embody any set of rules including group structure and value capture.
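To make "rules embodied in code" concrete, here is a toy sketch (plain Python of my own, not any particular chain's API) of the kind of fixed value-capture rule a smart contract could enforce for a movie-style project group:

    class ProjectContract:
        """Toy 'non-human authority': a revenue-split rule fixed at
        creation that no member or manager can later override."""

        def __init__(self, shares):
            # shares: member -> fraction of captured value, set once
            assert abs(sum(shares.values()) - 1.0) < 1e-9
            self.shares = dict(shares)
            self.balances = {m: 0.0 for m in shares}

        def capture(self, revenue):
            # Every payment is split by the embodied rule, not by a boss
            for member, share in self.shares.items():
                self.balances[member] += revenue * share

    c = ProjectContract({"writer": 0.50, "director": 0.25, "crew": 0.25})
    c.capture(1_000_000)
    print(c.balances)  # {'writer': 500000.0, 'director': 250000.0, 'crew': 250000.0}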
New photo sharing apps can end up capturing value like new superhero movies. Winner take all dynamics still exist but enduring monopolies do not.
The contracting model will also solve another huge issue which is the shitty interview process. (mostly self imposed)
After spending some time at well-known large companies, what I've realized is that they aren't the best of the best; they are simply places to hide, specialize, or grow. They are like oases that survive due to captured value streams, large value stores, or deep moats. Inside, people can become specialists in service of this "city" but lack the well-roundedness and grittiness of a desert wanderer.
But unlike a city, which is neutral provided you can generate value/pay rent, companies are like cults in that the entrances are guarded and the HOA rules are super strict and apply to everyone. So while they can generate highly specialized "trebuchets" that wanderers cannot, the citizens can be far from happy. The Hollywood model would greatly benefit the group of people who currently feel too tied down.
> ...because we’ve all lived through back-to-back massive worldwide hardware revolutions — the growth of the Internet, and the adoption of smartphones — we erroneously think another one is around the corner, and once again, a few kids in a garage can write a little software to take advantage of it. [...] But there is no such revolution en route...
Then it says:
> It is widely accepted that the next wave of important technologies consists of AI, drones, AR/VR, cryptocurrencies, self-driving cars, and the “Internet of Things.” These technologies are, collectively, hugely important and consequential — but they are not remotely as accessible to startup disruption as the web and smartphones were.
But a real counter to the premise is actually presented:
> (However, in fairness, software and services built atop newly emerging hardware are likely an exception to the larger rule here; startups in those niches have far better odds than most others.)
So once again, 'a few kids in a garage can write a little software to take advantage of it.' They start as niches, but we can't say what their potential is without discovery and development.
I mean, it is easy to forget, but many, many things had to come together at one time in order for this to pop off like it did:
* high speed internet
* widespread consumer demand for high speed internet and services
* multi-core, low-power hardware that gave us smartphones and cheapish "device" pcs like tivo/roku/etc.
* widespread cellular and wifi networks
* miniaturization and improvement of many types of sensors
* widespread data collection of all types
* massive investments and growth in consumer GPU devices, which underwrote the ML boom
and I'm probably missing some things, but you get the idea.
All of these things had to come together at the same time to give us the boom that we just went through, and it gave rise to the likes of Google, Facebook, and so on.
This is very unlikely to repeat itself. Those who grew up in the late 90s, early 2000s may not really notice, but the difference between 1995 and 2010 is astronomical.
This is not to say that we're about to crash, or that there won't be another boom in short order, just that it will likely follow a very different pattern than the previous one. The period of 1995-2010/2015 was really a very unique confluence of events, historically speaking, and over what is really a very, very short time frame. Whereas the current boom is built around leveraging the smartphone and widespread internet access, the next will not be, as that ground will already be filled out by competitors.
Many things had to come together from 1975 to 1995 to enable the things you're referencing. The list you made is not impressive versus the past; it's normal.
What happened from 1995 to 2015 that was more important than the Internet, transistor, microprocessor, router, DRAM or the GUI? Good luck resolving that debate.
Many things had to come together from 1955 to 1975 to....
I don't know how old you are and how familiar you might be with the prior half century plus in tech, but we could be here all day listing the incredible inventions and leaps forward in tech during each of those 20 year periods of time.
Nothing has changed fundamentally about what's occurring in tech. The process continues as before. Each new generation thinks what has happened during its era is particularly special or unique versus the past. We see the same generational bias in most everything, from music to politics.
I am not claiming 1995-2015 was unique in the fundamental factors (we are all riding the exponential curve here), simply that the confluence of advances is unique to that time period, and gives you a unique distribution of companies/organizations/industries/etc. that is very different from other time periods.
But there is one important difference between the past 20 years and the 20 years before that. The number of people participating in a self-employed or entrepreneurial role has been far greater in the past two decades than before.
We did see some of that during the PC revolution as well, but it was disproportionately smaller in scale.
I haven't done the research to say whether there were historical periods before in which such large swaths of the population were gripped by the idea that they could start their own business based on a new technology.
It's possible that it happened before, but I don't think it was like that between 1975 and 1995. Certainly not towards the end of that period because I would remember.
In any case I took the piece at its title's face value, where startups are nowhere near over. Unicorns shouldn't really factor into that.
Making the next AlphaGo is far less accessible than making the next AirBnB. The gold rush where we're basically sticking a web/mobile app on a business and off to the races is ending.
E.g., some tests are distribution-free. And other tests will want to make good use of multi-dimensional data, e.g., not just, say, blood pressure or blood sugar level but both of those jointly. Well, I'm the inventor of the first, and a large, collection of statistical hypothesis tests that are both distribution-free and multidimensional. That work is published, powerful, valuable, but neglected. I did the work for better zero-day detection of anomalies in high-end server farms and networks. So, I got real statistical hypothesis tests, e.g., you know the false alarm rate, get to adjust it, and get that rate exactly in practice. IMHO, my work totally knocked the socks off the work our group had been doing on that problem with expert systems using data on thresholds. Also, the core math is nothing like what is most popular in AI/ML now and, as far as I know, nothing like anything even in small niches of AI/ML now.
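For readers who haven't met the concept, here is a minimal sketch of what "distribution-free and multidimensional" can mean in practice - a generic two-sample permutation test (my illustration of the idea, not the parent's published tests):

    import numpy as np

    def permutation_test(x, y, n_perm=2000, seed=0):
        """Two-sample test on multidimensional data (rows = observations).
        Distribution-free: the null distribution of the statistic comes
        from permuting group labels, so no parametric form is assumed,
        and the false alarm rate is controlled exactly under the null."""
        rng = np.random.default_rng(seed)
        pooled = np.vstack([x, y])
        n_x = len(x)

        def stat(a, b):
            # Squared distance between the group mean vectors
            return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

        observed = stat(x, y)
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(len(pooled))
            if stat(pooled[p[:n_x]], pooled[p[n_x:]]) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)  # p-value

    # e.g., a joint test on (blood pressure, blood sugar) readings:
    # p = permutation_test(group_a, group_b)  # each an (n, 2) array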
Once I was asked to predict revenue. We knew the present revenue and, from our planned capacity, knew our maximum, target revenue. So, roughly, I had to interpolate between those two. How might that go? Well, assume that the growth is mostly from current happy customers talking to people who are target customers but not customers yet. Let t denote time, in, say, days. At time t, let y(t) be the revenue, in, say, dollars. Let b be the revenue at full capacity. Let the present be time t = 0, so that the present revenue is y(0). Then the rate of growth should be, first-cut, ballpark, proportional to both the number of customers talking, i.e., y(t), and the number of target customers listening, i.e., (b - y(t)). Of course the rate of growth is the calculus first derivative of y(t), or
d/dt y(t) = y'(t)
Then for some constant of proportionality k, we must have
y'(t) = k y(t) (b - y(t))
Yes, just from freshman calculus, there is a closed form solution: a logistic curve. So, the growth starts slowly, climbs quickly as an exponential, and then slows again as it approaches b asymptotically from below. So, you get a lazy S curve. So, it's a model of viral growth. You get the whole curve with minimal data: just y(0), b, and a guess for k. The curve looks a lot like the growth of several important products, e.g., TV sets. I derived this and used it to save FedEx. For all the interest in viral growth, there should be more interest in that little derivation.
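For reference, separating variables in that ODE gives the closed form (the logistic curve described above, written with y(0) = y_0):

    % y'(t) = k y(t) (b - y(t)),  y(0) = y_0  implies
    y(t) = \frac{b}{1 + \frac{b - y_0}{y_0} e^{-k b t}}

So y starts at y_0, grows roughly exponentially with rate kb while y is much smaller than b, and approaches b from below.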
There is the huge field of optimization -- linear, integer linear, network integer linear (gorgeous stuff, especially with the Cunningham strongly feasible ideas), multi-objective linear, quadratic, non-linear via the Kuhn-Tucker necessary conditions, convex, dynamic, optimal control, etc. It is a well-developed field with a lot known. I've made good attacks on at least three important problems in optimization, via stochastic optimal control, network integer linear programming, and 0-1 integer linear programming via Lagrangian relaxation, and attempted several more where I ran into too much politics. Sadly, the great work in optimization is neglected in practice.
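For anyone who has never touched this tooling, the linear case is now a few lines of SciPy (a toy example of mine, not the parent's work):

    from scipy.optimize import linprog

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes, so negate the objective coefficients.
    res = linprog(c=[-3, -2],
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimum at (4, 0) with objective value 12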
The world is awash in stochastic processes, but they are neglected in practice. E.g., once, for the US Navy, I dug into Blackman and Tukey, got smart on power spectral estimation (IIRC important for cases of filtering), explained to the Navy the facts of life, helped their project, and got a sole-source development contract for my company.
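For the curious, the Blackman-Tukey estimator is simple enough to sketch: estimate the autocorrelation out to some maximum lag, taper it with a lag window, and Fourier transform. A minimal, uncalibrated version (my sketch, not the parent's Navy work):

    import numpy as np

    def blackman_tukey_psd(x, max_lag):
        """Blackman-Tukey power spectral density estimate.
        Returns (frequencies in cycles/sample, PSD values)."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # Biased autocorrelation estimate for lags 0..max_lag
        r = np.array([x[:n - k] @ x[k:] for k in range(max_lag + 1)]) / n
        # Taper with half of a Blackman lag window to cut estimate variance
        r *= np.blackman(2 * max_lag + 1)[max_lag:]
        # Rebuild the symmetric sequence for lags 0..max_lag, -max_lag..-1
        r_full = np.concatenate([r, r[:0:-1]])
        psd = np.real(np.fft.rfft(r_full))  # real since r_full is even
        return np.fft.rfftfreq(len(r_full)), psd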
The crucial core of my startup is some applied math I derived based on some advanced pure/applied math prerequisites.
And there is a huge body of brilliant work with beautifully done theorems and proofs that can be used to get powerful, valuable new results for particular problems.
Computers are now really good at doing what we tell them to do. Well, IMHO, the part of what we should tell them to do that isn't just obvious comes nearly all from applied math.
I'm a CS grad student, and sometimes it's hard to filter out the hype and find promising but underrated ideas among all the noise.
Usually a better start than papers in journals is books. A first list of books would be for a good ugrad pure math major. There you get to concentrate on analysis, algebra, and geometry, with some concentration on topology or foundations.
For grad school might want to do well with measure theory, functional analysis, probability based on measure theory, statistics based on that probability, optimization, stochastic processes, numerical analysis, pure/applied algebra (applied algebra -- coding theory), etc.
Then, sure, work with some promising applications and then dig deeper into relevant fields as needed by the applications.
One key to success is good "problem selection". So, with good problem selection, some good background, and maybe some original work, you might do really well on a good problem, publish some papers, do a good startup, make some big bucks, etc. That's what I'm working on: picked my problem; for the first good, indeed excellent, solution, did some original applied math derivations; and have my production code in alpha test -- 24,000 programming language statements in 100,000 lines of typing.
It's applied math; hopefully it's valuable; but I wouldn't call it either AI or ML.
In case my view is not obvious, it is that the best help for the future of computing is pure/applied math and not much like current computer science. Computer science could help -- just learn and do more pure/applied math.
Yup. You describe a gold mine. Well, there's still a lot of gold in there. The amazing hardware developments you describe are not yet fully exploited.
I am not a software developer, but these problems are very conceptually similar, and it seems we're all waiting on software/compute capabilities to leverage all of these new hardware technologies simultaneously.
I might be just drawing arbitrary lines. "Mobile" depended on a number of technologies to happen. Possibly, looking back, this large number of simultaneous technologies will be described by one overarching technology category.
For instance, people have been predicting that VR will be the next big thing for years now. I remember a birthday party more than 30 years ago at an arcade with an expensive VR setup that, while far more limited than today's applications, was still an awkward piece of headgear - more of a novelty than a potentially ubiquitous change to society. It's still possible that we hit some inflection point where the technology improves enough that AR/VR becomes unobtrusive enough to be ubiquitous, but that's by no means a certainty.
Similarly, I think the jury is still out on IoT, drones, 3D printers and cryptocurrencies/blockchains. If I had to place a bet, I'd say that when we look back on this time period, we'll be talking about machine learning and AI defining this era. The rest of the "current hotness" technologies I could easily see not getting that big.
Now, as I understand it, growth has slowed outside of the labs and Moore's Law appears to no longer be valid. So, there will likely still be growth, but the growth will be more rare, difficult, and expensive?
Like your VR example, we've often had great predictions of the future, and very few of them actually pan out as expected.
I don't know, it's just a thought I've pondered.
Tech was just immune from it because of the immaturity and infancy of the technology itself. Now that it's a mature business, it's being subjected to the same pressures and issues as any established industry.
In some ways it's just tech "growing up".
Alternatively, since the number of funded startups seems to be increasing, you may prefer to look at total valuation over time since graduation. Even if the mean is decreasing, the total number of 'successful' startups may be increasing and no 'end' is signaled.
I'm not convinced that 'there are no good startup ideas left in this technology era' because the big winners are all black swans. By definition, they defy conventional wisdom.
I’m not convinced that the dominant YC companies looked obviously dominant at their early stages; they only seem dominant in retrospect.
As for why the giant companies at YC are still the dominant winners, it’s because the winners just keep growing. We don’t have a sense of scale; when Airbnb was a $500m company it was YC’s poster child. Now Airbnb is worth many billions of dollars, and of course it still is the poster child, while companies like LendUp are valued at $500m but are not even talked about.
It’s not that YC and its startups are less successful; it’s that some are so incredibly successful that you stop paying attention to the merely successful ones.
If the measure is valuations coming out of YC, Airbnb raised at a $3m valuation. A lot of YC companies raised $3m at a $14m+ valuation, but that’s more an indication of the market than of the likelihood of success of those companies.
The low-hanging fruit has been thoroughly picked, 'tis all
Big internet giants can reap the benefits of the startup scene by picking up promising startups early on, before their valuations grow. Just hinting that they might provide a free alternative that is good enough can drop a startup's valuation.
When Microsoft was the scary monster of the software business in the 80s, every software startup had to have a Microsoft strategy: what to do when MS shows interest. Show them a demo before the product is ready and they have several ways to shut it down, or to buy it off and kill it.
I’d write more, but I have to go to the chemist’s to buy some westinghouse relays for my Bell systems electrified collating typewriter-telegraph.
So, on the "large" end you'll have the googles, facebooks, etc and on the "micro" end you'll see a bloom of people starting small projects like those shown on indihackers.
It's something I've been hoping for since 2011!
Another paradigm shift in media (like recorded sound, recorded video, radio, TV, computing, web, and mobile) is what’s needed to produce another startup boom. It all comes down to media - the media is the message.
“Money talks” because money is a metaphor, a transfer, and a bridge. Like words and language, money is a storehouse of communally achieved work, skill, and experience. Money, however, is also a specialist technology like writing; and as writing intensifies the visual aspect of speech and order, and as the clock visually separates time from space, so money separates work from the other social functions. Even today money is a language for translating the work of the farmer into the work of the barber, doctor, engineer, or plumber. As a vast social metaphor, bridge, or translator, money—like writing—speeds up exchange and tightens the bonds of interdependence in any community. It gives great spatial expansion and control to political organizations, just as writing does, or the calendar.
I've long since postulated that I'd absolutely volunteer to 'jack in' to a neural method of controlling a computer - complete with my standard joke about being willing to have a wifi antenna poking out of my skull.
I wonder, then, if we are going about this the wrong way. We are trying for ocular stimulation directly. If we could skip that and move to neurological stimulation directly, I'd expect VR and AR to finally reach the tipping point. There is, after all, a finite amount of miniaturization that's possible.
It is purely a hunch that tells me VR/AR are not destined for wide-scale 'normal people' adoption until it doesn't require external apparatus to utilize.
As it is, we already have people who don't even like wearing simple eyeglasses. However, if it didn't require such, then it may just be something we humans add to our bodies to augment it.
I'm sitting in bed with my laptop right now, and if I could choose between reading this on a pair of lightweight glasses or my laptop, I'm not sure what the laptop has to offer. The big issue is touch typing; no (macro) gesture-based virtual keyboard is ever going to be usable for professional workloads. I sometimes wonder if some kind of one- or two-handed finger-chording input could be as efficient as a qwerty keyboard. I would be willing to toss my decades of qwerty experience if I could eventually get something as fast without having to carry around a keyboard. I imagine something like this device from Children of Men: https://youtu.be/sJO0n6kvPRU?t=2m4s (1024 "keys" should be plenty, so it's technically possible).
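The 1024 figure is just ten fingers treated as ten bits: 2^10 = 1024 distinct chords. A toy sketch of the decoding side, with a made-up chord table:

    # Ten fingers, each up or down, give 2**10 = 1024 distinct chords.
    # Hypothetical chord table: bitmask of pressed fingers -> character.
    CHORD_MAP = {
        0b0000000001: 'e',  # right index alone
        0b0000000011: 't',  # right index + right middle
        0b0000000101: 'a',  # right index + right ring
        # ... room for 1021 more chords
    }

    def decode(pressed):
        """Map a set of pressed finger indices (0-9) to a character."""
        mask = sum(1 << f for f in pressed)
        return CHORD_MAP.get(mask, '')

    print(decode({0}), decode({0, 1}))  # e t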
Regarding direct brain-computer interfaces, I just don't see the technological barriers going away any time soon. You'd either need some type of non-invasive technology that could wirelessly stimulate the optic nerves (aside from light, obviously), which I'm not sure exists even in theory, or such sophisticated nano-machinery that it would be effectively invisible, like a neural lace. I don't see either of these things happening for decades at least (I would love to be proven wrong though!).
I don't know if the form factor that will finally trigger mass adoption will resemble currently available headsets. The breakthrough might be retinal laser projection or light field displays. I just think that if nothing else, the ability to move our current workloads to a portable virtual display is such an obvious improvement I can't imagine it not happening as soon as the technology is good enough. Of course the same can be said for BCI but that doesn't even work in the lab yet.
As for the direct methods, I think we may get there someday. We already enable paralyzed people to interact, albeit on a minimal scale, with a computer using nothing but their mind. There is even a DIY movement that has enabled this, again on a minimal scale, for hackers at home.
No timeline, no estimates, but I think we may get there.
My thinking is that miniaturizing is a limited endeavor. We're very unlikely to ever have things like AR by means of contact lenses. So, we're looking at something people will wear.
It is my own personal view, but I see no great benefit in consuming print media by means of VR. For that, I have a tablet and a few ebook readers around the house.
I doubt I'll be alive to see it, but it seems that is the most probable method to get mass adoption of VR. Right now, it is really niche. Right now, we are still trying to do it the way we've been doing it for the past three decades. Things are faster and smaller but there are limits to those two traits.
I'd absolutely love AR. I have a modest collection of automobiles and sometimes work on them. I sometimes make things out of wood. I sometimes can't identify an animal species or plant family. Having the ability to augment that would be a wonderful way to enjoy life even more - at least for me.
But, even if we got these down to the size of eyeglasses (which seems really unlikely for the foreseeable future) I'm not sure we will get mass adoption by Jane Q. Public. It's not a cell phone you pick up and put away, but something worn. The form factor is, by itself, a negative.
I dunno? I can't predict the future. I'd still volunteer to test a viable method. With the ubiquity of cellular network connectivity, it'd be fantastic to have the sum of human knowledge at your immediate beck and call and without the need for an external device.
"Where are all the startups? U.S. entrepreneurship near 40-year low"
In recent weeks, this issue has been discussed several times on Hacker News. I recall someone recently wrote a comment saying, "We should distinguish between new businesses, like a pizza shop, and real startups, that might become big companies." But why exclude a little pizza shop that might become the next Pizza Hut or Domino's or Little Caesars? During the real startup era, in the mid 20th century, there were hundreds of successful pizza startups that turned into big companies. If we say "We won't count small pizza places because they cannot possibly become big companies that get listed on Wall Street," then we are simply assuming our conclusion. If we exclude all of the categories which were once hot, and which should be hot right now, and only focus on the handful of sectors that still have some life in them, then we can end up believing that the era of startups is still happening right now, while blinding ourselves to reality.
When the economy is healthy, small businesses, with the right leadership, can make the jump to the big time. It is from the frothy, primordial soup of little mom and pop shops that new giants emerge. Two examples off the top of my head: both McDonalds and Barnes & Noble were small family businesses, for decades, before new management took over and found a way to turn them into giants.
Focusing on the tech sector, and acting as if it is the only sector that matters, allows us to ignore the sclerosis that has crept over the USA economy since the end of the post-war boom, back in 1973. We should take a step back and look at the long-term trend. The economy has been increasingly sick for 40 years now.
We should ask ourselves, where does this trend end? How large do the monopolies grow? Will there ever be an era when the USA returns to creating new businesses at a rate that would have been normal for most of the 20th century?
1. Rich web apps - We know Gmail, Gdocs, Salesforce, etc are/have taken over from desktop apps. I'm continuously discovering more, e.g. Figma. Basically anything that was a single-user desktop app can be made into a realtime collaborative networked one.
2. Mobile business apps - Yes, we have mobile versions of business web apps, but these are typically only as useful as responsive web apps that drop critical features, forcing you back to a poor "request desktop" experience. What is needed is to create apps which make full use of what works on mobile: speech input, gestures, what have you. Just as PCs took over from centralized computers, and the web from OSes, future computing will be more mobile and ubiquitous. Current apps are translations of desktop/web ideas. We have a long way to go toward making great mobile ones. The many significant discoveries and inventions along the way will come from both large and smaller contributors.
Dear God, Why?
I would rather have MORE single-desktop, single-license products which I can buy once and then forget about, instead of having to upgrade every year and buy yet another license. (B2B)
On the positive side, there are some things a big company can do that a small company can't. This may contribute to why so many people are going to work at big companies, and we may see some good results.
I'm not sure if the machine learning revolution will favor the guy at home with a GTX 1080, but only once we go a decade or more without any big hardware/software companies starting up will I believe it.
OTOH, I suspect that the US has lost the competitive advantage to China in this regard. The "IoT" revolution probably isn't going to start in the US, since all the little board dev shops are in China, where you can actually purchase all the little parts you need without having to wait 6 weeks for a part or pay 10x in shipping.
Not to start a conversation about Ponzi schemes, scams, and vaporware, but it's impossible to ignore the millions of dollars of investment moving around on a monthly basis in this space.
First wave of disruption - it's all equal. Anyone with an idea, coding chops, and time can give it a shot.
Second wave - leaders of the first wave get caught unaware, some survive, others don't and new leaders are created.
Third wave - There is no third wave. Survivors of rounds 1 and 2 have learned the lesson, and realize that being wrong footed means death. They spend every erg of energy they can spare to identify trends, buy potential competitors and invest in survival.
Are We on the Verge of a New Golden Age?
This is a tautology and reflective of the mindset that produces articles like this one. There is only one Zuckerberg. Unicorns are by definition rare, and disrupting entrenched business interests is rare because they are entrenched. That's what it means.
It only seems like everyone is a founder because that's how it was in SV. It only seems like everyone stopped creating startups because that's how it is in SV. But like everything else in the world, trends start at the coasts and work their way inward, to the point where SV tech growth may be stagnating or declining while Midwest startups are booming. And if the Midwest is firing up, African startups are virtually exploding.
Big businesses have always owned every decade of capitalism, and that's not going to change; economies of scale are too hard to turn away from. But startups aren't dying, they're just taking on less sexy, harder problems that don't impact SV. So of course SV thinks entrepreneurship is dying.
They are people starting their own business. A friend borrowed some money to start a business where he drives a remote control vehicle down pipes to examine them from the inside. The space didn't have much 'local' competition and he was able to spot that, take advantage of it, and now has a dozen employees, has repaid the money, and is seeking to expand his operations. A similar experience was loaning my sibling some money so he could start a plumbing company - except it is more niche and not residential. They use some tech but they aren't tech companies.
It seems to me that the SV pundits see the tech startups while not seeing the guy who does vehicle power washing on-site, the interior decorator that uses 3D modeling and VR, or the folks who opened a diner where they have automated the food ordering process.
But startups were more about hard tech in the 80s and early 90s. It will take a while for new big consumer tech platforms to emerge, but when it happens, there will again be a shift towards viral consumer services.
This is new. All those things changed in the last 15 years. The result is that VCs have gone for market share, not technology. (Or they've gone for pharma, where patents still work.) Until about 2000, Silicon Valley VCs wanted startups to show that they had a strong intellectual property position. That all changed in the first dot-com boom, when it started being about buying market share.
This hurts innovation. Why work on a hard problem?
There will always be cycles, but it's so easy to be a naysayer, just write a medium post. People have been calling for the collapse of Silicon Valley since the dotcom crash, and at some point they'll be right, but they've been wrong so far.
Currently, I don't see any companies that are particularly interesting, but I'm confident there's some young kid out there that will create something great that will capture everyone's attention in the next couple of years, and then a new land-grab will occur all over again. It always has happened, and will always continue to happen.
Even "old" startups like Palantir that claim valuations at $20 billion are privately being revalued down significantly .
We're simply moving into a different era and the very public, very flash, direct to consumer startup market appears to be well beyond saturated. The money will go back to where it's always gone, quieter B2B type companies that build boring technology that solves less glamorous, but more important, problems.
1 - https://www.bloomberg.com/news/articles/2017-10-17/palantir-...
I disagree with this. I think the bloom is off the rose for these companies (FB, Google, Amazon, et al.) in terms of their image as a "cool place" to work. I think they will increasingly be viewed for what they essentially are: just other big corporations, corporations that only have their own best interests at heart.
This nearly reads like a propaganda piece intended to discourage people from doing their own thing.
"AI doesn’t just require top-tier talent; that talent is all but useless without mountains of the right kind of data. And who has essentially all of the best data? That’s right: the abovementioned Big Five, plus their Chinese counterparts Tencent, Alibaba, and Baidu."
Making the next quantum leaps in AI is not a question of data advantages.
1) We have just seen the world's best Go-playing AI trained completely from self-play, without access to any labeled data nor hand-engineering, trained on a few machines. This happened at Google, but could have easily been done by a small startup.
2) Even the pioneers of deep learning are now strongly pushing back on the methodology that lots of labeled data is required to solve AI problems.
Geoffrey Hinton: "I don't think it's how the brain works. We clearly don't need all the labeled data."
In a nutshell, the biggest challenge of pushing AI forward is tackling unsupervised learning, not having "better data".
2) Read the article that you are referencing please. What you are implying is not the thesis of that article nor is it what Geoffrey Hinton is saying.
What Hinton is saying is, we should throw out deep learning. What he is saying is that the current approaches to AI are fundamentally broken and aren't going to result in artificial general intelligence.
Backpropagation is not just used in supervised learning, it is also used in unsupervised learning. I happen to agree with Hinton, in that there is too much hype around the current state and successes of AI, which has mainly been in "narrow AI".
AI is a term that gets thrown around a lot these days. But there is a big difference between technology that automates the tedious tasks of daily life, and artificial general intelligence.
In the world of deep learning, data is king.
>After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo - which had itself defeated 18-time world champion Lee Sedol - by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world's best players and world number one Ke Jie.
1) AlphaGo Zero was indeed trained in the way I mention.
2) As directly quoted from the article, Hinton believes that a better way of learning doesn't require all that labeled data. If such a method is invented, as is required to push AI forward, big corporations would not have a data advantage, which is my original point.
I was specifically talking about AlphaGo, not AlphaGo Zero (the from-scratch version). Also, if you read the AlphaGo Zero paper, the key innovation driving the self-learning is the use of MCTS as a policy improver, which couldn't feasibly have been done without AlphaGo and its supervised learning.
And I think Hinton is saying that we need fundamental breakthroughs for AI, and I don't think he is in favor of "traditional" modern neural network architectures. Anything that requires SGD, he doesn't like.
AlphaGo Zero is the product of incremental improvements made by the same group of computer Go experts that built the original system. _They_ learned from each experiment and incorporated that knowledge in the next version. It was not obvious _a priori_ that the AlphaGo Zero architecture or training method would succeed; if it had been, then the AlphaGo team would not have built the earlier versions. And while it runs on "only" 4 TPUs for inference, those TPUs are each about 30x as powerful as the computers that the original AlphaGo ran on, so it's more like a reduction from ~180 GPUs to 120 equivalent GPUs.
I am not disputing that having more financial resources would help anyone hire talent and build awesome infrastructure, and certainly the latest way of training AlphaGo Zero was aided by earlier experiments that relied on labeled data and extensive computational effort. However, by no means do I think big corps have a lock on these kinds of advancements. There will always be great people who would rather go the startup route, and both algorithmic and hardware advancements are drastically reducing the operational cost of training AI systems. Thus, when it comes to AI, I think very small teams will be able to get very far with the right approach.
It might just be that in the consumer space there are some exceptionally good companies, where that hasn't really happened in enterprise, or it could be a function of the technologies and markets. Amazon (AWS) and Google to some extent compete in enterprise as well, but they're not as dominant relatively.
My personal opinion - as long as there is a market demand and as long as you're a business fulfilling that demand, startup or not, whether the tech is accessible or not, it's all that matters.
However, I am more interested in seeing technology disrupt old heavy industries. Go outside Silicon Valley and there are plenty of those.
> The market capitalization of Bitcoin vastly exceeds that of any Bitcoin-based startup. The same is true for Ethereum.
The market cap of the US dollar exceeds that of any financial institution. What's the fucking point?
My qualifications include a downstream consumer network of people ripe for my Business Savvy and I'd like to let you in on the opportunity to harvest the wealth which your seeds would grow through my hand.
In fact, I wrote a book about this and it's yours for $50 - the title of the book is "how to get people to pay you $50 for your book"
Reserve your exclusive .txt copy today for only three hundred payments of .0015 BTC
ML won’t require mass amounts of data forever. One-shot learning will get solved.
This is nonsense. I would challenge you to show any few-shot learning work that isn't basically some form of transfer learning in disguise.
Few-shot learning is super useful if you're in a position where you don't have much data, but will never compete with huge stacks of data.
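To make the "transfer learning in disguise" point concrete: a standard few-shot recipe is nearest-class-prototype classification on top of a pretrained embedding, and the embedding - trained on the big data stack - does nearly all the work. A minimal sketch, where `embed` is a stand-in for any pretrained encoder:

    import numpy as np

    def embed(x):
        """Stand-in for a pretrained encoder (e.g., a CNN backbone).
        This is where the huge stack of prior data already lives."""
        rng = np.random.default_rng(0)  # fixed projection as a placeholder
        w = rng.normal(size=(x.shape[-1], 64))
        return x @ w

    def few_shot_classify(support_x, support_y, query_x):
        """Prototypical-network-style few-shot classification: average the
        k support embeddings per class, then assign each query point to
        the nearest class prototype."""
        z = embed(support_x)
        classes = np.unique(support_y)
        protos = np.stack([z[support_y == c].mean(axis=0) for c in classes])
        q = embed(query_x)
        dists = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return classes[dists.argmin(axis=1)]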
"I would challenge you to show proof of X" in ML is an empty challenge when the technology is in its infancy. You might as well say "I challenge you to stream HD video over the internet" in 1989. Theoretically possible but technically impossible for the time. We proved it possible as tech improved, though.
The problem with cynicism is it costs nothing but still makes you look like an expert.
This has literally nothing to do with few-shot learning though. You can always make a crappy model with a small amount of data and then improve it as you get more. And my point is that if you have a competitor with several orders of magnitude more data, their models are almost certainly still going to be better than yours.
Few-shot learning will probably be able to improve the baseline of what you can do on certain tasks, but you're not magically going to learn a sufficiently accurate cancer diagnosis algorithm from 5 radiology images.
The best case for few-shot learning is that you go from "worse than the alternatives" to "good enough for some people to get value", which will probably happen a few times, but is going to be a minor phenomenon.
>will never compete with huge stacks of data
Children learn with sparse data based on composable models, and are bio machines.
I write about this extensively here: https://qbix.com/blog/index.php/2017/08/centralization-and-o...
Decentralization of trust, power, energy generation, and much more is on the way. This will lead to a lot more startup activity.
I'm definitely not strictly veggie: I just wolfed down a nearly free burger when offered! And though I haven't flown for years, I may need to visit China once or more over the next couple of years.
I.e. one (or one thousandth, given his carbon footprint) less Zuckerberg? I could get behind that. Heck, I supported NPG back when I thought humanity had a chance.
The problem is still that it's unlikely to be the solution. If we want the West to change, there are two ways to go.
1. Legislation - This is the force option, i.e. if you break the law you go to jail and we'll come and take your property by force.
2. Market - Make technology that makes alternatives easier/more efficient/sexier etc.
2 is hard, especially if the alternatives are more expensive or about the same. The cost of panels can't really go much lower, so we're stuck. Many people on the eco side don't like nuclear (they're crazy).
Technology is getting better and I think small steps in regulation (i.e. taxing emissions slightly) may work. That's the direction that we're heading in so I'm pretty hopeful.
1. There has to be some of this: no market is totally free, and free-riders are always a problem to some degree.
2. Actually this is exactly the flavour of product/service that I am working on. Have the more efficient solution be better and easier, not a hair shirt. The area we are working on could knock 5% off Europe's entire carbon footprint and save most families hundreds of USD per year also.
The impact of eating meat for how long? One year? I know that going vegan for a year is better than getting an electric car.
While true, it basically sends the message of "If you have enough money you are free to do what you please to the detriment of the planet."
I'd like petrol to be 3 or 4 times the cost it is now. That would greatly reduce emissions, but fossil fuels could still be used in situations where there was no alternative (long distance flight, huge earthmoving machines?).
Inevitably, rich jerks would continue to ride their jetskis. So what?
You can't legislate people to have the right attitude. But taxing things does drive bulk behaviour.
I mention this just to share that there is some chance of this improving, even in areas we might not think likely. If it is a long-term mining operation (short-term someday, perhaps), then I can't think of a technical reason that prevents wind and solar from being used to generate the on-site electricity.
A recent HN article was about one of the mining dump trucks being powered by battery and using regenerative braking, which meant it was able to charge itself. Those giant diggers already use braking in the cables that power their shovels. Maybe there is something to capture there as well.
In my work, I had the chance to deal with some riggers and some crane operators (smaller stuff) by just exposure while collecting data. During that exposure, I learned something new. Lifting the stuff up is actually pretty easy. It is putting it down that is difficult. Maybe there is some energy to capture in that process?
But it wouldn't be a detriment, the extra cost would be derived from the expenses associated with offsetting the negative externalities (by cleaning pollution, planting trees, recycling, water purification, whatever).
Basically "only the rich shall eat healthy"
And to be totally honest, I want to tackle this issue through a startup called "Standard Pantry", built on the idea that one should have access to a standard pantry of goods and recipes (in place of a standard minimum income), with which they should be able to sustainably and healthily feed themselves.
> only the rich shall eat healthy
> only the rich shall eat fancy
It's easy to eat healthy if you're not rich. Buy non-processed ingredients and cook them yourself.
That's the problem.
We need to educate on a basic standard pantry that allows healthy to be so easy that it doesn't even cross our minds.
If the only problem is nutrition education then that is not a rich person privilege.
A large majority of the public cannot afford Whole Foods but are educated enough to understand that home-cooked meals from non-processed ingredients are healthier than their counterparts.
If education is all it took we'd all be rich with six pack abs. (Derek Sivers.)
Habit rules all.
Make unhealthy food illegal? Make marketing unhealthy food illegal?
I learned to cook from my depression era grandmother. Home cooking is healthy, but not always cheap.
My grandmother used to tell me "this meal is $1.37 per serving"
As we made things, I learned to cook in Le Creuset pots from a woman who used to travel the world with Martin Yan and Julia Child... (we were a State Department family and had a lot of opportunity)
But my main point remains: the "educate" part is part-and-parcel of the whole greater idea of a home-cooked meal: "family time".
"Fast food is the bane of existence as we have basically brought generations to not value what home cooking means... then we exploit them in the cheap labor force and perpetuate the idea that having a "home cooked meal" means that "the way mom used to cook it" is an actual phrase and it means the disruption of the basis of modern society, the family...
This issue can spiral and spiral, but my point is that rather than basic income, we need a basic pantry: the understanding of, access to, and knowledge needed to cook a good meal, without breaking the home (given how hard people need to work at jobs just to put food on the table).
I'm there once a year. I'll pin this.
Amount US taxpayers spend yearly to subsidize meat and dairy : $38 billion
To subsidize fruits and vegetables : $17 million
US retail price of a pound of chicken in 1935 (adjusted for inflation) : $5.07
In 2011 : $1.34
Pounds of chicken eaten annually per American in 1935 : 9
In 2011 : 56
Revenue collected by US fishing industry per pound of fish caught : $0.59
Portion of this figure funded by taxpayers as subsidies : $0.28
Anyway here's another source that shows that meat consumption has dramatically increased since the 1920s, though that wasn't my central point:
I want to show that there is a lot more murder today than before because of guns.
I pick one year (a far-edge-of-the-bell-curve type of year), the year with the fewest murders, and compare it to today.
Without showing an average, or even a distribution, bias should be expected. There is very little reason for picking a single year in these types of arguments, and it is done too often.
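A quick simulation (my own) shows how misleading that framing is even when nothing has changed at all:

    import numpy as np

    rng = np.random.default_rng(1)
    # Fifty years of a flat murder rate plus year-to-year noise
    rates = rng.normal(loc=5.0, scale=0.5, size=50)

    today = rates[-1]
    print(f"vs cherry-picked minimum year: {today - rates[:-1].min():+.2f}")   # looks like a rise
    print(f"vs the long-run average:       {today - rates[:-1].mean():+.2f}")  # about zero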
I, for one at least, consider that over the millions of years evolution tried versions of homo sapiens that couldn't eat meat, and they were selected against... why would we now attempt to overturn the apparent wisdom of that selection?
That's pretty much a no-brainer for our health, for the environment and to reduce the suffering of millions of animals at the hands of the industrial livestock industry.
Personally, I used to eat meat multiple times a day, and now I only eat it once or twice a week. It has really improved my life. I think everyone should try it.
Because it doesn't scale.
"I've been disrupted by a leaner competitor!"
Our bodies specifically evolved for reproduction, but that doesn't stop us from using contraception when we have sex. Evolution also designed us to have wisdom teeth, but that doesn't stop people removing them.
Your argument assumes that natural selection has some kind of innate "wisdom", and that's just not true. Natural selection doesn't produce perfection or have any kind of intelligence behind it, it simply produces "good enough" (or, as we're on HN, the minimum viable product).
Also it's probably a mistake to attribute something like wisdom to what is essentially a statistical crap-shoot. The great temptation of religion is of course the idea that there is an Intelligence at work. But just to pick a silly example, does the dodo bird or the passenger pigeon think it was "wise" and "intelligent" for the forces of nature to select for meat-eating humans? Dodo philosophers and theologians in the last days - did they question why God would unleash such a fury on them?
Anyway I can tell you human ones will. Natural selection will most likely start killing off humans as soon as our technology can't keep up with all our baby-birthin'. It will manifest in forms you're already seeing in today's headlines. Voluntarily consuming less is an attempt to deal with it before the wisdom of natural selection deals with it.
Sounds promising. Maybe you can build a small or medium sized organization around trying to make that happen.
For example, it should be "three clicks and you're in" to found a virtual company and state "this is the problem we are going to solve - who's with me?" Another person from across the globe should be able to say "heck yeah, let's do this", and then Facebook would say "that sounds great, we will host your site/app/whatever, and when you show traction on that problem you will hit level 1 funding of X", and so on and so on...
Facebook has the daily attention of over a billion people, they claim, yet they literally just steal that energy and provide no building power.
Start a fucking economy and empower people unlucky enough to be born in a far-flung place such as the Congo but smart enough to know how to solve various problems affecting many.
Facebook is not making the world a better place until they do something like this.
So, I don't know if we actually need to change much at this point.
EX: Per-person US emissions are down ~18% since 2000. We are on track to hit a 50% drop reasonably quickly.
I also strongly believe we would see more innovation and generally a more resilient economy.
The internet looked like it might help for a while, but it's currently being constricted by the ISPs and corporations like Facebook and Google, while more and more data is tracked about each person at every moment. Authoritarianism broke down many times because you couldn't track what everyone was doing, and eventually someone managed to make a plan to fight against you. Information technology is just making it easier for authoritarians to maintain control.