Hacker News

New stuff: Machine learning that works. Rust's borrow checker. 3D SLAM that works. Voice input that works. Lots of image processing stuff. Machines with large numbers of non-shared-memory CPUs that are actually useful. Doing non-graphics things in GPUs.

The webcrap world is mostly churn, not improvement. Each "framework" puts developers on a treadmill keeping up with the changes. This provides steady employment for many people, but hasn't improved web sites much.

An incredible amount of effort seems to go into packaging, build, and container systems, yet most of them suck. They're complex because they contain so many parts, but what they do isn't that interesting.

Stuff we should have had by now but don't: a secure microkernel OS in wide use. Program verification that's usable by non-PhDs. An end to buffer overflows.




Old timer rant:

IMO Machine learning mostly doesn't work (yet) with a couple exceptions where tremendous amounts of energy and talent have made that happen. For example, image processing with conv nets is really cool, but the data sets have been "dogs all the way down" until very recently. And for the past few years, just getting new data and tuning AlexNet on a bunch more categories was an instant $30-$50M acqui-hire. Beyond a few categories, its output amuses and annoys me roughly equally.

But the real problem with ML algorithms IMO is that they cannot be deployed effectively as black boxes yet. The algorithms still require insanely finicky human tuning and parameter optimization to get a useful result out of any de novo data set. And such results frequently don't reproduce when the underlying code isn't given away on github. Finally, since the talent that can do that is literally worth more than its weight in gold in acqui-hire lucky bucks, it doesn't seem like there's a solution anytime soon.

Voice input? You gotta be kidding me. IMO it works just well enough to enter the uncanny valley, deceiving the user into trusting it, and then fails sufficiently often to trigger unending rage. Baidu's TalkType is a bit better than the godawful default Google Keyboard though, so maybe there's hope.

GPUs? Yep, NVIDIA was a decade ahead of everyone by optimizing strong-scaling over weak-scaling (Sorry Intel, you suck here. AMD? Get in the ring, you'll do better than you think). Chance favored the prepared processor here when Deep Learning exploded. But now NVIDIA is betting the entire farm on it, and betting the entire farm on anything IMO is a bad idea. A $40B+ market is more than enough to summon a competent competitor into existence (But seriously Intel, you need an intervention at this point IMO).

Machines with lots of CPUs: Well, um, I really really wish they had better single-core CPU performance because that ties in with working with GPUs. Sadly, I've seen sub-$500 consumer CPUs destroy $5000+ Xeon CPUs as GPU managers because of this, sigh.

Container systems? Oh god make it stop. IMO they mostly (try to) solve a wacky dependency problem that should never have been allowed to exist in the first place.

The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop.


> The web: getting crappier and slower by the day. IMO because the frameworks are increasingly abstracting the underlying dataflow which just gets more and more inefficient. Also, down with autoplay anything. Just make it stop.

One of my favorite features now on my iPhone is "Reader View". Have a new iPhone 7, which is very fast, but some pages still take too long to load, and when it finally does, the content I want to read is obscured with something I have to click to go away, and then a good percentage of the screen is still taken up by headers and footers that don't go away. The Reader View loads faster, and generally has much better font and layout for actually reading the content I'm interested in.

All of which is to say, the sole purpose of what a lot of web developers are working on today seems to serve no purpose other than to annoy people.


> This provides steady employment for many people, but hasn't improved web sites much.

Just a week ago I made the startling discovery that FB's mobile web app is actually worse than a lot of websites I used to visit in the late '90s and early 2000s on Netscape 4.

Case in point, their textarea thingie for when you're writing a message to someone: after each keypress there is an actual, very discernible lag until said letter shows up in the textarea field. So much so that there are cases where I've finished typing an entire word before it shows up on my mobile phone's screen. I suspect it's something related to the JS framework they're using (a plain HTML textarea field with no JS attached works just fine on other websites, like on HN); maybe they're doing an AJAX call after each keypress (?!), I wouldn't know. Whatever it is, it makes their web messenger almost unusable. (If it matters, I'm using an iPhone 4.)


FB does auto-complete for names, groups, places, and so on. So for each char it does a callback to see if it should display the dropdown. Using a Swype-style keyboard is a bit nicer because you're only adding full words at a time.
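A debounce wrapper is the usual fix for that kind of per-keystroke lookup; a minimal sketch in plain JS (a hypothetical helper, not FB's actual code, and `fetchSuggestions` is an assumed stand-in):

```javascript
// Delay invoking `fn` until `wait` ms have passed since the last call,
// so a burst of keystrokes triggers one lookup instead of one per character.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                       // cancel the pending call
    timer = setTimeout(() => fn(...args), wait); // reschedule it
  };
}

// Hypothetical usage on every keypress:
//   const lookup = debounce(text => fetchSuggestions(text), 200);
//   textarea.addEventListener('input', () => lookup(textarea.value));
```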


AFAIK they log everything you type, even if you don't submit it. So maybe that has something to do with it?


Hasn't improved websites much? I remember the days of iframes and jQuery monstrosities masquerading as web "applications". The idea of a web-based office suite would have been laughable 20 years ago.

My guess is you haven't actually built a real web application. The progress we've made in 20 years is astounding.


Another way to look at it is that, even after 20 years and huge investment from serious companies, we can still only build poor substitutes for desktop applications.

Don't get me wrong, it is amazing progress given the technology you have to fight. But in absolute terms it's not that great.


This is my view exactly. In the past 10 years I've developed both a web app [1] and a cloud-based native app [2]. Developing the native app was by far the more enjoyable and productive experience.

The great thing about native development is the long-term stability of all the components. I have access to a broad range of good-looking UI components with simple layout mechanisms, a small but robust SQL database, a rock-solid IDE and build tools - all of which haven't changed much in the past decade. Plus super-fast performance and a great range of libraries.

To put it in terms of the article: the half-life of native desktop knowledge is much longer than 10 years. Almost everything I learnt about native programming 10 years ago is relevant now.

Unfortunately, the atrocious deployment situation for native apps is also unchanged in 10 years (ie. "This program may harm your computer - are you really sure you want to run it?"). But on the other hand having a native app has allowed me to implement features like "offline mode" and "end-to-end encryption" that would be difficult or impossible in a web app. This has given my business a distinct advantage over web-based alternatives.

[1] https://whiteboardfox.com

[2] https://www.solaraccounts.co.uk


I really am very glad to hear someone writing in public what I've been mentioning to colleagues and all who would listen for the past few years.


I've never built a web application of any description. As a user, what are the improvements that I should be looking for that have been introduced over the past 10 years? It was longer ago than that that AJAX started getting big, and as far as I can tell, that was really the last key innovation for most apps: being able to asynchronously load data and modify the DOM when it becomes available. I'm aware of other things like video tags, WebGL, canvas, and such that allow us to replace Flash for a security win, but that seems like replacing a past technology just to get feature parity with what we had a decade ago.

Everything else seems like stuff that makes things better for the developers but not much visible benefit to the user. I can understand where a comment about the web not being much better would come from, on the scale of a decade.

Go back 20 years, and you're talking about a completely different world; frames, forms, webrings, and "Best viewed with IE 4.0". But if '96 to '06 was a series of monumental leaps, '06 to '16 looks like some tentative hops.


There's actually a fair number of new features in the web today that you couldn't do in 2006 - offline access, push notifications, real-time communications (without a hack that breaks on most firewalls), smooth transitions, background computation, the history API, OS clipboard support, accelerometer access, geolocation access, multi-touch, etc.

Few websites use them effectively yet, at least in a way that benefits the consumer (several are using them to benefit marketers). This could be because developers don't know about them, consumers don't care about them, or perhaps just not enough time has passed. XHR was introduced in 1999, after all, but it took until 2004 before anyone besides Microsoft noticed it.
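Most of those capabilities can at least be feature-detected before use; a rough sketch (the property names are the standard browser ones, but the helper itself is made up, and takes the global object as a parameter so it can be exercised outside a browser):

```javascript
// Report which of the above APIs the given global environment exposes.
// In a browser you'd pass `window`; elsewhere, any object with the same shape.
function detectFeatures(global) {
  return {
    geolocation: 'geolocation' in (global.navigator || {}),
    pushNotifications: 'PushManager' in global,
    historyApi: !!(global.history && global.history.pushState),
    webrtc: 'RTCPeerConnection' in global,
  };
}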


My first install of Netscape Communicator had an offline mode, and one of my colleagues recently told me how they had fully working live cross-browser video conferencing in '98. Their biggest competitor was WebEx, which is still around.

I think many of us underestimate what was possible to do in browsers. What has happened is that these features have been democratised: what took them months to build I can now pull off using WebRTC in the space of a weekend.


My very first software internship ever was getting streaming video to work for the Air Force Center Daily at MITRE Corp, back in 1997. I did it with RealPlayer, Layers in Netscape (remember them?) and DHTML in IE.

The thing is - the polish matters. You can't do viable consumer apps until they actually work like the consumer wants, which is often decades after the technology preview. You could emulate websockets using iframes, script tags, and long-polling back in the late '90s, but a.) you'd get cut off with the slightest network glitch and b.) you'd spend so much time setting up your transport layer that you'd go bankrupt before writing the app.
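The long-polling trick mentioned above is a short loop in modern JS; a sketch under assumptions (`request` is a stand-in for whatever transport you have - it should resolve with a message, or null on a server-side timeout):

```javascript
// Minimal long-polling loop: issue a request the server holds open until it
// has data, deliver the message, then immediately reconnect.
async function longPoll(request, onMessage, { maxRounds = Infinity } = {}) {
  for (let i = 0; i < maxRounds; i++) {
    try {
      const msg = await request();
      if (msg !== null) onMessage(msg); // null = empty timeout, just re-poll
    } catch (e) {
      // the "slightest network glitch" case: back off briefly, then reconnect
      await new Promise(r => setTimeout(r, 1000));
    }
  }
}
```

The hard part in the '90s wasn't this loop, it was building a reliable `request` out of iframes and script tags.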


Thank you for the explanation. Some of those I've known about (but they didn't come to mind in my original comment), and some are definitely things that I would've taken for granted (coming from a native application programming background). Those are all features added to web standards and implemented in browsers though, right?


They're all in web standards. Browser support varies but is generally pretty good for most of them.

They're taken for granted in native application programming, yes, but the big advantage of browsers is the zero-cost, on-demand install. This is a bit less of an advantage than it was in 2003 (when new Windows & Flash security vulnerabilities were discovered almost every day, and nobody dared install software lest their computer be pwned), but there are still many applications where getting a user to install an app is a non-starter.


Most programs that existed before (let's be real: programs have been replaced with web apps now) defaulted to being offline only, or would sync. I miss those days.


> I miss those days.

Ditto. I like having a copy of a program that no one but me has access to modify, and I like that I don't have to rely on my ISP to use my computer. If I like a program, I don't want it to change until I choose to change it. I don't want to be A/B tested, marketed to, etc. I'd rather buy a license and be happy =)


> The progress we've made in 20 years is astounding.

And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now. In fact, I'd say that "open link in new tab" is dead on most of the new web "applications", I'm actually surprised when it works. The same goes for the "go back with backspace" thingie, which Google just killed for no good reason.

Copy-paste is also starting to become a nuisance on lots of websites. Sometimes when I try to do it a shitty pop-up shows up with "post this text you've just copied to FB/Twitter", or the app just redirects me somewhere else. It reminds me of the Flash-based websites from around 2002-2003, when they were all the rage.


> And yet "open mail in new tab" in Gmail has been dead for at least a couple of years now.

Use the basic HTML version. It's worse in a few ways but better in most others. Including speed.


'Back with backspace' was changed to Cmd+left-arrow, presumably because a simple backspace can change the page unexpectedly for someone who thinks they are in a text field.


'Back with backspace' has been standard in the Windows file manager for as long as I can remember. Given that the file manager was the moral precursor to the browser of today it would have been nice to retain it.

Back with backspace!


Actually, it didn't go back on Windows XP. After using Windows 7 for 4 years, I am still not used to this 'new' behavior.


I sit corrected, thank you.


I know the reasons, I just think they're stupid. They've replaced one simple key-press with two non-intuitive ones. On my keyboard I have to move both my hands down in order to press the 2 keys, the backspace key was very easy to reach without moving my hands.

On top of that I actually have no "CMD" key on my keyboard, I have a "Ctrl" key which I assume is the same as "CMD" (I also have a key with the Windows logo which I had assumed it was the CMD key, I was wrong). KISS has gone out of the window long ago.


Alt + left arrow still goes back in Firefox and Alt + right arrow goes forward. It's been that way for quite a while.

The Outlook Web app on the other hand sometimes blocks backspace from navigating back, presumably to stop you inadvertently jumping back from inside a text field. This is only "on" when inside a text field in the first place, so if MS could do it, I don't see why Google's better engineers couldn't.
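The check itself is tiny; a sketch of the predicate such a keydown handler needs (a hypothetical helper, written as a pure function over the event target):

```javascript
// Decide whether a Backspace press should navigate back:
// only when focus is NOT inside an editable element.
function backspaceShouldNavigate(target) {
  const editableTags = new Set(['INPUT', 'TEXTAREA', 'SELECT']);
  return !(editableTags.has(target.tagName) || target.isContentEditable);
}

// Hypothetical wiring in a browser:
//   document.addEventListener('keydown', e => {
//     if (e.key === 'Backspace' && backspaceShouldNavigate(e.target)) history.back();
//   });
```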


I've lost so many posts from hitting backspace without realizing I didn't have the input box focused that I'm more than happy with this trade-off.


Browsers have improved greatly, and new web development frameworks are necessary to make use of those improvements, but the actual process of building a usable web application doesn't seem that improved. It's certainly not any easier to achieve pretty much the same results.

> The idea of a web-based office suite would have been laughable 20 years ago.

What's laughable is how much effort has gone into rebuilding something in this platform with a result that is nearly the same (but worse) as what existed 20 years ago.


What kills me about a lot of the technology we use today is that few people, at least in positions of power are brave enough to pause every now and then and say, "WTF are we doing?" So many technologies live on because of so-called "critical mass," big investments, marketplace skills, and other things that are about anything other than the technology itself. These of course are mostly practical reasons and important ones at that, but at some point it becomes impractical to continue to cling to the practical reasons. IMO, the ability to do something truly different to make an advancement is often what separates innovators and intelligent people from the rest. When someone does try to act, the market essentially crushes them accordingly, making most efforts null and void, and dulling the senses and hope of everyone else watching, warning them not to try anything themselves.

x86 CPUs, web browsers, popular operating systems, and so on are all examples of this problem. At some point I really wish we could do something different, practical reasons be damned. It's sad that as many cool, "new" things we have, some of the core, basic ideas and goals are implemented so poorly and we are effectively stuck with them. This is one reason I hate that almost all software and hardware projects are so rushed, and that standards bodies are the opposite, but with only the bad things carried over. The cost of our bad decisions often weighs for much longer than anyone could imagine, just ask anyone who has designed a programming language or something major in software ecosystems.

As much as I enjoy all the new, shiny stuff, it makes me sad thinking about BBSs and old protocols like Gopher that represented the old guard and alternate routes, and the fact that we really haven't come that far. Overall things of course are a lot better, but in many ways I often feel like we're treating the symptoms and not the cause, or just going around in circles.

I could go on, but the rant would be novel length.


I find it super disappointing that Android, from an operating systems perspective, is so terrible. It's the newest popular operating system but it's no better than Windows, iOS, or Linux. It's a mess of passable APIs, average security, etc. Rushed to market for, of course, practical reasons.

I don't see any opportunity in the future for any person or company to take all the lessons learned in the last 50 years and build something new that takes it into account.

Same with browsers; It's only now that we kinda know what a browser really needs to be but there's no way to start from scratch with all those lessons and build a new kind of web browser. There is always going to need to be what they currently are and build on what was already done.

I understand why, but it's still kind of sad.


20 years ago there was exactly one multiplatform office suite, and if you think StarOffice was better than the current incarnation of Google Apps I'm not sure how to respond to that outside of laughter.


I also kind of question whether or not a web-based office suite is truly multiplatform. For the most part, it doesn't interact with my desktop so it's really a single platform.

It's almost like saying Microsoft Word is cross-platform because I can RDP into a Windows machine from Linux. It's not really part of Linux, it needs a client to access an application running on a remote server. The only difference is how complex the client is.


Is the web browser the best technology to achieve "multiplatform" for an office suite? It makes sense from a purely practical sense but technologically it's pretty terrible.


Practicality wins pretty handily here. Technology will always improve and web-based applications will become more and more feasible as a result.

The flip side of that equation is that poor practical choices never improve because there will only be more platforms to target.

If we made development decisions based on technological constraints alone, how is it supposed to improve?


I wrote pivot table functionality in XSLT and XML for IE6. Pretty much 10 years ago. It's not that you couldn't do it, it was that it wasn't worth it.

Your whole multiplatform thing is disingenuous because back then there really was only one platform: Windows. So you've conveniently forgotten about the Lotus suite, etc.

I also think you're vastly overestimating how far we've come in that timespan. V8 was pretty much 95% of the improvements, simply because you could do more than 1000 loops in JavaScript without killing the browser.

And yet today it is still harder to make a decent web app than it was in VB6 15 years ago.


I call BS. The progress that was made in the late eighties/early nineties was much more astounding. Going from character screens to fully event driven, windowed graphics mode was a much greater and more impressive change in a shorter time-frame.

At that time you had to learn a lot of new stuff in a short time too.


Well, in the early 2000s suddenly most of the computers in the world became connected to each other, and then after some time everybody got a powerful, networked computer in their pocket. I think this is pretty impressive too.


What exactly are you calling BS on? Nothing I said conflicts with your comments at all. It's not like progress can only happen once...


Have you tried to debug a babelified React site with sourcemaps and whatever...

I spent four hours just to find that the latest and greatest Express doesn't have the simple global site protection with a password (that it had in version 3), like with .htaccess - it is just not possible anymore. There was no elegant solution.
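For what it's worth, the old behaviour is only a few lines in the standard (req, res, next) middleware shape; a hand-rolled sketch (this is my own stand-in, not the removed Express 3 helper, and it assumes Node's `Buffer`):

```javascript
// HTTP Basic Auth middleware: let the request through only when the
// Authorization header matches the expected user:pass credentials.
function basicAuth(user, pass) {
  const expected = 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64');
  return (req, res, next) => {
    if (req.headers.authorization === expected) return next();
    res.statusCode = 401;
    res.setHeader('WWW-Authenticate', 'Basic realm="protected"');
    res.end('Access denied');
  };
}

// Hypothetical usage: app.use(basicAuth('admin', 'secret'));
```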

There may be some marginal progress while doing complex stuff, but doing the simple is harder and harder with each passing year.

Here is a simple question: is making a working UI now easier than with MFC circa 1999? If the answer is no, then that progress is imaginary.

Every new thing is strongly opinionated, doesn't work, and relies on magic. Debugging is a nightmare and we have layers upon layers of abstractions.

Please for the love of Cthulhu - if any of you googlers, facebookers, twitterers read this - next time you start doing the next big thing let these 3 be your guiding lights - the code and flow must be easy to understand, it should be easy to debug, it should be easy to pinpoint where in the code something happens - all of the frameworks' benefits become marginal at best if I have to spend 4 hours finding the exact event chain and context and place in the framework that fires that ajax request.

/rant over


Do the web based office suites use these new libraries and frameworks? (React, Angular, Webpack etc) Genuine question as I thought that companies like Google use their own in house libraries like Google Closure which have been slowly built up over many years.


Google Apps use Closure, and after taking a peek at the source of Microsoft's online office apps, they appear to be using Script# or a similar C# to JS tool, as I see lots of references to namespaces, and lots of calls to dispose() methods.

iCloud's office apps use Sproutcore, which eventually forked into Ember (though the original Sproutcore project still exists).


Probably not Google Apps. But Angular is a Google developed framework.


In 2005, I worked on a crud app framework that used data binding driven from the server and doing minimal partial updates of the web page in a way that is much more efficient both in resources and development time than anything currently mainstream. That was the second version of something that used XML islands to do data fetching before XHR was introduced by Microsoft. IMO most mainstream JS dev is just barely catching up with what smart devs were doing in individual shops.
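The core of that minimal-partial-update idea is a state diff computed server-side; a toy sketch under assumptions (flat view-state objects keyed by element, with the patch shipped to the client instead of a full page):

```javascript
// Compute a minimal patch: only the keys whose values changed between the
// old and new view state, so the client touches only those DOM nodes.
function diffState(oldState, newState) {
  const patch = {};
  for (const key of Object.keys(newState)) {
    if (oldState[key] !== newState[key]) patch[key] = newState[key];
  }
  return patch;
}

// Hypothetical client side: for each {id: text} entry in the patch,
//   document.getElementById(id).textContent = text;
```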


> Program verification

We really tried. It's hard, and verification of even simple programs easily hits the worst cases in automation. It will take another decade before program verification is digestible by mainstream programmers.


Spot on about machine learning. That's something that never worked all my life, but it seems like some of these young turks might be onto something there... I should probably sit up and pay attention to that one.


I think the fact that we're really starting to make native apps using web technologies speaks to the progress made with web apps.

The whole HTML rendering pipeline with advanced scripting support is really an innovation in itself. The downside is speed, but that's where we innovated the most: VMs for JavaScript.

Hopefully Web Assembly will really show the improvement we've made.



