Ask HN: What is the single top-priority software engineering problem?
208 points by abrax3141 14 days ago | 351 comments
If you had unlimited time and/or resources, what single software engineering problem would you address? I'm not talking about "Peace on Earth"-type problems, but rather real-world, practical problems facing software engineering that could actually be solved if you could pay a rationally large team of serious hackers a rationally large amount of money for a rationally long period of time. Another way of asking this is: What's the most important piece of technical debt across software engineering that could practically be solved if we put enough energy into it?

Development environments. The amount of time and hassle I've seen lost to getting a working development environment up and running is incredible.

Every time I talk to someone who's learning to program I tell them: "Setting up your development environment will be a nightmare. I have been doing this for twenty years and it's STILL a nightmare for me every time I do it. You are not alone."

Docker is reasonably good here... except you have to install Docker, and learn to use Docker Compose, and learn enough of the abstractions that you can fix it when something breaks.

https://glitch.com/ is by far the best step forward I've seen on this problem, but it's relatively limited in terms of what you can accomplish with it.

This is my calling. I'm the cofounder of Repl.it and I've dedicated my career to solving this problem.

Simon -- since you're the cocreator of Django, you might get a kick out of this: From loading up a Django environment to adding and rendering the first view takes less than a minute: https://gifs.amasad.repl.co/django.gif

Before starting this company I was a founding engineer on the React Native team, where I focused on speeding up setup and taking the drudgery out of cross-platform mobile dev. And before that I was a founding engineer at Codecademy, where we were attacking the same problem from an education angle.

With Repl.it, however, we're determined to solve the "getting started" problem while simultaneously scaling the environment to allow real work. It's incredibly challenging but we've made great progress. If anyone is interested in working with us, collaborating, or if you're simply passionate about this problem and want to chat, I'd love to hear from you: amjad@repl.it

I read that comment and thought “repl.it founder’s gonna be in here real quick.” Then I saw your comment with ‘0 minutes ago’ at the top.

I love Repl.it and taught my daughter python there. Can you please make a self hosted version? I think the only thing I've seen is that maybe sometime in the future you'll have it.

Great to hear :-)

Certainly the plan; just a matter of priorities and time. I'm curious: what's so attractive about a self-hosted version?

As long as my critical infrastructure and tools depend on someone else's computer, I'm nothing more than a (potentially well-paid) sharecropper.

In fact, I think that's a valid answer to the question posed by the headline. Returning the power to the end user, and keeping it there, should be the most important priority for software developers. This is a social problem, not an engineering problem, but unlike many other social problems the solution will have to be engineering-driven.

In more concrete terms, that means being able to self-host your tools.

> As long as my critical infrastructure and tools depend on someone else's computer, I'm nothing more than a (potentially well-paid) sharecropper.

I heartily agree. There are so many new tools with interesting concepts that I don't try because there is no offline/self-hosted version available.

Current example (though it could be replaced by a myriad of other tools; this is not specific to 'em): Notion. Apparently it could be adopted into my personal knowledge management, and it has some interesting features most of the software I've seen so far does not. But why would I ever invest even a moment of time to pay the costs of using the system (let alone the membership fees etc.) if one day, poof, it's gone, like Frank Sinatra, like WiiWare, like Microsoft eBooks, like Google Reader?

> Returning the power to the end user, and keeping it there, should be the most important priority for software developers.

This is what we need to return to. Look at the open-source manifestos, the FSF documents, heck, even certain sections of the Windows 9x user interface guides and the .NET Framework design guidelines indicate that the user should always be the focus, that the user should be the one in control.

It's a requirement for most of the US DoD, which is where I work (though that's changing).

> (though that's changing)

Though I'm glad for the flexibility it will offer you...that doesn't seem like an awesome idea/trend.

That gif is a really excellent demo. I'd been misled by your name - I assumed you were all about Jupyter-style REPL consoles. I didn't realise how much closer you were to Glitch.

I'll take a deeper look.

We started as just an editor and a console -- I initially modeled it after the DrScheme REPL (now DrRacket) -- but our users wanted more. Kids who learned to code using our repl wanted to use it to build and publish apps, so we evolved the product towards becoming more of an IDE while trying to keep the simplicity.

Check out our community as well, lots of hackers, especially young ones, sharing and collaborating on code: https://repl.it/talk

How does it compare to Codesandbox?

Focused on simplicity, speed, and generality. They do really well on frontend and client-side execution. Our speciality, on the other hand, is language support and containers/general dev environments.

Ah, okay thanks.

Thank you for your hard work!

My dream is a "Open in VSCode" button on each GitHub repo that would create a container dev environment (either locally or hosted), and that environment would be ready to run all test, CI, localhost etc.

You could trivially create a new environment to test out someones PR or work on a feature branch, etc. If you use hosted environments, then you could connect from any client. If you only have a web browser, then you could work from VSCode in the browser (it's all JS+HTML anyway).

Others could join your hosted environment for peer-programming Google Docs style.

On hosted environments, normally long builds could be done very quickly and smart caching on the backend could share incremental compilation state between your teammates and you.

We're pretty close to this already with VSCode Remote Containers[0], Visual Studio Online[1] and VSCode Live Share[2].

(Disclosure: I work at MSFT but not on these technologies.)

[0] https://code.visualstudio.com/docs/remote/containers

[1] https://code.visualstudio.com/docs/remote/vsonline

[2] https://visualstudio.microsoft.com/services/live-share/
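For anyone who hasn't seen it: the Remote Containers flow is driven by a devcontainer file checked into the repo. A minimal sketch, where the image, command, and port are illustrative placeholders rather than anything from this thread:

```json
// .devcontainer/devcontainer.json (VSCode accepts comments in this file)
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements.txt",
  "forwardPorts": [8000]
}
```

With this in place, "Reopen in Container" gives every contributor the same environment, which is most of the "Open in VSCode button" dream described above.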

We've made something similar at Repl.it -- we call it "Repl from Repo". It's far from running everything but it can run a lot: https://repl.it/blog/github

We tried to make it as seamless and as low configuration as possible. Our "run on repl.it" badge is currently growing exponentially on GitHub.

Great work on this. Is it open source? Or is there a way to use my own hardware? If not, then it seems like gitpod.io[0] is a much better fit for the open source ecosystem. That said, there's definitely a place for proprietary software in this space (I'm a Glitch user and I love it).

[0] https://www.gitpod.io/

Many of the component pieces are open-source, as I mention in this comment: https://news.ycombinator.com/item?id=22278301

And the goal is to eventually open-source most, if not all, of it.

Is GitPod fully open-source? AFAICT you still need to buy a license to self-host.

Gitpod is open-source at its core, i.e. we made the IDE completely open-source (https://theia-ide.org). I didn't know Repl.it is open-source. Can you share a pointer?

What you are talking about exists, and it's fantastic:


Nix is the closest I've come to this dream.

I like Docker, but it's too resource-heavy on a Mac. I need it running directly, not in a VM.

So a simple nix file and nix-shell shebangs[1] have been life-changing. Throw in --pure and now you really know the environment works everywhere.

My repos all have a shell.nix with every dependency. Just call nix-shell and it works in Linux and OSX. Anywhere.

You can even bundle it into a docker image and use nix in there too. For production, etc.

Documentation and tutorials are lacking, but if you put in some time, I've found nothing comparable.

Version it in, pin the Nix channel[2], and you can come back a year later on a brand-new machine and start where you left off.

[1] https://gist.github.com/travisbhartwell/f972aab227306edfcfea

[2] Highly counterintuitive: if you don't pin a Nix channel, it's a moving target and dependencies may go missing, be updated, etc.
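The shell.nix-plus-pinning setup described above can be sketched like this; the nixpkgs snapshot URL and the packages listed are illustrative, not from the original comment:

```nix
# shell.nix -- minimal sketch; snapshot ref and packages are illustrative
let
  # Pin nixpkgs to a fixed snapshot instead of the moving <nixpkgs> channel,
  # so the environment is reproducible a year later on a fresh machine.
  pkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-23.11.tar.gz") {};
in
pkgs.mkShell {
  buildInputs = [ pkgs.python3 pkgs.nodejs pkgs.git ];
}
```

Running `nix-shell --pure` in the repo then drops you into an environment containing exactly those dependencies, regardless of what is installed on the host.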

Agree that Nix is a great candidate. The PhD thesis about it is quite readable and demonstrates the rigour with which the author approaches this problem space.

It’s pretty difficult to understand and use though.

Nix has pretty awful UX for sure. It's kind of a trial by fire to learn it.

But once you do, you can build abstractions to improve that UX. Still, it would be nice if vanilla Nix were easier.

Nix was a huge innovation in operating systems. The wider community has not caught up yet, but it will.

repl.it is a nice take on this problem:


I was impressed by the low latency (at least as of a few months ago). If you make a web app right, it can be faster than a resource-hogging local IDE!

I don't use it myself every day because I've already been using vim/screen/bash for 15+ years. Those are my lightweight REPLs.

But if I were advising someone learning to code, and they wanted to skip the dev env, I would probably suggest repl.it. They also have impressive concurrent editing facilities in any language.


I also think the question is malformed, because I would hesitate to say there's any "single problem" in software development. It's all death by a thousand paper cuts.

The "dev env" problem is real, but it's really a problem with 100+ different tools. It's an architecture/ecosystem problem.

repl.it has some creative solutions to these problems, though it's still a ton of work. I think they are doing great but it's not going to solve all problems.

For example, another popular dev env is for data science or machine learning researchers. RStudio is doing some good stuff there, and there's Google colab, etc.

Software development is very big and diverse, and it's hard to imagine one solution cutting across all of it.

Thanks so much for the mention. It's indeed a lot of work, but we've done so much with such a small team, and it's only accelerating as we layer all of our solutions on top of each other. We're solving one problem at a time, with a focus on patterns and abstractions that make it easily extensible and able to support virtually any language.

With the Universal Package Manager (https://github.com/replit/upm) we're trying to encode best practices in package management behind a single easy-to-use interface. One of the most fun features is that you can `import` or `require` a package and we'll just guess what you want, install the dependencies for you, and generate a spec and lock file.

With prybar (https://github.com/replit/prybar) we're trying to create a universal interactive programming experience that behaves roughly the same for every language.

And, to be open-sourced soon, with our window-tiling manager and workspace (https://repl.it/blog/ide) we're trying to build a framework that makes complex, plugin-based environments, such as IDEs, very easy to build.

We also leverage a lot of awesome open-source projects and standards, like the Language Server Protocol by VSCode/Microsoft, that abstract over language features and make it easy to provide an amazing experience across languages.

Edit: if it's been a few months since you tried Repl.it, give it another try, we've made a lot of progress since then. Multiplayer is now out of beta, we have Git integration, and we've done a lot to make it faster and more reliable.

Have you tried using VSCode's remote development extension?

In a nutshell: your repo contains a file (or directory) with everything needed to set up a containerized run environment for the project. VSCode adds its own server daemon, and your IDE runs half inside the container, half on the host machine. Once it's set up, everyone on the project (who has VSCode) instantly has a one-click, clean, working development environment, including all the niceness you expect from local development (debugger, test integration, etc). It is fucking magical.

Detail: https://www.hanselman.com/blog/VisualStudioCodeRemoteDevelop...

I thought about creating an AMI for AWS that uses code-server instead of Cloud9.

There's a great tool by cdr that uses SSH to prepare and launch a VSCode environment. Works great in conjunction with AWS EC2 tools like the awscli.


Right, this is something like what VSCode has built in, except that rather than hosting all of VSCode on the remote server, it splits itself into a thin client on your host and a server backend on the remote machine. You can use SSH, or (my preference) a local container.

And then after adopting Docker, sometimes you need to figure out how to get your debugger and IDE to talk to the process inside the Docker container, which has varying levels of success depending on the language.

I've been working on this for the last 3 years and have at least solved the problem for myself. The crux is that there are about 100 programming languages, each with a few frameworks; a couple of database solutions, with at least a dozen different versions of frameworks and databases used in parallel for app maintenance; and a few different platforms to run on. And on top of that, developers like to customize their development environment.

It’s absolutely tragic that LSP isn’t meant to work remotely, because the actual editing and code intelligence interface really needs to be local. So practically speaking the whole dev environment has to be local. I’ve used rsync type solutions for both AWS and local VMs (docker for Mac), and in the end it was always better to just get the thing running natively.

I'm interested in what features and integrations (possibly to be included in the default install) Emacs and Vim need to compete with IDEs. A package for major OSs that contains Emacs + various packages, gcc, autotools, and make, all set up to run out of the box, would be great.

Also, at least on Mac, docker can be a major resource hog.

Pretty sure it just runs in a normal VM when not on Linux.

Close. Docker for Mac runs in a paravirtual environment using Hyperkit, which uses Apple's Hypervisor.framework. At least that was true a couple years ago. A couple years before that, Docker Machine and boot2docker ran on hypervisor of choice (e.g. Virtualbox).

If you get a chance, Hyperkit can be fun to play with. I was able to write a bash script that downloads Hyperkit and a Debian ISO, and from that boots a running instance of Debian. It was an interesting exercise.

That’s what I like about working in a .net shop, it goes like this:

1) install windows 10

2) install visual studio

And I’m good to go. I’ve seen my friends from the uni setting up elaborate arch/vim contraptions and I’m kinda glad I don’t have to.

Visual Studio works really well with C++ and .net languages. The defaults are fairly reasonable. Of course, if the defaults don't suffice, it's back again to painland.

dotfiles help a lot here, they should have them for more things

npm & git's local directory awareness is a lifesaver

rvm / nvm are great when the shell integration is turned on, direnv is wonderful

gap seems to be with native packages -- there isn't one of these for brew / apt-get, and language-level package managers fail on these deps a lot

inability to map local hostnames to docker is also a problem -- that would make it easier to manage multiple environments at once
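The direnv piece mentioned above hinges on a per-directory `.envrc`; a minimal sketch, where the variables and paths are illustrative:

```shell
# .envrc -- loaded automatically on cd, after a one-time `direnv allow`
export PATH="$PWD/node_modules/.bin:$PATH"    # prefer project-local binaries
export DATABASE_URL="postgres://localhost/myapp_dev"
```

When you cd out of the directory, direnv unloads these again, so per-project settings don't leak between environments.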

Why are you setting up new dev environments all the time? I have my dotfiles on github and they're always ready to go on any new Linux system I happen to be on. I have to initially set up new tools, of course, but once it's done it's done.

I was looking into offerings like Glitch lately.

Codesandbox had better UX than Glitch, at least for my taste.

And let's not even get started on docker for mac volume mounts. My linux machine (that's not allowed anymore) is literally 10x faster for local dev.

My development environment is

    pacman -S qtcreator clang qt5 cmake ninja git hotspot heaptrack 
I can start pissing apps 10 minutes after a clean install of my system... what do you guys do that makes it so hard?!

A constantly cycling proliferation of different languages, frameworks, libraries, etc. that all do the same things in different ways, are most often mutually incompatible with each other, and have entirely different ecosystems with their own comparative advantages but also major pitfalls. This causes tech workers' investment in skills to reward shallow knowledge and trivia over deeper concepts, creates silos of employment opportunity based on the trivial knowledge workers happen to have, and hinders the ability of the software engineering field as a whole to maintain a large pool of shared knowledge and to develop and evolve stable, relatively timeless systems and tools of high quality, both as end products and as intermediate tooling toward those ends.

Learning new skills is easy enough, what kills me is the enormous manpower invested in churning working code.

Easily 30% of my company's total engineering effort is just treading water: migrating from a deprecated platform to another one that will be deprecated by the time the migration is complete. Typically because the original team's standard 18-month tenure has elapsed and the new guy was under-leveled at hiring, so he needs impact for promotion.

It's a great disservice to our field that people so deep in the stack are so comfortable changing their minds all the time. The Python 2.7 thing feels like the Library of Alexandria. Burning down mountains of perfectly good working code just because we can.

Backwards compatibility is tragically underrated.

Backwards compatibility causes a lot of evil. I would prefer that breaking changes come with automated tools to migrate. Sometimes bad decisions get made, and we shouldn't have to carry the burden of that forever out of laziness/stubbornness. What ends up happening is the opposite of what you wanted, because after enough backwards compatibility debt adds up, someone starts a fresh competitor that takes over.

> I would prefer that breaking changes come with automated tools to migrate.

The hard part of the migration isn't that you can't automate it. The hard part is that you can't automate verifying that it didn't break anything. So you'd rather stick with what you have.

> Sometimes bad decisions get made, and we shouldn't have to carry the burden of that forever out of laziness/stubbornness.

Here's the deal, Mr. Developer who wants to change everything because the old stuff is somehow bad and the new stuff surely is better: I almost certainly have better use for the money than to spend it on migrating the code. The old code will keep working. That's not laziness, it's prudence.

Developers always exaggerate the benefit of rewriting stuff and changing things around. I understand why, you'd rather arrange the code you work with to your taste than whatever horrible stuff is there already. The problem is that every other developer feels the same way, but with a different taste.

For instance, some people hate object oriented programming, some people think procedural programming is somehow bad, some people believe functional programming is generally good. I believe the best programming is the one you can just stop arguing about and solve the damn problem with, sooner rather than later.

> What ends up happening is the opposite of what you wanted, because after enough backwards compatibility debt adds up, someone starts a fresh competitor that takes over.

That's not true at all in the vast majority of business cases. The benefit of the migration would have to be so spectacularly high that it offsets its cost. Again, this almost never happens.

Whenever you break compatibility, you burn good will. Whenever you break compatibility, you give me an opportunity to switch to your competitor. If I have to migrate, I might as well migrate to something else.

When code is hard to maintain, extend, or debug we should absolutely refactor it. Even if something is fine but can be expressed better using a feature from the language's new backwards-compatible release, go for it.

The tragedy is mandatory, all-out migration for code that was already high quality, and just happened to be written for an older stack.

All code is "hard to maintain, extend and debug" to the person who didn't write it and therefore doesn't like working with it. Developers like to exaggerate this all the time, because they want to maximize their own comfort.

Even if developers were able to objectively assess that code is hard to maintain/extend/debug, that doesn't mean changing it is the right business decision. It may or may not be. From a pure efficiency standpoint, it may well be a net loss. From a developer psychology standpoint, it may be worthwhile.

You also can do a lot of refactoring without breaking a lot of code, especially if you didn't buy into the idea of writing tiny functions and unit tests for every piece of functionality.

If you’re stuck working with people who think they are the only ones ever to have written good code, I’m sorry. That must be unpleasant. But please don’t take that out on everyone else who merely gives a shit about the quality of their work.

Most of the code I refactor is my own from a few years ago, btw. Usually because I learned new information, past choices turned out to have regrettable unforeseen consequences, etc.

But we have learned to be really careful about public APIs. Implementations may change, interfaces are forever.

> If you’re stuck working with people who think they are the only ones ever to have written good code, I’m sorry.

I didn't say that.

> Usually because I learned new information, past choices turned out to have regrettable unforeseen consequences, etc.

Indeed, that "awful" code may have been written by the same developer from one or two years ago and now they're not comfortable with it anymore either.

Of course I'm being a bit hyperbolic, but what I'm saying is basically true and applies to pretty much everyone.

It's not all bad either, without some push towards renewal, we would be stuck with old ideas forever. The key point is that permanent renewal has a cost that the business must carry. Sometimes no renewal at all is the right business decision.

It doesn't matter if learning is easy. Having to waste time because the language-of-the-week arrived, multiplied across N devs, is still a huge waste of time.

I was thinking about the reasons why this happens, and I see two major factors:

1) Tension between abstraction and optimization. To put it shortly, abstraction is about ignoring the details, optimization is about fine-tuning the details; you can't do both at the same time. Which is why different programming languages make different kinds of compromise. You could make a beautiful language or framework with elegant abstractions, then look at the performance and cry. Or optimize for performance, and then cry while reading and debugging the code.

2) Tension between mathematical elegance and the cost of hiring people who are great at math. Some developers care about having the code elegant from mathematical perspective, but the companies optimize for having a product to sell, as cheaply as possible, which involves a lot of cutting corners. A product full of bugs can still make you millions of dollars. And what's the point of having a mathematical proof of correctness of your current code, when the business requirements are going to change tomorrow anyway.

On the tension between abstraction and optimization: Red, a language that claims to be full-stack, from hardware drivers to high-level GUI programming. It can be used for performant systems programming as well as high-level functional programming and metaprogramming.

For people interested in this Rebol-inspired language: https://www.red-lang.org/p/about.html?m=1

I am not familiar with this language, so I don't know how (or whether at all) it addresses the problem I was thinking about. Let me try to explain it:

Let's take the simple concept of "array" or "list". Mathematically speaking, it's a very simple concept: integer numbers are mapped to objects.

But there are so many ways to implement this mathematical abstraction. Do you assume the integer numbers will be used in a sequence, like 1, 2, 3, 4, 5; or completely arbitrarily with possibly large gaps, like 1, 1000, 1001, 5000? Will it be accessed from different threads? Will it have a "create phase" when it is constructed in a single thread, followed by a "use phase" when it is accessed from multiple threads but read only? Will it be modified frequently, or rarely, compared to mere reading? Will you need to make copies of it for further independent modification? Etc.
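For instance, just the "gaps" question above already changes which representation is right. A small Python sketch of the trade-off (the key values here are illustrative):

```python
# The same abstract "integer -> object" mapping, two implementations.
# With sparse keys (1, 1000, 5000) a dense list allocates every slot in
# between, while a dict stores only the entries that actually exist.

entries = {1: "a", 1000: "b", 5000: "c"}

# Dense implementation: pre-size a list and index into it directly.
dense = [None] * 5001
for k, v in entries.items():
    dense[k] = v

# Sparse implementation: a hash map keyed by the same integers.
sparse = dict(entries)

assert dense[1000] == sparse[1000] == "b"       # same abstraction...
assert len(dense) == 5001 and len(sparse) == 3  # ...very different costs
```

Both satisfy the mathematical definition; which one is "right" depends entirely on the usage questions above.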

Now, one possible way is to choose one specific answer, and make the standard "list" in your language mean exactly that. For some purposes it will be okay, for other purposes it will suck, and someone will create a library providing an alternative implementation; and users will complain why they need an extra library for something that should have been part of the language.

Or perhaps you will provide multiple implementations, and users will have to choose. And they will complain about this being too complicated.

You might also spend years trying to find one perfect implementation of "list", that will fare relatively well under all circumstances (never the best one, but also never the worst one).

It would be nice to have a language where the programmer wouldn't have to worry about performance of the underlying data types. Just use them as mathematical abstractions, and everything will work fine. Like, you would still have to worry about algorithmic complexity of your code, but you would not have to worry about accidentally using the existing stuff in a wrong way which is not obviously wrong (and would not be wrong for a different implementation).

This is different from merely allowing people to use both high-level and low-level concepts in the same language. This is about how to implement the language in a way that allows you to do high-level as much as possible, without suffering terrible performance consequences. And I don't mean consequences like "this will be 100 times slower", but rather "this implementation of list, when used in this specific way, will actually have exponential complexity where some other implementation would have been polynomial". Because ultimately each implementation has a weakness, and one must be chosen. And if you treat other pieces of code as black boxes, it means you never know whether you made a good choice.
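The point about complexity blowing up under one implementation but not another can be made concrete in a few lines of Python; the prepend-heavy workload and the function names are illustrative:

```python
from collections import deque

def prepend_all_list(n):
    """Prepend 0..n-1 to a plain list: each insert shifts every
    existing element, so the whole loop is O(n^2)."""
    xs = []
    for i in range(n):
        xs.insert(0, i)
    return xs

def prepend_all_deque(n):
    """Same abstract operation on a deque: appendleft is O(1),
    so the whole loop is O(n)."""
    xs = deque()
    for i in range(n):
        xs.appendleft(i)
    return list(xs)

# Identical results; wildly different scaling behavior.
assert prepend_all_list(1000) == prepend_all_deque(1000)
```

If the sequence is a black box, the caller has no way to know which of these cost profiles they bought into.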

You're right, it's not just about the language, but also the built-ins, the ecosystem, and the foreign interface.

Never mind all that: even starting arrays at 0 or 1 will break the momentum of the mightiest dev waves.

And conversely, the problematic consequence and cause of all that: Resume Driven Development.

You don't actually have to invest in these new shiny ecosystems. It's just a choice you made. There are still people making a living on Fortran, C, C++03, Java <5, Angular. All of that still works on modern hardware and will for the next 100 years. You just probably won't be in a coastal place with pool tables and free food.

I think this is a manifestation of a bigger issue: C, C++, JS.

This happened because our foundations are shaky, and we're required to layer some sanity on top. But nothing can work, because this industry will not let go of the old and bad (C, C++, JS, Bash, etc.) and refuses to make a significant change.

Too much change is bad.

Too little change is bad.

Having both at odds in a fierce battle is worse.

Totally agree, this is a problem I also see plaguing our industry, and solving it is indeed a priority. We developers or engineers like wasting our time in abstractions, i.e. virtual worlds of isolated castles and babel towers of different languages and expertise. In such virtual worlds we can create and fantasize on all the abstractions we want, be the expert, play god, decide on which constraints matter, and build a world based on that. Such is the power of the non material software world indeed. Who does not want to be god? To caricature I'd say a significant part of the IT industry is about writing virtual entertainment: video games to be sold to the wider public, and toys developers and engineers can play with: languages, IDEs, ecosystems, frameworks, etc.

Earlier in this thread I wrote about data being the top priority [1]: the raw material. We rarely give data the primary focus it deserves and instead focus too much on the processing side.

More concretely: dashboards showing instrumented processing clusters give a biased view that does not focus on what matters at the end of the day. What we also need are dashboards showing data flowing between sinks and sources, data quantity, data quality, etc. Sure, resource utilization and efficiency matter, but only after we can validate that we still have the right output, and that input is of proper quality. If output contains garbage, is it bad processing or is it garbage from the input? And if something is wrong with processing, do we know the impact downstream? In other words, instrumentation should include data sensors, not just processing sensors: data counters, validation points, invariants, etc.

Because at the end of the day, when the power goes off, do you know what's left to recover? Do you prefer your customers telling you about an unfulfilled order, or would you rather it were detected earlier? If you get audited for GDPR, do you have a map of your sensitive data? In terms of security, is it about protecting clusters and containers, or is it about protecting the data?

Once you get the data side right, many things become simpler; but if you get it wrong, as we often do, we create a world of problems. Giving data its proper place in our engineering practices will certainly change our industry for the better and bring it closer to "reality", with less danger of veering into the virtual for the sake of it.

In a world where software is increasingly involved in human activities this would have a great impact. However I don't think we should stop there, as I believe this is part of a larger trend I'm concerned about: the idea that, not just in software development but in most human endeavors, we're increasingly favoring spending time in virtual/man-made spaces and activities, at the expense of the real world, the place and time we're at, nature and the environment. As if we want to escape the physical conditions we're in: whether it's our body, our environment, society, the work we do, etc. When one can't see a way to influence the real world a tendency would be to start operating in a virtual one where we get the illusion to have an effect, make some money, be an expert, etc. Oh and let's not forget this desire to put as much tech between us and the real world, as if we don't want to experience it directly, it's too icky, and instead need devices to offer an indirect perception: wearable tech, navigation by GPS, remote controlling tractors in vast industrialized agricultural fields, etc.

A friend working at a supermarket chain told me this story: he often advised a younger manager to consider better maintenance procedures for their A/C system, but to no avail. Now that my friend is retiring soon, this younger manager is proposing to make an Excel chart to track energy consumption in order to optimize setpoints, and is asking my friend for his approval. Really? Approve limiting perception to an Excel chart? Isn't that the same as leaving the windows open and wanting to change the setpoint?

[1] https://news.ycombinator.com/item?id=22277875
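For concreteness, a "data sensor" in the sense above could be as simple as a counter plus an invariant check wrapped around a pipeline boundary. A minimal sketch, with all names (`makeSensor`, `validOrder`) made up for illustration:

```javascript
// Sketch of a "data sensor" at a pipeline boundary: count what flows
// through and check a simple invariant. makeSensor and validOrder are
// made-up names for illustration.
function makeSensor(name, isValid) {
  const stats = { name, seen: 0, invalid: 0 };
  return {
    check(record) {
      stats.seen += 1;
      if (!isValid(record)) stats.invalid += 1;
      return record; // pass-through, so it can sit inline in a pipeline
    },
    report: () => ({ ...stats }),
  };
}

// Example invariant: an order needs a customer id and a positive quantity.
const validOrder = (o) => o != null && o.customerId != null && o.quantity > 0;

const inputSensor = makeSensor("orders-in", validOrder);
const orders = [
  { customerId: 1, quantity: 2 },
  { customerId: null, quantity: 5 }, // garbage arriving at the input
];
orders.forEach((o) => inputSensor.check(o));
console.log(inputSensor.report()); // { name: 'orders-in', seen: 2, invalid: 1 }
```

The point isn't the code; it's that the boundary now reports what crossed it, so garbage at the input can be distinguished from bad processing.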

Do you work in software? It's making new things with new things that makes software makers better at what they do.

Sounds nice. And idealistic. The reality is that, at this point, the newer thing is often not much better, or may just be a step sideways rather than forward, and therefore does not replace anything but rather siphons off a portion of the investment from the pool of alternatives around it. The benefits tend not to justify the costs of skill fragmentation and the time and effort needed to learn them. More concretely, I'm not talking about jumping from C to Java. I'm talking about things like the thousand different web languages that don't need to exist at all.

We need a faster web framework that generates HTML on mobile phones with no JS on the main thread.

The web is the "single top-priority" software platform, but it's in big, big trouble.

On mobile, users spend less than 7% of their time on the web. https://vimeo.com/364402896 All of the rest of their time is in native apps, where big corporations decide what you are and aren't allowed to do.

As a result, the money is going to native apps. The ad money is going there, the dev time is going there, and the mobile-web developer ecosystem is in peril.

The biggest reason people use native apps instead of mobile web apps is performance. Developers design web apps for fast desktop CPUs on fast WiFi data connections, and test their sites on top-of-the-line smartphones costing 5x-10x as much as the cheap smartphones people actually carry around.

Web developers have to solve this performance problem the way we've always solved our problems: with a new framework. ;-)

But specifically we need a framework designed to generate HTML with no JS at all, and designed to run in a Service Worker, which is a little web server that runs directly on the user's phone.

This style of app is often called a "Progressive Web App," and there are plenty of frameworks that support PWAs, but they generate PWAs on top of a single-page app framework that downloads megabytes of JavaScript running on the main thread. PWA is an afterthought for most frameworks, but we need it to be the centerpiece of the design.
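To make the idea concrete, here is a minimal sketch of what "HTML generated in a Service Worker" could look like: a pure template function renders the page as a string, and the worker's fetch handler serves it directly on the device. The `renderPage` function and the `/inbox` route are hypothetical, not taken from any existing framework:

```javascript
// Sketch: render HTML as plain strings in a Service Worker, so the page
// that reaches the main thread carries no framework JS at all.
// renderPage and the /inbox route are hypothetical.
function renderPage(title, items) {
  const list = items.map((item) => `<li>${item}</li>`).join("");
  return `<!doctype html><html><head><title>${title}</title></head>` +
    `<body><h1>${title}</h1><ul>${list}</ul></body></html>`;
}

// The worker part (only runs inside an actual Service Worker context):
if (typeof self !== "undefined" && typeof window === "undefined" &&
    typeof self.addEventListener === "function") {
  self.addEventListener("fetch", (event) => {
    const url = new URL(event.request.url);
    if (url.pathname === "/inbox") {
      event.respondWith(new Response(renderPage("Inbox", ["hello", "world"]), {
        headers: { "Content-Type": "text/html" },
      }));
    }
  });
}
```

Since the template is a pure function of its inputs, the same code could also run on a server for the first visit, then move into the worker for everything after that.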

> On mobile, users spend less than 7% of their time on the web. https://vimeo.com/364402896 All of the rest of their time is in native apps, where big corporations decide what you are and aren't allowed to do.

And the web isn't controlled by big corporations? You literally just linked to a website on the web owned by a big publicly-traded corporation.

I am in favor of using the web over apps, but mostly because it has fewer tracking features. A higher-performance, bare-metal implementation of the web would most likely give websites more stuff to fingerprint you with. You want WASM, but WASM makes the web just as bad as apps in that respect. WASM is going to make it impossible to hide ads (because they will be painted without the DOM) or to block tracking or otherwise malicious code (because it will be heavily obfuscated).

>A higher-performance, bare-metal implementation of the web would most likely give websites more stuff to fingerprint you with. You want WASM, but WASM makes the web just as bad as apps in that respect. WASM is going to make it impossible to hide ads (because they will be painted without the DOM) or to block tracking or otherwise malicious code (because it will be heavily obfuscated).

This is true, but what's the alternative? That just seems like a necessary trade-off. Native code will always be easier to obfuscate. It seems backwards to think that we should keep things far more inefficient and consume more cycles and electricity and place things behind more layers of indirection and make web use slower for users just so that it's harder to hide malware. This reminds me of the argument that we shouldn't make cryptography too strong because then it could be used by criminals in a way that even intelligence and law enforcement agencies can't pierce.

There's already a ton of tracking going on, and typically already with heavy obfuscation. The obfuscation doesn't seem to make a difference in terms of practical solutions either way, since the detection and blocking is generally based on origins and IP addresses rather than static (or even dynamic) analysis. And a lot of ad blocking does the same, and should still work for WASM.

For cases where the ads are served directly by the origin and are painted without a DOM, more clever mitigations will be needed, but I don't doubt that people will come up with solutions.

So, yes, the cat-and-mouse game is going to become easier for ad/tracking companies and harder for anti-ad/tracking developers, but it's going to be a big challenge for both parties, just like it is now, and the anti-ad/tracking side is still going to have a lot of success.

You're right to raise apps as a threat to the open web. And about performance being a factor (WASM could help there).

However, the real reason that the ad money (and engineering effort) is going to apps is that app platforms do not protect privacy at the same level that the open web does. There's so much more tracking available on mobile platforms than there is on the open web.

I don't think I agree. The privacy protections are different between web and native apps. On the web, you can track users passively with cookies and various fingerprinting techniques, and there are numerous ways for third parties to communicate and share tracking info. In native apps, you can use the "advertising ID" (IDFA on iOS, AdvertisingId on Android), but the advertising ID is designed to change from time to time and can even be restricted by savvy, privacy-sensitive users, and communication between third-party apps is more restricted. Privacy is mostly a fiasco in both cases, IMO.

There already exists such a framework. I like to call it "vanilla": no frameworks at all. Due to the stability and backwards compatibility of the web platform, a "vanilla" web app doesn't just give you 10x the performance; it's also much easier to maintain. The trick to a vanilla web app is to not write any XML (i.e. ban innerHTML and jQuery) and to not store state in the DOM. You can use WebSockets for live updates, with event listeners and self-mutating components (written as pure JS functions). Data is synced between devices, and the app can be started and used offline.
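A minimal sketch of that pattern, with a hypothetical counter widget: state lives in a plain object, the component is a pure function from state to what it displays, and the DOM node is mutated directly on update (no innerHTML, no state in the DOM). All names here are illustrative:

```javascript
// Sketch of a self-mutating component as a pure function: state in a plain
// object, rendering derived from state, the DOM node mutated directly.
// counterLabel / increment / render are illustrative names.
const state = { count: 0 };

const counterLabel = (s) => `clicked ${s.count} times`;

function render() {
  // In a browser, mutate the existing node; no innerHTML, no state in the DOM.
  if (typeof document !== "undefined") {
    document.getElementById("counter").textContent = counterLabel(state);
  }
}

function increment() {
  state.count += 1;
  render();
}

// In a browser you'd wire it up once:
// document.getElementById("counter").addEventListener("click", increment);
increment();
increment();
console.log(counterLabel(state)); // clicked 2 times
```

Because the label is derived from state rather than read back out of the DOM, the same state object can be serialized for offline use or synced over a WebSocket without touching the rendering code.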

I am curious about your approach, as I also write my web apps in vanilla JS. Where do you store the state or save the data? For me, I use localStorage for simple updates and a network fetch for everything else. For the data format, I use JSON all the way, rather than handling others, e.g. FormData or key/value pairs.

Svelte comes close, and stencil.js looks to have a focus on PWAs. I've played a bit with Svelte and want to try Stencil.

My one issue with PWAs is that iOS doesn't support all the features.

100% agree with this. Another thing that users have to deal with on the web, but not on apps: Endless banners and popups telling you about useless things like cookies, emails subscription, notifications, gimme-your-location, "download our app", etc.

Rest assured, app vendors are working diligently to close the asshat gap with their cousins in the Web community. Open an app nowadays, you can expect a flood of popups, unsolicited notifications, and even ads that you thought you paid to remove.

How do you feel about JS frameworks like NextJS [1], Razzle [2], and Gatsby [3]? They try to reduce bundle size and the time to first paint and first interaction.

[1] https://nextjs.org/

[2] https://github.com/jaredpalmer/razzle

[3] https://www.gatsbyjs.org/

I'm not very familiar with Razzle, but Next and Gatsby are examples of what I was referring to in the last paragraph, SPA frameworks that have a PWA mode.


Gatsby has a no-JS plugin https://www.gatsbyjs.org/packages/gatsby-plugin-no-javascrip... but that also disables the PWA. The Gatsby team doesn't seem to be interested in no-JS mode. https://www.gatsbyjs.org/blog/2020-01-30-why-gatsby-is-bette...


I do miss that brief window when sites would have mobile-friendly views where they would give you a de-shitted basic HTML-only view if they detected you were using a mobile client. Then 4G happened, and certain markets got wireless data that was on par with what most desktops got through their cable internet, and it was over.

You should follow Surma. He's doing interesting work in this area (moving stuff off the main thread). https://twitter.com/DasSurma

JS isn't the bottleneck, the DOM is. A new framework isn't going to solve that.

What kind of alternatives exist to the DOM? Even conceptual ones?

Directly drawing on the hardware graphics pipeline with WebAssembly, having identical performance to a native application

> with no JS on the main thread


Why HTML whatsoever?

Because browsers. And because HTML isn't the issue, it's main thread JS.

What matters most is what you want to still be there when the power goes off: data.

No amount of processing power matters if you don't have the data.

Everyone in the industry focuses too much on the processing side: objects, functions, containers, VMs, k8s, etc., but nobody really gives proper attention to data: its provenance, where it stays, and where it goes. I'm not saying engineers don't think about these things; they obviously have to at some point. It's just that data is always accessory to the story. It's like processing is the cool kid and data is the stinky one nobody wants to approach unless they have to. Look at the 12-factor principles, for example: where is data in there? How easy is it to take data from one place/cloud/database to another? Data is the raw material; it needs to be the primary concern in programming languages and architectures, not objects or functions, containers or whatever. Those come after, not first.

SQL and its continued success is a testament to how valuable this approach can be. Could be something here though, as I have found maintaining databases and datasets in a sane way to be difficult, even today.

The problems around data are in general harder to address than the processing part. Most people dabbling in software engineering don't have the skills or attention span to work on those, and most of the issues are already solved by extremely complicated systems (e.g. DBs, Apache projects, ...).

As for 12 factor principles, it was first touted by a PaaS provider. The whole idea is that they take care of the nitty gritty, while you can focus on the exciting (and technically easier) parts of software systems.

I'd be tempted to say something similar: data is harder because it sticks, in the sense that it ties us back to the physical world because it's always tied to a place somewhere, it has to be moved, copied, synchronised, etc.

However when I look at the accidental complexity we have created on the processing side, I wonder if it does not surpass the essential complexity of dealing with data.

In other words: maybe we have yet to create the proper concepts and tools to deal with data, all this time the industry created a tower of babel (and of overspent $$$) with our languages, frameworks, containers, etc.

As someone in the hardware world where CPUs and fancy accelerators are all the rage, it's the same story when it comes to thinking about non-volatile memory (AKA NAND, Optane, PMem, etc.)

I would add the problem of long-term archiving, both the physical medium and the file formats, to this. Will my JPEGs be accessible 50 years down the line? If yes, who will remember, and have the equipment, to read a 100 GB Blu-ray M-DISC?

A piece of technology that seems to be missing is a global decentralised anonymous identity and trust framework. It probably requires a leap of engineering comparable to the invention of the blockchain (although I don't think that a blockchain is necessarily the model to follow here, since the data should be encrypted).

Every website and app and network service seems to be reinventing the wheel here, and end up creating little silos of trust data, whereas in principle it should be possible to receive a (locally) consistent answer to the question of "Is this person in good standing with the other humans they interact with?" whether that person is sending you an email, or writing a software library you are downloading, or creating an account on your website, or selling you goods on Ebay, or offering you a lift through a ride sharing app.

I was talking about this with a friend the other day. I don't know about the technical feasibility (RE sybil attack[0]), but if it were possible, it would have a massive impact on the web. The current state of the art (in terms of new user signup) is using phone numbers as representing unique "trustable" people, which is kind of absurd.

This reminds me of how social security numbers weren't originally intended to be used as unique identification, but the demand for some form of identification was so strong that organisations ended up hackily using it anyway.[1]

[0] https://en.wikipedia.org/wiki/Sybil_attack

[1] https://www.youtube.com/watch?v=Erp8IAUouus

Just post a bond that is forfeited if you engage in verifiable abuse, the proceeds of which are used to compensate the victims (if applicable). Use pseudonymous identity to link any number of site-specific "identities" to that same initial posting. Real-name identity can then be optional (although some sites may still insist on it), but users in good standing are protected from "sybil" attacks because each entirely-new user requires posting a separate bond, so the cost quickly becomes infeasible.

A naïve reading of your solution implies that the poor wouldn't be able to put up such a bond and would therefore be excluded from this techno-utopia, while the rich would be able to create Sybils and game systems all day long.

Posting bonds might be part of a solution, but there is still a question of who gets to decide whether or not something is abuse (or who is a victim, for that matter). The closest I've seen to this sort of system is OpenBazaar's use of "proof of burn":


> but there is still a question of who gets to decide whether or not something is abuse (or who is a victim, for that matter)

In principle, all you need is a trusted arbitrator that's acceptable to all involved parties. This is how "multiple signatures" work on Bitcoin already; the third-party escrow can decide who's going to keep the coins by adding her signature to either party's claim.

This doesn't solve the problem of identity. You still would need some way of differentiating the accounts with the bond, or else I can just sybil the system by having a lot of money and a really good way of impersonating people.

Hot take, but the only real way to solve this identity problem is to take people's DNA. The only attack on that is to literally synthesize fake DNA/fake hair/fake saliva. Even then you can prompt randomly for DNA the way Twitter randomly prompts for phone number verification. Or ask the user for a selfie and spot inconsistencies in the mapping from DNA to face.

It's scary to let internet companies have your actual DNA (though that didn't stop 23andMe customers), so there could be a layer in between (a nonprofit? a machine with a hardware security module?) that does the DNA sequencing and returns a digital signature to authenticate you.

The obvious downside is that it would work too well. Banning becomes much more serious of a thing when it's lifelong and potentially could affect your descendants. I hope I'm not giving anyone any ideas because this is horrible.

Even DNA wouldn't work because of weird shit like chimerism. Our classic assumptions about this stuff just don't hold in reality.

Trust cannot be decentralized. The whole idea of blockchain/Bitcoin[1] was eliminating trust from a process that would have required trust (keeping track of a ledger of funds). A system that provides trust in a trustless way is as paradoxical as it sounds. However you implement it, what some humans say have to serve as inputs to the system, and in reality that's who you're trusting.

The best you can do is have just one central reference store of trust/reputation. China's social credit score is a working implementation of this. I don't think the West will ever be ready for that though, so you're going to have to deal with the fragmentation.

[1] The innovation was requiring a computationally difficult problem to be solved to participate in the network and using hash functions to prevent tampering. This made any participant (miner) in the Bitcoin network a dumb cog in the machine that could be replaced by another. The people aren't special; the computational work is special.

I completely agree with what you said, however I think there would be some very good value in simply keeping track of online relationships. I think the "web of trust" is actually very useful, if misnamed. Let's say I go to my online bank. I have a problem and I want to talk to the manager. It would be really great if I could have an easy way of determining if the person who is speaking to me is actually associated with the website I'm thinking of. I don't need to know their name. I don't need to know if they have a shady past or not. I just need to know that the somebody the website is going to let me talk to is the same somebody I'm talking to. Right now you can kind of jury rig something, but it's really difficult, error prone and easy to game socially. I work in the travel industry at the moment and even something as simple as being able to flag that a former salesperson no longer works for the company would be huge.

Sovrin[0] is a decentralized anonymous identity network that may be of interest.

[0] https://sovrin.org

Very interesting, thank you. To give a little more detail, I will share a quote from their whitepaper:

'Sovrin utilizes a “public permissioned” distributed ledger design (Fig 4). The easiest analogy is to the global ATM network: anyone can use an ATM (public), but only those who’ve been given special permission can add a new ATM to the ATM network (permissioned). With the Sovrin Identity Network, it is the Sovrin Foundation that grants permission for “nodes” (akin to ATMs in the metaphor) to join the network.'


I would fund a rigorous study of frontend development with a team of academics who would gather details from a very large sample of organizations about frontend development projects. My theory that I would seek to validate is that the vast majority of frontend work was unnecessary, overly complex, costly, and shouldn't have ever been funded. Chasing new approaches creates new problems. Following FAANG solutions to problems no one has is costing everyone time, effort, and money. Myths need to be debunked. Anecdotal evidence consisting exclusively of success stories is concealing the truth. It feels as if the entire frontend world has gone crazy and received financial support to fund an expensive addiction to unnecessary complexity.

Can you give some specific examples of this problem? Frontend developer curious what kind of trends/solutions you mean.

Here's my take on it from a few months ago:



Claim: Most sites are mostly static content. For example, AirBNB or Grubhub. Those sites could be way faster than they are now if they were architected differently. Only when you check out do you need anything resembling an “app”. The browsing and searching is better done with a “document” model IMO.

Ditto for YouTube... I think it used to be more a document model, but now it’s more like an app. And it’s gotten a lot slower, which I don’t think is a coincidence. Netflix is a more obvious example – it’s crazy slow.

To address the OP: for Sourcehut/Github, I would say everything except the PR review system could use the document model. Navigating code and adding comments is arguably an app.

On the other hand, there are things that are and should be apps: Google Maps, Docs, Sheets.

edit: Yeah now that I check, YouTube does the infinite scroll thing, which is slow and annoying IMO (e.g. breaks bookmarking). Ditto for AirBNB.


context: https://lobste.rs/s/jmmr3w/interview_with_drew_devault

I haven't used sourcehut much but the UI is going in the right direction IMO. It looks nice and is functional, at 5% of the weight and 20x the speed of similar sites.

I tend to agree with most of what you're saying, but one thing that leaves me conflicted is that with a document model, don't you lose the personalization of the content?

One of the reasons Netflix or YouTube is so slow is because every time you load up the page, you're supposedly getting a page that's full of content targeted specifically for you.

Would you say that you just don't want the personalization?

Why do you think you lose personalization? That's an independent issue.

I don't mean the site is completely static. I mean the site is rendered on the server, like how 99% of websites worked before 2010 or so, including Google's. Those sites were personalized.

A shorthand for the argument is jQuery vs. React. jQuery enhances a document; React "takes over" the page to give you an app. There are limitations of jQuery which is why I didn't say that specifically, but that's the general idea.


I'm finding in the last 5 years that people "forgot" how websites were made. It seems like the "default" mode of thinking switched to SPA. SPA makes some things more convenient and other things less convenient, but it's totally independent of functionality, like whether the site is personalized or not.

Ironically SPA seems to be so slow that people now render it on the server, which is totally bizarre to me.


Update: This comment in the same thread goes into the issue of state management, and performance: https://lobste.rs/s/jmmr3w/interview_with_drew_devault#c_6pp...

There's a legitimate reason for the SPA architecture, but it comes with many downsides as well.

Good article (which is ironically on Medium, a great example of a document turned into a terrible, slow app):



Isn’t Github following the document model? At least, that’s my impression of how the UI works (I.e. none of the consistent UI elements stick around as you navigate)

Yes Github looks pretty close to the document model. I think that's why they were able to optimize it over time.

It isn't exactly fast, but I think it's gotten faster over time.

On the other hand there are lots of blogs that won't even render without JS turned on. Those are apps, but IMO they should be static content.

What about when you need an editing interface with an instant preview, though? You would want to reuse the component that renders the static content. So it’s simpler, and leads to happier devs, to treat both as an app from the beginning, and if you need to (for SEO or performance), server-side-render using the same codebase.

Is this silly? Absolutely. Is it a global optimum? Possibly.

For this specific example, I would use an event listener to sync the content. All done manually in the DOM, no shortcuts, some code duplicated. I think that's what SPA frameworks and their components try to promote: a single point of update.

Personally, I like the vanilla JS approach: a bit tedious, but I know what I will get.

I don't really see what you're saying. That doesn't appear to apply to any of the examples I gave: AirBNB, Grubhub, Github, YouTube, etc.

Or if it does, it applies to a small part of the site. Again, the claim is: most sites are mostly static content.

Could you expand on this? I’m interested in the opinion

The decline of usability, recognizability and coherence in desktop user interfaces. I honestly think we reached peak UX some time in the mid-90s. With the advent of touch devices, paradigms are mixing in a way that's directly hostile to productivity.

I agree with your take that the mid-90s was probably peak UX. But I think that has more to do with it being about the time when companies stopped trying to strike a balance between accommodating both power users and casual users. Since then, much more effort has been put into providing a polished/slick UI at the expense of things like automation. Of course, that also plays into the advertising model, which I believe is diametrically opposed to things like automation. (i.e. if you're not clicking/touching, they can't tell if you're looking at the ads)

Touch capabilities can be a nice addition to many types of desktop productivity software without detracting from what's already there. Companies reworking desktop applications to look and feel like mobile apps (with all of the related pros and cons) is what causes productivity to suffer rather than anything inherent in the paradigm.

> working desktop applications to look and feel like mobile apps

This is basically what I mean by mixing paradigms, and it's getting more and more common in operating systems as well, with Windows as the clear leader.

As for touch input, I personally think it's of little use on the desktop. I've already got a mouse, which when properly configured is a pixel-precision input device. I simply cannot do the things I do with a mouse on a touchpad, let alone a touchscreen.

There are several other crimes as well, big and small, mainly in Windows but also on Linux/BSD (especially in Gnome, where the worst decisions seem to propagate into other FOSS DEs). Apple is still keeping things relatively sane, even though they've slipped somewhat of late.

It's pretty shocking that we are where we are in 2020. That year sounds like the future to me, but in computer interface terms it's definitely dystopia. The market failure to cap all market failures!

The future was better in the past!

In so many ways. What a disappointment the world can be.

Understanding the problem.

Most of the time the users haven't taken enough time to understand their own problems and have trouble articulating them in a way that's meaningful for a product manager or developer. And on the opposite side of that product managers and developers often do not develop sufficient domain experience to understand the problems users are trying to express.

This is why the absolute best software is built by people who are developing for themselves. You know when it's not solving your problem and you fix it because you know nobody else will.

Good point. But it's not a computing or engineering problem; it's a cultural, educational problem. People use language poorly. If they can't articulate well, then all that's left is garbage in, garbage out.

I seem to spend far more time on the deployment of code than I ever used to, more time writing CloudFormation and ansible than the Java backend, python lambdas and static UI that it deploys.

That seems nuts to me. I'm not sure how you fix it without being Amazon/Google. People moan about Terraform too before that gets mentioned.

> I seem to spend far more time on the deployment of code than I ever used to, more time writing CloudFormation and ansible than the Java backend, python lambdas and static UI that it deploys.

I'd really like to understand more about what the pain points are here. In contrast to your experience, I spend almost no time deploying code. We commit, tests run, we click a button, and the deployment is updated in our Kubernetes cluster. We did have to put a fair bit of work into our GitLab pipelines and the tooling that patches config into our Kubernetes manifests and deploys them to the right place (let's say 2 person-months all-in), but the payoff in the end was pretty significant. We're a fairly small company, so if we could afford to put in the time to build the foundation for painless deployment, I wonder what prevents larger orgs from doing this work? Is it access to the right skills? Difficulty prioritizing non-revenue tasks?
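For concreteness, the "commit, tests run, click a button" flow maps to a fairly short `.gitlab-ci.yml`. The image name, manifests path, and cluster wiring below are placeholders rather than anyone's actual setup:

```yaml
# Hypothetical .gitlab-ci.yml sketch; image, paths, and cluster wiring
# are placeholders.
stages: [test, deploy]

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

deploy:
  stage: deploy
  when: manual            # the "click a button" step
  environment: production
  script:
    - kubectl apply -f k8s/   # assumes cluster credentials are already configured
```

The real work, as noted above, tends to live in the tooling behind that last `script` line, not in the pipeline definition itself.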

Part of it is perhaps that I mostly do consultancy work. So rather than a product a company lives and breathes through the success of, they're usually costs that just need to be done sufficiently, quickly and then left to run themselves. 2 months is an awful lot if the project only runs for up to 6 months.

The last project I worked on was deployed to two different environments (double the headache in itself). One side required AWS CloudFormation (a requirement imposed on us for that side to be serverless), which we were new to, so we brought in three different specialists one after another, none of whom could really help other than to tell us it was messy and confirm that we were on the right track.

There was a time when I'd just drop a jar and an init script on a linux box for an MVP, now I'm looking up what the yaml should be for sticking a lambda in a VPC, or adding security groups to EC2 instances or whitelisting CloudFront access IP addresses. And redeploys involving CloudFront are slooow. I think that project must have used a dozen AWS services and a few thousand lines of CloudFormation - for deploying perhaps half that many lines of code.

As an aside, GitLab PM for CI/CD here. It would be awesome to hear about what you built and how we could have made that learning and setup process easier. Feel free to DM me on Twitter? @j4yav

> As an aside, GitLab PM for CI/CD here. It would be awesome to hear about what you built and how we could have made that learning and setup process easier. Feel free to DM me on Twitter? @j4yav

Honestly most of the effort was not in the gitlab pipelines. We found those facilities easy enough to understand and implement. Most of the work went into the tooling layer between gitlab and our runtime environment. My email is in my profile if you'd like to reach out with more specific questions.

We're in a weird transition period where developers are being forced to do ops. People will wake up to the fact that this so-called automation was just changing one type of work into another. Eventually there will be jobs called "build system engineers" and there will be a whole department that does this bookkeeping for you.

I recently heard about a startup trying to do something about this. Probably not totally mature, but might be worth a look: https://www.kaholo.io/ (I am not related to company in any way)

We're working on this at Spaceship: https://spaceship.run

We've still got a lot to build, but the goal is zero config deployment for any app/service.

I was actually just talking to a friend the other day about starting a company to solve this for Fortune 500s.

Nothing radical. Just a managed service provider that would build and maintain the CI/CD pipeline for them.

There is always heroku until your app is actually popular. Fake it till you make it.

No idea where this fits on the priority list, but I think a lot of problems around stream processing still aren't solved, and it's holding us back from a really productive programming paradigm. Handling updates/retractions elegantly is hard or impossible on many platforms, handling late (sometimes _extremely_ late) data can be very inefficient. Working with complex dependencies between events (beyond just time-based windows), in realtime, can be really tough. As the saying goes, cache invalidation is one of the hardest problems in software engineering. Having a simple platform to represent processing as a DAG, but fully supporting both short and long term changes transparently would make event sourcing architectures trivial and extremely productive. The closest we've come seems to be:


Lots of very active CS research in this area though.
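To make the retraction problem above concrete, here is a minimal, illustrative Python sketch (the names and structure are invented for this example, not any real platform's API) of an event-time windowed counter that emits a retraction plus a corrected result whenever late data changes a window it has already reported:

```python
from collections import defaultdict

class WindowedCount:
    """Toy event-time windowed counter. When an event arrives for a window
    that has already been emitted downstream, it first retracts the stale
    result and then emits the correction. Real systems (e.g. Flink) do this
    with watermarks, state backends, and changelog streams."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.counts = defaultdict(int)  # window start -> current count
        self.emitted = {}               # window start -> last emitted count

    def process(self, event_time):
        window = (event_time // self.window_size) * self.window_size
        self.counts[window] += 1
        out = []
        if window in self.emitted:
            # Late (or merely subsequent) data for an already-reported
            # window: downstream must see a retraction before the update.
            out.append(("retract", window, self.emitted[window]))
        out.append(("emit", window, self.counts[window]))
        self.emitted[window] = self.counts[window]
        return out
```

Even this toy shows why the problem is hard: every downstream consumer now has to understand retractions, not just appends.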

When humans enter the mix, it becomes really complicated. Your event sourcing code can no longer look at events like "Human A changed the record with primary key B to have property C equal to D at time T" because that event is actually a function of the snapshot of how Human A saw the entire state of the interface at time T. And what if someone edited the record with primary key B to represent an entirely different entity (from the perspective of a rational human observer)? Is it still the case that a prior event on this record should be retroactively interpreted to affect the new identity? All code becomes recursively dependent on all prior code, and all events become recursively dependent on prior events. Figuring out how to code in a paradigm where you can't retire old code is very difficult and I'd welcome people pushing the envelope.

Martin Kleppman's articles and research is a great place to start for anyone interested: https://martin.kleppmann.com/2015/02/11/database-inside-out-...

Unlimited time and resources? I'd try to tackle the global security issues related to advanced cyberattack. It's such a complex problem I don't even know if it's possible, but it would require hardening software update servers, networks, and utilities (especially electrical power distribution) to the point where a single bad Windows update doesn't take out the economy.


Far too often when I want to learn something new I either find there is not much documentation or find that there is a large mass of documentation that probably has everything I need but is so disorganized that I can't figure out how to approach it.

Another documentation problem, especially with open source projects, is that development and documentation are often loosely coupled, if at all. The people doing documentation usually don't have the resources to keep up with development, and so even if there is good well organized comprehensive documentation it is usually obsolete.

Hey there, I'm actually currently working on something in the documentation space that will benefit both companies and open source projects.

Would love to chat with you further! Shoot me an email over if you're interested at li.eric00@gmail.com

I’m one of those sad product designers who finds documentation tools fascinating :) Ping me if you need a hand on the design side (mrjoshuahughes at gmail)

Having my phone as my ultimate CPU, immediately connectable to commodity peripherals (monitor, keyboard, printers) at home, at the office and on the street. Content would be stored on the phone following the "local first" principle to favor speed and security.

Something like Ubuntu Edge for those who remember it, but with more local storage -- TBs maybe -- and more connectivity out of the box.

I envisioned something similar around the time of the 2nd gen touch phones: just plonk your phone into a dock with keyboard, mouse and monitor.

I realize now it’ll probably never happen, since the companies making the phones are the same companies that make computers.

It did happen. Samsung DeX is a desktop environment that you get when you plug your tablet or phone into a screen. It's passable but of course most Android apps aren't designed for work.

They also did a Linux on DeX. You could boot Ubuntu on your Samsung phone and display it via HDMI. They stopped supporting it around Android 10 though, presumably nobody ever used it.

Oh, didn’t know about that, thanks!

Huawei P20 and P30 series, Mate 20 and 30 series do this. Plug a USB-C cable into an external monitor, add a Bluetooth mouse and keyboard, and you have a desktop-like experience, with apps opening in windows like you'd expect on a desktop. This includes a taskbar, pinned apps, resizable windows and more. No extra hardware needed.

Thanks for the info, I’ve obviously not been following this close enough!

Here's a photo of my P30 connected to a BenQ 35" curved a few months back: https://cdn.geekzone.co.nz/imagessubs/3fd54a951fe50626b4d53b...

This is amazing. Thanks for sharing.

I talk about this all the time. Our phones are strong enough; just give me a dumb terminal to plug into for a keyboard and screen.

Yeah I'd love to just dock my phone into a desktop setup.

Not sure why this doesn't exist for Apple/Android. It seems so obvious that there must be some critical flaw I'm overlooking.

Decent nocode, or some sort of nocode holy grail.

I want to be able to plug APIs together, process user input, have persistence and identity, without writing so much boilerplate.

If we don't take software engineers as a fixed population, but instead count everyone on Earth with sufficient intelligence to code well given the right no-code framework (which currently doesn't exist), then this is indeed the biggest priority. Of course, I am under no illusion that no-code will work for everything, such as developing ML applications from scratch. Even then, it can be a way to interface with pre-built ML apps, and use them in various industries. It would also serve as a pipeline for non-STEM workers into full-blown programming and STEM.

Think about what Excel did to productivity, and multiply that by 3x or more. That's what no code could do as the 60% solution for a 200x larger labor pool (more like 80% solution for anyone who can't access engineering talent). It would also make its creators obscenely rich. With the amount of processing power that we have, the time for no code is now.

By the way, there's a large chance that Big Tech incumbents won't be the ones to create the no-code holy grail. They have institutional handcuffs that will make that difficult.

Salesforce got it partially right. To be somewhat cynical, the trick is to enable no-code, but (a) make it just borderline user-hostile enough that you foster an entire marketplace of no-code consultancies to no-code for companies too busy and frustrated to no-code themselves, and (b) ensure everything is extensible with actual code. They got (a) balanced perfectly, but (b) is somewhat lacking, and it didn't have to be. Someone who builds the no-code-but-you-can-drop-to-code consulting network will be the next Salesforce + Oracle combined, and will empower a lot of people around the world.


I stan Airtable and use it daily, but without websockets or other ways to “push” changes to external systems, it’s not really suitable for (b), and from an interface perspective it’s not customizable enough (without coding a whole new UI, which runs into the above) to accommodate (a).

> It would also make its creators obscenely rich.

How? If you want a language to be adopted and become standard, it would have to be free and open-source. You don't get "obscenely" rich from that.

This is a good example of those problems that could relatively easily be solved with a proper public-goods funding mechanism, but very difficultly without.

The developer can create an ecosystem of services that leverage network externalities and tie into the no-code framework. Of course, they’d have competitors. However, there are network externalities both with the no-code framework itself and some of the services that tie into it. If you launch an open-source no-code framework simultaneously with your own ecosystem of services, then you will profit from those services as the first-mover.

Remember, non-engineers are more sensitive to ease of use. They aren’t as fickle as engineers are with dev tools. That’s why they pay money for Microsoft Word and stick with it even though there are many free competitors. They’re accustomed to Microsoft Word, their friends use Microsoft Word, and the software is always marginally better because outsized profits are reinvested into the software.

Additionally, if the above is not enough to make one obscenely wealthy, consider this: it is not pre-ordained that no-code would be as open as a programming language like Python. In fact, it might be better for the users if no-code is a platform rather than an open-source framework (I don’t believe that, but maybe). Obviously, platforms can charge rents as the intermediary between end-users and suppliers of services on the platform. That will make you fabulously wealthy, assuming widespread adoption.

So, if there is a lack of:

- profit potential from ancillary services, and

- funding from users, suppliers, and patrons

- or, in any case, if the no-code would be fundamentally worse with open source

then a for-profit platform is the way to go. Personally, I believe that the open source + ancillary services route will result in a better ecosystem. Nonetheless, it’s worth investigating which of those three clauses are true (if any) and why, because that will have big implications for the no-code project.

Ah, the impossible holy grail of businessmen and marketeers everywhere

I’m a developer by trade and I want nocode, albeit thinking as a product guy/entrepreneur. I want to get to market fast, and then possibly iterate with code.

This seems like an antigoal to me.

Abstracting away infrastructure just means it will become increasingly centralized as a commodity and less likely/harder to create independent services.

I don’t necessarily agree.

Look at n2n or userbase, they’re open source and can be self hosted.

Other nocode tools can follow suit, and can be federated.

I agree with the other comment that this can have the same effect as the one Excel had during the past decades (there are companies running completely on Excel).

At the moment though, the tools feel too limited.

Sure they can be, but the whole ethos behind it is "don't make me think about anything other than the application."

So by default the majority are looking for hosted services, otherwise they'd just build their own stack. For anything other than some organization building a bunch of applications, that same Nocode Dev has no need to ever migrate so long as some major service provider is doing it for them.

Formally specify all the commonly used languages, runtimes and APIs, so that we could translate programs between them. Then make sure that all programs thus translated can be compiled, so that we don't waste resources by running inefficient VMs.

This would go a really long way towards efficient code reuse as well, with far less "reinventing the wheel" all the time, if shared libraries worked more easily across all (common) languages.

A true build once run anywhere platform, that doesn't sacrifice security or verifiability, and that would also scale from embedded systems (that are often married to their own limited C compiler), up to distributed systems and mobile code (like browsers acting as a remote UI typical of web apps).

WASM is a great advance over what came before in many ways, but still has some room for improvement. These are all problems with known good solutions, but there is no holistic platform that integrates these smoothly. It's largely a hodge podge of various programming languages, configuration systems, build systems, scripts, etc. Look elsewhere in this thread for people complaining about how difficult it still is to setup a development environment. Embedded programming is typically even worse, though it's improved dramatically since the rise of Arduino.

It needn't be this way though. A well designed programming language can be used for configuration, scripting and more. Racket is a good example on the dynamically typed side, and F# is a good example for a statically typed language along these lines.

>What's the most important piece of technical debt across software engineering, that could practically be solved if we put enough energy into it?

Being able to update libraries, tools etc. automatically and without friction. Right now upgrading is so tedious, error prone and painful that most places just keep using ancient versions that are not only lacking bug fixes and newer features but are a huge attack surface.

That is not known to have a solution.

It's a human coordination and cooperation problem on a global scale.

Take the people (and LAWYERs) out of the equation and software becomes much simpler.

I'm not sure if this is sarcasm or not.

But, yes, without users it is easier to create software.

Completely pointless, but much easier.

Statically-link everything and ship it as single file EXEs or containers.

Offer different versions.

Sandbox programs at the OS level to mitigate old (or new) vulns.

What if the new version has a bug?

Build the tools and infrastructure necessary to make FPGA accelerator programming accessible to the average programmer. Moore's Law is mostly dead; we are getting more cores but we are not going to get significantly faster cores in the near future. What will bring us next big jump in computing performance is unclear but FPGA acceleration seems like one of the few promising directions.

A computer and OS that boots in 100ms. Every user action gives a response in 10ms.

Well, the Atari ST booted in about one second, from black screen to moving a mouse around. A lot of that time was waiting for a floppy disk boot sector read to time out (so if you had a formatted floppy in the drive it'd spin up, quickly read a dud boot sector, and continue the boot process of the ROM-based OS rather than timing out).

I suppose if you were gonzo about it you could format the first track with sectors all numbered zero, and eliminate rotational latency. You'd save 80 ms (on average) that way.

Didn't seem worthwhile at the time :-)

We had approximately that in the 1980s. And straight to a programmable shell, too.

And then we had to wait 10 minutes for a program to load over cassette tape with a significant failure rate.

As an example of just how far we are from this, generally the bootloader hasn't even passed off to the OS within the first 100ms.

I think it's really sad, because a system would likely have to be redesigned from scratch to make this happen. So it seems unlikely to happen.

But I'm not an OS expert, maybe it is possible.

You would have to rewrite the bios and not check any of your memory.

Boot time is kind of irrelevant when everything spends its time asleep. But I strongly agree that we should try to reduce frame latency for "productivity" apps; you probably don't need 10ms as most people are still on 60Hz, but we ought to be able to manage two or three frames.

At least get typeahead working properly again. Various places have broken this, especially Facebook, but back on 3270 systems experienced operators could just hammer a bunch of data into forms as fast as they could type. Can't usually do that with web apps.

I've used a Tempest arcade machine with near-zero latency, and it was a very weird and pleasant experience.

This is definitely not near the top of my list when I think of things that get in my way on a day-to-day basis, but I'm genuinely curious about how/why it is for you. I can't remember the last time I waited for my computer to start from a cold boot (updates apply in the middle of the night and the reboot happens then). It wakes up from sleep quick enough for me, and pretty much anything I click on or type responds fast enough to appear instant.

This probably isn't quite what the commenter had in mind, but boot time matters a lot for embedded devices. It's quite possible the "Linux box" you cold boot the most is the center console of your car.

That's a really great point: I'd love the infotainment system in my car to boot faster. But that is also not "the single top-priority software engineering problem" for me, either.

I think you make a great point. I agree that waiting for my computer to boot is a non-issue today. (Other latency certainly isn't)

But my Chromecast takes a very long time to start, modern TVs, Blurays, mobile apps etc.

Modern lightbulbs don't even turn on in 100ms. I have a hard time believing sub second computer booting is really at the top of anyone's priorities.

I think it is important for embedded, but sure, not for mobile and desktop. App start time being instant is much more important.

TVs and monitors don't even turn on in 100ms

Would you mind expounding on why this is important, and/or why you chose those specific timings?

not op but everything feels sluggish. Waiting for all the latency everywhere - over years/decades - feels really draining. i.e slow websites, slow desktop apps, slow languages, slow OS updates, slow IDEs, slow DB syncs, slow email clients.

Exactly, I wait almost a minute for my Chromecast to start every day.

My newest laptop is closer to that than any computer I’ve ever had since 1984. Mostly because of SSDs I guess, but Windows boots in a few seconds. Usually takes longer to shutdown than to boot.

10ms response is a bit on the fast side. Aside from pros doing sports or music, we (humans) don’t register most kinds of responses that fast. Plus a 60Hz display is 16ms anyway. These days browsers, editors, and OSes really do usually try to meet this goal for the most common workflows, though there are plenty of demonstrable exceptions. It’s a good goal, I agree with it, but maybe 100ms would be a more reasonable worst-case response time?

100ms is too slow for e.g. a pen on a touch screen. The ink will lag behind the pen.

But for worst case in general, not bad.

Yes absolutely! Drawing needs 60Hz or 120hz or better. I assumed perhaps incorrectly you were talking about one-time responses like apps opening and menu clicks and things like that. My statement above is not good for time-based motion, apps like games and drawing, those are better with 33ms response times or better, for sure, and 100ms is too slow.

Isn't that basically what a phone or tablet does?

This problem is best solved with an effective, reliable sleep/wake mechanism, not a fast boot mechanism.

On that note, the Slackware guy said that the startup speed advantage of systemd wasn't really worth it for his distro, on the reasoning that most people only rarely completely reboot their computer anyway, especially how we have hibernate and suspend today.


I am willing to go out on a limb and say that as much as 25% of software engineering time worldwide is wasted due to poor documentation.

It's an asymmetric problem too. If someone benevolently funds a team of engineers for a couple of months to write great docs (with detailed examples) for the top 500 libraries, frameworks, and APIs, they could increase the global productivity of software engineers by 25 percent.

A very conservative estimation! And it's not only documentation of software but also the requirements and other parts of the whole lifecycle. It's incredible how many companies hold meetings after meetings to keep up the oral tradition only to run in circles. Instead of POs who churn out dozens of irrelevant tickets, we need people who can tell and write stories - real stories not "user stories". They offer insight, motivation, engagement and the right understanding to break down the work into reasonable and deliverable bits.

I'm actually currently working on something in the documentation space that will benefit both companies and open source projects. Would love to chat with you further! Shoot me an email if you're interested. (li.eric00 at gmail)

UI development is still very clunky on any desktop OS other than Windows (and I have not done UI on Windows in a decade; it used to be pretty good back in the day). iOS is good too. I mean the case where you want to create a utility to do a job for yourself, not a big project.

And I know there are lots of html / web based things you can run on desktop but they are even more complex, for me at least.

What's your idea of a good UI development environment? I've asked this question in the past and the variety of answers I got astounded me. I got everything from MacOS to Lazarus to hypercard.

I believe there are cycles in computing, one of them is between centralized and distributed, another little bit different one is between local and remote computing. For example stuff is moving into cloud, but then we have mobile apps. Etc.

Thus, on a longer term, you might want to identify the cycles and look into opportunities beyond the current phase in these cycles, and whether there are under-utilized ideas there.

For instance, for unix-style command line operation, we have the idea of piping data between applications and combining multiple applications to perform a job.

These applications communicate through a very simple protocol, the text file format, where one line means one thing. Thus, if we want to combine more complex applications, such as operations on image files, each needs to implement its own processing for various file formats, etc.

My idea would be to try to increase the abstraction level of operating system from files to something more generic.

For example, what kind of things I could script more easily if the operating system would allow me to read source code tokens/statements/packages in any language? Or images as an abstraction regardless of their file type?
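As a toy illustration of raising the pipe abstraction above lines of text, here is a Python sketch (the record shape and function names are invented for this example) where pipeline stages exchange structured records, one JSON object per line, in the spirit of tools like jq or nushell, so a downstream stage can filter on fields instead of re-parsing each file format itself:

```python
import json

def to_wire(records):
    # Serialize structured records as one JSON object per line, so the
    # stream still composes with ordinary Unix pipes.
    return "\n".join(json.dumps(r) for r in records)

def from_wire(text):
    # Parse the stream back into records, skipping blank lines.
    return [json.loads(line) for line in text.splitlines() if line]

def only_pngs(records):
    # A stage that operates on an "image" abstraction (a metadata field)
    # rather than sniffing the bytes of every file format itself.
    return [r for r in records if r.get("format") == "png"]
```

A pipeline is then just composition over the wire format, e.g. `only_pngs(from_wire(to_wire(records)))`.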

Formal verification in the basic parts of the infrastructure.

User interface responsiveness.

The Web being based on standards that leak users' data left and right.

End to end validation of complex systems.

Cookie-cutter developers using open source they don't understand to implement mission-critical infrastructure for companies that don't understand the internet. It's a recipe for disaster.

Pay math majors to manually (or with software assistance to help maintain rigor) prove the correctness of open source software Infosys-style. Get paid by enterprise customers. You don't need math majors per se, just people with some math talent. Unfortunately there are tons of mathematically inclined people in the world, many more than there are opportunities, so you can probably hire e.g. 10 proof engineers in Eastern Europe/Iron Curtain countries (Soviet-style math education, while brutal, is tremendously effective) for the cost of one SV engineer. Companies like Galois may be nice but they are expensive and don't scale very well; industrial automation and IoT hardly have the same margins as overpaid defence contractors. Training humans in TLA+ is cheaper. For example in Kazakhstan and Ukraine, Pascal is widely taught as part of the high school curriculum. Boeing can verify the output of their HCL outsourcing by an army of Slavic engineers trained in Ada Spark (which is not too dissimilar syntax-wise to Pascal [0]). Namecheap managed to train an entire customer service team in Eastern Europe on the finer details of DNS and networking technologies. All this on razor-thin margins of selling domain names. It is not unimaginable to scale this to formal verification. Considering that GitHub acquired Semmle, this field will explode in the near future.
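For a flavor of what day-to-day machine-checked proof work looks like, here is a tiny Lean 4 sketch (the theorem name is mine; the fact itself is a standard library lemma, and `simp` is assumed to discharge the goal):

```lean
-- Machine-checked: reversing a list does not change its length.
-- `simp` closes the goal using the library's simp lemmas about `reverse`.
theorem rev_length (xs : List Nat) : xs.reverse.length = xs.length := by
  simp
```

Proof engineering at industrial scale is mostly thousands of small lemmas like this, plus the maintenance burden of keeping them compiling as the code underneath changes.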

[0]: See also: https://link.springer.com/chapter/10.1007%2F3-540-48753-0_16

A search engine that produces results you would find interesting/helpful/engaging most of the time. It should heavily penalize SEO optimized, clickbaity listicle trash. Content that was created organically as part of a conversation should rank very highly if it's relevant to the query, as well as blog posts from obscure but highly relevant and informed sources.

Google died when they stopped being well informed librarians and started being aggressive salespeople. It's time for a new search engine to step in specifically catered to the curious.

This does not seem like a software engineering improvement.

It would improve it for me, but only because I'm a glorified google search monkey

In engineering: There should be a single global content-addressed namespace for data. The space should be unguessable, rather than enumerable or searchable. The effect would be to end all problems of networked data storage, and also to end copyright. DNS, Bittorrent, IPFS are all fine attempts, but also clear and abject failures. If it's not possible, then we should prove the impossibility.

In theory: Prove that one-way functions don't exist, or explicitly construct one. Similarly, prove that P!=NP, or similarly settle the question.


1) Why unguessable? That sounds like something one would naively use to keep secrets, but encryption is the right tool for that. Apart from that, it sounds like nodes participating in the network would inherently have to see names in order to process requests...

Would something like 256 or 512 bit hashes suffice? You can try to guess or enumerate, but the chances of finding anything are slim.

2) What problems of networked data storage would this end? I see lots of problems, such as discoverability, bandwidth, latency, retention, scaling, censorship, etc. Which of these are solved by globality of the network? Which of these are solved by unguessable addresses?

3) How does this end copyright? AIUI copyright is a social problem, not an engineering problem. If you want to work around it, you would need (at least) strong anonymity and censorship resistance. Freenet is the only thing (that I know of) that comes close, and while it has engineering problems, the main issue with any such project is a people problem: you need huge adoption, otherwise it is impossible to resist deanonymization and offer sufficient bandwidth & storage, etc.

1) Unguessability lets us treat the namespace as uniform and opaque. You are right that encryption is required; see the designs of Tahoe-LAFS or Dat for examples of how to blend these concepts. Another advantage is that an unguessable reference is a basic capability, and in fact the most complex capability that can be used to protect mere data.

You are also right that we don't yet have a satisfactory proof that any of this can happen. It does indeed seem like participants in the network must, as a condition of routing, be forced to handle bundles of data to which they do not have keys, and while mixnets are real, mixnets do not completely solve the problem.

Yes, if we get concrete, the typical way of building unguessable names for data is to do something like take a Merkle tree hash, and then use that as a basis for several "exported" names which are made from further hashes.
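A minimal Python sketch of that naming scheme (the derivation labels are invented here; real systems like Tahoe-LAFS derive capabilities with more structure): the root name is a hash of the content, and further hashes export narrower, unguessable names that do not reveal the root.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The name is a pure function of the content: anyone holding the
    # bytes can verify the name, but the 256-bit space cannot be
    # usefully enumerated or guessed.
    return hashlib.sha256(data).hexdigest()

def derived_name(root: str, purpose: str) -> str:
    # An "exported" name made from a further hash over the root plus a
    # purpose label; holding a derived name does not reveal the root.
    return hashlib.sha256(f"{purpose}:{root}".encode()).hexdigest()
```

So `derived_name(content_address(blob), "read")` could be handed out as a read capability without exposing the root name itself.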

2) This would form the basis for a data commons. Existing commons are actually centralized platforms supported by small specialized entities, leading to lots of extra details that nobody is incentivized to get right. By contrast, if there were a single namespace that existed beyond corporate or state control, where participation is not tied to any platform, then people are incentivized to pay their own way on discoverability (using existing social graphs), bandwidth, latency (using existing compute hosts), retention, scaling, and censorship (using low cost of publication plus low cost of maintenance).

The global/universal nature of the network ensures that, if you can reach it, then it can reach you, and you are connected. IP gives us a hint of this; the typical connection comes with an IP address, and that address can be globally routed. For a real example today, look at how Bittorrent is diverging from the need for trackers.

Finally, it's worth pointing out that decentralized designs might be able to shard computational work or otherwise balance resources. Bittorrent famously was designed to go faster as more peers contribute more spare bandwidth, to the point where the early days of Bittorrent were marked by the protocol chewing up and choking residential ISP connections.

3) Copyright is a social solution to a technical problem in most media. The problem is that publication isn't instantaneous and uniform; there is a delay of time while a published work is copied around the world, and that delay introduces opportunity for pirates to make bootlegs and undercut official releases. On the modern Web, though, this is silly. If one wants to make a simultaneous publication to all paying customers, then one can sell customized encrypted copies at each point of sale, and release a master key which decrypts them all, a day later. The entire window of opportunity for pirates can be shrunken to mere milliseconds, which is impossibly small for pirates to make a profit.
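The core of that release scheme can be sketched in a few lines of Python. This is a deliberately toy cipher (iterated SHA-256 as a keystream, XORed in; a real deployment would use AES-GCM or similar, plus the per-customer customization mentioned above): ship ciphertext to every buyer ahead of time, then publish the key at the release instant.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: iterate SHA-256 starting from the key. NOT a real
    # cipher; it only illustrates the timed-release scheme.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Buyers download `xor_cipher(key, master)` in advance; publishing `key` at T+0 makes the plaintext available to everyone simultaneously, leaving pirates no lead time in which to undercut the release.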

Since we don't really need copyright in order to prevent piracy, then it makes sense to raise the eyebrow and look more cynically at such an intrusive and artificial right. In particular, the proposed namespace grants massive power to publishing artists first.

I think coming up with a good, modern alternative to the traditional Unix/Linux/Posix-style OS software interface would be worthwhile. Something more consistent, user-friendly, and designed with modern security concerns in mind.

Code reviews are bad, I want to be able to run the code and put breakpoints from GitHub.

Having to check out the branch, install dependencies, build the code and run it only to then go back to the UI to see the changes and comments is very time consuming and being able to test suggested changes instantly would save me a ton of time.

I would gladly pay for this.

Excellent point. The code review tool in GitHub is much worse than what I was using in 2009, ten years ago! (https://smartbear.com/product/collaborator/overview/)

I haven't used Collaborator in a while, but I wonder how much it has improved in ten years.

It sounds like you're thinking of a code review tool + CI/CD pipeline, right, @inglor? e.g., 1. suggest a change in the UI or backend in the code review tool 2. run CI/CD and deploy to a test environment 3. refer to this new environment in the code review tool

Am I on the right track?

You are on the right track - I will check collaborator out :]

You might want to check out Gitpod, if you haven't already. It sounds like it should be compatible with your workflow. I think it's possible to self-host, too, if that's important to you.

Removing out of date and incorrect advice on c++ from the internet.

Same with python. I was looking for a good message queue yesterday and all the results were from 8 years ago due to stackoverflow's zealous dupe rules.

As someone trying to learn c++ this would be great.

If you get a job at a company that uses C++, you will probably need some of that outdated information.

On the meta level, I want a tool that shows me the wasted time and resources in my business' development pipeline, all the way from idea to delivery. There are endless numbers of tools that will spit out metrics that no one knows how to interpret, but I want a tool that gives me actionable advice on specific things my teams can do to increase their throughput and reduce lead times and work in progress.

I would create a cross-platform GPU-targeted GUI framework for .NET Core. Something like Avalonia, only without Skia, directly on top of D3D11, GL, GLES or Metal. Modern GPUs are awesome and can directly render very complicated vector graphics, yet outside Windows they're rarely used for GUI rendering despite the need: slow mobile CPUs plus very high-res displays are both common.

Training could be improved. We somehow seem to repeatedly solve the same problems. It would be nice if programmers were more aware of the things that were done in the past.

Requirements specification. It trumps every single concern raised so far. If you're seriously considering R&D in this space, count me in.

The silver bullet programming language + environment. As fast and zero-cost-abstraction as C++, as safe as Rust, as productive as Python, async like Go, runs everywhere like JS, live upgrades like Erlang, a development environment like C#'s, designed for remote debugging, near-instant recompilation and testing, and a universal dependency manager with sandboxing of shared libraries and SCM integration.

There is still room for domain-specific languages like SQL, HTML and such, but for imperative programming there are way too many options all filling almost the same need with just slight variation. Even with the rise of all types of VMs and cross compilers, we are still porting mountains of code to another language just because the execution environment was slightly different. The gap between embedded and web is also too big: for embedded you are stuck with prehistoric C++ with dangerous syntax, hour-long build times and days of figuring out how to correctly compile and link that shared library, vs the web where your only options are dynamic languages in which safety, predictability and performance are all a joke. I also realize safety and performance vs productivity often contradict each other, but not as much as often argued; there has to be a better middle ground than what we are stuck with today.

In isolation, this should be an achievable task, especially if you drop any legacy compatibility. What might be difficult is that people want legacy compatibility and proven-in-use technology, so the adoption rate will turn this into "there are now 15 competing standards" and a "peace on earth"-type problem.

Memory safety. And I am addressing it by adding an Ownership/Borrowing system to D.

Only 2 things: Naming things, invalidating caches, and off-by-one errors.

Training on a common ‘software engineering body of knowledge’. The state of the art is so different across organizations it’s like having medical tricorders at one place and leeches at another.

Making software security & data privacy really easy for all stakeholders. It's something I dread doing as an engineer, but I know it's one of the most important 'features' that customers expect to be baked into what I write.

Privacy at the language level. A lot of inroads have been made[0], but there's still a long way to go before it's just ubiquitous.

[0] https://twitter.com/jeanqasaur


A large and increasing share of the energy humanity generates goes to computation, yet only a tiny proportion of software is written with energy efficiency in mind. Most programmers don't even have the tools to answer how much power their programs consume.

At a start, no compute benchmark that measures wall duration should ever be taken seriously if it doesn't also include energy consumption.

What people/projects/labs/companies are doing the best work on efficiency? I'm aware of https://greenlab.di.uminho.pt/

IMHO, making WebAssembly as fully and tightly integrated with browsers as JS is today will be the next big leap forward. In such a way that you won't need to use JS at all: direct access to the DOM and other Web APIs, and the choice to use whatever programming language you want.

Java applets came 20 years too early: they potentially had the power to do everything we do with the web today, but 10x faster and more cleanly.

The web remains one place where you lack the freedom to easily use whatever language you like - WASM will end the era of JS if they do it right.

The JS ecosystem is extremely wild and turbulent: even something as simple as "I need this project to be built exactly as it was in August 2017" is almost impossible in the npm world.

Meanwhile, native apps compiled in 1985 still run, and their sources can even be built today with minimal fuss.

Let's be honest: how many of you use JS because there was no other option? It's not a terrible language (in fact I like JS/ES7 more than Python), but it's still one of the pillars of chaos in the world of programming.

> IMHO, making WebAssembly fully and tightly integrated with browsers like JS is today will be the next big leap forward. In such a way that you won't need to use JS at all

It exists, and is called VirtualBox. It just doesn't run inside the browser though, but you can run a browser in it.

Rewriting all popular proprietary software under the GPL, so that vendor lock-in no longer prevents people from using old hardware.

State observability for the entire stack.


I am working on this problem. Let me know if you are interested; I would like to hear your opinion. My Keybase is in my profile, or I can email you.

Thanks I’ll take a look

Measuring how productive software engineers are.

If you could solve this problem, you could convince management why engineers need offices, for one.

And which language to use. Even better, which language to use when.

CSS bloat belongs somewhere on the list. Thankfully, actual, pragmatic and standards-based solutions (proper design systems, treatment of layout as a first-class concern, and component-scoped CSS-in-JS) are finally emerging.

There are so many times where I think to myself, "We shouldn't be doing this in 2020." Just mundane stuff like checking for null values. You'd expect we'd have this covered by now.

I would somehow fix software engineering training so that people understood the vast corpus of techniques and approaches that have been tried, what the pros and cons of them were, and what the reactions to those issues were. Less on the theoretical algorithmic level, but on the practical and implementation level.

We should not be rewriting and adopting a slightly better but incompatible version of make every five years, or waffling between SQL and NoSQL, or churning back and forth between slightly different versions of MVC paradigms. Or going from tables to div-tables to flow layouts to css-grid.

My kingdom for a technology that lets me write an app once and run it acceptably on Android, iOS and in the browser. This probably beats everything else listed in this thread in terms of developer hours saved.

Are you aware of Apache Cordova? I've used it a couple times to deploy simple web apps as Android apps and I have no complaints about the result. Supports iOS also.


A vanilla web app. But you would still have issues designing for different screen sizes: e.g. someone with a 40-inch screen would be annoyed if the app was designed for 5-inch screens, and vice versa.

In theory Flutter allows you to do that. (With the caveat that you have to write a ton of custom code due to the lack of libraries).

Like cold fusion, it's one of those things that's perpetually around the corner but the big breakthrough never quite happens.

Check us out at [Ionic](https://ionicframework.com/) - a complete platform for doing just that. Write a web app using your framework of choice (or none at all!) then deploy as a PWA, iOS, Android, or Electron app.
