
Ask HN: What is the single top-priority software engineering problem? - abrax3141
If you have unlimited time and/or resources, what single software engineering problem would you address? I'm not talking about "Peace on Earth"-type problems, but rather real-world practical problems facing software engineering that could actually be solved if you could pay a rationally large team of serious hackers a rationally large amount of money for a rationally long period of time. Another way of asking this is: what's the most important piece of technical debt across software engineering that could practically be solved if we put enough energy into it?
======
simonw
Development environments. The amount of time and hassle I've seen lost to
getting a working development environment up and running is incredible.

Every time I talk to someone who's learning to program I tell them: "Setting
up your development environment will be a nightmare. I have been doing this
for twenty years and it's STILL a nightmare for me every time I do it. You are
not alone."

Docker is reasonably good here... except you have to install Docker, and learn
to use Docker Compose, and learn enough of the abstractions that you can fix
it when something breaks.

[https://glitch.com/](https://glitch.com/) is by far the best step forward
I've seen on this problem, but it's relatively limited in terms of what you
can accomplish with it.

~~~
amasad
This is my calling. I'm the cofounder of Repl.it and I've dedicated my career
to solving this problem.

Simon -- since you're the cocreator of Django, you might get a kick out of
this: From loading up a Django environment to adding and rendering the first
view takes less than a minute:
[https://gifs.amasad.repl.co/django.gif](https://gifs.amasad.repl.co/django.gif)

Before starting this company I was a founding engineer on the React Native
team, where I focused on speeding up setup and taking the drudgery out of
cross-platform mobile dev. And before that I was a founding engineer at
Codecademy, where we were attacking the same problem from an education angle.

With Repl.it, however, we're determined to simultaneously solve the "getting
started" problem while scaling the environment to allow real work. It's
incredibly challenging but we've made great progress. If anyone is interested
in working with us, collaborating, or if you're simply passionate about this
problem and want to chat then I'd love to hear from you: amjad@repl.it

~~~
AndrewKemendo
I love Repl.it and taught my daughter python there. Can you please make a self
hosted version? I think the only thing I've seen is that maybe sometime in the
future you'll have it.

~~~
amasad
Great to hear :-)

Certainly the plan. Just a matter of priorities and time. I'm curious, what's
so attractive about a self-hosted version?

~~~
CamperBob2
As long as my critical infrastructure and tools depend on someone else's
computer, I'm nothing more than a (potentially well-paid) sharecropper.

In fact, I think that's a valid answer to the question posed by the headline.
Returning the power to the end user, _and keeping it there_, should be the
most important priority for software developers. This is a social problem, not
an engineering problem, but unlike many other social problems the solution
will have to be engineering-driven.

In more concrete terms, that means being able to self-host your tools.

~~~
Multicomp
> As long as my critical infrastructure and tools depend on someone else's
> computer, I'm nothing more than a (potentially well-paid) sharecropper.

I heartily agree. There are so many new tools and interesting concepts that I
don't try because no offline/self-hosted version is available.

A current example (though it could be replaced by a myriad of other tools;
this is not specific to them): Notion. It apparently could be adopted into my
personal knowledge management, and it has some interesting features that most
of the software I've seen so far does not. But why would I ever invest even a
moment of time in paying the costs of using the system (let alone the
membership fees etc.) if one day, poof, it's gone: like Frank Sinatra, like
WiiWare, like Microsoft eBooks, like Google Reader.

> Returning the power to the end user, and keeping it there, should be the
> most important priority for software developers.

This is what we need to return to. Look at the open-source manifestos, the FSF
documents, heck, even certain sections of the Windows 9x user interface guides
and the .NET Framework design guidelines: they all indicate that the user
should always be the focus, that the user should be the one in control.

------
lame88
A constantly cycling proliferation of different languages, frameworks,
libraries, etc. that all do the same things in different ways, are usually
mutually incompatible, and have entirely different ecosystems with their own
comparative advantages but also major pitfalls. This causes tech workers'
investment in skills to reward shallow knowledge and trivia over deeper
concepts, creates silos of employment opportunity based on that trivial
knowledge, and hinders the software engineering field as a whole from building
a large pool of shared knowledge and from developing stable, relatively
timeless systems and tools of high quality, both as end products and as the
intermediate tooling toward those ends.

~~~
closeparen
Learning new skills is easy enough, what kills me is the enormous manpower
invested in churning working code.

Easily 30% of my company's total engineering effort is just treading water,
migrating from a deprecated platform to another one that _will_ be deprecated
by the time the migration is complete. Typically because the original team's
standard 18-month tenure has elapsed and the new guy was under-leveled at
hiring so he needs impact for promotion.

It's a great disservice to our field that people so deep in the stack are so
comfortable changing their minds all the time. The Python 2.7 thing feels like
the Library of Alexandria. Burning down mountains of perfectly good working
code just because we can.

Backwards compatibility is tragically underrated.

~~~
zelly
Backwards compatibility causes a lot of evil. I would prefer that breaking
changes come with automated tools to migrate. Sometimes bad decisions get
made, and we shouldn't have to carry the burden of that forever out of
laziness/stubbornness. What ends up happening is the opposite of what you
wanted, because after enough backwards compatibility debt adds up, someone
starts a fresh competitor that takes over.
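A toy version of such a migration tool can be built with nothing but Python's stdlib `ast` module; the deprecated name being renamed here is hypothetical, and real codemods handle far more cases:

```python
import ast

class RenameCalls(ast.NodeTransformer):
    """A toy 'codemod': rename every reference to a deprecated name."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

def migrate(source, old, new):
    """Parse, rewrite, and re-emit the source (needs Python 3.9+ for unparse)."""
    tree = RenameCalls(old, new).visit(ast.parse(source))
    return ast.unparse(tree)

print(migrate("total = cmp(a, b)", "cmp", "compare"))
# → total = compare(a, b)
```

The mechanical rewrite is the easy half; nothing in a tool like this proves the migrated code still behaves the same, which is where the real cost lives.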

~~~
gridlockd
> I would prefer that breaking changes come with automated tools to migrate.

The hard part of a migration isn't automating the change itself. The hard part
is that you can't automate verifying that the change didn't break anything. So
you'd rather stick with what you have.

> Sometimes bad decisions get made, and we shouldn't have to carry the burden
> of that forever out of laziness/stubbornness.

Here's the deal, Mr. Developer who wants to change everything because the old
stuff is somehow bad and the new stuff surely is better: I almost certainly
have better use for the money than to spend it on migrating the code. The old
code will keep working. That's not laziness, it's prudence.

Developers _always_ exaggerate the benefit of rewriting stuff and changing
things around. I understand why, you'd rather arrange the code you work with
to _your_ taste than whatever horrible stuff is there already. The problem is
that every other developer feels the same way, but with a different taste.

For instance, some people hate object oriented programming, some people think
procedural programming is somehow bad, some people believe functional
programming is generally good. I believe the best programming is the one you
can just _stop arguing about_ and solve the damn problem with, sooner rather
than later.

> What ends up happening is the opposite of what you wanted, because after
> enough backwards compatibility debt adds up, someone starts a fresh
> competitor that takes over.

That's not true at all in the vast majority of business cases. The benefit of
the migration would have to be spectacularly high to offset its cost. Again,
that almost never happens.

Whenever you break compatibility, you burn good will. Whenever you break
compatibility, you give me an opportunity to _switch to your competitor_. If I
_have_ to migrate, I might as well migrate to something else.

~~~
closeparen
When code is hard to maintain, extend, or debug we should absolutely refactor
it. Even if something is fine but can be expressed better using a feature from
the language's new backwards-compatible release, go for it.

The tragedy is mandatory, all-out migration for code that was already high
quality, and just happened to be written for an older stack.

~~~
gridlockd
_All_ code is "hard to maintain, extend and debug" to the person who didn't
write it and therefore doesn't like working with it. Developers like to
exaggerate this all the time, because they want to maximize their own comfort.

Even if developers were able to objectively assess that code is hard to
maintain/extend/debug, that doesn't mean changing it is the right business
decision. It may or may not be. From a pure efficiency standpoint, it may well
be a net loss. From a developer psychology standpoint, it may be worthwhile.

You can also do a lot of refactoring without breaking a lot of code,
especially if you didn't buy into the idea of writing tiny functions and unit
tests for every piece of functionality.

~~~
closeparen
If you’re stuck working with people who think they are the only ones ever to
have written good code, I’m sorry. That must be unpleasant. But please don’t
take that out on everyone else who merely gives a shit about the quality of
their work.

Most of the code I refactor is my own from a few years ago, btw. Usually
because I learned new information, past choices turned out to have regrettable
unforeseen consequences, etc.

But we have learned to be really careful about public APIs. Implementations
may change, interfaces are forever.

~~~
gridlockd
> If you’re stuck working with people who think they are the only ones ever to
> have written good code, I’m sorry.

I didn't say that.

> Usually because I learned new information, past choices turned out to have
> regrettable unforeseen consequences, etc.

Indeed, that "awful" code may have been written by the same developer from one
or two years ago and now they're not comfortable with it anymore either.

Of course I'm being a bit hyperbolic, but what I'm saying is basically true
and applies to pretty much everyone.

It's not all bad either, without some push towards renewal, we would be stuck
with old ideas forever. The key point is that permanent renewal has a cost
that the business must carry. Sometimes no renewal at all _is_ the right
business decision.

------
dfabulich
We need a faster web framework that generates HTML on mobile phones with no JS
on the main thread.

The web is the "single top-priority" software platform, but it's in big, big
trouble.

On mobile, users spend less than 7% of their time on the web.
[https://vimeo.com/364402896](https://vimeo.com/364402896) All of the rest of
their time is in native apps, where big corporations decide what you are and
aren't allowed to do.

As a result, the money is going to native apps. The ad money is going there,
the dev time is going there, and the mobile-web developer ecosystem is in
peril.

The biggest reason people use native apps instead of mobile web apps is
performance. Developers design web apps for fast desktop CPUs on fast WiFi
data connections, and test their sites on top-of-the-line smartphones costing
5x-10x as much as the cheap smartphones people actually carry around.

Web developers have to solve this performance problem the way we've always
solved our problems: with a new framework. ;-)

But specifically we need a framework designed to generate HTML with no JS at
all, and designed to run in a Service Worker, which is a little web server
that runs directly on the user's phone.

This style of app is often called a "Progressive Web App," and there are
plenty of frameworks that support PWAs, but they generate PWAs on top of a
single-page app framework that downloads megabytes of JavaScript running on
the main thread. PWA is an afterthought for most frameworks, but we need it to
be the centerpiece of the design.

~~~
zelly
> On mobile, users spend less than 7% of their time on the web.
> [https://vimeo.com/364402896](https://vimeo.com/364402896) All of the rest
> of their time is in native apps, where big corporations decide what you are
> and aren't allowed to do.

And the web isn't controlled by big corporations? You literally just linked to
a website on the web owned by a big publicly-traded corporation.

I am in favor of using the web over apps but mostly because it has less
tracking features. A more high-performance, bare metal implementation of the
web would most likely give more stuff for websites to fingerprint you with.
You want WASM, but WASM makes the web just as bad as apps in that respect.
WASM is going to make it impossible to hide ads (because it will be painted
without the DOM) or to block tracking or otherwise malicious code (because it
will be heavily obfuscated).

~~~
meowface
> A more high-performance, bare metal implementation of the web would most
> likely give more stuff for websites to fingerprint you with. You want WASM,
> but WASM makes the web just as bad as apps in that respect. WASM is going to
> make it impossible to hide ads (because it will be painted without the DOM)
> or to block tracking or otherwise malicious code (because it will be heavily
> obfuscated).

This is true, but what's the alternative? That just seems like a necessary
trade-off. Native code will always be easier to obfuscate. It seems backwards
to think that we should keep things far more inefficient and consume more
cycles and electricity and place things behind more layers of indirection and
make web use slower for users just so that it's harder to hide malware. This
reminds me of the argument that we shouldn't make cryptography too strong
because then it could be used by criminals in a way that even intelligence and
law enforcement agencies can't pierce.

There's already a ton of tracking going on, and typically already with heavy
obfuscation. The obfuscation doesn't seem to make a difference in terms of
practical solutions either way, since the detection and blocking is generally
based on origins and IP addresses rather than static (or even dynamic)
analysis. And a lot of ad blocking does the same, and should still work for
WASM.

For cases where the ads are served directly by the origin and are painted
without a DOM, more clever mitigations will be needed, but I don't doubt that
people will come up with solutions.

So, yes, the cat-and-mouse game is going to become easier for ad/tracking
companies and harder for anti-ad/tracking developers, but it's going to be a
big challenge for both parties, just like it is now, and the anti-ad/tracking
side is still going to have a lot of success.

------
fmjrey
What matters most is what you want to still be there when the power goes off:
data.

No amount of processing power matters if you don't have the data.

Everyone in the industry focuses too much on the processing side: objects,
functions, containers, VMs, k8s, etc. but nobody really gives proper attention
to data, its provenance, where it stays and where it goes, etc. I'm not saying
engineers don't think about these things, they obviously have to think about
it at some point. It's just that data is always secondary to the story. It's
as if processing is the cool kid and data is the stinky one nobody wants to
approach unless they have to. Look at the 12-factor principles, for example:
where is data in there? How easy is it to take data from one
place/cloud/database to another? Data is the raw material, it needs to be the
primary concern in programming languages and architectures, not objects or
functions, containers or whatever, those come after, not first.

~~~
pesfandiar
The problems around data are in general harder to address than the processing
part. Most people dabbling in software engineering don't have the skills or
attention span to work on those, and most of the issues are already solved by
extremely complicated systems (e.g. DBs, Apache projects, ...).

As for the 12-factor principles, they were first touted by a PaaS provider.
The whole idea is that they take care of the nitty-gritty, while you can focus on the
exciting (and technically easier) parts of software systems.

~~~
fmjrey
I'd be tempted to say something similar: data is harder because it sticks, in
the sense that it ties us back to the physical world. It's always located
somewhere, and it has to be moved, copied, synchronised, etc.

However when I look at the accidental complexity we have created on the
processing side, I wonder if it does not surpass the essential complexity of
dealing with data.

In other words: maybe we have yet to create the proper concepts and tools to
deal with data, while all this time the industry has been building a tower of
Babel (and overspending $$$) on languages, frameworks, containers, etc.

------
dane-pgp
A piece of technology that seems to be missing is a global decentralised
anonymous identity and trust framework. It probably requires a leap of
engineering comparable to the invention of the blockchain (although I don't
think that a blockchain is necessarily the model to follow here, since the
data should be encrypted).

Every website and app and network service seems to be reinventing the wheel
here, and end up creating little silos of trust data, whereas in principle it
should be possible to receive a (locally) consistent answer to the question of
"Is this person in good standing with the other humans they interact with?"
whether that person is sending you an email, or writing a software library you
are downloading, or creating an account on your website, or selling you goods
on Ebay, or offering you a lift through a ride sharing app.

~~~
ijpsud
I was talking about this with a friend the other day. I don't know about the
technical feasibility (RE sybil attack[0]), but if it were possible, it would
have a massive impact on the web. The current state of the art (in terms of
new user signup) is using phone numbers as representing unique "trustable"
people, which is kind of absurd.

This reminds me of how social security numbers weren't originally intended to
be used as unique identification, but the demand for some form of
identification was so strong that organisations ended up hackily using it
anyway.[1]

[0]
[https://en.wikipedia.org/wiki/Sybil_attack](https://en.wikipedia.org/wiki/Sybil_attack)

[1]
[https://www.youtube.com/watch?v=Erp8IAUouus](https://www.youtube.com/watch?v=Erp8IAUouus)

~~~
zozbot234
Just post a bond that is forfeited if you engage in verifiable abuse, the
proceeds of which are used to compensate the victims (if applicable). Use
pseudonymous identity to link any number of site-specific "identities" to that
same initial posting. Real-name identity can then be optional (although some
sites may still insist on it), but users in good standing are protected from
"sybil" attacks because each entirely-new user requires posting a separate
bond, so the cost quickly becomes infeasible.
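A toy sketch of how such a bond registry might work; all names and amounts are illustrative, and the genuinely hard parts (arbitration, payment rails, privacy of the linkage) are waved away:

```python
class BondRegistry:
    """One posted bond backs many site-specific pseudonyms; verified
    abuse forfeits the bond and invalidates every linked identity."""

    def __init__(self, minimum_bond):
        self.minimum_bond = minimum_bond
        self.bonds = {}   # bond_id -> amount still held
        self.links = {}   # pseudonym -> bond_id

    def post_bond(self, bond_id, amount):
        if amount < self.minimum_bond:
            raise ValueError("bond too small")
        self.bonds[bond_id] = amount

    def link(self, pseudonym, bond_id):
        if bond_id not in self.bonds:
            raise KeyError("no such bond")
        self.links[pseudonym] = bond_id

    def in_good_standing(self, pseudonym):
        bond_id = self.links.get(pseudonym)
        return bond_id is not None and self.bonds.get(bond_id, 0) > 0

    def forfeit(self, pseudonym):
        """Called after an arbitrator verifies abuse; returns the
        seized amount (e.g. to compensate victims)."""
        bond_id = self.links[pseudonym]
        return self.bonds.pop(bond_id, 0)

registry = BondRegistry(minimum_bond=100)
registry.post_bond("bond-1", 100)
registry.link("alice@forum", "bond-1")
registry.link("alice@marketplace", "bond-1")  # same bond, new site
print(registry.in_good_standing("alice@forum"))  # → True
```

The sybil resistance falls out of the economics: each fresh identity an attacker burns requires posting a fresh minimum bond, while an honest user links many pseudonyms to one.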

~~~
dane-pgp
Posting bonds might be part of a solution, but there is still a question of
who gets to decide whether or not something is abuse (or who is a victim, for
that matter). The closest I've seen to this sort of system is OpenBazaar's use
of "proof of burn":

[https://openbazaar.org/blog/why-proof-of-burn/](https://openbazaar.org/blog/why-proof-of-burn/)

~~~
zozbot234
> but there is still a question of who gets to decide whether or not something
> is abuse (or who is a victim, for that matter)

In principle, all you need is a trusted arbitrator that's acceptable to all
involved parties. This is how "multiple signatures" work on Bitcoin already;
the third-party escrow can decide who's going to keep the coins by adding her
signature to either party's claim.

------
say_it_as_it_is
I would fund a rigorous study of frontend development with a team of academics
who would gather details from a very large sample of organizations about
frontend development projects. My theory that I would seek to validate is that
the vast majority of frontend work was unnecessary, overly complex, costly,
and shouldn't have ever been funded. Chasing new approaches creates new
problems. Following FAANG solutions to problems no one has is costing everyone
time, effort, and money. Myths need to be debunked. Anecdotal evidence
consisting exclusively of success stories is concealing the truth. It feels as
if the entire frontend world has gone crazy and received financial support to
fund an expensive addiction to unnecessary complexity.

~~~
ketzo
Can you give some specific examples of this problem? Frontend developer
curious what kind of trends/solutions you mean.

~~~
chubot
Here's my take on it from a few months ago:

\----

...

Claim: Most sites are mostly static content. For example, AirBNB or Grubhub.
Those sites could be way faster than they are now if they were architected
differently. Only when you check out do you need anything resembling an “app”.
The browsing and searching is better done with a “document” model IMO.

Ditto for YouTube... I think it used to be more a document model, but now it’s
more like an app. And it’s gotten a lot slower, which I don’t think is a
coincidence. Netflix is a more obvious example – it’s crazy slow.

To address the OP: for Sourcehut/Github, I would say everything except the PR
review system could use the document model. Navigating code and adding
comments is arguably an app.

On the other hand, there are things that are and should be apps: Google Maps,
Docs, Sheets.

edit: Yeah now that I check, YouTube does the infinite scroll thing, which is
slow and annoying IMO (e.g. breaks bookmarking). Ditto for AirBNB.

[https://lobste.rs/s/jmmr3w/interview_with_drew_devault#c_wuc...](https://lobste.rs/s/jmmr3w/interview_with_drew_devault#c_wucxv3)

context:
[https://lobste.rs/s/jmmr3w/interview_with_drew_devault](https://lobste.rs/s/jmmr3w/interview_with_drew_devault)

I haven't used sourcehut much but the UI is going in the right direction IMO.
It looks nice and is functional, at 5% of the weight and 20x the speed of
similar sites.

~~~
btown
What about when you need an editing interface with an instant preview, though?
You would want to reuse the component that renders the static content. So it’s
simpler, and leads to happier devs, to treat both as an app from the
beginning, and if you need to (for SEO or performance), server-side-render
using the same codebase.

Is this silly? Absolutely. Is it a global optimum? Possibly.

~~~
richajak
For this specific example, I would use an event listener to sync the content.
All done manually in the DOM, no shortcuts, some code duplicated. I think
that's what SPA frameworks and their components try to promote: a single point
of update.

Personally, I like the vanilla JS approach; a bit tedious, but I know what I
will get.

------
arexxbifs
The decline of usability, recognizability and coherence in desktop user
interfaces. I honestly think we reached peak UX some time in the mid-90s. With
the advent of touch devices, paradigms are mixing in a way that's directly
hostile to productivity.

~~~
blihp
I agree with your take that mid-90's was probably peak UX. But I think that
has more to do with that being about the time when companies stopped trying to
strike a balance between accommodating both power users and casual users.
Since then, much more effort has been put into providing a polished/slick UI
at the expense of things like automation. Of course, that also plays into the
advertising model which I believe is diametrically opposed to things like
automation. (i.e. if you're not clicking/touching, they can't tell if you're
looking at the ads)

Touch capabilities can be a nice addition to many types of desktop
productivity software without detracting from what's already there. Companies
reworking desktop applications to look and feel like mobile apps (with all of
the related pros and cons) is what causes productivity to suffer rather than
anything inherent in the paradigm.

~~~
arexxbifs
> reworking desktop applications to look and feel like mobile apps

This is basically what I mean by mixing paradigms, and it's getting more and
more common in operating systems as well, with Windows as clear leader.

As for touch input, I personally think it's of little use on the desktop. I've
already got a mouse, which when properly configured is a pixel-precision input
device. I simply cannot do the things I do with a mouse on a touchpad, let
alone a touchscreen.

There are several other crimes as well, big and small, mainly in Windows but
also on Linux/BSD (especially in Gnome, where the worst decisions made seem to
perpetuate into other FOSS DEs). Apple is still keeping things relatively
sane, even though they've slipped somewhat of late.

------
jspaetzel
Understanding the problem.

Most of the time the users haven't taken enough time to understand their own
problems and have trouble articulating them in a way that's meaningful for a
product manager or developer. And on the opposite side of that product
managers and developers often do not develop sufficient domain experience to
understand the problems users are trying to express.

This is why the absolute best software is built by people who are developing
for themselves. You know when it's not solving your problem and you fix it
because you know nobody else will.

~~~
sebastianconcpt
Good point. But it's not a computing or engineering problem. It's a cultural,
educational problem. People use language poorly. If they can't articulate well
then... all that is left is _garbage in, garbage out_.

------
cs02rm0
I seem to spend far more time on the deployment of code than I ever used to,
more time writing CloudFormation and ansible than the Java backend, python
lambdas and static UI that it deploys.

That _seems_ nuts to me. I'm not sure how you fix it without being
Amazon/Google. People moan about Terraform too before that gets mentioned.

~~~
markbnj
> I seem to spend far more time on the deployment of code than I ever used to,
> more time writing CloudFormation and ansible than the Java backend, python
> lambdas and static UI that it deploys.

I'd really like to understand more about what the pain points are here. In
contrast to your experience I spend almost no time deploying code. We commit,
tests run, we click a button and the deployment is updated in our kubernetes
cluster. We did have to put a fair bit of work into our gitlab pipelines and
the tooling that patches config into our kubernetes manifests and deploys them
to the right place (let's say 2 person months all-in), but the payoff in the
end was pretty significant. We're a fairly small company so if we could afford
to put the time in to build the foundation for painless deployment I wonder
what prevents larger orgs from doing this work? Is it access to the right
skills? Difficulty prioritizing non-revenue tasks?

~~~
j4yav
As an aside, GitLab PM for CI/CD here. It would be awesome to hear about what
you built and how we could have made that learning and setup process easier.
Feel free to DM me on Twitter? @j4yav

~~~
markbnj
> As an aside, GitLab PM for CI/CD here. It would be awesome to hear about
> what you built and how we could have made that learning and setup process
> easier. Feel free to DM me on Twitter? @j4yav

Honestly most of the effort was not in the gitlab pipelines. We found those
facilities easy enough to understand and implement. Most of the work went into
the tooling layer between gitlab and our runtime environment. My email is in
my profile if you'd like to reach out with more specific questions.
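The kind of tooling layer described above can be sketched roughly like this; the dict-merge strategy and field values are a hypothetical illustration, not the actual implementation:

```python
# Patch per-environment config into a base Kubernetes manifest:
# dicts merge recursively, anything else is replaced outright.
def deep_merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {
    "kind": "Deployment",
    "spec": {"replicas": 1, "template": {"spec": {"containers": []}}},
}
prod_override = {"spec": {"replicas": 5}}

patched = deep_merge(base, prod_override)
print(patched["spec"]["replicas"])  # → 5
```

Real tooling (kustomize, Helm) does the same job with more ceremony; the point is that the pipeline glue is small, it just has to be built once and owned by someone.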

------
thom
No idea where this fits on the priority list, but I think a lot of problems
around stream processing still aren't solved, and it's holding us back from a
really productive programming paradigm. Handling updates/retractions elegantly
is hard or impossible on many platforms, handling late (sometimes _extremely_
late) data can be very inefficient. Working with complex dependencies between
events (beyond just time-based windows), in realtime, can be really tough. As
the saying goes, cache invalidation is one of the hardest problems in software
engineering. Having a simple platform to represent processing as a DAG, but
fully supporting both short and long term changes transparently would make
event sourcing architectures trivial and extremely productive. The closest
we've come seems to be:

[https://github.com/frankmcsherry/differential-dataflow](https://github.com/frankmcsherry/differential-dataflow)

Lots of very active CS research in this area though.
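The core idea behind differential dataflow, representing a collection as (record, multiplicity) deltas so a retraction is just a negative count, is small enough to sketch. This is a toy that ignores logical timestamps, which are the genuinely hard part:

```python
from collections import defaultdict

class IncrementalCount:
    """Keep a grouped count up to date from a stream of deltas;
    late data and retractions are just more deltas."""

    def __init__(self):
        self.counts = defaultdict(int)

    def update(self, deltas):
        """Apply (key, multiplicity) changes; return the keys whose
        count changed, so downstream operators can react incrementally."""
        changed = {}
        for key, mult in deltas:
            self.counts[key] += mult
            changed[key] = self.counts[key]
        return changed

clicks = IncrementalCount()
clicks.update([("page_a", +1), ("page_a", +1), ("page_b", +1)])
# a late-arriving correction: one page_a click was a duplicate
print(clicks.update([("page_a", -1)]))  # → {'page_a': 1}
```

Only the affected key is recomputed, which is the productivity win the comment above is after; the open research is making this work through arbitrary DAGs of operators with consistency guarantees.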

~~~
btown
When humans enter the mix, it becomes really complicated. Your event sourcing
code can no longer look at events like "Human A changed the record with
primary key B to have property C equal to D at time T" because that event is
actually a function of the snapshot of how Human A saw the entire state of the
interface at time T. And what if someone edited the record with primary key B
to represent an entirely different entity (from the perspective of a rational
human observer)? Is it still the case that a prior event on this record should
be retroactively interpreted to affect the new identity? All code becomes
recursively dependent on all prior code, and all events become recursively
dependent on prior events. Figuring out how to code in a paradigm where you
can't retire old code is _very_ difficult and I'd welcome people pushing the
envelope.
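A minimal event-sourcing fold makes the problem concrete; the event shapes here are hypothetical:

```python
# State is a pure function of the event log. The trap described above
# lives in apply(): change its logic and you have silently reinterpreted
# *all* past events, including ones recorded under a record's old meaning.

def apply(state, event):
    record = dict(state.get(event["key"], {}))
    if event["type"] == "set":
        record[event["prop"]] = event["value"]
    elif event["type"] == "repurpose":
        record = {"entity": event["value"]}  # old props: keep or drop?
    return {**state, event["key"]: record}

def replay(events):
    state = {}
    for event in events:
        state = apply(state, event)
    return state

log = [
    {"type": "set", "key": "B", "prop": "C", "value": "D"},
    {"type": "repurpose", "key": "B", "value": "something else"},
    # should the earlier set of C still apply to the new entity?
]
print(replay(log))  # → {'B': {'entity': 'something else'}}
```

Every answer to the comment in the log is a policy baked into `apply`, and that function can never be retired without changing history.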

Martin Kleppmann's articles and research are a great place to start for anyone
interested: [https://martin.kleppmann.com/2015/02/11/database-inside-out-at-salesforce.html](https://martin.kleppmann.com/2015/02/11/database-inside-out-at-salesforce.html)

------
3pt14159
Unlimited time and resources? I'd try to tackle the global security issues
related to advanced cyberattack. It's such a complex problem I don't even know
if it's possible, but it would require hardening software update servers,
networks, and utilities (especially electrical power distribution) to the
point where a single bad Windows update doesn't take out the economy.

------
tzs
Documentation.

Far too often when I want to learn something new I either find there is not
much documentation or find that there is a large mass of documentation that
probably has everything I need but is so disorganized that I can't figure out
how to approach it.

Another documentation problem, especially with open source projects, is that
development and documentation are often loosely coupled, if at all. The people
doing documentation usually don't have the resources to keep up with
development, and so even if there is good well organized comprehensive
documentation it is usually obsolete.

~~~
thepenguinco
Hey there, I'm actually currently working on something in the documentation
space that will benefit both companies and open source projects.

Would love to chat with you further! Shoot me an email if you're interested:
li.eric00@gmail.com

~~~
joshuahughes
I’m one of those sad product designers who finds documentation tools
fascinating :) Ping me if you need a hand on the design side (mrjoshuahughes
at gmail)

------
rodolphoarruda
Having my phone as my ultimate CPU, immediately connectable to commodity
peripherals (monitor, keyboard, printers) at home, office and on the street.
Content would be stored on the phone following the "local first" principle to
favor speed and security.

Something like Ubuntu Edge for those who remember it, but with more local
storage -- TBs maybe -- and more connectivity out of the box.

~~~
arexxbifs
I envisioned something similar around the time of the 2nd gen touch phones:
just plonk your phone into a dock with keyboard, mouse and monitor.

I realize now it’ll probably never happen, since the companies making the
phones are the same companies that make computers.

~~~
thu2111
It did happen. Samsung DeX is a desktop environment that you get when you plug
your tablet or phone into a screen. It's passable but of course most Android
apps aren't designed for work.

They also did a Linux on DeX. You could boot Ubuntu on your Samsung phone and
display it via HDMI. They stopped supporting it around Android 10 though,
presumably nobody ever used it.

~~~
arexxbifs
Oh, didn’t know about that, thanks!

------
ronyfadel
Decent nocode, or some sort of nocode holy grail.

I want to be able to plug APIs together, process user input, have persistence
and identity, without writing so much boilerplate.

~~~
keenmaster
If we don't take software engineers as a fixed population, but rather count
everyone on Earth with sufficient intelligence to code well given the right
no-code framework (which currently doesn't exist), then this is indeed the biggest
priority. Of course, I am under no illusion that no code will work for
everything, such as developing ML applications from scratch. Even then, it can
be a way to interface with pre-built ML apps, and use them in various
industries. It would also serve as a pipeline for non-STEM workers into full-
blown programming and STEM.

Think about what Excel did to productivity, and multiply that by 3x or more.
That's what no code could do as the 60% solution for a 200x larger labor pool
(more like 80% solution for anyone who can't access engineering talent). It
would also make its creators obscenely rich. With the amount of processing
power that we have, the time for no code is now.

By the way, there's a large chance that Big Tech incumbents won't be the ones
to create the no-code holy grail. They have institutional handcuffs that will
make that difficult.

~~~
btown
Salesforce got it partially right. To be somewhat cynical, the trick is to
enable no-code, but (a) make it just borderline user-hostile enough that you
foster an entire marketplace of no-code consultancies to no-code for companies
too busy and frustrated to no-code themselves, and (b) ensure everything is
extensible with actual code. They got (a) balanced perfectly, but (b) is
somewhat lacking, and it didn't have to be. Someone who builds the no-code-
but-you-can-drop-to-code consulting network will be the next Salesforce +
Oracle combined, and will empower a lot of people around the world.

~~~
aaronblohowiak
Airtable?

~~~
btown
I stan Airtable and use it daily, but without websockets or other ways to
“push” changes to external systems, it’s not really suitable for (b), and from
an interface perspective it’s not customizable enough (without coding a whole
new UI, which runs into the above) to accommodate (a).

------
js8
Formally specify all the commonly used languages, runtimes and APIs, so that
we could translate programs between them. Then make sure that all programs
translated this way can be compiled, so that we don't waste resources running
inefficient VMs.

~~~
drusepth
This would go a really long way towards efficient code reuse as well, with far
less "reinventing the wheel" if shared libraries worked more easily across all
(common) languages.

------
belltaco
>What's the most important piece of technical debt across software
engineering, that could practically be solved if we put enough energy into it?

Being able to update libraries, tools etc. automatically and without friction.
Right now upgrading is so tedious, error prone and painful that most places
just keep using ancient versions that are not only lacking bug fixes and newer
features but are a huge attack surface.

~~~
NateEag
That is not known to have a solution.

It's a human coordination and cooperation problem on a global scale.

~~~
beamatronic
Take the people (and LAWYERs) out of the equation and software becomes much
simpler.

~~~
NateEag
I'm not sure if this is sarcasm or not.

But, yes, without users it is easier to create software.

Completely pointless, but much easier.

------
naasking
A true build once run anywhere platform, that doesn't sacrifice security or
verifiability, and that would also scale from embedded systems (that are often
married to their own limited C compiler), up to distributed systems and mobile
code (like browsers acting as a remote UI typical of web apps).

WASM is a great advance over what came before in many ways, but still has some
room for improvement. These are all problems with known good solutions, but
there is no holistic platform that integrates these smoothly. It's largely a
hodgepodge of various programming languages, configuration systems, build
systems, scripts, etc. Look elsewhere in this thread for people complaining
about how difficult it still is to setup a development environment. Embedded
programming is typically even worse, though it's improved dramatically since
the rise of Arduino.

It needn't be this way though. A well designed programming language can be
used for configuration, scripting and more. Racket is a good example on the
dynamically typed side, and F# is a good example for a statically typed
language along these lines.

------
GrumpyYoungMan
Build the tools and infrastructure necessary to make FPGA accelerator
programming accessible to the average programmer. Moore's Law is mostly dead;
we are getting more cores but we are not going to get significantly faster
cores in the near future. What will bring us next big jump in computing
performance is unclear but FPGA acceleration seems like one of the few
promising directions.

~~~
thu2111
Here you go:
[https://fosdem.org/2020/schedule/event/tornadovm/](https://fosdem.org/2020/schedule/event/tornadovm/)

------
petters
A computer and OS that boots in 100ms. Every user action gives a response in
10ms.

~~~
robotresearcher
We had approximately that in the 1980s. And straight to a programmable shell,
too.

~~~
sambeau
And then we had to wait 10 minutes for a program to load over cassette tape
with a significant failure rate.

------
snisarenko
Documentation!

I am willing to go out on a limb and say that as much as 25% of software
engineering time worldwide is wasted due to poor documentation.

It's an asymmetric problem too. If someone benevolently funded a team of
engineers for a couple of months to write great docs (with detailed examples)
for the top 500 libraries, frameworks, and APIs, they could increase the
global productivity of software engineers by 25%.

~~~
pbedat
A very conservative estimation! And it's not only documentation of software
but also the requirements and other parts of the whole lifecycle. It's
incredible how many companies hold meetings after meetings to keep up the oral
tradition only to run in circles. Instead of POs who churn out dozens of
irrelevant tickets, we need people who can tell and write stories - real
stories not "user stories". They offer insight, motivation, engagement and the
right understanding to break down the work into reasonable and deliverable
bits.

------
alimoeeny
UI development is still very clunky on any desktop OS other than Windows (and
I haven't done UI on Windows in a decade; it used to be pretty good back in
the day). iOS is good. I mean the case where you want to create a utility to
do a job for yourself, not a big project.

And I know there are lots of html / web based things you can run on desktop
but they are even more complex, for me at least.

~~~
ScottFree
What's your idea of a good UI development environment? I've asked this
question in the past and the variety of answers I got astounded me. I got
everything from MacOS to Lazarus to hypercard.

------
rhn_mk1
Formal verification in the basic parts of the infrastructure.

User interface responsiveness.

The Web being based on standards that leak users' data left and right.

------
kukkeliskuu
I believe there are cycles in computing. One of them is between centralized
and distributed; another, slightly different one is between local and remote
computing. For example, stuff is moving into the cloud, but then we have
mobile apps. Etc.

Thus, in the longer term, you might want to identify the cycles, look for
opportunities beyond the current phase of these cycles, and ask whether there
are under-utilized ideas there.

For instance, for unix-style command line operation, we have the idea of
piping data between applications and combining multiple applications to
perform a job.

These applications communicate through a very simple protocol, the text file
format, where one line means one thing. Thus, if we want to combine more
complex applications, such as operations on image files, each needs to
implement its own processing for the various file formats, etc.
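As a tiny illustration of that one-line-one-record protocol, here is a sketch of a filter in the Unix tradition (the function name and CLI shape are my own, not from the comment):

```python
import sys

def filter_lines(lines, pattern):
    """Yield only the lines that contain `pattern`.

    One line = one record, so this composes with any other
    line-oriented tool in a pipeline.
    """
    for line in lines:
        if pattern in line:
            yield line

if __name__ == "__main__":
    # Usage in a pipeline, e.g.: cat app.log | python filter.py ERROR | sort
    pattern = sys.argv[1] if len(sys.argv) > 1 else ""
    for line in filter_lines(sys.stdin, pattern):
        sys.stdout.write(line)
```

The point is that the "protocol" is nothing but newline-delimited text, which is exactly why it breaks down for richer data like images.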

My idea would be to try to raise the abstraction level of the operating
system from files to something more generic.

For example, what kinds of things could I script more easily if the operating
system allowed me to read source code tokens/statements/packages in any
language? Or images as an abstraction, regardless of their file type?

------
ChuckMcM
End to end validation of complex systems.

Cookie-cutter developers using open source they don't understand to implement
mission-critical infrastructure for companies that don't understand the
internet. It's a recipe for disaster.

~~~
ampdepolymerase
Pay math majors to manually (or with software assistance to help maintain
rigor) prove the correctness of open source software Infosys-style. Get paid
by enterprise customers. You don't need math majors per se, just people with
some math talent. Unfortunately there are many more mathematically inclined
people in the world than there are opportunities for them, so you could
probably hire e.g. 10 proof engineers in Eastern Europe/former Iron Curtain
countries (Soviet-style math education, while brutal, is tremendously
effective) for the cost of one SV engineer. Companies like Galois may be nice
but they are expensive and don't scale very well; industrial automation and
IoT hardly have the same margins as overpaid defence contractors. Training
humans in TLA+ is cheaper. For example in Kazakhstan and Ukraine, Pascal is
widely taught as part of the high school curriculum. Boeing can verify the
output of their HCL outsourcing by an army of Slavic engineers trained in Ada
Spark (which is not too dissimilar syntax-wise to Pascal [0]). Namecheap
managed to train an entire customer service team in Eastern Europe on the
finer details of DNS and networking technologies. All this on razor thin
margins of selling domain names. It is not unimaginable to scale this to
formal verification. Considering that GitHub acquired Semmle, this field will
explode in the near future.

[0]: See also:
[https://link.springer.com/chapter/10.1007%2F3-540-48753-0_16](https://link.springer.com/chapter/10.1007%2F3-540-48753-0_16)

------
genidoi
A search engine that produces results you would find
interesting/helpful/engaging most of the time. It should heavily penalize
SEO-optimized, clickbaity listicle trash. Content that was created organically as
part of a conversation should rank very highly if it's relevant to the query,
as well as blog posts from obscure but highly relevant and informed sources.

Google died when they stopped being well informed librarians and started being
aggressive salespeople. It's time for a new search engine to step in
specifically catered to the curious.

~~~
belltaco
This does not seem like a software engineering improvement.

~~~
someguy101010
It would improve it for me, but only because I'm a glorified google search
monkey

------
lidHanteyk
In engineering: There should be a single global content-addressed namespace
for data. The space should be unguessable, rather than enumerable or
searchable. The effect would be to end all problems of networked data storage,
and also to end copyright. DNS, Bittorrent, IPFS are all fine attempts, but
also clear and abject failures. If it's not possible, then we should prove the
impossibility.

In theory: Prove that one-way functions don't exist, or explicitly construct
one. Similarly, prove that P!=NP, or similarly settle the question.

~~~
clarry
Curious:

1) Why unguessable? That sounds like something one would naively use to keep
secrets, but encryption is the right tool for that. Apart from that, it sounds
like nodes participating in the network would inherently have to see names in
order to process requests...

Would something like 256 or 512 bit hashes suffice? You can try to guess or
enumerate, but the chances of finding anything are slim.

2) What problems of networked data storage would this end? I see lots of
problems, such as discoverability, bandwidth, latency, retention, scaling,
censorship, etc. Which of these are solved by the globality of the network?
Which of these are solved by unguessable addresses?

3) How does this end copyright? AIUI copyright is a social problem, not an
engineering problem. If you want to work around it, you would need (at least)
strong anonymity and censorship resistance. Freenet is the only thing (that I
know of) that comes close, and while it has engineering problems, the main
issue with any such project is a people problem: you need huge adoption,
otherwise it is impossible to resist deanonymization and offer sufficient
bandwidth & storage, etc.

~~~
lidHanteyk
1) Unguessability lets us treat the namespace as uniform and opaque. You are
right that encryption is required; see the designs of Tahoe-LAFS or Dat for
examples of how to blend these concepts. Another advantage is that an
unguessable reference is a basic capability, and in fact the most complex
capability that can be used to protect mere data.

You are also right that we don't yet have a satisfactory proof that any of
this can happen. It does indeed seem like participants in the network must, as
a condition of routing, be forced to handle bundles of data to which they do
not have keys, and while mixnets are real, mixnets do not completely solve the
problem.

Yes, if we get concrete, the typical way of building unguessable names for
data is to do something like take a Merkle tree hash, and then use that as a
basis for several "exported" names which are made from further hashes.
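Concretely, the hash-then-derive naming described above might look something like this toy sketch (all names here are illustrative; real systems like Tahoe-LAFS, Dat, or IPFS each differ in the details):

```python
import hashlib

def content_id(blob: bytes) -> str:
    """Primary name of a blob: the hex digest of its contents.
    With a 256-bit hash the namespace is flat, uniform, and
    effectively unguessable by enumeration."""
    return hashlib.sha256(blob).hexdigest()

def exported_name(cid: str, purpose: str) -> str:
    """Derive a further 'exported' name by hashing the primary id
    together with a purpose tag, so different capabilities can be
    handed out without revealing each other."""
    return hashlib.sha256(f"{purpose}:{cid}".encode()).hexdigest()

blob = b"hello world"
cid = content_id(blob)
read_cap = exported_name(cid, "read")

# The same content always maps to the same name...
assert content_id(b"hello world") == cid
# ...and a derived name doesn't equal (or trivially reveal) the primary one.
assert read_cap != cid
```

A Merkle tree generalizes `content_id` to large files by hashing chunks and then hashing the hashes, but the naming idea is the same.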

2) This would form the basis for a data commons. Existing commons are actually
centralized platforms supported by small specialized entities, leading to lots
of extra details that nobody is incentivized to get right. By contrast, if
there were a single namespace that existed beyond corporate or state control,
where participation is platform-neutral, then people are incentivized to pay their own
way on discoverability (using existing social graphs), bandwidth, latency
(using existing compute hosts), retention, scaling, and censorship (using low
cost of publication plus low cost of maintenance).

The global/universal nature of the network ensures that, if you can reach it,
then it can reach you, and you are connected. IP gives us a hint of this; the
typical connection comes with an IP address, and that address can be globally
routed. For a real example today, look at how Bittorrent is diverging from the
need for trackers.

Finally, it's worth pointing out that decentralized designs might be able to
shard computational work or otherwise balance resources. Bittorrent famously
was designed to go faster as more peers contribute more spare bandwidth, to
the point where the early days of Bittorrent were marked by the protocol
chewing up and choking residential ISP connections.

3) Copyright is a social _solution_ to a technical problem in most media. The
problem is that publication isn't instantaneous and uniform; there is a delay
of time while a published work is copied around the world, and that delay
introduces opportunity for pirates to make bootlegs and undercut official
releases. On the modern Web, though, this is silly. If one wants to make a
simultaneous publication to all paying customers, then one can sell customized
encrypted copies at each point of sale, and release a master key which
decrypts them all, a day later. The entire window of opportunity for pirates
can be shrunk to mere milliseconds, far too small for pirates to turn a
profit.
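The sell-encrypted-copies-now, release-the-key-later scheme can be sketched as follows. The cipher here is a deliberately toy keystream (SHA-256 in counter mode) just to keep the example dependency-free; a real system would use a vetted AEAD cipher, and the per-customer customization (watermarking) the comment mentions is omitted:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter. NOT real crypto."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(keystream(key, len(data)), data))

# Publisher: encrypt the work under one master key and ship the
# ciphertext to every paying customer ahead of release day.
master_key = secrets.token_bytes(32)
work = b"the complete novel, all 400 pages"
ciphertext = xor_cipher(master_key, work)

# Release day: publishing the master key "decrypts them all" at once.
assert xor_cipher(master_key, ciphertext) == work
```

Everyone gets the plaintext in the same instant the key is published, which is the claimed millisecond-wide window for pirates.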

Since we don't really need copyright in order to prevent piracy, then it makes
sense to raise the eyebrow and look more cynically at such an intrusive and
artificial right. In particular, the proposed namespace grants massive power
_to publishing artists_ first.

------
elihu
I think coming up with a good, modern alternative to the traditional
Unix/Linux/Posix-style OS software interface would be worthwhile. Something
more consistent, user-friendly, and designed with modern security concerns in
mind.

------
inglor
Code reviews are bad, I want to be able to run the code and put breakpoints
from GitHub.

Having to check out the branch, install dependencies, build the code and run
it only to then go back to the UI to see the changes and comments is very time
consuming and being able to test suggested changes instantly would save me a
ton of time.

I would gladly pay for this.

~~~
the-alchemist
Excellent point. The code review tool in GitHub is much worse than what I was
using in 2009, ten years ago!
([https://smartbear.com/product/collaborator/overview/](https://smartbear.com/product/collaborator/overview/))

I haven't used Collaborator in a while, but I wonder how much it has improved
in ten years.

It sounds like you're thinking of a code review tool + CI/CD pipeline, right,
@inglor? E.g.:

1\. Suggest a change to the UI or backend in the code review tool
2\. Run CI/CD and deploy to a test environment
3\. Refer to this new environment in the code review tool

Am I on the right track?

~~~
inglor
You are on the right track - I will check collaborator out :]

------
amiga_500
Removing out-of-date and incorrect advice on C++ from the internet.

~~~
fires10
As someone trying to learn C++, this would be great.

~~~
dkarl
If you get a job at a company that uses C++, you will probably need some of
that outdated information.

------
alangibson
On the meta level, I want a tool that shows me the wasted time and resources
in my business' development pipeline, all the way from idea to delivery. There
are endless numbers of tools that will spit out metrics that no one knows how
to interpret, but I want a tool that gives me actionable advice on specific
things my teams can do to increase their throughput and reduce lead times and
work in progress.

------
Const-me
I would create a cross-platform GPU-targeted GUI framework for .NET Core.
Something like Avalonia, only without Skia: directly on top of D3D11, GL,
GLES, or Metal. Modern GPUs are awesome and can directly render very
complicated vector graphics, yet outside Windows they aren't used for GUI
work despite the need: slow mobile CPUs plus very high-res displays are both common.

------
Ididntdothis
Training could be improved. We somehow seem to repeatedly solve the same
problems. It would be nice if programmers were more aware of the things that
were done in the past.

------
contingencies
Requirements specification. It trumps every single concern raised so far. If
you're seriously considering R&D in this space, count me in.

------
hartator
Only 2 things: Naming things, invalidating caches, and off-by-one errors.

------
WalterBright
Memory safety. And I am addressing it by adding an Ownership/Borrowing system
to D.

------
loudouncodes
Training on a common ‘software engineering body of knowledge’. The state of
the art is so different across organizations it’s like having medical
tricorders at one place and leeches at another.

------
Too
The silver-bullet programming language + environment. Fast with zero-cost
abstractions like C++, safe like Rust, productive like Python, async like Go,
runs everywhere like JS, live upgrades like Erlang, a development environment
like C#, designed for remote debugging, near-instant recompilation and
testing, and a universal dependency manager with sandboxing of shared
libraries and SCM integration.

There is still room for Domain Specific languages like sql, html and such but
for imperative programming there are way too many options all filling almost
the same need with just slight variation. Even with the rise of all types of
VMs and cross compilers we are still porting mountains of code to another
language just because the execution environment was slightly different. The
gap between embedded and web is also too big: for embedded you are stuck with
prehistoric C++ with dangerous syntax, hour-long build times, and days of
figuring out how to correctly compile and link that shared library, while on
the web your only options are dynamic languages where safety, predictability,
and performance are all a joke. I also realize safety and performance vs.
productivity often contradict each other, but not as much as often argued;
there has to be a better middle ground than what we are stuck with today.

In isolation, this should be an achievable task, especially if dropping any
legacy compatibility. What might be difficult is that people want legacy
compatibility and proven-in-use tools, so the adoption rate will turn this
into "there are now 15 competing standards" and a "Peace on Earth"-type problem.

------
fillskills
Making software security & data privacy really easy for all stakeholders. It's
something I dread doing as an engineer, but I know it's one of the most
important 'features' customers expect to be baked into what I write.

------
new_guy
Privacy at the language level. A lot of inroads have been made[0], but there's
still a long way to go before it's just ubiquitous.

[0] [https://twitter.com/jeanqasaur](https://twitter.com/jeanqasaur)

------
alangibson
My kingdom for a technology that lets me write an app once and run it
acceptably on Android, iOS and in the browser. This probably beats everything
else listed in this thread in terms of developer hours saved.

~~~
ampdepolymerase
In theory Flutter allows you to do that. (With the caveat that you have to
write a ton of custom code due to the lack of libraries).

~~~
alangibson
Like cold fusion, it's one of those things that's perpetually around the
corner but the big breakthrough never quite happens.

------
gok
Efficiency.

A large and increasing share of generated energy goes to computation, yet
only a tiny proportion of software is written with energy efficiency in mind.
Most programmers don't even have the tools to answer how much power their
programs consume.

For a start, no compute benchmark that measures wall-clock time should ever be
taken seriously if it doesn't also include energy consumption.

~~~
mlinksva
What people/projects/labs/companies are doing the best work on efficiency? I'm
aware of [https://greenlab.di.uminho.pt/](https://greenlab.di.uminho.pt/)

------
anonnybonny
IMHO, making WebAssembly fully and tightly integrated with browsers like JS is
today will be the next big leap forward. In such a way that you won't need to
use JS at all - direct access to DOM and other Web APIs, choice to use
whatever programming language you want

Java applets came 20 years too early - potentially had the power to do
everything we do with the web today but 10x faster and more cleanly.

The web remains one place where you lack the freedom to easily use whatever
language you like - WASM will end the era of JS if they do it right.

The JS ecosystem is extremely wild and turbulent - even something as simple as
"I need this project to be built exactly as it was in August 2017" is almost
impossible with the npm world.

Meanwhile native apps compiled in 1985 still run, and can even build today
with minimal fuss.

Let's be honest: how many of you use JS because there was no other option? It's
not a terrible language - in fact I like JS/ES7 more than Python - but it's
still one of the pillars of chaos in the world of programming.

~~~
amelius
> IMHO, making WebAssembly fully and tightly integrated with browsers like JS
> is today will be the next big leap forward. In such a way that you won't
> need to use JS at all

It exists, and is called VirtualBox. It just doesn't run inside the browser
though, but you can run a browser in it.

------
fsflover
Rewriting all popular proprietary software with GPL license, such that no more
vendor lock-ins prevent people from using old hardware.

------
AndrewKemendo
State observability for the entire stack.

~~~
StreamBright
+1

I am working on this problem. Let me know if you are interested; I would like
to hear your opinion. My Keybase is in my profile, or I can email you.

~~~
AndrewKemendo
Thanks I’ll take a look

------
veeralpatel979
Measuring how productive software engineers are.

If you could solve this problem, you could convince management why engineers
need offices, for one.

~~~
AnimalMuppet
And which language to use. Even better, which language to use when.

------
chrisweekly
CSS bloat belongs somewhere on the list. Thankfully, actual, pragmatic and
standards-based solutions (proper design systems, treatment of layout as a
first-class concern, and component-scoped CSS-in-JS) are finally emerging.

------
charlzbryan
There are so many times when I think to myself, "We shouldn't be doing this in
2020" - just mundane stuff like checking for null values. You'd expect we'd
have this covered by now.

------
thrower123
I would somehow fix software engineering training so that people understood
the vast corpus of techniques and approaches that have been tried, what the
pros and cons of them were, and what the reactions to those issues were. Less
on the theoretical algorithmic level, but on the practical and implementation
level.

We should not be rewriting and adopting a slightly better but incompatible
version of make every five years, or waffling between SQL and NoSQL, or
churning back and forth between slightly different versions of MVC paradigms.
Or going from tables to div-tables to flow layouts to css-grid.

------
Supermancho
Some of these responses are high level stuff that ignores the day to day
hurdles of most projects (validation of complex systems, wtf), some are right
on the money.

I can think of 2 ripe opportunities, that I reference often.

1\. Comments integrated with code as associated records that can overlap,
which provide context, not merely inlined comments. The state of
understanding code is horrendously inefficient by design.

2\. Put a more type-safe language with concurrency primitives in the browser.
Promises are a hacky improvement, and Python is too rigid in ideology to make
it eventually compatible.

~~~
RoyalPain
A funny story with regards to the first point :

We have some legacy code at my work which is basically in "do not touch or
it'll break" mode. Last year a senior developer who used to maintain it
retired. Before he left, he copied the code into a PDF and annotated the PDF
with his thoughts and advice. He didn't want to risk breaking the code on his
last week by adding the comments directly to the code.

------
Areading314
Package management for C/C++. Every project I've worked on has required
significant setup effort. Also, downloading OSS and compiling often fails due
to dependencies.

------
axilmar
These are the top problems in my opinion:

1) computers handle bits and bytes, not information. If computers can be made
to create, search, update and delete pieces of information, instead of bits
and bytes, 90% of code would go away and life would be much easier for all of
us.

2) programming is done wrongly and poorly: we write a program to do a specific
job, without any proofs, with serial control flow, we compile it, we setup an
environment for it, etc. Instead, we should write hierarchies of programs,
each level of hierarchy should have its own proofs (i.e. the specifications
should be part of our programs), control flow should be event based, programs
should be running as soon as we write them in a live test environment etc.

In other words, forget files, processes, handles, databases, source files,
bits, bytes, the command line, UIs etc. All these provide some level of
abstraction that doesn't really scale to what we actually need. We need
another level of abstraction: the piece of information.

Which should eventually include a piece of code that communicates with the
outside world via events, and that code would be composed of other pieces of
information, would be fully creatable, searchable, updatable and deletable
just like any other sort of piece of information.

And UIs should be creatable, searchable, updatable and deletable pieces of
information as well.

And a global communication language would replace all command line interfaces,
UIs, and programming languages: we shall talk to our UIs with this language,
and the UIs shall talk to us by using that language as well, using graphical
representation when needed.

~~~
tcbasche
This all sounds good, but how do you represent "information" on a physical
level without electrical signals?

------
l0b0
Replace POSIX shell with something reasonable. Anyone trying to bring in a
feature or name "because history" is temporarily banned from the working
group. Other than that I've no idea how it should be done or what will emerge,
but I would hope for it to include words like "exception handling", "test
framework", "API", "concurrency" and "asynchronous stream processing" non-
ironically.

~~~
Const-me
[https://github.com/powershell/powershell](https://github.com/powershell/powershell)

~~~
l0b0
PowerShell was on my mind when writing it, because it's the closest thing I've
seen to this. But the .NET framework is also full of clunk, and I'm sure we
could do even better on UX.

~~~
Const-me
Interesting.

I never liked UX of the PS. I just happen to know it’s there, and to an extent
how to use it. IMO, the older .cmd and especially .vbs
([https://en.wikipedia.org/wiki/Windows_Script_Host](https://en.wikipedia.org/wiki/Windows_Script_Host))
are better for most use cases of PS.

I like .NET, and I think the language, runtime, and standard library are all
exceptionally good. Not sure what you mean in that comment.

------
oli5679
Open source implementation of Microsoft Excel, which lets you write macros
using Python and has the plotting, scheduling and report sharing functionality
of Tableau.

~~~
repsilat
> _which lets you write macros using Python_

Someone upthread said "nocode", and I think that's more on the money. Most
people will never be "programmers" as we understand the term, but they could
still benefit from a narrowing of the gap between what Excel can do and what
(say) Python can.

An old hobby project of mine[1] had a go at this by letting spreadsheet users
write functions in the spreadsheet itself, in exactly the same way they
already used the spreadsheet to calculate things. Basically, if you just used
the spreadsheet as a calculator, it would automatically define reusable
functions out of those calculations. I think something like that would push
Excel's power and usability forward for your average person, because it
wouldn't make them reliant on programmers to implement the things they want.

1: [http://6gu.nz](http://6gu.nz)

~~~
mch82
I was working with people in finance & they found it difficult to follow the
flow through VBA. I used Blockly to model the code & they were instantly able
to provide me with feedback. It was pretty cool!

Not sure if Blockly is “no code”, but it can be more intuitive.

------
tyingq
Some new software that matches what _"Ruby on Rails"_ used to be would be
welcome.

There are too many un-opinionated choices today, so people waste too much time
on choices for the various pieces. The whole ecosystem is very fragmented.

A relatively sane, top-to-bottom framework that isn't the fastest, or "best",
but "good enough"...would be welcome. Maybe Golang or Rust or Node will
progress to having their own "Rails"?

~~~
petters
I never used RoR, but I thought Django is similar?

~~~
tyingq
Yes, it's similar, but both hail from 2003. I would imagine some new clean
sheet, not history encumbered, top-to-bottom framework would have value.

------
mhh__
Ignoring hard problems like memory safety and parallelization (let's say), I
think the biggest soft problem that I experience is optimizing the amount of
time spent doing _work_ relative to setting up boilerplate - mainly during web
programming, i.e. it took me literally a few days to work out how to use code
from _and_ deploy node_modules without cheating.

Not trivial but maybe automated by some meta-meta-build tool?

~~~
mistahenry
Most, if not all, tooling comes with a learning curve. But once you pay it,
that initial ramp-up time is not required on the next project.

I could start a new project with Spring Boot and Ember.js, provision a Debian
box with Postgres and nginx, and deploy all of it within an hour because I
know these tools. But I too paid the learning price my first go round.

------
purpleidea
Getting rid of proprietary code, particularly in anything that is safety,
health, or security related. Doubly so for code in chips and any firmware.

------
lubujackson
Next level web scraping that takes any web app and fully groks its structure,
devolving it to a JSON-like document that encapsulates all form elements,
design elements and extrapolates some basic validation rules and the
underlying data structure. Load that into a universal editor, work on the
metacode. Since we are dreaming, compile that metacode back to any number of
frontend/backend combo templates. The result would always be incomplete, but
it would give you an amazing base for knocking out projects.

Web design seems to be unifying around progressive web design principles and
too many devs spend way too much time recreating the same variations of CRUD
apps.

If any new web feature could be distilled to a metacode base and then
restructured from there we could maybe find a way to escape this nightmarish
hydra of web technologies.

~~~
Glench
You might be interested in this project:

[https://twitter.com/geoffreylitt/status/1224094967922073600?...](https://twitter.com/geoffreylitt/status/1224094967922073600?s=21)

There’s a paper about it coming soon that gets more into some of the things
you mentioned.

------
closeparen
A distributed database engine with the same semantics as single-node Postgres
that's easy to operate reliably.

Cannot tell you how much time I spend fighting solved-since-1980 RDBMS
problems at the application layer. But we really are too big for single-master
(pushed it to its breaking point and a little beyond).

------
gameswithgo
1\. A cross platform gui library, that could replace electron as the go to
choice when making cross platform apps with a GUI.

2\. Tools to help making parallel programming easier, without massive impacts
on efficiency. Future CPUs are offering more performance primarily through
more cores and wider vector instructions. Currently to use these efficiently
is often very hard.

3\. A replacement for C/C++. Rust is promising, you could contribute there, or
to Zig, or come up with something better.

4\. Large-scale, high-quality studies into the efficacy of various programming
ideas, languages, methodologies, etc. (Does dynamic typing increase or
decrease productivity, or in which domains does it do one or the other? Does
functional programming reduce error rates, by how much, and at what
performance cost? etc.)

~~~
zelly
1\. Flutter, Qt

2\. OpenMP is easy to use, just put #pragma statements

3\. Rust is the leader here. It's doing very well.

4\. I have read a few studies like this. The takeaway was that it took about
the same time to write a program in every language studied (it included
Haskell and FP langs). Language/paradigm also did not have much of an effect
on correctness and bugs.

------
Taikonerd
I'd like to see static analysis and formal verification become more integrated
with our development practices, especially w.r.t. security.

In a perfect world, I'd like to run an analyzer on my CRUD app's source code,
and have it list the 37 vulnerabilities I overlooked.

------
mlinksva
Some conventional-wisdom candidates for the most important piece of technical
debt across software engineering: complexity, memory unsafety, excess
authority, underutilized formal methods.

Probably not a candidate for "most" but maybe worth mentioning: pgp

There's also lots of non-technical debt; indeed, none of the above are purely
technical: at the least, getting solutions adopted likely requires a lot more
than just serious hacking.

Surely there are existing projects, or at least communities, that have formed
around taking on what participants thought/think of as the most important
piece of technical debt, or at least one they were/are equipped to take on;
perhaps [non-]reproducible builds, or shotgun parsers (langsec)?

------
aalhour
Declarative Programming.

Specify a problem statement and let the computer figure out a way to implement
it, but take this idea and apply it to SaaS applications. Here's what the
domain model looks like; go figure out a database-backed API application
implementation for me.
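As a toy sketch of that direction (the helper name, endpoint shapes, and derivation rules are all invented for illustration, not any real framework): declare only the domain model, and derive the rest mechanically.

```python
# Toy illustration: declare the domain model declaratively, and let the
# machinery derive a CRUD API description from it. Everything here
# (derive_api, the endpoint shapes) is invented for the sketch.

from dataclasses import dataclass, fields

@dataclass
class Book:
    title: str
    author: str
    year: int

def derive_api(model):
    """Derive a table layout and REST-ish endpoints from a domain model."""
    name = model.__name__.lower()
    cols = {f.name: f.type.__name__ for f in fields(model)}
    return {
        "table": (name + "s", cols),
        "endpoints": [
            f"GET /{name}s", f"POST /{name}s",
            f"GET /{name}s/{{id}}", f"DELETE /{name}s/{{id}}",
        ],
    }

api = derive_api(Book)
print(api["table"])      # ('books', {'title': 'str', 'author': 'str', 'year': 'int'})
print(api["endpoints"])
```

The hard part the comment is asking for is everything this sketch skips: relationships, validation, migrations, and all the cases where the derivation needs to be overridden.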

------
sebastianconcpt
Well, if we have no restrictions on time and resources, then restricting
ourselves to preserving the current status quo of engineering is the wrong
problem to tackle.

I'd instead follow Alan Kay's idea of anticipating 10 years into the future
with some new hardware that allows a new computing platform, and then see how
I can capitalize on it in some kind of pure way (removing as many
technicalities as possible, like Self, LISP or Smalltalk did).

For inspiration on what form it could take, I'd look at all the sci-fi I can,
but pay special attention to Bret Victor's Humane Representation of Thought:
[https://vimeo.com/115154289](https://vimeo.com/115154289)

------
l0b0
Cross-platform, low latency, FOSS graphics and audio stacks. Basically
something combining the best desktop frameworks of the last 30 years, rather
than the last six months. Let's have some actual UX rather than 10 different
ways of toggling a boolean.

------
mlthoughts2018
Sufficient research and PR efforts to finally refute open-plan office designs
and render it intractable for firms to use open hypocrisy to disingenuously
justify them without consequence as they do today.

Because this affects cognitive health and productivity of just about every
tech employee, it truly is the biggest problem and is no overstatement at all
that we would cure more diseases, feed more hungry people, develop better
policies for people in need, or even just satisfy consumer demands more
profitably, if the scourge of open-plan offices finally goes away.

Very few other problems faced by engineering workers are so vast and systemic
as open-plan offices, with such far reaching value if it is solved.

------
itronitron
fixing the shit-show of modern web application development

------
kaushikt
The single top priority for software engineering: testing to improve quality.
If a lot of organisations are still not writing tests for 100% coverage to
reduce ops load and increase reliability and quality, then something is
broken. It's mostly got to do with engineering culture itself. Google famously
ran the Testing on the Toilet experiment, which paid off.

[https://testing.googleblog.com/2007/01/introducing-
testing-o...](https://testing.googleblog.com/2007/01/introducing-testing-on-
toilet.html)

------
xkriva11
The problem of availability vs. accessibility. In our systems, some
functionality is often available but not accessible for use in another
context. This causes a huge loss of money and wastes human work. Mobile
platforms are significantly affected by it. You may have, for example, a
perfect application for spell checking, but you cannot use it directly while
doing your spreadsheet. Generally, the user interface needs to be direct and
modeless. Applications are, of course, giant modes.

------
agentultra
I’d work on an industrial grade proof assistant and model checker suite. IDE,
cloud integration, and visualization tools to push the practice of formal
methods in industry.

~~~
Taikonerd
Yes! Formal verification has been in the ivory tower for years and years, but
hopefully it can break out in the coming decade. It's a field where the theory
is very advanced, but the tooling for Joe Average Programmer is very lacking.

------
pmlnr
Accessible, cheap, documented, standardized long-term (50+ years) archival
physical media and their software ecosystems, made available to everyone.

There's nothing out there for this.

------
perlgeek
There are things for which we don't seem to have found good ways to enable
code reuse.

I'm talking about things like sign-up workflows, password recovery, two-factor
authentication, role-based access control, search, approvals by manager,
integration with BI tools etc.

There are application development frameworks that solve some of these problems
for you, but they tend to be so heavy-weight and/or impose so many
restrictions on you that they hardly seem worth using.

------
Liron
Everyone has a complicated "data layer" with multiple boxes taped together. It
should just be one database.

The root of the problem is that data denormalization is broken [0]. The fix is
doable in theory, yet not done yet, and not talked about enough.

[0] [https://medium.com/@lironshapira/data-denormalization-is-
bro...](https://medium.com/@lironshapira/data-denormalization-is-
broken-7b697352f405)

------
austincheney
I would create a vendor agnostic nonprofit developer certification program.
Something similar to ISC2.

Right now there is no differentiation between actual engineers who write
original code and button-pressers who either live in configuration hell or
mindlessly need design patterns to tell them what to do.

Programming is the act of writing instructions, and yet so many developers
can neither communicate in writing nor plan a series of instructions.

------
mkgolden
Linux on mobile. With the release of the PinePhone and Librem we have the
opportunity to remove ourselves from the Google and Apple mobile walled
gardens.

------
tiborsaas
Artificial general intelligence

~~~
StreamBright
First, we need immortality to make it until AGI becomes reality in ~100 - 1000
years from now.

------
emeerson
Won't claim this is the absolute priority, but recently been thinking about
the generalizable problem of: "domain disentanglement."

In other words: solve the Data Model Coupling problem programmatically.
Applied to an RDBMS, one could imagine a graph with weighted edges
representing coupling between database tables and dynamically reverse-
engineering joins into DB-isolated API calls.
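A crude sketch of that first step, with made-up tables, join counts, and threshold (all purely illustrative):

```python
# Made-up illustration of the first step: count observed joins to weight a
# coupling graph, then treat weak edges as candidates for a DB-isolated
# API boundary. Table names, counts, and the threshold are invented.

from collections import Counter

observed_joins = [
    ("users", "orders"), ("users", "orders"), ("orders", "items"),
    ("users", "audit_log"),
]

# Undirected edge weights: how often each pair of tables is joined.
coupling = Counter(frozenset(join) for join in observed_joins)

# Edges below the threshold are candidates for an API boundary.
boundaries = [set(edge) for edge, weight in coupling.items() if weight < 2]
print(sorted(sorted(b) for b in boundaries))
# [['audit_log', 'users'], ['items', 'orders']]
```

In practice the weights would come from query logs or foreign-key analysis, and the interesting work is deciding which weak edges can actually survive being turned into API calls.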

------
convolvatron
effective batteries included declarative programming

~~~
slifin
Fulcro RAD is promising in this regard. Going more declarative seems
inevitable, though there are challenges in designing declarative things.

I think collectively we need more thought on how to design declarative
systems that are tangible, inspectable, and debuggable in the same way that
following procedural code is.

~~~
slifin
[https://github.com/fulcrologic/fulcro-
rad](https://github.com/fulcrologic/fulcro-rad)

------
Pfhreak
Mine is tech related, but make it easier to form and operate tech
cooperatives. In general, I've found engineers have the knowledge of what the
best thing for their product is, and are often pulled away for various
reasons. Making them a part of the ownership structure for their work could
pay dividends in quality and usability of software.

------
salt-licker
A programming language with Python-like syntax and a Hindley-Milner type
system plus typeclasses. Basically, we bring Haskell's type system to an
imperative programming language. An easy-to-learn scripting language with a
strong, sound type system and reliable type inference would reduce bugs in
codebases of every size across the industry.

~~~
bordercases
Disagree about scripting.

Compile times are fast if you are willing to trade off performance. You can
decide whether you want a slow-to-compile, fast program; a fast-to-compile,
slow program; or an interactive compiler.

The example I can give is Standard ML. You have SML/NJ and MLton as two
possible compilers, among others. SML/NJ has an interactive mode; MLton
compiles for longer but produces faster programs. (And it's also
Hindley-Milner.)

Just going the "it should be interpretable" route, without any option for
whole-program optimization at compile time, results in unnecessary
performance bloat, which we can by all means avoid now that development
hardware has become faster.

~~~
salt-licker
Yes, I agree the language should be compiled for performance, types enable
compiler optimizations and it would be silly to abandon that benefit. By
“scripting language” I mean the syntax will be similar to existing interpreted
languages and it will be easy to write small scripts without specifying types
since they are inferred.
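To make the goal concrete, here is roughly what such a language might feel like, mocked up in plain Python (the "inferred" types in the comments are what a Hindley-Milner checker with typeclasses would deduce with no annotations):

```python
# Mock-up of the proposed language: Python-ish syntax, but every type is
# inferred. The types in comments are what an HM checker would deduce.

def total(prices):          # inferred: Num a => [a] -> a
    return sum(prices)

def shout(msg):             # inferred: str -> str
    return msg.upper() + "!"

print(total([1.5, 2.5]))    # 4.0
print(shout("ship it"))     # SHIP IT!

# shout(total([1.0]))       # an HM checker would reject this line at
                            # compile time: a number is not a str
```

The point of the sketch: the scripts look exactly like the dynamic language people already write, but the commented-out call would never reach production.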

------
soulchild37
Facebook has literally endless money and talent, and they still came up with
React Native, which makes apps slow and bloated.

------
317070
The singularity. Make software which is capable of making even better software
without human intervention.

~~~
AnimalMuppet
The question said "that could be actually solved if you could pay an
rationally large team of serious hackers a rationally large amount of money
for a rationally long period of time." I have a strong suspicion that your
answer doesn't fit.

------
l0b0
Next frame responses for any action which could reasonably be computed within
a frame. Basically caching at every level of the system and aggressively
precomputing and preloading anything the user could reasonably do from the
current state. Let's use those cycles!
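A minimal sketch of the idea (the action model and the render function are invented stand-ins): during idle time, compute every state reachable from the current one, so the next interaction is a cache hit.

```python
# Toy illustration of next-frame responses: while the UI is idle, render
# every state the user could reach next, so the next action is a cache
# hit. render and next_states are stand-ins for real, expensive work.

from functools import lru_cache

@lru_cache(maxsize=None)
def render(state):
    return f"view:{state}"  # stand-in for an expensive computation

def next_states(state):
    return [state + 1, state - 1]  # the actions a user could take next

def idle_precompute(state):
    for s in next_states(state):
        render(s)  # warm the cache during spare cycles

idle_precompute(10)
print(render(11))  # already cached: answered on the "next frame"
```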

------
mch82
Decentralizing networks so it becomes impossible to limit access to
information & communication.

------
tomc1985
There's too many damn libraries

------
Aqueous
Writing tests sometimes requires thousands of lines of boilerplate: setting
up mocks, crafting assertions, isolating the test environment and data, etc.
Testing is important, but engineers hate doing all the legwork to set tests
up. If someone could write a testing engine to ‘lock in’ the implementation
of a function or class and then automatically alert the developer when the
implementation deviates from the locked version’s spec (determined by
automatically profiling the function’s inputs and outputs on a continuous
basis), that could improve testing and vastly reduce the engineer-hours spent
on writing tests.
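A back-of-the-envelope sketch of that engine (the function names are invented; a real version would hook a profiler and persist the snapshots): record observed input/output pairs, then replay them and flag any deviation.

```python
# Rough sketch of the "lock in" engine: record a function's observed
# input/output pairs, then replay them and flag deviations. record and
# verify are invented names; a real tool would profile continuously.

def record(fn, calls):
    """Profile fn on sample inputs and store the observed behavior."""
    return {"calls": [{"args": list(args), "result": fn(*args)}
                      for args in calls]}

def verify(fn, snapshot):
    """Re-run the recorded calls; return those whose output deviates."""
    deviations = []
    for call in snapshot["calls"]:
        got = fn(*call["args"])
        if got != call["result"]:
            deviations.append((call["args"], call["result"], got))
    return deviations

# "Lock in" the original implementation...
def price(qty, unit): return qty * unit
locked = record(price, [(2, 10), (3, 5)])

# ...then a later change silently deviates from the locked spec:
def price(qty, unit): return qty * unit * 1.1
print(len(verify(price, locked)))  # 2 deviations flagged
```

This is essentially characterization ("golden master") testing; the open problem in the comment is generating and maintaining the recordings automatically instead of by hand.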

------
abrax3141
Fix Lisp's shortcomings (primarily that its package universe is lagging) so we
don’t have to put up with terrible languages like python, which people only
bother with because of its well-maintained package universe.

------
di4na
Release engineering and tools for "operators" (i.e. dev, ops, and whoever
touches your living system) that collaborate with the human. There is...
basically nothing that exists to solve that.

------
oneplane
Quality control in an automated fashion. While it does extend into other
areas, you can scope it to software engineering by having some sort of static
analyser, plus a sandbox with a dynamic analyser and fuzzer-in-a-box, that
you can call at various stages. While it doesn't cover issues found later,
like the ones in the x86 ISA, it could prevent a lot of the
corner-cutting-by-management and fix-it-in-prod mentality.

Once you get to a point where you can demonstrate that 'doing it faster' is
not an option, everyone gets better for it.

Right now, all you get is bypassable CI pipelines, crappy in-IDE tools that
sometimes work, and very costly static analysers that usually don't work and
are more designed to please the clipboard warriors.

------
abrax3141
Figure out how to make Linux code bases cross-language callable, like .NET,
so you could write packages in any language and call them from any other
language.

------
beamatronic
What makes software hard is the messiness of the human real world: time zones,
keyboard layouts, languages, Unicode, GDPR, SOX compliance, etc, etc.

------
sys_64738
Make software development a true engineering discipline by making software
developers and companies directly liable for bugs and security holes.

~~~
emiliobumachar
Sounds like a legal problem or at least a culture problem, not a technical
problem.

------
hbt
interoperability: "the ability of computer systems or software to exchange and
make use of information."

Emphasis on exchanging information between 2 unknown programs.

So much programming plumbing (parsing text etc.) to do just that.

So many hours wasted chasing API documentation to figure out how to call an
incantation.

So many software features hidden away and inaccessible unless used through a
UI maze.

So much code sitting there unused and undiscoverable.

------
carapace
In a word: _complexity_.

(That and the 800lb gorilla in the room: programmers are _fashion-driven_ to
the point of absurdity.)

Except, thinking about it, I don't think it fits your description because it
can't be solved by the methods you describe. We are "complexity junkies". Most
of us haven't hit rock bottom, don't see a problem, or are _well-paid_ to feed
our habit.

We have tools and concepts that would do the trick, but we ignore them.

Consider Elm lang. Takes all the complexity out of writing web apps, has a
history of zero bugs in the code it generates, doesn't get traction
because...?

\- - - -

Dr. Margaret Hamilton (of Apollo 11 fame, who coined the phrase "software
engineering") developed a system of software construction she called "Higher-
Order Software" that eliminates the sources of most programming bugs. Sadly,
it was critically panned and has languished in obscurity for decades. See
"System Design from Provably Correct Constructs" for more info.

\- - - -

 _Graydon Hoare_ gave a talk on the history of compilers[1] and he doesn't
mention Prolog once. Is it possible he doesn't know about the research into
logic programming and compilers?

E.g. "Parsing and Compiling Using Prolog" Jacques Cohen and Tim Hickey ACM
Transactions on Programming Languages and Systems 9(2):125-163 · April 1987
DOI: 10.1145/22719.22946 · Source: DBLP
[https://www.researchgate.net/publication/220404296_Parsing_a...](https://www.researchgate.net/publication/220404296_Parsing_and_Compiling_Using_Prolog)

    
    
        1. Introduction
        2. Parsing
            2.1 Bottom-Up
            2.2 Top-Down
            2.3 Recursive Descent
        3. Syntax-Directed Translation
        4. M-Grammars and DCGs
        5. Grammar Properties
        6. Lexical Scanners And Parser Generation
        7. Code Generation
            7.1 Generating Code from Polish
            7.2 Generating Code from Trees
            7.3 A Machine-Independent Algorithm for Code Generation
            7.4 Code Generation from a Labelled Tree
        8. Optimizations
            8.1 Compile-Time Evaluation
            8.2 Peephole Optimization
        9. Using Proposed Extension
        10. Final Remarks
    
    

That's from _1987_.

Long story short, if you want to write a compiler it's easier and faster to
learn Prolog and write it in that than to learn to write a compiler in
whatever lower-level language you might already know.

[1] [https://thenewstack.io/rust-creator-graydon-hoare-
recounts-t...](https://thenewstack.io/rust-creator-graydon-hoare-recounts-the-
history-of-compilers/)

~~~
z3t4
> programmers are fashion-driven to the point of absurdity.

On a certain level of abstraction, there is really no right or wrong, just
different. And it becomes a matter of taste.

That and the limitations of the human brain. We cannot remember a billion
assembly instructions, so we have to abstract, and create layers of
abstraction, in order to reach a human level of tangibility. The only problem
is that the more layers we pile on, the more complex the stack becomes and
the more difficult it is to comprehend, so we get an exponential explosion of
complexity. And the only thing limiting the complexity is hardware
performance. So if computers keep getting faster, we will be doomed.

------
mikst
Rewrite the Internet stack - it's a bunch of layers on top of workarounds on
top of layers of workarounds.

~~~
ScottFree
Do you mean the internet or the web? Either way, how would you rewrite it?

~~~
mikst
I mean the Internet. MAC addresses are deprecated, IPv4 is long overdue for
replacement, the shortcomings of TCP are well documented, security/privacy is
on the sidelines, etc., etc.

As to how: just use what we didn't know the first time. The right question is
how to get everyone to switch.

------
cloudking
Writing and implementing test automation

------
rurban
1\. The typical three safeties: memory, type, concurrency.

Already solved, but not used.

2\. False advertising and overhype.

Related to 1)

------
techslave
IoT security: some kind of framework to allow IoT devices to be secure by
default. Android Things tried to address this, but it had too much ecosystem
and a Java requirement. It needs to be simpler and lower level.

------
RNeff
Develop a methodology for writing correct software. No bugs, guaranteed.

~~~
js4ever
Would you agree to spend 5 to 10 times more time and budget to write software
in exchange for the no-bugs guarantee? I have thought a lot about this, and
my conclusion is that this requirement is foolish when you weigh it against
the cost and time drawbacks.

~~~
Ididntdothis
It’s conceivable to have a language or markup where you can specify in more
detail what the code should do. This could then be checked automatically.
Things are already going in that direction with features like non-nullable
types, but I am pretty sure this could be taken much further.

~~~
acoma
I imagine an implementation of this is equivalent to writing code and tests
with the same level of detail as one might today. Language features put
constraints on the solution, but do not reduce the essential complexity of a
problem.

Could you elaborate more on what you mean by "could be taken much further"?

~~~
Ididntdothis
I think a language could constrain expected behavior even more, so the
compiler can catch problems without you having to write tests. With tests,
you have to describe the design constraints in two places: the code and the
tests. If we could reduce the number of tests and move more of that
information into the code, it would be a big win. Or a language that can
model distributed systems and their interactions. The problem is to still be
flexible enough, so it will be hard to find the right level. I am pretty sure
we will see a lot of development in the next decades. Systems are becoming
very complex. If you think an old COBOL codebase is hard to maintain, good
luck with a legacy microservice architecture in a few years. And it seems to
be getting worse.
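One concrete way to move design constraints into the code is executable contracts. This toy decorator (everything here is illustrative) checks them at runtime; a language with refinement types could discharge many of them at compile time:

```python
# Toy executable contracts: the precondition and postcondition live next
# to the code, so the design constraint is stated once instead of being
# re-derived in a separate test suite. Illustrative sketch only.

def contract(pre, post):
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: min(xs) <= r <= max(xs))
def average(xs):
    return sum(xs) / len(xs)

print(average([2, 4, 6]))  # 4.0
# average([]) would fail loudly: precondition violated
```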

~~~
acoma
Got it, thank you for that explanation. I completely agree that we can
eliminate all kinds of erroneous behaviors with better type systems.

> good luck with a legacy microservice architecture in a few years. And it
> seems to be getting worse.

It is getting worse. The complexity of systems and their dependencies is
growing faster than the discipline of seeking to reduce accidental
complexity.

------
m0llusk
complexity overload: identify and eliminate unnecessary bloat

------
zupreme
That the world never should have abandoned Flash. We could have secured its
communications and isolated its OS interaction, while still providing awesome
user experiences.

I truly miss many immersive Flash experiences on the web.

------
geofft
One practical technical problem that also has direct "peace on earth"
implications is getting software that reliably doesn't have exploits into the
hands of oppressed people (or, equivalently enough, of all people). If you're
a human rights lawyer working with the Uyghur people, you should be able to
use a secure end-to-end messaging app and not be hacked. We've done very well
with end-to-end messaging as a cryptographic formulation - now we just need to
get the vulnerabilities out of WhatsApp and the iOS kernel and similar, so you
can't just get NSO's Pegasus thrown at you when someone wants to dump your
clients in re-education camps.

In my opinion there are two big angles here. One is memory safety, which is
the primary cause of remotely exploitable vulnerabilities these days (see
[https://twitter.com/LazyFishBarrel](https://twitter.com/LazyFishBarrel) for
some stats). Any evidence-based approach that reliably reduces memory unsafety
bugs is productive - whether that's "rewrite it in Rust," "rewrite it in Go,"
"rewrite it in Python," "rewrite it in Java," "write really good C static
analysis tools," etc.

The other is getting software updates into people's hands for a reasonable
price. If you buy a cheap Android phone, you're buying an insecure Android
phone. There are few options for a cheap iOS phone, especially one that's
still receiving security updates.

See [https://googleprojectzero.blogspot.com/2019/11/bad-binder-
an...](https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-
wild-exploit.html) for a good analysis of a combination of these two problems,
specifically weaponized in the wild by NSO Group. One part is that it's a use-
after-free. The other part is that it was fixed in Linux, and it didn't make
it into Linux for two years.

If I had a large team of serious hackers at my disposal to solve problems for
the world, I would work with some of them to fix the highest-risk code written
in memory-unsafe languages and get the fixes upstream, and I would work with
the rest of them to fix the various process problems that make it hard for
Android vendors to upgrade to new versions of Linux and other components
continually.

(Note that both of these problems are really best solved by a team that can
_continue working on the problem indefinitely_ , not a strike team that
delivers a thing and then declares the job done.)

------
crmd
Cache invalidation

------
thulecitizen
Ceptr.org - protocol cooperativism

------
pbedat
Video meetings that just work.

------
Glench
I think making software human-understandable (and by extension modifiable) is
one of the biggest problems for software engineering.

As it stands, both beginners and experts have difficulties understanding
exactly what their own programs are doing as well as what programs written by
other people are doing. We encode complex algorithms in static, abstract text
descriptions that are hard for humans to understand and reason about. We have
to imagine the behavior in our heads from these illegible descriptions. The
behavior and internal workings of our programs are invisible.

Not to mention that when trying to modify others' programs there is often a
huge communication problem, trying to construct a mental model of what the
program does is a tedious and often impossible endeavor because of missing
context. Just think, how is it even possible to make "write-only code", code
that was understood when written but is now completely unintelligible. To me,
that should be impossible. Or think about how open source software is open in
that its code is available, but closed in terms of being easily understood —
there's the formidable cognitive challenge of understanding the program well
enough to be able to modify it to one's ends. For most people, this is a
significant and unreasonable effort.

What to do about all this? To me, the answer is redesign programming so that
it is primarily about communicating behavior to humans. A couple things I've
made toward that end:

      * Legible Mathematics, an essay about the UI design of understandable arithmetic: [http://glench.com/LegibleMathematics/](http://glench.com/LegibleMathematics/)

      * FuzzySet: interactive documentation of a JS library, which has helped fix real bugs: [http://glench.github.io/fuzzyset.js/ui/](http://glench.github.io/fuzzyset.js/ui/)

      * Flowsheets V2: a prototype programming environment where you see real data as you program instead of imagining it in your head: [https://www.youtube.com/watch?v=y1Ca5czOY7Q](https://www.youtube.com/watch?v=y1Ca5czOY7Q)

      * REPLugger: a live REPL + debugger designed for getting immediate feedback when working in large programs: [https://www.youtube.com/watch?v=F8p5bj01UWk](https://www.youtube.com/watch?v=F8p5bj01UWk)

      * Marilyn Maloney: an interactive explanation of a program designed so that even children could easily understand how it works: [http://glench.com/MarilynMaloney/](http://glench.com/MarilynMaloney/)

In general, I think it's a neat research direction to redesign many concrete
programs by hand in the most understandable way possible (using custom
graphics, interactivity, game design mechanics — all the best things we have
for helping someone understand things through media), and then use those
experiments to work backward toward programming languages/environments.

In the end, programming is essentially only limited by human understanding,
so that's the most significant engineering problem out there.

~~~
BjoernKW
You might want to have a look at Bret Victor's work:
[http://worrydream.com/](http://worrydream.com/)

~~~
Glench
I worked with Bret in his research group :)

------
mrfusion
Open source robotics

------
whateveracct
Better attitudes

------
donohoe
Naming things.

------
smt88
Most major problems now are people problems, not engineering problems.

One example: lots of software devs have no training and/or interest in
security, and employers have no way to vet that.

Another: there is no way for users to trust SaaS security. I have no idea
whether (as a random example) Atlassian has a great security culture or a
terrible one. We just trust people who seem trustworthy, usually due to their
polished marketing.

~~~
Areading314
Much of security is also a people problem, though. You can have all the best
encryption/firewalls/etc and still get hacked because your CEO uses their
first name as their AWS password.

~~~
geofft
The existence of passwords is a problem, though. Perhaps it was the best
approach in the '70s and '80s, but the world has changed:

\- Most people use their own computers with high-quality local hardware, not a
shared workstation, a public terminal, or a dial-up / serial line. So "store a
secret on the client" becomes viable. (One of the biggest quiet successes in
security, in my opinion, has been that everyone uses SSH as standard practice
instead of telnet, and most of them use SSH keys or similar instead of
passwords. It is significantly better to authenticate with unencrypted SSH
keys on a laptop you keep in your bag or house than with a password.)

\- Most people carry a phone around with them, and you can authenticate them
by whether they have the phone.

\- You can get a tiny USB device that does serious cryptography to
authenticate you (and authenticate the site you're connecting to) for about
$10. You can get two of them and put one next to wherever you keep your birth
certificate, in case you lose the first.

All of these are technical wins worth celebrating, and now we're at the point
where the people problem is not so much convincing the CEO to use a better
password as convincing systems to use one of the above methods instead of
using passwords at all.

