
The alarming state of secure coding neglect - jgrahamc
https://www.oreilly.com/ideas/the-alarming-state-of-secure-coding-neglect
======
achou
The survey polled 430 "mostly everyday programmers". Unfortunately, everyday
programmers mostly know very little about security.

Developers tend to think of security as being about avoiding coding
mistakes, and that's reflected in their idea that security is about pen
testing, code review, tools, etc. Any security professional will tell you
that these are valuable but only a small part of the big picture. Take a
look at Microsoft's SDL (Security Development Lifecycle) for a wider view
of what it takes to weave security into every aspect of software
development.[1]

Probably the single most valuable thing most development organizations could
do to improve the security of their applications is threat modeling[2][3]. It's
especially valuable in the early stages of application design, but it can be
applied at any time. Threat modeling can increase awareness of how an
application's security assumptions interact with its overall architecture.
Thinking through your application's threat model systematically is the first
step to prioritizing mitigations.
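
To make that concrete, here is a minimal sketch of what a first-pass,
STRIDE-style enumeration might look like; the components, data flows, and
trust boundaries below are hypothetical, and a real threat model goes much
deeper:

    # A first-pass, STRIDE-style enumeration. Components, data flows, and
    # trust boundaries below are made up for illustration.
    STRIDE = [
        "Spoofing", "Tampering", "Repudiation",
        "Information disclosure", "Denial of service",
        "Elevation of privilege",
    ]

    # (component, data it handles, crosses a trust boundary?)
    components = [
        ("login form",      "credentials",    True),
        ("session cache",   "session tokens", False),
        ("report exporter", "customer PII",   True),
    ]

    for name, data, crosses_boundary in components:
        for threat in STRIDE:
            # Components that cross a trust boundary get looked at first.
            priority = "high" if crosses_boundary else "review later"
            print(f"{name} ({data}): {threat} -> {priority}")

Even a toy pass like this forces the team to say out loud which components
sit on a trust boundary, which is where the prioritization conversation
starts.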

Unfortunately, this is voodoo to most developers even though it really should
be an intrinsic part of designing application architecture. I've heard people
say there's a mental block because the kind of thinking required for security
is almost the opposite of that required to design and construct systems. I
don't believe that though. I think it's mostly a matter of training and
historical accident that security is even a separate discipline. It shouldn't
be.

[1] [https://www.microsoft.com/en-us/sdl/](https://www.microsoft.com/en-us/sdl/)

[2] [https://msdn.microsoft.com/en-us/library/ff648644.aspx](https://msdn.microsoft.com/en-us/library/ff648644.aspx)

[3] [https://www.owasp.org/index.php/Application_Threat_Modeling](https://www.owasp.org/index.php/Application_Threat_Modeling)

~~~
ktRolster
_Unfortunately, everyday programmers mostly know very little about security._

If you really want security, it's something that every programmer should be
thinking about, at least in the back of their mind, on every line of code they
write.

~~~
petra
Maybe it should actually be the other way around? Isn't it possible to build
frameworks (using relatively popular/easy languages) for the most popular
application classes (CRUD web apps, IoT MCUs) that in many cases isolate the
developer from needing to think about security?

And if that's possible, and we already have a few such tools (like, say,
Scala Lift or ARM mbed) that somehow haven't yet become popular, why is
that?

~~~
lbearl
Many of them already are, but they aren't "sexy". I personally do a lot of
.NET, and MVC 5 has relatively good defaults if you just install and go.
ASP.NET Core is even better in some regards (CSRF tokens are completely
transparent now). I think a lot of the problem is that people want to use a
lot of new tech which hasn't had time to develop security as a convenience
feature, or they just flat out don't want to use a framework.
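
For anyone curious what "transparent" CSRF protection is doing under the
hood, here is a minimal Python sketch of the synchronizer-token pattern that
frameworks like ASP.NET Core wire up for you; the function names are
hypothetical:

    import hmac
    import secrets

    def issue_csrf_token(session: dict) -> str:
        """Generate a per-session token; the framework embeds it in each
        rendered form as a hidden field."""
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token

    def check_csrf_token(session: dict, submitted: str) -> bool:
        """Reject the POST unless the submitted token matches, using a
        constant-time comparison to avoid timing side channels."""
        expected = session.get("csrf_token", "")
        return hmac.compare_digest(expected.encode(), (submitted or "").encode())

The point of "good defaults" is exactly that nobody has to remember to call
the second function; the framework does it on every state-changing request.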

------
cmehdy
"security proponents will probably have to demonstrate improvements to the
bottom line: less maintenance, improved customer satisfaction, or other
measurable incentives to bring everyone on board"

In most fields there is little incentive to change things when the company
itself isn't much affected by a hack. Stocks go down for a week, then back
up. The general population doesn't seem to care enough to stop using the
services, and doesn't understand enough about privacy, the value of their
data, or the need for encryption. So why would a company care?

If being "ethical" and more devoted to privacy becomes a trend, perhaps there
will be a stronger drive to follow security experts' advice. Did Whatsapp get
a surge in users after enabling end-to-end encryption?

[https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-pr...](https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-prices)

~~~
VT_Drew
>In most fields there is little incentive to change things when the company
itself isn't too affected in case of hack

This is correct. As long as the risk isn't too high, companies will just
take the risk and accept a hack as the "cost of doing business". Much like
Goldman Sachs expects to get fined by governments, but doesn't care because
the money it makes far outweighs the fines imposed.

~~~
sitkack
New legislation and fines against businesses form a feedback loop: they keep
going up for repeat offenders until the behavior changes. Inverse
exponential backoff under collision with the regulatory body.

~~~
vikeri
Haha awesome, determine fines with a PID controller.
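
Tongue-in-cheek, but the loop is easy enough to sketch; a toy Python version
with entirely made-up gains:

    def pid_fine(breaches_per_year, integral, prev_error,
                 kp=1e6, ki=5e5, kd=2e5, setpoint=0):
        """Toy PID loop: the setpoint is zero breaches per year, and the
        control output is the fine. Gains are invented for the joke."""
        error = breaches_per_year - setpoint
        integral += error                # repeat offenders accumulate
        derivative = error - prev_error  # trending worse? crank it up
        fine = kp * error + ki * integral + kd * derivative
        return max(fine, 0), integral, error

The integral term is the serious part of the joke: it is exactly the "fines
keep going up against repeat offenders" behavior described above.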

------
iliketosleep
If your personal info gets stolen from a company (or government) you
entrusted it with, it ends up being your problem, despite any negligence on
their part.

Personally, I believe that if it's negligence, there should be compensation;
they lost something that belongs to you and is of value. But few people seem
to really care. The info of 1 billion Yahoo accounts was hacked, but who
cares? Until that changes, the problem will continue to exist.

~~~
CaptSpify
And this is why we see companies scooping up people's data and saving it with
no regard to the individuals. We need to move to companies seeing that data as
a liability, not an asset.

~~~
kudokatz
> We need to move to companies seeing that data as a liability, not an asset

This could lead to serious limitations on legitimate future progress,
including anything built using statistical methods that require large
training data sets.

~~~
claytonjy
That's certainly a concern given the way things are currently, but I'm hopeful
concepts like Google's federated learning [1] will become increasingly
popular, and a change to data-as-a-liability would help drive development and
adoption of such approaches.

[1] [https://research.googleblog.com/2017/04/federated-learning-c...](https://research.googleblog.com/2017/04/federated-learning-collaborative.html)

~~~
FridgeSeal
I really, really like this approach to data, and there are so many
advantages beyond the ones outlined there. If you could turn that collection
approach into a universal/OS-level API, you could have a setup where user
data stays on the device by default; to use it, the app or app server has to
request it explicitly (giving the user easy, powerful, and granular control
over who gets what data), and the data is further protected by the other
measures described in their paper.

Some level of public education is also required, I think, to make people
aware of the value/threat of just giving everything up every time someone
asks for it.

------
elihu
I think we need to do everything we can to make it so that the tools that
regular programmers use aren't dangerous and/or insecure by default;
otherwise, we're just playing security-vulnerability whack-a-mole. Better to
solve problems at the source than to try to train people out of using a tool
incorrectly. This is really hard because there are so many widely used tools
and abstractions that were designed to be as powerful as possible rather than
to be easy to formally verify for correctness. I feel like the whole software
industry is built on a wobbly foundation, but it's hard to part with tools we
know because they're useful and they work, even if they do break rather often.

A good start would be to stop using C and C++ for new projects, and generally
try to eradicate undefined behavior at all levels of software. There's a lot
of really great software written in C and C++, and it would be a huge
undertaking to replace, say, the Linux kernel with something written in Rust
or Swift or some language that hasn't been invented yet. I think the eventual
benefits may greatly outweigh the costs, but it's a lot easier to sit in a
local optimum where everything is comfortable and familiar than to set out on
a quest to, say, formally verify that no use-after-free errors or race
conditions are possible in any of the software running on a general-purpose
computer with ordinary applications.

~~~
lawnchair_larry
Yep, this is the only thing that can possibly work. The path of least
resistance has to be secure, and in order to do something insecure, you need
to know enough about what you're doing to jump through hoops to get there.
Right now, we are in the opposite situation. People have to jump through hoops
to do the secure thing.

You also have to meet them where they are, not get them to change their ways
to suit yours. Otherwise, you're adding resistance, and you'll fail.

~~~
elihu
Well said.

------
rst
Among the examples cited was the Sony Pictures breach, whose causes included
a negligent security officer keeping passwords in a plain-text file on his
desktop. It's not clear what better coding practices could have done to
improve that (or anything else at Sony Pictures, where the breached systems
were mostly running commodity software).

Ref: [http://www.telegraph.co.uk/technology/sony/11274727/Sony-sav...](http://www.telegraph.co.uk/technology/sony/11274727/Sony-saved-thousands-of-passwords-in-a-folder-named-Password.html)

------
tyingq
I know at least part of this problem is driven by the IT security field
itself. Try, for example, to find a pragmatic PCI auditor who can focus on
real issues. They exist, I'm sure, but the de facto process is to create a
huge report filled with minutiae... versus something rolled up and
actionable.

It's not good, but I can see why, after a few of these experiences,
proactive security gets dropped off the priority list.

~~~
raesene6
Unfortunately, security and compliance (like PCI) cover similar ground but
are very different in implementation.

PCI auditors work to a fixed standard and can be negatively affected if it's
found that they deviated from it, so there's a strong incentive for them to
be picky. Combine that with the fact that it's hard to write a standard that
reflects the reality of good security practice, and you end up with, well,
the current PCI process.

The problem you're describing isn't really (I'd say) one that came from the
IT security industry, though; it was the card-issuing companies who set the
PCI DSS standard and who mandated the compliance process. Auditors are just
carrying out those requirements.

~~~
tyingq
It was just one example. I've seen similar issues from IT security in other
situations, like recommending that every tier of an application in AWS have
its own VPC, with firewall appliances between them, manual approval chains
to open up ports in a dynamically scaled app, etc.

Basically, finding pragmatic security people who can balance "perfect" with
"real life" is hard.

~~~
raesene6
Indeed, and part of that will be the culture of the company.

I've seen quite a few companies where any breach of security is held to be
the "security team's fault", so the team has an incentive not to accept
risks (limited upside if they accept a risk, alongside a large potential
downside if a breach/incident happens as a result).

Getting past that really requires a culture where security is the
responsibility of everyone in the organization and there's no
finger-pointing in the event of a breach/incident.

~~~
ozim
Huh but that is the job of security auditor to assess risk level give report
with level of threat and then product owner job is to take responsibility for
implementing fixes based on that report. I do not understand way of working
where you have security team that dumps report with bs on developers heads and
say fix all now or we die.

------
raesene6
Interesting, although not particularly surprising, results there.

I'm afraid the InfoSec community has been unsuccessfully pursuing the idea
of "RoI" for security activities for a long time (I remember debating the
idea 10+ years ago).

Also, the idea that increasing breaches would drive good practices seems not
to have taken root much, probably due to breach fatigue and the fact that
most companies that are breached don't take any serious financial hit.

Realistically, the most likely way to improve this situation is for it to
feature more heavily in contracts and perhaps regulation.

Having a contractual requirement to carry out specific activities relating
to code quality/security can drive them, as there's a clear monetary cost to
not doing so.

~~~
arcbyte
If there isn't any financial hit, then there isn't any value in added
security.

~~~
raesene6
I said no _serious_ financial hit, not no hit. Serious in terms of "the stock
price went down significantly", there's still costs of breach clean up etc.

Also there's a big negative externality in that a lot of the costs are borne
by users of the app/system and not necessarily the developers, but due to a
lack of liability for software development and security breaches that isn't
taken into account by many companies

------
jdc0589
I promise this will never get better until fines get handed out left and
right for breaches of ANY personal information. Right now, no one really
gets penalized for breaches unless they involve regulated data (financial,
healthcare, etc.).

Why? Money, obviously.

1. Employing security engineers who know what they are doing is EXPENSIVE.

2. Third-party pentests are expensive.

3. If there isn't an open-source tool available, all of the software in the
security space is SUPER expensive.

No company, especially a small or medium-sized one, is going to spend that
kind of cash without a real motivator.

Even if you do EVERYTHING you should be doing, you will still have
vulnerabilities. It's a losing game.

~~~
kinghajj
In my dreams, some combination of closed hardware and/or software (perhaps
the latest Intel AMT vulnerability?) leads to the personal information of
all congressmen being leaked: financial, medical, residential, etc. They
respond with a "Secure Computing Act" that requires all US agencies, as well
as any companies they do business with, to use 100% open-source hardware and
software.

~~~
tensor
The more likely outcome would be to nonsensically ban open-source
implementations and instead grant a monopoly to a small list of
"governmentally approved security companies." These companies, in turn,
would be required to produce massive volumes of paper reports to "manage the
risk and prove that their software is secure."

------
partycoder
In healthcare you have HIPAA, and if you mess up patient records (e.g. by
leaking a single patient record) you can lose your license and be subject to
legal action.

If a pharmaceutical company releases a drug that causes negative side effects,
lawyers are happy to sue the company on your behalf for free.

But software engineering is a discipline where no license is required, and
now, thanks to informal educational institutions like coding camps, not even
a degree is required. You can ruin the lives of millions of people and still
hop around and get another job.

Companies maximize their margins by saving money on security (and other
non-functional requirements), and expose sensitive customer information to
significant risks with no accountability. A statement like "Sorry! We got
hacked; your SSN and credit card information is now being sold in bulk on an
.onion site!" will do. We as consumers should punish those incidents more
aggressively and demand reasonable care.

The product-driven, minimum-viable-product, lean-agile, full-stack,
get-it-done culture of spaghetti codebases without security needs to die
now. It's highly profitable and the preferred business model for many, yes.
Is it ethical? Hell no. Stop doing it. In those cultures security is treated
as "tin-foil-hat paranoia" and laughed at, then put on some "nice to
have"/"maybe some day" list with the lowest priority.

A security bug can make it into any software. But if you assembled a team of
coding-camp guys or fresh graduates to work on a banking platform or an IoT
pacemaker, you deserve to be sued for negligence.

Unfortunately, because software is a relatively young activity compared to
other disciplines, there is no established legal framework around it, and
that needs fixing.

------
graystevens
Security is one of those arts, especially on the programming side of things,
where one tiny chink in your armour is enough. There are people and tools
out there that scan continually for bugs and holes, either wearing a white
hat and submitting them to bug-bounty programs or similar, or wearing a
darker shade of hat and doing much worse.

Of course, there are things that can help a business minimise these risks
and catch these potential coding horrors before they're put in front of the
general public:

- Static code analysis (sometimes referred to as source-code analysis) is
sometimes a quick win here, but of course not a silver bullet. Some bugs
cannot be easily identified just by looking at the code for common mistakes;
it takes a skilled eye, or even dynamic analysis, to spot them. However,
static analysis can be added to your production pipeline and workflow,
checking each push for newly added vulnerabilities!

- Automated vulnerability scanning/testing is also something that can
usually be done in-house, with the right tools. There is no reason why you
shouldn't be running various security scanning tools against your
application during testing/pre-production, such as web application scanners
or even fuzzers (see the toy harness sketched after this list).

- Go external, and get a third party to penetration-test your application if
it requires that level of scrutiny. There are plenty of smart folks out
there who do this day after day and can do it for you.
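
To give a flavor of the fuzzing point above, here is a toy in-house harness
in Python. `parse_record` is a hypothetical stand-in for whatever parser you
would actually target, and real (especially coverage-guided) fuzzers are far
more effective than blind random bytes:

    import random

    def parse_record(data: bytes) -> dict:
        """Hypothetical stand-in for the parser under test."""
        key, _, value = data.partition(b"=")
        return {key.decode("utf-8"): value.decode("utf-8")}

    def fuzz(iterations: int = 10_000) -> None:
        for _ in range(iterations):
            blob = bytes(random.randrange(256)
                         for _ in range(random.randrange(64)))
            try:
                parse_record(blob)
            except UnicodeDecodeError:
                pass  # anticipated rejection of malformed input is fine
            except Exception as exc:
                # anything else is a latent bug worth a report
                print(f"crash on {blob!r}: {exc!r}")

    if __name__ == "__main__":
        fuzz()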

You can also deploy things post-deployment, of course (depending on what you
are coding!). For web applications, a WAF (web application firewall) is
sometimes useful to stop the vast majority of automated attacks, and the
alerts from it will give you a very good idea of what is out there and at
what scale you are being targeted. I'm currently working on a side project
[1] that tries to identify breaches once they have happened, as
unfortunately they are almost inevitable. It isn't always your code that
lets you down! It may be a dependency or library, or even a simple phishing
email. Put simply, my project produces a canary to add to your user base,
which we'll monitor continually for a number of tell-tale signs that someone
else may have a copy of your data.

At that point, it's time to invoke your incident response process! Or... get
someone in to run that process for you.

[1] [https://breachcanary.com](https://breachcanary.com) \- If you get this
far, I would absolutely love any feedback.

~~~
tensor
>There is no reason why you shouldn't be running various security scanning
tools against your application during testing/pre-production, such as web
application scanners

Likely one reason is cost. When each individual tool costs 10k+ with the
vendors trying to throw in consulting, it adds up.

------
blanket_the_cat
And we're surprised? Didn't we learn anything from the '90s? No amount of
diligence sitting at a desk, carefully evaluating the placement and
thoroughness of your user-input sanitization, adjusting the settings in
server configuration files, preventing your employees from using removable
media and accessing outside sites... No threat model, seriously, none...
ever... will ever... stop a young Angelina Jolie on rollerblades from
gaining access to your evil corporation's supercomputer and thwarting your
carefully laid super-villain plan.

~~~
unit91
There's so much sarcasm in your post that I'm not sure I understand your
point. Can you clarify?

~~~
blanket_the_cat
Let me rephrase that. Skipping past the punch-card era: if your adversary
can write some assembly to get the EIP to point at a malicious instruction,
they can instruct your system to do something you don't necessarily want it
to do. Then our homies at Bell Labs built C as a layer of abstraction over
assembly. With C, your adversary has several ways to get the EIP to point at
their malicious instruction. Then, over the years, many brilliant,
incredible minds (no sarcasm about that, none) built abstractions to
simplify C, then abstractions on top of those abstractions, then
abstractions to simplify those abstractions. (I'm not even going to touch
networking protocols.) There are decades of building systems with flaws on
top of systems with security flaws (which, admittedly, weren't as much of a
concern to anyone as providing the functionality to accomplish objectives,
business and otherwise)... literally over half a century of this.

So then these middle-management suits, operating with "Lean Six Sigma"
misconceptions about the nature of the world, expect a kid with a degree in
anthropology (not knocking the study) to run through a 12-week intensive
program and write code for a production system, with perhaps 2 people on
their dev team of 8-16, plus 3 folks in devops/IT who understand security,
to proof all of that code and make sure that your Gibson is bulletproof?
It's unrealistic. If she wants to hack your Gibson, she's going to hack your
Gibson. We're all going to attempt to stop that, and after we've failed, we
will spend days filling out reports, talking to feds, and mitigating the
damage.

But we keep building onto a flawed mechanism with another flawed mechanism.
I mean, do you know any civil engineers who would say, "Oh hey this
foundation is cracked, let's build something that tries to patch those
cracks, and when that's broken, we'll build another level on top of that,
and let's just obfuscate what's really going on underneath everything so
that nobody who uses the building realizes it's unstable, and just hope it
doesn't get too windy, or that there is an earthquake"?

Ipso facto: when you launch some ransomware that threatens to use the
software reading a gyroscope to tip over an oil tanker unless you're paid
$1,000,000, and try to blame it on some kids whose only crime was curiosity,
they will find a way to subvert the carefully measured security mechanisms
you have put in place, to not only clear their names and prove beyond the
shadow of a doubt that it was in fact YOU who hatched this terrible plot,
but also to save the environment.

Sorry, I should have said that to begin with.

~~~
NeutronBoy
> I mean, do you know any civil engineers who would say, "Oh hey this
> foundation is cracked, let's build something that tries to patch those
> cracks, and when that's broken, we'll build another level on top of that,
> and let's just obfuscate what's really going on underneath everything so
> that nobody who uses the building realizes it's unstable, and just hope it
> doesn't get too windy, or that there is an earthquake."

No, but civil engineers say stuff like "What's the likelihood that a 9.5
earthquake will hit this area? What about a 5?" and model their designs on
that. That's the point behind threat modelling: if a nation-state actor
decides they want to 'hack your Gibson', that's one thing, but if you're a
bank then it may be that your most likely threat is employees or contractors
stealing customer data. So you put your effort into protecting against those
threats as well.

------
kazinator
Programmers probably don't go out of their way to do secure coding, not only
because it's not mandated, but also because they know that unverified code
is junk. If security is a feature, it has to be verified. And it's not the
kind of verification where you can show that a feature works on correct data
and rejects incorrect data in some anticipated ways; it requires exhaustive
code review and a lot of cunning in the test strategy.

~~~
DonbunEf7
Programming languages from Agda to Monte now come with the ability to encode
meaningful real-world proofs, and the compilers check those proofs.

We could be writing a _lot_ of verified code, if we wanted.
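
As a taste of what "the compiler checks the proof" means, here is a tiny
Lean 4 sketch (exact lemma names may vary between versions); if the
induction were wrong, the file simply would not compile:

    -- Machine-checked proof that appending lists adds their lengths.
    theorem length_append (xs ys : List Nat) :
        (xs ++ ys).length = xs.length + ys.length := by
      induction xs with
      | nil => simp
      | cons x xs ih => simp [ih, Nat.add_right_comm]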

~~~
adrianN
You radically overestimate the skills of the average programmer.

~~~
atqtion
You are correct that writing verifiable/verified code is a separate skill
that not many programmers possess.

But crucially, it's a skills gap, not an intelligence gap, that stands in
their way. For most properties, writing verified code doesn't require _that
much_ more intelligence than writing normal code. Different skills, but in
most cases not much more intelligence.

That said, I don't think the skills gap is the most pressing barrier. There
is an _intrinsic_ difference in difficulty between stating conjectures and
proving theorems, and there's no silver bullet for that, neither education
nor raw intelligence. Proving theorems is, intrinsically, more difficult and
more time-consuming.

~~~
adrianN
I'm reasonably sure that for all the cases where a proof would be helpful,
that is, properties where the programmer has a nontrivial chance of getting
the implementation wrong and not catching the mistake with a test, a
correctness proof is _a lot_ harder than writing a correct implementation.
I've dabbled a bit with Coq and Isabelle and certainly found this to be the
case.

Stronger type systems help a lot with common security problems: memory
safety and a way to express the taint of inputs eliminate a huge class of
potential issues. But they are still very, very far from correctness proofs
that could catch actual logic bugs.
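
The taint idea can be sketched even with Python's gradual typing; the names
below are hypothetical and the guarantee is only as strong as the type
checker (e.g. mypy) you run, but it shows the shape of the technique:

    import html
    from typing import NewType

    Tainted = NewType("Tainted", str)      # raw, attacker-controlled input
    Sanitized = NewType("Sanitized", str)  # safe to embed in HTML output

    def read_user_input(raw: str) -> Tainted:
        return Tainted(raw)

    def escape_for_html(value: Tainted) -> Sanitized:
        return Sanitized(html.escape(value))

    def render(fragment: Sanitized) -> str:
        return f"<div>{fragment}</div>"

    # render(read_user_input("<script>...")) fails type checking;
    # render(escape_for_html(read_user_input("<script>..."))) passes.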

------
kevin_thibedeau
> One of the central security protocols protecting the web—OpenSSL

OpenSSL is not a protocol. It is an implementation of TLS. Other
implementations were immune to Heartbleed.

------
firasd
Cultural changes among engineering teams would be helpful, but the bottom
line, as far as business apps are concerned, is that for security to be
prioritized, the financial liability of letting hackers access customer data
needs to meet the financial incentive of shipping working code. Until those
curves intersect, the status quo remains.

~~~
doubleunplussed
Government could impose fines for not meeting certain security standards and
then do random audits. Or, in a more free-market way, this sort of alignment
ought to eventually come about via:

1. Class-action lawsuits against companies that have data leaks.

2. Companies taking out insurance against being sued for data leaks.

3. Insurance companies imposing security requirements in order to provide
coverage.

------
devy

> The costs of dealing with breaches, no matter how demoralizing, never seem
> to justify the extra time and money that good security requires.
> Additionally, although a security flaw is sometimes traceable to a single
> line of code—as in Apple’s famous “curly braces” bug—breaches are often a
> simultaneous failure on several levels of the software stack and its
> implementation. So each company may be able to shift blame onto other
> actors, and even the user.

The paragraph above provides another perspective on why software security
vulnerabilities happen more frequently than hardware ones. [1]

[1]: [https://news.ycombinator.com/item?id=14238391](https://news.ycombinator.com/item?id=14238391)
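
For context, the "curly braces" bug was a single duplicated, unconditionally
executed line that short-circuited the remaining checks. Transposed loosely
into Python, its shape is something like this (a sketch, not Apple's actual
code):

    def handshake_ok(signature_ok: bool, hash_ok: bool) -> bool:
        if not signature_ok:
            return False
        return True   # duplicated line: the hash check below is dead code
        if not hash_ok:
            return False
        return True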

------
kyled
Self plug =).

Check us out at [https://oneupsecurity.com](https://oneupsecurity.com) if
you're interested in secure software development.

Always great to chat with businesses who are passionate about security.

------
amelius
Replace "security" by "safety", and I suspect you have the same problems.

I wonder how that will work out with e.g. self-driving cars.

~~~
yoz-y
I would be surprised if safety was not very high on the list of priorities.
Getting hacked is always a nebulous proposal but I think it is quite easy to
describe the consequences of somebody being hurt or killed by your product.

~~~
amelius
"We need to add this feature to the car immediately!"

"We can't do that because it would mean we would have to re-test the codebase,
meaning we'd have to test-drive the car for hundreds of thousands of miles."

"Can't we take a shortcut? I believe the change is quite innocent. And the
competitor already has this feature."

"Ok, sounds reasonable. Make the change."

~~~
user5994461
You are assuming that car manufacturers operate like web companies.

Tip: They do not.

~~~
curioussavage
"Sir our car won't pass smog tests!"

"Oh no! Now we will need to do x and y and z and it will cost x dollars"

"Well we could adjust the software.."

"Sounds great!"

Yep, car industry...

~~~
AstralStorm
Easily caught and heavily fined. (Not to mention the PR damage.)

------
draw_down
What's the incentive for companies to care? They get hacked and leak
everyone's data all over the place and we all just kinda shrug and say "that
sucks". Sometimes bigger companies get in trouble for some millions, an
insignificant amount to them.

Look at what happened last week with Netflix. It didn't involve user data, but
they got hacked and their stuff leaked and then what? Everyone just shrugged.
No big deal. I mean probably some people are getting yelled at internally, but
otherwise the situation is clear: we pretend to care about this, but we don't
care about it.

Imagine being a security advocate in an organization in this environment.
You get to convince business people to spend money so that something doesn't
happen, when even if it does happen, it will only result in embarrassing
headlines for a day. Not exactly a convincing case!

------
burntrelish1273
There are some obvious contributing factors:

0) The majority of programmers at this point in history are novices. Src: Joe
Armstrong.

1) There is very little formalized training except in some enterprises. Many
outsourcers invest very little in training and expect staff to learn on the
job.

2) Large enterprises with immense codebases and many committers easily
become tragedies of the commons without active code review and high
standards.

3) There is very little standardization (convention over configuration): many
languages, many non-orthogonal coding styles/language features.

4) Security seems like a non-value-add activity... until there's a major
problem. (Non-proactive development/CMM.)

5) Offensive and defensive sec require a different, acquired skill-set and
engineering mindset from simply implementing features and fixing bugs.

------
SadWebDeveloper
You can only have one of two things in software development: either your
software is "user friendly" or it's "secure". Example: PGP

~~~
astrodust
PGP/GPG is the FFMPEG of encryption software. It could be done _way_ better.

~~~
voltagex_
ffmpeg has had a lot of hours of fuzzing and improvement thrown at it. I
believe it's come out of the avconv fork looking better; after all, it
survived.

~~~
astrodust
It's powerful software, but goddamn that command-line interface makes
ImageMagick look downright simplistic.

There are GUI tools for setting these options, but they're absolutely
atrocious. You may as well be trying to program a guided missile.

~~~
AstralStorm
FFmpeg is mostly a set of libraries rather than the test command-line
application. There are really a huge number of applications using it
internally.

------
solatic
ITT: talking about poor incentives, doing basic threat modeling, etc.

The answer has always been professional licensing.

And no, just because there's always a new flavor-of-the-month language or
framework doesn't mean the fundamentals change all that often.

* Sanitize inputs

* Secure data at rest and in transit

* Never store secrets in plaintext

* Set up IAM properly for your organization

* Use 2FA whenever possible

* Principle of least access

* Update frequently
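
A minimal Python illustration of the first item, with a made-up table: the
parameterized query binds user input as data, so it never becomes SQL:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('mallory', 'm@example.com')")

    user_supplied = "mallory' OR '1'='1"  # classic injection attempt

    # Vulnerable: f"SELECT email FROM users WHERE name = '{user_supplied}'"
    # Safe: the placeholder keeps the input as data, not as SQL.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_supplied,)
    ).fetchall()
    print(rows)  # [] -- the injection string matches no user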

Why, again, can't we conditionally license professionals on knowing the
basics, and threaten disbarment for knowingly neglecting security?

~~~
lawnchair_larry
> _The answer has always been professional licensing._

That's only the answer if you're asking people who have no real experience
in security. Those who do are more likely to tell you that it will only make
the problem worse.

Professional licensing for writing or operating software will never work, and
if it does, it will be extremely harmful.

