
OpenBSD 6.0 released - paride5745
http://undeadly.org/cgi?action=article&sid=20160901090415
======
mrb
_" To deter code reuse exploits, rc(8) re-links libc.so on startup, placing
the objects in a random order."_

I love this defense in depth, buried in the release notes...
[https://www.openbsd.org/60.html](https://www.openbsd.org/60.html)

~~~
armitron
This is pretty much useless. Attackers have moved on to using information
leaks in order to determine memory layout and placement of objects.

So in practice, this doesn't really deter anything. Information leaks are
everywhere.

~~~
CraigJPerry
I don't know why you're being down voted. We've had ASLR in Linux for years
and various attacks have been successful.

The format string attack is the one I remember most clearly; it basically
renders ASLR useless.

~~~
aweinstock
Attacker-controlled format strings are very convenient bugs, but they can't do
everything. Consider the program:

    
    
        #include <stdio.h>

        int main() {
            char buf[20];
            fgets(buf, sizeof buf, stdin);
            printf(buf); /* attacker-controlled format string */
            return 0;
        }
    

An attacker writing to the program's stdin can read at offsets to the stack
(e.g. "%42$x"), read the contents of arbitrary non-null memory (e.g.
"ABCD%5$s", where ABCD is a 32-bit memory address, and 5 is the positional
parameter corresponding to the start of buf), and write an arbitrary value to
an arbitrary address (e.g. "ABCD%38x%5$n", to write the value 42 to address
0x44434241).

A significant limitation of the vulnerability in the above program is the
attacker can't, in a single execution of the program, read a value, _then_ do
computations on it locally, _then_ write a value based on those computations.
This flexibility is needed in order to bypass ASLR.

~~~
CraigJPerry
Kudos for the worked example!

------
justin66
This seems like a big deal:

 _One thing to note: this will be the last version of OpenBSD to be pressed on
CD. The project will now focus on internet-only distribution, giving much more
flexibility in the release schedule._

~~~
jasonkostempski
It does? I was under the impression the only reason anyone still did that was
to give something physical to donors.

~~~
ue_
Perhaps I'm the only person using computers in 2016 who only installs OpenBSD
via CD :-)

~~~
agumonkey
I only do it to enjoy the pleasure of the old ways. Interacting with CDs and
CD drives reminds me of when they were cutting edge. The tray mechanics, the
spin-up of the motors, even the latency and the seek sounds. And, somehow, the
(almost, since it's on DVD-RW) immutability. In some ways, the sheer speed of
SSDs is ... boring when you're not in a hurry.

ps: a bit like vinyl lovers who take their players out for similar reasons.

~~~
jasonkostempski
I do that too. I also like to imagine a post-apocalyptic, no-internet time
where I have a generator, PC, BSD disk and the knowledge to use them to rule
the new world while everyone else is stuck on the "Activation Required"
screen.

~~~
PhantomGremlin
Fortunately OS X isn't as bad as Windows. I haven't tried the most recent
version, but for previous versions I have successfully installed OS X from a
USB stick, without needing any internet access at all.

However, the OS X binaries are signed. So, when the key expires, people are
SOL. I had this problem, and easily re-downloaded my various older OS X
versions. But in the general case it's quite disturbing:
[http://www.macrumors.com/2016/03/03/older-os-x-installers-broken-by-certificate/](http://www.macrumors.com/2016/03/03/older-os-x-installers-broken-by-certificate/)

------
hackuser
This is a good opportunity to ask: Can anyone recommend a laptop I could put
OpenBSD on and be fully functional for busy workdays (i.e., when I need to
spend 100% of the day being a knowledge worker with a reliable tool I don't
have to think about, and 0% being a sysadmin trying to get their tool to
work)?

On one hand I've seen threads on HN and Reddit saying how OpenBSD works
flawlessly, esp. on various Thinkpads. OTOH I read in-depth reports like
these, by OpenBSD insiders who know far more than I ever will, and even their
results seem insufficient:

* [https://gist.github.com/reyk/80dca43c8bcfa76d2a7ff147ea64d44...](https://gist.github.com/reyk/80dca43c8bcfa76d2a7ff147ea64d442)

e.g., "The [wlan] connection is sometimes not stable or the firmware shows
errors when switching the AP configuration. But stsp@ is actively working on
improving it!"

* [http://www.tedunangst.com/flak/post/Thinkpad-Carbon-X1-2015](http://www.tedunangst.com/flak/post/Thinkpad-Carbon-X1-2015)

* [https://marc.info/?l=openbsd-misc&m=145275871714024&w=2](https://marc.info/?l=openbsd-misc&m=145275871714024&w=2)

EDIT: For those interested in this topic, I would start with tedunangst's
excellent summary from earlier this year. Thank you Ted!:

[http://www.tedunangst.com/flak/post/openbsd-laptops](http://www.tedunangst.com/flak/post/openbsd-laptops)

~~~
aaron_m04
I used OpenBSD CURRENT on the Thinkpad Carbon X1 for 3 months earlier this
year, and my experience has been that hardware support is not a problem for a
"knowledge worker" (I am assuming you don't need things like HDMI audio for
that).

The main problem is software that is outdated, unavailable, or buggy on non-
Linux platforms.

I came crawling back to Linux in the end.

~~~
hackuser
Thanks. Though a knowledge worker sometimes does need to connect their laptop
to a TV for collaboration and presentations.

> The main problem is software that is outdated, unavailable, or buggy on non-
> Linux platforms

What kinds of applications could you not find a good solution for?

~~~
aaron_m04
> What kinds of applications could you not find a good solution for?

* LibreOffice Writer (lowriter) had an annoying bug where it would sometimes take over a second to redraw the toolbar buttons, and the editor was unavailable during that time. This happened on save, on resize, or when unhiding the window.

* Konsole's logic to identify the name of the foreground process would occasionally go into an infinite loop, and all the Konsole windows would lock up and need to be kill -9'd. It can be avoided by using tmux, but that's inconvenient. This is sad because it's the most featureful and user-friendly terminal emulator I know of.

* No good C++ IDEs.

* No support for cargo in the rust packages.

* No workable virtualization.

Also, if you use OpenBSD on a laptop, you'd be crazy not to use GNOME 3 in my
opinion. Support for the other DEs is not on par.

~~~
hackuser
Thanks for the tips; that's very helpful.

> * No workable virtualization.

For those interested, they are working on it AFAIK. Look up vmm/vmd.

------
riffraff
I am not an OpenBSD user, but as with every release I am happy to check the
lyrics for the release songs; this time we got 6, so it's awesome :)

[http://www.openbsd.org/lyrics.html#60a](http://www.openbsd.org/lyrics.html#60a)

~~~
corv
That song[0] brings a smile to my face. Keep up the great work OpenBSD!

[0]:
[http://ftp.openbsd.org/pub/OpenBSD/songs/song60a.mp3](http://ftp.openbsd.org/pub/OpenBSD/songs/song60a.mp3)

------
willvarfar
One big step in this release is the mandating of W^X by default.

> "Unfortunately there is important third-party code, such as just-in-time
> compilers, that still uses mmap(2) to make memory both writable and
> executable, so for the time being, we have to arrange ourselves with it."

If a program wants to JIT on OpenBSD, how should it do it in a secure,
OpenBSD-approved way?

~~~
avsm
Use mprotect(2) on the region of memory that the program wants to make
executable.
[http://man.openbsd.org/OpenBSD-current/man2/mprotect.2](http://man.openbsd.org/OpenBSD-current/man2/mprotect.2)

This is good portable programming practise anyway...

~~~
willvarfar
So they should create a writable mapping, write the code into it, then use
mprotect to flip it to executable and no longer writable?

How does this stop an attacker doing the same via ROP?

ADDED: The approach that comes to my mind is that they could have two
processes. One process has the sourcecode, and pages where it can write code.
The other process can execute the code. The code updates performance counters
which the JITing process can read, so the JITter has feedback to know what to
optimise.

However, this sounds like a large architectural change; it prevents programs
from JITting programs they generate on the fly, and causes the JIT to lag
behind somewhat.

On the Mill CPU (disclaimer: I'm on the Mill team) the CPU can change
processes ("turfs" in Mill terms) using a "portal" function call. This
alleviates somewhat the performance concerns, as the JITted program can call
into the JITer process synchronously and cheaply.

~~~
dmm
> How does this stop an attacker doing the same via ROP?

Wait, isn't that backwards? Doesn't the use of W^X necessitate the use of ROP?
Right now a JIT has lots of memory that is W&X so you just need a memory
exploit to inject some code and then find a way to jump to it, no need for
ROP.

But if you implement W^X this won't work because now you can't inject code
with just a memory exploit, you also have to set it to executable with
mprotect(2). So instead you use ROP, and inject only data which includes jumps
to carefully selected sections of code in libc, etc, that implements an
exploit. I mean you could probably use ROP to run mprotect, but by that point
it's pointless because you're already running code on the system right?

~~~
willvarfar
Forgive me for dreaming of a world where CPI or other approaches close the ROP
vector too :)

------
yegortimoshenko
Now that OpenBSD runs on Xen (and, by extension, on Amazon EC2), is there an
official AMI?

~~~
microcolonel
If I understand correctly, this also means that the main excuse for it not
being supported on DigitalOcean is now alleviated. :- )

~~~
misframer
Doesn't DigitalOcean use KVM?

~~~
microcolonel
I stand corrected, seems it's KVM.

Seems some other people were confused. I've been told in other places that
it's Xen. OpenBSD has had working virtio-blk and virtio-net drivers since
about 5.3/5.4/5.5 IIRC. hmm...

------
zzzcpan
Talking about security, I'm hoping for a bounds-checking, memory-safe C
compiler to eventually make it into something like OpenBSD or FreeBSD and be
used by default for all ports and packages. Almost no software there would
suffer from the overhead, and since OpenBSD doesn't even promise or try to be
very fast, it's a perfect place for it.

~~~
pedrocr
Is a "bounds-checking, memory-safe C compiler" even possible in the general
case without implementing a new Rust-like language?

~~~
nanolith
Yes. But, you typically have to use a theorem prover to build up static
checking for functions, and then enforce proof obligations on callers to said
functions. If done in a system like Coq or Isabelle, the proof obligations
become a parallel markup to C that is used in conjunction with the source code
to enforce policies. Bounds checking is one policy -- and a relatively easy
one to implement at that -- and other policies can be stacked.

Take a look at VST for an example already in the wild. I'm currently working
on a somewhat different approach that does not have the severe license
restrictions that CompCert has.

[http://vst.cs.princeton.edu/download/](http://vst.cs.princeton.edu/download/)

~~~
nickpsecurity
That's not true. You just use a compiler transformation. See this:

[https://news.ycombinator.com/item?id=12407156](https://news.ycombinator.com/item?id=12407156)

There's a difference between memory safety and full, formal verification of correctness.
You're describing the latter. Definitely check out Myreen et al's CakeML work,
COGENT at NICTA, and AutoCorres/Simpl used in seL4. They might have stuff to
speed up your own tool development. I wish you great luck on your project. :)

~~~
nanolith
Compiler transformation can solve most of these concerns, but it is not
perfect due to undecidability. Compiler transformations will always be
conservative, falling back on runtime enforcement.

The main edge cases I've run into -- hence my need for building a tool like
this -- is dealing with tight performance concerns found in realtime and
embedded applications. Falling back to runtime enforcement is not an option,
and neither is trusting developers to be able to fully analyze complex
control-flow and data-flow paths without solid tooling.

For general-purpose applications, of course, compiler transformation with
runtime fallbacks is perfectly acceptable.

~~~
nickpsecurity
"is dealing with tight performance concerns found in realtime and embedded
applications"

I thought tools like Astree and SPARK have knocked this out of the park. Copilot
was also pretty good on runtime side given it works in embedded. Just gotta
structure your program to use the tools. I think they should cover plenty of
use-cases given stuff like IRONSIDES DNS runs in SPARK.

"For general-purpose applications, of course, compiler transformation with
runtime fallbacks is perfectly acceptable."

Also true for many embedded apps given the remaining runtime hit can range
from single-digits to 40% depending on scheme used. Even more combined with
something like SPARK or Astree for stronger, static analysis than the minimal
stuff academics usually use. Certainly a subset will benefit from or even need
methods you prefer but many won't. It's just management or a consumer's
preference to save a few bucks shooting them in the foot. ;)

Do shoot me an email, though, in case I get some free time to run some ideas
by you. I don't know enough formal methodists these days. My mile-high
perspective can only let me do so much as a generalist. Gotta have specialists
to help me filter the chaff from the wheat. Then pass such recommendations
onto more specialists as always. :)

~~~
nanolith
"Also true for many embedded apps given the remaining runtime hit can range
from single-digits to 40% depending on scheme used."

I think that depends on the definition of embedded. When one is lucky enough
to work with a chipset in which a 40% performance hit still results in
acceptable performance, such a tool is reasonable. I can tell you from my
experience in the consumer electronics field that BOM costs often rule over
software performance, and as such, firmware engineers are often stuck with
finding creative ways to solve problems on nerfed hardware. Wrestling with a
tool that can occasionally inject unwanted runtime overhead in unpredictable
locations can compound razor-thin time-to-market timelines for last minute BOM
changes.

The focus for my tool is to provide people with the ability to build
additional propositions that go beyond basic correctness or memory bounding
found in tools like Astree. Astree, for instance, may be able to automatically
discover whether a subset of C is correct -- because it has a very reasonable
policy of assuming incorrect behavior unless otherwise proven -- but it can't
solve for custom policies such as, "does this seemingly correct code actually
follow my architecture and specification?"

Also, Astree isn't free software. It's an AbsInt product, which means most
likely that it comes with a hefty price tag. To give you an idea, their
CompCert compiler is licensed in the five to six figure Euro range per seat.
Of course, for the sort of customers AbsInt is interested in -- companies like
Airbus or Boeing -- this isn't necessarily a deal breaker. They are willing to
pay for such a tool because of the mission critical aspect of their work.

But, I'm of the opinion that the industry as a whole needs access to tools
like these. The tool I'm building will be released under the LGPL. The
security problem is one with which the entire industry needs to engage. My
tool may not necessarily be the best one out there -- nor am I trying to build
the best tool -- but I am trying to build one that is "good enough" and
reasonably sound. I think I have struck the right balance, but time will tell.

I wouldn't say that I'm a formal methodist as much as a software engineer
looking to be able to make strong guarantees about critical components. I have
a problem to solve, and I've spent a few years studying this problem off and
on. Somewhere along the way, I acquired enough hubris, if not enough
knowledge, to think that I could build such a tool. So far, things have worked
out better than I've thought. But, I still have a few big hurdles to overcome
before I can confidently say that this tool will be useful to others. INRIA
and other researchers have blazed the trail, and they have rightfully charged
a small fortune for what they have discovered. I'm much more interested in
ensuring that the average developer has access to such technology. My tool
won't have the bells and whistles of commercial offerings, but it's hard to
beat the price.

~~~
nickpsecurity
Alright, this sub-thread started with a person wondering if one could make C
programs memory-safe against common errors. You replied with formal
verification while I replied with automated tools that make C memory safe. One
or two use formally-verified models. :) You countered with the needs of
embedded making such tools impractical. There are many in that industry who can
use tooling like I recommended, esp if not BOM-sensitive or hard real-time.
You brought up the needs of ultra-constrained market in terms of cost or hard
real-time. The increasing market-share of expensive ARM's vs 8-16-bitters and
embedded JVM's makes me wonder if that's oversimplified, too. Makes me think
similar companies might consider tooling like I described if it likewise adds
benefit with tiny costs. The majority will do as you say and avoid these tools
since they don't fit their use case. So will some non-embedded sectors focused
only on performance or lowest cost.

Let's cut to some important stuff, though, as I think the argument got over
semantics or at least less interesting stuff. My background is high-assurance
security with generalist experience but none with formal methods or building
static analyzers. You seem to have experience in those I don't. Appel, who
makes the tool you linked, is a formal methodist (among other things) building
stuff that's useful, achieves high-end stuff, and sometimes enabling less-
skilled people. Rare type of researcher and deliverables I love seeing. You
say you want to do something similar. So, let's have a more interesting
conversation.

"Also, Astree isn't free software. It's an AbsInt product" (and CompCert)

I saw the writing on the wall with CompCert when they hesitated to FOSS it or answer
questions. So sad such a wonderful tool got locked up. Thanks for confirming
with a dollar range what I suspected. Same with Astree. We need replacements
for both like you said about CompCert. My first idea was for a team to clone
CompCert based on high-level description of what it does with dependent types
a la Chlipala or just FLINT ML-style compiler. Add Design-by-Contract,
QuickCheck, and a few other things that take little time but collectively
knock out tons of errors. Compile it with MLton for development speed and
CakeML for release version with equivalence checks. Be 90+% to quality of
formal verification without CompCert and have a nice foundation for whoever
wants to go fully formal.

As far as what the formal-methods people were doing, the first I saw were
trying to use a micropass approach to piecemeal build one for a MIPS machine,
where they spent a tiny fraction of the time of CompCert. Don't have a link
handy or know their status.
Fortunately, Myreen et al's method already got used in seL4. A lot more than
that actually haha. Perhaps a short-cut to CompCert replacement is to automate
the process of running a C subset through such tooling with possibly
guidelines for programmers a la MISRA to keep automated part feasible?
Existing tools, esp AutoCorres/Simpl, _should_ have solved hardest problems in
that.

"But, I'm of the opinion that the industry as a whole needs access to tools
like these." "The security problem is one with which the entire industry needs
to engage"

You're preaching to the choir. I was designing and preaching high-assurance
security, at least for critical stuff (eg kernels, compilers), way before the
Snowden leaks. The stuff mainstream, "INFOSEC professionals" said was
hypothetical in risk or overkill in solution was often validated by specific
attacks in leaks. ;) Odd thing is they still argue with us and dismiss high-
assurance as red tape or 100% impractical. Same with things like SPARK. (rolls
eyes) Anyway, I published for free a summary of high-security INFOSEC on
Schneier's blog in 2013 in an argument about secure code != secure systems:

[http://pastebin.com/y3PufJ0V](http://pastebin.com/y3PufJ0V)

"but I am trying to build one that is "good enough" and reasonably sound. I
think I have struck the right balance, but time will tell."

I encourage people to do exactly what you're doing. There will be no one-size-
fits-all. Tools that approach these analyses balancing ease-of-use with
properties proven or bugs found are best bet because it's all time-constrained
developers will likely use. I'm also glad you're FOSSing it. I don't expect a
community to show up as FOSS types ignore most high-security stuff. The model
I'm pushing for is businesses licensing software or selling hardware
leveraging high-assurance components with a percentage of revenue put into
improving or maintaining them. On top of what practical, grant-funded
academics contribute. FOSS is still a pre-requisite to this model working as
likes of AbsInt or Alt Software keep barrier-to-entry too high with licensing
costs. Medium-to-high assurance might cost money and time to develop but
access to it has to be cheap and easy for uptake. Ironies of life.

"I wouldn't say that I'm a formal methodist." " Somewhere along the way, I
acquired enough hubris, if not enough knowledge, to think that I could build
such a tool."

Lol. That's how it starts. You'll be asking me to trust 100 pages of proof
because the checker said so within 3-5 years. :P

Seriously, though, I'm glad you're not cocky as being careful & using many
methods for correctness is best route. At least one always misses something.
Even happened to CompCert where the spec was wrong 2-3 times but at least the
code faithfully implemented the bad spec. You might dodge that if more careful
& testing every part of lifecycle.

" My tool won't have the bells and whistles of commercial offerings, but it's
hard to beat the price."

Good luck to you on that. While we're at it, the last time I heard something
like this was someone telling me about Liquid Types analyzer for C programs
(below). What do you think of it for this problem area or versus your own
approach? Or if someone like me with no specialist skill should attempt to
mess with it? Very hard for me to evaluate that aspect. I fear I will waste
dozens to hundreds of hours on the wrong approach haha.

[http://goto.ucsd.edu/csolve/](http://goto.ucsd.edu/csolve/)

~~~
nanolith
I agree with your synopsis on Pastebin. It's pretty close to what I have been
trying to practice. As an interesting aside, I worked with a few researchers
about a decade ago on tracking SMM vulnerabilities in various Intel chipsets.
They were the experts though. I was just a hired hand. Still, I learned a lot
about the very real threat of hardware vulnerabilities.

The Liquid Types approach used in CSolve -- if memory serves correctly --
makes use of an SMT solver to attempt to find counter-examples given a
particular set of constraints. This is similar to the approach that Frama-C
uses. SMT solvers are interesting and can do quite a bit, but they are still
incomplete. Many linters have started incorporating SMT solvers. It's better
than much of what is out there, but there are still significant limitations.
Furthermore, the markup languages provided are often limited in the amount of
customization that can be performed.

My tool is designed as a framework that can be used to build much more
comprehensive proofs. Similar to what you posted in Pastebin, the security of
a system is intimately tied to the architecture of a system as a whole. I want
to formally verify the architecture, then demonstrate that the implementation
is operating under the constraints of the architecture using a combination of
equivalency and bounding proofs. This, in conjunction with a strong
understanding and integration with the sorts of runtime features available in
a system (i.e. MMU protections) can be used to build a compelling verified
proof of security similar to what was done with seL4.

But, ultimately, even this is not enough. Few systems today are contained
within a single machine. IoT has demonstrated just how complex the current
software ecosystem is becoming. Security proofs must extend beyond individual
applications and systems and cover heterogeneous and widely distributed systems
with many different potential attack vectors. This can be formally verified,
but as far as I'm concerned, such a verification is an architecture-first
approach. I don't think that in this sort of environment, a correctness tool
is enough. It's a great addition to the toolbox, and it is a damned useful
tool at that, but correct software can still violate policy unless this policy
is part of the constraint of "correct".

~~~
nickpsecurity
I'm mixed about the post, esp the science part. The science of developing
robust software is great and pretty consistent going back decades varying
mostly in specific tools and tactics. Mainstream programming just doesn't
apply it although more adoption in past decade of key techniques. Here's some
computer science from the 1960's-1980's used in robust and secure system
development (esp Orange Book B3 or CC EAL6) people might want to copy. I'm
taking an empirical route where I reference techniques that were applied to
many real-world projects with lessons learned in papers or studies that were
consistent. All one can do with limited data & these aren't in order of
importance.

1\. Formal, non-English (eg math/logical) specifications of requirements or
abstract design. English is ambiguous and misreadings of it caused countless
errors, even back then. CompSci researchers tried formal specs with both
English as a start and precise notations (eg Z, VDM, ASM's, statecharts) for
clarity in specifics. Result was many inconsistencies caught in highly assured
systems and protocol specs before coding even began.

2\. High assurance stuff often used mathematical (formal) verification.
Whether that worked or made sense was hit and miss. More on it later. Yet,
virtually all of them said there was benefit in restrictions on the specs,
design, and coding style to fit the provers' limitations. Essentially, they
used boring constructs that were easy to analyse and this prevented/caught
problems. Don't be too clever with design or code. Wirth and Hansen applied
this to language design to bake safety & comprehension in with minimal to low
loss in performance.

Note: Led to Nick P's Law of Trustworthy Systems: "Tried and true beats novel
or new." Always the default.

3\. Dijkstra's THE project showed that modular, layered design with careful
attention to interfaces (and interface checks) makes for most robust and
maintainable software. Later results confirmed this where each module must fit
in your head and control graph that's pretty predictable with minimal cycles
prevented all kinds of local-becomes-global issues. Many systems flawless (or
nearly so) in production were built this way. Dijkstra correctly noted that it
was very hard to do this even for smart people and average developer might
screw structuring up a lot. Solid prediction... but still worth striving for
improvement here.

4\. Fagan ran empirical studies at IBM that showed a regular, systematic, code
review process caught many problems, even what tests missed. Turned that into
formal inspections with the periodicity and prioritizing tuned per
organization for right cost-benefit. Was generalized to whole SDLC by others
in high robustness areas. Improved every project that used it from then on.
Exactly what parameters to use is still open-ended but periodically looking
for well-known flaws with reference sheet always works.

5\. Testing for every feature, code-path, prior issues outside of code base,
and common use-case. All of these have shown repeated benefits. There's a cut-
off point for each that's still an open, research problem. However, at a
minimum, usage-based testing and regression testing helped many projects
achieve either zero or near-zero, user-facing defects in production. That's a
very important differentiator, as 100 bugs the user never experiences are better
than 5 that they do regularly. Mills' Cleanroom process combined simple
implementation, code review, and usage-testing for insanely-high,
statistically-certifiable quality even for amateur teams.

6\. By around 60's-70's, it became clear that the language you choose has a
significant effect on productivity, defects, maintenance, and integration.
Numerous studies were run in industry and military comparing various ones.
Certain languages (eg Ada) showed vastly lower defects, equal/better
productivity, and great maintenance/integration in every study. Haven't seen
many such studies since the 90's and most aren't constructed well to eliminate
bias. However, it's grounded in science to claim that certain language choices
prevent common negatives and encourage positives. So, it follows to adopt
languages that make robust development easier.

7\. By the 80's or 90's, it was clear that computers were better at finding
certain problems in specs and code than humans. This gave rise to
methodologies that put models of system or code into model-checkers and
provers to show certain properties always hold (the good) or never show up
(the bad). Used successfully with high-assurance safety and security critical
systems with results ranging from "somewhat beneficial" to "caught stuff we'd
never see or test for." Back then it was unclear how applicable it was. Recent
work by Chlipala, Leroy, et al show near perfect results in practice when
specs/proofs are right and much wider application than before. Lots of tooling
and prior examples means this is a proven way of getting extra quality where
high-stakes are worth the cost and where core functionality doesn't change
often. The CompCert C compiler, Eiffel's SCOOP concurrency scheme, and Navy
team's EAL7 IPsec VPN are good examples.

8\. Static analysis, aka "lightweight formal methods," were devised to deal
with specialized skills and labor of above. Getting to the point, tools like
Astree Analyzer or SPARK Ada can prove absence of common flaws with little to
no false positives without need for mathematicians in the company. Just a half
dozen of these tools by themselves found tons of vulnerabilities in real-world
software that passed human review and testing. Enough said, eh?

9\. Software that succeeded with testing often failed when random stuff came
at it, especially malware. This led to various fault-injection methods like
fuzz testing to simulate that and find breaking points. The huge number of
defects, esp in file formats & protocol engines, found via this method argues
for its effectiveness in improving quality. It ties in with stuff above in
that well-written code that validates input at interface and preserves
invariants throughout execution should simply disregard (or report) such
erroneous input.

10\. Interface errors themselves posed something like 80+% of problems. This
was noted as far back as the 60's in Apollo project when Margaret Hamilton
invented software engineering, fault-tolerance, and specification techniques
to fight it. Dijkstra and Hoare pushed for pre- and post-conditions plus
specific invariants to document the assumptions of code during procedure
calls. Modern version is called Design by Contract in Eiffel, Ada, and
numerous other languages (even asserts in C). Many deployments and tests
showed such interface checks caught many issues, esp assumption violations
when new code extended or modified legacy.

11\. Concurrency issues caused all kinds of problems. Techniques were devised
by Hansen (Concurrent Pascal) and later Meyer et al (SCOOP) to mostly immunize
against them at language level with acceptable performance. Languages without
that, especially Java, later got brilliant tooling that could reliably find
race conditions, deadlocks, or livelocks. Use of any of these methods inevitably found
problems in production code that had escaped detection. So, using prior,
proven methods to immunize against or detect common concurrency errors is A
Good Thing. Note that shared-nothing, event-driven architectures also emerged
but I have less data on them outside that some (NonStop, Erlang) worked
extremely well.

The above are just a few things that computer science established with
supporting evidence from real-world projects so long ago that Windows didn't
exist. Anyone applying these lessons got benefits in terms of code quality,
security, and maintainability. The rare few applying most or all of them,
mainly the high-assurance community, got results along the lines of space shuttle
control code: extremely low defect rates, or zero, in production. So, given the
past and present results of these methods every time they're put to the test,
I'm irritated every time another person talks like there's no good science to
quality software. I just listed a bunch of it; it's been tested in production,
as the scientific method requires, thousands of times, tweaked probably hundreds,
and the core approaches remained even where tactics got modified.

Now people can feel free to use and improve on the science. CompSci continues
to advance in every area I listed, with a chunk of proprietary and FOSS developers
using a subset of the techniques. We just need more uptake. Use what's proven.
And do note that there's plenty of examples for specific design and
implementation decisions for common types of functionality. Many things that
were shown to work or not work that could be encoded in libraries, DSL's,
templates, whatever. No excuse except for our field's continual failure to
learn and hand down the lessons from the past.

[http://pastebin.com/xZ6m4T8Z](http://pastebin.com/xZ6m4T8Z)

~~~
nanolith
It is upon the shoulders of these giants that I stand. Dijkstra, Floyd,
Hoare, Knuth, Curry, Church, and Turing, among others.

The science exists. The trick now is bringing it all together in a way that is
useful enough to others that they actually start using it. :-)

------
octotoad
Farewell sparc32. You will be missed by the retrocomputing geeks.

~~~
rjsw
Still supported by NetBSD.

~~~
chriscappuccio
Through emulation. How useful...

~~~
rjsw
NetBSD/sparc runs on real hardware as well as emulators.

------
20yrs_no_equity
I love BSD and would like to use it, but we're kinda in a Linux world.

Question: If I am building a custom hardware device (and we will be in the
future, I believe) can we run OpenBSD on it using the ARM port, but also
invoke binaries created for linux?

It appears that support was removed.

In the near term I need to build for Linux, But eventually we'll be able to
target our own hardware, and one of our toolchain items doesn't support BSD
right now (and is closed source, though we are considering an open source
alternative.)

Any thoughts?

~~~
rjsw
Linux ARM binaries require the OS to map a page at the top of the address
space that is shared between the kernel and userspace; you would need to check
whether OpenBSD ever included this in its Linux emulation.

The support for this isn't in NetBSD either but there has been a request to
add it in order to run a Citrix app.

~~~
bigato
They removed Linux emulation support in 6.0.

------
Johnny_Brahms
For those of you who are looking for a reason to play around with OpenBSD,
there might be some progress at getting it to run on the Raspberry Pi 2 and 3:
[http://marc.info/?l=openbsd-
cvs&m=147059203101111&w=2](http://marc.info/?l=openbsd-
cvs&m=147059203101111&w=2)

Probably not going to happen, but it runs on some other ARMv7 SBCs. Mine is
running FreeBSD currently, but where is the geek cred in that?

~~~
nayden
this is currently being worked on as I type this reply :)

~~~
Johnny_Brahms
_nice_.

For my pi I want something that I don't have to think about. A colleague pwned
my rpi because I hadn't updated it in ages. OpenBSD seems like a better
choice...

------
Theizestooke
I'm trying to use the austrian mirror, ftp5.eu.openbsd.org, but I'm getting
empty directories or "Permission denied" when trying to access the packages
folders
[http://ftp5.eu.openbsd.org/ftp/pub/OpenBSD/6.0/packages/](http://ftp5.eu.openbsd.org/ftp/pub/OpenBSD/6.0/packages/)

~~~
fredmorcos
Have you tried ftp2? I had problems today with one of them (can't remember
which worked and which didn't).

EDIT:
[http://ftp2.eu.openbsd.org/pub/OpenBSD/](http://ftp2.eu.openbsd.org/pub/OpenBSD/)

------
keithpeter
_"...the kern.usermount sysctl is also no more. Administrators who want to let
users mount devices will need to configure doas(1) for that task."_

For convenient laptop use I prefer allowing my user account to mount USB
drives and have that done by mouseclicks in some way.

Antoine Jacoutot's toad package is not in the 6.0 packages collection, but
xfce4-mount is, so I assume I can set up the appropriate doas rule for a user
and then configure xfce4-mount to ignore the local drive. I shall have a play
on Sunday.
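
For anyone attempting the same, a doas.conf(5) rule along these lines should do it (username is a placeholder; untested by me):

```
# /etc/doas.conf -- hypothetical rule; "keith" is a placeholder user.
# Lets one user run mount/umount as root without a password, which a
# GUI tool like xfce4-mount can then invoke through doas.
permit nopass keith as root cmd /sbin/mount
permit nopass keith as root cmd /sbin/umount
```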

------
protomyth
I've never really had a separate /usr/local partition, but it looks like that
might not be such a bad idea given the upgrade guide:
[https://www.openbsd.org/faq/upgrade60.html](https://www.openbsd.org/faq/upgrade60.html)

~~~
Esau
I like having separate partitions (or slices) for everything. The guy who
introduced me to UNIX did it so he could mount certain filesystems "ro" or
"noexec". He also told me that partitioning can help avoid inode exhaustion
but I really doubt that is an issue with modern filesystems.

I still partition with NetBSD. It just feels right; even if not necessary.

~~~
Tharkun
I never run out of disk space, but I frequently run out of inodes. It's
definitely still an issue. And if you don't stop to think about it when you're
creating your file system, it tends to bite you in the arse at a later date.

~~~
Esau
Out of curiosity, can you share what platform/file system you generally use? I
had assumed inodes would not be an issue with modern operating systems;
especially 64-bit ones.

~~~
kazinator
On traditional Unix filesystems, a fixed number of inodes is provisioned when
a filesystem is created. Though of course the upper bound on that is limited
by the width of the numeric type ino_t (or whatever plays that role in kernel
space), in practice the actual limit is lower.

I'm looking at a filesystem I have handy here; the root FS in an Ubuntu Linux
VM. dumpe2fs /dev/sda1 says that the inode count is 1146880. Just a million
and something: way less than even what a 32-bit inode number can represent.

That filesystem simply cannot have more objects than that, in total.

With this type of filesystem, you have to estimate how many objects you will
ever need to store in it, not only how much total storage. If you estimate too
high, you waste inodes. Suppose inodes are 128 bytes wide and you provision
four billion of them. Oops, that's like half a terabyte of storage dedicated
to the array of inodes!

------
jrcii
Yay! I get to spend 5 hours figuring out how to update the syntax for my
pf.conf rules.

~~~
ben_bai
I'm assuming you run 5.9.

There is only one entry in
[http://www.openbsd.org/plus60.html](http://www.openbsd.org/plus60.html) so I
would assume you don't have to change your rules. Also pfctl -n should tell
you.

"In pf.conf(5), change the parser to make af-to on pass out rules an error.
This fixes a bug where a nonworking configuration could be loaded."

------
nn3
It's good that they have their priorities straight. No more Linux binary
support (who needs compatibility anyway?), but instead you get 5 songs sung by
the project leader.

~~~
IntelMiner
If you're running OpenBSD, you're likely running it for the features it
provides (security and code correctness).

If you want to run Linux apps, run them on your Linux box and use your OpenBSD
machine to firewall it off.

