I've seen this format presented before; it seems to be a new thing, but I've never navigated one myself.
Edit: Just saw a comment about pressing space bar, that seems to work linearly. Thanks! (Source: https://news.ycombinator.com/item?id=11653281 )
I find slides.com very handy because I can present from any web browser and don't have to worry about bringing my laptop.
If you hit space, how do you know you end up seeing every slide?
For example, someone else said that if you hit ESC you get an overview. How do you find that? What if you don't have an escape key, such as on my tablet?
I'm not trying to be grumpy or anti-anything, I'm trying to gain an insight into how people think this is a good thing, and to provide, in return, an insight into why I think it's unusable.
When you create a deck, there is a pretty good tutorial explaining how the interface works, including what spacebar and escape do.
The real advantage of the service, for me, is the support for embedding any digital media you want into a slide via iFrames, and the ability to use your phone/tablet to advance slides and see speaker notes if your venue did not provide you with a clicker.
It also has another pretty neat feature where audience members can pull up the presentation on their laptops/tablets to follow along, and their slides will automatically advance to match my progress through the deck.
I'm just trying to explain why it's very much a tool for the presenter, not the presentee.
Edit: Slides.com is a locked-down instance of reveal.js
Here is a link to their deck, explaining all of its features: http://lab.hakim.se/reveal-js/#/
1. go down until you can't go down anymore
2. go right once
3. goto 1
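If the space bar really does implement that loop, it's just a column-major walk over a ragged grid, and it does reach every slide exactly once. A toy simulation of the order (the column depths here are invented):

```shell
# Simulate the claimed spacebar order: each argument is the number of
# vertical slides in one column; print column.row for every slide in
# "down until you can't, then right once" order.
visit_all() {
    col=0
    for depth in "$@"; do
        col=$((col + 1))
        row=1
        while [ "$row" -le "$depth" ]; do
            printf '%d.%d ' "$col" "$row"
            row=$((row + 1))
        done
    done
    echo
}

visit_all 3 1 2    # prints: 1.1 1.2 1.3 2.1 3.1 3.2
```

Every slide appears once, which is the property being asked about; of course this only helps if you already trust that space follows this order.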
Prime example of sacrificing usability for looks.
What about on my tablet, which doesn't have a space bar?
Edit: These are genuine questions, and I am deeply frustrated that it's being downvoted without any discussion. People are claiming that hitting the space bar takes you linearly through the entire presentation. How do you know you reach every slide like that?
Really, seriously, how do you know?
As far as I'm concerned the navigation is opaque - if I hit space, I don't know that I'll reach every slide.
How do you know ?!?
I suspect that people are enamoured with the attractiveness of the presentation - it's cool, it's slick, it's gorgeous, it's wonderful - and when I question its usability or discoverability, people are downvoting because they have no answer, and just get pissed off at being questioned.
Do you actually have an answer? If so, tell me.
I'm not even sure if it's users, or just developers and website owners. As in: oh man, that looks so cool, let's do that!
Then if some equally stupid "journalist" gives praise and publicity to their foolish UI/UX this trend increases and perpetuates.
Unfortunately, it seems to be happening to a very large extent.
The irony of this article is that it talks about security, and calls for anti-bloat things.
I'm not totally sure if moving sideways pops you back up to the top row (which it should) or to the equivalent depth of the neighboring stack.
I actually really like this style. It allows you to quickly skip across the top row to get the main points, then dive deeper when needed.
By looking at one of the slides, is what you've said obvious? How can you tell? From the start it's not obvious to me that there's a grid of slides. It seems to me that you need to experiment to work out how it works, and it's really not obvious. You yourself say that you don't know if moving sideways pops you back to the top of the next column.
In short, it's all taking my attention away from the presentation, it's making me work to figure out what's going on, and it's just not obvious. Ask yourself if this is a good thing - making your audience work to figure out how the presentation works, taking them away from the content. It's swish, it's fancy, ...
... it's obscuring the actual content.
I certainly do feel like I missed parts.
It's not like it's big or broad enough for the reader to only be interested in particular 'chapters'.
Additionally, here's a small op-ed piece that is supposed to go with it: http://mricon.com/i/airbags-and-steel-frames.html
Btw, one thing worth correcting is the false claim that QubesOS was or is the only attempt at workstation security. I've evaluated almost a dozen over the past 10 years, with some still existing. I'll list those here:
You really need to look up separation kernels, as isolating the most critical stuff in a dedicated partition protected by a 4-12kloc kernel is one of the strongest approaches. seL4 and Muen are examples, with GenodeOS a FOSS attempt at a Nizza-like architecture with a strong foundation and best-of-breed components (esp the Nitpicker GUI). High-assurance security is moving forward with hardware-software architectures, with one maybe getting an SoC release (plus source code) in 1-2 years. Yet our prior work with separation kernels/VMMs plus safe code (esp SPARK Ada or C with the Astree Analyzer) for trusted components is still stronger than any crap mainstream FOSS, VMware, etc. are making. They rarely learn from the past.
Note: Email me if you want more examples of past and current high-assurance work. I have collected them for most focus areas with papers, prototypes and/or products.
Hey, I'm not the one who linked to slides.com. :) The PDF version is linked off the main conference page: http://kernsec.org/files/lss2015/giant-bags-of-mostly-water....
> Btw, one thing worth correcting is false claim that QubesOS was or is only attempt at workstation security.
You must look at my statements in the context of presenting this at the Linux Security Summit. You know a lot more about this than me, obviously, but from what I can tell, each of the other solutions you mention runs a custom non-Linux microkernel that provides virtualization to other consumer OSes. I'm ready to be educated here, but I believe I didn't misstate that QubesOS was one of the first pure-Linux mainstream attempts at workstation security through compartmentalization.
EDIT: It was 28MB so I compressed it down to 1.7MB here (image quality won't be as good, but meh): https://www.dropbox.com/s/8bu3rkj6pjbneiv/giant-bags-of-most...
Re "one of the first pure-Linux mainstream attempts"
Damn, I'd have had you if you hadn't said mainstream. This statement is so well-worded I might have to agree with it. The sad part, though, is that it's because the mainstream rarely accepts anything more secure, esp high integrity/security. Rust and QubesOS are among a tiny set of exceptions.
Am I the only one who is skeptical about it?
From what I saw superficially reading their source code, there is some frightening stuff going on:
* tons of C code with nearly zero unit tests, same with the python code
* lots of glue in the form of bash or python scripts
* some not so beautiful stuff like:
- https://github.com/QubesOS/qubes-core-agent-linux/blob/maste... (kill -9 on a daemon...)
- https://github.com/QubesOS/qubes-core-agent-linux/blob/maste... (a daemon is a little bit more than an exe launched with '&')
- https://github.com/QubesOS/qubes-core-agent-linux/blob/maste... (changing a config file in an init script, humm, weird...)
- https://github.com/QubesOS/qubes-core-agent-linux/blob/maste... (starting a service inside the init of another service...)
- https://github.com/QubesOS/qubes-core-agent-linux/blob/maste... ("logging" with stderr redirection in a file)
And it's just the init scripts... I'm too lazy to take a look further inside the C or python stuff.
IMHO, as a proof of concept, it's interesting, as a finished, reliable and secure OS, it's frightening...
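On the kill -9 point: the conventional pattern those init scripts could use is TERM, wait, then KILL only as a last resort, so the daemon gets a chance to clean up its state. A minimal sketch, with all names hypothetical (this is not Qubes code):

```shell
#!/bin/sh
# Hypothetical daemon start/stop pair. The stand-in daemon traps TERM so
# it can remove its pid file before exiting; the stop side escalates to
# KILL only after giving it a few seconds to comply.
PIDFILE="/tmp/mydaemon.$$.pid"

start_daemon() {
    (
        trap 'rm -f "$PIDFILE"; exit 0' TERM INT   # clean up on polite shutdown
        while :; do sleep 1; done                  # stand-in for real work
    ) &
    echo $! > "$PIDFILE"
}

stop_daemon() {
    pid=$(cat "$PIDFILE" 2>/dev/null) || return 1
    kill -TERM "$pid" 2>/dev/null        # polite request first
    i=0
    while [ "$i" -lt 5 ]; do
        kill -0 "$pid" 2>/dev/null || return 0   # gone? done.
        sleep 1
        i=$((i + 1))
    done
    kill -KILL "$pid" 2>/dev/null        # last resort, never the default
}
```

A kill -9-first style skips every trap handler and leaves pid files and sockets behind, which is exactly the kind of thing that bites you on the next restart.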
"The only serious attempt at workstation security"
"The Volvo of blah blah"
Quite a slam to those of past and present that handed NSA or DOD pentesters their asses back to them. It might be more accurate if you said "a FOSS attempt at workstation security", minus the Volvo part. The Volvo title probably goes to INTEGRITY-178, as SKPP certification requires more attack areas to be covered plus 2 years of pentesting for the kernel. The Genode architecture is the prime contender in FOSS as far as foundations go. Next time a FOSS project claims to be designed securely, just ask for a covert storage and timing channel analysis of any components that handle secrets. They'll either say "Huh? What's a covert channel analysis?" or "We don't really have anyone doing that as we're too understaffed, or it doesn't really matter." ;)
Anyway, the best way to do it is a microkernel or separation kernel that virtualizes Linux, with security-critical stuff running directly on the microkernel and communicating through protected IPC. That's the MILS/SKPP model, whose first commercial releases were around 2005 or so, with security kernels doing similar stuff in the 90's. The closest thing to that in FOSS is GenodeOS: it uses many proven components from CompSci like Nitpicker, Muen, and seL4. They're a small outfit. They need contributors to get it into Beta shape.
Separation kernels & their platforms
Note: CompSci stuff that explains how these things work really well without corporate marketing. :) This is similar to separation kernels below, GenodeOS and Sirrix's TrustedDesktop.
Note: The design techniques listed here make for strong guarantees against apps screwing it up. Esp "hard currency." Worth copying by any FOSS project.
Note: One with more features and risk to make app development easier. Nice visuals showing split between untrusted OS's & runtime partitions w/ careful comms.
Note: Great presentation on CMW capabilities & risks. Several are on market but Argus is most mature & featured. Orange Book taught us to use CMW/OS features for medium-assurance, damage reduction with security kernels isolating those & individual tasks for high-security. Balance in everything. Argus runs on Red Hat Linux now.
So, yes, there are plenty of them going back to the 90's, with a variety of security tradeoffs. The strongest ones use a split app architecture with untrusted VM's for legacy stuff (or OS's) plus isolated runtimes on high-security kernels. Usually custom, slimmed-down stacks for filesystem and networking that isolate against OS-level attacks. A few, like GEMSOS (old) or Muen (recent), were implemented in safer languages to reduce error potential. Such features were in my recommendations to the QubesOS mailing list, rejected entirely, and some were later implemented as if it was their idea all along. Best to avoid it unless vanilla malware is the concern. ;)
*Ugly* bags of mostly water.
And along with several other commenters here, I strongly dislike this sort of multi-direction navigation with no overall map showing what's there, where I've been, and what I haven't yet visited. Beautifully designed and presented, with no concern for the user experience.
A bit like the cars being described.
The car is designed | The presentation is
perfectly. | designed perfectly
Any crashes are the | Any inability to navigate
driver's fault. | is the user's fault.
EDIT II: Found a script: First reference is in an initial translation and is:
Ugly... Ugly... Giants...
Bags of Mostly Water...
Ugly Bag of Mostly Water
EDIT III: Wow - downvotes! No complaint, obviously people feel either that this comment is wrong, or doesn't belong. I'd appreciate knowing why people might think that, but I guess I'll never know. Which is a shame, I'd welcome the opportunity to learn.
On the other hand, organizing slides by index/subindex, plus the fact that it's on the internet already, won me over.
These are genuine questions - I'm heavily into usability (in a different context) and I'd be interested to know what people try, and what they conclude. In my context people are uneasy about hitting keys at random just to see what happens. On some web sites I'm afraid to move the mouse, because the mouse-over event has been trapped, and things happen that I don't want, and can't undo.
So how did you discover that ESC will produce an overview?
To the non-initiated, lawyers and infosec people are seen with nearly-equal amount of both dislike and trepidation. They are seen as a force of lawful evil that descends on your team and starts telling you that all those cool things you're trying to do cannot actually be done, or must be done in a non-obvious roundabout way. When asked for reasons, both lawyers and infosec start talking about concepts that are entirely unfamiliar to most devs (code provenance, license agreements, trademarks, patent litigation, IP isolation, containers and namespaces, RBAC policies, multifactor authentication). All you care about is that this is a person who is telling you that your project, 99% complete after your team worked multiple 60-hour weeks, must be delayed until a bunch of things -- that you don't consider broken! -- are fixed.
However, this is where things usually go differently. If a lawyer comes to management and says "this project cannot launch because a bunch of code was copy-pasted from stackoverflow and links with an incompatibly-licensed library," the management is likely to listen even if they don't understand a word of what was said -- because they know the importance of lawyers and know that, in the long run, litigation is extremely expensive. However, if an infosec person comes to them and says "this project cannot launch because they have a PHP script running as root that listens on external port 80," management will not value this input nearly to the same degree, even though, in the long run, a bad security vulnerability can have just as much of a detrimental impact on a company as litigation -- and probably worse, because you won't be able to hush-hush and "settle out of court."
The reasons for this are multiple -- infosec is in infancy compared to the legal field, and, sadly, many IT security practitioners tend to look and act in a way that makes their recommendations carry so much less weight with upper management.
So, where I'm going with this is -- if you work for a company in an infosec field and you genuinely want to improve things to the point where management actually starts to listen (which translates into $$ for your team and your projects), then you need to both convince them that your expertise is equally as important as the lawyers', and probably present yourself with the same amount of gravitas as those working on the legal team.
: in that order preferably : )
After that, Red Hat's documentation was probably the next most useful thing for getting started: https://access.redhat.com/documentation/en-US/Red_Hat_Enterp...
But mostly it was a lot of prodding and poking whilst setting various things up that helped a lot, and remembering that when something doesn't work, to check selinux first.
The 'settroubleshoot-server' package for the Red Hat based distributions is also good whilst you're getting started in dev, as it takes the avc logs and 'guesses' what the likely cause of the problems you've had are, giving percentage likelihoods of the policies and booleans that might be causing problems.
If you configure logging correctly -- or are just aware of it, really; you can always optimize later -- you are no longer SysAdmin-ing blind.
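To make the "avc logs" part concrete, here's roughly what a denial looks like and the fields worth reading first. The denial below is fabricated (the contexts are made up; the general shape is what lands in /var/log/audit/audit.log), and in practice `ausearch -m avc -ts recent` and setroubleshoot's `sealert -a /var/log/audit/audit.log` dig these out for you:

```shell
# A fabricated AVC denial, in the shape they appear in the audit log:
avc='type=AVC msg=audit(1462200000.123:456): avc:  denied  { read } for  pid=1234 comm="httpd" name="index.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file'

# The triage questions: what was denied, which domain (source context)
# tried it, and what kind of object (target class) it hit.
echo "$avc" | grep -o 'denied *{ [^}]* }'    # the denied permission(s)
echo "$avc" | grep -o 'scontext=[^ ]*'       # the domain that got blocked
echo "$avc" | grep -o 'tclass=[^ ]*'         # the object class involved
```

Once you can read those three fields, the settroubleshoot "guesses" (and `audit2allow` suggestions) become much easier to sanity-check instead of blindly apply.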
Packages, troubleshooting tools, practical tips and names of stuff (avc logs).
I can't stand this kind of "2-d" presentation...
one right, then waterfall down, and then loop.
They mention a VPN or insecure access panel having bad permissions, but recommend a mixed bag of differently coloured jellybeans as the solution without once recommending shutting down the PHP script, allowing access from the VPN only through certificate, password and hardware two-factor authentication, and ensuring good access controls and employee on- and off-boarding systems.
Far more importantly, I question the efficacy of any security recommendation that doesn't mention threat modelling at all. What is it you want to protect? What's it going to cost to protect these things? What's it going to cost to lose them? What's the simplest and most effective way of protecting them? Is it really moving your entire system to a different platform and upgrading all your crypto? Ask yourself: are we really installing air bags, or are we building our car out of armour plates? Some kid is going to spend 2 hours on XSS in your app if you spend all your resources investing in in-datacentre encryption and service-to-service authentication.
As someone who practices security, I found the keywords you can pull from the slide reasonable in their suggestions to follow up on. There were a couple of places he went into the weeds, and I think he probably could have talked up iOS security a bit more instead of smart cards which are a bit overkill relative to his other suggestions.
But, this is just a slide deck. Try not to rush to judgement considering we didn't hear the talk that came with it.
You may work somewhere that this is the case, but I can't count the number of times I have tested an application where someone has equated security to having an A+ HTTPS rating.
> This is a slide deck
Understood, and something I didn't consider before. That said, I think my comments will still be useful to those here who have also not seen the original talk.
As a developer, I definitely liked the framing of the presentation. Though I don't think it goes far enough in emphasizing defense in depth. Put simply, user workstations should ideally never be trusted. Getting into the network shouldn't help an attacker much if everything requires authentication even once you're in.
In terms of mitigating severity from crashes, cars are far ahead of airplanes.
Of course, air travel is a great area to draw on for security when it comes to professional computer users (developers, sysadmins, etc.). A lot of the practices which have improved safety in air travel (automation, checklists, blameless post mortems) are directly applicable to computing safety when it comes to people whose job it is.
Because it's really the only sensible way to compare different forms of transportation.
Imagine a teleportation device with an accidental death rate of 1 per 10 trillion miles. But it only takes 2ms to operate (regardless of distance), so the death rate measured in hours would be horrible.
Compare that to covered wagons. Per mile, they're quite dangerous—going on a long trip in one might mean a 1 in 10 chance of death. But they're also extremely slow. So measuring death in terms of hours would make them look safe.
From a safety perspective, which would you rather use?
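Putting rough numbers on it (the teleporter figures come from the comment above; the wagon figures are my own guesses), the two metrics really do flip the ranking:

```shell
# Compare deaths per mile vs. deaths per hour for the two hypothetical
# modes of transport. All inputs are made-up illustration numbers.
awk 'BEGIN {
    tele_pm   = 1 / 1e13             # teleporter: 1 death per 10 trillion miles
    wagon_pm  = 0.1 / 2000           # wagon: assume 1-in-10 death over a 2000-mile trip
    tele_mph  = 3000 * 3600 / 0.002  # a 3000-mile hop in 2 ms, scaled to one hour
    wagon_mph = 2                    # walking pace

    printf "deaths/mile:  teleporter %.1e   wagon %.1e\n", tele_pm, wagon_pm
    printf "deaths/hour:  teleporter %.1e   wagon %.1e\n", tele_pm * tele_mph, wagon_pm * wagon_mph
}'
```

Per mile the teleporter is roughly nine orders of magnitude safer; per hour the wagon "wins". Since what you buy from transport is distance covered, per mile is the honest denominator.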
How can we federate identities and manage them safely?
Plus, once you decide to be connected, how do you make AAA systems talk to each other?
And then, sometimes you need to make money ... and, you know, safely pass tokens back and forth... And what standard solution do we have that is not a framework?
Well, 3GPP proposed IMS based on IETF Diameter. Still not there.
Some proposed Role Based Accounting and proxy authz based on LDAP ... well not really deployed.
So we are also waiting for a new standard for intercommunication between centralized Enterprise Directories, one that has tokens, tickets, multiple authentication policies according to origin, roaming, a sane schema...
LDAP is honestly a good tool; underneath, it is actually a secured, strongly typed NoSQL store. But I've hardly seen any devs understand the anonymous-bind-then-authenticated-bind mechanism.
I do feel our biggest problem is not in the tools/technology, but rather in too much education of people that are useless in production.
AWS, and I assume Google's stuff, will let you integrate their authentication system with your internal one using SAML to authenticate.
It seems to me like modern security relies on trust in "experts" rather than understanding the basics. But I guess I am wrong in my appreciation?
The end result is you have a low probability of huge impacts occurring, without enough data to suggest the distribution of low probabilities, or the nature or distribution of huge impacts. You're sort of selling tiger-repellent rocks, and when the big event does happen, you now have to convince the executives of the counterfactual that if you had the budget to implement X, Y and Z, the event would have been prevented. Meanwhile, the costs on team velocity are silently ignored.
The best case scenarios here probably revolve around insurance. Compliance auditors can impose non-compliance fees, and giving security teams a direct financial consequence avoided to point to would help their budget justifications.
What gives? More comments on the format than on the content.
People, read the comments before you comment. Chances are someone already complained about the slide format or the font color or whatever.
This is a very common attitude of sysadmins who think that configuration management is the only thing they need to do to "become DevOps". Sadly, years after DevOps movement has started, the majority of people who are "doing DevOps" are those sysadmins who just added Puppet or Chef to their toolbox.
Security is a very difficult subject when it comes to DevOps practices, but the approach given in this presentation is definitely something I would not want to be part of. Unless what they are securing is a nuclear reactor control center.
It looks like the reveal.js pdf export function of appending ?print-pdf to the URL is broken.
EDIT: Thanks to someone in the comments below who linked to the PDF; it was 28MB so I compressed it down to 1.7MB here (image quality won't be as good, but meh): https://www.dropbox.com/s/8bu3rkj6pjbneiv/giant-bags-of-most...
Stop talking cars and analogies!
They just never will. Follow the money. The money is paying for revenue generating activity.
Using your example that "zones and SmartOS" are preferable to "SELinux and Linux Containers"... that's not going to happen. The people with the money have already chosen the container direction.
A much better strategy would be to give up on what you want, and focus your energy on making the direction that's been chosen more secure.
I suspect that if enough talented, security-minded engineers descended as contributors on docker, rkt, etc., the situation would improve much faster than with the current direction of just complaining that they aren't secure.
Perfect is the enemy of good.
That has actually already taken place, and is taking place. There is unlikely to be a winner-take-all outcome.
Following the money does not mean that the application cannot be programmed from the ground up to support SmartCards and roles, or that it has to be full of security holes.
> I suspect if enough talented, security minded engineers descended as contributors for docker, rkt, etc.
Depends on what is under the etcetera. Docker and rkt are not the silver bullets that everyone who has not gotten busted by them thinks they are; they are just a trend. With all of those you instantaneously lose lifecycle management, because they are just images of massive file dumps, not images of software and configuration installed with packages. When you use Solaris zones in SmartOS, Docker and rkt become completely superfluous, because you suddenly get a fully working yet completely isolated UNIX server, running at the speed of bare metal no less. Add some OS packages on top of that, make them into an image for imgadm(1M), and in a few seconds you're done. What does one need Docker for in that scenario?
And I should certainly hope that perfect is the enemy of good, because life has taught me, the hard way, that good isn't good enough. I absolutely hate being woken up during the night because of an incident, and will go out of my way to get as close to perfect as possible in order to be able to sleep through the night.
"Not going to happen" meaning it's not going to overtake docker/containers in terms of overall mindshare, activity, etc.
If you're focused on driving the best solution in some sub niche, yes...you can be successful with that.
But the larger market is going to continue to prioritize things that directly generate revenue over everything else.
I would be hard pressed to call securing containers, virtualization and cloud a niche, since that is exactly what SmartOS has been designed for, from the ground up.
A large part of generating revenue is not having downtime caused by data corruption incidents or security breaches (or both), which means picking and using a substrate which provides guards against that.
Just as Betamax was a technically better solution than VHS.
"They are dumb, because they don't do it how I would do it. I know better. They are so stupid."
I'm not a security professional by any means, but I'm interested in learning how to do stuff better.
Can you please reframe your response with solutions, and not snark?
Hardly, as it would require writing a book.
But, what I can do is give you some starting points:
Set up a TFTP server, a DHCP server, and use PXEGrub (ipxe is flaky). Boot the system off of the network. As soon as you log in, read the manual pages for imgadm(1M), and vmadm(1M), then pull down the "base64" image. Also read up on Solaris zones. Oracle documentation will do, as SmartOS and Solaris 10 are similar to a good degree.
Next, read up on pkgsrc, and make a simple "helloworld" package. After you get all that working, get Gemalto SmartCard code working on SmartOS (as Solaris was a big market for SmartCards, the code should still work on SmartOS).
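The imgadm/vmadm workflow mentioned above looks roughly like this; the image UUID is a placeholder, and the manifest shows only the common fields, so treat it as a sketch, not a reference (the man pages are the authority):

```shell
# Find and import a base64 image into the local image store:
imgadm avail | grep base64
imgadm import <image-uuid>

# Minimal zone manifest; see vmadm(1M) for the full property list.
cat > zone.json <<'EOF'
{
  "brand": "joyent",
  "alias": "hello-zone",
  "image_uuid": "<image-uuid>",
  "max_physical_memory": 256,
  "nics": [{ "nic_tag": "admin", "ip": "dhcp" }]
}
EOF

vmadm create -f zone.json    # seconds later: a full, isolated UNIX server
vmadm list                   # confirm the zone is running
```

From there, `zlogin` into the zone and start adding pkgsrc packages.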
> "They are dumb, because they don't do it how I would do it. I know better.
if you want security, you have no business running Linux-anything, not now, not ever: the choices are either OpenBSD or SmartOS, and rejoice that we do have alternatives.
That's a very bold assertion and you haven't even tried to back it up. Who do you think is going to read that and say “some random Internet commenter is right and Google/Facebook/the NSA/etc. must all be incompetent”? You really need to back that claim up and do so comprehensively across the entire security landscape from basic design to operational concerns like patch management.
(As an example: I've never logged into a Solaris / Illumos machine which was current on security updates because the admins didn't have the confidence which Debian admins have had since the 90s that updates won't break things. I'm open to the argument that they're all paranoid or incompetent but that's probably the most important real world security task and it at least merits discussion)
That depends on one's experience level. With the requisite experience, one recognizes a solution worth doing one's homework on when someone mentions it, and so will anyone else who has been doing this kind of work long enough. For example, the NSA has been both using and modifying (Open)Solaris for decades now; you can find their papers on securing it with IPsec and locking the OS down, as they've been released to the public. They are still relevant, both in an illumos / SmartOS and a security context today. Now with virtualization, cloud and containers, more so than ever. The NSA was prescient, or at the very least had common sense.
Apropos Google, Facebook, trendy... two things:
"if a hundred million flies all eat shit, surely they can't be wrong?"
I've been doing containers, virtualization and UNIX since long before Facebook and Google even got the idea, even before those companies existed (;-)
When you dedicate every waking moment of your life to studying and mastering UNIX, not only professionally but privately, in time you obtain the insights needed to understand and evaluate what is and isn't good technology. Put enough priority 1 incidents and sleepless nights behind you, and you will know what works, and what doesn't.
As Paul Graham so aptly put it, "When you choose technology, you have to ignore what other people are doing, and consider only what will work the best."
I want to address this separately: UNIX and Solaris, from which SmartOS stems, practically invented the concept of backwards compatibility, the guaranteed-not-to-break application binary interface, and paranoia about end-to-end data integrity. The system administrators' concerns are themselves disconcerting: imagine you need to have a brain tumor removed, and the surgeon who is to remove it is not specialized in neurosurgery. You write about sysadmins who are unfamiliar with the subject matter... and as professional system administrators, they well should be familiar with it. With this in mind, would you trust them to know what they are doing?