Here is a nice starting point for a generic rb tree structure in SML.
(* red-black tree in Standard ML, after Okasaki *)
type key = string
datatype color = R | B
datatype tree = E | T of color * tree * key * tree

fun rbmem (x, E) = false
  | rbmem (x, T (_, a, y, b)) =
      if x < y then rbmem (x, a)
      else if x > y then rbmem (x, b)
      else true

(* Standard ML has no or-patterns, so the four rebalancing cases
   must be written out separately; all rewrite to the same balanced tree. *)
fun balance (B, T (R, T (R, a, x, b), y, c), z, d) = T (R, T (B, a, x, b), y, T (B, c, z, d))
  | balance (B, T (R, a, x, T (R, b, y, c)), z, d) = T (R, T (B, a, x, b), y, T (B, c, z, d))
  | balance (B, a, x, T (R, T (R, b, y, c), z, d)) = T (R, T (B, a, x, b), y, T (B, c, z, d))
  | balance (B, a, x, T (R, b, y, T (R, c, z, d))) = T (R, T (B, a, x, b), y, T (B, c, z, d))
  | balance body = T body

fun insert (x, s) =
  let
    fun ins E = T (R, E, x, E)
      | ins (s as T (color, a, y, b)) =
          if x < y then balance (color, ins a, y, b)
          else if x > y then balance (color, a, y, ins b)
          else s
    val T (_, a, y, b) = ins s  (* ins always returns a non-empty tree *)
  in
    T (B, a, y, b)
  end
The comparisons between filesystem-level and block-level encryption that I've encountered usually make one common distinction: namely, that file metadata is still exposed when only fs-level encryption is applied. What are some attributes of fs-level encryption that would make it a superior choice over block-level?
There are three huge advantages filesystems have over block devices when it comes to encryption:
1. They have storage flexibility, so they can allocate metadata for authenticators and nonces. The fact that sector-level crypto can't do this means that the "state of the art" in efficient sector crypto is essentially an unauthenticated, ECB-like mode.
2. They're message-aware, so they can apply authentication at meaningful boundaries; a block crypto device is essentially a simulated hard disk, and so it doesn't know where files begin and end.
3. Being message-aware, they can protect files at a finer granularity than "all or nothing", which is, for instance, the security failure that made it so easy for the FBI to convict Ross Ulbricht for Silk Road.
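To make the first point concrete, here's a minimal Python sketch of what a filesystem can do that a sector device can't: store a per-object nonce and authentication tag alongside the ciphertext. This is my own toy construction for illustration only (a SHA-256 counter-mode keystream with an HMAC tag), not a real AEAD and not any real filesystem's scheme.

```python
# Toy sketch: a filesystem can absorb per-file crypto metadata (nonce + tag),
# while a sector device has exactly 512/4096 bytes per sector and nowhere
# to put either. The "cipher" here is a toy keystream built from SHA-256
# in counter mode -- illustration only, never use this for real data.
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # 48 extra bytes the filesystem can store as metadata

def open_(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)
blob = seal(key, b"file contents")
assert open_(key, blob) == b"file contents"
```

The point is the 48 bytes of overhead per object: a filesystem can put that in an inode or extended attribute, while a simulated disk must map plaintext sectors 1:1 to ciphertext sectors, which is exactly why it falls back to deterministic, unauthenticated modes.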
A lot of concerns about filesystem crypto stem from the fact that filesystem crypto precedes sector-level crypto, and most of it was designed in (or has designs tracing back to) the 1990s. What people who don't spend a lot of time studying crypto should remember is that nobody knew how to encrypt anything in the 1990s. It was a unique and weird time, when there was a lot of demand and interest in crypto, but not enough knowledge to supply crypto effectively.
So we should be careful about judging filesystem crypto by the standards of the 1990s.
>Trying to hold on to worthless jobs is a terrible but popular idea.
It seems warm and fuzzy to think that Sam, and the implicit company he keeps (the
ultra-rich), who are "leveraging not only their abilities and luck" but also
already-accrued wealth, can and will redistribute it. Anyone who wasn't born
yesterday will simply laugh at this prospect.
I'm not sure why Sam feels the need to call what most of the world is doing
worthless. I think it's crude and indicative of a narrow social and cultural
experience (which surprises me considering his position).
Believe it or not, there are cultures and groups of people who do not revere
technology the way most North Americans do.
Also, a good exercise for Sam (and others possessing a similar world view)
might be to think about how many "worthless" people and jobs it takes to
accomplish the things he does (including this blog post).
The problem I have with this mindset is that it treats all value systems as equal.
At the end of the day, if your culture and economic system consistently create a poorer quality of life for their people, winning on the percentage of employed citizens doesn't mean a thing. You're treating the absence of disease as the measuring stick of health, when it's simply one piece of the puzzle.
I think the thing Sam has been trying to do for the last few years is get others to think about ways we can enrich more lives as a whole, without simply slowing labor and progress in their totality; while that can work in the short term, it can severely inhibit our long-term ability to eradicate things like hunger, disease, or poverty.
The thing you also need to be careful of along the way though, is not making perfect the enemy of good.
Unfortunately, many societies today tie one's official profession/title too strongly to self-worth. I don't believe sama was implying that those who work those jobs are worthless. Instead, it seems like he's trying to say that there are far better ways to accomplish the same objective, and we shouldn't ignore them.
Because our systems of retraining and placing workers into a new profession are so terrible, it's common to assume that many or most displaced workers will remain unemployed. Breaking this status quo is essential to giving everyone a fair chance to work on what truly drives them while we automate more and more worthless jobs. That's why I'm so excited about free and widely available educational resources springing up online. It's not perfect yet, but we had to start somewhere. I have a deep respect for everyone (including sama) who has helped build or teach a MOOC.
>1. modularity (and now that they have added generative functors à la SML, you can have true abstraction)
Could you share some information regarding Haskell and its 'modularity' problem (vs. the ML family of languages)? I'm fond of the way SML projects can be structured. How are people solving this using Haskell? Are there any interesting solutions for creating modular Haskell applications/systems I can see today?
There is a weak form of modularity which can be achieved by using two parts of Haskell:
1. hiding constructors behind module export lists
2. type classes
But it doesn't really begin to approach the kind of structuring that is possible in an ML-like system. Unfortunately, the issue is very (needlessly) controversial, and I don't think I want to get dragged into it here.
I'll mention, though, that Haskell does have one form of modularity which ML doesn't really have, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases. This is because of two things: Haskell is non-strict, and GHC has pretty good fusion. In ML, you often end up manually fusing things together in order to get good performance, so composition can be a bit more difficult.
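As a loose analogy (in Python rather than Haskell, since the composition idea is language-independent): a generator pipeline composes separately written stages without ever materializing an intermediate collection, much like fused non-strict code, whereas a list-based pipeline allocates an intermediate list at every stage, which is roughly what you end up fusing by hand in a strict ML.

```python
# Loose analogy for the fusion point above: generators compose
# separately written stages element-by-element, so no intermediate
# sequence is ever allocated (like fused, non-strict code).

def squares(xs):          # one "separately written algorithm"
    return (x * x for x in xs)

def keep_even(xs):        # another, composed after the fact
    return (x for x in xs if x % 2 == 0)

# Composed pipeline: each element flows through both stages in turn;
# the full list of squares is never built.
result = sum(keep_even(squares(range(10))))
print(result)  # 120 = 0 + 4 + 16 + 36 + 64
```

The strict, unfused equivalent would be `sum([x for x in [x * x for x in range(10)] if x % 2 == 0])`, which builds two intermediate lists; recovering the single-pass version by hand is the manual fusion the comment describes.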
> I'll mention, though, that Haskell does have one form of modularity which ML doesn't really, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases.
I'd actually argue the opposite. MLton is one of the best whole program optimizing compilers I have ever used. There are virtually no penalties for abstraction. OCaml has a bit of trouble here, from what I've heard, but I've never personally run into serious performance problems as a result of abstraction.