Hacker News | YAYERKA's comments

Here is a nice starting point for a generic red-black tree structure in SML.

        (* generic red-black tree in Standard ML of New Jersey;
           `key` is fixed to string here, but can be swapped out *)

        type key = string

        datatype color = R | B

        datatype tree = E | T of (color * tree * key * tree)

        fun rbmem (x, E) = false
          | rbmem (x, T (_, a, y, b)) =
              if x < y then rbmem (x, a)
              else if x > y then rbmem (x, b)
              else true

        (* SML has no or-patterns, so the four rotation cases
           are written as separate clauses *)
        fun balance (B, T (R, T (R,a,x,b), y, c), z, d) = T (R, T (B,a,x,b), y, T (B,c,z,d))
          | balance (B, T (R, a, x, T (R,b,y,c)), z, d) = T (R, T (B,a,x,b), y, T (B,c,z,d))
          | balance (B, a, x, T (R, T (R,b,y,c), z, d)) = T (R, T (B,a,x,b), y, T (B,c,z,d))
          | balance (B, a, x, T (R, b, y, T (R,c,z,d))) = T (R, T (B,a,x,b), y, T (B,c,z,d))
          | balance body = T body

        fun insert (x,s) =
            let fun ins E = T (R,E,x,E)
                  | ins (s as T (color,a,y,b)) =
                    if x < y then balance (color,(ins a),y,b)
                    else if x > y then balance (color,a,y,(ins b))
                    else s
                val T (_,a,y,b) = ins s (* guaranteed to be non-empty *)
            in T (B,a,y,b)
            end
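For anyone trying this out, here is a small usage sketch (the keys are hypothetical; it assumes the definitions above are loaded):

```sml
(* Build a tree by folding insert over a list of keys, then query it.
   insert : key * tree -> tree matches List.foldl's expected shape. *)
val t = List.foldl insert E ["cherry", "apple", "banana"]

val true  = rbmem ("apple", t)   (* present *)
val false = rbmem ("durian", t)  (* absent *)
```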

-----


If your project can get away without building its own images, I would recommend Amazon Linux to anybody using EC2.

I've used Amazon Linux AMIs since 2012 on several different instances. After your first `ps aux`, you might think you're running *BSD!

Here are two useful links.

Regarding Amazon Linux AMI security updates: https://alas.aws.amazon.com/

FAQ: https://aws.amazon.com/amazon-linux-ami/faqs/

-----


This is a great tip. I've been using it for a few years now to watch various lectures on YouTube--especially mathematics. Professors lecturing on mathematics tend to speak slowly, since they don't want to say anything erroneous, so speeding them up is usually amazing.

For YouTube: open the JavaScript console in your browser and type `$('video').playbackRate = 2.5;`. I've found that each lecturer usually has their own magic number for speed--after watching someone for a few hours and varying the rate, you can usually find it.

-----


English translations of Science and Hypothesis as well as The Measure of Time can be read here:

`http://en.wikisource.org/wiki/Science_and_Hypothesis',

`http://en.wikisource.org/wiki/The_Measure_of_Time'.

In French, most of Poincaré's original works can be read here:

`http://henripoincarepapers.univ-lorraine.fr/bibliohp/index.p....

-----


http://en.wikisource.org/wiki/Science_and_Hypothesis

http://en.wikisource.org/wiki/The_Measure_of_Time

-----


FYI, your top two links fail since they're capturing the second quotation mark inside the markup.

-----


The comparisons between filesystem-level and block-level encryption that I've encountered usually make a common distinction: namely, that file metadata is still exposed when only applying fs-level encryption. What are some attributes of fs-level encryption that would make it a superior choice over block-level?

-----


There are three huge advantages filesystems have over block devices when it comes to encryption:

1. They have storage flexibility, so they can allocate metadata to authenticators and nonces. The fact that sector-level crypto can't do this means that the "state of the art" in efficient sector crypto is essentially unauthenticated ECB mode.

2. They're message-aware, so they can apply authentication at meaningful boundaries; a block crypto device is essentially a simulated hard disk, and so it doesn't know where files begin and end.

3. Being message-aware, they can protect files at a better level of granularity than "all or nothing", which for instance is the security failure that made it so easy for the FBI to convict Ross Ulbricht for Silk Road.

A lot of concerns about filesystem crypto stem from the fact that filesystem crypto precedes sector-level crypto, and most of it was designed in (or has designs tracing back to) the 1990s. What people who don't spend a lot of time studying crypto should remember is that nobody knew how to encrypt anything in the 1990s. It was a unique and weird time, where there was a lot of demand and interest in crypto, but not enough knowledge to supply crypto effectively.

So we should be careful about judging filesystem crypto by the standards of the 1990s.

-----


>Trying to hold on to worthless jobs is a terrible but popular idea.

It seems warm and fuzzy to think that Sam, and the implicit company he keeps (the ultra-rich)--who are "leveraging not only their abilities and luck" but already-accrued wealth--can and will redistribute it. Anyone who wasn't born yesterday will simply laugh at this prospect.

I'm not sure why Sam feels the need to call what most of the world is doing worthless. I think it's crude and indicative of a narrow social and cultural experience (which surprises me considering his position).

Believe it or not, there are cultures and groups of people who do not revere technology the way most North Americans do.

Also, a good exercise for Sam (and others possessing a similar world view) might be to think about how many "worthless" people and jobs it takes to accomplish the things he does (including this blog post).

-----


The problem I take with this mindset is that it treats all value systems as equal.

At the end of the day, if your culture and economic system consistently create a poorer quality of life for their people, winning out in the percentage of employed citizens doesn't mean a thing. You're treating the lack of disease as a measuring stick of health, when it's simply one piece of the puzzle.

I think what Sam has been trying to do for the last few years is get others to think about the ways we can enrich more lives as a whole, without just slowing labor and progress in their totality--because while that can work in the short term, it can severely inhibit our ability to eradicate things like hunger, disease, or poverty in the long term.

The thing you also need to be careful of along the way, though, is not making the perfect the enemy of the good.

-----


Unfortunately, many societies today tie one's official profession/title too strongly to one's sense of self-worth. I don't believe sama was implying that those who work those jobs are worthless. Instead, he seems to be saying that there are far better ways to accomplish the same objectives, and we shouldn't ignore them.

Because our systems for retraining and placing workers into new professions are so terrible, it's common to assume that many or most displaced workers will remain unemployed. Breaking this status quo is essential to giving everyone a fair chance to work on what truly drives them while we automate away more and more worthless jobs. That's why I'm so excited about the free and widely available educational resources springing up online. It's not perfect yet, but we had to start somewhere. I have deep respect for everyone (including sama) who has helped build or teach a MOOC.

-----


>1. modularity (and now that they have added generative functors à la SML, you can have true abstraction)

Could you share some information about Haskell and its 'modularity' problem (vs. the ML family of languages)? I'm fond of the way SML projects can be structured. How are people solving this in Haskell? Are there any interesting solutions for creating modular Haskell applications/systems I can see today?

Thanks for your comment.

-----


There is a weak form of modularity which can be achieved by using two parts of Haskell:

1. hiding constructors in modules (via export lists)

2. type classes

But it doesn't really begin to approach the kind of structuring that is possible in an ML-like system. Unfortunately, the issue is very (needlessly) controversial, and I don't think I want to get dragged into it here.
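As a concrete sketch of the first mechanism (a hypothetical `Counter` module, not from the comment): the export list names the type but not its constructor, so the representation is abstract outside this file.

```haskell
module Counter (Counter, new, tick, count) where

-- MkCounter is deliberately NOT exported, so clients cannot
-- construct or pattern-match a Counter directly.
newtype Counter = MkCounter Int

new :: Counter
new = MkCounter 0

tick :: Counter -> Counter
tick (MkCounter n) = MkCounter (n + 1)

count :: Counter -> Int
count (MkCounter n) = n
```

Importers of `Counter` can only go through `new`/`tick`/`count`, so the representation can change without breaking them--but note this hides a type per module, rather than giving you ML's functors over whole interfaces.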

I'll mention, though, that Haskell does have one form of modularity which ML doesn't really, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases. This is because of two things: Haskell is non-strict, and GHC has pretty good fusion. In ML, you often end up manually fusing things together in order to get good performance, so composition can be a bit more difficult.
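A small illustration of that point (my example, not the commenter's): the pieces below are written separately and composed after the fact, and with GHC's list fusion the pipeline typically compiles down to a single loop with no intermediate lists.

```haskell
-- Three independently written stages, composed with (.):
-- keep the evens, double them, then sum the result.
doubleEvensSum :: [Int] -> Int
doubleEvensSum = sum . map (* 2) . filter even

main :: IO ()
main = print (doubleEvensSum [1 .. 10])  -- prints 60
```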

-----


> I'll mention, though, that Haskell does have one form of modularity which ML doesn't really, which is the fact that you can write algorithms separately, compose them together after the fact, and expect to get reasonable performance in most cases.

I'd actually argue the opposite. MLton is one of the best whole program optimizing compilers I have ever used. There are virtually no penalties for abstraction. OCaml has a bit of trouble here, from what I've heard, but I've never personally run into serious performance problems as a result of abstraction.

-----


sgeisenh — wow! How cool. If that is the case, then that's awesome, and it makes me very happy.

-----


Tcl/Tk is an often overlooked tool for quickly testing UI ideas.

Check out http://wiki.tcl.tk/ (which has lots of good information).

You can also learn some Tcl by reading the redis test suite here:

https://github.com/antirez/redis/tree/unstable/tests

-----


Thanks. I was wondering about the Tk implementation in terms of plotting and charts.

-----


Thanks for sharing. Anyone else reading might also like the following book: Modern Compiler Implementation in ML by Appel (http://www.cs.princeton.edu/~appel/modern/ml/).

-----


>The street became colder, short Russian summer moves for the winter. Brains cooled down a bit and began to think.

I've always anticipated winter with a similar sentiment--I like the way this was said.

-----

