akkartik's Hacker News comments

My favorite theory of the origin of language: http://www.amazon.com/The-Symbolic-Species-Co-evolution-Lang.... It unfolds a bit like a murder mystery, so I won't spoil the ending for you. If the middle section seems a bit of a slog -- persevere!

Anybody know if there's any progress with ADEP4? http://phenomena.nationalgeographic.com/2013/11/13/killing-s...

I've actually been using this approach for the past 12 years. There used to be an old firefox plugin called Password Composer (http://web.archive.org/web/*/http://jlpoutre.home.xs4all.nl/...) which would let you type in your master password into firefox, append the website domain, take a digest of the result and truncate to 8 characters before sending to the website. Over the years, I replaced this plugin with a simple 5-line script so I can reproduce my passwords from anywhere, strengthened the hash function from SHA-1 to bcrypt, and also added a configurable password length so I can gradually grow passwords over time as computers grow more powerful. Here's my solution: http://akkartik.name/pc
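Roughly, the scheme works like this (a minimal Python sketch, not my actual script -- the real one uses bcrypt rather than a single SHA-256 round, which matters because bcrypt is deliberately slow to brute-force):

```python
import base64
import hashlib

def site_password(master, domain, length=8):
    # Derive a per-site password: digest(master + domain),
    # then encode and truncate to the desired length.
    # A single SHA-256 round is for illustration only; a slow
    # hash like bcrypt resists brute-forcing far better.
    digest = hashlib.sha256((master + domain).encode()).digest()
    return base64.b64encode(digest).decode()[:length]
```

The same master password plus the same domain always yields the same site password, so nothing needs to be stored anywhere.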

Regarding other comments in this thread about managing exceptions to the scheme over time, I have a simple solution: I let my browser save my passwords. I think this is a reasonable trade-off; the dominant threat online today is not somebody gaining physical access to your devices, but random script kiddies brute-forcing passwords using stolen hashes.

Tl;dr - use my script to generate passwords, use your browser to remember passwords, and the complexity is quite manageable.

Edit - sorry, I forgot that that 5-line script calls another 5-line script: http://akkartik.name/bcrypt_digest.

One potential gotcha I realized recently is that it uses the bcrypt encryption scheme rather than the well-known digesting algorithm. The two share a common kernel, from what I can tell, though there are significant differences. There might be cryptographic implications to using an encryption scheme for digesting; I'm not an expert. Hopefully the fact that this is a bespoke arrangement used by a tiny population will keep anyone from hacking me :)

If you're storing your passwords in the browser anyway, why not just generate random, arbitrary passwords?

It gives me a way to figure out my password on a strange system. Admittedly this use case is growing less common with the ubiquity of personal devices and sync.

So as your browser-stored passwords pass through (and rest at) the server, waiting to be synced to another browser or device, you have passwords "up there." Are they in the clear? Are they encrypted well enough?


Your point is well-taken that I'm just replacing one external service with another, so the difference isn't large enough to convince anyone to switch. My original response in this thread was just to point out prior work relevant to the brainstorming above.

Yeah, that happens on firefox for me as well.

Firefox has been falling behind of late. Every third website seems to have a huge banner video these days, and firefox chooses to run them automatically, but it can't seem to do so snappily. That makes the entire UI unresponsive, including the chrome outside that tab. Google Maps has been unusable on firefox for several months as well. There are many flash videos my firefox can't run because they changed some default, and I can't be arsed to locate it. I just switch to chrome, which seems to handle them without problems using the same supposedly-insecure flash plugin.

(I'm on linux.)


Very cool! It got me daydreaming about an alternative to html designed from the ground up to minimize webpage bloat and avoid tracking (1x1 pixel gifs, etc.)


0. Is Laarc an arc variant you're working on? I don't see it on the github page.

1. You might be interested in how my arc-inspired language did indent-sensitivity and infix: http://akkartik.name/post/wart. For example, here's bresenham's line-drawing algorithm in wart: https://gist.github.com/akkartik/4320819. There are more code samples at http://rosettacode.org/wiki/Category:Wart.

2. Do you have a username on arclanguage.org? Perhaps we've met!


It is indeed. I'm in the process of porting from Racket to Lumen. It's taking a bit, and it'll be tricky to add true greenthread support to js and lua, but it's a win so far.

Wart is freaking cool! Thanks for showing that. Whoaaa, you use tangle/weave! How do you like it / what do you think about literate programming? I was batting around the idea of writing code that way, but it seemed like tangle might not be the best solution. But I love your 000prefix naming scheme; I instinctively do that for branches. After reading through a few files starting at https://github.com/akkartik/wart/blob/master/literate/000org... it became clear pretty quickly how wart worked. I'm mainly wary of the "compile" step that using Tangle implies; do you find it slows you down much? Would you choose to do it that way if starting fresh?

What would a language ecosystem be like without any backwards-compatibility guarantees, super easy to change and fork promiscuously?

It's fun to find someone has been mulling over the same problems!

I was considering taking Laarc in a similar direction:

  (import "akkartik/foo")
That would check whether the folder "../foo/akkartik" exists, and if not, clone foo from github.com/akkartik/foo.

It would also rewrite its own source code to look like:

  (import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15")
That means whenever someone clones your codebase, running the program would cause them to clone github.com/akkartik/bar using commit f1d2d2f9.

So it's static linking, basically. It's a guarantee that "This source code will always work, the same way (+ 1 2) will always yield 3. You could copy-paste this source code into a gist, and whoever ran it would end up running the same program."
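In outline, the import helper might look like this (a Python sketch of the idea; the function names are hypothetical, and the path layout just mirrors the description above):

```python
import os
import subprocess

def local_path(spec, root=".."):
    # "akkartik/foo" lives at ../foo/akkartik, per the layout above
    user, name = spec.split("/")
    return os.path.join(root, name, user)

def import_lib(spec, commit=None):
    # Clone on first use; if a commit hash is recorded in the
    # source, pin the checkout to it (static linking, in effect).
    path = local_path(spec)
    if not os.path.exists(path):
        subprocess.run(["git", "clone",
                        "https://github.com/" + spec, path], check=True)
    if commit:
        subprocess.run(["git", "-C", path, "checkout", commit],
                       check=True)
    return path
```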

That removes dependency hell / worrying about versions entirely. But it's not the right solution, and I've been searching for a better one. What do you think?

The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9 and you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written.

I was thinking of following that model, but using the convention that libraries are expected to use git tags in order to indicate major.minor.patch, and (import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15") will bump itself to the next commit if it's tagged as an increment of the patch version.
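The bump rule itself could be as simple as (a sketch, assuming tags of the form v1.2.3):

```python
def patch_bump_ok(current_tag, candidate_tag):
    # Accept an auto-bump only when the candidate tag increments
    # exactly the patch component of the current pin, leaving
    # major and minor unchanged.
    cur = tuple(int(x) for x in current_tag.lstrip("v").split("."))
    cand = tuple(int(x) for x in candidate_tag.lstrip("v").split("."))
    return cand[:2] == cur[:2] and cand[2] == cur[2] + 1
```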

What kind of things have surprised you about Wart during development? Any unexpected pitfalls or interesting discoveries along the way? Thanks again for pointing it out!

By the way, were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it.


Thanks for the kind words! I'm particularly gratified that you found the literate/ subdirectory. There's some details about it at http://akkartik.name/post/wart-layers, but it sounds like you understand the point already so I'm preaching to the choir :)

"What kind of things have surprised you about Wart during development?"

I stopped working on wart at some point for two reasons. First, I realized that I was falling into a blind spot shared with many contemporary programmers: blindly assuming that the way to improve the state of programming was by creating a new language. But there is more to programming than languages. Languages just happen to be memetically tempting sirens to chase after. Second, I painted myself into a corner with wart because I made it too late-bound, so late-bound that it was incredibly inefficient and hard to optimize. (I think it might still be possible, but I lost patience partly because of the previous point.)

I still like my infix scheme and have a particularly soft spot for the way wart allows keyword arguments anywhere in any function call. See, for example, the use of :from in a webserver implementation that makes everything so much clearer: https://github.com/akkartik/wart/blob/ce64882c69/071http_ser....

However, I've forced myself to harden my heart and ruthlessly ignore considerations of syntax, and focus on what I think are more impactful directions: conveying global rather than local structure of programs. More details: http://akkartik.name/about. In brief, I'm working on ways to super-charge automatic tests (by making them white-box, and by designing the core OS services to be easy to test) and version control (using the literate layers you noticed). It's a whole new way of programming where a) readers can ask "why not write this like this?", make a change, run all tests and feel confident that the change is ok if all tests pass; and b) readers can play with a simple version of the program running just layer 1, then gradually learn about new features by running layers 1+2, 1+2+3, and so on.


"The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9 and you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written."

I hadn't considered hard-coding hashes in imports, because it's historically been very hard to truly distinguish compatible changes from incompatible ones without introducing bugs. I think it's more important to give people control over upgrades than to try to improve over the current advisory+pull method of communicating security issues.

You're right that most code is never maintained after being written. But most code is also utterly unimportant to anyone including the author. So that failure of security is ok :) Trying to force concern about security only increases the moral hazard -- people get used to other people thinking about their security for them.

"were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it."

The big insight, I think, is to not fear parentheses too much. A common failure mode of such approaches (like http://readable.sourceforge.net and http://dustycloud.org/blog/wisp-lisp-alternative) is to create too many new token types and lose the simplicity of lisp syntax. My approach instead is to a) just use parens sooner than look "outright barbarous" (https://www.mtholyoke.edu/acad/intrel/orwell46.htm), and b) take it a step further and disable all indent-sensitivity inside parentheses.

Anyways, that's my two cents. The precise rules I use are at https://github.com/akkartik/wart/blob/master/004optional_par.... (They're really in the first couple of sentences there.)
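As a toy illustration of the basic indent-to-parens idea (this is not wart's actual algorithm -- among other simplifications, it doesn't disable indentation inside parens, and it wraps single-token lines in parens where wart would leave them as atoms):

```python
def indent_to_sexpr(src):
    # Each line's tokens become a form; a deeper-indented line
    # nests inside the nearest shallower line's form.
    result = []
    stack = [(-1, result)]  # (indent level, list to append into)
    for line in src.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        form = line.split()
        while indent <= stack[-1][0]:
            stack.pop()
        stack[-1][1].append(form)
        stack.append((indent, form))
    return result

def to_str(form):
    # Render nested token lists back out as an s-expression.
    if isinstance(form, list):
        return "(" + " ".join(to_str(f) for f in form) + ")"
    return form
```

So "def f" followed by an indented "print x" reads as (def f (print x)).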


"Whoaaa, you use tangle/weave! How do you like it / what do you think about literate programming?"

Like I said above, I like it very much, especially in combination with tests and a notion of layers. Here's my take on why literate programming has failed so far: http://akkartik.name/post/literate-programming


I've also been following your work akkartik by the way :] Sneaking from time to time on the arc forum. Keep up the great work!


I too was thinking on this thread that I remembered you from the arc forum :)


Arc1, arc2 and arc3 are available at https://github.com/arclanguage/anarki/commits/official


The EWD by Dijkstra now actually mentions Engelbart by name: https://www.cs.utexas.edu/users/EWD/ewd03xx/EWD387.PDF.


Props to Dijkstra for choosing the right topics to have strong opinions about, which is the hardest part in making an intellectual contribution -- far harder than being right or wrong about a given topic. The writeup is like code that's correct but for a sign error.


Is this the paper they're talking about? http://people.csail.mit.edu/madhu/papers/1992/almss-conf.pdf


Yes. Note both the article and the paper are from 1992.


Junctions and autothreading seem to be importing the core of APL/J: http://doc.perl6.org/type/Junction

(I'm randomly scanning http://faq.perl6.org)


Perl 6 basically hoovered up every good programming concept its designers could think of (and some that ended up being not so good, which have hopefully been shaken out over the last decade). If you want a small language, this is not it. If you want an amazingly powerful language that just lets you do what you want with a minimum of fuss, pull up a seat.


Whereas Go has gone the other route and jettisoned everything. Not having a ternary operator is a real pain.

One thing Go does well is provide the gofmt tool. There is only one way to format code properly.

Does Perl 6 have something like this?


Probably soon.

Perl 5 has a tidying tool[0], as well as a lint tool[1]. Both are based (originally) on rules from the Perl Best Practices book [2]. These rules can of course be modified to taste, so you can set your own rules/filters before checking in your changes to a repo (or whatever).

[0] https://metacpan.org/pod/distribution/Perl-Tidy/lib/Perl/Tid...

[1] https://metacpan.org/pod/Perl::Critic

[2] http://shop.oreilly.com/product/9780596001735.do


Perltidy predates PBP by a long time. Perltidy is a huge mess: it has its own parsing code and is nearly impossible to hack on. We really need a PPI-based re-implementation.


There was a very interesting HN submission [0] on why code-formatting utilities aren't easy things to write, called The Hardest Program I've Ever Written [1].

The code of Perl Tidy at least looks pretty good! [2] Perhaps there's a reason why PPI isn't used that I'm not familiar with, other than, as you say, that Perl Tidy predates it. Perl::Critic uses PPI, yes?

[0] https://news.ycombinator.com/item?id=10195091

[1] http://journal.stuffwithstuff.com/2015/09/08/the-hardest-pro...

[2] https://metacpan.org/source/SHANCOCK/Perl-Tidy-20150815/lib/...


Thanks for the links.

As the current de-facto maintainer of PPI, the answer is rather simple:

P::T predates PPI by almost two years: https://metacpan.org/source/SHANCOCK/Perl-Tidy-20021130/CHAN... https://metacpan.org/source/ADAMK/PPI-0.1/Changes


For the longest time Perl was considered unparsable, primarily due to two features: function parens being optional, and the number of arguments a function call slurps being unknowable without introspecting, at runtime, the function reference that ends up being invoked. In the most famous example this can lead to a / after a function call being read either as the division operator or as the start of a regex, with both interpretations resulting in valid Perl code. It took a while for anyone to come up with a scheme in which Perl could be parsed while also being round-trippable. It took PPI a while to get there and be stable, and meanwhile P::T had already become stable itself.


No, but Perl 5 has Perl::Critic[1], which is a configurable opinionated code and style analyzer. It defaults to using the Perl Best Practices book by Damian Conway, with configurable levels of exactness. You can modify the rules to suit your particular needs/team.

I imagine something on this front will materialize for Perl 6 sooner rather than later.

1: https://metacpan.org/pod/Perl::Critic


