I've actually been using this approach for the past 12 years. There used to be an old Firefox plugin called Password Composer (http://web.archive.org/web/*/http://jlpoutre.home.xs4all.nl/...) which would let you type your master password into Firefox, append the website domain, take a digest of the result, and truncate it to 8 characters before sending it to the website. Over the years, I replaced this plugin with a simple 5-line script so I can reproduce my passwords from anywhere, strengthened the hash function from SHA-1 to bcrypt, and added a configurable password length so I can gradually grow passwords over time as computers grow more powerful. Here's my solution: http://akkartik.name/pc
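For illustration, here's a minimal Python sketch of the original Password Composer idea as described above (digest of master password + domain, truncated to 8 characters). This is not the linked script, which uses bcrypt and a configurable length; it's only a sketch of the general scheme, using stdlib SHA-1 and a function name I made up:

```python
import hashlib

def site_password(master, domain, length=8):
    # Password Composer style: digest master password + domain,
    # then truncate to the desired length.
    digest = hashlib.sha1((master + domain).encode("utf-8")).hexdigest()
    return digest[:length]
```

Since the output is a pure function of (master, domain), the same password can be regenerated on any machine with no stored state.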
Regarding other comments in this thread about managing exceptions to the scheme over time, I have a simple solution: I let my browser save my passwords. I think this is a reasonable trade-off; the dominant threat online today is not somebody gaining physical access to your devices, but random script kiddies brute-forcing passwords using stolen hashes.
Tl;dr - use my script to generate passwords, use your browser to remember passwords, and the complexity is quite manageable.
One potential gotcha I realized recently is that it uses bcrypt, an encryption scheme, rather than a well-known digest algorithm. The two share a common kernel, from what I can tell, though there are significant differences. There might be cryptographic implications to using an encryption scheme for digesting; I'm not an expert. Hopefully the fact that this is a bespoke arrangement used by a tiny population will keep anyone from hacking me :)
So as your browser-stored passwords pass through (and rest at) the server, waiting to be synced to another browser or device, you have passwords "up there." Are they in the clear? Are they encrypted well enough?
Your point is well-taken that I'm just replacing one external service with another, so the difference isn't large enough to convince anyone to switch. My original response in this thread was just to point out prior work relevant to the brainstorming above.
Firefox has been falling behind of late. Every third website seems to have a huge banner video these days, and Firefox chooses to play them automatically, but can't seem to do so snappily. It makes the entire UI unresponsive, including the chrome outside that tab. Google Maps has been unusable on Firefox for several months as well. Many are the Flash videos my Firefox can't run because they changed some default and I can't be arsed to locate it. I just switch to Chrome, which seems to handle them without problems using the same supposedly-insecure Flash plugin.
It is indeed. I'm in the process of porting from Racket to Lumen. It's taking a while, and it'll be tricky to add true green-thread support to JS and Lua, but it's a win so far.
Wart is freaking cool! Thanks for showing that. Whoaaa, you use tangle/weave! How do you like it / what do you think about literate programming? I was batting around the idea of writing code that way, but it seemed like tangle might not be the best solution. But I love your 000prefix naming scheme; I instinctively do that for branches. After reading through a few files starting at https://github.com/akkartik/wart/blob/master/literate/000org... it became pretty clear how wart worked, pretty quickly. I'm mainly wary of the "compile" step that using Tangle implies; do you find it slows you down much? Would you choose to do it that way if starting fresh?
What would a language ecosystem be like without any backwards-compatibility guarantees, super easy to change and fork promiscuously?
It's fun to find someone has been mulling over the same problems!
I was considering taking Laarc in a similar direction:
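Something along these lines, where the import form names only the GitHub repo (a sketch; the exact unpinned syntax is my assumption, modeled on the pinned form quoted further down):

```
(import "akkartik/foo")
```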
That would check whether the folder "../foo/akkartik" exists, and if not, clone foo from github.com/akkartik/foo.
It would also rewrite its own source code to look like:
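That is, with the commit hash baked into the call, in the form shown further down in the thread:

```
(import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15")
```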
That means whenever someone clones your codebase, running the program would cause them to clone github.com/akkartik/bar using commit f1d2d2f9.
So it's static linking, basically. It's a guarantee that "This source code will always work, the same way (+ 1 2) will always yield 3. You could copy-paste this source code into a gist, and whoever ran it would end up running the same program."
That removes dependency hell / worrying about versions entirely. But it's not the right solution, and I've been searching for a better one. What do you think?
The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9, then when you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written.
I was thinking of following that model, but using the convention that libraries are expected to use git tags in order to indicate major.minor.patch, and (import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15") will bump itself to the next commit if it's tagged as an increment of the patch version.
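That bump rule could be sketched roughly like this (a hypothetical helper of my own devising, assuming tags follow major.minor.patch; it is not part of any existing tool):

```python
def next_patch(current, tags):
    # Among the available version tags, pick the highest one that only
    # increments the patch component of `current` (major.minor fixed).
    # Anything changing major or minor is left for a manual upgrade.
    major, minor, patch = map(int, current.split("."))
    best, best_patch = current, patch
    for tag in tags:
        try:
            ma, mi, pa = map(int, tag.split("."))
        except ValueError:
            continue  # ignore tags that aren't major.minor.patch
        if (ma, mi) == (major, minor) and pa > best_patch:
            best, best_patch = tag, pa
    return best
```

So a pin at a commit tagged 1.4.2 would silently advance to 1.4.10 if that tag exists, but never to 1.5.0 or 2.0.0.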
What kind of things have surprised you about Wart during development? Any unexpected pitfalls or interesting discoveries along the way? Thanks again for pointing it out!
By the way, were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it.
Thanks for the kind words! I'm particularly gratified that you found the literate/ subdirectory. There's some details about it at http://akkartik.name/post/wart-layers, but it sounds like you understand the point already so I'm preaching to the choir :)
"What kind of things have surprised you about Wart during development?"
I stopped working on wart at some point for two reasons. First, I realized that I was falling into a blind spot shared with many contemporary programmers: blindly assuming that the way to improve the state of programming was by creating a new language. But there is more to programming than languages. Languages just happen to be memetically tempting sirens to chase after. Second, I painted myself into a corner with wart because I made it too late-bound: so late-bound that it was incredibly inefficient and very hard to optimize. (I think it might still be possible, but I lost patience, partly because of the previous point.)

I still like my infix scheme, and have a particularly soft spot for the way wart allows keyword arguments anywhere in any function call. See, for example, the use of :from in a webserver implementation that makes everything so much clearer: https://github.com/akkartik/wart/blob/ce64882c69/071http_ser....

However, I've forced myself to harden my heart and ruthlessly ignore considerations of syntax, and focus on what I think are more impactful directions: conveying global rather than local structure of programs. More details: http://akkartik.name/about. In brief, I'm working on ways to super-charge automatic tests (by making them white-box, and by designing the core OS services to be easy to test) and version control (using the literate layers you noticed). It's a whole new way of programming where a) readers can ask "why not write this like this?", make a change, run all tests and feel confident that the change is ok if all tests pass; and b) readers can play with a simple version of the program running just layer 1, then gradually learn about new features by running layers 1+2, 1+2+3, and so on.
"The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9, then you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written."
I hadn't considered hard-coding hashes in imports, because it's historically been very hard to truly distinguish compatible changes from incompatible ones without introducing bugs. I think it's more important to give people control over upgrades than to try to improve over the current advisory+pull method of communicating security issues.
You're right that most code is never maintained after being written. But most code is also utterly unimportant to anyone including the author. So that failure of security is ok :) Trying to force concern about security only increases the moral hazard -- people get used to other people thinking about their security for them.
"were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it."
Props to Dijkstra for choosing the right topics to have strong opinions about, which is the hardest part in making an intellectual contribution -- far harder than being right or wrong about a given topic. The writeup is like code that's correct but for a sign error.
Perl 6 basically hoovered up every good programming concept its designers could think of (and some that turned out not to be so good, which have hopefully been shaken out over the last decade). If you want a small language, this is not it. If you want an amazingly powerful language that lets you do what you want with a minimum of fuss, pull up a seat.
Perl 5 has a tidying tool as well as a lint tool. Both are based on rules drawn (originally) from Perl Best Practices. These rules can of course be modified to taste, so you can set your own rules/filters before checking your changes into a repo (or whatever).
For the longest time Perl was considered unparsable, primarily due to two features: function parens being optional, and the argument-slurpiness of a function call being unknowable without introspecting, at runtime, whichever function reference ends up being called. In the most famous example, this can lead to a / after a function call being treated either as the division operator or as the start of a regex, with both interpretations resulting in valid Perl code. It took a while for anyone to come up with a scheme in which Perl could be parsed while also being round-trippable. It took PPI a while to get there and become stable, and meanwhile P::T had already become stable itself.
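The classic illustration of that ambiguity looks something like this (a sketch; how it parses depends on how `whatever` is declared):

```perl
whatever  / 25 ; # / ; die "this dies!";
```

If `whatever` takes no arguments, this is the division `whatever() / 25`, followed by a comment. If `whatever` slurps arguments, then `/ 25 ; # /` is a regex match passed as its argument, the comment is no comment at all, and the `die` actually runs. A static parser can't tell which without knowing the function's signature at runtime.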
No, but Perl 5 has Perl::Critic, which is a configurable opinionated code and style analyzer. It defaults to using the Perl Best Practices book by Damian Conway, with configurable levels of exactness. You can modify the rules to suit your particular needs/team.
I imagine something on this front will materialize for Perl 6 sooner than later.