I think because you're a new account and you're posting a lot of links, your posts are likely getting hidden. I've vouched for them where I can and hopefully other people with showdead on will see them and vouch for them too.
> To be very honest here, you risk having KeePassXC blocked by relying parties
Even if the bigtechs don't "officially" make the passkey standards require bigtech involvement, it seems very likely to me that conservative businesses like banks will only accept bigtech implementations. And then you're sunk.
Similarly, look at how OpenID turned into "Sign in with AppleGooFaceSoft".
This ZKP+hardware secure element stuff seems even worse, because how are you going to make it work on old hardware, or with free software, or with open devices?
> Even if the bigtechs don't "officially" make the passkey standards require bigtech involvement, it seems very likely to me that conservative businesses like banks will only accept bigtech implementations.
> This ZKP+hardware secure element stuff seems even worse, because how are you going to make it work on old hardware, or with free software, or with open devices?
I don't love it, but I actually do see an argument that this kind of proof-of-property stuff really does belong in a secure area, backed by approved software. It is making government-backed, legal claims about a person or entity. Unlike with passkeys, it's not really "your" data; rather, it's a way for the government to provide legally-backed information to someone without the government actually having to be in the loop. I'd probably argue that the solution to the big-tech dependency here is that the government should be required to provide its own verifiable solution (such as a physical ID card with open software) for users who do not want to trust big tech.
Where the ZKP spec authors goofed was in not considering the wallet provider to be a party in the transaction. That third party may have interests that are not aligned with the user's.
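To make that concrete, here's a deliberately simplified Python sketch (hypothetical names and structure, not any real wallet or verifier API): presentations in these schemes typically carry a wallet/device attestation alongside the user's proof, so a relying party can reject a presentation based purely on who built the wallet, regardless of whether the proof itself checks out.

```python
# Hypothetical illustration only; no real ZKP or attestation formats here.
from dataclasses import dataclass

# Assumption for the sketch: the relying party keeps an allowlist of
# "approved" wallet vendors, analogous to FIDO attestation allowlists.
APPROVED_WALLET_VENDORS = {"BigTechWallet"}

@dataclass
class Presentation:
    zk_proof: bytes          # proof of a claim such as "holder is over 18"
    wallet_attestation: str  # which vendor's wallet produced the presentation

def relying_party_accepts(p: Presentation) -> bool:
    # Even if the proof verifies, the relying party can still reject the
    # presentation based solely on the wallet vendor's attestation.
    proof_ok = len(p.zk_proof) > 0       # stand-in for real proof verification
    return proof_ok and p.wallet_attestation in APPROVED_WALLET_VENDORS

print(relying_party_accepts(Presentation(b"proof", "KeePassXC")))      # False
print(relying_party_accepts(Presentation(b"proof", "BigTechWallet")))  # True
```

The allowlist is the lever: whoever controls which attestations get accepted controls which wallets survive.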
> The Court held that the terms and conditions in the Steam subscriber agreements, and Steam’s refund policies, included false or misleading representations about consumers’ rights to obtain a refund for games if they were not of acceptable quality.
> In determining the appropriate penalty to impose on Valve, Justice Edelman noted that “even if a very small percentage of Valve’s consumers had read the misrepresentations then this might have involved hundreds, possibly thousands, of consumers being affected”.
> Justice Edelman also took into account “Valve’s culture of compliance [which] was, and is, very poor”. Valve’s evidence was “disturbing” to the Court because Valve “formed a view …that it was not subject to Australian law…and with the view that even if advice had been obtained that Valve was required to comply with the Australian law the advice might have been ignored”. He also noted that Valve had “contested liability on almost every imaginable point”.
Here's an old reddit comment discussing how Valve failed to implement AUD and KRW pricing on schedule, and speculating that, at least in Australia's case, it was because of local compliance reasons.
But I can't find anything that definitively ties the rollout of refund policies to an attempt to get the ACCC off their back. The comments on the above reddit post show that GOG and Origin had active refund policies at this time.
Last time I tried to build guix without substituters, I got hash mismatches in several downloaded files and openssl-1.1.1l failed to build because the certificates in its test suite have all expired. Bootstrapping is really hard, really valuable, and (it turns out) really unstable.
The first SAT solver case that comes to mind is circuit layout, and then you have an m vs n problem. Because you don't SAT solve per chip, you SAT solve per model and then amortize that cost across the first couple years' sales. And they're also "cheating" by copy-pasting cores, which means the SAT problem is growing much more slowly than the number of gates per chip; probably more like n^(1/2) these days.
If SAT solvers suddenly got inordinately more expensive, you'd go back to using humans: they used to do this work before the solvers got better and cheaper.
Edit: checking my math, looks like in a 15 year period from around 2005 to 2020, AMD increased the number of cores by about 30x and the transistors per core by about 10x.
What I'm saying is that the gate-count problem that is profitable to solve scales as m³, not n³. And as long as m < n^(2/3), then m³ < n², so you stay at n² overall despite applying a cubic-time solution to m.
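As a back-of-the-envelope check of that claim (the exponent here is an assumption for illustration, not a measured number), a tiny Python comparison: if the deduplicated design the solver actually sees grows like n^0.6, i.e. below n^(2/3), then the cubic solver cost stays below n².

```python
# Illustrative only: assume the solver's problem size m grows like n**0.6
# (below n**(2/3)) thanks to copy-pasted cores. Then the cubic solver cost
# m**3 ~ n**1.8 grows more slowly than n**2.
for n in (1e6, 1e8, 1e10):        # total gates on the chip
    m = n ** 0.6                  # unique gates the solver actually handles
    print(f"n={n:.0e}  m^3={m ** 3:.2e}  n^2={n ** 2:.2e}")
```

At n = 1e10 that's roughly 1e18 versus 1e20, so the cubic step on the deduplicated design comes in two orders of magnitude under a naive n² budget.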
I would argue that this is essentially part of why Intel is flagging now. They had a model of ever-increasing design costs offset by steadily inflating sales quarter after quarter. They introduced the “tick-tock” model of biting off a major design every second cycle with small refinements in between, to keep the slope of the cost line below the slope of the sales line. Then they stumbled on that, and now it's tick tick tock, and clearly TSM, AMD, and possibly Apple (with TSM's help) can now produce a better product for a lower cost per gate.
Doesn’t TSM’s library of existing circuit layouts constitute a substantial decrease in the complexity of laying out an entire chip? As n grows, you introduce more precalculated components that are dropped in, bringing the slope of the line down.
Meanwhile NVIDIA has an even better model where they spam GPU units like mad. What’s the doubling interval for GPU units?
Focusing on the runtime's parser is a red herring and I think a common error in lisp advocacy.
Even if I didn't use the full power of a lisp macro system, it is an absolute joy to manipulate programs written in s-expressions. Being able to cut/copy/paste/jump-[forward/back] by sexpr is really convenient, and often done nowhere near as well in other languages. I think this is because until the invention of tree-sitter and LSPs (and the former isn't yet widely adopted in editor tech), most editors had regex-based syntax highlighting and some kind of ad-hoc "parser" for a language. This makes them less aware of the language the developer is editing, but it was probably a pragmatic design decision by editor implementers: it's easier than writing a full parser and means the editor can still assist even if a program is syntactically ill-formed.
It does sound interesting, and I'll look into Lisp. Can you give me some advice on the best way to learn it?
On your other point, I've programmed in many languages over many years, and mostly I did so in an environment with an IDE, or with powerful language-specific tooling (not tree-sitter) that had a properly good understanding of the syntax and semantics of the language being used.
If you learn better by video than by reading, the Structure and Interpretation of Computer Programs lectures by Abelson and Sussman are spectacular. I have watched the entire course multiple times. The SICP book also receives a lot of praise, but I have yet to read it myself. They specifically use Scheme, but most of the knowledge translates to other Lisp dialects as well. The biggest differences between the Lisp dialects are the macro systems and the standard libraries, so for getting started and learning, it doesn't really matter which one you choose. GNU Guile or Racket would be easy to use to follow along with SICP, though.
Also... HtDP (How to Design Programs) is a good follow-on to SICP.
Oh... and I think we can't mention SICP without referencing this (relatively recent) video about why MIT moved from Scheme to Python for intro classes: https://youtu.be/OgRFOjVzvm0
A good text to make you aware of the power of Lisp is "The Anatomy of Lisp" by John Allen (MIT). It's an old text but they don't write books like that anymore.
What's the best way to learn programming in general? For me, it's to try to build something. Find a problem, pick a Lisp, start building.
Just make sure to have two things: structural editing and the REPL. Without these two, Lisp may feel awkward. But when you have the ability to quickly move any expression around, transpose expressions, etc., writing programs becomes like composing haikus or something. You basically will be moving "lego pieces" around. With a connected REPL, you will be able to eval any expression in place, right from where you're writing your code.
I started without these and indeed, that was challenging. Having to balance the parentheses by hand, counting them, omg, I was so obtuse, but I'm glad I didn't give up. You don't have to go too crazy - in the beginning, something that automatically balances the parens, or highlights them when they aren't balanced, and lets you grab an expression and paste it somewhere else would be good enough.
And the REPL. Shit, I didn't know any better, I thought I was supposed to be copy-pasting or typing things into it. That is not the way! Find a way to eval expressions in place. Some editors even show you the result of the computation right where the cursor is.
I had done years of programming prior to discovering Lisp, and I don't really understand how I was okay without it. I wish someone had insisted I try it out. Today I don't even understand how anyone can identify as a programmer and "hate" Lisp just because they stared at some obscure Lisp code for like two minutes at some point.
Also, for what it's worth, I've moved on to mexprs in my old age. Just easier to figure out where things begin and end without all those parens. And they more-or-less mechanically convert to sexprs when you need them to.
And back in my day you couldn't get a CS degree without a class on parsing and interpreting structured data. Usually it was the compilers class. Now we don't require kids to take a compilers class so an entire generation doesn't understand why regexes don't work in all cases. When kids in my organization try to "parse" HTML or XML with regexes, I hand them a copy of the O'Reilly Lex and Yacc book. And when they come back saying they can't understand it, I hand them the Dragon book. I guess I just think we should all feel that particular pain. Sorry for the digression, but I was triggered by "regex" and "parser" used in the same sentence.
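For what it's worth, the standard library usually makes the right tool easy to reach for. As a minimal sketch (Python here purely for illustration; the point carries over to whatever language the kids are actually using), pulling links out of HTML with the built-in html.parser instead of a regex:

```python
# Parse HTML with an actual parser (Python's built-in html.parser) rather
# than a regex. The parser copes with attribute order, quoting, whitespace
# and case, all of which quietly break regex "parsers".
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Tag and attribute names arrive lowercased; attrs is (name, value) pairs.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<p>See <A HREF="https://example.com" class=x>this</A>.</p>')
print(collector.links)  # ['https://example.com']
```

It's barely longer than the regex would be, and it won't fall over the first time someone reorders attributes or adds a newline inside a tag.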
This undersells gptel.el, because IMHO it does a really good job of feeling emacs-native: you can read from the minibuffer or active region; send outputs to the message area, another buffer, or replace the marked region; use buffers or files as context sources; build "tools" out of elisp, ...
I'm not surprised I undersell it. I try to take a few months to a year's break from programming once a decade or so, when I can, both for the sake of coming back with fresh eyes and because anything gets miserable if you do it hard enough for long enough. This seemed like a good time. So though I've installed and set up gptel and chatted enough over some Elisp to see that it's functional with local models, I haven't as yet actually used it in a serious way.
On that note, I'm not much in the Emacs blog/creator scene or ecosystem this decade, either. Do you know any good topical resources that might fit well with time spent mostly away from keys? I realize I may be asking quite a lot.
I don't follow that scene much, either, but I do subscribe to https://www.masteringemacs.org/ which has good roundups when new major versions come out. I also have several open tabs from there about various libraries I need to fold into my own emacs configuration.