Given that you already need government issued ID that matches the name printed on your ticket to travel on an airplane, wouldn’t the government already have the ability to track you, regardless of facial recognition?
Indeed, the government doesn't even need the ID, they ingest a data feed of Passenger Name Records (PNRs) from all airlines. This is why when TSA performs the automated identity proofing, comparing a photo of you to your ID, they don't require that you provide a boarding pass.
Comparing an ephemeral photo taken of you to your government credential at the TSA checkpoint is a temporary formality. At some point, the government credential presentation will be unnecessary.
That is an incredible story of successful yak shaving. I have the same experience, but ultimately my attempts end in failure as I lose interest and sight of the end goal. Kudos!
The author of the piece here has something to say about that, too: write code top-down.
> There are two ways to architect a program and write code: top-down and bottom-up.¶ […] The correct way to architect and write a program is top-down. This is not a matter of taste or preference. Bottom-up design is fundamentally busted and you shouldn’t use it. Every system I’ve been involved in that used top-down succeeded and those that used bottom-up failed. [...]
> At every level there’s pressure to do bottom-up programming. Avoid it. Instead, start at the top, with `main()` or its equivalent, and write it as if you had all the parts already written. Get that to look right. Stub out or hard-code the parts until you can get it to compile and run. Then slowly move your way down, keeping everything as brutally simple as you can. Don’t write a line of code that isn’t solving a problem you have *right now*. Then you may have a chance of succeeding in writing a large, working, long-lived program.
I found "Java for Everything" really interesting, so thanks for posting it. It also seems it's been featured on HN many times[1], and the progression of comments (from 10 years ago to 4 years ago, to 1 year ago where nobody commented) feel like an archaeological strata that shows how things change and how they stay the same.
FWIW, while I'm not sure I agree that JVM is the right solution for every problem, I agree with much of the author's sentiment from that time, which is: programs are written much less frequently than they are run, so surely developer keystrokes are laughably unimportant compared to runtime performance and other user-facing concerns.
> programs are written much less frequently than they are run, so surely developer keystrokes are laughably unimportant compared to runtime performance and other user-facing concerns.
Taking this to its logical conclusion all programs should be written in assembly.
The reality is that there's a tradeoff: Programmer time vs performance, and which parts of performance matter. I've worked using anything with performance from assembly to shell scripts (including C, C++ and Java). It is all tradeoffs. Do users want more features, or more speed? Are we running at a scale or situation where ultimate usage of hardware matters, or not?
Saying we should do ultimate amounts of investment in performance when there's three users and one programmer doesn't make sense. They'd typically rather have more features and adequate performance.
Absolutely 100% yes. Difference is these projects themselves are obscure. Opposed to an official Google branded service that will see significant publicity.
Not surprising - from what I can tell, machine learning has been going down this route for a decade.
Anything involving the higher level abstractions (tensor flow / keras /whatever) is full of handwavy stuff about this or that activation function / number of layers / model architecture working the best and doing a trial error with a different component in the above if it doesn't. Closer to kids playing with legos than statistics.
I’ve actually noticed this in other areas too. Tons of them just swap parts out of existing works, maybe add a novel idea or two, then boom new proposed technique new paper. I remember when I first noticed it after learning to parse the academic nomenclature for a particular subject I was into at the time (SLAM) and feeling ripped off, but hey if you catch up with a subject it’s a good reading shortcut and helps zoom in on new ideas.
It concerns me that these defensive techniques themselves often require even more llm inference calls.
Just skimmed the GitHub repo for this one and the read me mentions four additional llm inferences for each incoming request - so now we’ve 5x’ed the (already expensive) compute required to answer a query?
reply