I also have the habit and am not sure why. I just find myself double-clicking and highlighting whatever I'm reading. Someone noticed me doing it once and asked if I had a tic.
Similar story for me. With my work, I get pulled in a lot of different directions at seemingly random times. This helps me quickly resume what I was doing.
> Indeed, the computer, by virtue of its brittle nature, seems to require that it come first. Brittleness is the inability of a system to cope with surprises, and, as we apply computers to situations that are ever more interconnected and layered, our systems are confounded by ever more surprises. By contrast, the systems theorist David Woods notes, human beings are designed to handle surprises. We’re resilient; we evolved to handle the shifting variety of a world where events routinely fall outside the boundaries of expectation. As a result, it’s the people inside organizations, not the machines, who must improvise in the face of unanticipated events.
In this new age of AI, maybe we can start to reverse this trend? Make systems that can adapt and handle surprises, instead of pushing all this brittleness onto the humans using the system.
This is so cool! I first learned about homomorphic encryption in the context of an election cybersecurity class and it seemed so pie-in-the-sky, something unlikely to ever be used for general practical purposes, only in very niche areas. Seeing a big tech company apply it in a core product like this really does feel like a step in the right direction towards taking back some privacy.
Alternative solution that would require less ML heavy lifting but a little more upfront programming:
It sounds like the cars are arranged in a grid on the wall. Maybe it would be possible to narrow down which car the user took a photo of by looking at the photos of the surrounding cars as well, and hardcoding into the system the position of each car relative to one another?
Could potentially do that locally very quickly (maybe even at the level of QR-code speed) versus doing an embedding + LLM.
Con of this approach would be that it requires maintenance if they ever decide to change the illustration positions. (Rough sketch of the neighbor-lookup idea below.)
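Something like this toy version of the lookup, assuming the per-tile matching already exists somewhere (the layout, car names, and function below are all invented for illustration):

```python
# Toy sketch of the neighbor-lookup idea: hardcode the wall layout as a grid
# of car IDs, then pick the cell whose visible surroundings best match what
# the photo shows. Layout and car names are invented for illustration.

WALL = [  # grid positions -> car IDs, as installed on the wall
    ["beetle", "mini",     "fiat500"],
    ["gt40",   "delorean", "mustang"],
    ["civic",  "golf",     "corolla"],
]

# Offsets of the eight cells around the center tile.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1),
             ( 0, -1),          ( 0, 1),
             ( 1, -1), ( 1, 0), ( 1, 1)]

def locate(center_candidates, seen_neighbors):
    """center_candidates: car IDs the per-tile matcher thinks the center might
    be (it may be ambiguous between lookalikes); seen_neighbors: car IDs
    recognized around the edges of the photo. Returns the candidate whose
    hardcoded neighborhood overlaps the photo's edges the most."""
    best, best_score = None, -1
    for r, row in enumerate(WALL):
        for c, car in enumerate(row):
            if car not in center_candidates:
                continue
            around = {
                WALL[r + dr][c + dc]
                for dr, dc in NEIGHBORS
                if 0 <= r + dr < len(WALL) and 0 <= c + dc < len(row)
            }
            score = len(around & seen_neighbors)
            if score > best_score:
                best, best_score = car, score
    return best

# e.g. the matcher can't tell the beetle from the fiat500, but the gt40 is
# visible at the edge of the photo, so it must be the beetle:
print(locate({"beetle", "fiat500"}, {"mini", "gt40"}))   # -> beetle
```

The per-tile recognition still has to come from somewhere (simple template matching against the known prints would probably do), but the hardcoded grid turns "which of all the cars is this" into "which of these few candidates fits the neighborhood".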
Put each painting in an artsy frame whose edges each have a different, colorful pattern. When the user photographs the painting, they'll include all (or at least most) of the frame, and distinguishing the frames is easy.
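Not sure how well this survives lighting and angles, but a toy version of "distinguishing the frames is easy" could be as simple as comparing border colors; everything below, signatures included, is made up just to show the shape of it:

```python
# Toy sketch of the "distinctive frame" idea: reduce the border of the photo
# to an average color and nearest-match it against known frame signatures.
import numpy as np

FRAME_SIGNATURES = {            # mean border color (R, G, B) per car, made up
    "beetle":   (220,  40,  40),
    "delorean": ( 40,  40, 220),
    "mustang":  ( 40, 200,  60),
}

def identify_frame(photo: np.ndarray, border: int = 20) -> str:
    """photo: HxWx3 uint8 array. Compare the mean color of its border strip
    to each known frame signature and return the closest one."""
    h, w, _ = photo.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    mean_rgb = photo[mask].mean(axis=0)
    return min(FRAME_SIGNATURES,
               key=lambda car: np.linalg.norm(mean_rgb - FRAME_SIGNATURES[car]))

# quick self-check with a synthetic "photo" whose border is mostly blue
fake = np.full((300, 400, 3), 128, dtype=np.uint8)
fake[:20, :] = fake[-20:, :] = fake[:, :20] = fake[:, -20:] = (45, 45, 210)
print(identify_frame(fake))     # -> delorean
```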
Embed a QR code or simply a barcode somewhere and you're done. Maybe hide it like a watermark so it doesn't show to the naked eye; a Fourier transform in the app wouldn't require a network connection or a lot of processing power.
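Rough sketch of the frequency-domain version of this, keying each print to a single faint spatial frequency rather than an actual barcode (all numbers invented; a real detector would also have to survive perspective, scaling, and camera noise):

```python
# Each car's print gets a barely-visible sinusoidal pattern at its own spatial
# frequency; the app looks for which known peak shows up in the photo's
# spectrum. Frequencies and amplitudes below are invented for illustration.
import numpy as np

CAR_FREQS = {"beetle": (25, 40), "delorean": (60, 15), "mustang": (45, 45)}

def embed(image: np.ndarray, car: str, strength: float = 2.0) -> np.ndarray:
    """Add a faint 2-D sinusoid keyed to `car` onto a grayscale image."""
    h, w = image.shape
    fy, fx = CAR_FREQS[car]
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = strength * np.cos(2 * np.pi * (fy * yy / h + fx * xx / w))
    return np.clip(image + pattern, 0, 255)

def detect(image: np.ndarray) -> str:
    """Return the car whose keyed frequency bin carries the most energy."""
    spectrum = np.abs(np.fft.fft2(image - image.mean()))
    return max(CAR_FREQS, key=lambda car: spectrum[CAR_FREQS[car]])

# round-trip check on a synthetic grayscale "poster"
poster = np.random.default_rng(0).uniform(80, 180, size=(512, 512))
print(detect(embed(poster, "delorean")))   # -> delorean
```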
The article does mention that the client rejected a similar approach. Steganography seems like a bad choice for a museum setting where you don't own the images.
Sounds like the client cared a lot about the user experience being smooth (they declined the solution of presenting the user with the narrowed-down choices of which car they took a picture of), and I think adding a bunch of QR codes to this aesthetic wall of car illustrations would not align with that goal.
Persistent memory is a software abstraction and a corresponding programming style, both of which are easy to implement and practice on ordinary computers — fancy newfangled non-volatile memory hardware is not required. Persistent memory programming is easy to learn, and it can make applications simpler and more efficient by streamlining the handling of persistent data.
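For a flavor of the style, here is a minimal illustrative sketch (in Python, and not code from the column): memory-map an ordinary file and treat the mapped bytes as application state that outlives the process.

```python
# Minimal sketch of persistent-memory-style programming on ordinary hardware:
# memory-map an ordinary file and treat the mapped bytes as long-lived program
# data. Illustrative only; real persistent-memory code also worries about
# crash consistency and flush ordering.
import mmap, os, struct

PATH = "counter.pmem"          # hypothetical backing file
SIZE = 8                       # one 64-bit counter

# Create and size the backing file on first run.
if not os.path.exists(PATH) or os.path.getsize(PATH) < SIZE:
    with open(PATH, "wb") as f:
        f.write(b"\0" * SIZE)

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)         # the "persistent memory"
    (count,) = struct.unpack_from("<Q", pm)  # read state left by the last run
    count += 1
    struct.pack_into("<Q", pm, 0, count)     # update in place
    pm.flush()                               # push the change back to the file
    pm.close()

print(f"this program has run {count} time(s)")
```

The counter is read and updated in place through ordinary loads and stores, with no separate serialize/parse or save/load step; that is the streamlining of persistent-data handling described above.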
Author Bio:
Terence Kelly studied computer science at Princeton and the University of Michigan, earning his U-M EECS/CSE Ph.D. in 2002, followed by twenty years in industrial research (HP Labs) and software engineering (AWS/Amazon). Kelly now teaches and evangelizes persistent memory programming and writes the popular “Drill Bits” column in ACM Queue magazine (https://queue.acm.org/DrillBits).