Take more screenshots (alexwlchan.net)
657 points by goranmoomin on July 24, 2022 | 231 comments



A few years back I was going through some old floppy disks I found in a box and, on one of them, I found a screenshot I took of my desktop circa winter of 2000. In it was a window open with a MUD I was logged into at the time. Another window had Winamp open with a playlist of songs and another window had ICQ open. The only reason I took it was because there was an unofficial competition between our pub and another pub elsewhere on the MUD about which was more popular, and we had finally surpassed them.

It's amazing how many emotions seeing that one image gave me. But the biggest was just this overwhelming sense of nostalgia. As I looked at it, I could remember what I was thinking, what I was feeling, everything that was happening in my super confusing teenage life at that time. Occasionally I will look at that image now, and even 22 years later I can still feel all those feelings again.

Of course, my ex's character is in the screenshot too. So, a bit bittersweet as well. :/


Exact same thing happened. Found a bunch of old screenshots from 2000 to about 2006. So many programs you just stopped using at some point without really realizing, but seeing the screenshot immediately makes you feel like past you again, and what it was like to use your computer back then.

Especially when you see parts of a conversation with someone you haven't talked to in over a decade, or who passed away. You'd think that's what a photo would do, so I was surprised how strong of an emotional reaction screenshots can trigger. But then, if you were spending 90% of your time online as a teen/early 20s, it's not that surprising on second thought.


> But then, if you were spending 90% of your time online as a teen/early 20s, it's not that surprising on second thought.

You know, this is a very enlightening point. I never really thought about it from this angle, but there is a lot of truth to this.

I struggled a lot as a teen with anxiety, depression and bullying. I had a few IRL friends, but the vast majority of my social interaction during that time came via MUDs and chatting. Many of the people I played and chatted with were fellow social outcasts, and we created our own parallel virtual communities to support and lift each other up. It didn't matter where we were, what we looked like, or how we did or didn't fit in. Many days in the 90s it felt like going to school was the thing I had to put up with, and logging in and seeing my friends when I got home was my real life.

Without them, there's a very real chance I might not be here today. Even all these years later, the people I met virtually during that time are still some of my best and closest friends, and it's a real treat when my travels take me close enough that we can meet for coffee or lunch. Many were at my wedding even, and in one case that was the first time I had ever met them IRL. And yet we knew each other deeply. It felt like we all grew up together because, kinda, we did.

When you look at it like that, those of us who grew up in that environment would look at a screenshot from that era the same way others might look at random photos from high school. Because this was our world.


Very cute story! I have a lot of my files going back to my first computer, maybe 2003 or so. I have a lot of screenshots and high school work. I have all of my chat logs from MSN. I just know when I'm older and my mind is weaker, looking back will help jog my memory.


I still have computer files from 15 years ago, from when I was in high school. They are of no use, but I keep them around: class projects, PowerPoint slides, Word files.

I have deleted everything from uni though.


Interesting madeleine cookie


maybe some HN readers don't get the madeleine reference: https://www.thelocal.fr/20190814/french-expression-of-the-da...


I'll expand on this since I found it interesting. Essentially it's a reference to a French epic called "In Search of Lost Time" by Marcel Proust. It was his life's work and is around 4.2k pages or 1.3 million words! That's much larger than "War and Peace", "Les Misérables", "Crime and Punishment", "The Iliad", and "The Odyssey" combined, which is super impressive.


It is also impressive how Monty Python managed to compress it to 30 seconds :-)


I actually haven't seen Monty Python's parody of it! Will be sure to give it a watch after reading it



I have never had a madeleine cookie; they are not common around here. Are they worthwhile to bake yourself? Since they are so famous because of this reference, I think I should find one to taste.


They use a lot of butter and they go well with coffee. Yes. (Both Wikipedia [1] and the Joy of Cooking consider them cakes rather than cookies.)

[1] https://en.wikipedia.org/wiki/Madeleine_(cake)

[2] Irma Rombauer, Marion Rombauer Becker, and Ethan Becker. Joy of Cooking (rev. ed.). Scribner. 1997. pp 962-963.


Interesting, for some reason I translated the Croatian word "kolačić" for "small cake" to cookie. I guess cookie is not a small cake. In Croatian the expression is "madeleine kolačić".


Thanks for this. My father was reading Swann’s Way when he passed away. I wonder what his Proustian madeleine was?


I think you would enjoy the “Emily is away” series of games on Steam.


I've gotten mixed reactions whenever I share this, but when I program I like to record my screen with OBS.

* It's a mental hack to keep me accountable, especially now working from home. If I'm in an office anyone can look over and see whether or not I'm working. It started as an attempt to mimic this feeling at home, even though I'll be the only one to ever see the recordings.

* It allows me to go back and see how I worked in the past. I have a few videos of myself working from 2015 which I think is pretty neat just because of how different my workflow was back then compared to now. I'm not using the same tools or even on the same operating system.

* I'm working on video games, which is what makes this very useful for me. If something visually interesting happens, or if there's a graphical bug of some kind, I can go back and break down exactly what happened. I've stepped through videos frame by frame in the past to debug; it's been surprisingly helpful.

* It allows me to go back and see my progress. I can know what I was working on a given day, see how far I've progressed, it's just generally a good motivator. You can of course do this with git, but if you're working on something visual it can be nice to see it in motion rather than a textual diff.


I discovered a while ago that all those errors and bugs that only appear when you demo something to an audience also magically appear when you record yourself demoing it to nobody. Maybe narrating a feature to a pretend audience takes the blinders off enough that you notice little mistakes you wouldn't have otherwise.


Very similar to the process of rubber duck debugging.


Back in my office days we were doing this constantly. The sentence "will you be my rubber duck" always put a smile on my face.


I often wish I had a pet husky so I could walk him through some tricky Python and have him figure out my bug with me.


This is a fantastic tip, thank you! I'm definitely going to try this out.


Sometimes, I do a full screencap with my face when I am coding. Then at the end of all that, I will even do a reaction video to my full video.

Why? I REALLY enjoy the dopamine rush when you are struggling and then find a solution. I see myself pulling my hair, staring blankly at the screen, and then at a random moment of pure luck I find a solution and it literally is euphoric.

I enjoy reliving those moments.


Nice to hear this! I am a big fan of screencasts as a way to do async updates inside remote teams. Tools like mmhmm and Yac are really good at this, and it gives everyone a high-bandwidth walkthrough of whatever you're working on, however you do it, so code, drawings, docs, etc. work across a wide spectrum of activity. The rubber duck effects are a bonus!


What’re you guys working on that this is a regular occurrence? I just never run into walls like this when coding stuff up.


I'm working on a new collaborative editing engine for arbitrary data, using CRDTs. I'm trying to keep it highly performant. (Like, 300M edits / second performant).

I've been working on this problem for years and I've gone through dozens of design revisions. Sometimes I catch bugs in the design phase. Sometimes I only notice core problems after I finish implementing parts then attach a fuzzer. (Fuzzy boi is the best and the worst.)

Some design problems have taken me months to find a solution to. One of my solutions ended up with me rewriting thousands of lines of working code, written over several months.

We're making great progress; but sync engines are hard.


Debugging is a huge part of this. For example, on the web, where you can plop a breakpoint or print statement anywhere and get a good level of transparency into what is happening, issues tend to get resolved quickly.

In game development, where the GPU is not really going to spill the beans about what is happening under the hood that easily, one can be stuck for a long time.


Yeah debugging or testing theories for production only bugs (without adding risk) can be a big source for "aha!" moments.


I'm writing Swift code that calls Apple's UIKit. I'm constantly wasting time trying to figure out how to use the API properly, since it's buggy and poorly documented. Each solution brings relief, not euphoria.


Sounds like hell


I find SwiftUI a lot more fun but run into similar feelings using it, documentation is maybe worse :/


Multithreading problems. Algorithm problems where you have two variations of the algorithm that work for different combinations of inputs and you need it to work for both cases, so you need to merge the algorithms together; this is my most common programming experience. Realising how many problems a problem has, such as constraint satisfaction, layout engines and programming language design. Myers' algorithm and diff3. See my profile for links to my GitHub profile to see what I'm working on.


Reducing latency. Or any sort of optimization for that matter. It’s usually a slow methodical process, but every once in a while you look at the right trace and instantly know what the problem is and how to fix it.


It sounds intriguing, but where do you find the space to store all the recordings? A bunch of external drives? Feels like 8hr/day × 20 days/month of recording my multi-monitor setup would fill up my drive pretty fast.


If you’re working on open source, stream straight to YouTube or Twitch. Can be private or not.

I do this sometimes. The accountability hack works even better if someone could be watching.

A bonus is that it feels way more natural to narrate (aka rubber duck) problems when you are streaming.


x264 veryfast, 1080p at 10 fps and 2000 kbps, is more than enough for plain-text recordings, and it won't take that much space.

You can go even lower with other encoders (x265), and lower still if you don't record audio at all.
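
For reference, a rough ffmpeg equivalent of those settings (a sketch assuming X11 capture on Linux; OBS exposes the same knobs in its output settings):

    # ~2000 kbps, 10 fps, 1080p, x264 veryfast: plenty for mostly-static text, no audio
    ffmpeg -f x11grab -framerate 10 -video_size 1920x1080 -i "$DISPLAY" \
        -c:v libx264 -preset veryfast -b:v 2000k -an capture.mp4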


2000 kbps ≈ 0.25 MiB/s ≈ 900 MiB/h?

That's only about 1.1 TiB per year doing it 5 h/day × 5 days/week × 52 weeks.


You only need 40kbps for acceptable audio, so I wouldn't worry about that at all in this ballpark of video bitrate.


If you record mono speech, opus can produce great results at 15kbps.


If you just want speech, yes. For cheap generic audio I don't think I'd push it below 32 kbps unless I really needed to shave bits.


A while ago I tried to see what compression I could get out of screen recording losslessly to a scratch disk and encoding afterwards as slowly as I could wait. I didn't write any numbers down, but the difference in efficiency was significant. Some observations:

- Of the lossless encoders in OBS/libav, utvideo was the best in both CPU usage and efficiency, followed by lossless ultrafast x264.

- An SSD can handle even uncompressed 24bpp 1080p60, which is 375 MB/s. Typical screen content compresses well below the ~100 MB/s write speed of an HDD. Fullscreen video does not, instead gradually filling the write cache until either OOM or thrashing.

- For onscreen content, I prefer the bitrate tradeoff of keeping PC color range and not chroma subsampling.

This technique isn't as effective for this use case of recording several hours daily, since reencoding must be fast enough on average to keep up. Best to already have a home server (any spare desktop). Otherwise, use AOM codecs, known for poor multithreading, to encode at full speed without hogging CPU.

ps, temporal compression means that dropping framerate makes surprisingly little difference with modern codecs. But I really should be writing down the results of my ad hoc tests...


(3 days later): Necroing with a link to this serendipitous related submission about building a NAS for professional video editing from 40 TB of SSDs: https://news.ycombinator.com/item?id=32235158

Yea, on second thought, long term screen recording like this would be a terrible waste of silicon. At 150 MB/s, 46 days of nonstop recording would exhaust the entire 600 TBW of a 1 TB consumer SSD. And for fun: a high-end 500 GB SSD writing at 3500 MB/s could burn its 300 TBW in a single day, not counting the EoL slowdown.

Curiously, RAID 0 hard disks are perfect for such workloads, yet the blogger has still chosen to use consumer grade SSDs.


This is a phenomenal way to wear out an SSD for no real reason at all.


> An SSD can handle even uncompressed 24bpp 1080p60, which is 375 MB/s.

Well, it can for an hour.


You're using 4:4:4 (disabled chroma subsampling) to keep text readable?


If you have an old machine you can use as a NAS and run raidz2, disks are $7.50/TB or less: https://diskprices.com/

Screen captures also compress much better than live action since most frames are duplicates of their predecessor. So a cheap NAS can run for years before you start thinking of deleting VODs.


No need to record 4K 120 fps videos if you're doing web development; something like 1080p at 10 fps might be enough, and it won't take a ridiculous amount of space.


Parent comment specifically mentions recording for game development so 1080p at 10fps probably isn't going to cut it.


Update it to 20~25 fps then; it still won't take a ridiculous amount of space.


Unlike movies, the content in screen recordings usually remains roughly the same for minutes before switching scenes, so it can achieve a significantly higher compression ratio even at high fps.


I wrote more about it here: https://news.ycombinator.com/item?id=32223240

But the TLDR, ffmpeg can remove duplicate frames:

    ffmpeg -i in.mkv -map 0:v -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4


1080p may easily mean that the code is unreadable if you have a decent size screen.


I've been streaming some of my side projects on Twitch, with OBS.

During the stream, I keep up a fairly constant spoken description of what I'm doing, what I'm thinking, what problem I'm stuck on, etc.

I've noticed I've also been speaking my thoughts out loud when programming, but not streaming. It ends up being a continuous "rubber duck" conversation, and feels (completely subjectively) like it helps me develop easier/better.


Yeah that's neat! The part about reviewing the video. Back in the day we would use VHS to record the game and capture rare glitches to review. Nowadays our QA runs with OBS always on and can attach clips to bugs. It would be cool if every dev had it too.


I've found this too... at work I regularly make videos of what I'm doing for other people. Then I realized how much stuff comes up in the videos because I'm being attentive, and started making videos I never share. Like it splits my attention to both act and watch myself acting


I've started doing this more. The videos of my work have always outlasted the work itself. Even though technically I can dig up old compilers or try to update my dependencies, I rarely do. So whatever was captured in the video becomes the only artifact of my older programming projects.

I used to think that video formats would no longer be supported over time, but even the oldest weird video formats still play in VLC and MPC, and probably would work fine if uploaded on YouTube.


I did the same for a while. It's a neat productivity hack. There are a few services that market accountability by hooking you up with strangers. Both of you must have your webcams enabled, and you just work on whatever you need to do for a set period of time without talking.


I used one such service. It was such a drag. Hated it. Does anyone use it repeatedly with different strangers?

I think I’d prefer live streaming to keep myself accountable to one-on-one sessions with both cams on.


How do you set up OBS to keep the recording sizes tolerable?

I've used this before on engineering-grade machines, but it doesn't do so well on an "everything is in the cloud so you can use a word-processor-quality laptop" machine. Any advice?


I have a 4K display but record at 1080p. It's a bit blurry, but I'm not really using it to read small bits of text, just to see the general state of things. Recording at 20 fps as well.

I have a small shellscript that takes all video files recorded by OBS, and runs them through this ffmpeg command:

    ffmpeg -i in.mkv -map 0:v -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4
Using mpdecimate removes duplicate frames, so if nothing is happening on your screen (smaller changes get ignored, like my clock showing the seconds), those duplicate frames are dropped.

So one ~1 minute video of you thinking for 40 seconds can get reduced to 20 seconds. Not uncommon for some of my video files to go from multi-GB to just ~100 MB when removing all the pauses.


Very nice tip, mainly the part about when something happens you can rewind it.

What about hard disk space? How do you handle it?


The space requirements can be very low capturing something like writing code, where only 1% of the screen might change second-to-second.

"ffmpeg -f gdigrab -i desktop -c:v libx264 -preset medium -fps_mode vfr -crf 0 -an -vf mpdecimate capture.ts" (Windows, -f x11grab -i $DISPLAY for Linux on X11) produces a lossless video that averages 102 MiB/hour for me (1920x1080 @ <= 60fps). That's about two cents a day at current disk prices, and easy to upload as a private YouTube video if you don't want to lug the files around.

If you don't like the CPU hit, use -preset ultrafast to record the capture using less processing power but giving a larger file, and then re-encode that file later using -preset slower. There's no quality loss if you used lossless mode (-crf 0), and for content like this the savings are especially large (reducing to around one-quarter ultrafast size, in my experience).
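
The later re-encode is just a second pass over the same file; something like this (a sketch, reusing the capture.ts name from above):

    # Still lossless (-crf 0); just spends more CPU time to shrink the file
    ffmpeg -i capture.ts -c:v libx264 -preset slower -crf 0 -an capture_archived.mp4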


Wow, I did not know that. Awesome!


That’s an interesting idea. I might try this myself


Hmm, that's actually a very clever hack.

I'll try it!


why don't you just stream on twitch


Because of sharing internal information from the screen? That would be my first thought.


BTW, do you think Twitch is best for this? I'm working with JS for the first time and the tools seem really slow and poor. I'm sure I'm doing something wrong, so I figure watching others would be great.


I would do that but my workplace doesn’t allow it ;(


I totally understand the desire to keep a record of the past, and space is cheap so why not. I used to be a big "digital hoarder", virtually never deleting anything that might be a bit interesting. But a couple years back I deleted most, though not all, of the "archive" of past me. It was a great decision that I don't regret. The important things you did will still surface from time to time. It's also always cool to accidentally find a photobucket or google docs account you forgot you had and look through it for 10 minutes. But I just don't find value in intentionally preserving a digital record of myself, and instead allow serendipity to poke my nostalgia centers on occasion. Sorry for violating the spirit of the thread with a contrarian opinion. My point is just that I've done the digital hoarding thing for years and it turned out to not have value, for me.


I am myself dealing with the effects of my digital hoarding and trying to delete as much as I can, but I do think that is different from this.

1. Photo apps like iCloud Photos, Google Photos, and PhotoPrism are getting much better at auto-organizing/cataloging what photos are and surfacing them together in much more interesting ways. More photos is now a plus instead of a minus.

2. You can always delete, but if you never record you can't go back and record (often).

So I take a Marie Kondo "Keep what Sparks Joy" approach and delete anything I find that I do not care for when I find it. I also sometimes pick an area of stuff I have and try to aggressively delete things I don't care for.


>It's also always cool to accidentally find a photobucket or google docs account you forget you had and look through it for 10 minutes.

I find that horrifying. I would probably scramble to delete that as soon as possible. Don't know about anyone else here, but having my junk float around the internet is mortifying. I delete unused accounts as soon as the thought of it pops into my mind.


Yeah like, dude deletes all his local files and is fine with personal stuff floating through the internets and randomly finding that stuff. I'd rather do it the other way round.


Different things are important to different people. I did have a "digital paranoia" phase but it gave me more anxiety than it was ever worth. :)


I used to save everything I did in the past. Over time, I've found that I almost never needed to access those files and most of that wasn't even useful for the kick of nostalgia. Old games? I already replayed them to exhaustion.

I learned that the nostalgia is not about the files by itself but my life context at that time. I don't miss old code or Old OS's. I miss that sense of wonder when I was less experienced and more naive, and everything was new.


>I learned that the nostalgia is not about the files by itself but my life context at that time.

I'm learning the same - whenever I feel nostalgic about playing an old SNES or PSX game, I've realized that it was just about that time in my life, and usually just watching a clip on youtube or listening to the soundtrack is enough to scratch the itch, rather than actually playing the game again


While your memory might be great now, it won't be so forever. Serendipity might not happen as often as you would like, and "important things" may not be all you want to remember.

I'm sure your parents or someone from their generation have actual, physical photo albums from their past - and that the experience of browsing through these photos brings back things that they haven't necessarily _forgotten_ about, but that they wouldn't have brought into active memory unless they were browsing through them.

Over the years I have experienced and built many things that I do not deem "important" to me, yet when I see them mentioned (even in writings by myself), it takes me back to that point in time - all the feelings, learning and discoveries that it brought to life.

An example is scrolling through a list of my repositories on GitHub. Some of the projects on there I have "forgotten" about, but with the mention of it I am instantly brought back and remember a whole lot more details - motivations, feelings, the ecosystem...


This is just the way I see it: unless you make a deliberate effort to scrub yourself off the Internets, most people leave a bewilderingly enormous digital footprint. I think that's what makes the analogy with the physical generation not really work: that generation had a scarcity of records, while we have an absolute over-abundance. For most of history, most of our conversations, creations, the way we looked, etc. were not recorded, so I just don't see this over-abundance of preserved records -- which is quite a novel thing -- as being important to the human experience. But definitely appreciate your perspective, thanks!


It’s not always about hoarding.

Legally it’s very helpful if you create a paper trail of your work


Yeah, I think it can be great if you're intentional about what you're preserving and why. To elaborate a bit, I went from having tens of thousands of emails a few years ago to "only" having about 5k now. I did that by adopting a strategy of aggressively deleting trivial emails. I apply "aggressive decluttering" throughout my digital life, with screenshots also (trying to stay on topic a bit), old conversations, failed creative projects, etc, and have found the benefits of less clutter to be profound.

I never really regret deleting something, but that could be because I try to keep my life simple, within reason, and focus on the future.

I also recognize, as you point out, that there are limits to this -- sometimes there is a genuine need to keep a record. As a programmer, my work is all tracked in git. For a creative professional, I assume that a basic requirement of that sort of job is an excellent backup system.


I always try to separate my data by importance.

I.e. photos and important documents are only about 50 GB, so very easy to keep. These would be the things that I'd want my family to keep if I died.

Then there's random projects and files that I keep in yearly archives, just to look at and remember what I was up to x years ago. That's also not a huge amount of data (1-2 GB per year).

And then there's data which I didn't create, e.g. movies, game installers, ... where a loss wouldn't really be a big problem.


I used to feel the same way. Then the internet forgot the stuff that I thought would be there forever. Now only a few remnants remain. I still look at my DeviantArt and LiveJournal accounts when feeling nostalgic. Those will also one day be relics of the past. I should back them up somewhere.


The "Virtual Trash Can" approach can be a middle-ground. You keep records, but regularly move them to a NAS or any other storage that isn't something you'd accidentally wonder into.

In day to day life these past record could as well not exist, but you still get to go look at them if you're willing to make the effort to do so. Also being willing to lose these items if something catastrophic would happen relief a lot of the archiving burden.


Hoarding is way different from taking snapshots of your work.


Yeah. I meant that I don't see the value in keeping those snapshots for years and years. Space is cheap, but for me there was something unhealthy about the practice, which is why I used the word hoarding. Deleting the hoard was actually quite freeing for me, and it was a little surprising that I never actually missed e.g. the hundreds of screenshots of in-progress games I was developing. So I just wanted to share that perspective, not really expecting people to agree (almost didn't post my comment at all.)


I actually like when such findings evoke older emotions in me. It puts things in perspective for me: how much I have grown and how much I am still the same, etc.


i i i me me me i i i


I use Hazel[0] to archive everything in my "downloads" folder after a month. Files get separated into "~/Archive/Images/", "~/Archive/Screenshots/", "~/Archive/Audio/", etc., depending on certain criteria (mostly file type). Each sub-folder is broken down by year and month, for example "~/Archive/Images/2015/05/". I also occasionally dump stuff that doesn't belong elsewhere into the same folder system, for example WhatsApp media.

Basically, if it doesn't need to be specifically filed somewhere, I just put it in my "downloads", knowing that it will be somewhat searchable in the archive by either date, file type or in-document search. This is great for all of the bits and pieces that you don't necessarily have a home for or want to manage.

[0] https://www.noodlesoft.com/


I do something similar, except files are moved from Desktop > Downloads > 'Needs Review' after 1/7/30 days. The vast majority of files are trash, and I simply delete them every other week from the 'Needs Review' folder manually.


Bit late to this thread. Another "archive" idea is to capture archive.org links of past projects. And it can be used to showcase things in the future.


Is there a similar program for Windows?


File Juggler (https://www.filejuggler.com) seems to be the recommended alternative.


It's too bad that screenshots don't have more useful metadata. Or any useful metadata, beyond a timestamp.

I'd like to have the names of all programs visible in the screenshot (easy), possibly application specific metadata like the opened filename or a URL (more difficult) and more generally full OCR of the visible text (pretty easy). You'd need a PDF to get the most out of this, but presumably most other image formats have generic metadata storage.
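
A rough sketch of the kind of thing I mean, assuming Linux with ImageMagick, xdotool, exiftool and tesseract installed (the tool choices and paths are just placeholders):

    #!/bin/sh
    # Capture the screen, stash the active window title in the image metadata,
    # and OCR the visible text into a sidecar .txt next to the screenshot.
    mkdir -p "$HOME/screenshots"
    STAMP=$(date +%s)
    SHOT="$HOME/screenshots/$STAMP.png"
    import -window root "$SHOT"                            # full-screen capture
    TITLE=$(xdotool getactivewindow getwindowname)         # active window title
    exiftool -overwrite_original -Comment="$TITLE" "$SHOT" # embed it in the PNG
    tesseract "$SHOT" "${SHOT%.png}"                       # writes $STAMP.txt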


A paid tool in this direction is APSE (https://apse.io) which bills itself as a personal search engine that OCRs intermittent screencaps. I loved the idea, but in practice it lacked polish. I agree that additional metadata like foremost application filepath/url would take this to another level.


Additional meta-data may in the future be extracted through machine learning from such videos.

Regarding open programs: I once led a project where we developed something that keeps track of the software you're running (no screen recording) in order to conduct research into attention and distraction. We didn't have the resources to support many platforms, so we wrote only a Windows client (the most used OS on the floor). It was similar to RescueTime https://www.rescuetime.com/ but more respectful of one's privacy and absolutely avoiding the cloud, as we deployed the experiment in a lawyer-intensive environment; for instance, we logged running program names but not the titles of open windows, because file names often reveal sensitive matter. We asked people occasionally how productive they felt in the last half hour, and they could comment.


> Additional meta-data may in the future be extracted through machine learning from such videos.

Why can't we let the ML models figure out the classification themselves and then give them the human data to adjust, so it's readable by humans?


The screenshot tool I use, ShareX (https://getsharex.com/), allows for so much customization. It's FLOSS.

You can set the file name using various parameters in combination, like:

- time and date

- program name

- window title

This can also be configured to do complex workflows, like: for each screenshot, add a border, then copy the image to the clipboard, upload it to Imgur and copy the URL, and finally run OCR and copy the results as well.


Check out https://www.manictime.com/. It doesn't give you OCR but could still be useful.


I have taken screenshots of my computer screen every minute since 2008. I have a dedicated 2 TB HDD storing these archives, compressed with 7z. It takes just about 100-200 MB/day, so it's quite easy to store.

These archives have saved me from data loss a few times (due to power loss, or bad mistakes). I just browse through the archive to look for the screenshots of the code I was writing and recover it by typing it in again.

This is the screenshot tool I use: https://github.com/soruly/TimeSnap
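
The daily compression step is nothing fancy; one way to do it (just a sketch, not necessarily what TimeSnap does, and the folder names are placeholders):

    # Pack today's screenshots into one solid 7z archive, then clear the working folder
    7z a -mx=9 "archive/$(date +%F).7z" shots/*.png && rm shots/*.png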


I'm glad you found something that works for you, and I don't mean to dissuade you even if I could, but to me that feels like an antipattern if you only use it for typed text.

Consider that with a text editor like Vim, for example, you can "time travel" [0] through your file's edits, or even have undo branches/trees [1][2] available per file. That saves you the trouble of having to transcribe text from screenshots, and also barely uses any storage space.

Plain text is also far more portable and more likely to be recoverable in case of drive failure or file corruption.

Additionally, or alternatively, you could try any sort of manual versioning system or background automatic backup solution that keeps versions of files as you work on them.

[0]: https://vimtricks.com/p/vimtrick-time-travel-in-vim/

[1]: https://neovim.io/doc/user/undo.html#undo-tree

[2]: https://github.com/simnalamburt/vim-mundo
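
As a concrete starting point, a minimal sketch that makes Vim's undo history survive restarts (assumes Vim 7.3+ with +persistent_undo; the paths are arbitrary):

    # One-time setup: keep undo history on disk so :earlier/:later work across sessions
    mkdir -p ~/.vim/undodir
    cat >> ~/.vimrc <<'EOF'
    set undofile
    set undodir=~/.vim/undodir
    EOF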


That’s impressive.

I would also like this, but what worries me is privacy. There's already so little of it left that it makes me wonder whether this would degrade it even faster.


When I was 8 or 9, my dad brought home a Macintosh SE/30. On it, I used MacPaint to create 5 or so black and white paintings using the various patterns and brushes. It is probably the first creative thing I did on a computer.

When we upgraded to System 7, the version of MacPaint didn’t run and he told me that essentially the art was lost and unrecoverable. I can picture in my mind what they look like, and wish I had the file to look at.


It's probably not much help now, but if you still have the floppy disks, those pictures can definitely still be viewed again.


You can even bring them into the modern world with Mini vMac… by taking a screenshot of it!


Tragically, they weren’t ever saved to any disk that I’m aware of.


At one point I realized that whenever I went on vacation I took pictures of wherever I went: Germany, France, Sweden, Costa Rica, Texas, Florida, etc.

But then I realized most of the "traveling" that I do is the different places that I go on my computer. If I'm going to take photos when I travel, I should also be taking screenshots constantly, to document places I went on my computer. Such screenshots are useful because they will capture:

1. the large number of websites that exist in a particular year, but which won't exist 5 years later

2. minor interests I have one year that I don't have later. Occasionally I'm curious when I first got into a particular interest, and seeing the screenshots is helpful for documenting that.

When I go back to the oldest entries of my weblog, from 2005, I notice that more than 80% of the links are now 404. The Web is constantly disappearing. Like a forest on its way to extinction, you might as well photograph it now, because it won't be there 10 years later. Most of the websites you visit, you cannot go back and visit them a few years later. Take a photo of them while they still exist.


I use ShareX[0] to screenshot all my work. I have it set up so that CTRL-SHIFT-F6 makes a screenshot of a region and it's automatically uploaded to some shared hosting server. It's a lot of fun to see work back from years ago!

[0] https://getsharex.com/


I disagree with this advice. Nostalgia is a waste of time. I started coding in about 1988. I have nothing that goes back further than 2010. And I don't waste any time on anything I'm not currently working on. Keep working, keep moving forward. Don't waste your time looking backwards.

That's just my opinion, obviously. But I find the obsession with nostalgia in our culture to be sad and destructive.


This sounds really sad. I really hope that you can build memories that are worth spending some time remembering with joy.


> But I find the obsession with nostalgia in our culture to be sad and destructive

My occasional evenings spent looking back through the life that brought me to where I am are sad and destructive?

I find them to be very effective tools for reflection.


Re-read what you quoted, please. I did not say "individuals occasionally recollecting their past experiences is destructive." I said the cultural-level obsession with nostalgia is. There's a big difference.


Is there? What do you consider the cultural level obsession then?


You seem to be conflating reminiscence with reflection.


I agree that one is usually concerned with more recent information (hence the effectiveness of the LRU caching heuristic). However, looking back at past endeavours has been a source of new ideas and joy (reminiscing over achievements) for me. I don't do it frequently though, maybe once a year or once every two years. Revisiting your past serves as revision as well (you get to see concepts you once used in your work/life).


I wouldn't say nostalgia is sad or destructive, but it is also not for me.

I delete everything when I'm done with it. I backup less than 2 GB of personal data. I have around 100 photos of my entire life, and I'm certain I'll get around to deleting all of them too.


I agree. The past is useful for post-mortem, learning from, but it's too easy to yearn for it in lieu (low agency) of moving forward (high agency).


Is there some software (macos) to regularly create screenshots throughout the day? Preferably if it merges it into a time-lapse.

I could probably create something with cron, but maybe a neater solution already exists.


> Is there some software (macos) to regularly create screenshots throughout the day?

On macOS here's a little shell command to take a screenshot every 60 seconds and place it in the /tmp folder:

   while true; do
      screencapture /tmp/screen_capture_$(date +\%s).jpg ;
      echo "wrote screenshot";
      sleep 60;
   done
You can also include the name of the front most application by fetching it with AppleScript and then putting that in the filename. (Or you could put different app screenshots in different folders)

   while true; do
      FRONT_APP=$(osascript -e 'tell application "System Events" to name of first application process whose frontmost is true');
      screencapture /tmp/screen_capture_${FRONT_APP}_$(date +\%s).jpg ;
      echo "wrote screenshot";
      sleep 60;
   done
Of course if you want this to run long term, use a cron job or even better, a launchd job. With launchd you get better coalescing and sleep recovery behavior. You can also add other fun trigger events. Like take a screenshot every time your `.zshrc` is modified [0]. Or a screenshot every time your internet access changes (by triggering on /etc/resolv.conf)

[0] Look up `WatchPaths` on my favorite launchd guide here: https://launchd.info/
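
For example, a minimal launchd agent that fires screencapture every 60 seconds might look like this (a sketch; the label is made up, and you'd swap in WatchPaths or other triggers as described above):

    cat > ~/Library/LaunchAgents/local.screenshot.every60s.plist <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>Label</key>
      <string>local.screenshot.every60s</string>
      <key>ProgramArguments</key>
      <array>
        <string>/bin/sh</string>
        <string>-c</string>
        <string>/usr/sbin/screencapture -x /tmp/screen_capture_$(date +%s).jpg</string>
      </array>
      <key>StartInterval</key>
      <integer>60</integer>
    </dict>
    </plist>
    EOF
    launchctl load ~/Library/LaunchAgents/local.screenshot.every60s.plist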


Yes, I have ten years of screenshots (and webcam shots) at half hour intervals from having LifeSlice:

Source: https://github.com/wanderingstan/Lifeslice

Download page: http://wanderingstan.github.io/Lifeslice/

I developed it as an early Quantified Self tool primarily for the Webcam shots, but also have been saved on more than one occasion by having screenshots of work that would otherwise be lost.

Edit: the first version was just a shell script, which if you want a starting point to modify: https://github.com/wanderingstan/Lifeslice/blob/master/1.0-S...


I use cron; I think it's pretty neat:

    */$INTERVAL * * * * /usr/sbin/screencapture $CRONSHOTSDIR/`date +\%s`.png

and then to make the timelapse:

    ffmpeg -r $FPS -pattern_type glob -i "*.png" -vcodec libx264 lapse-`date +%s`.mp4

You have to give screen capture permissions to cron.


I've been happily using Time Sink (https://manytricks.com/timesink/) to do this for about a year now. It does not automatically merge into a timelapse, but this is accomplished easily with ffmpeg.


Just remember to disable it before attempting extracurricular activities


I used to use Chronolapse, a Python program that takes screenshots, including picture-in-picture with a webcam. It can use MEncoder to create a video. I found I preferred using ffmpeg for my use case.


Something bash'ish

  while sleep 5
  do
    import -id root `date -Is`.png
  done
Then make a time lapse with mplayer et.al.


  $ which import
  import not found

How should I install that tool?


It's part of ImageMagick.


Got it, thanks!


import -window root `date -Is`.png


I love taking screenshots and even wrote a script a while back to take screenshots periodically when there is mouse movement while handling incidents. I also have a crontab job to compress and GPG-encrypt the folder with my key at the end of the day if it's not empty. This, together with another script to record all terminal activity during an incident, has helped me a lot of times in the past when writing up post-incident reports after many late nights!

Unfortunately I only see the old version here, with Flameshot taking screenshots at full resolution. My few later versions turn the screenshot to black and white and apply a few ImageMagick tweaks to make the screenshot file incredibly smaller to store, but you get the idea :): https://gist.github.com/santrancisco/9d14e0105316cfa15f98f0f...


I couldn't agree more, and I appreciate someone giving permission to not feel bad about accumulating screenshots.

I just went through three years worth of screenshots from attending tech bootcamp and working my first dev job. It was a great reminder of projects and people I care about who I'll probably never get to work with again. Like random office polaroids for the remote work era (as if I'm that old).

To my own surprise my only regret was that I should have taken more screenshots... Also they're all pngs...


I am a big advocate of this. I spend a lot of time in online meetings and presentations and use OneNote to take my notes. A screenshot in context is extremely useful when revisiting later, as I am a visual thinker. The ability of OneNote to index text in a screenshot is one of the most useful features in any program and allows me to find items even if I only recall a snippet of context.


I had a scheduled screenshot every few minutes for years. At times it helped me recover data that I lost, but eventually I stopped using it and deleted my archive when I realized I couldn't police my screenshots. Any one of them could have sensitive information or, ahem, sensitive information

Overall though it's nice to see old screenshots even on my phone, but intentionally "taking a screenshot for the memory" feels weird as every one of my screens is boring on its own.


Funny comment! You could write a script that would not take screenshots of, e.g., incognito windows.
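
A rough macOS sketch of that idea (requires accessibility permissions for osascript; the window-title matching is best-effort and the strings are just guesses):

    # Skip the capture when the frontmost window looks like a private-browsing window
    TITLE=$(osascript -e 'tell application "System Events" to get name of front window of (first application process whose frontmost is true)')
    case "$TITLE" in
        *Incognito*|*"Private Browsing"*) echo "skipping screenshot" ;;
        *) screencapture -x "/tmp/screen_capture_$(date +%s).jpg" ;;
    esac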


I take screenshots at fairly random intervals and have done for years. My earliest screenshots are from the 1990s.

I consider this an adjunct to my photography hobby. When I travel, I take opportunistic photos of interesting things I see. When I use the computer, I am a traveller in a digital universe and I do the same.

I find it really interesting to see the evolution of desktop UIs that I have used over the years, and to see the slow change in applications I typically use.


Can't pass by without recommending my https://shottr.cc (an app for Mac). I've set a dedicated folder for screenshots and save literally everything there now: purchase receipts, important chat conversations, work in progress, reminders to myself, Zoom slides. Agree with the author, screenshots are under-appreciated.


Thanks for sharing - installed and liking it so far. Looks like it can replace Monosnap for me.


I like to take a screenshot too when having just created something. Myself I like to create a screenshot of my whole desktop. Years later I get a nice feeling of nostalgia seeing what theme, wm, apps etc. I used to work with. For the same reason I usually enjoy old photos more for what is in the background than the subject itself.


For my creative coding projects (which are usually looping animations) I've set up scripts that export a short video at every git commit. Going back and seeing things develop is quite satisfying, and also useful if I stumbled upon a configuration that I like and want to branch from


That’s a fantastic idea, thanks for sharing. It’d be fun to set something like this up for gamedev


Do you have the scripts or a demo somewhere? sounds cool


I taught myself Commodore BASIC and 6502 Assembly including writing a BBS program between 1982-1985.

I kept literally nothing from that time or probably even a decade after.

Literally hundreds of thousands of lines of code.


I do wish that I had kept a universally readable record of more of my work over the years. I've managed to preserve a fair deal of it, but there are notable holes and a lot of it is software which isn't going to run without some tinkering, particularly the projects with significant third party dependencies. Web stuff (e.g. RoR projects) is particularly bad in this regard, often being nigh unrecoverable.

The bigger thing to emphasize I think though regardless of archival method is to make sure to regularly back things up. Some of my earliest things from the 90s were on the boot drive and got wiped when the family computer needed a reformat. Later on when I had my own computer, a lot of stuff was on an external drive to make room on the boot drive, but one day the external decided to kick the bucket and everything on it went up in flames because there were no other copies. I didn't have much cash at that point since I was a high schooler but I'm sure I could've figured out something that would've preserved at least the most prized documents.

These days I have everything automatically incrementally backed up with Backblaze but now that Time Machine on macOS uses APFS snapshots and is more storage efficient I also want to use my home server for backup.



Any more sites like this?


Anytime screenshots come up in conversation I have to recommend Flameshot, it totally changed my workflow with including them. You can create, crop, and edit screenshots really quickly and it's a must-install for me at this point. Open source and cross platform. https://flameshot.org/


Flameshot is great; the only thing that would be better is if you could video-record a screen region using the same UI instead of just taking screenshots.

Most screen recorders are incredibly clunky. They either require extra cropping and editing after the fact or tank the framerate into being unusable. I don't understand the technical problems and the whys, though.


A wonderful tool that promptly broke when I switched to Wayland.


Flameshot is great. I wish there was a way to use a different image hosting service -- a self-hosted one preferably -- than imgur.


I've been using Notebloc Android app more for this purpose. It can take pictures of pages in a notebook and crop nicely. Also good for taking screenshots of a computer. Although it might seem strange to use a mobile phone to take a picture of a screen, I like that I have a centralized tool that can capture even non-digital stuff like handwritten notes.

Also, it packages pictures into a PDF which I find to be more organized than a bunch of pictures scattered about. I usually close the PDF every day and so each PDF corresponds to pictures taken during a day which gives it a context to make stuff easier to find.


Similar advice if you make electronic music: always save lossless raw audio (wav, flac etc), not just project files. Software updates break project files, hardware updates break old software, backwards compatibility is often imperfect, licenses expire, old VST plugins become unusable, source files for the project become lost, etc. It's always safer to have a pure raw audio export of all your projects even if they're unfinished because there's a chance you'll want to use or reference those unfinished snippets in the future.
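
A throwaway sketch of the kind of export archiving I mean (assumes ffmpeg and a folder of bounced WAVs; the folder names are placeholders):

    # Batch-convert bounced WAV exports to FLAC for long-term archiving (lossless, roughly half the size)
    mkdir -p archive
    for f in exports/*.wav; do
        ffmpeg -i "$f" -compression_level 8 "archive/$(basename "${f%.wav}").flac"
    done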


Oh yes. I have a tremendous amount of Cubase .arr files I can't read, and heaps of floppies in various weird formats from old workstations that there's no easy way to read except by using the original hardware, which is slowly dying out (my second MV-30 gave up the ghost recently).


This was my idea when I built Electron-vLog: automated screenshots of whatever Electron app I'm building right now whenever I run it, so that I can stitch images together later and watch the evolution. If there's no change from the last screenshot, it doesn't save the current screenshot - helps curb duplicates. The core part works, but never got around to releasing it as an npm package.

https://github.com/CatalanCabbage/electron-vlog


I've been using timesnapper on Windows. It will capture a screen shot and timeline it based on title/program text. For years. The time portal aspect is fascinating, and ultimately useless.


Fellow TimeSnapper user. I combine it with a script that collapses it to a daily archive video. 4kx2 monitor setup, at 5 second intervals, have also been saving things for >5 years now.

It feels like a superpower sometimes... Reviewing the exact research steps of projects from years ago based on a timestamp from browser history, digging up an archaic screenshot of just the right configuration screen based on a file modification date, etc.

Sorry you find it useless...


This looks interesting but my antivirus didn't like it as it detected "SWF.Exploit.Kit.Rig.tht.Talos" when downloaded from https://timesnapper.com/


Founder here… looking into this.


Thank you, I appreciate your time and help! I guess it's a false positive on my side but have emailed details to support


Here's a virus total report on the file. I find virus total useful (as a product developer) because it will help show cases where some smaller scanner is reporting a positive.

https://www.virustotal.com/gui/url/718fe790bb2c1d97428199830...

tl/dr -- virus total is happy with the file.

It remains possible that middleware of some sort has altered the file before you accessed it.

You can check the hash of the file like this:

get-filehash ".\TimeSnapperProSetup.exe"

and it should return "B50A4449C9C36871280A842A530EA19694C7CBC4CF55CED5A620D8838D87CA1E" -- the same hash shown in the virus total report.


Thank you for your quick response! I get the same hash, so a false positive it is. I am really excited to test your product, it seems very nice!


And if you've got any old floppy or hard disks - save those suckers now!

I managed to get most of the data off my old Apple IIGS hard drive but there were definitely block errors. So I can still boot up the desktop I had in the 80's and 90's in an emulator. Brings back so many memories. http://www.oldcomputerstuff.com/resurrecting-my-apple-iigs-a...


Last month while I was backing up my old files from my 2005-ish laptop, I also realized that screenshots are very nostalgic. Just like this old Android project I worked a decade ago and if anyone wants to feel nostalgic on Google’s Nexus 7: https://initviews.com/2022/07/16/legacy-projects-part.html


That looks pretty horrible (text on top of placeholder/label?!), but seeing the OS again is indeed nostalgic.

Plus just the feeling of WIP app I guess. That could have been me.


Yeah, it was just an experimental UI design thing. I remember back in the day I didn't give a crap about UI/UX, as long as the ListView functioned smoothly.


Alex says, "Digital work is inherently ephemeral." This is precisely backwards; digital work is one of the least ephemeral aspects of human material culture, exceeded only by occasional miraculous analog exceptions like the Pyramids, potsherds, the Lascaux paintings, and Ötzi's axe. The Torah is digital—encoded in a sequence of discrete symbols rather than continuously varying quantities—and that's why it's survived for 3000 years. The digitization of Socrates's words by Plato and Xenophon is the reason we argue about him today, 2500 years later, rather than his forgotten Persian contemporaries or even Heraclitus.

Being digital is what makes the idea of an "exact copy" make sense. You can make an exact copy of some version of the Torah or the Symposium because it's only the discrete letters that matter; the analog nuances of tone of voice or thickness of pen stroke do not count.

So digitality is the alternative to the ephemerality of the analog, which is inevitably eaten up by moths and rust. We all know this about digitized language, but for some reason now that we've digitized reasoning in the form of computer programs, we habitually throw up our hands and declare defeat in the face of inevitable ephemerality.

This is bullshit.

What I really want, instead of screenshots, is a deterministic, reproducible computing environment. The idea is something like uxn or Nock: a platform that's simple enough to stay compatible forever, and efficient enough to be used for many things, even if there are a few things that I do on a computer that need more performance.

There are a lot of inspirational examples that offer tempting evidence that this is possible for large, interesting classes of computations: the Smalltalk-78 revival emulator Vanessa Freudenberg wrote, the UM of the Cult of the Bound Variable (which had over 300 successful independent reimplementations), Nguyen and Kay's sketch of Chifir, Lorie's archival UVC, Wirth's RISC, uxn/Varvara, the JVM, and the numerous emulators of things like the MS-DOS environment, the NES, and the Gameboy that are good enough to run the original games.

I'm not saying it would be an improvement to do all your digital creative work on an emulated Gameboy in order to ensure that it was reproducible. I think we can do a lot better than that. None of the presently existing archival virtual machines are adequate. But I think the reproducibility of Gameboy games tells us that we don't have to accept bitrot as the price of using computers.

Alex says, "They’re not as good as having the original, working thing – but they’re much better than nothing". Well, let's figure out how we can have the original, working thing! This is software, it's a simple matter of programming.


> a platform that's simple enough to stay compatible forever

Two things that make this impossible in a general sense: security vulnerabilities, and hardware that fails and is no longer manufactured.

It's only possible by defining the forever-platform as a sandbox abstraction, such that the containing layers can be updated for vulnerabilities and hardware.

This is what happened with the old game platforms, but even that is leaky. The NES Zapper is at least one case that remains troublesome. More generally, input and display lag always pose a problem and always will, since there always must be some translation layer for hardware that won't exist forever.

It's not just a matter of software programming. The hardware matters. For your forever-platform to truly work, it would need to include hardware manufacturing instructions from raw materials. Which is possible, but you can see where it goes out of feasible scope for any commercial endeavors.


I am definitely talking about defining a sandbox abstraction, not mass-producing a piece of hardware. I agree that defining a sandbox abstraction is the only way to achieve what I am proposing. (This has been key to software longevity since ALGOL-60 and the IBM 360's microcoded implementations, but I'm proposing to go a lot further than the IBM 360 did.) I also agree that we have to aim higher than mere commercial endeavors can in order to achieve the forever-platform.

Security vulnerabilities at that level are a non-problem. There are no security vulnerabilities in Wirth RISC, in Nock, in Chifir, in the UM, or in the λ-calculus, and there never will be. There probably aren't any security vulnerabilities in uxn/Varvara. None have ever been discovered in the 8086, which is orders of magnitude more complex than what I'm talking about. The 6502 did have some, but they were in the hardware implementation, not the architecture, and are not present in modern 6502 emulators.

Input and display lag can be a problem, it's true, but there are many applications that can tolerate a lot of lag. A guitar pedal cares a lot more about lag than a paint program, which cares more than a word processor, which cares more than a compiler.

Hardware manufacturing instructions are not needed as long as people have access to some kind of programmable computer on which to implement the forever-platform emulator.


If you define anything involving timing and lag as out of scope, then yes, any computational sandbox can be run by any computational host with sufficient resources to maintain state, of course.

Hardware still matters for input and output. For your platform to run a guitar pedal, you'll need some way to physically get an audio signal in and out if you want it to be of any use.


Agreed. And timing and lag do matter a lot for human–computer interaction. Common platforms won't be able to meet the latency demands of VR/AR rendering or guitar pedals anytime soon even without an extra emulation layer. But I think they can do Voxel Space Comanche or MilkyTracker.


Depending on how much one's willing to stretch that argument, they can also point out that all known life is digital.

The human genome is, effectively, a historical digital record (Nature, DOI: 10.1038/nature10231).


I don't think it's a stretch at all to say that the base sequence of DNA, or the residue sequence of a peptide, is digital, and yes, that's what makes reproduction as we know it possible.

Many aspects of life, however, are analog. Magnesium concentrations, membrane polarizations, molecule orientations, temperature, and so on.


Wasm can be a key piece of the system you seek: a simple VM, a serializable heap, and linkage to the outside world that has to be fully defined.


Wasm definitely has some useful ideas for efficient reproducible computing, but it is ridiculous to describe it as "a simple VM" in comparison to Wirth's RISC, the Cult of the Bound Variable's UM, Chifir, Smalltalk-78, or even the NES or uxn/Varvara. I think this page lists over 1000 instructions: https://webassembly.github.io/spec/core/syntax/instructions....


It is a lot, but not quite 1000, yet. From https://webassembly.github.io/spec/core/appendix/index-instr... I count 436 instructions including SIMD, which accounts for over half of them. If I filtered it correctly, there are about 203 instructions in Wasm without SIMD. Many of those are not necessary for most programs.

There are at least three small Wasm interpreters in Rust:

4kloc https://github.com/yblein/rust-wasm

3kloc https://github.com/k-nasa/wai

500loc https://github.com/rustwasm/wasm-bindgen/tree/HEAD/crates/wa...

This list has 169 Wasm instructions https://github.com/rolfrm/wasm-lisp/blob/master/instruction....

Wirth's RISC is neat; I'd love to redo it in RISC-V (only 47 instructions in the base ISA). UM, Chifir, and uxn look like Art (not pejoratively); I'll definitely read the Chifir paper. They would be great systems to run on top of Wasm.

https://git.sr.ht/~bctnry/chifir

One might be able to squeeze a Chifir VM into an ESP-32 (with external PSRAM).


I agree that UM, Chifir, and uxn are Art, and that wasm is at least potentially a great platform to run this kind of archival virtual machine on top of, as well as having some very interesting ideas about how to design a VM instruction set to be amenable to efficient implementation. RISC-V is a good source of ideas for that, too!

And I appreciate you setting me straight about the extent of wasm's instruction proliferation.

But I find much to disagree with.

— ⁂ —

> There are at least three small Wasm interpreters in Rust

None of those are small; the smallest one you found is 500 lines of code, and it's very incomplete, implementing only 11 of the 436 or however many wasm instructions there are, and even those it only implements partially. (Its only arithmetic is addition and subtraction, for example.) Even the 3kloc one says it doesn't pass the wasm testsuite; its pub enum Instruction has 174 items and most of those are implemented as follows:

                Instruction::F64Store(_, _) => todo!(),
                Instruction::I32Store8(_, _) => todo!(),
                Instruction::I32Store16(_, _) => todo!(),
                Instruction::I64Store8(_, _) => todo!(),
The 4kloc one (which I think is actually closer to 2.8kloc) doesn't include any of the SIMD ops, but it claims to implement the whole wasm spec as of 02019, and it's plausible that it actually does. A casual glance at the code doesn't reveal anything that contradicts that claim; it implements about 200 instructions.

None of them include the peripherals, which are by definition excluded from wasm. But peripherals are usually the part of an emulator that requires the most effort, and they're usually a much bigger compatibility bitrot shitshow than your CPU is.

By contrast, the UM interpreter in the Cult of the Bound Variable paper was I think 55 lines of C (also, not including peripherals). My dumb Chifir interpreter was 75 lines of code; adding Yeso graphical output was another 30 lines https://gitlab.com/kragen/bubbleos/blob/master/yeso/chifir-y....

Uxn, despite its inefficiency, runs useful applications on the Nintendo DS today (in 5200 lines of pretty repetitive C, including the peripherals), and wasm doesn't and probably never will. And 365 teams in the ICFP programming contest independently implemented the UM successfully enough to run at least some existing applications; there will probably never be 300+ independent reimplementations of wasm. So in important ways they're already closer to the goal of eliminating bitrot than wasm ever will be. This is probably because eliminating bitrot isn't part of wasm's goals.

(I realize I forgot to mention the Infocom Z-machine, as well.)

Wasm has another deficiency besides complexity: as far as I know there's no standard way for a wasm program to generate some wasm and start running it, although of course the browser platform does provide that ability. This is essential if the thing you want to run under it is a virtual machine or other sort of emulator, because the only way to do an efficient emulator is to compile the code you want to interpret into the instruction set the emulator is running on. Something wasm-like can dramatically simplify this; if you're compiling to wasm, for example, you don't have to do instruction scheduling or register allocation, and you might not even have to do constant folding and function inlining.

— ⁂ —

One of the interesting ideas in the RISC-V ecosystem is that a Cray-style vector instruction set (RV64V) can give you SIMD-instruction-like performance without SIMD-instruction-like instruction set inflation. And, as the APL family shows, such vector instructions can include scalar math as a special case. I haven't been able to come up with a way to define such a vector instruction set that wouldn't be unacceptably bug-prone, though; https://dercuano.github.io/notes/vector-vm.html describes some of the things I tried that didn't work.

— ⁂ —

Why am I being so unreasonable about the amount of code? After all, a few hundred lines of C is something that you can write in an afternoon, right, so what's the big deal about 500 or 3000 lines of code for something you'll use for decades? And nobody has ever written an NES emulator in 500 or 3000 lines of code.

The problem is that, to parody Perlis's epigram, if your virtual machine definition has 500 lines of code, you probably forgot some. If a platform includes that much functionality, you have designed it so that that functionality has to live in the base platform rather than being implemented in programs that run on the platform. And that means that you will be strongly tempted to add stuff to the base platform, which is how you break programs that used to work.

In the case of MS-DOS or NES emulation this is limited by the fact that Nintendo couldn't go out and patch all the Famicoms and NESes in people's houses, so if they wanted to change things, well, too bad. NES emulator authors have very little incentive to add new functionality because the existing ROMs won't use it, and that's what they want to run.


I think SIMD was a distraction to our conversation; most code doesn't use it, and in the future the length-agnostic flexible vectors (https://github.com/WebAssembly/flexible-vectors/blob/master/...) are a better solution. They are a lot like RVV (https://github.com/riscv/riscv-v-spec); research around vector processing is why RISC-V exists in the first place!

I was trying to find the smallest Rust Wasm interpreters I could; I should have read the source first, since I only really use wasmtime. But this one looks very interesting: zero deps, zero unsafe.

16.5kloc of Rust https://github.com/rhysd/wain

The most complete wasm env for small devices is wasm3

20kloc of C https://github.com/wasm3/wasm3

I get what you are saying about making it so small that there isn't a place for bugs to hide.

> “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” (C. A. R. Hoare)

Even a 100-line program can't be guaranteed to be free of bugs. These programs need embedded tests to ensure that the layer below them is functioning as intended. They cannot and should not run open-loop (a tiny sketch of such a self-test follows below). Speaking of 300+ reimplementations, I am sure that RISC-V has already exceeded that. The smallest readable implementation is about 200 lines of code: https://github.com/BrunoLevy/learn-fpga/blob/master/FemtoRV/...
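
A minimal sketch of what such an embedded self-test might look like; the particular semantics probed here (truncating division, wrapping addition, arithmetic right shift) are purely illustrative, not from any real spec:

    // Illustrative only: a program probes a few semantics of the layer below
    // before trusting it, rather than running open-loop.
    fn self_test() -> Result<(), &'static str> {
        if -7i32 / 2 != -3 {
            return Err("division does not truncate toward zero");
        }
        if i32::MAX.wrapping_add(1) != i32::MIN {
            return Err("addition does not wrap mod 2^32");
        }
        if (-16i32) >> 1 != -8 {
            return Err("right shift of negative values is not arithmetic");
        }
        Ok(())
    }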

I don't think Wasm suffers from the base-extension issue you bring up. It will get larger, but 1.0 has the right algebraic properties to be useful forever. Wasm does require an environment; for archival purposes that environment should be written in Wasm, with an API for instantiating more envs passed into the first env. There are two solutions to the problem of Wasm generating and calling Wasm. The first would be a trampoline, where the first Wasm program returns new Wasm that is then re-instantiated by the outer env, as sketched below. The other would be to pass in an API for creating new Wasm envs over existing memory buffers.
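
A rough sketch of that trampoline in Rust, assuming the wasmtime crate; the `generate` export and its (ptr, len) return convention are made up here purely for illustration:

    // Hypothetical trampoline: the guest's `generate` export returns the
    // (ptr, len) of freshly produced Wasm bytes in its linear memory, and the
    // host instantiates those bytes in turn, looping until the guest is done.
    use wasmtime::{Engine, Instance, Module, Store};

    fn run_trampoline(mut wasm_bytes: Vec<u8>) -> anyhow::Result<()> {
        let engine = Engine::default();
        loop {
            let module = Module::new(&engine, &wasm_bytes)?;
            let mut store = Store::new(&engine, ());
            let instance = Instance::new(&mut store, &module, &[])?;
            let generate = instance
                .get_typed_func::<(), (i32, i32)>(&mut store, "generate")?;
            let (ptr, len) = generate.call(&mut store, ())?;
            if len == 0 {
                return Ok(()); // the guest signalled that it is done
            }
            let memory = instance
                .get_memory(&mut store, "memory")
                .ok_or_else(|| anyhow::anyhow!("no exported memory"))?;
            let mut next = vec![0u8; len as usize];
            memory.read(&store, ptr as usize, &mut next)?;
            wasm_bytes = next; // loop again with the newly generated module
        }
    }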

See https://copy.sh/v86/

MS-DOS, NES or C64 are useful for archival purposes because they are dead, frozen in time along with a large corpus of software. But there is a ton of complexity in implementing those systems with enough fidelity to run software.

Lua, Typed Assembly (https://en.wikipedia.org/wiki/Typed_assembly_language), and SectorLISP (https://github.com/jart/sectorlisp) seem to have the right minimalism and compactness for archival purposes. Maybe it is sectorlisp+rv32+wasm.

If there are directions you would like Wasm to go, I really recommend attending the Wasm CG meetings.

https://github.com/WebAssembly/meetings

When it comes to an archival system, I'd like it to be able to run anything from an era, not just specially crafted binaries. I think Wasm meets that goal.

https://gist.github.com/dabeaz/7d8838b54dba5006c58a40fc28da9...


> When it comes to an archival system, I'd like it to be able to run anything from an era, not just specially crafted binaries. I think Wasm meets that goal.

Wasm can only run specially crafted binaries; it can't run arbitrary i386 or ARM7 code, only Wasm code.

I think the way to run anything from an era is with the following four-layer cake:

0. Whatever hardware and operating system the programmer-archaeologist happens to be running 1000 years from now.

1. An implementation of the forever-platform running on that hardware and OS. This needs to be simple enough for the programmer-archaeologist to write and debug without a running implementation to compare to, even if their hardware is base-10 or balanced ternary or has 21-bit word-addressed memory or whatever, but efficient enough to support real applications.

2. An implementation of a popular platform from the era you want to preserve, running on the forever-platform, such as wasm, MS-DOS, RV64+UEFI, or amd64+BIOS. (This doesn't have to be a platform that was widely used, just a platform to which the software you want to preserve can be compiled.) Probably you want to implement this as a binary-to-binary compiler targeting forever-platform code, not an interpreter, so that the slowdown introduced by this layer is less than an order of magnitude.

3. The application software you want to preserve, the "anything from an era".

Wasm is not even close to being a candidate for the forever-platform, but it might be a reasonable choice for (the CPU part of) the layer on top of it, because there are compilers that can compile a large body of C and C++ to it; and because it's a lot easier to compile from, and compile efficient code from, than botched monstrosities like amd64 and ARM7. (Did you realize that in ARM7 with Thumb an addition instruction can change the instruction set the processor is implementing? Because the low bit of the PC tells you whether it's in Thumb mode, and you can specify the PC as a destination register.)

In that context the fact that Wasm keeps changing isn't a big problem. You can compile some code with Wasm today and bundle it into a cart with an implementation of today's Wasm spec, and 8 or 16 years from now when you compile different code to a different version of Wasm, you bundle that code into a cart with an updated Wasm implementation.


You say, "I get what you are saying as to be so small that there isn't a place of bugs to hide." But that's not really what I was saying.

The issue of perfection, bug-freeness, is an interesting one. You could have a specification that was bug-free but implementations that were buggy, or you could have a specification that was itself buggy. Most specification errors won't give you an unimplementable specification; they'll give you a specification that specifies behavior you didn't want.

In the forever-platform context, most such specification errors are unimportant. It doesn't matter if your bitwise operation is accidentally NOR instead of NAND, or if you accidentally specified division to round toward -∞ instead of toward 0, or whatever. As long as your implementation of the forever-platform follows the spec, and you test your forever-platform programs on the implementation and fix them when they break, they'll continue working on future implementations of the spec.

So, with respect to specification complexity, the reason the specification needs to be very short is not so that the specification has no bugs; it's so that the specification has no ambiguities that result in observably different behavior among implementations. (The spec also needs to specify something Turing-complete and reasonably efficient, but those are less difficult.)

A buggy implementation you test on would be a much bigger problem, because it could result in purportedly working "carts" (or "roms" or "programs") that in fact only operate as intended on the buggy implementation. Then programmer-archaeologists would be forced to guess what the behavior of the actual implementation was. In many cases a metacircular interpreter will not help with this: if reading outside the bounds of memory produces 0 or -1, or if division by 0 produces 0 or ∞, or if arithmetic overflow saturates or wraps mod 2³², the metacircular interpreter is likely to inherit this behavior from its host implementation.
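
A toy illustration of that failure mode, using plain Rust arithmetic to stand in for the host and a hypothetical spec: a metacircular layer that delegates division to the layer below inherits the host's rounding, so comparing the two outputs can never reveal the discrepancy, while an independent implementation of the intended spec disagrees.

    // Toy example: the host truncates signed division toward zero, while the
    // (hypothetical) spec intended rounding toward negative infinity.
    fn host_div(a: i64, b: i64) -> i64 {
        a / b // truncates toward zero
    }

    fn spec_div(a: i64, b: i64) -> i64 {
        a.div_euclid(b) // rounds toward negative infinity for positive divisors
    }

    fn main() {
        // A metacircular layer built on the host just delegates downward, so
        // it inherits the host's behaviour and the two always agree:
        let meta_div = |a, b| host_div(a, b);
        assert_eq!(host_div(-7, 2), meta_div(-7, 2)); // both give -3
        // ...but an independent implementation of the intended spec does not:
        assert_eq!(spec_div(-7, 2), -4);
    }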

Buggy implementations might be detectable by formal methods or by asking for alternative implementations, perhaps in a contest format as in the UM case.

— ⁂ —

Wasm3 looks very appealing in a lot of ways. 64 KiB (?) for code and 10 KiB for RAM is small enough to run on a lot of small platforms! I didn't know Wasm could scale down that small. However, most things compiled for Wasm will need a lot more than that.

I think getting something like LuaJIT or HotSpot, or even GCC's trampolines for nested functions, working on top of the trampoline approach you suggest for Wasm, is probably technically possible but impractically slow on existing implementations.


There's a lot to think about here, but right now I just want to respond to one point, about vectors and SIMD.

Performance limits what you can program on a platform, and there's a pretty wide range of interactive software where the performance bottleneck is generating pixels to put on the screen, just because there are so goddamned many of the fuckers. And fortunately that is commonly vectorizable; vector operations implemented by special CPU instructions have been a key enabling hack since BitBlt on Smalltalk in the 01970s. (In fact, the very first machine that ran a GUI had SIMD instructions in hardware, the TX-2, completed in 01958, but it didn't have pixels.)

This is even more important now than before, even without bringing GPUs into the picture. With optimized single-threaded C on an 8-core system, you're getting at best 12.5% of the machine's performance. If SIMD gives you an 8-fold speedup on slinging pixels around, which is common, you're getting at best 1.6% of theoretical performance with single-threaded SIMD-less code. With a naive, CPython-style bytecode interpreter, you have another factor of 40 or so slowdown, so you're getting 0.04% of the machine's performance: 11.3 Moore's-law performance doublings, so on today's machines you're back to 01999, age of Winamp, Quake III, ICQ, RealAudio, and Baldur's Gate. On the 02009 machine I'm typing this on you're back to 01987.

Now, people wrote a lot of interesting programs in 01987. Fractint, Hypercard, Quattro Pro, X11R3, a lot of the more elaborate Infocom games. And modern SSDs and humongous RAM mean you can do a lot of things now you couldn't do then, even if your CPU is clunking along with 01987 performance. And even a basic JIT compiler can probably get you from 0.04% up to 0.5% of the machine's performance, maybe 4% if it's multithreaded, though that makes determinism enormously more difficult.

But in theory Numpy-style vector operations can take advantage of both multiple cores and SIMD, usually giving you about 20% of the machine's performance even in a naive, CPython-style bytecode interpreter, at least on the kinds of operations where it's applicable, and without sacrificing determinism. Possibly with the kind of pipelining and tiling optimizations Numpy doesn't do you can do even better than that.

Of course, you can also design a virtual machine architecture in such a way as to permit efficient compilation, and not have to fall back on turbocharged vector operations. A lot of Wasm is designed with this objective in mind; for example, there is no dynamic typing to require runtime type checks, local variables can't alias linear memory, control flow is structured to help out the compiler, and so on, so that your interpretation/emulation slowdown is almost nil, reduced to the occasional bounds check that couldn't be hoisted out of a loop. And you can provide access to SIMD instructions, which Wasm now does, and multithreading, which it doesn't.

But all of this can't be done on a simple VM architecture. And for a given VM architecture I think an interpreter is always simpler than a compiler.

One of my stretch goals for the forever-platform is for it to be independently and compatibly reimplementable from the spec, that is, without access to a working implementation to test against, ideally within a few hours (as with Nguyen and Kay's "fun afternoon hack" desideratum for Chifir). There are indeed 300+ RISC-V implementations, but they aren't independent in the way the implementations of the UM were; the authors had not only a body of RISC-V software to work from but also other implementations of RISC-V to test the software on. But for Rosetta-Stone-level archival, we want the programmer-archaeologist to be able to get their implementation working even though they don't have an existing implementation to test against.

UM is a demonstration that this goal is achievable, if you're willing to accept that ceiling of 0.04% of native performance for the straightforward interpreter you'd write as a fun afternoon hack.

The hope with vector operations is that they give you an extra three orders of magnitude in inner-loop performance, without stretching the implementation task beyond what's achievable as a "fun afternoon hack", and without introducing so much ambiguity and so many tricky corner cases in the spec that in practice a given "ROM" will not run on an independent reimplementation. I don't know if I'll find a way to do it or not. But that's the objective.

And that's why I've been wrestling with SIMD, and don't think it's just a distraction.

So, thank you very much for the links to the relevant efforts! Oh dear, though, the V1.0 version of RVV is 111 pages, longer than the entire RV32/RV64 unprivileged spec.


The Rosetta-Stone spec needs to have an executable test; otherwise, how do you know it works? Even the software you want to get running is a sort of test; you just need test oracles.

> And that's why I've been wrestling with SIMD, and don't think it's just a distraction.

I think in this instance, Wasm 128b SIMD is a distraction. I agree with everything you have said, and with your goals. Wasm 128b SIMD was a stop-gap and distracts from what will ultimately be a much better solution (vector processing). It is the lowest-common-denominator (128b-wide registers) implementation they could come up with to satisfy immediate performance needs. It does get good speedups on code right now, but it more than doubles the number of Wasm opcodes. It isn't tenable to repeat the same 128 -> 256 -> 512 path that CPUs have taken: the implementation complexity, and having to recode software every time the SIMD registers change shape, is lame. CPU vendors are totally OK with having a needless upgrade train. The future is the past with vector processing, like RVV and flexible vectors. For archival software, I'd argue that SIMD doesn't matter. I don't think it is fair to compare RVV against the base ISA spec; the base ISA spec is trying to be the simplest possible thing, and it is just a bootloader for RVV and custom instructions.

Being able to extract parallelism is way more important than constant factors, even for pathologically slow systems like Python. Thread counts for single-socket systems are already at 256. One could probably implement Quake 3 in PyPy, maybe even in CPython now, and hit 60 FPS. CPython could definitely run Quake 1.

Wasm does support threads now. https://web.dev/webassembly-threads/

Wasm SIMD didn't even need to exist; the AST could have been encoded in a call graph that either executed a fallback implementation or was converted to local SIMD. It could have all just been function calls, like a binary intrinsic. It didn't really need an instruction-set extension. You could encode a vector program, or something shader-like, in a binary Wasm call graph.

I love that you use Moore Units in your comparisons; rarely do folks do this. When folks argue over a 4% difference for some aspect of safety, they are talking about weeks of relative performance on a Moore scale. But no amount of concrete performance gains will let them have safety or correctness. They are content to live in a ghetto, all driving fast cars.


Interesting paper on a formally verified Wasm interpreter.

https://github.com/WasmCert/WasmCert-Isabelle

Mechanising and Verifying the WebAssembly Specification https://www.cl.cam.ac.uk/~caw77/papers/mechanising-and-verif...


You can't have an executable test if your objective is to get from not having an executable platform to having an executable platform. The best you can do is a set of (cart, expected output) pairs; the CBV UM published three carts — http://www.boundvariable.org/sandmark.umz, http://www.boundvariable.org/codex.umz, and http://www.boundvariable.org/um.um — and the expected output for one of them, http://www.boundvariable.org/sandmark-output.txt. They also had an online test oracle, in the form of their scoring server, but that didn't become relevant until after you already had the UM running well enough to run the Codex, or most of it.
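
For what it's worth, such a (cart, expected output) pair is enough for a closed-loop check once you do have something to run. A minimal sketch, assuming a local UM implementation at ./um (hypothetical) and the two files above downloaded beforehand:

    // Minimal sketch: run a local UM on the published sandmark cart and
    // compare its output against the published transcript.
    use std::process::Command;

    fn main() -> std::io::Result<()> {
        let output = Command::new("./um").arg("sandmark.umz").output()?;
        let expected = std::fs::read("sandmark-output.txt")?;
        if output.stdout == expected {
            println!("sandmark output matches the published transcript");
        } else {
            println!("mismatch: this UM diverges from the expected output");
        }
        Ok(())
    }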

However, the third cart was a metacircular interpreter, so in some sense it potentially represents a kind of reference implementation — just, one you couldn't use until your UM was running well enough to run it.

This may not be the perfect strategy but it worked well enough for 365 teams to submit a proof witness for at least one of the challenges in the Codex. And, importantly, it isn't very dependent on the hardware you're implementing the machine on, and in particular it doesn't require you to be able to run an external test suite.

— ⁂ —

You may be right about 128-bit Wasm SIMD.

When you say, "thread counts for single socket systems are already at 256", do you mean you need 256 threads to hit 100% utilization of the AVX units? Or are you talking about a chip with 64 or 128 cores with 2-way or 4-way "hyperthreading"? On https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html... the highest number I see is 64 cores with 128 threads on the AMD Threadripper 3990X. Are you talking about the Tera MTA or Tilera or Altra or Parallela or something?

I'm not sure if mere vector operations scripted by a single control thread (SIMD, but in a broader sense than SIMD intrinsics, including things like RVV) can approach the level of parallelism needed.

Suppose you have 64 cores with 256-bit AVX2, like the 2.9GHz Threadripper 3990X, which Donald Kinghorn benchmarked at 1.57 teraflops on HPLinpack, and (optimistically) you're doing double-precision floating-point. In theory each core can do up to 4 flops per cycle (right?) and thus 740 gigaflops; perhaps he's counting FMA as two flops instead of one, because on the Cray-1 it would have been two flops. Without any parallelism, the max you get is 2.9 Gflops (or 5.8 if you count FMA as two), and if you have an additional 40× slowdown from naïve bytecode interpretation like CPython you get 73 megaflops, 0.01% or 0.005% of theoretical. This is maybe comparable to a Pentium MMX, Pentium Pro, or Pentium II; according to Greer and Henry's SC97 "Micro-Ops to TeraFLOPS" technical paper, matrix blocking boosted ASCI Option Red's Linpack scores from about 10 megaflops per 300 MHz Pentium Pro to about 160 megaflops per.

(This is perfectly adequate for Quake III, but maybe not at 60 fps.)

For our naïve interpreter to hit the 1.6% of maximum performance, 11.6 gigaflops, that you could get from single-threaded AVX assembly on that chip, you need an average vector length of 158 (11.6 ÷ .073). For it to approach 100% of maximum performance we need an average vector length of 10100 (740 ÷ .073). I am not convinced that you can reliably get such large vector lengths in most pixel pushing.

Even with a naïve JIT compiler with, say, 4 clock cycles per bytecode instruction, you need an average vector length of 1010 (≈1024) to get close to maximum performance, or 256 to be only two Moore generations behind (≈02018, or maybe ≈02016 given that the 3990X came out in 02020).
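
The same divisions, spelled out with the inputs above:

    // Re-deriving the vector-length figures above from the same inputs.
    fn main() {
        let peak_gflops = 740.0;      // 64 cores x 2.9 GHz x 4 flops/cycle
        let interp_gflops = 0.073;    // naive interpreter: 2.9 GHz / 40
        let single_thread_avx = 11.6; // 1.6% of peak: single-threaded AVX
        // ~159 (the "158" above): to match single-threaded AVX
        println!("{:.0}", single_thread_avx / interp_gflops);
        // ~10,100: to approach the machine's peak
        println!("{:.0}", peak_gflops / interp_gflops);
        // ~1,020 (the "1010 (~1024)" above): under a naive JIT at ~4 cycles/op
        println!("{:.0}", peak_gflops / (2.9 / 4.0));
    }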

And the situation is 8× worse if you're alpha-compositing pixels or something; an AVX256 register holds 32 8-bit color components rather than just 4.

I think that usually you will need both SIMD and more flexible forms of parallelism to extract that much parallelism.

— ⁂ —

Why does this matter for reproducible software? Because if the forever-platform imposes a 10100× or a 1024× slowdown then we'll only use it when we really have to, and most of our software will fail to be reproducible, and we'll be back to directories full of screenshots of things we can no longer do.

— ⁂ —

Yes, 4% difference is three weeks of Moore's Law. But perhaps Moore's Law has already ended?

To, "no amount of concrete performance gains will allow them to have safety or correctness," I would add, "or durability". Modern software is not just a ghetto but a shantytown slum built from cardboard: it collapses after a few good rains.


You need a high-level relational projection system that can operate on multiple dimensions simultaneously; shaders or SQL are ideal for taking advantage of parallelism.

Especially if you can operate symbolically: think triple-nested loops where the inner loop is some expensive projection that ranges over a regular space and can be decomposed as a tree. I think this language could be represented as a Wasm DAG of functions: map/pmap/apply/tree_call/reduce.

Having a spec and a body of software to run but no test oracles for how that software runs feels contrived. Having megabytes or gigabytes of code but not even screenshots of how it should look when it runs isn't the same as having a book in a dead language and not knowing what it should sound like.

> However, the third cart was a metacircular interpreter, so in some sense it potentially represents a kind of reference implementation — just, one you couldn't use until your UM was running well enough to run it.

This is a great example of a test oracle. Fixed-point correctness.


Hmm, I understand a "test oracle" to be a thing where, if you submit any arbitrary input to it, it will tell you what the correct output should be. The metacircular interpreter isn't such a thing: you can submit an input to it, and if its output is different from the output of the implementation it's running on, you know that implementation has a bug.

Say my implementation is P and the metacircular interpreter (presumed correct) is M. If P[M[X]] outputs "foo", while P[X] outputs "bar", we know there is a bug in P. This is similar to a test oracle in some ways: given a test oracle O, if O[X] outputs "foo" and P[X] outputs "bar", we know P has a bug. The difference is that in the metacircular case, we can't tell which of the two outputs, if either, is actually correct.

Worse, if P[M[Y]] outputs "baz" and P[Y] also outputs "baz", it does not therefore follow that "baz" is the correct output, as it would if O[Y] outputs "baz". It might just be that P has a bug that flows through to the metacircular interpreter. Maybe it rounds division the wrong way, or incorrectly considers negative zero to be unequal to positive zero.

I agree that screenshots would be very useful, but a static repository of screenshots is also not a test oracle, for a slightly different reason: you cannot submit a novel input to it and find out what the correct screenshot would be.

I agree that a DAG of functions is a useful representation that preserves significant degrees of parallelism that conventional machine-code representations expunge. However, Wasm also expunges them.

The idea of using a relational rather than functional or imperative computing paradigm for archival is interesting.

We should probably discuss this in a more amenable format.


ShareX for you Windows folks. It's great. https://getsharex.com/


You can also configure ShareX to run Tesseract OCR locally on the images, making them searchable while keeping everything sound in terms of privacy. There is also a hack to compress PNGs so that the file size becomes next to nothing.


If you're on Windows 10 or later, there's also Win + G which will bring up the Xbox Game Bar. There you can capture screenshots and video or set up custom key shortcuts to start recording, etc.


Windows 10 and up also has Win-Shift-S, which brings up a screenshot UI similar to that found on macOS with the Command-Shift-5 shortcut.


Was going to post this. It is amazing.

But I think it will upload to imgur, so be sure to configure that off. I have it set to save to a local folder.

It also has great GIF-making support and can screen record to an .mp4.


ShareX looks great, I will check it out.

In Windows 10, you can also press Win+PrintScreen to save a numbered screenshot to your Pictures\Screenshots folder.


This seems like just what I need. Does anyone have recommendations for something similar for MacOS?


https://flameshot.org/ was posted above. Not sure if it does all that ShareX does.


The author correctly says that it takes a lot of time, but I find the VM experience on modern PCs to be very satisfying as long as you are emulating systems from more than 10 years ago.

On my M1 Mac I have well-functioning QEMU VMs for Mac OS 7, 9, 10.4, 10.11, Windows XP, Windows 11, and some Linuces. I don't use proprietary SaaS formats, so for all practical purposes I can read 99.9% of all the files I have ever created with little effort.


For a few months I ran a simple Python keylogger to study my keyboard usage, in response to pain. Much later I found the disk was filling up and traced it back to periodic screenshots, taken every few minutes or so. It was a creepy feeling but also kind of fun to step back in time and see what I had been doing. Would've been more fun if it were during the days when I did 3D modeling and pixel art.


Using tools, formats or workflows that you will lose access to in the future is the problem here. I have work going back decades and not much that I can think of that I couldn't resurrect in some form with relatively little effort.

Having screenshots of things that are otherwise lost would just make me sad. It's like preserving a single frame from a movie and losing the rest.


> It's like preserving a single frame from a movie and losing the rest.

I don’t think that’s necessarily true, because you are unlikely to ever rewatch the movie anyway (and then it won’t be as good as your recollection). The point is to be reminded of the fact the movie existed at a time when you wouldn’t ordinarily think of it.


Not exactly a screenshot, but I recently found a file from December 1991 that was a proprietary image file of the home screen of a local BBS I used to visit in high school. Was actually able to convert it to a PNG after some work. This might be my oldest file that I personally saved to disk. Now if only I had all those cassette tapes from my TRS-80.


We've automated this nicely for you on any public URL, and even some private ones that use cookies for access: https://VisualSitemaps.com

Lots of folks use us just for archiving purposes. We also have a nifty AI that compares visual changes in the screenshots.


Wish I'd discovered you sooner...


Everything changes.

Your access to a website could change or the server could go down or a website may get bought.

Good reason to keep taking screenshots.

There have been dozens of times I've been saved by a screenshot:

- insurance cards on my phone

- a random screenshot I took of a photo of my citizenship certificate saved me 2 weeks for travel

- names of people

- context during a call


I've found "print to PDF" to be great for web pages I want to keep, as well. You can then search, copy, print, etc.


For Windows folks, I’ve found OneNote’s screenshot tool convenient for quickly adding screenshots while taking notes.


I absolutely agree with you pcr9103303. I have been following this practice for 5 years and I have benefited immensely from it (recall, retention, and revision).

I have a dedicated screenshots root folder with sub folders that gets synchronized with the cloud and I go through them regularly.


Tropy is an application to turn photos into documents and organize the items via collections. It’s free and open source. My partner’s research involves collecting images and they find it works well.

https://tropy.org/


I make lots of screenshots.

The problem is organizing them ...

Perhaps if I ran them through OCR, it would be easier to grep through them.
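
One low-effort way to do that, as a sketch: assuming the tesseract CLI is installed and the screenshots live in a ./screenshots folder, write a sidecar .txt next to every PNG, and then plain grep works on the text.

    // Sketch: OCR every .png in a folder by shelling out to the tesseract CLI,
    // producing <name>.txt sidecar files that are easy to grep.
    use std::path::Path;
    use std::process::Command;

    fn ocr_folder(dir: &Path) -> std::io::Result<()> {
        for entry in std::fs::read_dir(dir)? {
            let path = entry?.path();
            if path.extension().and_then(|e| e.to_str()) == Some("png") {
                // tesseract writes <outbase>.txt; reuse the image path minus extension
                let outbase = path.with_extension("");
                let status = Command::new("tesseract").arg(&path).arg(&outbase).status()?;
                if !status.success() {
                    eprintln!("tesseract failed on {}", path.display());
                }
            }
        }
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        ocr_folder(Path::new("screenshots"))
    }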


Not just OCR, but object recognition too; it just has to be smart enough to read the window's title bar to tag the shot by the applications present in it, and by document name when one is visible.


Google Photos does it for you. Very handy: a few years ago I took a picture of the private wifi password backstage at a bar I used to work at, then lost the saved login when I swapped phones. Searched my photos for "Thunderbird" and boom... a picture of text saying "Thunderbird private WIFI password".


I'm a bit disappointed that this doesn't work in Apple Photos. The app does a fairly good job at recognizing photos and doing OCR, but you can't search the text.


I use Obsidian to manage my notes, and 90% of them contain images, most of the time screenshots [taken with Greenshot]. My killer duo at work: I never forget something that was shown to me during a Google Meet, or something that I read in a PowerPoint presentation.


My relationship with my screen has been transformed since I learned the OS X screenshot shortcuts: taking notes, writing documentation, etc., are all a breeze since muscle memory captured Shift+Cmd+4, with Ctrl+click giving you copy-to-clipboard.


> I can open them in a modern word processor, and they look similar – but it’s not the same. Some of the fonts and graphics are missing, and I don’t know where I’d find replacements.

Why not convert your documents to PDF? That'll preserve the layout.


One of the things I regret most is not keeping backups of the programs I wrote as a child almost 25 years ago. They're all just gone, tens of thousands of lines of code. I remember spending whole summers coding, and it's all gone.


Zoom meetings have been great for screenshot potential. Previously if someone had an interesting slide at a conference you would have to zoom way in with your phone and hope you got it with enough resolution from your seat.


I keep many old screenshots around. A lot from my "eye candy phase" are a mix of hilarious and cringe to look at. But it's also really amusing to see how my workflow and setup have evolved over the years.


I'm not even allowed to save an email on my work computer. I can still copy and paste, but I can't save. I always wonder how much longer I'll be allowed to screenshot an app by default.


Open question → What's your favorite screenshotting tool?

Mine's CloudShot. I can upload some screengrabs straightaway to Google Drive. You have the option to configure keyboard shortcuts to do that.


I wrote my own PowerShell script for taking screenshots periodically:

https://gist.github.com/soruly/889d926fcd4b2b8e8dc113a100b9d...


Supercool.

If you wanted to add functionality to upload to Google Drive like CloudShot does, what would you need to write?


Is there any program, maybe using ShareX, to just automatically and periodically take a screenshot in the background and save it to a folder?


Having worked in digital archiving, I wholeheartedly approve of this suggestion. Digital artifacts degrade alarmingly quickly.


What is your software pipeline to manage screenshots? If we could:

- bucket and tag them

- turn them into tasks

- do follow-ups


My technological life is the antithesis of this post. Screenshots are hosed as soon as I use them or send them.


Any thoughts on how to manage the screenshots? Because I have thousands and it's kinda hard to browse them.


I still have files I made from the 90s... but my guess is that they won't be around forever.


I'd say blog about your stuff with the screenshots/screencasts. I have one that's 12 years old, of an iPhone game I made: https://jeena.net/monster-tiles


macOS makes it really easy to record the screen: Cmd-Shift-5 brings up the screenshot app. But converting the recordings to GIFs or other video formats is not possible unless you use third-party apps, in which case you might as well just use a better screenshot app that does that for you.

What I wish we could record, however, is the system interactions (kinda like how you record games inside the game itself). It wouldn't record a video, but rather your mouse movements and keyboard inputs, along with the location of apps and windows and their state. It would take more space, but it would be more useful in case you want to go back and run counterfactuals.


Here you go.

https://imgur.com/a/BARUMWQ

EDIT: Ah, I realise now this achieves the same thing. But it does record video, so I'm not sure what's missing other than a visual representation of keyboard input.


My gripe with macOS is that it creates huge files: a few MB for a screenshot when 250 KB is often enough, and probably 10 MB per minute of video for a small area, which makes it immediately impossible to send to customers.


You can configure the native screenshot app to create JPEGs (smaller), but I think you lose transparency.


He was talking about videos, and he's right: recorded videos are huuuge.


Screenshot history:

1988 - Emacs 18.52

1992 - Emacs 18.59

1994 - Emacs 19.7

1996 - Emacs 19.7

1998 - Vim ????

2002 - Emacs 21.1

2006 - Emacs 21.1

2012 - Emacs 24.1

2018 - Emacs 26.1

2022 - Emacs 28.1


I take screenshots of the Windows update with an iPhone.

A spinner shows with the words Working on updates. 100% complete. Don't turn off your computer

Windows 11 is ready—and it's free! Get the latest version of Windows with a new look, new features, and enhanced security. [Download and install] [Stay on Windows 10 for now] Checking for updates ...



