Ask HN: Is there any software you made only for your own use and nobody else's?
246 points by Crazyontap 16 days ago | 468 comments
Just wondering if any of you have created software that is meant for your use only and will never see the light of day for anyone else.

What does it do? How did you make it? How much time did you spend making it? How often do you use it?




I built a hydroponic garden as a covid hobby. I wrote software to maintain the garden: water it on schedule, apply pH changes to the water, turn lights on/off, humidify, as well as monitor statistics (temperature, humidity, water temperature, water pH, water conductivity).

Rough guess would be that I spent 50 hours actually working on the software.

There's a handful of Raspberry Pis involved. I wrote everything in Elixir and used https://nerves-project.org. The dashboard is written with Phoenix LiveView. One of the Raspberry Pis is the "brain" and basically runs the dashboard and controls devices. The devices are all in an Elixir cluster. I also run TimescaleDB for some basic history of metrics.

Once I start a grow I don't use it that much actively, but it passively runs all the time. I check in every few days or week to make sure nutrients are looking good.

I've grown strawberries, lettuce, jalapenos, and cayenne peppers.


What do you use for valve automation? Is it a commercial or custom part? I've been wanting to do the same myself, but I hear there are all sorts of quality issues with automatic valves.


I originally bought some solenoid valves to experiment with, but ended up simplifying my approach. I use a submersible pump plugged into a smart outlet, so I can just automate turning power to that outlet on and off (I have two TP-Link / Kasa HS300 strips). The nutrients / water are in a tank below the tray of plants, so when the power is "off", gravity brings the water back through the pump into the tank again.
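
Not the parent's code, but the whole flood-and-drain control loop is small enough to sketch. A rough Python version of the idea, where set_outlet is a hypothetical stand-in for whatever flips the smart plug (e.g. the python-kasa library for the HS300):

    import time
    from datetime import datetime

    FLOOD_MINUTES = 15              # pump on: tray floods
    DRAIN_MINUTES = 45              # pump off: gravity drains tray into tank
    LIGHTS_ON_HOURS = range(6, 22)  # only flood while the lights are on

    def set_outlet(on: bool) -> None:
        # Hypothetical: replace with a call to your smart plug / relay API.
        print(f"{datetime.now():%H:%M} pump {'ON' if on else 'OFF'}")

    while True:
        if datetime.now().hour in LIGHTS_ON_HOURS:
            set_outlet(True)
            time.sleep(FLOOD_MINUTES * 60)
        set_outlet(False)
        time.sleep(DRAIN_MINUTES * 60)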


What a clever and simple solution, I love it. I have been thinking about such a grow system for a long time for my chilli plants, and after having had a few split solenoid valves on a different irrigation project I was very hesitant. Thanks for the inspiration.


It's a relatively common setup known as Ebb and Flow (Flood and Drain).

Not OP but I use motorized ball valves from Amazon [1] which are hooked up to a four way water hose manifold to create four different zones. It's wired up to an ESP32 that controls them with relays via GPIO, using the ESPHome sprinkler controller module (which does pretty much everything OP's code does). I've never had a problem with them and the last time I even touched them was over a year ago. They're pricey but you can DIY them.

The usual sprinkler valves at hardware stores need quite a bit of water pressure to change state which is probably what most people have a problem with, especially if they're trying to feed them with the kind of pumps they get at hydroponic stores.

[1] https://www.amazon.com/Motorized-Stainless-Electrical-U-S-So...


Not OP, but I made one of these for my partner's and my bonsai garden.

I use standard 3/4” sprinkler valves from the big box stores, connected to a manifold via unions on each side. This enables me to swap if needed, but these are ruggedized and will last a while. They do take 12VAC, so you need a transformer and use relays to turn them on, but they work very well.


Any chance of making your code public? I'm thinking of dabbling in aquaponics in the near future, and what you've built sounds almost exactly like what I would end up working on myself.


This sounds like a great project to reconnect one with growing food… with some perks.

Appreciate you sharing; it helps to see others are thinking about it too.

Is there a reason you went with hydroponics vs aeroponics?


I know next to nothing about aeroponics - I did read about aquaponics a bit when I started, but hydroponics seemed the most accessible for me.

I've loved making my own crushed red pepper. And there's something fun about growing plants in the middle of a cold snowy winter in the basement.


Thanks, appreciate it. Makes sense to me.


In practice aeroponics is very fragile. All it takes is a failure for a few hours for roots to irrecoverably dry out and kill the plants. Most hydroponic methods give you a safety margin of days or even weeks.


Thanks for the insight, I'll be sure to include hydroponics in my reading :)


Any chance we can see the code? I'd love to do some work in Nerves and that sounds like the perfect project.


Did you publish this project? It sounds interesting just to see the basics of Nerves in a real, small thing.


That is AWESOME! I would love to see what you did for this. Even more I would love to build the same.


I agree, a blog post or other description would be great and really inspiring for my own projects.


How does it automate pH changes to the water?


I use two pumps from Atlas Scientific - one for a jar of base and one for a jar of acid. I have a sensor for pH so I can see it in the live dashboard, and I can click a button on the dashboard to dispense a set number of ml into the tank. I should have been clearer - pH is the one thing I didn't "close the loop on", because it'd be a little volatile. For instance, when you first add nutrients to the water the pH drops steeply but stabilizes over hours or a day, and I didn't want to respond too constantly to those changes. I should spend more time on this aspect though, and maybe just have notifications for when it makes decisions.
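
Not the parent's code, but here is a rough Python sketch of what cautiously closing that loop might look like: a deadband around the target plus a cap on dose size and a minimum settling time between doses, so it doesn't chase the transient swing right after adding nutrients. dispense_ml is a hypothetical stand-in for the actual pump command.

    import time

    TARGET_PH = 6.0
    DEADBAND = 0.3          # do nothing within +/- 0.3 of target
    MAX_DOSE_ML = 2.0       # never dose more than this at once
    SETTLE_SECONDS = 3600   # wait at least an hour between doses

    def dispense_ml(pump: str, ml: float) -> None:
        # Hypothetical: send the dose command to the "acid" or "base" pump.
        print(f"dosing {ml:.1f} ml from the {pump} pump")

    def correct_ph(current_ph: float, last_dose_at: float) -> float:
        """Maybe dose; return the (possibly updated) last-dose timestamp."""
        error = current_ph - TARGET_PH
        if abs(error) < DEADBAND or time.time() - last_dose_at < SETTLE_SECONDS:
            return last_dose_at
        dispense_ml("acid" if error > 0 else "base", min(MAX_DOSE_ML, abs(error)))
        return time.time()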


Not OP but I use aquarium peristaltic dosing pumps that pump General Hydroponics pH Up/Down solution, controlled by a Raspberry Pi with an Atlas Scientific pH sensor.


Reminds me of a classic story that makes a good programming parable:

Back in 2011, my girlfriend was working at a catering company that announced shifts via a webpage and workers had to sign up for them. Other workers tended to pick them up very quickly, so it was hard to get too many shifts.

I wrote a quick web scraper to automatically accept any shift offered and email her.

For a couple weeks it was great, suddenly she had all the work she needed.

Then one day she woke up late to find a voicemail telling her she was fired.

Earlier that morning the script had detected a last minute job announced just an hour before start time and immediately accepted it, resulting in her not showing up to it. I had not accounted for the possibility they would announce a job so last minute, since it had never happened before.


As they say, Unix gives you enough rope to shoot yourself in the foot.


Great story. Did they fire her for that one time she didn't show up? That's tough.

This was in 2011, just after the Great Recession. It was a tougher labor market then. It was normal for low skill jobs to fire people on the spot for no show / no call. The labor market has changed in that sector since then.

How'd she take it?


Well, while we obviously were both upset and sad about the situation, she was very understanding, for which I am thankful. We were in it together. It helped that she managed to find a slightly better job pretty shortly thereafter. Nevertheless I felt really guilty about it and as a result the obvious relevant professional programming lesson was instantly and indelibly burned into my brain: you can fix bugs, but you may never be able to undo the real world user level consequences of the buggy software's time in production before being fixed. I was just out of college then and it solidified more respect for the power of the machine.


I've favorited your top-level comment. This is an incredibly important lesson to learn, one that I wish I learned earlier in life, but your favorited comment will be a reminder for me in the future.

I'm deeply gratified. Thank you.

Almost all of the software I make in my free time is for my own use.

My most used tool is a note-taking web app that automatically saves markdown files to S3. The most important feature I added was a little button that automatically makes a note for the current day and puts it in the daily notes folder, all sorted and organized. I also have full text search, which is very helpful for finding old notes.
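
That daily-note button is a nice trick; for anyone who wants the same thing, a minimal sketch with boto3 (the bucket name and folder layout are made up, not the commenter's):

    from datetime import date
    import boto3

    BUCKET = "my-notes-bucket"   # assumption
    PREFIX = "daily"             # assumption: the "daily notes" folder

    def create_daily_note() -> str:
        s3 = boto3.client("s3")
        key = f"{PREFIX}/{date.today():%Y/%m}/{date.today():%Y-%m-%d}.md"
        # Only create it if it doesn't exist yet, so clicking twice is harmless.
        if s3.list_objects_v2(Bucket=BUCKET, Prefix=key).get("KeyCount", 0) == 0:
            s3.put_object(Bucket=BUCKET, Key=key,
                          Body=f"# {date.today():%Y-%m-%d}\n\n".encode())
        return key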

I also use an RSS reader I made. Instead of just showing you the text of the page like a standard RSS reader, it proxies the web page so you can read the article directly from the website, but with the added benefit that since it's a proxy, I can control all the HTML, CSS, and Javascript on that page.

Another daily one I use is a random background music playlist for Spotify that is auto generated daily. Using the Spotify API you can find random music, then find random instrumental music from that random music. I use this to discover new songs to listen to while working.

Basically, making your own software is fun. Making production software is much less fun. I don't need to worry about a million things when I make my own software. Sure, I make $0, but if I spent months making, for example, my notes app production ready, and tried marketing it, I'm guessing I'd still be sitting at $0.


> Using the Spotify API you can find random music, then find random instrumental music from that random music.

For this, I made a Spotify playlist with 41 hours of instrumental soundtrack music that helps me focus. It's not random like yours, but with that many hours, there's enough variation if you put it on shuffle.

It's mostly epic and uplifting movie scores. Or suspenseful and building up to something... none of that 8-bit video game beep boop shrill stuff.

https://open.spotify.com/playlist/31buZEaVGW9f5Y4cEcKtbt?si=...


Since your #1 song on this playlist is a Star Wars track, there's a fun little easter egg in a Spotify client. The progress bar is replaced with a lightsaber and you can change its handle by clicking on it.


Couldn't find it...


Added to my Spotify. Thank you.


How does the full text search work?

On-device copy of all the notes? Text index in another database? Just download them all from s3 when you search?


Text gets indexed in another database every time a note gets saved/deleted/created. There might be better solutions with AWS Athena, but using a simple MySQL database was by far the easiest (and quickest in terms of querying) way to add full text search. My database is billed by usage, so I don't have to spend much money, if any, to index my notes.
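
For anyone wanting to copy this, the MySQL side really is small. A sketch with the mysql-connector-python driver and made-up table/column names (not the commenter's schema):

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="notes",
                                   password="...", database="notes")
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS notes (
            path VARCHAR(512) PRIMARY KEY,
            body MEDIUMTEXT,
            FULLTEXT KEY ft_body (body)
        )""")

    def search(term: str) -> list:
        # MATCH ... AGAINST uses the FULLTEXT index declared above.
        cur.execute("SELECT path FROM notes "
                    "WHERE MATCH(body) AGAINST (%s IN NATURAL LANGUAGE MODE)",
                    (term,))
        return [row[0] for row in cur.fetchall()]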


> My most used tool is a note-taking web app that automatically saves markdown files to S3. The most important feature I added was a little button that automatically makes a note for the current day and puts it in the daily notes folder, all sorted and organized. I also have full text search, which is very helpful for finding old notes.

I have a very similar thing, mine uses a dropbox folder as a backend so I can easily browse on laptop and use whatever. I like the daily notes idea though!


Sounds like you'll be better served by obsidian. The free plan covers all those use cases and more


Obsidian came out in 2020 and I built my version before 2017. Also, the initial version was on Windows Phone and I migrated to Android later.

Did you read the title of this thread?


Yes I did. So?


I’m stealing the Spotify random instrumental music idea! Why can't I come up with similar ideas on my own? That's the hardest part, I think…


I actually stole the idea from a guy named Max Hawkins who made a Spotify Daily Random playlist. I made it to be just instrumental music since I wanted it for working.

https://maxhawkins.me/


> Using the Spotify API you can find random music, [...]

Which API endpoint do you use for this?


There is no endpoint for random tracks. The method I found that works best is to search for two random letters, e.g. "fq", pick a random result, and then use the API to find recommendations based on that track that have a minimum instrumentalness of, say, 0.9, then pick a random recommendation from the results. I store all the songs that have been added to the playlist in a DynamoDB database and make sure no song gets added twice.
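
For reference, a rough sketch of that flow against the Spotify Web API with plain requests (token handling and the DynamoDB dedup are omitted; the parameter names are as I understand the API, not the commenter's code):

    import random
    import string
    import requests

    API = "https://api.spotify.com/v1"

    def random_instrumental_track(token: str) -> dict:
        headers = {"Authorization": f"Bearer {token}"}
        # 1. Search for two random letters and pick a random hit.
        query = "".join(random.choices(string.ascii_lowercase, k=2))
        hits = requests.get(f"{API}/search", headers=headers,
                            params={"q": query, "type": "track", "limit": 50}
                            ).json()["tracks"]["items"]
        seed = random.choice(hits)
        # 2. Ask for recommendations seeded on that track, instrumental only.
        recs = requests.get(f"{API}/recommendations", headers=headers,
                            params={"seed_tracks": seed["id"],
                                    "min_instrumentalness": 0.9,
                                    "limit": 50}).json()["tracks"]
        return random.choice(recs)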


Yes. I made two recently.

- TalkToYoutuber[1]: Download the transcripts from a YouTube channel and hook it up to GPT-4 with RAG to let me "talk" to YouTubers. There's a bunch of YouTubers who have useful knowledge to share, but no blog or wiki, so semantic search over their video transcripts is the next best thing.

- YoutubeThumbnailSearch[2]: Embed all of a YouTube channel's thumbnails using CLIP and search them using text (a rough sketch is below). I often need to search through news channels with 10k+ videos, often in foreign languages, so not having to rely on the title or transcript but on the video thumbnail helps. This is a much more niche use case tbf.

I am thinking of making a scaled down version of [1] so I can "talk" to long videos, like conference speeches or university lectures. Should take an hour or two to cook it up since it will reuse most of the code from [1].

---

[1] https://github.com/FardinAhsan146/TalkToYoutuber

[2] https://github.com/FardinAhsan146/YoutubeThumbnailSearch
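
The CLIP part of [2] is roughly this shape; a sketch using the sentence-transformers CLIP wrapper rather than the repo's actual code (the directory name and model choice are assumptions):

    from pathlib import Path
    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")

    # Embed every downloaded thumbnail once, keeping the path order.
    paths = sorted(Path("thumbnails").glob("*.jpg"))
    image_embs = model.encode([Image.open(p) for p in paths],
                              convert_to_tensor=True)

    def search(query: str, top_k: int = 10):
        text_emb = model.encode(query, convert_to_tensor=True)
        scores = util.cos_sim(text_emb, image_embs)[0]
        top = scores.topk(min(top_k, len(paths)))
        return [(paths[i], float(s)) for s, i in zip(top.values, top.indices)]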


Something like talktoyoutuber would also be very useful for certain discord servers. They'll have a lot of knowledge on something but will gatekeep it with "just use the search." and they'll refuse to build a wiki or really engage in any kind of organization.


I can cook it up if it would actually get some users.

Ultimately, I want to make a tool where you can plug in any arbitrary document store/scraper to an LLM with RAG. But I think we are still not there yet in terms of all purpose scrapers.


> I am thinking of making a scaled down version of [1] so I can "talk" to long videos, like conference speeches or university lectures. Should take an hour or two to cook it up since it will reuse most of the code from [1].

This would be useful to me as well. If you do end up making it, could you reference it on your existing GitHub?


Yeah, I'll add a reference to it. Are you watching the repo?

This is a really cool project. You should send it to people at YouTube, get a job there, and implement this in their sidebar...


The projects are dead simple. I think if YouTube wanted to, they would have implemented this long ago. It's probably a legal landmine and financial tar pit. But thanks nevertheless.


You’d be shocked at how hard it is to fund, start, and ship even a basic project at a company the size of YT or anything else in the Google/Alphabetaverse. It’s probably also a legal landmine and all that, but organizational friction’s more than sufficient to explain simple useful features failing to ship.


Don't forget how Google incentivizes projects, one gets promoted for shipping the project, not for maintaining it. If indeed this were shipped at YouTube, I wouldn't be surprised if it gets slowly deprecated or outright shuttered in a few years. It's better to keep it open source for everyone.

Over the past two years I've been working on tooling that allows me to delink programs back into object files. What started out as a bunch of Jython scripts is nowadays a full-blown Ghidra extension that can export working object files from a program selection in two mouse clicks. I'm using it as part of a video game decompilation project, but it also enables a whole bunch of other use cases I've documented on my blog.

It's not that it is meant for my use only (any capable reverse-engineer familiar with Ghidra should be able to pick it up and use it), nor that it will never see the light of day (it's open-source). However, it is such an esoteric capability, and outright heresy according to computer science, that I'm having a hard time just finding people who can wrap their heads around the concept, let alone people who would actually want to use this. Simply put, it's so far out there that I don't even know where or how I could even advertise it in the first place, which makes it de facto for my own use only.

A couple of people did end up reaching out to me in the last couple of weeks about it, so it might be on the cusp of sprouting a very tiny user base. Still, I've made it for my own use and until very recently I've never expected anybody else would use it.

If someone wants to check out the dark magic: https://github.com/boricj/ghidra-delinker-extension (disclaimer: might give nightmares to linker developers).


Something like that really deserves to be written in C so you don't need to install Ghidra to use it. The way I imagine it working is if I want the function `foo` then I'd say `objsuck -ffoo -ofoo.o prog.elf` and it'd look at the elf symbol table to find the `.size` of `foo` and copy that symbol into the .o. I guess you would then need to use xed to disassemble the opcodes to see what other symbols it jumps into or calls, and grab those too, along with any memory references, and then emit relocations. Overall I support this, since it'd be the easiest way to expropriate content from open source codebases whose source code is too byzantine to let me extract one teensy tiny little feature without the bloat. To me this is perfectly normal. You'd also be smart to only support COFF if the person uploads the binary to a hosted service you control and pays you money. In fact this would be even better if it could generate .s files from the object content, so no one would get triggered by binaries.
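
The easy first step of that hypothetical objsuck - pulling one symbol's bytes out of an ELF via the symbol table's st_value/st_size - can be sketched in Python with pyelftools (a sketch, not a real tool; the relocation-recreation part discussed in the replies is where the actual work lives):

    from elftools.elf.elffile import ELFFile  # pip install pyelftools

    def extract_symbol_bytes(path: str, name: str) -> bytes:
        """Return the raw bytes backing symbol `name` in an ELF binary."""
        with open(path, "rb") as f:
            elf = ELFFile(f)
            symtab = elf.get_section_by_name(".symtab")
            if symtab is None:
                raise ValueError("stripped binary: no symbol table")
            matches = symtab.get_symbol_by_name(name)
            if not matches:
                raise KeyError(name)
            sym = matches[0]
            section = elf.get_section(sym["st_shndx"])
            offset = sym["st_value"] - section["sh_addr"]
            return section.data()[offset:offset + sym["st_size"]]

    # e.g. extract_symbol_bytes("prog.elf", "foo")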


> Something like that really deserves to be written in C so you don't need to install Ghidra to use it.

I think there's the Witchcraft Compiler Collection if you want a freestanding option [1], although I haven't looked into it too closely.

The problem is that object files are made up of section bytes, a symbol table and a relocation table. You can't just rip bytes out of a program and call it a day, you need to recreate all that information in order to actually delink code into relocatable object files.

Doing that isn't a trivial problem, it requires a lot of analysis and metadata, especially if you don't have debugging symbols or symbols at all. Leveraging Ghidra allows me to concentrate on the specifics of delinking, which can get very tricky depending on the platform (MIPS in particular is a nightmare to deal with).

I'm also trying to solve delinking in general and not just for one platform/ISA pair, so reinventing the wheel for every architecture out there is a nonstarter in that context.

[1] https://github.com/endrazine/wcc


It's trivial if you can compile the binary with `cc -static -r` and you don't need to extract dwarf data too. If relocations are stripped, you're fine so long as you can count on st_size being present (i.e. doesn't contain handwritten assembly) and you're able to parse the machine language. If symbols are stripped, then it simply can't be solved. It's just reverse engineering at that point.


If you reduce the problem statement to an ELF x86 program written in C, with a symbol table and a complete relocation table (not just the one you get when dynamically linking), then sure it's trivial, you have almost all the information you need to make an object file (issues like switch jump tables can still crop up in that case). If you don't have that relocation table from `cc -r` however, you'll run into problems.

Without this relocation table on hand, you'll have to recreate it in order to make the section bytes relocatable again. This means analyzing code/data and identifying relocation spots, like you've said. But that `0x00400000` integer constant within an instruction or a word data, is it referring to the function at that address or is it the TOSTOP flag value? Who knows, but each time you get it wrong you'll corrupt four bytes in that object file.

I'm dealing with one rather gnarly scenario, which is a PlayStation video game without any source code, symbols [1] or linker map, just a bag of bytes in an a.out-like format. The MIPS architecture also happens to be an absolute nightmare to delink code from (one of the many pitfalls for example is the interaction between HI16/LO16 relocation pairs, branch delay slots and linkers with a peephole optimizer).

I've been at it for two years and I've only recently managed to pull it off on the entire game code [2]. Writing out the object file when you have the program bytes, the symbol table and the relocation tables is the easy part. Writing an analyzer that recreates a missing relocation table for the 80% of easy cases isn't too difficult. Squashing out the remaining 20% of edge cases is hard. All it takes is one mistake that affects a code path that's taken for some very exotic undefined behavior to occur in the delinked code.

Delinking with a missing relocation table (and without manually annotating the relocation spots yourself) is a thing that looks easy at first glance, but is deceptively hard to nail all of the edge cases. I'd gladly be proven wrong, but if you do have the full, original relocation table on hand then you're cheating with `cc -r` on code you just built yourself. Almost no real-world artifact spotted in the wild one would care about is ever built with that flag.

[1] I did end up recovering lots of data out of a leftover debugging symbols file from an early build later on, but that's a story for another time.

[2] Note that I'm working on top of a Ghidra database that contains symbols, type definitions and references, so the bulk of analysis is actually performed upstream of my tooling. Even then, the MIPS relocation synthesizer is a thousand lines of absolute eldritch horrors, but I do acknowledge that the x86 relocation synthesizer I have is quite tame in comparison.


> the MIPS relocation synthesizer is a thousand lines of absolute eldritch horrors

Wow I only really know amd64, arm64, and i8086. What is it about MIPS that makes it so evil?


That would warrant an entire blog post to describe all the pitfalls [1], but I'll condense it down to the highlights.

On MIPS, loading a pointer is classically done in two instructions, a LUI and an ADDIU, which forms a HI16/LO16 relocation pair I need to identify precisely in order to delink code. I'm using Ghidra's references in my analyzers, but these are attached to only one instruction operand, typically a register or an immediate constant.
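
(To make that concrete, here is a small illustration of my own, not the parent's code: how one 32-bit address gets split across the LUI/ADDIU immediates. Because the low half is sign-extended, the two halfwords often don't literally contain the address bits, which is one reason you can't just pattern-match for them.)

    def split_hi_lo(addr: int) -> tuple:
        """Split a 32-bit address into MIPS %hi/%lo immediates (LUI/ADDIU)."""
        lo = addr & 0xFFFF
        if lo >= 0x8000:          # ADDIU sign-extends its 16-bit immediate,
            lo -= 0x10000         # so %lo may be negative...
        hi = (addr - lo) >> 16    # ...and %hi compensates by +1.
        return hi & 0xFFFF, lo & 0xFFFF

    # split_hi_lo(0x0040F234) -> (0x0041, 0xF234): the upper immediate is
    # 0x0041, not 0x0040, so neither instruction holds the raw address bits.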

So my MIPS analyzer has to traverse the graph of register dependencies for a reference within an instruction and find which two instructions are the relocation pair. It's trickier than it sounds because references can have an addend that's baked in the immediate constants (so we can't just search for the pattern of the address bits inside the instructions) and complex memory access patterns inside large functions can create large graphs (ADDU in particular generates two branches to analyze, one per source register). It's bad enough that I have one method inside my analyzer in particular that is recursive and takes six arguments, four of which rotate right one step at each iteration.

But that graph traversal can't be done in reverse program order, because there are instruction patterns that can terminate the graph traversal too early with the right mix of branches, instruction sequencing and register reuse. I've had to integrate code flow analysis to figure out which parent code block has to be actually considered during the register graph traversal.

But the most evil horror is the branch delay slot. One particular peephole optimizer consists of vacuuming up a branch target instruction inside a branch delay slot and shift the branch target one instruction forward, which effectively shortens the execution flow by one instruction. It also duplicates the instruction, which is catastrophic if it had a HI16 relocation because now we have LO16 relocations with multiple HI16 parents, which can't be represented by object file formats. I have to detect and undo that optimization on the fly by shifting the branch targets one instruction back, which I accomplish by adjusting the relocation addends for the branches.

I've only written relocation analyzers for x86 and MIPS so far. I don't know what other horrors are lurking inside other architectures, but I expect that all RISC architectures with split relocations will require some form of that register graph traversal and code flow analysis [2]. What I do know is that my MIPS relocation analyzer [3] is probably the most algorithmically complex piece of code I've ever written so far, one that I've rewritten a half-dozen times over two years due to all the edge cases that kept popping up. I also had to create an extensive regression test suite to keep the analyzer from breaking down in subtle ways every time I need to modify it. I expect that there are still edge cases to fix in there that I haven't encountered yet.

[1] I've written about some of them here, but it's far from the whole story: https://boricj.net/tenchu1/2024/05/15/part-10.html

[2] That piece of code is split off in its own Java class: https://github.com/boricj/ghidra-delinker-extension/blob/mas...

[3] In case you're curious: https://github.com/boricj/ghidra-delinker-extension/blob/mas... (remember that the register graph and code flow bits are split off inside another class)


> Something like that really deserves to be written in C so you don't need to install Ghidra to use it.

By that logic, it should be written in JS so you don't need to install a C build toolchain for it.


I know a linker developer, I'm going to send this to him :)

Not sure about heresy according to computer science. Sure, it's not intended, but it's a very clever thing to be able to do.


Delinking by itself isn't a heresy (no more than disassembly or decompilation), but what I do with it definitely is. Ripping out MIPS code from a PlayStation video game and shoving it into a Linux program, dismembering an x86 Linux program and turning it into a native Windows program...

It's when you get creative and throw ABIs out the window in order to create some cursed chimeras that this really becomes heresy.


One person's heresy is another person's sickos.png. This sounds exactly like my favorite sort of object code vivisection.


Technology like this will create huge selection pressures against desktop apps if it becomes easy for people to reverse engineer, remove payment mechanisms, and then freely distribute. Wouldn’t you think?


Delinking isn't a miracle technology in that regard, you still need to put in the work to reverse-engineer the artifact.

It does allow a couple of nifty tricks, like pervasive binary patching (if the program is chunked into relocatable object files, then you're no longer constrained by the original program memory layout when patching/replacing stuff). It's also useful for decompilation projects, where you can reimplement a program one piece at a time until you no longer have binary pieces left and still create a fully working program at each step (you don't even need perfectly matching decompilation since the linker will mend stuff back together anyway).


Freely redistribute, I don't know. If a program can extract code from another one, a program can detect that code. It looks similar to virus signatures. A company with some IP would run the detector on the software of competitors.


The tools and know-how to remove payment mechanisms from binaries have existed basically as long as binaries themselves have.


I’ve developed several webapps to help our family and personal life:

* Vehicle Maintenance Tracker: Logs all vehicle maintenance and sends reminders based on time or mileage for upcoming tasks. This one is probably one of the most used and useful of everything I have made.

* Lend/Borrow Tracker: Keeps track of items we've lent or borrowed, to/from whom, and when. We use it most for books, but it can be used for anything.

* Library Book Manager: Lists all the books we checked out from local libraries, displaying their cover images and ISBNs. Using a barcode scanner, we can quickly "check in" books on return day, accompanied by an adorable "ding ding!" sound recorded by one of my kids. Really beats manually searching the paper printout. It can be used on multiple devices at the same time - the checkin status of each book is synced live between all users, so each kid can find and check in their own books.

* School/Home Lunch Manager: Displays the school lunch menu alongside each of my kids’ names. We mark school/home lunch days at the start of the week for easy morning prep.

* Sickness Documentation: Tracks sickness details like symptoms, medication, temperatures, and timelines, helping us coordinate how we are going to take care of the kids when they are sick.

These apps, along with a few others not listed here, run on a server in my home, accessible only within our network (at home or via a VPN).


Very interesting! We’ve always tracked medications on paper, but would love the option to add in reminders, etc. Would you mind sharing any of these projects if you have them on GitLab?

Yep.

I wrote bard, a Lisp that combines features of Dylan, Common Lisp, and pure-functional languages purely as a long-running experiment in what I could do to make programming more fun for myself. I subsequently used two versions of it in client projects (to prototype some solutions and to compile them to data used by the delivered product; nothing delivered to the clients depended on my weird Lisp dialect).

I wrote a tool called nym to help generate names for characters and other things in games and stories. It's enough fun in itself that I sometimes just play with it.

I wrote another tool called model-namegen to generate names with stricter constraints for science-fiction stories. Those names are mapped uniquely and reversibly to 64-bit integers used in the story to generate names for machine intelligences. I also wrote a prototype for a 3D multiplayer game called The Fabric set in the world of those stories, and wrote code to generate clusters of colored cubes for use as avatars representing the AIs. Each character's unique ID is mapped to a unique name and also to a unique 8x8x8 configuration of lighted cubes. An example name is Ixion Eleven Chrysotile, the name of the autonomous robot who is the narrator of my Fabric stories.

Following is a link to a very short movie of the avatar for the character Miriam Five Bittersweet Earth: https://evins.net/downloads/miriam-5-bittersweet-earth-2-2.m...

I wrote a couple of versions of a library for Common Lisp to make certain functional programming idioms I like more convenient for me in Lisp. They're called folio and folio2.

I wrote a library called taps that similarly makes use of Richard Waters' SERIES package for Common Lisp a bit more convenient for me. Actually, taps started as part of folio and I ended up wanting to use it on its own often enough that I broke it out separately.

I've written several little tools for calculating and keeping track of information that's useful for friends-and-family gameplay.

I've written several other libraries and applications purely for the purpose of exploring some idea or other that interests me. For example, beasties was a pond-life simulation that I used as a playground for recreational genetic programming. Panels was an experiment in user interface that used a constraint solver for laying out windows and widgets, and an append-only log of all UI events together with a query engine and a knowledgebase. Panels could display graphic representations of recent user activity and retroactively define mappings from sequences of events to user-defined handlers. It operated on the principle that everything detectable that a user did potentially had some purpose, even if we don't know what it is, so we should remember all the user's actions and offer them the opportunity to define what they meant.

I've also written a few little things primarily for the purpose of helping friends and family with this or that task.

I've spent a lot of time over the years on things like these. As you would expect, they've taken varying amounts of time and I don't really keep track of how much. I use some of them (e.g. taps and other pieces of folio) almost every day, and others intermittently. A few of them (such as beasties and panels) I haven't really touched in years now.


I have a 3-letter domain that I use as my personal playground (think "xyz.com"). It features:

- One-click anonymous upload of text, clipboard, and files. So from any computer, I can visit xyz.com + ctrl+v to upload the clipboard, write notes to myself, drag and drop a file, etc. It's a risky feature, but there's nothing to attract attention and you don't even get a URL back.

- Authenticated sessions see a list of uploaded files, with buttons to copy link, download, delete, or generate QR code. There's also real-time notifications for each upload.

- Simplified RSS feed reader[1]. It scans my feeds every 10 minutes and displays the URLs of new items. It has one button that opens all URLs in new tabs and marks them as read.

- A Server-sent-events channel[2], and a UI that displays visitors in real time. The channel allows me to send redirects/file downloads to any visitor, so I'm constantly asking people to visit xyz.com when I need to hand them something digitally. Visitors are color coded with randomized backgrounds for identification.

- Youtube video downloader (powered by yt-dlp), along with audio extractor and pitch+speed correction for songs.

- Authentication is by approval from an already authenticated session (usually my phone). In the worst case I can edit the text file via SSH.

I add or remove widgets according to my needs. At one point I had chat integration with Facebook+WhatsApp+Telegram (got me banned from WhatsApp), radio player (replaced with Spotify), and even balances by scraping my bank accounts, credit cards, and Steam (removed after a bank started asking questions).

The whole thing runs on a custom pair of Go and Python servers, developed slowly over the last ten years, and I use it multiple times a day. Feels like a digital tree house: messy but fun and useful.

[1]: https://github.com/boppreh/feeder

[2]: https://github.com/boppreh/server-sent-events

Edit: people have found it and started spamming uploads of "HELLO FROM HN". One IP looks Swedish. Well, Hi! But I'm taking it down for a few days to avoid tempting more people.

And shoutout to the Go team, that server has been rock solid and 100% backwards compatible over the past ten years.


This is like 90% of what I need to set up on a server for my personal usage. The issue is I don't do front-end dev at all, so I never really start working on it. Any chance you've open-sourced it? (Or if not open-sourced but you're willing to privately share the code, you can reach me at hn [at] manoz [dot] co)


I sent you an email with the code and instructions. I won't share it openly because I don't want to tempt people into finding security flaws for fun.

It's definitely not code that I'm proud of, but it's been serving me well for many years, and might help you get started.


> ..., along with audio extractor and pitch+speed correction for songs

What do you mean by pitch+speed correction? Do songs on YouTube have the wrong pitch or speed?


I often have to download songs for children to sing, but they have trouble with fast or low-pitched songs. It destroys the audio quality, but they don't seem to mind.


Cool projects, I’d definitely like my server to be able to do all these things. What have you used to deploy it?


- Oracle Cloud Free Tier[1] for an Ubuntu VPS (4 ARM cores, 24 GB RAM). Surprisingly pleasant and reliable, given who's offering it and for how much ($0). It used to be on DigitalOcean, until they kept screwing up their FreeBSD support and bricked my machine twice.

- Caddy[2] web server with Let's Encrypt certificates, working as reverse proxy.

The rest is a very lazy 2010's solution:

- A Go server for HTTP (static files, uploads, maintaining server-sent-event channels). It also reads and writes events in a custom format to a local socket, for the interactive parts.

- A Python server for the widgets, communicating with the Go server.

- Source code edited manually in-place (SSH or SSHFS[3], with Git) and restarted as needed. I know, I know, awful practice. But as I'm the only user, uptime during development is not a concern.

- Startup is handled by a @reboot cronjob and a bash one liner.

- Text files for "structured storage" (RSS feed items, authenticated sessions, mapping of uploaded file names).

As horrible as it might all sound, it has survived ten years and two cloud vendors. Nowadays I might containerize it, or rewrite as one Rust server, but I think I made the right choices at the time.

8/10 given the unusual requirements.

[1] https://www.oracle.com/cloud/free/

[2] https://caddyserver.com/

[3] https://github.com/winfsp/sshfs-win


Thanks a lot for this, I’ll check it out

> removed after a bank started asking questions

What? That seems crazy. A bank was giving you an earful for accessing your own balance?


I was scraping their website every few minutes, from a cloud provider, using a cookie I created by logging in manually while VPN'ing through the cloud (so the IP looked the same).

Eventually the cookie started "expiring" quicker and quicker, and they eventually thought I was being hacked. I knew how sketchy the whole thing was, so I didn't want to admit that I was doing the "hacking" myself.

It was probably against the terms of service, triggering alarms on their side, and risking being classified as actual hacking. Not worth it.

Eventually I got a bank account and app with instant transaction notifications, so now there's no point in scraping anymore.


I would add videos I liked to the "Liked Videos" or other playlist areas on my YouTube account. I'd return to find the video months or years later, only to see that it may have been removed (usually due to copyright or other channel closure). This video removal issue also extended to other platforms. I found this annoying.

To avoid this, I made an extremely simple video platform. I save all my videos there instead. It's still my hobby project that I use the most.

The code is at https://github.com/AlbinoDrought/creamy-videos

There's an optional youtube-dl importer UI at https://github.com/AlbinoDrought/creamy-videos-importer (separated for easier updates)

The importer repo also contains a Firefox extension so any target can be right-clicked -> Import to Creamy Videos -> Select a set of tags -> sent to the importer UI, youtube-dl, and then eventually video storage: https://github.com/AlbinoDrought/creamy-videos-importer/tree...


Financial Tracking Application.

I’ve been developing this app for five years. During this time, I’ve integrated it with all my bank accounts to automatically fetch data needed for creating daily graphs and analyzing trends. This allows the app to generate predictions for long-term net worth growth.

Additionally, I’ve integrated the app with Yahoo Finance, Fidelity, and Morgan Stanley to monitor my Restricted Stock Units (RSUs) and Employee Stock Purchase Plan (ESPP). This functionality helps manage tax responsibilities on capital gains and compare growth with other job offers that include RSUs from different companies. For instance, I can answer questions like, “What if I had chosen to work at Facebook instead of Apple?”

The app also extensively utilizes data from credit card transactions. It includes charts listing new restaurants visited in the last month, tables tracking monthly subscriptions, algorithms to detect discrepancies, and numerous other features.

While I’ve considered turning this into a business, my current job keeps me very busy.


Would love to see how this works, or at least hear about how you're able to pull this off!

I've been carefully keeping double-entry accounts of my personal finances in GnuCash, but I don't have anything hooked up to my accounts file. I was originally hoping that GnuCash would have some way of letting me sum up a certain combination of accounts to calculate, e.g., "how much float I have currently," but even that doesn't seem to exist... so my first order of business is probably that sort of simple custom reporting. But I would love to make use of the data and create some more actionable reports!


God, I’d be curious to see what you’ve got there. But no way in hell I’ll know how to guess your name to look up the details.


I wrote my own home automation platform. I started >10 years ago when the existing options were pretty disappointing to me. It’s a web app that runs on wall mounted tablets in my house as well as on my and my family’s phones and computers.

It handles lights, fans, cameras, sensors, locks, doorbells, speakers and TVs, HVAC and more. It runs automations based on time, weather, presence of people, and other events. It tracks energy consumption, sends alerts, etc. There’s a strong focus on local control (as opposed to cloud).

My favorite thing about it is that the client and server run from the same codebase and share state over a WebSocket connection. I’m still working on it occasionally but mainly it just runs quietly and my people seem to like it. The whole thing is configurable (connectivity, behavior and interface) so theoretically it could be used by someone else, but for many reasons I doubt it ever will :)


As a freelancer I invoice in multiple currencies, and I couldn't find a program to generate them that wasn't an overly complex $100/mo SaaS.

So I built a little CLI to generate my invoices. 100 LOC and super simple.

I miss the 70s when programming was the default way to solve problems. Excessive abstractions and proprietary software often slow us down.


> I miss the 70s when programming was the default way to solve problems.

When you get deep enough into writing a custom tool, it starts doing things a generic tool would never accomplish (or would have to be bloated with features no one needs). Hard-coding values and constraints for personal use makes for such elegantly simple interfaces.

For example, my agenda is a beautiful Boulder Dash-like grid of icons. The data is a set of arrays [1,2,1,0,0,1,1] and an object for special days. There are no settings; it has zero buttons to press.

I've made countless silly things other people could use if only they knew it existed.

https://go-here.nl/real-salary-calculator.html

https://title-spider.go-here.nl

https://salamisushi.go-here.nl

Endless things I've made for personal use. I think something like 60% need one or two lines of love to work in 2024.


My (Polish) bank provides invoicing software for free, and obviously you can put in whatever currency you want.

The bank is called Millennium Bank.


Millennium bank is Polish TIL

What's up with Portuguese and Poland? (Millennium, Biedronka, what else?)


Globalization I guess :)


[flagged]


That's obviously not what they meant.

I think they're more referring to several decades ago when someone had a software need they wouldn't get some off the shelf proprietary tool that costs $$$$ and is crazy bloated and locks you in. Instead they'd hire someone and that person would work with them to deliver exactly what they need and that solution would last for decades and the eventual off the shelf software that replaces it is widely seen as inferior by the employees.

There are famous stories of this. A high school once had a student write their entire automated HVAC system on a C64 for free. It has worked really well for 3 decades, but they struggle to find replacement parts. They asked for bids to replace it from several companies and were flabbergasted to learn it would cost them an insane amount of money. So instead, they call up the kid (now an adult who still lives nearby) to occasionally do maintenance. Which solution is better? The OP is just talking about how they miss the times when a lot more people were going for the custom in-house C64 option.


There's still a huge consulting market for this kind of work.

You just have to target the right market segments. Large enterprise firms either have in-house teams or contracts with big enterprise-scale providers, so don't do this a lot, and tiny mom-and-pops tend to use off-the-shelf SaaS, and don't have the budget or internal skillset to manage these kinds of projects.

But there are many medium-scale businesses that are structured enough to have specialized use cases that the off-the-shelf stuff isn't optimized for, and which have enough cash to invest in custom solutions.


Cancelling your ISP doesn't leave you surrounded by peers who default to solving problems from scratch.


This is the same problem that comes with my desire to give up a smartphone. Getting rid of my phone doesn't make other people know how to give me directions to their house without GPS (among other problems).


At least with GPS, I've found a reasonable substitute to be an honest-to-god map. Here in LA the Thomas Guide is fantastic, but other places have "road atlases" that function similarly to my knowledge. In some ways I find it better than Google maps, because it's actually designed to be a map instead of a robot overlord dictating what you do.


Web service that turns a YouTube channel to a Podcast-compatible RSS feed, just by changing the URL from https://youtube.com/XYZ to https://$SERVICE_URL/XYZ.

Use it almost every day. It’s nice to keep up with channels I care about while escaping the algorithm.

It's something I work on on and off. I could be interested in open-sourcing it fully if folks are interested in helping out. I'm a pretty junior engineer, so work is slow and I've never been satisfied enough with the code to publish it yet.
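
Not the commenter's code, but the core of such a service can be sketched with yt-dlp's flat playlist extraction plus a bit of RSS (the /audio/{id} path is a made-up endpoint the service itself would have to serve):

    from xml.sax.saxutils import escape
    import yt_dlp  # pip install yt-dlp

    def channel_to_rss(channel_url: str, service_url: str) -> str:
        """List a channel's videos and emit a minimal podcast-style RSS feed."""
        opts = {"extract_flat": True, "quiet": True}
        info = yt_dlp.YoutubeDL(opts).extract_info(channel_url, download=False)
        items = "".join(
            f"<item><title>{escape(e['title'])}</title>"
            f"<enclosure url='{service_url}/audio/{e['id']}' type='audio/mpeg'/>"
            f"<guid>{e['id']}</guid></item>"
            for e in (info.get("entries") or []))
        return (f"<rss version='2.0'><channel>"
                f"<title>{escape(info.get('title') or 'feed')}</title>"
                f"{items}</channel></rss>")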


You should open source this!


I second this! This is something I'd love to use.


I would totally use this no matter how bad the code is.


Would love to see this!


I made a small tool that takes a screenshot every 5 minutes and stores it in a password-protected zip file, but only if something has changed on screen.

Sometimes I go through the old screenshots and get nostalgic, as the oldest are already 10 years old.

I spend so much time of my life on my computer, so why not take photos of it?
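
A rough Python sketch of the same idea (not the commenter's tool): hash the current screenshot and only append it to the archive when it differs from the last one. Note the stdlib zipfile can't write password-protected archives, so the encryption part would need something like pyzipper or an external zip call.

    import hashlib
    import io
    import time
    import zipfile
    from PIL import ImageGrab   # Windows/macOS; on Linux use e.g. mss

    last_hash = None
    while True:
        buf = io.BytesIO()
        ImageGrab.grab().save(buf, format="PNG")
        digest = hashlib.sha256(buf.getvalue()).hexdigest()
        if digest != last_hash:   # only store when the screen actually changed
            last_hash = digest
            with zipfile.ZipFile("screens.zip", "a") as z:
                z.writestr(time.strftime("%Y%m%d-%H%M%S") + ".png", buf.getvalue())
        time.sleep(5 * 60)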


That is so cool! It'd be amazing to see a timelapse of the past ten years of someone's computer life.

Assuming an 8 hours/day average of active screenshots every 5 minutes, that would be (8*60)/5 = 96 shots per day. Times 3650 for ten years is 350,400. Divide by 60 fps and that's 5,840 seconds; divide by 60 again and that's about 97 minutes. So a decent movie length, as a single video.

Though, I imagine it'd be unfeasible to go through every screenshot and remove sensitive information.

(At a rough ballpark: if it took 20 seconds on average, to analyse and either bin or selectively-censor each screenshot, it'd take about a full working year, 8 hours per day.)


Ha, MS has Recall which is like this ... but obviously it's had problems.


There are things you can do with yourself that you’re not OK with strangers doing for you.

That’s pretty much the issue with privacy. A lot of features feel invasive because the company doesn't want to remove itself from the operation. People don't usually care because they either don't understand the impact or they trust the government and the laws to rein in malpractice.



Hmm.


I created my own translation app using Llama 3 70B; I call it the "expat translator". I live outside of my home country and always struggled with using translators like Google Translate because they don't tell you if the way you're writing something feels natural in the other language. It gives me some pretty good results, and I also instruct it to give me rewrites for informal and professional use, so I don't sound weird on WhatsApp, for example.

It uses an on-device model for language detection, and results are sub-0.3s thanks to Groq.

If someone wants to try: https://testflight.apple.com/join/GBxPMw2h


As a sort-of-expat myself, I can definitely relate to this struggle. Out of curiosity: does the language you're translating to have a non-latin script? I've found that llama often struggles with those.


So this app would not be for regular immigrants or travellers?


A lot of my software is just for myself. To algorithmically generate wallpapers for my phones and computers, I hacked up a Forth-like mini language running on the GPU to make generative art easier. Some samples: https://news.ycombinator.com/item?id=40413433.

To make recalling my web browsing easier, I built a browser extension that shows my browsing-related information on one page. Pressing the shortcut key Alt-L brings up a page showing my frequently visited sites, my open tabs, my bookmarks, and my history. Searching and navigation are super easy. Searching can combine terms with logical operators. The hardest part of the project was getting fast performance when rendering a lot of data.

I use this extension daily, and I put in some effort to polish it as a releasable software.

https://chromewebstore.google.com/detail/one-page-favorites/...

https://microsoftedge.microsoft.com/addons/detail/one-page-f...

https://addons.mozilla.org/en-US/firefox/addon/one-page-favo...


Circa 2018, I went deep into investigating ADHD and whether I had it or not. I ended up "interviewing" a rather diverse group of people, from authors, teachers, and ADHD coaches to psychologists and other doctors. During this time, I was reaching out to many people, managing schedules and appointments, and eventually publishing the interviews. The "reach out" to "interview" ratio was something like 100:1 and took a lot of management.

This was my part-time hobby, I guess. My full-time gig is a software engineer at a bio-tech company.

So, I wrote some basic web software (PHP/Symfony) to manage and track the whole process. It took around 3 months to write the code, though there was no set beginning and no set end. The code was started when I became overwhelmed with the manual aspect of tracking everything. The code was done when I figured I had done enough to manage and automate the process.

Talking with a few people after the fact indicated very high interest in the software I had written. And thus began the journey to convert it to multi-user.

I started on the multi-user conversion and, maybe because I tried to make it do everything and then some, have not since finished it. The code is on my private GitHub, partially converted to multi-user, slowly rotting away (needs library and core updates).

What does it do? Manage the process of cold lead acquisition, follow-up interactions and onboarding, and eventual publishing of the resulting interviews.

How did I make it? PHP/Symfony.

How much time spent making? About 3 months in my spare time.

How often do I use it now? No longer used.


40 lines of C to control brightness on Linux in the way I like (pressing the brightness-up key should make it go up more than pressing the brightness-down key makes it go down). It took 20 minutes to make. Used as a keyboard binding.
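
The asymmetric-step trick is small enough to show. Here is a hedged Python equivalent of the idea (the original is C), assuming a sysfs backlight device such as intel_backlight and permission to write to it; bind it to the keys as "brightness.py up" / "brightness.py down":

    import sys
    from pathlib import Path

    DEV = Path("/sys/class/backlight/intel_backlight")   # assumption
    UP_FACTOR, DOWN_FACTOR = 1.25, 0.85   # up steps are bigger than down steps

    cur = int((DEV / "brightness").read_text())
    mx = int((DEV / "max_brightness").read_text())
    factor = UP_FACTOR if sys.argv[1] == "up" else DOWN_FACTOR
    new = int(cur * factor) + (1 if factor > 1 else 0)   # always move up by at least 1
    (DEV / "brightness").write_text(str(max(1, min(mx, new))))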

A TODO list program; the list is periodically printed on my desktop with Conky.

Software to automatically book reservations at good restaurants on Resy. Supported proxies, multi account, automatically running every day and getting reservations in a certain time range, and even something with a USB GSM modem to respond to the confirmation texts you get a day before. Used it until I got banned from Resy :\

Family photo search using CLIP (actually uform) and face labels from Synology NAS. So you can search “winter +christopher” and it will only show pictures of me and sort by the most winter related ones. You can also filter out certain names, search multiple names, click on an image to get images most like it, filter by year, or any combination thereof. Took a couple days to make with Flask, pgvector, and some code to scrape data from the Synology web interface. My family uses it sometimes too though.


Any chance of open-sourcing your photo search?

Love the Resy one!


I once ripped the corner off my Windows install code sticker. I was missing the last 2 characters. I wrote a script in SikuliX to brute-force the remaining 2 characters. It took about 4 hours to run, but in the end I got the full valid code.


SikuliX looks quite useful.

http://sikulix.com/


I wrote a tool to do automated QA on internet video (HLS/DASH, tech used for Netflix, YouTube, Twitch, etc.).

It evaluates streams against a database of 100 or so "quirks" that identify either general issues or issues that will only manifest on certain player libraries. For instance, specific "in spec" encodings which are actually non-standard in practice get flagged.

Built on TypeScript/node/Docker over the course of maybe 18 months. Used it fairly often when I was working in the space, not at all these days. Originally the plan was to license it as an enterprise vid tool.

(I've been considering open-sourcing it - would YOU use it if so?)


I’d be interested if for no other reason to see if some of the hiccups I see in streaming video recordings are more common than just me/just random.


I am definitely curious about a tool like this. I work with a lot of video streams and this collective knowledge of quirks might be useful as a QA tool


I wrote a script that looks at C++ files, figures out the dependencies (by looking at what headers they include) and automatically compiles and links out of date files. It makes programming in C++ sort of like programming in a modern language like Go. It took me a couple of days to implement and I use it any time I am working on my own C++ projects. I eventually want to add package management features to allow integration with external libraries, but this isn't a priority for me currently.
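
Not the parent's script, but the core idea fits in about 25 lines of Python: scan the #include "..." lines, compare modification times against the object file, and recompile/link only what's stale (g++ and a flat single-directory layout are assumptions here):

    import re
    import subprocess
    from pathlib import Path

    INCLUDE_RE = re.compile(r'#include\s+"([^"]+)"')

    def local_deps(src, seen=None):
        """Headers reachable from src via #include "..." (system headers ignored)."""
        seen = set() if seen is None else seen
        for name in INCLUDE_RE.findall(src.read_text(errors="ignore")):
            hdr = src.parent / name
            if hdr.exists() and hdr not in seen:
                seen.add(hdr)
                local_deps(hdr, seen)
        return seen

    objects = []
    for cpp in Path(".").glob("*.cpp"):
        obj = cpp.with_suffix(".o")
        newest = max([cpp.stat().st_mtime] +
                     [h.stat().st_mtime for h in local_deps(cpp)])
        if not obj.exists() or obj.stat().st_mtime < newest:
            subprocess.run(["g++", "-c", str(cpp), "-o", str(obj)], check=True)
        objects.append(str(obj))
    subprocess.run(["g++", *objects, "-o", "app"], check=True)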

I have also worked on trading algorithms that used publicly available info about crypto order books to make profitable trades, which will obviously not be published since then they would stop working.


Bill of Materials (BOM) software for identifying dependencies along with versions that are out of date or vulnerable is a growing market in Government.


I can wholeheartedly recommend Syft.[0]

Decoupling SBOM data collection from vulnerability tracking (with your tool of choice) is a nice capability.

0: https://github.com/anchore/syft


I have good experience with

https://github.com/pivotal/LicenseFinder

This produces a BOM with versions, but rather than out-of-date checks it focuses on licenses, which comes in handy during acquisition due diligence. It supports many languages.


I wrote an app to log mileage while doing deliveries, etc for my small business. It gives me a small tax benefit at the end of the year. To make it more fun, I gave the UI a "terminal" flair.

I wrote it over the course of a week or so and I use it frequently.

It's not that it won't see the light of day for anyone else, but it's a very specific niche, so I doubt anyone would even find it on the app store.

If anyone's curious: https://play.google.com/store/apps/details?id=io.thisischris...


In high school I built a chrome extension to secretly chat with my girlfriend on the Google Search page.

Her parents were super strict and would not allow her to have a boyfriend. We were caught once, so we wanted to be a few steps ahead of their detection.

Using the internet was allowed for her to study. I made the extension such that when you searched for something popular, clicked on the largest image in the knowledge graph (right-side box) 4 times, and then right-clicked, the Wikipedia description would become a chat box (to avoid accidental discovery by her sister).

The backend was a very simple .txt file and PHP to insert and keep only the last 5 conversations.

I think I made it over a few days. We used it a total of 2 times, because we couldn't notify each other when we came online.

Switched to a simple ROT13 encoded sms-ing. With the key changing every day.


> Switched to a simple ROT13 encoded sms-ing. With the key changing every day.

I was confused. So, some days it was ROT-13, some days Double ROT-13, some days Triple ROT-13? Surely not.

I guess you mean that it wasn't always 13 (so, not really ROT-13).


Nice!


An out-of-spec DNS server implementation that helps me understand and control my internet usage.

The DNS server resolves queries based on rules I've set up. Essentially, I can configure certain categories of websites to only resolve at certain time periods.

For example, I could make it so reddit, HN, etc, only resolve for a 5 minute time period (either anytime or only in the mornings) every day.

The point of the server is to track my web usage across all personal devices, and then make conscious decisions to control my internet usage based on this data. (All personal devices are on the same tailscale VPN... that's how I ensure I am always using my DNS no matter the network I am on.)

It's still a WIP, but I've probably easily sunk over 40 hours on it by now. I also wrote a custom Rust async runtime (single-threaded) for fun.
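
A minimal sketch of just the rule check, nothing DNS-specific (the rules here are illustrative: "these domains only resolve during these minute-of-day windows"):

    from datetime import datetime

    RULES = {
        "distracting": {
            "domains": {"reddit.com", "ycombinator.com"},  # example category
            "windows": [(8 * 60, 8 * 60 + 5)],             # 08:00-08:05, assumed
        },
    }

    def should_resolve(qname, now=None):
        now = now or datetime.now()
        minute = now.hour * 60 + now.minute
        base = ".".join(qname.rstrip(".").split(".")[-2:])  # crude eTLD+1 guess
        for rule in RULES.values():
            if base in rule["domains"]:
                return any(start <= minute < end for start, end in rule["windows"])
        return True  # uncategorized domains always resolve

    # A resolver would call should_resolve(query_name) and return NXDOMAIN
    # (or an empty answer) when it comes back False.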


Nextdns also has support for resolving queries based on the time. If you ever get tired of rolling your own that is.


This is awesome! Please consider open sourcing it!


I was selling an apartment in a developing country. I wanted to know, given the constantly updating property sales tax and rapidly worsening exchange rate, how much I’d end up pocketing in USD, if I sold for X vs X - 5% vs X - 10%, etc

That would inform my “I’m willing to go as low as” selling price.

So I built myself an app to scrape the current tax from the government’s online calculator and assemble a dozen what-if scenarios, given the current exchange rate.

When I found one that would let me break even, cash in hand after all fees and taxes, that became my price floor.

It worked. Now I don’t need my app anymore.


I tend to think about these types of problems this way as well. But then there’s folks that will throw this together in Excel, and I realize I would not know where to start. Anyone got pointers?


Part of this was in Excel. XLOOKUP() is magic.

My data source was a worksheet that was generated by my app.

Worksheet 2 - Scraped tax Data + Excel fetches current exchange rate from its "financial market information"

Worksheet 1 - My "dashboard" with a dozen possible price levels, agent fees, etc and using XLOOKUP to read from Worksheet 2.

The benefit of compiling this into Excel rather than a custom frontend is 1) Excel comes with unparalleled tools for sorting/filtering/visualizing data and 2) I shared this XLS with real estate agents.

They couldn't fathom somebody selling a property for any price other than whatever their market analysis provider says.

Before sharing this XLS with the agent, the conversations were... frustrating.

"I want to pocket a minimum $x USD after this deal. What should I set my price at in local currency?"

"Currently comparable apartments in your area at selling at Y local currency"

"Okay, but ultimately I'm leaving and after the sale I want to immediately exchange and transfer the money and pocket $x USD after fees and exchange rates"

"It's illegal to sell a flat in USD"

"Yes I know. I'm selling it in local currency. But my end goal is to get $x USD after fees and taxes. What should I set my price at to achieve this?"

"If we try to sell your apartment in USD, the buyers will be angry and won't come"

"I'm not trying to sell my apartment in USD. I'm selling it in local currency. What I'm saying is that after selling, I will convert this local currency to USD, and I want that to be at least $x USD. Can you calculate the fees and taxes, so that you can tell me how much in local currency I should set it for, so that in the end I get $x USD?"

"Currently comparable apartments in your area at selling at Y local currency."

"But after taxes and fees and money exchange how much will that get me in $ USD?"

"It's illegal to sell a flat in USD, so we can't do that."


My day job is all about doing presentations, so I obviously wrote my own slide software. It is a pure expression of my workflow, my requirements, and my idiosyncrasies and the way all of the above changed over ~10 years. It is a great performance booster if you can basically brain dump your thoughts straight into slides, and if you can just implement breaking changes (or choose to keep supporting an objectively bad feature for all eternity). I don't need to care about any other users, or the bus factor.

The only part of that software that ever became public was the animated syntax highlighter[0], but I don't believe any other part would help anyone else accomplish anything.

[0] https://code.movie/


Over the years I've built lots of small and large tools for personal use, some of them I'm still planning to release as open source "one day", others I have no intention of ever releasing.

One of the biggest and most useful of those tools is a Django web interface that accepts JPG/PNG/PDF file uploads of scanned documents. For images (JPG/PNG) it runs "tesseract" to OCR-extract the text; for PDFs it runs pdf2txt. The extracted text is stored in an SQLite database, which enables keyword search across the uploaded documents.

I use this as my (digital) archive of important documents, e.g. mortgage, bank statements, insurance, medical files, etc. I just checked and I now have 1129 documents in there, ranging all the way back to 2006.

It's super useful to be able to go back in time digitally, without having to dive into the collection of physical binders with the real documents (ordered by year and month but nothing beyond that, given that there are usually only a handful of documents per month, and I can use the web interface as an index into the physical archive, if I ever need it).
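
The core of such a pipeline can be quite small; a rough sketch assuming pytesseract/Pillow for images and pdfminer's pdf2txt.py for PDFs (the real app wraps something like this in Django):

    import sqlite3, subprocess
    from pathlib import Path

    import pytesseract
    from PIL import Image

    db = sqlite3.connect("archive.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(path, body)")

    def extract_text(path: Path) -> str:
        if path.suffix.lower() in (".jpg", ".jpeg", ".png"):
            return pytesseract.image_to_string(Image.open(path))  # OCR via tesseract
        if path.suffix.lower() == ".pdf":
            out = subprocess.run(["pdf2txt.py", str(path)], capture_output=True, text=True)
            return out.stdout
        return ""

    def index_document(path: Path) -> None:
        db.execute("INSERT INTO docs VALUES (?, ?)", (str(path), extract_text(path)))
        db.commit()

    # Keyword search over everything indexed so far:
    # db.execute("SELECT path FROM docs WHERE docs MATCH ?", ("mortgage",)).fetchall()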


I use paperless-ngx but the concept is the same.

It is hard to understand how much easier this makes dealing with all of the paper in your life until you have it.


Once a quarter, I scan all my accumulated paper with a similar setup and tag it all in Logseq. I almost never need the archived docs, but when I do, it's a breeze.


I pretty much rebuilt my bioinformatics stack from scratch in Go, and I’m pretty proud of it! It does things that no other library, even python libraries, do for synthetic biology applications.

I’m pretty sure nobody else uses it, but I use it a lot for DNA design work for my company https://github.com/koeng101/dnadesign


Very cool!

I am curious if you might explain how/why this stack diverges from the upstream one (bebop/poly)? I see for example dnadesign has a version two of the seqhash algorithm that looks rather interesting.


I'll make sure to explain it a little more!

Basically, I developed a whole lot of bebop/poly, but I had some disagreements with the owner of the repo (Timothy Stiles) about direction at the later stages of development. For example, I wanted to standardize all the parsers to use a generic interface, so that they're all used in the same way, while he didn't really want to change anything. There were other features I wanted to add as well - you can see a full list in the changelog (which are the changes since diverging from upstream)


Yes, I run a daily hyper-local email newsletter that is custom built. It's called BTV Daily: https://btvdaily.com

I wrote the newsletter generator in Python. It uses all kinds of different APIs (news, social media, MailChimp, scraping, weather, even some LLM for summarizing some of the news, etc.). It runs every day at 8 AM on a little server.

It's not a super long script, but I've changed and refined it over the years many times. It's like I've spent many dozens of hours editing the same <1000 lines of code as I've modified, removed, and added features to the newsletter.


You’ll probably enjoy this (relevant to your question). I’m not the author.

https://www.robinsloan.com/notes/home-cooked-app/


Love the "learn to program" vs "learn to cook" dichotomy.


What does it do?

I built a bot that runs every 2 weeks, scrapes my local movie theater's currently playing titles, runs them against rotten tomatoes to filter out movies rated below 85%, then looks them up on youtube and posts the trailers to my discord server

How did you make it?

Python script that runs in google cloud functions, triggered on a bi-weekly schedule

How much time you spent making it?

An evening

How often do you use it?

Every two weeks
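
A rough sketch of the filter-and-post step, assuming the now-playing titles and Tomatometer scores have already been scraped (the Discord webhook URL is a placeholder):

    from urllib.parse import quote_plus

    import requests

    DISCORD_WEBHOOK = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder
    MIN_SCORE = 85

    def post_trailers(now_playing):
        # now_playing: {title: tomatometer_score}, gathered by the scraping step
        for title, score in now_playing.items():
            if score < MIN_SCORE:
                continue
            query = quote_plus(f"{title} trailer")
            search_url = f"https://www.youtube.com/results?search_query={query}"
            requests.post(DISCORD_WEBHOOK,
                          json={"content": f"{title} ({score}%): {search_url}"},
                          timeout=10)

    post_trailers({"Example Movie A": 91, "Example Movie B": 60})  # only A gets posted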


This isn't a bad idea but 85% seems like it would filter out a lot of good movies. The recent Dune, for instance, is at 83%.

Looks like about half my library would be filtered out.


might have to replicate this one...


I have a quite big userscript that fixes all the little annoyances I've been finding on the sites I often visit. Is the click area too small? I fixed it. Is there a really big form that when I select a specific value some other elements need to be at a specific state? It's fixed now. Did we get a new business lead and we need to enter all their information to our CRM? I can now paste that info and the form will be automatically filled.

I find userscripts way easier to update than an extension, so that's what I've been sticking with for quite a few years now.


It's hard to call it "software", but the last one was a script to merge multiple Google Calendars into one, born out of frustration with juggling them manually. It runs on Google Apps Script and reacts to triggers, so once an event is created/modified/removed in one of the calendars, the change is reflected in the main calendar automatically.

[Edit] In the opposite direction as well: if modified in main, it reflects in the original one. AFAIR, it was the feature I missed, since Google only supports (-ed?) one-way mappings.


Back in the day, I had a laptop with only 4 GB of RAM, which would run out if I'd open too many browser tabs or programs. I didn't have an external monitor always connected, nor were most of the tray monitoring solutions to my liking. So instead I wrote a program that would create a few semi-transparent windows in the OS, snap them to the sides of the screen, allow clicks to pass through and resize them as the resource usage would change.

A blue full-sized bar on the left side of the screen? The CPU is under 100% load. A purple bar on the right side of the screen that reaches 3/4 of the way up? Guess my RAM usage is getting close to the limit, better not get too eager with the tabs, maybe close a program.

It actually wasn't that bad, but shortly after I just bought some more RAM. That was also before I had my own Gogs or Gitea instance, so I don't think I have the source anymore, it wasn't too hard to do on Windows though (nowadays I'd probably just put Linux Mint on the system or something, it needs a bit less memory in general). Oh, also now I have like 4 monitors for my main PC and don't do too much computing on the go. The 8 GB MacBook that I have for when I'm out and about feels rather slim, though.

Aside from that, there's the CMS I use for my website, though I'm not showing it to anyone because it's so bad that it makes me want to climb into a hole and disappear off the face of the planet. It still works though, so I don't necessarily see myself changing it out for something else just yet.


The last thing I made exclusively for myself and nobody else was a shopping list calculator on my TI-83 graphing calculator that added tax to the shopping list. Programmed in BASIC, and I would use it to add up the items going into the shopping cart while my parents shopped at Sam's Club.

And I'd tell them what the final total would be when we were standing in the checkout line.

Very small and basic, but I thought it was the coolest thing in the world as an 11 year old back in 1996.


I'm not a developer. I can write some bash scripts but otherwise I struggle. So, I use ChatGPT as my own private developer (version 4o with a subscription).

The most recent is a Chrome extension that plays a "server down" tone any time the word "critical" appears on our system monitoring web page (Netdata). It plays that tone when the number of "critical" words goes up, and plays a "server up" tone when the number goes down. It's dead simple and works to give me audio alerts so that when I'm hyper-focused on something, I can get pulled out of it by the "server down" tone. It's gone over well with my coworkers as well.


I made an app for my brother's wedding so all invitees could easily upload and view photos in a shared slideshow (displayed in real-time on a big TV at the event). Dismantled as soon as the event was over. It took less than a day to create with SvelteKit, Dropbox, and https://slidesome.com

Some more:

- https://weather-sense.leftium.com: weather app with the trendcast just the way I like it. WIP, but already using it on daily basis.

- https://multi-launch.leftium.com: quick link launcher; can launch multiple links at the same time. I use this multiple times a day.

- https://tt.leftium.com: tool to streamline conversions I frequently need. When the input type is detected on paste the converted value is automatically put into the clipboard. Also paste works from anywhere on the page. A super-niche hidden feature is if I paste the outerHTML of my SoFi relay accounts list, it will transform it into a TSV format for pasting into a Google sheets balance sheet. I use it a few times a month.

- https://ff.leftium.com: tool to calculate the time I needed to do something in a game I used to play. Automatically updated a calendar event with notifications.

- https://orbs.leftium.com: another tool to help with planning in the game I used to play.


> - https://weather-sense.leftium.com: weather app with the trendcast just the way I like it. WIP, but already using it on daily basis.

The multiple lines per graph kinda reminds me of the meteograms that yr.no used to have on their site and in their app. I really like their presentation. Concise, and much faster to digest than a table of data. Sadly, they eventually split them out to multiple graphs, with no UI option to put them back on one graph.


Yes, I think this is one of the cases where multiple scales on a single plot works[1]. To avoid confusion, I did a few things:

- Different units have differing plots (mm represented as bars, percentages as area graphs, and temperatures as lines.)

- The y-axis is not labeled. For exact measurements, you can hover over the graph and read the exact values at the top.

- The legend at the top actually uses checkboxes. Eventually, you'll be able to toggle individual metrics on and off.

[1]: https://hw.leftium.com/#/item/40391614


You made a marketing page for your one-off shared photo collection app?


No marketing was needed. The app was specifically made just for that single event.


Not being familiar with Slidesome, I thought /that/ was the app you created. Now I see it's just a component you used. Hence my confusion. :P


Ah, I understand your confusion, now. I didn't link to the actual project because it's not active, anymore. I guess I could link to the repo: https://github.com/Leftium/PhotoDrop

Finding a service that integrated with Dropbox the way I wanted probably took more time than the actual development.


Made a simple app to put all my Apple Watch walks on the same map, so I can walk every street in my city. Surprisingly there wasn't anything available that was free or straightforward, despite it being an easy app to develop. Honestly, it's extremely fun to use something of your own every day, and patch it up when you come up with some other ideas. I did release it on AppStore (https://apps.apple.com/us/app/mapcut/id6478268682), but mostly for my friends so they can use it as well.


Wanted to check it out and was surprised that it requires an account.


Yeah, totally fair point. I should probably add a simple guest access. Had to lock it behind auth as I keep adding features for myself that require some backend action, and was too lazy to write different code paths depending on authentication. I guess that's also one of the drawbacks of writing something for myself: you just forget about the inconveniences it can cause to others.


A note that explains what happens with the location data would also be nice.

Like is my complete health uploaded to your server? (I assume not)


Nope, no health data is being sent to me. The only time your location gets recorded is when you use it to generate a challenge based on your current location. For just mapping and visualizing your walks, everything is local.

But yeah, obviously a ton of polishing can be done. Sorry about the troubles!


Most of what I write is for an audience of one - even if it gets merged to the company tree, I'm usually writing for my own immediate needs. But to what you explicitly ask: I contract, and I hate invoice management, so I have my own time tracker CLI that integrates with Stripe invoicing.

My hours get billed, I've got my notes for the cycle, and payments to my LLC get automatically routed to the right financial targets based on stuff I integrated with mercury.com.

Doesn't do the taxes, but cuts a lot of the low value/high cut vendors out of my revenue cycle and that makes me happy.


Absolutely. Of late I have an iOS app to measure the grade of hills. Initially this was to avoid parking tickets in San Francisco, which requires curbing your tires on a 3% grade. The reality is that this is basically no grade and you should just curb no matter what in SF. Still fun to know how steep the hills are when walking. The other one is to set the EXIF data on photos to say they were taken in Pyongyang, 100m underground, tomorrow. This is probably going to get me and my friends in trouble with the NSA for messing up their database, but these things happen.


Yes, lots.

I made bookmarklets to sort multiple retail stores, including my local grocery store website and Amazon by price and price per unit. Amazon price sort is nearly worthless, I assume intentionally. Drives me crazy to grocery shop without price per unit, my bookmarklet also converts units.

I made a browser extension that filters out youtube videos, reddit posts, youtube comments, twitter posts, twitter sidebars, and probably a couple other things based on an extremely long list of keywords of crap i don't want to see - largely political, violent, war related stuff. I think it filters out more than half of posts on /r/all, partially because there's also a long list of popular subreddit blocks on it too.

I made a custom interface for Habitica, a habit-tracking app, to mimic a game from the 90s I really liked.

I made an arduino water gun robot to spray my cats and break up frequent cat fights when i'm all the way on the other side of the apartment.

More random stuff like this.

Wish i could find a job!


My spouse is a teacher. The science department at the school uses a relatively complicated grading mechanism called conjunctive standards based grading, which used to require a lot of spreadsheet magic to work. I wrote a gradebook app (firebase, angular) that handles the grade entry (not grading), conversion of assessment data into grade reports for students, plus charting and stuff so you can see student or course aggregate progress over time.

I originally thought more people might use it, but it's basically just 6 teachers.


Hey,

Hit me at elliot@edusign.com if you have some time, I might be interested in what you are doing!


I've been a huge proponent of ALWAYS writing software just for yourself. But I'm also a fan of releasing that software.

I have a couple notable stories about that:

I had a friend that wanted to scan his album cover, and I'd always wanted a scanner, so I bought one and wrote some command-line software to do it. Then I applied that as an extension to the venerable Xv image software. Wrote it entirely for myself, ended up selling something like a thousand copies of it.

Recently, I was tired of Spotify so I wrote a Python program to export my playlists into YouTubeMusic. Released it on Github and it now is by far my most-starred project, I figure it's helped at least a thousand people move away from Spotify, I've had a bunch of contributions to it and have had several people throw money my way.

Write for yourself, give to the world.


I built fake iOS for my baby. Recreated most of the apps, phone, music, videos, notes, wallet, maps, etc.. but simple versions with giant icons. Also added a ton of 'soundboards' which are their own apps for things like animals, food, family, home. I have like 20+ apps in there now.


I've open-sourced most of my stuff, but one thing I haven't is a gambling simulator - I feed in odds and results for a season of sport and I can tune some parameters to try different strategies.

Someone once told me that apparently some local sports reporters' weekly tips are used with some seed money and the proceeds are given to charity and I was intrigued enough to spend a few days building something that could test that out.


The two that I still actively use:

- Danish Swimming Water Quality App[1]: Native app for my iPhone that I use to check the water quality. The official one wasn't working for me, too slow and too buggy. Ultimately, it's just a JSON of public data spread on the map, very simple.

- Artwork Framer[2]: Script using Blender that converts a .png into a framed 3D model. You can even adjust the frame if you want and then export it as a model to be used in VR (works even in Apple Vision)

[1] https://github.com/bartaxyz/denmark-swimming [2] https://github.com/bartaxyz/artwork-framer


Oh man. Do I have a list for you! I am very self-conscious about publishing my work, so over the years I have made many software projects - games and utilities - that will never be published. My wife thinks I am being silly and I should publish it. Anyway, here goes....

MP3Renamer (2002) - The age of music piracy is still rife and I have downloaded my share from Napster, university file shares, etc. However, most filenames are horrendous and not clean. So, my first utility was a Java program that would analyse filenames based on common garbled patterns and rename them into [Artist] - [Songname].mp3. It worked surprisingly well for 90% of the use cases.
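
A rough sketch of the renaming idea, in Python rather than the original Java, and with illustrative patterns rather than the real ones:

    import re
    from pathlib import Path

    # Example "garbled" patterns: "03 - artist - song.mp3", "artist_-_song.mp3", ...
    PATTERNS = [
        re.compile(r"^\d+\s*[-_.]\s*(?P<artist>.+?)\s*-\s*(?P<song>.+)\.mp3$", re.I),
        re.compile(r"^(?P<artist>.+?)_-_(?P<song>.+)\.mp3$", re.I),
    ]

    def clean_name(path):
        name = path.name.replace("%20", " ")
        for pattern in PATTERNS:
            match = pattern.match(name)
            if match:
                artist = match["artist"].replace("_", " ").strip().title()
                song = match["song"].replace("_", " ").strip().title()
                return f"{artist} - {song}.mp3"
        return None  # doesn't match any known garbled pattern; leave it alone

    for mp3 in Path("music").glob("*.mp3"):
        new_name = clean_name(mp3)
        if new_name:
            mp3.rename(mp3.with_name(new_name))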

CombatLogAnalyzer (2008) - My wife and I are in the throes of World of Warcraft arena, which is a competitive dueling system. We only play 2v2 and we both suck at it. So, I enabled combat logs in WoW and then wrote a parser, analyser, and visualizer for every arena game that shows which spells were used, where the damage came from, and the highest contributor of damage, broken down by playable class. By the end, we learnt what was killing us and the statistics showed our strengths and weaknesses. Suffering from high latency and poor skills, we managed to crawl from 800 rating to 1800 rating! We just couldn't go beyond that! (I was the crutch). This was done in .NET WinForms and I really learnt how to use LINQ.

Space Commander (2019) - My daughter is almost 4 years old and I think she is ready for computer games. I decide to learn MonoGame and I make a Space Commander clone. It is a HIT!

HappyMrsChicken (2019) - From my smash-hit game above, I make a clone of HappyMrsChicken, except this one is set in a forest where the chicken has to eat corn and there is competition from a mysterious goblin creature who also goes after the corn. Who will win?? Turns out, I cheated and gave the chicken a boost. My daughter won a lot!!

OptionsTrading (2020) - It is covid and I am locked in a quarantine facility for 28 days. Like a lot of retail noobs, we are getting into trading stocks and options. I decide the IBKR interface sucks and I can do better. While spending those 28 days in isolation from family, I learn React to write a frontend and Python to write a backend that displays all our trades, statistics, UIs, loss calculators, PnL, etc. My wife and I use this to date, but I am too chicken-shit to publish it.

My personal favourites are CombatLogAnalyzer, OptionsTrading, and HappyMrsChicken, in that order.


If you count a bunch of individual shell scripts for text processing then yeah. A bunch. I still use a lot of scripts I built like 15 years ago. Every now and then I make tiny changes but yeah I still use them a lot. I'd say 40% of the software I've ever built has been for me.

I'd say it's a rite of passage to write software for yourself:

1) It gives new programmers some practice.

2) It can help you understand software development better.

3) It reinforces the concept of dogfooding what you create (even though in this case others won't get to use your software). Again, making you a better developer. You'd be surprised how many people write programs that they barely use or test, never really knowing how useful it is or isn't to others.


Recently, I created a website that displays a map of restaurants featured by my favorite Japanese food YouTuber (who is mainly famous as a local TV personality). It's a public website primarily for personal use by my family and me, with minimal traffic.

As a fan, I was frustrated by the lack of a comprehensive list of featured restaurants, with watching all videos being the only way to find them. So I developed this website to see all locations on a map. I was surprised by how many restaurants were featured, so many pins on the map! Now, I'm enjoying trying out and planning visits to these places.

The biggest challenge was extracting restaurant information from the videos. Each video ends with a brief 3-second slide showing the restaurant's name and partial address. My solution converts videos to images, runs OCR, extracts text, uses an LLM to generate JSON data, and adds geolocation information. I set this up as a GitHub Action that regularly scrapes new videos and updates map markers automatically, all within the free tier.

It took me a few months to figure out the video information extraction and create the website using SvelteKit and MapLibre.
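
The title-card step might look roughly like this (a sketch assuming ffmpeg and tesseract with Japanese language data installed; the timestamp and filenames are placeholders):

    import subprocess

    def ocr_title_card(video_path, seconds_from_end=3.0):
        # Grab one frame near the end of the video, where the info slide appears.
        subprocess.run([
            "ffmpeg", "-y", "-sseof", f"-{seconds_from_end}",
            "-i", video_path, "-frames:v", "1", "frame.png",
        ], check=True)
        # OCR the frame; Japanese text needs the jpn traineddata.
        result = subprocess.run(["tesseract", "frame.png", "stdout", "-l", "jpn"],
                                capture_output=True, text=True, check=True)
        return result.stdout

    # The raw OCR text is then handed to an LLM prompt that returns JSON like
    # {"name": ..., "address": ...} for geocoding; that part is omitted here.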


A TUI client for Confluence

A script that does a speed test of my Internet connection once an hour and plots a graph

A financial analysis webapp that barely works

I call it meware: works for me, and hopefully for someone else, too, but it's not polished.


I would really love to learn more about your TUI for confluence.



Technically it's possible to see most of my code on GitHub, but most of it definitely wasn't intended for sharing; it's there mostly as protection against losing it.

A number of SmartThings device handlers and arduino code for home automation (early Ambilight clone, light strips, sump pump monitoring, desk fans).

A janky pendant to run on RPi for my CNC (clockworkpi.com devices are cool btw!)

Reddit bookmarks manager that google somehow scanned so technically there are a few people using it... accidentally

Custom Mailspring build with themes and minor annoyance tweaks

Not counting various TamperMonkey scripts that fix some site annoyances


I got annoyed at all the invoice generators, so I recently built my own in a weekend. I put the data in Airtable and a Python script generates the PDFs using typst. Much more flexible than any of the solutions I found, and it took me less time to build than I spent trying out the available solutions.


What's wrong and inflexible about off-the-shelf solutions?


They assume a specific format of the invoices and conditionals aren’t easily supported - like if I’m removing VAT use one format, if not use another.

Another problem is the annoying data input: things aren't organized in tables for quick entry, but rather as forms spread across different screens. One more thing that comes to mind is that it's hard to reuse elements; sometimes I have a generic template per client and just swap line items in and out, etc.


Tons. A few recent pieces are documented on my site, at else.co.nz/code ... but there are also plenty of utilities as well.

I've written "radio station" music software that used text-to-speech to back and pre -sell music and include news headlines and calendar appointments (that was back when I used Windows).

For work, software to build deployment manifests for the various client instances of my research funding web app, software to update the dev MySQL databases for the same clients, software to export zips of all the current code for each client.


I have a bunch of random scripts, executables, Python files, etc scattered all over my computer.

Some more often used ones:

- A script that manages my Music files (add/rm/search) and syncs them with my phone when connected via ifuse+rsync

- a CLI to really easily deal with port forwarding because I run a lot of stuff on my raspberry pi and old laptop which goes out to the internet via WireGuard & forwarded from my VPS

I also have local forks of a few abandoned projects with a few bug fixes or minor added features. Ought to find alternatives but too lazy


I write little Python scripts (or use Excel) to simulate computer/video game mechanics and quickly iterate over them before passing them off to a real programmer. I also design boardgames, so often write die rollers to calculate probabilities (it's faster for me to write something that generates hundreds of thousands of results than to actually do the math, especially if I want to tweak in real time).

My bad habit is not properly archiving these little programs, so I invariably end up recreating them from scratch each time.


Most of my software is made for my own use. I write tools that help me test software.

For instance, I wrote a tool that tells me how many simultaneous users I need to simulate X number of real users who are hitting the server only intermittently.
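
One common way to do that arithmetic is Little's law: concurrency equals arrival rate times average session time. A minimal sketch with example numbers:

    def concurrent_users(real_users, sessions_per_user_per_hour, avg_session_seconds):
        # Little's law: L = lambda * W
        arrivals_per_second = real_users * sessions_per_user_per_hour / 3600
        return arrivals_per_second * avg_session_seconds

    # 10,000 real users, each hitting the server twice an hour for ~90 seconds:
    print(concurrent_users(10_000, 2, 90))  # -> 500.0 simultaneous simulated users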

If you write tools for your own use, unless they are really big you tend not to bother with coding style conventions. The important thing is being able to code it up quickly.

I used to write everything in Perl, but I've switched to Python. It's a great rapid prototyping language, and basically everything I do is a prototype.


Assistive technology, I made a head-tracking mouse replacement. I’ve now used it almost every day at work for a decade. It’s open source but AFAIK no one else is using it:

https://github.com/aranchelk/headmouse


My wife is an optometrist (so, an independent worker), and she needs to check the list of patients she has seen (and the types of care they were given) to make sure that it matches correctly with the clinic's records. It is a tedious job, something that takes about 20 minutes per day. With a little scripting, I was able to bring that down to about 5 minutes per day. The time I saved her has added up to several dozens of hours now, and will amount to several hundreds of hours saved as the years go by.


Why are there two sets of records and why do they need to match?


I write various software tools to aid with running various role-playing and/or war games. Many I wrote before online code hosting was really a thing[1] and all of those are lost to the depths of time. For more recent tools, depending on how litigious a reputation the RPG publisher has, some were made public.

1: SourceForge launched in 1999; a lot of what I wrote predates that. Also, CVS in 1999 makes git today look user-friendly; I didn't successfully get CVS-to-SourceForge set up until around '02.


Several years ago, I picked up an old SmartBoard projector. The thing has no buttons, it's meant to be plugged into a remote control module attached to the board, or controlled over serial as part of a room system, or through its Ethernet connection.

I don't remember how I got started initially, there was some investigation of the network protocol it used. It was some crusty old standard that used an odd command scheme, but I managed to divine what magic packets to send to wake it up and select an input.

I ended up rolling it all into my first and only Android app. It eventually became a full remote control for the thing, including a sleep timer that would turn it off after an hour.

It was extremely basic and horribly ugly. I never looked into automatic discovery of the device on the network, so it had a hardcoded IP address. But it worked well enough to get a free projector running.

Apart from that, every programmer has thrown together innumerable scripts and throwaway programs for one-off tasks. My most recent was a thing that takes in a Diablo 2 save, then sets the version number and recalculates the checksum. My pirated copy is a bit old and won't accept saves made with modern editors. I don't know how many scripts I've written that walk through a file system to find or change something in the contained files.

I also have a version of Klondike solitaire written in C++ with SFML. I initially wanted to build a neural network to play solitaire, but after building the game itself, I found in my research that solitaire is actually a very difficult problem and far beyond my skills.


It's more "nobody else is interested" than "it's not out in the open", but I've made my own structured data format implemented as a Rust serializer https://github.com/rsaarelm/idm and am using it for a growing collection of command-line tools for managing personal notes written as outline files https://github.com/rsaarelm/idm-tools and to run a static site generator https://github.com/rsaarelm/blog-engine . I'm also writing a game that uses IDM as the data serialization format.

Idea for the format was that you can write structured data with a really minimal syntax if you have an external type schema running the parsing, and the syntax emerged from the line-and-indentation based outline note files I'd started writing for myself. It took some months of work and planning and a couple rewrites to get the core IDM library working right. The tools and site generator were simple and straightforward in comparison.


I wrote a video recorder specifically for one show on a streaming site that had low-level DRM enabled. The DRM meant that tools couldn't grab the stream directly, but it was low-level enough that the screen could be recorded. So what I did was write PHP software that used Selenium to open a browser with DRM enabled, logged in to the site, located my show, started it, started screen and audio recording with FFmpeg, and when the episode ended, stopped recording and moved to the next episode. To speed up the process, the program started multiple instances of this, and I used PulseAudio sinks to record their audio separately. It was magical to watch this work, like I was in a real TV studio. I also wrote an OCR solution with Tesseract and ffmpeg in bash that grabs the title card frame and parses the text out of it.

Time spent is around 20 hours. After the thing was done and I already had some episodes, my partner and I came to an agreement that the show is pretty toxic and we don't want it in our lives after all, so I deleted the episodes that I had and haven't used the software since.


I made a Rust program that, given a database of football players and projections about how they will do during the season, can be used during a fantasy football draft live to provide an indication of who the best player to draft is. I have other fantasy/sports gambling applications I’ve written to solve knapsack-type problems, choose the best team to pick in a “survivor” pool, and convert from various Vegas odds to win probabilities.

Outside of sports, I have written code to download the entire Jeopardy archive to categorize the questions most likely to appear to help with studying, code to interact with Fitbit APIs to pull down data, and code to analyze certain aspects of my full genome sequence.

I have unfinished code (and a desire to finish it) for creating algorithms to play certain board games, a tool to automatically request books and audiobooks from the library based on a Goodreads reading list (or other source of books), a tool to optimize Universal Paperclips, and several different business ideas I’d like to eventually pursue.

Creating something you find useful is a great way to get something done, and make a good tool. If you want it, chances are there are a lot of others who also want it.


Several -

My most recent was to manage dumbbell work outs.

Problem:

1) I hated changing weights in the middle of a session

2) Wanted to manage progression

Situation - I have 4 metal sets of dumbbell handles, a plastic set, some individual 3 - 5 pound pairs. A single, dial the weight dumbbell. Plates in metal range from 10 lbs to 0.25 lbs.

The SW:

1) using SQLite for DB

2) One application to calculate optimum weight settings - typical run takes 100 to 200 tries to get the best set.

3) Another application to present / track sessions. It was a BIG timer display to ensure I do not do the workout too fast (goal is 45 sec a set)

4) A tracking report.

Complexity issues: Not all dumbbell handles weigh the same - even the tightening nuts vary. Solution was to weigh everything and pair up the nuts so that each dumbbell handle with nuts matched its pair - oh I color coded the metal dumbbell handles.

The first algorithm took too long - my first version would calculate ALL combinations of all weights, then select the optimal solution. Before adding the 0.25, 0.5, 0.75, and 1 pound plates, it would take between 15 and 30 minutes to run. This was solved with a slightly more complex algorithm: I calculated all combinations for each dumbbell, merged those lists into a big list, found the perfect setting for each exercise, and checked for conflicts. When a conflict occurred, I found the exercise with the least delta from optimal and changed it to the next weight setting... then checked for conflicts again. Now it takes no more than 20 seconds and reports hitting a conflict 100 to 200 times before solving.
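
A toy illustration of the per-dumbbell enumeration, with a made-up plate pool (the real tool also pairs handles and nuts by measured weight and resolves conflicts across exercises):

    from itertools import combinations

    HANDLE_LBS = 5.0
    PLATES = [10, 10, 5, 5, 2.5, 2.5, 1, 0.75, 0.5, 0.25]  # one dumbbell's plates

    def reachable_weights(plates):
        weights = set()
        for r in range(len(plates) + 1):
            for combo in combinations(plates, r):
                weights.add(HANDLE_LBS + sum(combo))
        return sorted(weights)

    def closest_setting(target_lbs):
        return min(reachable_weights(PLATES), key=lambda w: abs(w - target_lbs))

    print(closest_setting(27.5))  # picks the loadable weight nearest the target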


For a number of years, I built various To-Do (or more accurately To-Done) lists. I would use them for about a year and then just stop, and make something else. I don't currently have any to-do list solution, I'm just using pieces of paper or forgetting stuff. At one point I had:

- A single-page HTML app that saved entries in localStorage in 1 browser (like 1kb of vanilla CSS and JS, barely anything)

- A static HTML website, generated from a folder of Markdown-like raw text, with a CLI build script. Hosted on a public web server.

- A simple Linux CLI command that let me check, set, and mark items as complete, built with Node and saved to a JSON file on disk somewhere.

And there must have been one or two more. Nothing serious, nothing that stuck long-term, but I probably spent 1-10 hours making these, and then using them daily for a year or a year and a half.

I'm very conscious about time spent automating or building what you want, versus how much real-life time savings or experience boost it gives you. I'll likely make more of these in the future when the itch reappears and hopefully get another year's enjoyment out of whatever little thing I build.


I have a lot of examples but a funny one that comes to mind is: in the early 2000s when IM clients were all the rage, I wrote a VB6 application to go through my MSN Messenger logs and rank my friends by how much I talk to them. Kind of like a MySpace top 10 prior to MySpace.

I spent a decent amount of time tweaking the UI, improving performance, adding filters, providing different file output formats, etc. Never shared it with anyone.


I wrote an application to get GPS data for a sport activity from my cheap smartwatch and dump it to a GPX route file. The application uses the Qt Bluetooth libraries. I use it every day after my runs, then I upload the GPX files to Strava.

I did it because I wasn't happy with the official Android app, so I reverse-engineered the watch's Bluetooth Low Energy protocol (this was pretty fun actually).


I have a lot of DVD-Audio and BluRay audio discs that I rip to play in Kodi. I wrote a utility that scans through audio files to generate a list of track names, and a script to encode images for each track into a video (without sound) to multiplex with the audio.

Years ago, I wrote a .exe to change screen resolution and then revert when closed. I used it when my old laptop had a composite out and I would watch movies that I downloaded on a TV.

A few years ago I wrote a grid-scale battery simulator, scraped some publicly-available cost of electricity data, and calculated how long it would take for grid-scale batteries to break even when buying and selling power to-from the grid. Short answer: About 18 months. Long answer: Permitting a power plant takes a few years, and the permitting process only recognizes buyers or sellers. The permitting process doesn't recognize "storage" yet.
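
The break-even arithmetic is simple if you assume one charge/discharge cycle per day; a minimal, illustrative sketch (not the actual model or numbers):

    def breakeven_days(hourly_prices, capacity_mwh, capex_usd, efficiency=0.85):
        # hourly_prices: a flat list of $/MWh values, 24 per day
        days = [hourly_prices[i:i + 24] for i in range(0, len(hourly_prices), 24)]
        daily_profits = []
        for day in days:
            if len(day) < 24:
                continue
            # Buy at the cheapest hour, sell at the most expensive one.
            spread = max(day) * efficiency - min(day)
            daily_profits.append(max(spread, 0.0) * capacity_mwh)
        avg_daily = sum(daily_profits) / len(daily_profits) if daily_profits else 0.0
        return capex_usd / avg_daily if avg_daily > 0 else float("inf")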

More recently, I wrote a quick-and-dirty utility to restart explorer.exe, because it has some really silly multi-monitor bugs that require restarting the process every once in awhile.
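
The restart itself is just two shell commands; a minimal sketch of such a wrapper:

    import subprocess, time

    # Force-kill the shell process (taskkill: /F force, /IM match by image name)...
    subprocess.run(["taskkill", "/F", "/IM", "explorer.exe"], check=False)
    time.sleep(1)
    # ...then relaunch it; Explorer restores the taskbar and desktop on start.
    subprocess.Popen("explorer.exe")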


> More recently, I wrote a quick-and-dirty utility to restart explorer.exe, because it has some really silly multi-monitor bugs that require restarting the process every once in awhile.

I have had some weird issues similar to this, which the following keyboard shortcut was helpful with:

> Windows Key + Ctrl + Shift + B - Wake up the device when black or a blank screen.

https://support.microsoft.com/en-us/windows/keyboard-shortcu...


Not the problem I'm having at all.

What happens in my case is that Explorer forgets the "When using multiple displays, show my taskbar apps on..." setting. It just randomly changes the setting in memory: I can either change the setting back and forth, or restart Explorer, and then it works correctly.

It's the oddest bug I've seen.


Ah, I see. That is indeed odd. Maybe one of the monitors is dropping out or sleeping? Are they daisy chained? Same connection type or different? Onboard or GPU connected? Maybe the GPU PCIe interface is going into some kind of power saving mode? I do tech support a fair bit so if there’s any commonalities, hypotheses, or coincidences you can think of that might help me figure this out if I come across it, please let me know.

The failure mode for the shortcut I referenced above is that the screen is all black but you do still have a mouse cursor.

Maybe try the shortcut I linked next time you experience this issue just to see if it works for you and/or on other devices which don’t have your particular fix?


It's not a driver or hardware issue. Please concentrate more closely on the problem as I describe it, not as you believe it to be.

It happens with many different kinds of monitors, with many different kinds of docking stations, and 3 different laptops. I started encountering the problem on Windows 10, and continue to encounter the problem on Windows 11.

I'm pretty sure it has something to do with the fact that I change a lot of the taskbar's defaults. It's probably a setting that most Microsoft Windows engineers don't use, and an unusual enough corner case that either Microsoft Windows engineers haven't seen it, or are aware of the issue but haven't prioritized it.

---

BTW: A lot of Explorer problems are not hardware related. Many software packages install Explorer plugins that crash Explorer or make it unresponsive. Ever have the right-click menu take forever to display? That's most likely a 3rd party plugin blocked on a network or filesystem call.

(I shipped a plugin and took care to architect the application so it wouldn't block Explorer.)


Many years ago I partially cloned the old iGoogle homepage because I liked it (and Google was killing it off). It's gone through a few experimental rewrites in various technologies for personal educational purposes.

Right now, it's ASP.NET/.NET 8 on the backend and still plain jQuery on the frontend.

I display RSS feeds, US National Weather Service data, and comics in it. I also have it send some things to friends and family as emails periodically.

Hangfire works on the backend to actually fetch new data at appropriate intervals.

I occasionally have to modify something and manually push a new build because something remote changes but it feels fairly stable right now (knock on wood).

I want to redo it to use ASP.NET AssemblyParts and work towards essentially giving each little box its own DLL as a sort-of-plugin-system so that I feel more comfortable adding more types of boxes (stocks, different weather data, etc) and maybe one day can open-source it. (I'd like to so I can point prospective employers at it and say "see, i can actually write reasonable real world code.")


I did mostly scripting as in sysadmin work. No FT programming per se.

I found no free optical design software that would run on Mac, so I coded something up to do some paraxial ray tracing (maybe more, I'd have to dig up the code) and (this is the good part) draw lens diagrams from the specifications.

Pretty simple, but it was fun to do. Very little available for Linux either. Physics and optics people want to have fun, too.

Much of it was just parsing the input data.

I do recall a design and/or analysis program written in Basic, but it wanted a particular basic interpreter, and I forgot if it had porting problems. Must have. I don't recall using the program.

Oddest bit was something I did on my own for Sun flex office. I would get the list of scheduled occupants and their office choice and overlay that on a map of the office suite, for a "who is where" map.

On a "real" work task, I learned how to write graphics commands in Illustrator 3 format. I may have used that on this project.

But more generally, tacking the AI header code to the file made it valid Postscript/Illustrator format.


I built an OAuth proxy (only Auth0 currently works) hosted on Cloudflare workers. I'm a big fan of the self-hosted OAuth Proxy [1], but some projects don't lend themselves to hosting a container, sometimes you just want to set up a simple app on Heroku, Fly, Workers, etc. and have an auth proxy sit in front of it.

My solution also manages SSL via Cloudflare and integrates with Stripe for simple fixed-price subscription billing models. The idea here is to be able to iterate on product ideas quickly without spending a day each time figuring out authentication and billing.

I did set up a marketing site at the time so that others could use it, but I don't have any users, and I'm happy to maintain it just for my own projects (half a dozen now).

It took me 2-3 weeks to make so on net I have probably not saved much time, but it really helps reduce the friction of launching things which I think is valuable.

[1] - https://github.com/oauth2-proxy/oauth2-proxy


I wrote some PowerQuery functions and VBA macros to facilitate client invoicing that cut down ~2 hours of work to ~10 minutes (and shrinking, as I toy around with the scripts to delegate more of the work to the machine each time).

The billing data is pulled from an external vendor's portal. The contact data is pulled from our internal CRM. Both sets of data are then cleaned up and merged with PowerQuery, and then VBA is used to send emails out to clients.

I probably spent in the range of 3-4 hours getting a working version going and ~20 hours optimizing during downtime at work. I genuinely find it enjoyable to work on—there is something immensely satisfying about automating rote work away.

I use this once per month (a billing cycle). It will probably never see the light of day for anyone else, at least in its current state, because I work in a low-tech, nonprofit environment and using this kind of tool would be daunting for my co-workers (for reference, mail merging is sometimes intimidating at my workplace).


While not exactly software per-se, I created a system of multiple text files to manage todos, long term goals, and various reminders (eg, IOUs, deadlines, etc). This was inspired initially by Jeff Huang’s blog post [1] but then grew to a complex collection of different files. A problem I ran into was building an interface for displaying and editing these text files (each file has a different width and for some files I want to have different heights when editing them). Ultimately I settled on multiple vim tabs in a terminal window. Been using this for close to five years now and I couldn’t be happier with it. However, at this point the system of files (and the terminal “user interface”) is completely customized to my life and would likely never fit someone else’s requirements.

[1] https://jeffhuang.com/productivity_text_file/


I wrote an application that sends GPS coordinates to a web API that plugs into Home Assistant, which I use to open and close the garage door when I come home or leave the house. Two independent sensors confirm the action completes, and if it doesn't, the command is re-sent until it does. Another sensor detects when the door from the garage into the house is opened, and closes the garage door if it was opened by the GPS application.

The application sits on a RPi in the truck powered through the cig lighter plug. I don't know how many hours I spent on it as it was a weekend project I did when I had free time (not every weekend). I can say I got it working in about 4 months though. So however many weekends I had free from that 4 month period is about how long it took. Probably 40-60 hours though.
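
For anyone wanting to try something similar, the vehicle-side loop can be tiny. A sketch assuming a Home Assistant webhook trigger (the webhook ID, host, and GPS source are placeholders):

    import time

    import requests

    HA_WEBHOOK = "http://homeassistant.local:8123/api/webhook/garage_gps"  # placeholder

    def read_gps():
        # Placeholder: on the RPi this would come from gpsd or a GPS HAT.
        return 44.9778, -93.2650

    while True:
        lat, lon = read_gps()
        try:
            # Home Assistant webhook triggers accept a JSON body that an
            # automation can use to decide whether the truck is near home.
            requests.post(HA_WEBHOOK, json={"lat": lat, "lon": lon}, timeout=5)
        except requests.RequestException:
            pass  # out of coverage; try again on the next cycle
        time.sleep(15)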


I'm building a streetable rock crawler out of a '78 F150 with a custom radius-arm suspension in the front. I needed to measure the deflection of the drag-link and track-bar as the front axle cycled up and down while changing a few variables: mounting locations, drag-link/track-bar angles, and drag-link/track-bar lengths. I used plain Javascript to display a graph with lines representing the drag-link and track-bar, and enabled them to be clicked and dragged along with their mounting points and sizes.

With this visualization I was able to determine the best way to package this on the vehicle with the minimal amount of deflection to avoid bump-steer and death-wobble.

I suppose it would be useful to other people building radius-arm/link-suspensions that incorporate a track-bar but I haven't got around to hosting it any where.


That is really cool. I haven't built a rig like you, but I have it on my bucket list to build a fully dressed rock crawler out of a Mitsubishi Montero 2dr (also sold as a Dodge Raider in the states). It'll be leaf sprung to start, but using your method of going 4 link would be preferable to the "build and pray" method that most people seem to take.


I'm a mechanical engineer, so I'm always intrigued by visual calculations like this that exist outside the standard CAD/excel paradigm. Is there any particular reason you went this route for this application? Do you use a pre-built framework to enable rapid creation and iteration of the setup?


I went this route because I'm a software engineer that mostly works with Javascript and needed it fast. No framework but bounced a lot of the problems off of Chat GPT to help me figure out how to get it done. I also figured it would be more useful than a CAD model to non-technical folks if I made it available online.


Basically everything I have at https://github.com/hiAndrewQuinn at least started this way, before I polished it up for external use. But, no, if it really is meant for my own eyes only, it lives and dies as a shell script.


I've had many use cases for using an event broker, but never found one that was simple enough that I would venture into hosting it myself, or cheap enough to rent/host that it was feasible. Once I realized that cloud object stores fit this problem perfectly (they provide durability and are cheap to use), I realized that it would be possible to write one myself. I wrote a post on it here, along with a tiny performance evaluation: https://blog.vbang.dk/2024/05/26/seb/

I spent the better part of a week working on it full time, but spread over months. I use it daily - it's serving the needs of multiple projects that I needed it for :)


My wife (mainly - my role has been system user/devops) wrote a custom Angular UI on top of grocy for managing our household stock (basically any product we buy at grocery stores). There are two Bluetooth barcode scanners and screens which allow you to quickly check out a product when it's consumed. Since she started gardening she's also "abusing" the db with custom fields to track when she's planted vegetables, harvested them, etc...

Also over 70% of the lights / 50% of other devices in our house are smart (zigbee/wifi/ble) and are connected to home assistant (I very much recommend tradfri+xiaomi zigbee+home assistant). By now I'm sure we've spend over 200h on our "smart home/life", we love data :)


I wrote a static site generator. It was the first "real" Python app I wrote (many files, classes, etc and actively using Python features.) The code is _terrible_ in the way that code can be when you're learning a language. You can definitely see the progress of understanding Python as you see the project develop. However, it's robust and is used to generate my site right now.

It supports Markdown plus a custom template language to convert Markdown-plus-more documents into a website, which allows me to add footnotes, sidenotes, images with specific formatting, custom markers, etc. It has specific support for parsing English and splitting sentences, so each one is in its own span in the resulting HTML. This is used for specific typographic layout.
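
The sentence-splitting part is roughly this (a naive sketch; real English splitting has to handle abbreviations, quotes, and so on):

    import re

    def sentences_to_spans(paragraph_html):
        # Split on sentence-final punctuation followed by whitespace.
        parts = re.split(r"(?<=[.!?])\s+", paragraph_html.strip())
        return " ".join(f'<span class="sentence">{p}</span>' for p in parts if p)

    print(sentences_to_spans("First sentence. Second one! A third?"))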


Not for me, but for my wife. She does financial advisor marketing & advertising compliance for a financial broker dealer.

Her rules for validating compliance were stored in a local Google Doc and her advisors are often uneducated & confused about what disclosure rules are applied to what types of marketing.

So I structured and organized the rules in a UI and then built an AI tool that lets her upload any advisor marketing materials to validate them against her ruleset.

It would be rad if I could sell it to her employer as an educational & compliance tool for their advisors, but currently this is just for my wife.

It took 3 days and I built it in Vue + OpenAI api.

https://master.dnwas22p6qpn6.amplifyapp.com/


Would love to hear more about your implementation


It followed something like this:

1. Defined the project intention: organize the rules into a consumable/learnable format and create a tool to easily validate them against advisor marketing collateral. The audience is my wife, with an eye towards positioning it for her advisors.

2. Convert rules: I converted her disorganized rules document into usable rule objects, from:

> (for all tax related references/ Tax disclosure)

> [FIRM NAME] does not render legal, accounting, or tax advice. Please consult your tax or legal advisors before taking any action that may have tax consequences.

into:

    {
      "title": "Tax Disclosures",
      "slug": "tax-disclosures",
      "original_text": "[FIRM NAME] does not render legal, accounting, or tax advice. Please consult your tax or legal advisors before taking any action that may have tax consequences.",
      "summarized_text": "Consult tax/legal advisors before actions with tax consequences.",
      "parent_rule": null
    }

3. Educational portion of UI: I built and organized the rules into the UI and fought with an experimental vuetify plugin for far too long to display the rules in an easily consumable manner.

4. Rules validation tool (UI): Accepts an uploaded/dropped file and displays any rules that have been successfully or unsuccessfully validated by the backend API.

5. Rule validation tool (API): Parses the incoming file via two different extraction pathways, either text extraction from a document or base64 extraction from an image. The extracted file info and the ruleset are passed to the OpenAI API (using vision for an image) with a prompt asking for a list of all applicable rules that are either successful or unsuccessful, as well as a brief summary of the result.

6. Deployment: AWS Amplify & API Gateway/Lambda

That's about it. Currently waiting for feedback for any future iterations...


Couldn't find any CAM software which works as I want, so I've been working on implementing G-code and DXF export (so as to match tool movement) from OpenSCAD for a while now --- got a big boost when a Python-enabled version was made:

https://pythonscad.org/

and now have a pretty much workable tool:

https://github.com/WillAdams/gcodepreview

(Re)wrote it using Literate Programming techniques, probably hundreds of hours (I've been at this for years and am not a particularly good programmer) and I use it on pretty much every project I make which isn't just drawing stuff in a Bézier curve drawing program.


Not necessarily my own use, but I built a few tools (that I volunteer free for use on a perpetual license) for my day job to make life a little easier in a few departments. Nothing impressive imho, they're mostly 'basic' calculators for various parts of the wiredraw process, from basic reduction-of-area to full-blown multi-pass die calculators (all fairly basic math and algorithms). In all honesty they're the kinds of semi-basic tools that the company should already have after >50 years of continuous operation, but stubborn blue-collar workplaces can be set in their old-fashioned ways, no matter how inefficient (honestly it's somewhat nice seeing others using and appreciating something I built).


God yes, I made a personal API thing called mi that I've maintained over the last few years. It manages a few things, but the main thing it does now is repost my blogposts on a few other social channels when I publish new posts. I'm also working on having it manage the list of events I'm going to attend so that I can get reminded to create "trip report" posts.

I spend about 30 minutes a month maintaining it on average and I recently rewrote it in Go so that I could bring it back into my /x/ monorepo.

I have iOS automations that query and post to it about once every day. Eventually it's also going to handle photo uploads so that I can yeet it a photo and get the embed code for it shoved into a buffer note on my phone.


I made a macOS/iOS tournament clock for live poker games. I just wasn't happy with the few existing applications, so I built and use my own. It is client/server, so other people in the tournament can connect their devices and have the clock synchronized and shown on their own device, including phones and watches. It can also run dedicated/headless on Linux. I use a command-line client to integration-test the networking and syncing.

I never released it because 1. it's perpetually 98% done and 2. I don't feel like offering technical support for it and dealing with people who don't like it or find bugs. I may just open-source it, but then I get to be a maintainer, which is an even more thankless job.


I built a map-based app to organize and archive places I have visited and want to visit around the world. It includes routes (hikes, driving, cycling) and various markers for different spots. I use it to plan photography projects. I now have over 10k markers all over the world.

Previously this was a mess of multiple KML files with lots of dupes. Now it lives in MongoDB and has a clean interface using Mapbox and Vue. I've probably spent 150+ hours on it, and I use it for all trip planning (photography and normal travels).
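
Not the actual code, but a minimal sketch of how markers like these could live in MongoDB with a geospatial index (assumes pymongo; the collection and field names are made up):

    from pymongo import MongoClient, GEOSPHERE

    db = MongoClient()["travel"]
    db.markers.create_index([("location", GEOSPHERE)])

    db.markers.insert_one({
        "name": "Trolltunga trailhead",
        "kind": "hike",
        "location": {"type": "Point", "coordinates": [6.740, 60.124]},  # lon, lat
    })

    # Everything within ~50 km of a point, e.g. when planning a photo trip.
    nearby = db.markers.find({
        "location": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [6.7, 60.1]},
            "$maxDistance": 50_000,
        }},
    })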

It's public, but I never announced it anywhere. So only my wife and I use it, but it could be quite useful to others, I guess…

I also have some scripts for automating my tax reports. That saved me hours and only took a few to build.


I made a stupid little utility to let me calculate how long to cook something in the microwave. Most instructions are written for 1000 W or 1200 W microwaves, and mine is only 800 W.
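
The whole calculation is just scaling the cook time by the power ratio; a one-function Python sketch of what such a utility presumably does (not the actual code):

    def adjusted_time(label_seconds, label_watts, my_watts=800):
        """Time needed so my microwave delivers the same energy as the label assumes."""
        return label_seconds * label_watts / my_watts

    print(adjusted_time(120, 1000))  # 2:00 at 1000 W -> 150 s (2:30) at 800 W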

I put a little NFC chip near my microwave, so I can tap it with my phone to open it up when I need it.

I also run into various scenarios where I need to copy some text, do something with it, and check it off a list, over and over. It's not worth automating each ad-hoc thing, so I made a little webpage where I can put in all my items; when I click one, it copies the next to the clipboard, checks it off the to-do list, and keeps track of the last item copied for reference.

Nothing huge, just little helpful things like that.


I wrote a screen reader for an MMO I play that does day trading on the in-game market.

It hooks into the game's public API to determine items in my price and demand range and places buy orders every day.

Since it doesn't do any injection, it isn't a violation of the game's policies or anti-cheat. I've been doing it for years.

The items I trade in don't stack in inventory, which makes it tedious for any normal human player. But given the buying and reselling is scripted, it doesn't impact me.

I've generated a few thousand dollars worth of gold over the years doing this.

The game's marketplace is also deflationary: 15% buy and sell fees. I imagine I've burned a considerable chunk of gold keeping my operations going.
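
Back-of-the-envelope on that fee drag, assuming the 15% is added to the buy price and skimmed off the sale (not verified against how the game actually applies it):

    BUY_FEE, SELL_FEE = 0.15, 0.15

    def net_profit(buy_price, sell_price):
        """Gold kept after paying fees on both legs of the trade."""
        return sell_price * (1 - SELL_FEE) - buy_price * (1 + BUY_FEE)

    def breakeven_sell(buy_price):
        """Minimum listing price that recovers the fee-inclusive cost."""
        return buy_price * (1 + BUY_FEE) / (1 - SELL_FEE)

    print(breakeven_sell(100))   # ~135.3 gold just to break even on a 100-gold buy
    print(net_profit(100, 160))  # 160*0.85 - 100*1.15 = 21 gold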


I have a bunch of small scripts I wrote to help me out when I'm working at the terminal. Most of them I've embedded in some way in my Prezto setup with ZSH. As the ecosystem develops though, I try to abandon my own code in favor of community code that's likely better maintained.

In general, I take a different approach from most folks: if at all possible I try to avoid writing any code whatsoever; after that, I try to do it with glue code only; and I prefer scripting over compiled applications. It might be the ops/sysadmin in me, or it might just be that I'm lazy, but I really try to avoid starting from scratch and building something.


I built a TweetDeck clone for Reddit, because I wanted to be able to view multiple subreddits at a glance and I really dislike the new Reddit UI.

https://rdddeck.com/


Depends.

I have written a lot of software, but each one is done as a full-fat production library (most are SPM modules). They are of top-shelf quality, and fully open to all.

I'm my best (and only) customer, for most (if not all) of them.

That's actually fine with me. Publishing them the way I do ensures they are "fugheddaboudit" quality, so I don't have to worry about my dependencies, and I reuse them in lots of shipping stuff.

You can find links to all of them, in a couple of the orgs I manage on GH: https://github.com/ChrisMarshallNY#here-on-github


A long time ago (maybe 15 years ago) I got really bored of listening to an hour of drive-time news on the radio twice a day. Then I discovered podcasts, so I bought an MP3 player and wired it through my car stereo.

I found downloading podcasts manually a bit of a faff. So I wrote a bash/curl/xsd script to rattle through a list of RSS feeds, download new episodes and create a playlist. It ran on my laptop at home or work and plonked the result on my MP3 player via USB.

Now, years later, the script is essentially the same. Except it runs under termux on my phone, and VLC broadcasts the playlist direct to my car or home stereo via bluetooth.
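
Not the original bash/curl/xsd script, but a rough Python equivalent (assuming the feedparser library, and with a placeholder feed URL) just to show the shape of the job:

    import os, feedparser, urllib.request

    FEEDS = ["https://example.com/podcast.rss"]   # placeholder; use your own feeds
    DEST = os.path.expanduser("~/podcasts")

    seen = set(os.listdir(DEST)) if os.path.isdir(DEST) else set()
    os.makedirs(DEST, exist_ok=True)
    playlist = []

    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            for enc in entry.get("enclosures", []):
                name = os.path.basename(enc["href"].split("?")[0])
                path = os.path.join(DEST, name)
                if name not in seen:                     # only fetch new episodes
                    urllib.request.urlretrieve(enc["href"], path)
                playlist.append(path)

    with open(os.path.join(DEST, "playlist.m3u"), "w") as f:
        f.write("\n".join(playlist))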


> Just wondering if any of you has created any software that is meant for your use only and will never see the light of the day for anyone else.

Sure: For an operating system, I wanted to concentrate on exactly one, Windows or Linux. For this, that, and those reasons I picked Windows. Most important tool: KEDIT. Next most important: the Rexx scripting language. Software for my own use: console applications. The real effort: starting a business, and for that, developing a web site. Got the code -- Visual Basic .NET, ASP.NET, ADO.NET -- running as desired. Then I had a disaster; I'm recovering, and spending too much time in system-management mud wrestling and unanesthetised root-canal procedures, but getting back to the important computing and the business.

So, I have ~100 each of Rexx scripts, KEDIT macros, and TeX macros. I will be moving back to working with Visual Basic .NET.

I just wrote a Rexx program of ~1200 lines of typing to do -- are you sitting down for this? -- file copying. WHYYYYY????? I did some copying that should have copied 99,024 files, but actually got only 41,462. Disaster.

So, wrote a Rexx script with three sections:

First, check and be clear on what copying I want to do, and use Windows XCOPY, with carefully selected options, to do the copying.

Second, use Rexx function SysFileTree to get a list of the files/directories that should have been copied and then, in a loop, one name at a time, use Rexx function SysFileExists to be sure the copying was done.

Third, one file at a time, again from SysFileTree, use the Windows program FC.EXE (file compare) with option "/B" for binary, to check that every file was actually copied correctly down to the last bit.
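
Not Rexx, but the same three checks fit in a short Python sketch (copy, verify existence, verify contents byte-for-byte); paths and structure here are illustrative:

    import filecmp, os, shutil, sys

    src, dst = sys.argv[1], sys.argv[2]

    shutil.copytree(src, dst, dirs_exist_ok=True)           # section 1: copy

    missing, different = [], []
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            if not os.path.exists(d):                        # section 2: existence
                missing.append(d)
            elif not filecmp.cmp(s, d, shallow=False):       # section 3: byte compare
                different.append(d)

    print(f"missing: {len(missing)}, mismatched: {len(different)}")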

My software does some good checking on the work to be done and some good reporting on what was done, so that now, if something goes wrong, e.g., the case where only 41,462 of 99,024 files were copied, at least I will know there was a problem.

Right, tools for such things should have been rock solid 20+ years ago, and I shouldn't have to do this. But getting only 41,462 out of 99,024 provided a "reality check", so I wrote the code I did.


I spend a lot of time listening to music and have a large collection of MP3s. I never found a piece of existing software that works exactly as I want it, so I created my own web app for hosting my library during the pandemic.

It works great! Over time I've added all the features that I care about (powerful playlist rules, easy methods for shuffling my music, detailed stats about my listening, tools for managing the MP3 library, etc.). I've learned a lot on the way. As I'm sure everyone here can relate to, it's quite freeing to build something that's only for you. So many concerns go out the window.


I "smartified" my Actiforce standing desk.

Originally, I found the up/down buttons to be mushy and unreliable to the point that I got fed up. I just wanted to 3D-print a new case to get nice metal buttons, but I found out the desk supports a smart controller.[1] So I reverse-engineered the connection just enough to hook an ESP32 and buttons in parallel (either can control the desk) and add up/down actions to Home Assistant via ESPHome. And my buttons have RGB LEDs for notifications. 8)

[1] If I were a company, I would have bought the controller right away because their solution is certainly less janky than mine.


I made a scalable blackjack game simulation with a configurable decision matrix, bet scheme, and card-counting rules (one simple config file).

The software isn't novel, so I did it for myself, both as a personal proof of concept from my blackjack years and as an exercise to poke at corners of my C++, Google Mock, and Google Test knowledge.
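
Not the C++ core, but for flavour: the card-counting piece of such a config usually boils down to something like the common Hi-Lo scheme. A Python sketch (assumed, not taken from the project):

    # Hi-Lo values: 2-6 count +1, 7-9 count 0, tens/faces/aces count -1.
    HI_LO = {**{r: +1 for r in "23456"}, **{r: 0 for r in "789"},
             **{r: -1 for r in ["10", "J", "Q", "K", "A"]}}

    def true_count(cards_seen, decks_remaining):
        """Running count adjusted for how many decks are left in the shoe."""
        running = sum(HI_LO[c] for c in cards_seen)
        return running / max(decks_remaining, 0.5)

    print(true_count(["5", "6", "K", "2", "A"], decks_remaining=5))  # (1+1-1+1-1)/5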

It's phase-0 core tech, though, for a video game I'm making that I really, really hope I make time for this year.

I'll follow up if I manage to get a demo available before the end of summer. After my sprint in the career hustle over the last 7 years I need to do some self-care, and I feel this is the outlet.


A couple of questions; feel free to correct my assumptions, as I've only ever played blackjack casually at a casino after a conference.

- I'm guessing the configurable decision matrix is only of use because of the card counting, right? I was under the impression that there's already a "perfect" blackjack strategy (the casinos even give it to you on an instructional card, because they keep their house edge anyway), assuming standard rules and no counting whatsoever.

- Is the bet scheme for the Kelly criterion, so you can figure out how long on average your bankroll would last given a certain amount of counting?

- is this a way to figure out how to play to maximise your drinks comps in Vegas given a certain bankroll? :-) /s

Looking forward to seeing your videos posted.


I made a Google Calendar program for the Inkpad 6color eInk display. Because of the small number of these displays, I doubt anyone else is using my code, and that's OK.

It's Arduino-based, and it turns out the iCal format has a lot more complexity than I'd guessed, so there are still bugs and it incorrectly shows some events, but it mostly works for me. I've worked on it very intermittently over a few years and find it quite cathartic.
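
For anyone tempted to try: even outside Arduino, reading one event already means juggling nested components, date-vs-datetime starts, and RRULE recurrences. A quick Python taste of the format, assuming the icalendar library (not the code running on the display):

    from icalendar import Calendar

    with open("basic.ics", "rb") as f:       # e.g. a Google Calendar export
        cal = Calendar.from_ical(f.read())

    for event in cal.walk("VEVENT"):
        start = event.get("DTSTART").dt      # may be a date *or* a datetime
        rrule = event.get("RRULE")           # recurring events need expansion
        print(start, str(event.get("SUMMARY")), dict(rrule) if rrule else "")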

It's cool having my code hung on my wall. https://github.com/jonquark/InkyCal


When I was looking for cars recently, I was frustrated by the inventory search on the manufacturers' websites. Subaru, for example, made it difficult to search a wide area for its vehicles. I wrote a full-stack application where I reverse-engineered the public APIs and did the inventory search myself. I also made a web app to go with it to display the vehicles in a card format that gave me the important info I needed in an eye-pleasing layout. I stopped development once I found my car.

I didn't spend a ton of time making it. It was honestly a lot of fun and I got to work out my development skills again. I really enjoyed doing this kind of work, to be honest.

I did use ChatGPT a bit to help rationalize some things as I was doing development.


I built https://maggick.fr/Spotify_RAS/ specifically for my own needs.

The tool selects albums from my Spotify collection and adds them to my playing queue. I use the tool daily (I queue up 6-7 albums in the morning and I'm set for the day with a few hours of music).

It's just some Spotify API manipulation in TypeScript, and it didn't take that long to code (except some UI editing and tweaking; I'm not good with front end :D). I'm trying to open the application to everyone, but the Spotify approval process is quite long.
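
The tool itself is TypeScript; purely to illustrate the idea, here is a rough Python sketch using spotipy (an assumption on my part, not the author's stack) that queues the tracks of a few saved albums:

    import random
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    # Needs a Spotify OAuth app and an active playback device.
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        scope="user-library-read user-modify-playback-state"))

    albums = sp.current_user_saved_albums(limit=50)["items"]
    for item in random.sample(albums, k=3):      # pick a few albums for the day
        album = item["album"]
        print("queueing", album["name"])
        for track in album["tracks"]["items"]:
            sp.add_to_queue(track["uri"])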


I was upset with YouTube ads, so I made my own client. It's a Vue frontend that talks to a Supabase backend. The database has a list of channel IDs, and a serverless endpoint queries the YouTube API for a list of videos, which are displayed as embedded videos in a chronologically ordered list so as to avoid ads. It could be configured for others to use, but I haven't created a signup page, and I want to avoid YouTube API rate limiting, so I'm the only user for now. It took me a day to make and I've used it daily for 3 years.
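
Not the Vue/Supabase code, but the underlying YouTube Data API call is roughly this (Python sketch; assumes an API key in YT_KEY and ignores quota concerns):

    import os, requests

    def latest_videos(channel_id, max_results=10):
        """Newest uploads from one channel, ready to render as embeds."""
        r = requests.get("https://www.googleapis.com/youtube/v3/search", params={
            "key": os.environ["YT_KEY"],
            "channelId": channel_id,
            "part": "snippet",
            "order": "date",
            "type": "video",
            "maxResults": max_results,
        })
        r.raise_for_status()
        return [{"title": i["snippet"]["title"],
                 "published": i["snippet"]["publishedAt"],
                 "embed": f"https://www.youtube.com/embed/{i['id']['videoId']}"}
                for i in r.json()["items"]]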


This is genuinely cool, but, just curious, why not use an ad-blocker like uBlock?


Thanks! uBlock works great when I’m on the PC, but not so great on my iPhone unfortunately. I’ve tried running Pihole and AdGuard servers too, but they don’t do well against YouTube ads. The small additional benefit of having my own UI is that I’m not distracted by channels that I’m not subscribed to, and just presented with a chronological list of videos from the channels I want to follow.


The static site generator for my website: http://beza1e1.tuxen.de/gen.py

A news bot: https://github.com/qznc/mrktws-news (the output is public, does it count?)

A TiddlyWiki server: https://github.com/qznc/tiddlywiki-py

Such stuff usually costs me a few frantic evenings to build the first version and then minor maintenance.


Do scripts count? I own 100+ .id domains (generic names like player.id, awesome.id, and sweet.id), and owning a lot of domains is expensive both in money (renewals) and time (maintenance). So I became a registrar reseller to get better prices and API access. I wrote some Frankenstein scripts that cobbled together those APIs, GitLab's, and Netlify's to automate publishing through CI/CD to those 100+ domains.

At the end of the day, I gave up and just put everything in Dan. Apparently, it's more lucrative to just sell them.


I have a webapp for tracking chores that need to be done periodically (but not on a fixed schedule). It just lists all chores sorted by how due they are (period / time since last completion) and surfaces a short history of who did what. I've rebuilt it a few times and played around with extra features, but the basics didn't take long.
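
One plausible way to score "how due" a chore is, shown here as elapsed/period (the reciprocal of the ratio above, so bigger means more overdue); the actual schema and formula are surely different:

    from datetime import datetime, timedelta

    chores = [
        {"name": "descale kettle", "period": timedelta(days=90),
         "last_done": datetime(2024, 1, 5)},
        {"name": "water plants",   "period": timedelta(days=7),
         "last_done": datetime(2024, 3, 20)},
    ]

    def dueness(chore, now=None):
        elapsed = (now or datetime.now()) - chore["last_done"]
        return elapsed / chore["period"]   # 1.0 = exactly due, >1 = overdue

    for chore in sorted(chores, key=dueness, reverse=True):
        print(f'{dueness(chore):5.2f}  {chore["name"]}')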

I have a utility (dklocs) that'll turn git-formatted diffs into location-prefixed changed lines. It lets me do things like `git diff master | grep thatThing | vim - -c cbuffer!`. I built it for me (and probably use it multiple times a day) but threw it on github (along with another tool for intersecting a diff with code coverage that I don't think I've used since the motivating use case - where it did prove helpful).
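
Not dklocs itself, but the core transformation is small; a Python sketch that turns a unified diff on stdin into "file:line: text" for the added lines (usage like `git diff master | python difflocs.py | grep thatThing`):

    import re, sys

    path, new_line = None, 0
    for raw in sys.stdin:
        line = raw.rstrip("\n")
        if line.startswith("+++ b/"):
            path = line[6:]                                 # new-file path
        elif m := re.match(r"@@ -\d+(?:,\d+)? \+(\d+)", line):
            new_line = int(m.group(1))                      # hunk start in new file
        elif path and line.startswith("+") and not line.startswith("+++"):
            print(f"{path}:{new_line}: {line[1:]}")         # added line
            new_line += 1
        elif path and line.startswith(" "):
            new_line += 1                                   # context line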

I don't know if it quite counts as "creating software" but I've got some scripts and configuration to maintain separate context in named screen/tmux sessions, setting shell variables and aliases, adjusting the prompt, starting in some particular directory. Most useful there is segregating history by context, so ctrl-r when I'm coding doesn't step through system administration stuff.

I have a general project to decompose applications into utilities, and a pattern for dealing with the long-running bits that's been somewhat successful. I once wrapped up libpurple in a client that worked that way, and (in a separate project) it's how I get errors from `cargo watch` into vim.


I made a CLI tool called kilojoule that is similar to jq. In addition to the normal suite of JSON manipulations, it also has support for a couple of other file formats and can call other shell commands.

https://github.com/stevenlandis/kilojoule

I've found it to be a pleasant multi-tool for interacting with the shell when I need something a little more than bash.

It took a couple of weekends and evenings to get working but was a really fun way to learn about parsers, interpreters and Rust.


Kinda lame compared to what other people post here, but this is what probably 90% of the "I just use it myself" software is IRL:

- The most "impressive" one right now, though barely realized since I only had the idea this week, is a debug/log library that spawns multiple child processes with consoles attached (since you can allocate only one console per process on Windows) and allows for separating debug info streams and rich-ish text features like color.

- I started writing a custom "set of commands" language (90% I/O, tied to specific hardware modules, no algebraic functions or anything, so not exactly a programming/scripting language) specifically for, sort of, programming-illiterate people, so that they could easily implement test algorithms without asking software devs to do it. It got nowhere because I didn't have time to implement it.

But a high-level hardware emulator I'm using to debug the GUI in a production-ish environment uses whatever is left of it.

- Environment management software that assigns PATH and other variables. Those commonly ship with software, but I haven't really seen people write their own for general env management.

- My friend couldn't get Sony Vegas to produce good chroma-key results, so he cobbled together some JavaScript to do it.

- I've built a Python notebook to analyze my bike rides, combining Garmin watch data (GPS + HR) and OSM data (estimating speed vs. power vs. road conditions).

- Since (last I checked, about a year ago) the CMake devs value strict JSON adherence over readability in CMakePresets, I've written some scripts that convert JSONC to JSON. They can also convert string arrays to strings and add other QoL and readability improvements.

- Also, since the CI/CD pipeline at work uses a pretty old CMake that doesn't support presets and is a ton of work to upgrade because it's air-gapped (I will not elaborate), I've written a script that converts a preset into a shell script. Not completely, just the options I use.

- I guess also a lot of custom diagnostic software I've written at work, though that doesn't really count since it's what I'm actually supposed to be doing, instead of most of the above and general software shenanigans.


I have a very simple AutoHotkey script that remaps [h,j,k,l] to [left, down, up, right] when the LeftAlt modifier is pressed. This prevents me from having to switch to the arrow keys all the time.

I use it for navigating the list of browser search suggestions, or for quickly moving around any program that doesn't have vim bindings.

I've found it to be especially helpful while using the https://www.desmos.com/scientific calculator.


That's brilliant, very clever and practical use of remapping keys!
