Along similar lines, I've adopted a hyper-frequent commit pattern in git. I do a bunch of meaningless micro-commits as I'm making progress, and then rewrite them all into one or two meaningful commits once I've reached a working state of whatever I was trying to do.
I find it helpful for not losing work and for easily backing up if, as I'm going along, I realize I want to change approach.
(For the micro commit I have a git command "git cam" that just commits all changes with the message "nt". Then once I'm ready to do a "real commit", I have "git wip" which rolls back all the nt commits but leaves them in checkout; then I can make one or two "real" commits.)
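As plain aliases, that might look roughly like this (a simplified sketch, not the actual scripts, which call other helpers):

```sh
# Hypothetical stand-ins for "git cam" and "git wip":
git config --global alias.cam '!git add -A && git commit -m nt'
# Pop every consecutive "nt" commit, leaving the changes in the working tree:
git config --global alias.wip '!while [ "$(git log -1 --format=%s)" = nt ]; do git reset --soft HEAD~1; done'
```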
I wonder if dura would be even better, or if the commit frequency would end up being too fine-grained and obscure?
What you're describing is how git should be used. I would add it's important to push your branch to the remote just in case something happens to the local copy. I tend to squash/rebase to get the same type of results, but I can't imagine not saving work regularly or being concerned with the commit history while I'm actively working.
I usually commit often locally and push to remote right away. Then when I want to open up my PR, I use `git reset --soft <target>` where target is the local version of the target branch in my PR. That resets all the commits, but keeps all the changes in the staging area, and then I can clean up my history. Then I force push to override what's there.
This works well for my workflow because we squash all commits into target branches and don't really rely on commit history auditing during review. I understand that's not the case everywhere, but works for me.
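Roughly, with `main` standing in for the target branch:

```sh
git reset --soft main            # drop the micro-commits, keep changes staged
git add -p                       # optionally re-stage in logical chunks
git commit -m "Clean, reviewable commit"
git push --force-with-lease      # replace the WIP history on the remote
```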
I just gave this workflow a shot and it was great. Thanks for the tip! Do you use anything for committing more granular chunks of code? I'm just committing groups of files but after reading about magit on other hn threads, I feel like I could do better.
different strokes for different folks. I like to use my staging area as "I want to save this" until I get to a logical stopping point in my work. Then I commit it with its descriptive message. This way I can diff/reset against master while making progress, and show a nice clean progression of my work for whoever reviews it.
also, sometimes I just lump the whole thing into a PR because there isn't more than one logical unit.
You can still do what you want if you commit every 60 seconds (or whatever). It's just about postponing any and all cleanup activity until you're ready to share it with reviewers/the world.
(... but of course, by all means do what works for you. Just be aware that you might be missing out on something because you're artificially constraining your workflow.)
I don't mean 60s literally. I mean arbitrarily small commits at your own convenience.
You don't need to have even remotely working code when you commit, is the point. You just commit whenever. (It's almost like saving files, just commit.)
I think the word 'commit' might have been a mistake, now that I think about it. Maybe 'snapshot' would have been better.
I use the local history in Jetbrains/IntelliJ/PyCharm all the time. Can use it on a current file, or even mark a folder if you accidentally deleted something.
It annotates the checkpoints with metadata as well. Like "this is how the file looked when you ran a test that failed".
Clearly, my JetBrains IDEs have paid for themselves multiple times by saving me from « oh shit » situations. Best investment my employer made without ever knowing it :D
Local history saves my bacon about once a month. It's incredibly helpful as it lets me adopt a fearless refactoring approach knowing that I can always get back.
I'll just throw out there that ever since I picked up Doom Emacs and the associated Magit, I have been doing the same thing and loving it. I commit every time I finish writing a logical scope of code, then push and know that everything is there in case anything happens. It has also made my commit messages much more descriptive, as I'm now actually able to describe exactly what the commit has done beyond "Added feature X, refactored function Y". Big fan of the continuous commit workflow.
I've tried that as well. For me it was really difficult, as I look at the current changes quite a lot, and it makes it much harder to grasp the work I've done so far without a meaningful change set.
You can use separate dev/release branches and do something like "git diff master". (If you don't git push, you don't even need a separate branch; git diff origin/master works, but then you lose half the point of frequent commits.)
I do something similar but a little more manual than your solution. I `git commit -am "WIP"` to save random, odd-ball intermediate working states. Occasionally the commits get real messages but I try not to let it interrupt the flow.
Then when I'm ready to commit or cut PRs, I just squash them all down if it's trivial. If it's a bigger change: I push things to a backup branch, `git branch Branch-BK`, reset to a base commit, and use difftools to pull over the subset of changes I want and commit them repeatedly until there's no diff left.
I have a `git snap` command that's similar to your `git cam` command with a small twist. The commit is added to a special snapshots branch that isn't checked out. I push the snapshots branch and don't rewrite it, but rather keep it around as an accurate chronological record of things I've tried, warts and all. I also like that `git diff` continues to show the changes compared to the last "real" commit this way.
Edit: I guess my script is somewhere between your `git cam` command and Dura in terms of functionality and complexity.
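For the curious, committing to a branch that isn't checked out can be done with plumbing commands, along these lines (a simplified sketch run from the repo root, not the exact script):

```sh
#!/bin/sh
# Snapshot the working tree onto refs/heads/snapshots without touching
# HEAD or the real index.
export GIT_INDEX_FILE=$(mktemp)
git read-tree HEAD                     # start the temp index from HEAD
git add -A                             # stage everything into the temp index
tree=$(git write-tree)
parent=$(git rev-parse -q --verify snapshots || git rev-parse HEAD)
commit=$(git commit-tree "$tree" -p "$parent" -m "snapshot $(date)")
git update-ref refs/heads/snapshots "$commit"
rm -f "$GIT_INDEX_FILE"
```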
I do something similar, but with larger intermediate commits than yours and more meaningful messages. Then at the end, I do an interactive rebase, squash the ones I don't care about, and reword the final commit message based on the hints I left myself in the squashed ones.
I do a combination of the two: the first commit gets a meaningful (but still draft) message, and their follow-ups are all committed with "." as a message - but only until "switching gears", i.e. until a commit comes that is logically separate from the bunch before it. Those commits that have messages then provide logical milestones for squashing.
This breaks down sometimes if you have to switch back and forth between different parts of code, breaking the linear sequence. But even then, the messages make it easier to connect the pieces when it's time to clean up history before the pull request.
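The cleanup itself is just an interactive rebase (with `main` as an example base):

```sh
git rebase -i main
# In the todo list, change "pick" to "squash" on each "." commit so it
# folds into the milestone commit above it, then reword the survivors.
```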
(I just use external scripts rather than git aliases because I find them a little nicer to work with; git has a feature where, if you enter "git foo", it looks for a command "git-foo" on your PATH to execute.)
Yep, exactly. My terminal autocompletes to previous commands, so it's pretty easy to get to 'git commit --fixup HEAD', likewise for a rebase with --autosquash.
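Spelled out (assuming `main` as the base branch):

```sh
git commit --fixup HEAD            # record this change as a fixup of HEAD
git rebase -i --autosquash main    # todo list arrives pre-arranged for squashing
```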
This workflow sounds similar to the one we use at my company! I use the git-ps tool we made to make stacks of usable micro-commits, and using the tool to keep things logical and working as things change and the code develops. https://github.com/uptech/git-ps
Have you considered using `git commit --amend --no-edit` after making your first commit? It simplifies the unwinding step.
This is pretty much my workflow, too. I’ll make some changes, `git commit -m wip`, and then `g can`. When you’re ready to prepare the PR/diff, `reset HEAD^`. Then, a few cycles of `add -p`, `commit -v`.
`commit --amend` and `add --patch` are super powers!
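The whole cycle, assuming `g can` is an alias for `git commit --amend --no-edit`:

```sh
git commit -am wip                 # first snapshot
git commit -a --amend --no-edit    # fold further changes in ("g can", assumed)
git reset HEAD^                    # unwind when it's time to prepare the PR
git add -p                         # stage hunks interactively
git commit -v                      # write a real message with the diff in view
```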
It's a handful of commands because git-cam and git-wip referenced other little utility scripts, so hopefully I got them all. It would probably be easy to rewrite them to be standalone.
I'm on a mac, and I have ripgrep installed as "rg". Ymmv, glhf :-)
So:
"git cam" commits everything with message "nt"
"git wip" undoes all the nt commits but leaves the results staged, ready to be commited as a single properly worded commit (or play w/ what's staged and do as several commits)
This is also my pattern. To further assist with this, I wrote a short(ish) rebase script intended to be run when you want to squash your series of commits, also bringing your local branch up-to-date with the upstream. It relies on your initial commit in your feature branch having a commit message which corresponds to the branch name, but that's it. This does a great job of minimising unnecessary merge conflicts, even after working offline for an extended period.
I would like to adopt this approach. I simply forget to commit until I’ve wrapped up something big, but I’d like to commit more frequently so others can see my work. Is there something that will remind me to commit, esp. in VSCode?
> Along similar lines, I've adopted a hyper-frequent commit pattern in git. I do a bunch of meaningless micro-commits as I'm making progress, and then rewrite them all into one or two meaningful commits once I've reached a working state of whatever I was trying to do.
Aren't you describing a feature branch? That frankly sounds like git 101.
Nope. A feature branch is still expected to have meaningful commits, and is usually used for collaboration between several coders. Its history doesn't get rewritten.
What OP describes is a temporary work branch that belongs to a single person, and has a bunch of meaningless commits. So nobody else should be using it - or if they do, they need to sync with the owner, since the latter can squash or otherwise mutate commits at any time.
I think what you're doing is better because it's more explicit. I feel like Dura is yet another tool for people that don't know, and don't want to learn, Git.
Eh. I do tens of commits and then squash into 1 or a few, sometimes by resetting back and a few rounds of add --patch, and sometimes by interactive rebasing.
But I can see times where Dura could be kind of nice. When I'm doing CAD or other not-very-source-code things, having a few snapshots grabbed along the way sounds nice. Having to stop and git commit intermediate states feels a little too mode-switchy to me.
JetBrains "local history" for IntelliJ IDEs has saved me several times. It has all of the diffing tools that are available for git commits. This looks to be a generic implementation of that. We should not live in a world where unsaved data is lost.
1. dura runs as a daemon while git-sync-changes is a one shot execution.
2. dura saves locally, while git-sync-changes syncs with a remote repo.
3. dura only does the save and the restore is manual, whereas git-sync-changes does both steps automatically.
I’m glad to see more people exploring this space. I think there’s a lot of untapped potential in tracking pending changes similarly to how we track committed changes.
I wrote it because I wanted to have a complete snapshot of a build context. Sometimes composer or npm can't be relied upon to reproduce dependencies in the state they used to be, or I just want a cache of artifacts. It has been pretty handy.
It's kind of insane that in 2022 we're still dealing with "save early, save often."
Our tools are so antiquated. Storage is cheap and computers are fast, every keystroke should be persisted somewhere I can recover from rather than having to manually save and commit works in progress.
Well, it's funny because the Apple stuff works like this, but not Xcode.
I pretty much never save anything with Pages/Numbers/TextEdit. I just quit.
Not only do I not lose changes, I don't lose revisions. I can go back to older versions. And that's not including Time Machine. It's simply "built in".
From a user experience, it's really wonderful and no stress. I don't even think about it. At the same time, I have no idea where these extra versions are stored but, honestly, I don't care.
I do wish other applications worked similarly. Source code is tricky, but it probably wouldn't be awful to have a similar experience.
Vim has had an undo tree for 10 years (or longer?) [0], and there are plugins (e.g. [1]) that make it very easy to go back in history and also to explore different branches of your undo/edit history. Dura will not track changes that are not saved to disk, IIUC.
Saving a log of every keystroke is basically what a CRDT for an editor does today. We really just need to make local editors behave more like Google Docs and de-emphasize use of the save button (everything is always saved at all times).
A lightweight way to accomplish this is to at least set up frequent autosaves in your editor. I had `au FocusLost * :wa` in vim to save all buffers to disk whenever it loses focus. Now that I've converted to the church of VS Code (with vim bindings, of course), there's an "Auto Save: onFocusChange" config option to do the same thing. I don't know how people live without it!
You don't always want to save to disk though, you want to save in a consistent state. Vim allows you to set up a persistent undo that will let you load the modified buffer from a backup file without touching the original until you're ready. Or undo back to the saved on disk version. Or undo even further to previous versions. That's true persistence.
> It's kind of insane that in 2022 we're still dealing with "save early, save often."
Those of us who don't code in autosynced folders, that is. There is tons of software (IMO better than the approach in TFA) that has solved this problem for years now. Dropbox or Google Drive if you trust the cloud. Unison or looping rsync or syncthing if you don't.
It's rare, but I have lost history once or twice, possibly after a computer restart. It's great when it works (which is almost always) but not foolproof.
Navigating this history might be a challenge. But I agree: my disk is filled with dozens of random Docker images, yet a few megabytes of diffs go unstored.
LOL — author here — I definitely didn't intend it that way, but it does kind of jive with "Git's" other meaning. I should have known that all 4-letter words are an insult in some language.
I had originally named it "duralumin" after a magical metal in [a novel that I'm reading](https://www.amazon.com/Well-Ascension-Mistborn-Book/dp/07653...). I shortened it to "dura" after realizing that I can't even remember the name, so there's no chance anyone else will. Plus it has that "durable" vibe to it, which seemed appropriate.
As a Russian speaker, I would say that I feel our swear words become truly offensive when they are explicitly targeted at a person. "Dura" is also not considered to be an expletive, and I have not heard it being used in its original meaning after I finished 5th grade. Pronunciation in Russian is also different, word sounds like "doo-ra".
FWIW the same word "dura" may also be used as a slang word for a large and unwieldy inanimate object.
Sometimes the word "dura" has the meaning of "something big and of an intricate nature," i.e., just a synonym for "stuff." For example, "положи эту дуру в шкаф" (put this stuff in the closet).
Not sure if it was obvious to you, but years after reading the novel I found out it’s not just a ‘magical metal’, it exists and is/was used in aircraft construction.
Not to mention Latin. The French equivalent is "dur[e]". The membrane surrounding the brain is "dura mater" to an anatomist, which is Latin for "hard mother".
I thought most every Russian was over it after laughing for a bit about the last name of the VKontakte founder, Pavel Durov. At least I didn't make this association immediately when I saw the name of this project.
It's pretty hard to name something without having it accidentally stand out in one of thousands of languages (or even a few major ones). I wouldn't read too much into the intention.
I’ve had this idea for about 10 years - it really pays off to wait and eventually someone will build all my ideas :)
There are many things to build on top:
- Squash history so that it doesn’t grow indefinitely.
- Be somewhat language aware to provide better commit messages
- IDE integration
Fossil has no such feature. Fossil sync synchronizes the remote and local saved state (checked-in/saved state only). It does not do anything with un-checked-in state.
That said, you can use fossil's stash to periodically take a (non-sync'd) snapshot using something like 'fossil stash snapshot -m "snapshot @ $(date)"' (i have that aliased and use it often). That does not enter the SCM history and is lost if you destroy the checkout dir.
sgbeal and I were doing some fossil dev work ourselves (I’m personally not at his level of fossil-fu, but am a long-time user and contributor). Our work was in a fossil repo (he in Europe, me in North America) and we were using the chat[0] feature to discuss our work when we noticed and discussed the GP post. Fossil has been self-hosting for ages; now is it self-correcting? /s
My biggest question after looking at the readme: What happens if your computer crashes while dura is making a commit? Can it corrupt your local git repository?
From my own experience, git is not crash safe, i.e., it can leave the .git directory in an inconsistent state if the computer crashes during certain git operations.
This is an interesting concept. I'd think it would need some kind of system tray icon to be aware if it stops running, otherwise, it might provide a false sense of security and you could lose work because you thought you were safe but Dura actually crashed three days ago. It also probably needs some sort of automatic syncing to a remote repo, so it isn't affected by spilling your coffee on your laptop.
Yes! I'm the author and this is the next feature I was planning on adding, I was even planning on naming it `dura status`. First I need to get better logging (to a JSON file), and then expose it via `dura status`. It occurs to me that having all that data could tell me a lot about how I work, so it could unlock some very interesting usages of dura.
Would you mind creating a Github issue? The project could benefit from more discussion around this.
I love this. One of the biggest reasons people don't frequently commit their code is fear of "polluting" their feature branch. Automatically creating and pushing to backup branches is the best of both worlds.
This feature is built-in to all the JetBrains IDEs. Right-click your project, open Local History, and you can diff any file at any two points in time. That's saved my bacon more than once.
I used to run a simple rsync script that copied all my active projects to an external drive every 15 minutes. I figured that if I had a major issue, I'd only lose a limited amount of work, which I could probably re-create without too much trouble.
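Something that small fits in a single crontab entry (paths here are examples):

```sh
# Mirror active projects to an external drive every 15 minutes:
*/15 * * * * rsync -a --delete "$HOME/projects/" /mnt/backup/projects/
```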
Lately, I've been using private branches in the early stages of feature development, but you still have to remember to push in case of hardware failure. I also rely on my IDE's local history to get back to a good place if I need it.
I wonder if it would be good to combine these ideas: commit and automatically push every N minutes. Is this something that's being considered for Dura? Or is it a bad idea?
One big challenge must be avoiding commits when someone's right in the middle of typing, although having to stitch a couple adjacent commits together would definitely be better than losing work.
Automatic commits is generally not a good idea. Commits are supposed to be meaningful chunks of work, and meaning requires manual effort.
Automatic pushing is also probably not great. If it's just a backup mirror of some kind maybe, but otherwise you should be doing something like intentionally pushing what you're trying to share.
I don't really think that backups should be tied to git. There's already good backup software, wiring it into git doesn't seem to add anything.
For regular commits, sure, but for snap-shotting your work, I think it's fine. The backup branch would never be shared with anyone else, as you'd either push it to your own workspace/fork or to a clone on a mounted disk.
For this type of ephemeral backup of code-in-progress, I think storing it in git would be really convenient, because you'd just use standard git commands to find what you're looking for without having to deal with another tool.
Convenience could make it worth it. I can't say I'm all that convinced though, because you'll have to learn new concepts (and likely new commands, unless you're a git guru) about the backups anyway.
I always wanted a utility to run in the background, look for changes, run unit tests, and if they pass automatically do a side commit noting it. This looks close.
There is also the `undofile` option, which stores your undo-history permanently on disk. You can also go to an earlier version of the file from 4 hours ago with `:earlier 4h`.
Vim doesn't do it "well" with the settings above, just "good enough". The time stamp suffix is calculated just once when you start Vim, set in the global backupext option. If we :e edit files in an existing Vim session, they all get backups with that time stamp. It needs to be calculated as a buffer-specific value of that option, if there is such a thing, on every new edit.
I don’t understand these backup tools which masquerade as VCS extensions.
I want my VCS commits to be intentional. If they’re not then I’m just using a fancy VCS tool[1] for backup—just set up a _dumb_ backup routine that backs up every X interval and doesn’t have to care about the intentionality behind any changes made (the domain of version control).
And if you get into a situation where you haven’t committed for such a long while that you risk losing “days of work” then… seriously? You might want to get a better Git client where committing isn’t such a pain (IMO the CLI is too much typing for git(1). Magit is great).
And as already mentioned there is the reflog. Which is difficult enough for me to navigate sometimes without some pseudo-backup tool making a mess (on top of _my_ mess…).
[1] Git might be “stupid” but it’s also quite fancy.
A lot of people seem to regard version control as an adequate substitute for an automated offsite backup. This strikes me as a dangerous habit. There are too many gaps between "what I want to commit to a repo" and "what I would hate to lose in the event of a total system failure".
It's better to keep the two things separate. I'm happy with Backblaze on all my machines, although they have a disturbing habit of making unannounced changes to their default exclude filter, which tripped me up (I knew about the VM-image excludes, but adding .git directories nearly lost me data). You can override these filters, but they really shouldn't be changing them without clear warning.
> There are too many gaps between "what I want to commit to a repo" and "what I would hate to lose in the event of a total system failure".
You’re thinking about commits wrongly. You should commit all the time, make liberal use of branches, and push branches to a remote for backup. Then, you should form your “permanent” commits from that work. The thing is, you have to get comfortable with git. For example, one common move (sketched in commands after the list) is:
1. Commit everything
2. Make a temp branch (remember that “branches” are just labels for a commit; they cost nothing)
3. Switch back to the first branch
4. Use git reset to uncommit the last few commits
5. Clean up the code and commit it properly, with a good commit message
6. Run tests and check the diff from your temp branch to check you didn’t make a mistake when cleaning up
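The same six steps as commands (branch names and the commit count are examples):

```sh
git add -A && git commit -m wip   # 1. commit everything
git switch -c temp                # 2. a temp branch is just a label; it costs nothing
git switch feature                # 3. back to the working branch
git reset HEAD~3                  # 4. uncommit the last few commits (adjust the count)
git add -p && git commit          # 5. re-commit cleanly with a good message
git diff temp                     # 6. an empty diff means the cleanup lost nothing
```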
This tool Dura is totally unnecessary for people who are comfortable with git.
Yes, I think I'll stand my ground. I'll address your points below. There are two camps here:
1. A large number, possibly a majority, of people who aren't yet familiar enough with git. Those people think of commits as being final: a commit is what you will push to the remote, your colleagues will see, and gets stored in public git history forever. They haven't yet learned that branches cost nothing, how to work with temporary branches, and how to create a new series of commits from already-committed work.
2. People who know git well and are the complement of the above set in all ways.
I'm sure you're right that backup systems have their place, but your comments here about how git shouldn't be used for backups are aligning with the camp (1) folks and making it harder for them to see that git can be used to save their work frequently, once they learn git better. Ultimately, positions like the one you're taking contribute to people developing misconceived tools such as TFA.
> 1. everything else on your system (potentially a lot of stuff, depending on your workflow)
Sure -- backing up random files is good. I use google drive for that personally.
Yes, I don't worry about this. .gitignore is itself under git's control, so whatever's gitignored for me, is gitignored for my colleagues, therefore the project works without standardized content in those file paths.
> 3. changes not yet pushed
What reason is there ever to not push a branch that contains work you wouldn't want to lose? Planes and other no-network situations is one: I just push as soon as I'm on network. I never leave valuable work unpushed if I have a network connection.
> 4. branches that don't exist on remote
Why haven't you pushed them?
> 5. git stuff that's invaluable for disaster recovery
Not sure what you mean here
> 6. git stashes
Yes, good point to raise. Store work you're not prepared to lose in branches, and push them. You can stash it as well if that's convenient.
You probably know this, but the biggest hurdle newcomers to git face is they don't understand that branches are cheap, and they don't understand how easy it is to undo changes by using temp branches with `git reset --hard` and `git checkout`. Rather than telling them to supplement git with a backup system, it would be better to teach them to get comfortable using git. TFA is an extreme example, talking as if ctrl-z undo is an inevitable workflow for a git user when, in fact, it's only something that beginners do for more than trivial undo operations.
I don't trust any backup that requires me to remember and take a manual action.
> > 3. changes not yet pushed
> > 4. branches that don't exist on remote
> Why haven't you pushed them?
Fair point. Because I'm mainly working in public repos, it's psychological. It requires some degree of mental effort whenever I think about pushing something because it's then in public for people to see. I judge people on their public commits so I rather suspect people will do the same as me. "Naming things is one of the hard problems in computer science" - a public commit requires several naming decisions (commit message, branch, deciding "foo" is not a great name for a variable). I sometimes don't want to break my flow to make those decisions. And sometimes that means I put off a commit for longer than I should.
Whereas - my backup happens automatically and continually. It's not tied to my indecision, choices or personal failings. It just works.
So - I strongly recommend everyone uses version control. And I strongly recommend they augment it with AUTOMATIC off-site backups.
Fair enough, that all makes sense. I do push branches and hope people won't look at them if they aren't connected to PRs :)
> I don't trust any backup that requires me to remember and take a manual action.
(Google drive has an automatic sync client thing like Dropbox so all it involves is having a special local folder where I put documents I want backed up. Or indeed git repos. https://www.google.com/drive/download/)
I've never wanted that sort of backup in the last 20 years of computer usage. I think it's a bit old fashioned honestly. Nowadays a computer is an ephemeral thing; all that's persistent is files in the cloud, git repos, and provisioning config to get a new machine in a similar state to the last one. And yes I know "similar" there will have made your heart skip a beat :)
One question I have about this approach: if I temporarily include a security token in one of my files, will that get included in the temporary commits? And if so, how can I make sure that particular piece of history never makes it off my dev machine?
IntelliJ (and I assume other JetBrains IDEs) track local changes as if they were commits - you can diff history for example. I've seen it lose changes across a crash though, so something running on disk would be very nice.
I've also built a tool to support hyper-frequent commits, automatically rolling back the commit if the build, static analysis or any tests fail. This ensures a complete history of 'good/working' builds in the vcs.
A newspaper reporter I knew in the early '90s did much the same with an external drive. Notes, then stories that came from the notes, were saved under name-plus-datetimestamp filenames, every time that person got up for some reason, or answered the phone, or felt like it.
This is really cool. One drawback is that it seems to touch the index, which I believe should be avoided, since it can disrupt the user's workflow. I experimented with something similar a few years ago and avoided the index. My learnings are partially documented in the repo.[1]
Git has a built-in Ctrl-Z that is called “reflog”. Acknowledging that git’s UI commands such as “reflog” may be technical, and nonstandard, and poorly named, does dura provide something that git doesn’t, and is it a good idea to add yet another application/layer/dependency to the git workflow? Would it be just as effective to add a git alias called “undo”, or something like that?
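For example, something like this could work as an "undo" alias (my guess at an implementation; note that `--hard` also discards uncommitted changes, so handle with care):

```sh
# Jump HEAD back to wherever it was before the last ref-moving command:
git config --global alias.undo 'reset --hard HEAD@{1}'
```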
That's tangential to this tool, though. The reflog covers any git changes; this is a tool that checks for changes every 5 seconds or so and commits them to a shadow branch. There are also a few things that standard git will clobber without any recourse from the reflog.
> There are also a few things that standard git will clobber without any recourse from the reflog.
True! Stash is one of those things, right? Does dura catch the stashes specifically? Clearly it’ll catch whatever work you did before stashing, but if it saved stashes specifically in its own thing, that would be extra handy.
True, fair. I was about to delete my somewhat dumb question. Yeah I guess if someone’s not saving early & often, this could definitely catch some mistakes.
It’s not dumb, I had the same question. Would’ve been great if the value proposition explained how it was different than using the reflog.
In my experience there’s a lot of Git tools out there that basically exist because people don’t want to read the manual. But seems like Dura is not one of those.
This is cool, but for me to use it I would need a LOT of confidence that the tool wouldn't just silently stop working one day. With my luck that would probably be the one day I actually needed to use this tool.
You may want to look at turning this into a systemd/launchd service so the OS can launch the tool on boot and handle restarting it on crashes.
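For instance, a systemd user unit might look like this (a sketch; it assumes the daemon starts with `dura serve` and lives under ~/.cargo/bin):

```sh
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/dura.service <<'EOF'
[Unit]
Description=dura: automatic background git commits

[Service]
ExecStart=%h/.cargo/bin/dura serve
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user enable --now dura.service
```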
Honestly I feel a bit weird running this process that's going to watch all my git repos and commit things like that. Also, if my files are saved, then what am I recovering?
I use VSCode and if my computer crashes it'll just recover the unsaved files automatically. That's useful.
I'd hate to be the guy reviewing a commit constituting "days of work". Micro-commits are dual-purpose, providing a hedge against lost work and making commits readable. Worst comes to worst, there's always "git push origin HEAD:my_tmp_branch".
I just use this cronjob in the directory where I keep all of my git repos; don't use software you can write and maintain yourself.
If you have 200+ repos it might take 15 minutes to git pull all of them. I had a plan at some point to use logstash or something to parallelise this so it could all be done concurrently and logged into a single JSON file somewhere, but I never got around to it.
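Something of roughly this shape (a reconstruction, not the exact job):

```sh
# Commit and push every repo under ~/repos every 10 minutes; repos with
# nothing to commit are skipped by the failing `git commit`.
*/10 * * * * for d in "$HOME"/repos/*/; do (cd "$d" && git add -A && git commit -qm autosave && git push -q); done
```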
I keep my git repositories in a directory synced to other machines with syncthing. Those other machines keep a tiered version history of all synced files (not just git). One has zfs snapshots, as well.
It might be simpler for dura to stage your work every 5 seconds (git add -A) without creating a new commit each time. Not sure how git handles many small changes in the index.
I wonder if there is a way to get this to do commits on filesystem change notifications using fanotify/inotify/fswatch/etc rather than running every 5 seconds.
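A sketch with inotifywait from inotify-tools, committing to the current branch for simplicity rather than dura's shadow branches:

```sh
# Event-driven instead of polling: commit after every filesystem event,
# ignoring git's own writes under .git/.
inotifywait -mrq -e modify,create,delete,move --exclude '/\.git/' . |
while read -r _; do
  git add -A && git commit -qm autosave || true
done
```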
> I don't really see how this situation described at the readme.md could happen :-)
Start working on a big and complicated refactoring, don't commit because everything's broken and in flux, run the wrong command and lose your changes-in-flight somehow (reset the working copy, revert a file, overwrite a file incorrectly, ...).
Yep, this. Exactly like the docs say, you could recover entire directories after an accidental reset, or just avoid having to ctrl+z in your file 40 times.
I've been fortunate to "only" lose about 2-3 hours of work to mis-typing in git in the last year. It could have been 2 days or so if I was unlucky. For 2-3 hours of work it's maybe not worth installing this tool, but I'm definitely thinking about it because it's so much better than potentially losing 2 days.
"Commit often" doesn't work for me a lot of the time, I'd spend up spending almost as much time rebasing and patch committing as I would in dev/refactor. When you're exploring you try 5 things for every one that works, and it's not apparent til later which thing you want to keep. Committing junk every 10 minutes and then committing a rollback for most of it isn't ideal.
> Yep, this. Exactly like the docs say, you could recover entire directories after an accidental reset, or just avoid having to ctrl+z in your file 40 times.
I've definitely wished IntelliJ's local history could work across multiple files a few times. It did let me recover from fuckups more than once, but having to find and revert each file individually was not fun.
"including me-a-few-years-ago". This also applies to me, sometimes you have to learn the hard way. I would say that in this scenario addressing the root cause is better than treating the symptoms.
I use Time Machine on an external drive. I learned this lesson the hard way… Obviously it’s not perfect, but you aren’t going to lose an entire day’s work.
This has happened to me unfortunately and didn't feel good at all. I ran `rclone sync` on the folder that had .git instead of the correct subdirectory, and that removed files not present in the source without confirmation (like rsync --delete-after). I've learned to commit and push to a remote frequently (which is easy with Git because you can use any machine you can SSH into).
Nothing protects against rm -rf by design. You shouldn't use it to blindly clean up and delete files (unless as a last resort or some very specific and careful use case). Just use plain old rm and feed it the explicit list of files to delete, which forces you to actually look at what you're about to do.
"Dura" in Russian language is "idiot". Specifically female idiot. Is it really hard for developers to enter word in google before naming application? This is ridiculous.