Ideally it should be this way, but it's impractical in reality.
It requires that you either stop your development workflow to commit as you go along, or that you untangle all the pieces after they're already entangled.
If you commit as you go, it's an expensive mental switch to fire up git and also run all the tests (since surely part of this workflow is to apply the principle that no commit should ever break the build). You also take an extra productivity hit every time you change your mind about something a little later (e.g. you added the function getFoo() but realize it should have been called findFoo()).
If you work for a while and then try to bundle up small, atomic changes afterwards, that can also be very difficult. Tools like git stage contiguous chunks of changes together, and prying them apart later is a chore. I often do this with a combination of "add -p" and then "stash save -k" to temporarily set aside everything unrelated to what I'm committing, but it's tedious. During a selective "add -p" session you have to mentally keep track of which chunks belong together, and what dependencies exist between the chunks you're adding.
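A rough sketch of that workflow, under made-up file names and commit messages. Two separate files stand in for two unrelated changes so the script runs unattended; interactively you'd use "git add -p" to pick individual hunks within a file:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo base > feature.txt
echo base > unrelated.txt
git add . && git commit -qm "base"

# A work session ends up touching both files...
echo "feature work" >> feature.txt
echo "drive-by fix" >> unrelated.txt

# Stage only what belongs in this commit
# (interactively: git add -p, picking hunks):
git add feature.txt

# Set aside everything unstaged so the build/tests see only this
# commit's content; -k (--keep-index) leaves the staged change alone.
# ("stash push" is the modern spelling of the older "stash save"):
git stash push -kq

# ...run the test suite here, then:
git commit -qm "feature: do the work"

# Bring the unrelated change back for its own commit:
git stash pop -q
```

After the pop, the unrelated change is back in the working tree, ready to be committed separately.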
Committing as you go is easier, but it's slow, and it doesn't work well when a big change introduces new semantics across many files. Either way, of course, you have to mentally keep track of which parts are related.
Untangling is what I mostly do. I consider the untangling my own internal code review. I need to read my own diff and figure out what goes where and what each part does and why it's necessary. My commit messages are then my own code review comments.
I figure if I don't carefully read my own diff, why would anyone else? And once it's untangled, I am hoping others will find it easier to read too.
Git doesn't provide as many tools as I would like to make this process easier; it's partly why I don't use git. Mercurial's absorb command helps a lot: it folds changes from your working directory back into the draft commit that last touched the same lines.
That's a cool script, I will definitely try it. Amending earlier commits by making partial commits, then fixing up with "git rebase -i" and squashing with "fixup", takes so much time and mental effort just to avoid making a mistake.
It still doesn't solve how to disentangle changes that have become interdependent. For that you have to concentrate on committing atomically and planning ahead a lot.
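For what it's worth, git can automate some of the rebase bookkeeping described above: "git commit --fixup=<rev>" records which commit a fix belongs to, and "git rebase -i --autosquash" reorders and squashes the fixups without hand-editing the todo list. A sketch with made-up files and messages (the sequence editor is stubbed out only so it runs unattended; normally you'd review the generated todo list):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo v1 > lib.txt; git add lib.txt; git commit -qm "add lib"
echo v1 > app.txt; git add app.txt; git commit -qm "add app"

# A later fix that logically belongs in "add lib".
# ":/add lib" names the most recent commit whose message matches;
# --fixup writes the "fixup! add lib" message for you:
echo v2 > lib.txt
git commit -aq --fixup=':/add lib'

# --autosquash moves the fixup next to its target and marks it
# "fixup"; GIT_SEQUENCE_EDITOR=true accepts that todo list as-is:
GIT_SEQUENCE_EDITOR=true git rebase -qi --autosquash --root
```

The history ends up as two commits, with the fix folded into "add lib" and no manual squashing.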
I've been using this approach successfully for 8 years now on tens of open source projects and various company code bases of all sizes.
It does add a small amount of overhead (I measure this, and for me it's around 5%). But that pays off the first time you or someone else reads the history a few weeks later while investigating an issue.
I use Gerrit for everything and this workflow is exactly what it gets you (well, to be clear, my workflow is commit-per-issue-resolved, not commit-per-function-added, though you could use it that way too). I highly recommend it or a similar tool.
To toss in a counter-point: I do commits as I go and occasionally go back and make changes so it's a coherent sequence. For the most part, once you are fluent with Git[1], I've found it to be a productivity improvement, and code reviews have been both faster and more useful.
If you're doing two semantically different things, put them in two different commits. If it's one thing, put it in one (merge commits work too). That's just good change hygiene, for the same reason you try to isolate behavior in code rather than mashing it all together into a single func just because you happened to be doing it all around the same time.
tl;dr we don't name our funcs "june_27_through_29", don't name your commits like that.
[1]: A huge investment, so I totally get why this isn't an early-coder practice, and it's rather painful. But IMO it's worthwhile: I usually see people spending far more time fighting git than it would take to learn it.