Always good to revisit original phrasings of things. I think the catchiness of the lower-information acronym did this one a disservice. I find myself explaining with some regularity that repetition of code is not necessarily repetition of ideas, and if I have `f(x, g(y), h(z))` both here and there but for different reasons then it's introducing artificial coupling to break it out into a single function. The focus, in the longer expression, on knowledge is exactly right. DRY isn't a call for "Huffman coding".
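A minimal sketch of that point (hypothetical names, not from any real codebase): two functions that are textually identical but encode different pieces of business knowledge. Merging them into one helper would couple two rules that only coincidentally agree today.

```python
# Hypothetical sketch: two call sites that happen to share the same shape.
# Deduplicating them would couple two unrelated business rules.

def monthly_fee(balance):
    # Rule A (billing): service fee is 1% of balance, capped at 50.
    return min(balance * 0.01, 50)

def referral_bonus(balance):
    # Rule B (marketing): bonus happens to be 1% of balance, capped at 50 -- today.
    # Same code, different knowledge: marketing may change this tomorrow,
    # and billing must not change with it.
    return min(balance * 0.01, 50)
```

The repetition here is of code, not of knowledge, so "drying" it out would introduce exactly the artificial coupling described above.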
That said, I think DRY can sometimes lead people astray. One problem is that the boiled down concept of "DRY" has nothing to say about quality. Reusing bad code can often be much, much worse than the alternative. Sometimes it's critical to "sunset" code that is "working" because it's not good code, and it would take too much effort to make it good. Then tie any new attempt to reuse that functionality to an effort to create a better replacement. If instead people think "oh hey, this exists already and it sorta works, welp, better be DRY!" then they can turn a small problem of bad code into a big problem of horrible code that becomes even less maintainable and more of an effort sink as it acquires little patches and fixes to add support for all the new stuff using it, and as it becomes more and more indispensable and costly to replace. Whereas if the first person deciding to be aggressively DRY had looked at the code and instead said "this is garbage, I'm writing my own better thing" then maybe everyone else coming along later would have ended up using the better replacement, and over time it would have become trivial to migrate the one usage of the original to the better version.
The mental model I've started to adopt is instead one around "littering". Basically: don't create garbage code, and don't use it. Garbage code is code that has future cleanup inherently attached to it. Repeated code can be garbage code because there's inherently either deduplication work or duplication of fixing/redesign/adaptation work attached to it, it's basically like littering in the code base. But by the same token you don't solve that problem by finding a use for garbage, you still have future cleanup and bug fixing work attached to the code, only now it's worse because it has different use cases to support and higher consequences if it breaks.
Regarding your first sentence, you should be frequently spinning off chunks of code, but you should be careful that you're not adding coupling where it doesn't exist in the domain.
Again, I like the knowledge formulation of DRY.
"This is the way we render a FOO" is one piece of knowledge, and shouldn't be repeated every time you try to render a FOO.
"There should be a FOO rendered here" is one piece of knowledge, and shouldn't be repeated at multiple layers in your stack.
Now... what if we've been good about the above, but there's some similar code in "how we render a FOO" and "how we render a BAR"?
If the code is similar because we want a consistent style across FOO and BAR, then "we want such-and-such a style" is a piece of knowledge, and it should be represented in one place!
If the code is similar, but only coincidentally, and either might change in any direction tomorrow, then there's a question: "Is it a coherent abstraction?" We want abstractions such that we can give them a clear name, such that when we make a change to how we render only FOO or BAR we'll be obviously moving to a new abstraction and won't be tempted to change the function, and such that we know to reach for this function when it's applicable in a new context.
Alternatively, is it just a gathering of a particular grab bag of functionality? If you're pulling that out, you're not improving your code - you're compressing it.
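The FOO/BAR distinction above can be sketched in a few lines (hypothetical rendering functions, just for illustration): the shared *style* is the single piece of knowledge and lives in one place, while each renderer keeps its own coincidentally similar logic.

```python
# Hypothetical sketch: "we want such-and-such a style" is one piece of
# knowledge, so it lives in exactly one place.
BORDER = "*"

def render_foo(foo):
    return f"{BORDER} FOO: {foo} {BORDER}"

def render_bar(bar):
    # Similar-looking code, but the only *shared knowledge* is the style.
    # Collapsing both renderers into one function would name no coherent
    # abstraction -- it would just compress coincidentally similar code.
    return f"{BORDER} BAR: {bar.upper()} {BORDER}"
```

Changing `BORDER` updates both renderers (one piece of knowledge, one place), while either renderer can still change in any direction tomorrow without touching the other.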
This has helped my career more than any other single piece of advice. The critical mindset of the engineer can quickly lead to cynicism (especially when faced with bureaucracy). Don't get me wrong; cynics make great advisors. But the money is in solving problems, not (merely) pointing them out.
I read both The Pragmatic Programmer and Code Complete (2nd ed.) before Thinking Forth. The latter provides a wonderful software engineering foundation (in clearer, more concise language, too).
This seems impossible to me, although I agree that it would be an extremely useful practice. I mainly develop Java and Swift, and both languages are extremely dependent on using an IDE. I honestly don't see how I would ever be able to use only a single editor across all the languages I use, even though Jetbrains seems to be trying their hardest to make that possible. (Unfortunately I don't think AppCode is quite there yet.)
Even using the same tools across operating systems causes issues: Using IntelliJ on MacOS at work has entirely different keybindings than on Linux, because MacOS doesn't monopolize the Super/Command key so heavily.
...Or is this just referring to things like having Vim bindings across all your editors? That seems much more doable.
Mind you, most of my prior Java experience was 2007-2010, and the last time I used the editor + CLI workflow was in 2014, and just for a fun example. The editor + CLI workflow was still solid, and similar to a Node or Python workflow in that regard.
Losing Shift+F6 refactoring is a little annoying, but definitely not something that makes programming Java impossible. Autocomplete might get a little hairy as well if you're referring to classes that exist in JAR files, or am I wrong about this?
As for everything else, using the CLI for source control and building is something I do anyways, even when using an IDE. At least with Gradle, that is. As long as some of those issues have solutions, though, I think this might be possible. Maybe I'll have to look into it a bit more.
Many people don't like Eclipse, but I find the Java-oriented distribution pretty decent for an IDE. Auto-complete is handled well, and searching through classes/types is also easy. (Knowing the shortcut keys helps.)
As for imports, when you start typing a class name that isn't currently imported or in scope, you can use auto-complete to select which one you want. There may be multiple matches from multiple packages if you are searching for a class name of `Result`, so you can select the `org.foo.someframework.Result` and the IDE will add the import statements for you.
In general, I haven't really had much trouble with Java and an IDE, even in large projects with a huge number of dependencies.
I haven't really tried IntelliJ but from the feedback I've heard, it's a great IDE as well.
I try to minimize imports, and I don't touch them often. I'm very considerate of what I'm adding to a codebase, so I don't mind the extra space in my head.
Also, I keep reference docs for the language and libraries on hand or accessible, to alleviate some of the need.
And I consider frequent renaming to be a sign that I hadn't thought out the problem well enough before starting. When I see a true need, I use Multifile Find (& Replace) in Sublime, a multi-step process of audit followed by action.
And I've always found auto-completes to be more frustrating than time-saving, especially those that fire on whitespace, so I disable them or put them under a secondary keymap.
I'm sure the IDE saves some time, for many, but all the reasons people use them are mostly annoyances to me. There is still some merit, though, like having an embedded debugger and an intelligent index of signatures.
You get an astonishing power to refactor the code in ways I couldn't have dreamed about before.
It's not so much about reducing the boilerplate you have to manually enter. While that's nice, the real value of a good IDE is in its refactoring tools, taking trivial but tedious tasks like class, function, or variable renaming and automating them so that they can happen both instantly and error-free.
And variable renaming's the simplest refactor that a tool like IntelliJ or Resharper can perform.
For example, in my current project, there are four classes with the same name, each of which is used by a few other classes, some of which use subclasses of the parent rather than the parent itself. With IntelliJ, I can rename any arbitrary class with a single keypress and it just works, including renaming the files in git where appropriate. This is hard to do in vim or any other text editor.
The same applies to functions. Let's say there are ten different functions called getFoo, three of which are in completely different classes and some of which have different arguments like:
String getFoo(int a)
String getFoo(ArrayList b)
int getFoo(String bar)
SomeThing getFoo(int blah)
how do you always rename the correct ones, in every location, and never accidentally rename the wrong ones?
Making something like this refactoring (and others, like moving a method to another class and updating every reference, including in subclasses and interfaces) work 100% of the time, automatically, through a single keypress completely changes your workflow.
It's like having automated unit tests or a CI tool - sure, you can test things by hand, or do the build by hand, but all of that stuff takes mental energy and it's often hard to execute the same things manually and perfectly ten thousand times - that's the whole POINT of computers.
In practice, what this means is that friction is reduced. If I see that someone has named something poorly, or I see a method that belongs in a different class, I can just fix it within literally two seconds and then continue as I was. All the Javadoc is updated and published and everyone else (who is also using an IDE) can hit a single key to jump to the definition or all the usages of a particular reference.
Damn, usages, there is another thing. How on earth do you find all usages of a particular function with only a regex in anything other than a small codebase with only a few developers? Your regex doesn't understand scoping or method overloading or interfaces or abstract classes or any of that other stuff, so you have to wade through a lot of irrelevant data to find all the times where someone calls getFoo(bar) where "bar" is a String and not an int or something else.
There is a lot more, of course - detecting duplicate code chunks and automatically offering to turn them into a function, for example, or the detection of uninitialized variables or just general code smells.
I use VIM keybindings so I get all the benefits of a good, programmable text editor too and I can still do a regex search if I wish, but the power of an IDE (especially IntelliJ) is the reduction in friction and thus the improvement in continuous flow state.
Well to start, you limit the scope in which they are used so that you only use one such object per file, and you import it with a different name if you've got to mix them. This is a perfectly reasonable thing to do regardless of whether you have an IDE or not.
>How on earth do you find all usages of a particular function with only a regex in anything other than a small codebase with only a few developers?
Rename it and try to compile. :)
I don't have anything serious against IDEs. They're pretty awesome when they are well-maintained and reliable. I just don't think they're all upside. For one, when I was first learning Java and I'd get a compile error due to packaging or import conflicts, the IDE only served to remind me that I was incompetent about how Java actually worked. And learning the IDE meant learning a lot about what the IDE wants in its particular configuration, not what the actual Java compiler needed.
Moreover, if you ever move to a language that doesn't have as high degree of IDE support as Java, you start having to fight with it quite a bit, and many of those wonderful features aren't available, don't work well, or actively undermine your process.
If what you are doing is old news and part of an active "enterprise-y" code process, then it does reduce friction. Otherwise, it can be a source of friction unto itself.
I don't understand this statement. What does having an "enterprise-y" code process have to do with anything?
Someone else wrote a language. It powers 100 trillion dollars of business. Lots of people use it. Therefore, it will have ten IDEs all competing with each other to be the best, fastest, most popular, and least buggy.
In which environment are you going to use an IDE?
Is there much reason these really need to be integrated?
I'd be interested in an existence proof here: Is anyone aware of refactoring tools that are not bundled with an IDE, yet still compare favorably with something like Resharper?
Yeah, personally I prefer to do my integration in the shell, and then have thin hooks into my editor where applicable.
> Is anyone aware of refactoring tools that are not bundled with an IDE, yet still compare favorably with something like Resharper?
I'd also be very interested in this. I do most of my refactoring semi-manually (leaning heavily on my editor and compiler).
Also, I've been keeping an eye on https://github.com/landaire/deoplete-swift, hopefully in the future I can do everything through Vim and only jump into Xcode to setup bindings and make UI changes.
Vrapper for Eclipse, Vintageous for Sublime, VsVim for Visual Studio, and Vimium for Chrome.
I have become one with Vim :)
So - this one I have mixed feelings about. It is normally true. But you could have genuinely hit upon a bug. Granted it is much easier to investigate and prove with open source today.
14-odd years ago I was working on HP-UX & C applications. One of the signal commands caused things to crash, and I finally figured out the test that could reproduce the crash. HP support seemed to think I was crazy for asking them to debug their code. It took me ages to get them to understand that the OS was their code and not mine.
Well, either way, they got a tier-3 OS support person involved who collected data, confirmed the bug and provided a fix.
The point is, it may very well be a platform bug. That is code too. It just shouldn't be your first (or maybe even second or third) conclusion, and you should have a unit test that exercises the bug.
The response of the senior engineers was "What's really wrong with your code?" Ultimately, I was able to prove an OS bug with IBM, but it required extraordinary proof.
And you know what? If some smart junior programmer came to me and said "I think there's a bug with a low-level OS call", my response would be "What's really wrong with your code?" As it should be. Heck, if a lead engineer said that, I'd have the same response. If I said it, that would be my response to myself.
That's because select is very rarely broken. The exceptions just prove the rule.
"select isn't broken" has been my experience since moving to open source platforms. However I started with VB on Access 2.0. (Yes, this dates me.) And there I found that most of my interesting bugs WERE of the "select is broken" form - Microsoft's software was simply that bad.
I understand that they have improved since then. But the bad taste from that experience stays with me, and is part of why I don't want to deal with the Microsoft stack.
This! Once you've got your test, you can start factoring code out of the equation, and if you get to the point where your application code is no longer involved, well, then that's an OS/library/framework bug. But yes, that's pretty rare.
One thing I've always thought ironic is how much nicer the typography and design is in this book, which was published by Addison-Wesley, than the books the Pragmatic Press publishes. And don't even get me started on Pragmatic's embarrassing covers.
Example: The Agile Samurai https://imagery.pragprog.com/products/176/jtrap.jpg?12985898...
Dart 1 For Everyone is even better. https://imagery.pragprog.com/products/432/csdart1.jpg?140865...
"Tracer bullets let you home in on your target by trying things and seeing how close they land."
All the other tips I appreciate, but embarrassingly I have a hard time with the analogy on this one (besides the bullet reference). I mean, I think I get it, but I can't figure out how it maps to anything I have done. I would say it is like a "software spike", but somehow the analogy description doesn't fit.
Maybe someone has a real world example?
The tool I work on generates reports. Rather than defining requirements for reports, the only way to make progress is to understand their general needs through conversation, then provide them a draft report - your best guess. They tell you what to change or what they don't want, then you make changes and provide another draft. When enough tracer bullets are shot, you arrive at the report they actually need.
Personally I don't agree with this. It goes against my core beliefs to make something that I don't believe is good for the end user. I have a hard time, however, coming up with arguments for why we as developers should spend our time creating what is basically a product plan.
Hiding developers behind analysts and product managers makes everyone's job less fun.
In my first job we had analysts, product managers, and user councils making specifications. Specifications that were terrible, and the developers on the team would have loved to do this iterative design with the end users, because the software would have been a better fit for use. For me that was a soul-sucking experience. It's much better to be able to understand why the software needs to exist and how to make it good for users and the organisation (not always a 1:1 mapping either).
Wanting perfect specs is a good way to limit your career as a developer, because that way you will never be more than a glorified typewriter with an analyst to hold your hand. Solving the whole business problem from start to end is the way to grow professionally.
Also, perfect specs are impossible :( so iterating with your users is the fast way to good enough for business applications.
If you want perfect specs, write sudoku solvers...
Yes, definitely. The difference is that in my situation we are a software contractor, not a software development company that owns its products. If we don't work with the customer to discover and build what they want, then it doesn't get built, which means we've lost potential work.
It goes like this: instead of building software parts independently to completion, you start by building just the bare bones: classes, functions and other things you will need, define the interfaces early and connect those parts. At this point classes have empty methods and sometimes return dummy data.
Then I go from the bottom up, fleshing out the methods, and I can start to see some end results from the outside.
It may seem like it's poor man's TDD (verifying your code as you write it, but without actually writing tests).
But one advantage of this technique is that you can put the architecture to the test early in development. I have discovered many design issues this way.
Without this tracer bullet, I have seen things like this happen: the team discusses the design on a whiteboard, and after a rough UML concept, each developer goes off to write a separate part of the system.
After the first pieces are completed, issues arise when trying to make the parts play together: some situations were not considered, and the design is not flexible enough to cover all cases. Heavy refactoring of freshly written code ensues.
Real world example: you want to write a web shop. You make a rough sketch of your design. It'll have a few models: User, ShoppingCart, Product. Also a few controllers and views. In your empty controller, you create some dummy instances of your models. Make them interact, and put the result in the view. The view shows some unstyled cart info. And voilà, you've got a tracer bullet. You've got the most interesting bits of the application sketched and connected. Now you can start fleshing out your models, controllers and views and see the progress in the browser.
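The web shop example above can be sketched in a few lines (a minimal, framework-free illustration; the class and function names are just placeholders for real models and views): bare-bones models wired end to end with dummy data, so the whole path produces visible output from day one.

```python
# Minimal tracer-bullet sketch of the web shop: skeleton models, a
# bare "view", and dummy controller data connecting them end to end.

class Product:
    def __init__(self, name, price):
        self.name, self.price = name, price

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, product):
        self.items.append(product)

    def total(self):
        return sum(p.price for p in self.items)

def cart_view(cart):
    # Unstyled "view": just enough to prove the pieces connect.
    lines = [f"{p.name}: {p.price}" for p in cart.items]
    lines.append(f"TOTAL: {cart.total()}")
    return "\n".join(lines)

# Dummy controller data -- the tracer bullet itself.
cart = ShoppingCart()
cart.add(Product("Widget", 5))
cart.add(Product("Gadget", 7))
```

Everything here is a stub, but the path from model through controller to view is exercised, so design issues in how the parts connect surface immediately rather than after each piece is built in isolation.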
Discussion is good (because ideas are surfaced and knowledge is shared), but long discussions are bad (because action is delayed). Action is good because that's how you learn things. By adopting a "Let's try it" attitude, you can move from discussion to action more quickly.
"I know the bug happens between this and this, so lets try this - nope, before then. Divide and conquer.
Can anyone comment whether Programming Pearls is as good?