
A better formulation of DRY is SPOT (Single Point Of Truth). Definitions (code, data) that represent the same “truth”, i.e. when one changes all have to change to represent a consistent truth, should be reduced to a single definition. For example, if there is a rule that pizzas need at least one topping, there should only be a single place where that condition is expressed, so that when the rule changes, it isn’t just changed in one place but not the others. Another example is when fixing a bug, you don’t want to have to fix it in multiple places (or, more likely, neglect to fix it in the other places).
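A minimal sketch of what that could look like (the names here are hypothetical, just to illustrate the idea):

```typescript
// Hypothetical example: the "at least one topping" rule lives in exactly one place.
const MIN_TOPPINGS = 1;

interface Pizza {
  toppings: string[];
}

// Single point of truth for the rule; every caller goes through this.
function hasValidToppings(pizza: Pizza): boolean {
  return pizza.toppings.length >= MIN_TOPPINGS;
}

// The order form, kitchen display, and billing code all call hasValidToppings()
// instead of re-implementing `toppings.length >= 1` themselves.
```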



I agree with this, as it puts the emphasis on semantics rather than syntax and encourages focusing on intentionally similar code rather than unintentionally similar code.

A related principle is what I call code locality. Instruction locality is the grouping of related instructions so they can fit in the CPU's cache (can an inner loop fit entirely in cache?). Similar for data locality. Code locality is for humans to discover and remember related code. An example of this is those times you make an internal function because you have to, but it has a terrible abstraction (say, a one-off comparator for a sort); for comprehending the caller, it's best for it to live near where it's needed, within the same file, rather than in a separate file or in a dependency in another repo.
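As a hypothetical sketch of that, a one-off comparator reads best right next to the only sort that needs it:

```typescript
// The one-off comparator lives next to its only caller, rather than in a
// separate "utils" file or another repo (hypothetical example).
interface Invoice {
  dueDate: Date;
  amount: number;
}

function overdueInvoicesReport(invoices: Invoice[]): Invoice[] {
  // One-off comparator: earliest due date first, ties broken by larger amount.
  const byUrgency = (a: Invoice, b: Invoice) =>
    a.dueDate.getTime() - b.dueDate.getTime() || b.amount - a.amount;

  return [...invoices].sort(byUrgency);
}
```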

Applying code locality to SPOT, when you do need multiple sources of truth, keep them as close together as possible in the code.


I usually say: if things tend to change together, then they should be closer together.

This is also why using layers as your primary method of organization is usually a bad decision. You're taking things that change together and strewing them about in different top level directories.


On a similar note, tree/graph structures should be avoided versus lists unless there is a good reason. Flat is better than nested. A linear block of code is far easier to reason about than a network of function calls, or (heaven forbid) a class hierarchy.

Not that such tools don't have their place, but I've seen too much convoluted code that has broken simple things into little interconnected bits for no reason other than someone thinking "this is how you're supposed to write code" and having fun adding abstractions they Ain't Gonna Need.


I think this can be generalized even further to the "principle of least power": https://www.lihaoyi.com/post/StrategicScalaStylePrincipleofL...

The graph data structure, I believe, is the most generic of data structures, and it can be used to represent any other data structure. This arguably makes it both the most powerful data structure and, IMO, the one of last resort.


Yes, that's kind of a "fewer moving parts" way of looking at it.

To be clear, I (like the parent post) was using data structures as a metaphor for the organization of source code: "trees" being code that is conceptually like a branching flowchart with many separate nodes, and "lists" being lines of code kept together in one function/class/file (which the parent post points out has the benefit of "locality").


While it is a little easier to deal with flat data than tree data, the real test of this comes when you process the data.

Trees invite recursion, and as the logic grows, telling what is going on gets more and more difficult. While it's easier to process trees recursively, it is possible to iterate over them. It's easier to process lists iteratively, but it is possible to handle them recursively. Some people do, much to the detriment of all of their coworkers.

I'm trying to unwind some filthy recursive code on a side project at work. There were two bugs I knew about going in, and two more have been found since, while removing mutually recursive code in favor of self recursion, and then modifying that to iteration.

Iteration is much, much easier to scale to add additional concerns. Dramatically so when in a CD environment where a change or addition needs to be launched darkly. Lists can help, especially with how carefully you have to watch the code. They're not the only solution, they're just the easiest one to explain without offering a demonstration.


I'm not sure I follow... Can you provide an example? (junior dev here)

If I understand some of it correctly, I was contemplating this when I started writing functions for "single functional concepts" like, "check for X; return true or false", then called each of those functions sequentially in a single "run" function. Is that what you mean?

I found that approach much easier to test the functions and catch bugs, but your comment seems to go against that.


Disclaimer, it's mostly a personal thing.

Dividing things into methods mostly gives the benefit of naming and reuse. If you can skip the method's content and reasonably assume it works in some way, it can make reading easier.

If the reader instead feels inclined to read it, they now have to keep context of what they read, jump to the function (which is at a different position), then read the content of the function, then jump back while keeping the information of the function in mind. In bad cases, this can mean the reader has to keep flipping from A to B and back multiple times as they gain more understanding or forget an important detail.

The same thing happens with dividing things up in multiple classes. Don't need to read other classes? Great. Do need to read other classes? Now you have to change navigation to not only vertical, but horizontal as well (multiple files). Some people struggle with it, some people hate it, other people prefer it.

You're basically making a guess about what's best in this scenario, and there isn't a silver bullet. With the extreme examples, it's easy to explain why they are bad. The cases closer to the middle, not so much.


My own understanding of the comment:

They're making a connection between non-jumping code and a list, and between jumping code (classes, functions, etc.) and a graph. A list can be iterated from beginning to end and read in sequence, or it can be jumped into at any arbitrary point. Similarly, if I open a file and want to read code, I can jump to any arbitrary line and go forwards or backwards to see what the sequence of code is like. I can guarantee that line n happens before line n+1.

With functions, classes, modules, whatever, the actual code is located somewhere else. So for me to understand what the program is doing I have to trust that the function is doing what it says it's doing, or "jump" in the code to that section. My sequence is broken. Furthermore, because I am now physically in a new part of the code, my own internal "cache" is insufficient. It takes me some effort to understand this new piece of code and re-initialize the cache in my mind.

The overall thrust of "DRY is overrated" is that we oftentimes take DRY too literally: if I have repeated myself, I MUST find a way to remove that repetition; this typically means writing a function. Whereas previously the code could be read top to bottom, now I must make a jump to somewhere else to understand it. The question isn't whether this is valuable, but rather whether it is overused.


Specifics might depend on the language, domain, and team (and individual preference), so it's hard to avoid being general.

I would say some junior devs can get too fixated on hierarchies and patterns and potential areas of code re-use, though; instead, they should try to write code that addresses core problems, and worry about creating more correct abstractions later. Just like premature optimization is the "root of all evil", the same goes for premature refactoring. This is the rule of YAGNI: You Ain't Gonna Need It. Don't write code for problems you think you might have at some indeterminate point in the future.

When it comes to testing, TDD adherents will disagree, but if you ask me it's overkill to test small private subroutines (or even going so far as to test individual lines of code). For example, if I have a hashing class, I'm just going to feed the hash's test vector into the class and call it done. I'm not going to split some bit of bit-shift rotation code off into a separate method and test only that; if there's a bug in that part, I'll find it fast enough without needing to give it its own unit test. That's what debuggers are for. All the unit test should tell me is whether I can be confident that the hashing part of my code is working and won't be the cause of any bugs up the line.
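As a sketch of that style of test, using Node's built-in crypto and the well-known SHA-256 test vector for the empty string (a hashing class of your own would be exercised the same way, through its public interface):

```typescript
import { createHash } from "node:crypto";
import { strict as assert } from "node:assert";

// "Feed the test vector in and call it done": exercise the whole hash through
// its public interface instead of unit-testing internal bit twiddling.
const emptyInputDigest = createHash("sha256").update("").digest("hex");

// Well-known SHA-256 digest of the empty string.
assert.equal(
  emptyInputDigest,
  "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
);
```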

Obviously I'm not in the "tests first" camp; instead I write tests once a class is complete enough to have a clearly defined responsibility and I can test that those responsibilities are being fulfilled correctly.


I'll start out by saying I have some pretty strong positions opposing what it sounds to me like yours are.

>I would say some junior devs can get too fixated on hierarchies and patterns and potential areas of code re-use, though;

Agreed.

> instead, they should try to write code that addresses core problems,

Still with you, agreed.

> and worry about creating more correct abstractions later.

I don't quite agree. I agree you shouldn't spend too much time, but I think "don't worry about it" gets you a hodgepodge of spaghetti code and half-baked ideas that no one can maintain in the future.

One of the most important things someone can do when adding a feature, for instance, is understanding the surrounding code, its intentions, and how any abstractions it may have work, and then adding their feature in a way that complements that and doesn't break backwards compatibility.

I'd go as far as arguing that only ever MVP'ing every story without a thought to design or abstraction is one of the major problems in industry alongside cargo-culting code maintenance rather than ever trying to form deep understanding of any meaningful part of the software.

> Just like premature optimization is the "root of all evil", the same goes for premature refactoring. This is the rule of YAGNI: You Ain't Gonna Need It. Don't write code for problems you think you might have at some indeterminate point in the future.

YAGNI is far too prescriptive and misses the point of programming that literate programming gets right:

Programming (like writing) is about communicating intent to other human beings in an understandable way and shuffling complexity around in the way it makes the most sense for:

- The typical reader, skimming to understand something else (needs to know: what does it do?)
- The feature adder (needs to know how it works, so they need a high level, then an easy way to understand the low level as needed)
- The deep reader (needs to be able to take in all of the code to deeply understand it, needs a straightforward path to get there)

What you describe sounds like no abstraction and just throwing all of the complexity in front of everyone's faces all at once. I can appreciate the habitability advantage of that, but I think that discarding the context of acquired domain knowledge as you work on issues is too great of a cost.

In case it's not obvious, here's the point I'm trying to make: domain knowledge you acquire while working on something should be encoded in sensible abstractions that others can uncover later on, peeling away more and more complex layers as needed.

> When it comes to testing, TDD adherents will disagree, but if you ask me it's overkill to test small private subroutines (or even going so far as to test individual lines of code). For example, if I have a hashing class, I'm just going to feed the hash's test vector into the class and call it done. I'm not going to split some bit of bit-shift rotation code off into a separate method and test only that; if there's a bug in that part, I'll find it fast enough without needing to give it its own unit test. That's what debuggers are for. All the unit test should tell me is whether I can be confident that the hashing part of my code is working and won't be the cause of any bugs up the line.

This view sounds like it may be a direct result of a YAGNI/avoid abstraction style to me actually. If you avoid abstracting or code-reuse quite a lot (or even don't spend enough energy on it), you lose one of the largest benefits of TDD:

regression testing

If nearly all of your functions are single use or don't cross module boundaries... then the value add of TDD's regression testing never really has a chance to multiply.

For OOP I feel like this would be reflected mostly in testing base objects or static methods. For functional code, it would just be shared functions.

> Obviously I'm not in the "tests first" camp; instead I write tests once a class is complete enough to have a clearly defined responsibility and I can test that those responsibilities are being fulfilled correctly

I'd argue you are testing in your head anyway. Visualizing, conceptualizing, and trying to shape the essence of the problem into something that makes sense.

The problem is sometimes our mental compilers/interpreters aren't perfect and the mistakes are reflected as kludges or tech debt in our code.


> then called each of those functions sequentially in a single "run" function

If a lot of conditions are checked sequentially, why not write down exactly that? The sequence itself may well be what a reader would like to know.

The one benefit of the added function calls would be that the definition order does not need to change even if there are future changes to the execution order. But that is exactly also what adds mental overhead for the reader.

If the names add context, then that's a perfect use for comments.
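A rough sketch of that linear style, with hypothetical checks, where comments carry the context that function names would otherwise provide:

```typescript
interface Order {
  items: string[];
  total: number;
  shippingAddress?: string;
}

// The sequence of checks is visible at a glance; no jumping to other definitions.
function validateOrder(order: Order): string | null {
  // An order must contain at least one item.
  if (order.items.length === 0) return "no items";

  // Totals are computed upstream and must be positive.
  if (order.total <= 0) return "invalid total";

  // Physical goods need somewhere to go.
  if (!order.shippingAddress) return "missing shipping address";

  return null; // all checks passed
}
```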


I like your terminology of code locality. Personally, I've always thought of it as the "ctrl+f" principle. If I'm reviewing a PR or looking at the source on GitHub, it's a lot easier if I can find definitions via "ctrl+f" without resorting to an IDE. Sure, it's probably best practice to check out code changes I'm reviewing and open them in an IDE, but that often isn't what happens in practice, and if I'm reading third-party code, configuring the IDE to understand the project might take a lot of effort. The stronger rule is that it should not be necessary to use anything more powerful than a project-wide find to look up references to some function/data structure.


A closely related rule of thumb for organising code is: "Things which change together, belong together."


Right, “point” can mean any notion of “close vicinity” whenever it’s not practical to reduce something down to literally the same single syntactic expression.


This is a pretty good description of the motivation for OOP.


Right, if the "truth" is encapsulated inside an Object, it is the single location for that truth.

If you want a "different truth" create another object. And never assume that the truths of different objects must be the same.

That is a difficult condition to achieve in practice; we often code based on what we know a method result "must be, therefore it is ok to divide by it", etc.


Especially for an object that has to have some consistency between its members, all the code that has to maintain that consistency (or that can break the consistency, same thing) should be in the same place - functions of that class. So if you ever see a class that has the data in an inconsistent state, you have a very small set of places to look for the culprit.
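A small hypothetical sketch of that: the invariant can only be maintained (or broken) by the class's own methods, so there are very few places to look when it fails.

```typescript
// Invariant: spent never exceeds limit. Only code in this class can break it.
class Budget {
  private spent = 0;

  constructor(private readonly limit: number) {}

  spend(amount: number): void {
    if (amount <= 0) throw new Error("amount must be positive");
    if (this.spent + amount > this.limit) throw new Error("over budget");
    this.spent += amount;
  }

  remaining(): number {
    return this.limit - this.spent;
  }
}
```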


Occasionally, it's not clear if a single point of truth is entirely appropriate, or even if it is it can lead to tiresome extra levels of abstraction.

In this case, I sometimes prefer a slightly different approach; let's call it CRAP - Cross Reference Against Protocol. Instead of definitions effectively referring physically to the same point of truth, they are designed such that they simply cross-reference against a protocol and warn if there has been a deviation from the protocol, giving the developer the opportunity to go back and correct as necessary, or otherwise allow it and manually sever the link to the protocol if one is no longer desired.

This prevents against SPOT's weakness (at the cost of some extra manual work / due diligence by the developer), which is accidentally enforcing future convergence in situations when previous convergence was incidental/accidental, and divergence should have been allowed to take place instead.


The CRAP that you’re referring to is really only solving for a very minor downside of SPOT (and DRY), and that’s the inconvenient connection of technically unrelated code. In my own experience it’s far simpler to disconnect two code paths (copy a function, change its signature, etc) than it is to connect them to achieve SPOT.

In your CRAP model it seems like you’d be relying on tests or assertions to verify the “protocol”, if I’m understanding that correctly. Which means finding all the places where something needs to be true and then testing for it. Seems like a lot of work and wouldn’t necessarily catch all those places.


I have a feeling the acronym won't help to push your idea for wide adoption.


It's always a good idea to use an acronym that people aren't embarrassed to say out loud, especially for extremely successful widely adopted award winning open source projects.

Just watch what happened when the OpenVDB project won the 2014 Academy's Scientific & Technical Achievement Award hosted by Margot Robbie and Miles Teller on February 7, 2015 at the Beverly Wilshire.

https://www.youtube.com/watch?v=5FwOc4OSOR0

https://www.openvdb.org/


can you elaborate on what "protocol" looks like? or can you give us some example?


Not sure if it fits within the CRAP idea, but sometimes I just write a test asserting that all duplicative definitions of X are equal.
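Something like this hypothetical sketch, where two intentionally duplicated constants (imagine they live in different modules for practical reasons) are checked against each other in a test:

```typescript
import { strict as assert } from "node:assert";

// Hypothetical duplicated definitions kept in separate modules for practical reasons.
import { MAX_UPLOAD_MB as clientLimit } from "./client/config";
import { MAX_UPLOAD_MB as serverLimit } from "./server/config";

// Not a single point of truth, but at least a tripwire if the copies drift apart.
assert.equal(clientLimit, serverLimit, "client and server upload limits have diverged");
```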


> Occasionally, it's not clear if a single point of truth is entirely appropriate

Feels similar to the monolith/microservice discussion in that it's mostly context sensitive. I think the term "programming principle" is misleading, these are tools with specific applications.


If you have a client side and a server side and you have to check in only one place whether conditions are met, this means that you can't check anything on the client side and must do a server call, or you must implement both server side and client side in a single codebase. Not sure if this is always feasible. Add a DB to that and it means you always have to check for constraints at the DB level.


You could consider codegen from the single point of truth. Not always practical, but great when it's set up with good ergonomics.
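A tiny hypothetical sketch of what that can look like: a build step that reads one shared rules file and emits a module for the other side, so the generated copy can't silently drift from the source.

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Read the single point of truth (hypothetical rules.json).
const rules = JSON.parse(readFileSync("rules.json", "utf8"));

// Emit a TypeScript module for the client; the file is never edited by hand.
const generated =
  "// AUTO-GENERATED from rules.json -- do not edit by hand\n" +
  `export const MIN_TOPPINGS = ${JSON.stringify(rules.minToppings)};\n`;

writeFileSync("src/generated/rules.ts", generated);
```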


Codegen is great for lots of sync between client and server (I just wish there were more standardized tools), but what if it's a complex algorithm that you need to be able to perform both client and server side, assuming they use different languages?


You've answered one of the easiest ways to do it: just use the same language. Node (and Deno/Bun) is a good way to maximize code sharing and single points of truth.

But I definitely understand not everyone wants to write all of their backend in JS (or even TS). (Node ORMs sometimes don't have the polish of their cousins in other languages, for instance.)

There are great opportunities here for language mixing, however: writing that complex algorithm in one language shared by the frontend and backend, even if the rest of the codebases are not in the same language.

1. V8/JavaScriptCore/SpiderMonkey are all very easy to rehost as "scripting languages" inside many languages people prefer to use on the backend. It's often easy to find wrappers to call JS scripts and marshal back out their results inside the backend language of your choice these days. You pay for transitions to/from JS to your primary backend language, of course, so it takes careful management to make sure these "business rules in JS scripts" aren't in any hotpaths, but a backend may have the cycles for that anyway.

2. WASM is offering more opportunities to bring some of your preferred backend language to the frontend. You probably don't want to write your whole frontend in WASM still today, as WASM still doesn't have the same DOM access, but using WASM for single point of truth business rules has gotten quite viable. You still pay for the startup costs and marshalling costs between JS and WASM, but in many cases that's all still less than network call. (I'm still skeptical of tools like Blazor for .NET, especially those building full frontends with it, but I definitely appreciate that reach of "write once" business logic for both client and server.)


You're assuming a web client. The example I had in mind was a mobile client, where we actually used a cross compiler to generate java byte code from swift to allow code sharing between iOS/Android clients, but that wasn't usable for the backends (which were C# mostly). Which is why I went with a server-precomputed lookup table in the cases the number of possible inputs made that feasible. If not, reg-exes for validation (with tooling to enable sharing between codebases) are a decent alternative in many cases. But I was curious what other options HN readers might have tried.


Option 1 applies to mobile frontends, too. In Swift you can always call JavaScriptCore to run a script. Android can use system-wide V8 or even bundle a smaller interpreter for JS.

(In fact, JS is the only "universal" scripting language in mobile due to Apple's fun restrictions that JavaScriptCore is the only JIT engine allowed on iOS. It really is the strongest option today for language to write things in if they absolutely need to be shared across all possible operating environments.)


That's where you need to consider practicality, and it degrades quickly. If it's fairly simple and compartmentalised code without external dependencies like services running on server, you could consider transpiling to target runtimes. But most likely at that stage you'll call a remote function through an API when you need that business logic on the client.


Sure, if performance isn't an issue (e.g. it might need to be done per keystroke). Small amounts of logic duplication like that can be annoying though, esp. when inevitably they end up not agreeing and users don't understand why they're being told their input is invalid. One option I've used is to have the server precompute all possible outputs for all possible inputs, it can work pretty well even with 10s of 1000s of entries.


One way to do this is to have a formal data schema as a separate artifact, which you then have as a dependency in your server and client projects, and generate the checks from the schema, as well as the SQL DDL.

But yes, if you have truly separate codebases it becomes more difficult, and protocols need to be quite stable and changes to them carefully managed.


> If you have client and server side and you have to check only in one place if conditions are met, this means that you cant check in client side anything and must do a server call, or implement both server side and client side in single codebase

Checks don't have to include the definition of knowledge independently, so multiple checks against the same rule don't need to be a violation. As a simple example, if you have a JSON Schema, that is the single source of truth for validation, and you can validate against it in 16 different places at different stages of processing and you haven't violated the principle that each piece of knowledge should be represented once in a system.
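A sketch of that, assuming the Ajv library (one of several JSON Schema validators):

```typescript
import Ajv from "ajv";

// The schema is the single source of truth for what a valid pizza order looks like.
const pizzaOrderSchema = {
  type: "object",
  properties: {
    toppings: { type: "array", items: { type: "string" }, minItems: 1 },
  },
  required: ["toppings"],
};

const ajv = new Ajv();
const validateOrder = ajv.compile(pizzaOrderSchema);

// The same compiled validator can be reused at the API boundary, before
// persistence, in background jobs, etc., without duplicating the rule itself.
console.log(validateOrder({ toppings: ["mushroom"] })); // true
console.log(validateOrder({ toppings: [] }));           // false
```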


Have not thought about this. Good point. Constraints as data, not as code, a.k.a. a schema.


This is a huge advantage to using Node.js (or Deno) on the server. In a lot of my projects, I have a shared library that is used for data validation and constraints that is used on the frontend AND the backend. Makes validating data on both sides incredibly easy, and changing the library forces changes on the frontend and backend to match (enforced by automated tests)


> this means that you cant check in client side anything and must do a server call, or implement both server side and client side in single codebase

Or you derive one of the implementations from the other, or both from a third one.

But yeah, heterogeneous environments have a tendency of creating unnecessary code duplication.


Partially agree. I think SPOT (I'd always heard it called single source of truth) is a more universally applicable paradigm than DRY. Having said that, the cost of creating dependency chains is often underestimated. Overly dogmatic adherence to SPOT/SST can lead you to make the wrong tradeoff on coupling two unrelated areas of your codebase to unify some trivial truth.

I'd also say there is a lot of nuance about what "truth" is (i.e. is a pizza crust/sauce/cheese an essential truth that should have a single source).

Some DRY definitions I read actually tie in SST but I think many devs don't bring that nuance to it.


> I think SPOT (I'd always heard it called single source of truth) is a more universally applicable paradigm than DRY.

“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system” is the verbatim definition of DRY from when the DRY principle was first articulated.


Yeah there are a lot of definitions out there that are along these lines and they do hollow out my argument.

But why call it "Don't Repeat Yourself" if it actually means something somewhat more subtle than that? I firmly believe many junior developers don't grasp the nuance, and based on the comments I'm not the only one who thinks this. So if DRY is widely understood by developers to mean literally "don't repeat yourself" and nothing more, does it really matter how the formal definition phrases it?

In any event, if SPOT / SST and DRY do mean the exact same thing, I like SPOT / SST better because the names encode the essential concepts of the principle.


Unfortunately, words mean things and people will take names to mean what they say.


I think the rule should be "Try not to repeat yourself"

Rules are like alarms: they draw our attention to some peculiar condition, which gives us pause to think about whether it's kosher and, if not, why not.


The art of programming is finding the fit and exceptions to the rules. It's just, frankly, a lot easier to be dogmatic.

Someone says "never do this" or "always do that" and you can apply those rules with abandon (often leaving a maintenance nightmare in your wake).

There are no rules to programming.


I find it useful to think of it as forces pulling on the design, similar to physical forces acting on an object. There are forces that try to keep individual truths/knowledge and responsibilities in a singular place, there are forces that try to minimize abstractions, coupling, dependencies, and indirections, there are forces that try to maximize coherence and separation of concerns, and so on. It’s an essential part of the job of a software engineer to balance those adequately in the design of the software.


Right. There are also "forces" like management who want the project to be finished yesterday.

Another metaphor I like for programming is Chess. Any line you add to the program constrains its future development, becomes "weight" or "force" that pulls your development into some direction. Sometimes you have to sacrifice features like pawns. Sometimes you may sacrifice security, you may think it is secure enough. The outcome of this game is often a draw, or stalemate. And the same game can continue for years.


I think it's an easier pointer to the concept for people who aren't already familiar with it. It actually came up the other day when I was talking to a junior dev who'd written a benchmark by copy-pasting the entire test harness; they couldn't get their head around my explanations of why centralized responsibility is important (although maybe I was doing a bad job) but once I mentioned DRY the pieces seemed to click into place.


There's nothing wrong with "Don't Repeat Yourself" except if it applies to code rather than knowledge.

Any principle like this that is applied to code is wrong.


And "knowledge" is a far more applicable word than "truth."

Truth implies facts, knowledge implies understanding the meaning and associated course of actions.

This is the second time in as many days that I've read something purporting to go beyond some original. The one yesterday was "we need more than the four types of documentation." All the examples fit into the four types as originally defined.

In HR training, there was even an entire segment on the "Platinum rule" because the Golden rule isn't good enough. Yet anyone who works to understand the Golden rule to any depth knows that it encompasses every "enhancement" the Platinum rule intends without any of the side effects.

What kind of failure is occurring such that definitions don't function any more, I wonder?


Maybe, but SPOT is a more memorable acronym than SPOK (or whatever). It allows you to talk about the “SPOTs” where stuff is defined.

One could also use “SPOTify X” to mean “reducing X to a SPOT”. :)


How about "reduce coupling" in that function, "increase cohesion" in this other function. The DRY principle is intending to get you thinking of coupling and cohesion, which, when gotten backward dramatically increase complexity.

The mechanism to "SPOT that code out, bro!" can be applying varying techniques for reducing coupling, and tightening cohesion. A proper review on a merge request should be making more specific comments about how, not a hand-wave to say "DRY that sucker up."

One final comment: DRY is three characters, therefore obviously more efficient :p


I really like your SPOT better than SSoT, the acronym is so much more on point, really hits the "spot"


I like this.

It is very hard to find out whether a definition already exists in the codebase. This can lead to multiple definitions of the same thing, or of the same truth.

Does anyone have a good way to deal with this?


Unfortunately the issue of lacking a single point of truth is exacerbated the more people who work on a project. I believe the issue in spreading around logic comes from not knowing the original intention, and asking the original authors is, IMO, the best way to fix something or add new features. Obviously knowing the original authors is not always possible, so I try to follow existing patterns.


If the codebase isn’t a total mess, one should be able to guess which components or code paths have to deal with a given truth by virtue of their purpose/function. Then one can investigate the code paths in question to find out where exactly the existing code is dealing with the respective thing.

It should be an automatic thought when implementing some logic to think about which other parts of the system need to be consistent with that logic, and then try to couple them in a way that will prevent them from inadvertently diverging and becoming inconsistent in the future.

In terms of software design, a more general way to think about this is that stuff that (necessarily) changes together (is strongly coupled) should be placed together (have high cohesion).


Some IDEs will warn you about similar blocks of code.


It's interesting to note that this principle doesn't just apply at a low level, e.g. code. It continues to add value when designing application architecture, or it can be used to help refine features.


It also tends to shift the focus, at least in mentally framing the issue, from DRYing out implementations to information. If you're thinking less about implementations (man, I wrote a fold by hand for this thing--maybe I should put all the folds into the same function?) it guards against the "overcomplications" people seem to dislike about DRY. You're thinking more about replacing repeated materializations of the same information with references to a source of truth.

That's why I'm pretty dogmatic in applying DRY to infrastructure as code, for example, as opposed to generically for every code base, because finding or depending on repeated identifiers that have to be the same is such a source of error here.


...we've just hit a DRY SPOT

</inevitable_joke>


One underappreciated hard bit of SPOT is knowing for sure that something does in fact represent the same truth. One of the pain points of the DRY/SPOT model occurs when a new use case arrives that breaks existing truths for certain subcomponents. It can be real painful to decouple things.

This is not a reason to avoid SPOT altogether, but one should think through that situation as part of their mental calculus on the pros & cons.


It’s difficult to discuss this in the abstract, but one benefit of SPOT is that it tells you which code (the users of the SPOT) you have to consider when decoupling/refactoring. In contrast, when it’s decoupled in the first place, but actually represents the same truth, you may have no idea that the other instances exist.

Writing code such that it’s reasonably easy to decouple or recombine existing uses is mostly orthogonal, I think. Usually you can just duplicate whatever is at the SPOT when the need for more than one truth arises.


Also, these are principles, not compiler errors. These principles will compete, and it sometimes helps to have an order of precedence. The number one rule is KISS, and it would have trumped all the examples in the article.

Nit: the article’s problems also have an obvious solution—named parameters with default values when not provided.


In total agreement. I might have a couple of the same snippets of code calculating something in a program and generally would not lose sleep over it (still, I will fix it when I have nothing else to do). But replicating something like the source of truth is a crime in my book.


What are some patterns that can be used to implement this?

I can think of using a Rule Engine but not sure if there are any performant ones and they don't seem to be used much.


Usually it just means that you have a function like `isValidPizzaToppings(ToppingsList)` somewhere that you call in multiple places, instead of having a condition `myToppings.count() >= 1` in multiple places. So, just normal functional abstraction.


This is much better although I'd argue even it can be taken too far.


Any rule in engineering "can be taken too far" in the sense that there are times when other considerations will win. SPOT (which actually seems equivalent to the intent of DRY, although I recognize that that use might not be well represented in the memepool) is something that is probably always a win on its own terms, but in some cases introduces other costs that exceed the actual benefit. I think that's meaningfully different than the overly syntactic (mis?)interpretation of DRY, where it's true that things looking similar might hint that there's something to factor out, but sometimes factoring that thing out is a bad idea simply because it is actually two separate "pieces of knowledge" that just happen to be the same at the moment.



