A set of best practices for JavaScript projects (github.com)
265 points by Liriel 10 months ago | 144 comments



> Always use //todo: comments to remind yourself and others about an unfinished job

No please don't do this, no one will ever do the todo. The code base I'm currently working on has more than 2000 todos.

Instead of this, create a ticket in Jira, Asana or whatever issue tracker you prefer. This way it can easily be taken into account during sprint planning, and even if no one ever opens the problematic file again, the issue will still be visible.


This is misleading advice. Most linters can be configured to fail upon detecting a 'TODO'.

I make liberal use of TODOs when developing, typically for reminding myself to e.g. localize a string I've hardcoded during prototyping or to refactor kludgy code.

A pre-commit git hook runs the linter when I attempt to commit, which prevents me from adding code containing TODOs. Obviously I can just add the '-n' flag to skip the hooks, but that requires intention.

In short, using TODOs appropriately is a massive boost to productivity and the alternative is either to just do those random tasks that pop into your head during the course of writing code or to just hope that you remember to do it later.


Combine the two: I always leave a link to a ticket with my TODOs. An improvement would be to also use a commit hook or linter to enforce it.


I think combining the two is the best. I will edit that, thanks for your feedback.


I generally do a TODO with a date written next to it, then use a tool to scan for them and rank them by date
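That scan is easy to script. A sketch, assuming a hypothetical `// TODO(YYYY-MM-DD): note` format:

```javascript
// Scan source text for dated TODOs and return them oldest-first.
// The // TODO(2018-03-01): ... format is an assumption, not a standard.
function rankDatedTodos(source) {
  const pattern = /\/\/\s*TODO\((\d{4}-\d{2}-\d{2})\):\s*(.*)/;
  const found = [];
  source.split('\n').forEach((text, i) => {
    const m = text.match(pattern);
    if (m) found.push({ date: m[1], note: m[2], line: i + 1 });
  });
  // ISO dates compare correctly as plain strings, so the oldest sorts first.
  return found.sort((a, b) => a.date.localeCompare(b.date));
}
```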


I don't think anybody's saying not to do it before merging. (How would anyone even tell?)

> A pre commit git hook gets triggered to run the linter[...]

Git hooks actually need to be set up to work, so this incurs overhead for onboarding.


This is one reason why I find GitHub (especially Enterprise) so frustrating. I want simple server-side git hooks to verify that client-side hooks were used (or at least a good forgery). I don't want to set up an additional CI environment just to run the most basic sed commands. Nor do I want to make my existing CI environment more complicated by merging disparate validations: unit-test validation and commit validations.


I built a simple system for some of our repos that fails the build if you don't have the shared team git hooks set up properly, and on failure directs the new engineers on how to fix the situation.


And really, one should just set up the hooks automatically when the build script is run, if you really want to reduce friction.


A great idea, but this also takes time, albeit once rather than N times. Any chance you could share some of the scripts (etc.)?


Your POV is understandable, but I treat it differently.

They are essentially comments that explain missing functionality. You shouldn't add future features or random ideas, but obvious missing functionality or workarounds that live at this spot in the code.

I also prefer to leave my name, b/c blame never works as it should. It's not the name of the person who'll do the TODO, but of the one who wrote it:

    // todo(andreasklinger): feel free to remove this when the normalization api 
    //   workaround (see fileX.js) is no longer needed
Related: https://dev.to/andreasklinger/comments-explain-why-not-what-...


How about, instead of comments, you put in calls to aptly named functions?

Comments are, in general, a code smell.

If a comment is about validation of input or invariants, call a validation function.

If a comment is about doing some extra thing, then call a function.

This makes you think about how your code is supposed to be structured instead of someone repeating the same code or comment in 1000 places, for example.

It's fine for functions to be blank or private but at least we see that there are real pieces of the API missing.
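The idea can be sketched like this (the names are hypothetical):

```javascript
// Instead of a comment like:
//   // make sure the user id is a positive integer
// call a function whose name carries the same information and enforces it:
function assertValidUserId(id) {
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error(`invalid user id: ${id}`);
  }
}

function loadProfile(id) {
  assertValidUserId(id); // the "comment" is now a checked invariant
  return { id, name: `user-${id}` }; // placeholder body
}
```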


The problem with this is that you're coming up with an abstraction at a time when you're possibly the least informed about what the abstraction should look like.

And then the actual implementor is going to probably roll along with the scaffolding you started.

I don't see why people are so hostile towards TODOs. If they don't get done, that sounds like a process problem, altogether something that is going to change project to project, not something to bikeshed on HN every. damn. time.


I am not just talking about TODOs, but ALL comments.

Comments do not use the structure of the language or the IDE to help you. They can't be statically analyzed as easily as the existing syntax. At best, there can be some DSL that is implemented in the comments.

I would submit that 99% of your comments can be changed to closures in Javascript.

> The problem with this is that you're coming up with an abstraction at a time when you're possibly the least informed about what the abstraction should look like.

I would say that one of the best times to figure out what the API of the abstraction should look like, is when you're implementing the API of functions that apparently rely on it.

You don't need to implement it, but you do need to think about the implementation and API. Otherwise your comments are just wishful thinking, and may not even be implementable. Perhaps the invariants you rely on might be too expensive to compute, for example. When do you think the best time to figure that out is, if not the time when you're writing code that relies on those same invariants?

For example a comment like this:

  function something(platform, options) {
    if (platform !== 'ios') {
       throw new Error("Not implemented yet folks!");
    }
    doingSomethingOnIos();
    // todo: implement for android
  }
  
should probably instead be replaced with an Adapter pattern where you implement the iOS thing now and add an empty Android adapter. Much better to assign tasks to developers and have them already know where to fill in the code, AND have an example in your iOS adapter.

  class Something_Ios extends Something implements Something_Interface {
    // actual tested implementation is here, for others to see
  }

  class Something_Android extends Something implements Something_Interface {
    // TODO: see interface and implement
  }
What's better... the first thing or the second thing?

In an ACTUAL PRACTICAL SENSE the second is better.

Now the second one may have a "TODO" but the instructions may just as well be in the issue.


I prefer TODOs. At the places I've worked filing an issue was a sure way to bury something to never be seen again. You file-and-forget. At one point we even wiped the tracker just because it had so many issues. Definitely a management problem but still a true story.

How do you know that a function is unfinished if you have thousands of issues? Adding a TODO puts it in your face so pretty hard to miss.


A variant I've seen is //TODO(#7362) where the number is a ticket and is enforced by a lint rule.
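A full ESLint plugin is more work, but the core check is small; a sketch of the rule's logic (the `TODO(#1234)` format follows the comment above):

```javascript
// Return 1-based line numbers of TODOs that lack a ticket reference
// like TODO(#7362). (Minimal sketch, not a real ESLint rule.)
function unticketedTodos(source) {
  const bad = [];
  source.split('\n').forEach((text, i) => {
    if (/\bTODO\b/.test(text) && !/TODO\(#\d+\)/.test(text)) bad.push(i + 1);
  });
  return bad;
}
```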


This is what I have found to be a great compromise for my team. Some like todos, others want tickets, so we enforce/support both.


If it works then great, but that sounds like a way to piss everyone off by forcing them all to do both.


Yeah I find our "TODOs" don't live very long in the code base.


That's a great idea


this guy gets it


This is a really good idea.


> No please don't do this, no one will ever do the todo.

I've worked in numerous places where TODO placeholders were regularly processed by devs as an explicitly scheduled exercise. If the TODOs are being left to rot, that seems a symptom of bad dev-process management rather than something inherent to the pattern itself.

> create a ticket

While I've never seen it rigidly enforced, most TODOs I've worked with have been linked to relevant tickets (with ticket #number being referenced in the TODO comment).

It's also clearly common enough practice to be supported out of the box, without config, by many IDE syntax highlighters and, I think, by some linters.


Also, a ticket doesn't necessarily create accountability for the task getting completed. If your process management doesn't include triage, and triage doesn't consider tasks that haven't been touched in a while, it probably needs some work too.


At least, today, if you want // TODO or // FIXME, do both: add a ticket number, like // TODO Refactor for Greater Good #123 (or a full link, like https://<github>|<jira>/#123).

Filing issues can feel like "busy work", but it really is the least bad way to split up work and keep track of things.

I think Fossil has the right idea here, distributing issues along with the source code repository (and there are a few attempts to bring that to git/mercurial, but I've yet to find one that really works).

In general, project "metadata" like documentation and tickets needs to be kept in a database, and there needs to be cross-referencing between code revisions and ticket state as well as the wiki/documentation; but they are probably 2 or 3 "different" projects.


> do both

Not all workflows/priorities support this.

Sometimes you have release windows to meet. Sometimes one task has been authorised while another has not. Sometimes there is a chance a piece of code will soon become obsolete, so a TODO is conditional on that not being the case.


> No please don't do this, no one will ever do the todo. The code base I'm currently working on has more than 2000 todos.

Visual Studio integrates TODOs into its UI. After rolling off of feature work I can pull up the TODO list in VS and start picking tasks off, jumping right into the code.


What exactly is an "unfinished job"?

If it's functionality that doesn't exist yet, it doesn't need to exist until there is something that requires it, in which case a TODO is useless and your ticket will be a meaningful business requirement if and when that requirement exists.

If it's "tech debt," then you have to ask yourself "can it ship like this?" If the answer to that question is "yes," then you should really be asking yourself "does this need doing?"

If at that point the answer really is "yes" (unlikely) then hopefully your team has enough context and understanding of their codebase that they can do it when appropriate - if you really really have to, raise a ticket.

TODOs are a symptom that something is wrong up the chain.


As others have said already, create a ticket instead of a TODO.

But also, instead of just writing TODOs, explain the context and reasoning behind what needs to be done. In the future, when this code is visited again, this will help you or others get into the right frame of mind to make a proper decision about what to do with it. Maybe the reason to do something is no longer valid; if the intention is properly documented, that decision can be made at that point. This allows for proper refactoring instead of skipping TODOs and keeping code around for legacy reasons.

So instead of: //todo: remove this code

Do: // required for legacy API clients, should be removed after last client is deprecated.


> //todo: remove this code

IS fine when the code isn't "finished" within one ticket/cycle, etc. After that it needs more detail. However, whether there is a ticket or not, I'd still add a TODO (one that refers to the ticket id), so that it is more discoverable.

There's just as great a risk that information gets lost in the ticketing system.


Nothing as permanent as a temporary solution.

I've seen countless TODOs that would be resolved the next sprint/day but never were, due to priority shifts or "it works".

It also doesn't matter whether you lose track of a TODO in your ticket system or your codebase. You will lose it, and that can be OK, as long as context and intentions are documented with the code. External or even in-repo docs often don't get updated. Document where it counts.


> Nothing as permanent as a temporary solution.

Maybe so, but what does "doing them immediately" solve if you're pushed for time anyway?

> It also doesn't matter whether you lose track of a to-do in your ticket system or your codebase

No, but it does matter if one is more likely. If a TODO is listed right next to the code, anyone who begins working on it has that context immediately.


I'm not suggesting solving them immediately; I was merely stating that even the smallest temporary solution will stay around longer than expected at the time. Instead of leaving only a TODO (issue tracking) in code, leave decent context in code so it is not lost no matter what issue-tracking system is used. Using the codebase for issue tracking is fine as well, but issues will get forgotten or become irrelevant. Having just TODOs with no context (the ticket was removed from an old issue tracker, or the issue was never tracked) just creates legacy code which is hard to refactor or delete without a proper time investment.


Do both and reference the ticket in the comment (something like "// TODO: deal properly with NULLs here, see Jira://PRJ-123", with the ticket going into as much extra detail as is relevant).

That way someone looking at the code can see that it is not in its intended final form, rather than having to intuit the fact from other clues or know every ticket in the work list. Particularly useful if trying to find/diagnose a bug that may be caused/worsened/highlighted by the code in question. It also works in reverse: if the ticket doesn't give accurate location information for the related code, you can search for the ticket ID and find where you should start looking from the comments.


On the other hand, if you are using `TODO` comments within your code, a good idea is to place something like the `fixme` CLI [0] into a prepush hook [1] so that it is always highly visible.

The length of this output then acts as an intuitive measure of accrued technical debt and can be leveraged within a team to get time to fix the underlying problems.

[0] https://github.com/JohnPostlethwait/fixme

[1] https://github.com/typicode/husky


On a side note, I really can't stand Jira, and I don't understand why it's become so popular. Even Visual Studio Online is far less cumbersome and easier to navigate.


The main issue with that is you can't just clone the repo and walk off with the list of things to work on; you ALSO need the ticketing system constantly accessible and up to date to reflect the code you're working with. But it does seem way neater and easier to use the issue tracker for this purpose: right tool for the job.


Also "unfinished jobs" shouldn't be getting merged in the first place. If you see something unrelated to the feature you're working on in that particular commit, make a ticket on JIRA/Other and address it in a separate commit/PR.


I came here to write this exact same comment. If people insist on having TODOs in code, I make sure they at least reference a Jira ticket.

The code should describe your business logic, not the work to do to build that code. What if no one looks at that file for a year?


Why not put the business logic in the ticket too? The thing about the TODO is that it is more discoverable by devs who are likely to be looking at that code.

Nothing wrong with referencing tickets in TODOs, except when references go stale. It's easier to just do periodic 'TODO' searches on mature code.


Agreed. And moreover, the developer already credentializing in a section of code (like working on the router for the past week) is the one that's likely to smash through some outstanding TODOs in that section as well.

What doesn't seem to work is to turn every tiny TODO into an issue and then hope someone decides to recredentialize in some possibly large code just to fix some small issues because they stumbled upon it on the issue tracker.


It should also be noted:

When working on X, and noticing another piece of work Y, thinking too hard about Y can break context/concentration and your focus on X.

Focus needs to be preserved, which is why you drop TODO markers rather than "just doing them": you are already doing something, and doing something else means not doing what you are meant to be doing now. Not pragmatic at all.


Agreed. The simpler answer is to never commit unfinished or broken code. Inline TODOs are a definite no-go; they imply a lack of control over workload management.


Or better, don't add more tech debt! The TODO list will just keep growing, and every TODO will slow down further development. Just do the extra work.


This reminds me of the old saying "if you don't have time to do it now what makes you think you'll ever have the time to do it later?"


What I need right now might be a working algorithm which gives the right output and gets the job done for the handful of inputs it's being given.

What I know right now (since I've just been working on the code) might be a much more efficient way to do the same thing, which requires some tricky implementation details, handling of edge-cases, etc.

YAGNI, premature optimisation, etc. says I should stick with the working, slow version until it becomes a bottleneck. The future dev (or future me) who hits a speed issue, does some profiling, and sees this function eating resources, may appreciate finding a comment there listing known issues and potential alternatives.

Depending on how bad the performance gets, they may definitely have time to implement the replacement, as they wait for the slow version to finish ;)

(Disclaimer: I'm currently trying to speed up an O(n^3) algorithm which was fine for n=200 but is now taking hours for n=3500)


Exactly - YAGNI essentially says you don't need TODOs littering your code. You can cross that bridge if and when you get there.

Your disclaimer is interesting. What's more useful than a codebase littered with potentially useless TODOs is one containing an explicit set of assumptions agreed upon by the product stakeholders, product owner, and implementation team. Then, when some aspect of the implementation is influenced by one of those assumptions (say your n=200, against an assumption of n<=500), you can reference that assumption right there in your code. If at some later point the assumption changes, you can easily find all the code that needs to be revisited.
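Sketching that idea (the assumption-registry id `A-7` is hypothetical):

```javascript
// ASSUMPTION A-7 (hypothetical registry id, agreed with stakeholders):
// n stays <= 500, so the cubic scan below is acceptable. If A-7 ever
// changes, grep the codebase for "A-7" to find every affected spot.
function countSummingTriples(xs) {
  let count = 0;
  for (let i = 0; i < xs.length; i++)        // O(n^3) by design; see A-7
    for (let j = i + 1; j < xs.length; j++)
      for (let k = j + 1; k < xs.length; k++)
        if (xs[i] + xs[j] === xs[k]) count++;
  return count;
}
```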


Because thousands of times in my life I've had more time the day after today.


What makes you think you ever won't?

Plus, sometimes, it might not need to be done tomorrow, if it turns out to be obsolete.


Because I need to push the code out today and I have other obligations to meet in my life.


"Tools, not policies".

I must say, this page reminds me of how lightyears ahead Perl was/is in terms of tooling.

For example: are you coding a package? Before even starting to brainstorm, just bootstrap your package with ExtUtils::MakeMaker. Long story short, that tool will bootstrap a mostly blank (but customizable via various command-line switches) Perl package with most things you will need for development and distribution: a directory structure, a Makefile (for building and installing), some pre-set comments, mostly blank test files, etc.

This is something that, for example, the Go programming language team has rightfully picked up in terms of directory structure conventions and tooling (gofmt).


The JS ecosystem tries to do that by having bootstrapping repositories.

Although I will have to say that it is nowhere near as high-quality and convenient as Go's. Standardising best practices most often seems too restrictive to programmers, but Go has done an excellent job of drawing that line and staying behind it.


> I must say, this page reminds me of how lightyears ahead Perl was/is in terms of tooling.

Perl has such a mess of a syntax that its meaning is ambiguous to any static parser, so I find it difficult to believe it has such amazing tooling; JetBrains, for one, refuses to implement any official support for it (see https://youtrack.jetbrains.com/issueMobile/IDEA-87976#).


This seems like an unrelated dig on Perl. There are many things that Perl did better than any contemporaneous language, all while having an ambiguous static parser.


This is a mishmash of coding standards (at ABC CO we use tabs!) randomly sprinkled with best practices. It's important to standardize on things even without one way necessarily being better than another. The organization also seems a bit rushed. For example, not exposing the DB schema in a URL shouldn't really go into the section on encoding. Moreover, SQL injection is almost never fixed by an escaping function, as by far the most common problem with SQL injection is letting the client specify field and column names, or things that aren't quoted (e.g. numerical IDs) and so don't benefit from escaping.

I don't want to discourage any company from setting and documenting coding standards, but this particular document feels a bit rushed and could use adherence to best practices of its own. For example,

1. If making an opinionated mandate, provide a link to an explanation

2. Consistently provide links to examples

3. Maybe do some spell-checking: "Hallo"?

4. Try to maintain a uniform level of specificity

E.g. in the section on URL security, there is this vague comment: "Attackers can tamper with any part of an HTTP request, including the URL, query string,"

Which, while true, doesn't describe what the programmer should or shouldn't do. It's just hanging there. Compare that with the weird specificity of listing git commands in section 1.

That tells me the author knew a lot about checking stuff into git but was a bit out of his/her league when it came to security, so they made generic security statements while giving precise instructions for using git.


A lot of companies these days create "best practices" guidelines just to boost their online recognition. Very often I find these guidelines useless or even harmful.


What in this document do you find useless or harmful?


Basically every statement that says "Never do A" or "Always do B" without providing any justification. Documents like this only contribute to holy wars between junior developers by creating an impression that they should blindly obey some random set of rules rather than form balanced opinions.

This document should rather be called:

"Some practices for JavaScript projects that work for devs at Hive but might not necessarily work for you."


> What in this document do you find useless or harming?

The use of TC39 proposals IN PRODUCTION. The authors of that document aren't serious.


Presumably they use a tool like Babel to compile down to present-day ES in production?


> 3. Maybe do some spell-checking: "Hallo"?

I figured the string was German.


Any recommendations on best practices for JS?


Sorry, I figured with all the critiques here someone might have an alternative reference.


It's fine to have company-wide coding standards. You don't need to call them "Best Practices"; "Our Practices" is enough. You can iterate and find out which of these are OK through experience and feedback. Some of the rules here are very specific and just won't work well in the general case (e.g. directory organization, which should probably be decided at a lower level than a company-wide policy). Others are too vague to be usefully actionable (the security stuff). Having developers instead of legal read licenses and decide which projects are OK to include in your source seems dangerous.


Sure, that's why I asked what other sources are more legitimate/tested.

This way I can develop standards for my team, which as of now really doesn't have any.


All the big companies have style guides, but protocols are harder to transplant because it's not reasonable for smaller companies to follow the same practices as big companies.

I wouldn't adopt more process overhead without a specific reason.

E.g. if you are having quality issues where R&D is shipping stuff before it's ready, then institute a more formal process of sign-off for a release so R&D needs approval from QA.

If you are having problems making builds repeatable, then start standardizing the environments with golden images, put everything in version control, etc.

If you have issues with licenses for third party code, a 3PP (Third party program) process with sign off from legal/other teams (security would be nice).

For each set of problems, there are processes designed to address those. You will also encounter less resistance when the process is a solution to a problem rather than adherence to a best practice. I also wouldn't try to adopt processes just to satisfy a general desire for more process, because there are costs to adopting process overhead, especially the cost of becoming a bit dumber, as an organization, each time you adopt a process to formalize what was previously a judgement call by some decision maker.


"Use stage-1 and higher JavaScript (modern) syntax for new projects. For old project stay consistent with existing syntax unless you intend to modernise the project."

That doesn't seem like a good idea. I can understand recommending stage-2 and/or stage-3, but stage-1 seems premature. Stage-1 in the TC39 process is still a proposal stage, flagged as expecting "major" changes to the spec and possibly the syntax of the feature. Being stage-1 is also a poor guarantee that it will progress to future stages or become part of the spec. Stage-2 at least signals that the committee thinks it's likely to eventually become part of the spec with only minor revisions.


You're actually right. I will change that to stage-2. Thanks for feedback.


> Organize your files around product features / pages / components, not Roles:

Oh really? Can someone explain why this is a good thing and what the benefit is?

[1] : https://github.com/wearehive/project-guidelines#6-structure-...


you want to slice your abstractions by domain logic boundaries

essentially by "what it does, not by how it does it"

eg let's say you want to do notifications for your site.

you want to have everything related to notifications within one folder and have the outside connect to it via one api (eg component)

in backends this would enable you to easily scale in performance (eg move to microservice or separately hosted instance) or team size (isolation in code, less awareness of the overall codebase "needed")

imo the next big web framework after rails will have this setup as a default

    `app/services` instead of 
    `app/controllers, app/models, app/decorators, app/workers, app/assets, app/views, etc`


Interesting point.

    app/services/signup
    app/services/signin
    app/services/newsletter_subscribe
    app/services/create_report
    ...etc
In larger Rails projects I've been on, we often start with a monolith but then break it into different services to scale out.

Starting with different services at the beginning may cause other problems: what if the boundaries between services were wrong?


if the abstraction was wrong you have the same issues. no matter if you colocate files or not. most likely it is just less of a mess


Higher cohesion and modularity.

I'm glad to see this in an article for JavaScript projects. Here are a few articles for Java projects, that go in more details (I think the concepts still mostly apply to JavaScript projects):

http://www.javapractices.com/topic/TopicAction.do?Id=205

http://www.codingthearchitecture.com/2015/03/08/package_by_c...


It helps the case in which you want every feature/section of your application to be a smaller self-contained application which can be modularised and loaded into other applications and environments.

I think this is the first time I saw the approach: https://github.com/erikras/ducks-modular-redux


Don't Unix tools in general fit this approach? Do only one thing and do it well, offer an interface to other tools and users.


Creating or updating a component is usually a cross-cutting concern, so if you want to change some functionality in a component you'll have to update its controller, view, model, etc.

If they are in the same folder, it makes moving between the relevant files for a feature easier (especially if you have a lot of components and finding the right file in your "controllers" folder means searching through a few dozen files).

It also makes reasoning about a certain feature easier: since all the relevant files are grouped next to each other, you can easily move between a feature's controller and its view.

It also makes moving a feature easier: instead of renaming each file separately, you can simply move the folder anywhere else in the structure (for instance, if you move a feature from a specific page to something more generic/shared).


We always try to ensure that one folder could ideally be copied out into its own project and live on as its own npm module.


I'm not so sure that's a good idea. A component is one possible interface to the business logic, but there isn't necessarily a 1:1 mapping.

Colocating data logic together makes it easier to build a data model that is logical and consistent, rather than one that is coupled to UI decisions.


It's not a black-and-white thing: you can have a feature structure for your UI and services, and one folder with just the data models.

Sometimes your frameworks kinda force it on you too.


Wow. Came here to make this comment. It seems like the whole point of a model layer is to decouple it from the UI and the Controllers.


In addition to notes in other comments, vertical slicing helps to avoid a tendency for rigid layers to emerge (think the rigid and doomed-to-be-messy Controller -> Service -> Repository "pattern"), which ultimately helps lead to looser, more naturally composed component structures.

You can still have files and folders that cut horizontally across those modules (e.g. for things like logging, sending an email, etc. etc.) - in fact, organizing your hierarchy in vertical slices helps to _promote_ good isolation of those cross-cutting concerns.


I think this kind of structure is really useful for frameworks based on components (like React), you can just copy/paste the content of these folders in other projects and it will work.


Yes, when your project is just a single functional layer (e.g. React) you can subdivide your code along feature lines. But the entire codebase will include server-side code, DB code, and lots of other code that cuts across functional areas, and you are going to have a hard time organizing that according to features.

Do you put bits of the DB schema definition files/SQL code in there as well? Back-end server code? What if it's a completely different team working on that code? What happens when you decide you want a permissions engine or data-access layer and need to couple your UI logic to those layers? Or when you add a replication engine to sync state across different instances, or dedicated code handling session management, etc.?

Over time, the code is going to reflect team structure and functional roles rather than features; you can really organize by features only when there is one functional area (e.g. a React app).


Hi guys, I published this document. Here at http://www.wearehive.co.uk we work with developers of various levels of experience. We had to come up with a set of rules to bring some consistency to what we produce and make it easier for others to pick up and maintain. It does look random and it's not perfect; that's why we decided to make it public and receive feedback. I've already made notes from some of these comments. If you think something is wrong and shouldn't be there, make a pull request, or just let me know if you're too busy to do that.


Can someone please explain to me the rationale for shoving your tests in the same place as your code? I have seen this before, and every time I have tried it on anything beyond a simple project it has ended up becoming a nightmare.

Oftentimes you have numerous test-specific helper functions and tons of other test-oriented cruft; where do all these things go? The root of the project? Also, where do tests that encapsulate logic between multiple components go?


  > Can someone please explain to me the rationale for shoving
  > your tests in the same place as your code?
Basically, (1) you are quickly able to see whether a test exists for each file without needing to explore a different directory structure (and therefore it's easy to open the tests for a file), (2) the require/import paths are not like '../../../../../' (which is awkward to reason about) or requiring alias resolver configurations (which are environment-specific and fragile), and (3) you are not maintaining an extra directory structure which could diverge by accident.

  > Often times you have numerous test specific helper 
  > functions, and tons of other test oriented cruft,
  > where do all these things go?
  > The root of the project? Also where do tests
  > that encapsulate logic between multiple components
  > go?
Modularise and publish any test helpers which are actually general so they can be imported where they are needed. And if they are too specific for this to make sense, why are they being shared?

Tests which encapsulate logic between multiple components should be next to the file in which you implemented this logic.

If you're still unsure on how this is possible, investigate a large modern JavaScript project [0]. Many of these are successfully implemented in exactly this way. There are always edge-cases that go against what I've said, but in a big enough project you will also be able to see this.

[0] https://github.com/facebook/react/blob/master/src/renderers/...


These aren't "best practices" any more than how I organize code is "best practice". These are purely opinions and shouldn't be called "best practices".


Points (2) and (3) are fact and not opinion.

And it's clearly not an opinion that many large, modern JavaScript codebases are following this practice effectively. I linked to 'React' itself as proof of this...

Also, I personally never used the term "best practice"; however, I will say this: (1) these practices are common within some of the most popular JavaScript projects, and (2) having used them and also previously stored tests separately, I do prefer storing tests alongside the files I am testing for the reasons I stated. I am not sure why, but your response felt a bit like you were angry with me and putting words in my mouth.


> Points (2) and (3) are fact and not opinion.

None of your points are facts.

> And it's clearly not an opinion that many large JavaScript projects are following this practice. I linked to 'React' itself as proof of this...

React isn't that large of a project. It has a lot of contributors, but that doesn't make its codebase large.


Sure they are.

  > 2) the require/import paths are not like '../../../../../' (which is awkward to reason about)
  > or requiring alias resolver configurations (which are environment-specific and fragile)
Compare writing an import path within a test of `import Button from '../Button'` to `import Button from '../../../../src/features/abc/components/Button'`. The latter requires you to count the directories in the path and replace each with '..'. This is a pointless waste of mental energy, and the former solution removes the need to do it.

The other solution, as I described, is to use things like aliases, but this causes you to create complex configuration for your tests [0] which must be maintained and means your tests are now dependent on this environment.
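For reference, this is roughly what such an alias configuration looks like in Jest (a sketch using Jest's real `moduleNameMapper` option; the `@src` alias is hypothetical and has to be kept in sync with the bundler's own resolver, which is exactly the fragility being described):

```javascript
// jest.config.js (sketch): the alias is declared here *and* again in the
// bundler's resolve.alias; if the two drift apart, tests and builds disagree.
module.exports = {
  moduleNameMapper: {
    '^@src/(.*)$': '<rootDir>/src/$1',
  },
};
```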

If this is just an opinion, please demonstrate that it's incorrect.

  > (3) you are not maintaining an extra directory
  > structure which could diverge by accident.
You are suggesting to me that `src/features/abc/components/Button.js` and `test/features/abc/components/Button.js` are not separate file-system structures which can diverge?

Because they are separate directory structures it is possible for them to diverge. This is a basic implication from the fact that they are separate structures.

If somebody moves `src/features/abc/components/Button.js` to `src/features/def/components/Button.js` you are telling me that you will not need to make any changes at all to `test/features/abc/components/Button.js`?

[0] https://github.com/trayio/babel-plugin-webpack-alias/issues/...


If you use type annotations you basically already do that. The type assertions are mixed right into the business code! And from there it's not much of a leap to keep the unit tests close by.

Integration tests and other, larger tests should still be kept somewhere else, because they often don't test a single component (or a single module) but a larger part of the whole application, which is harder to pinpoint in the filesystem. So most people put those tests under a top-level tests/ folder inside the project.


> If you use type annotations you basically already do that. The type (assertions) are mixed right into the business code! And from there it's not much of a leap to keep the unit tests close by.

Unit tests are about testing behavior, not types. Using type annotations has nothing to do with testing.

Putting tests right along the source code is a matter of convenience thus an opinion.

In some languages like Go it is even "good practice".


Good types express behaviour.


That's a problem I always have with these kinds of things - when someone says "Place your test files next to the implementation" I always find myself thinking what the reasoning is behind the rule.

For that particular recommendation I can see arguments both ways - mind you I keep tests separate myself but I'd be interested in reasons for keeping them together.


I suspect it's an audit trail: you see the set of tests right there. One issue is that I think this conflicts with the recommendation to use Component/index.js or Component.js rather than Component/Component.js.

I find you do want to find components through your editor (e.g. <cmd>+p in VS Code), and finding lots of index.js files doesn't help. You will in effect create a Component/Component.js but also use Component/index.js to export Component.js.

Also if you use Jest this sort of falls apart.


For me (and the teams I work with) it's to encourage modular thinking - everything is a candidate module that can be extracted, put in source control and shared.

For tests that go beyond small (unit/spec) level we have a project level tests folder because these tests make use of multiple modules and it wouldn't make sense to place them closer to the code.

Someone else in comments mentioned cohesion - that's another way of thinking about it.


One issue that is sort of addressed in this that I've been trying to find a solid answer on is how do you strongly ensure a given NPM package and its dependencies are safe to use?

So far the only real solution I've come up with is to do a signals review (GitHub? issues? downloads/day? etc...) and then review the code block by block, carefully examining any complex or potentially dangerous code.

It works, and you learn a lot about the packages and why they depend on each other, but at the same time this process is exhausting.

What would be a godsend is some major security brand who can vouch (even at cost) for a given version of major packages including their dependency tree.


Are you looking for something like https://snyk.io ?


That does seem pretty solid, thanks!

I'll have to start testing dependencies there.


also maybe look at https://nodesecurity.io/


Well this is definitely one way to do things... The reality of it is if you're making a project, and everyone is on board with the standards of how to do things and it's not making it unnecessarily confusing, you're practicing good coding. For example, if you have your tests isolated to a testing folder vs in a product/feature folder and everyone is on board, then it doesn't make any difference. I'll just look in the test folder for the tests rather than the feature folder.

Honestly, I don't get the obsession and combativeness around "best practices". Best practices are whatever your team uses effectively to get shit done.


A set of best practices? It rather looks like a set of random thoughts.


I can't say if these are the BEST practices for JS, or if even such practices could exist. But I do think that every company should have a standard set of written out practices/guidelines and publish them. They should probably even include them in their job listings. I hope people will comment on the potential pitfalls of doing so, but a benefit would be that new and prospective engineers would have a clearer idea of what writing code for the company would be like.


> "Only use nouns in your resource URLs, avoid endpoints like /addNewUser or /updateUser . Also avoid sending resource operations as a parameter. Instead explain the functionalities using HTTP methods:

GET: used to retrieve a representation of a resource.
POST: used to create new resources and sub-resources.
PUT: used to update existing resources.
PATCH: used to update existing resources; PATCH only updates the fields that were supplied, leaving the others alone.
DELETE: used to delete existing resources."

This used to be hip and cool and "look ma, i'm modern, i'm doing it REST-style!", but i don't think the fad holds anymore.

In real life, you are going to do many more "operations" on a "resource" than just "retrieve", "create", "update", or "delete", and thus you need to use operation names. It makes more sense to always send the operation name and to standardize on operation names. Also, operation names can be more explicit and unambiguous.

It is, for me, a very bad idea to mix up your business model semantics (actions performed on the business model) with the semantics that belong to the OSI-model application layer (HTTP methods: GET, POST, PUT, PATCH...).


This is also something I think about. It seems that this approach is slowly getting out of fashion and people are starting to prefer APIs similar to SOAP (but JSON based instead of XML).

However, if it is an HTTP-based API, I still think it should be exposed in a way that uses HTTP methods correctly, as they align quite well with what web applications mostly do (CRUD operations).

If you need to do some extra operation, you could create a resource which wraps that operation. For example, triggerScheduledJob could become POST /scheduled-job-triggers, with a body describing the parameters of the triggerScheduledJob method.
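A sketch of that wrapper from the client's perspective (the route and field names here are hypothetical, following the parent's example):

```javascript
// Hypothetical sketch: the triggerScheduledJob operation wrapped as a
// POST to a /scheduled-job-triggers resource.
function buildTriggerRequest(job, params) {
  return {
    method: 'POST',
    url: '/scheduled-job-triggers',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ job, params }),
  };
}

const req = buildTriggerRequest('nightly-report', { full: true });
console.log(req.method, req.url); // POST /scheduled-job-triggers
```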

Another approach seems to be to use protocol buffers and something like gRPC. This is good for command-line tools or internal communication between services, but a public API exposed over HTTP should probably still be designed to accommodate HTTP verbs.


Actually, I think it is more like simple RPC than SOAP. I'm personally betting big on GraphQL.


Do both together.

Use GET/POST/PATCH/DELETE for the simple CRUD on the resource.

Then, put your actions under POST /resource/{id}/actions/{action}, e.g. `/repositories/1/actions/archive`, `/users/1/actions/ban`, `/customers/1/actions/credit-score-with-bank-of-xyz`, etc
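A minimal sketch of that routing convention (the action handlers here are hypothetical stand-ins; a real app would register these on an HTTP framework's router):

```javascript
// CRUD verbs act on the resource itself; custom operations are dispatched
// as POST /:resource/:id/actions/:action.
const actionHandlers = {
  'repositories/archive': (id) => `archived repository ${id}`,
  'users/ban': (id) => `banned user ${id}`,
};

function dispatchAction(method, url) {
  const m = url.match(/^\/([\w-]+)\/([\w-]+)\/actions\/([\w-]+)$/);
  if (method !== 'POST' || !m) return null; // not an action request
  const [, resource, id, action] = m;
  const handler = actionHandlers[`${resource}/${action}`];
  return handler ? handler(id) : null;
}

console.log(dispatchAction('POST', '/repositories/1/actions/archive'));
// 'archived repository 1'
```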


I find the "/actions" slug a bit goofy and unnecessary. Why not just put it at e.g. "/repositories/1/archive"? Additionally, why not maintain restful semantics by using PUT (or more likely, PATCH) instead of POST (which implies creating a new resource)? And "/customers/1/actions/credit-score-with-bank-of-xyz" sounds a lot like a GET request.


Because there could be related resources, e.g. "/repositories/1/users", or "/users/1/comments".

I agree it's not necessary, but having /actions just makes it clearer. For example:

"/repositories/1/bans" shows me all bans

"/repositories/1/ban" ban action

I know it's a bad example, but you could end up with other similar related resource and action names that could increase chances of silly mistakes.


I agree about /actions, it is unnecessary. However, for these kinds of "action resources", POST is definitely the best-matching request method. It does not mean "create a resource" (although this is one case it is often used for), but "process this request in your custom resource-specific way". The HTTP RFC formulates it like this [1]:

> The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.

Given that what an action resource does is, by definition, resource-specific, POST fits perfectly.

[1]: https://tools.ietf.org/html/rfc7231#section-4.3.3


I don't see why you'd want the /actions/ segment at all.

That's like using a /resources/ prefix at the start of every endpoint.


(Removed this comment as it was incorrect)


No. HEAD is the same as GET, but the server should not return a response body. The response code and headers should all be the same. This is to guarantee that a path is serviceable, response body size, and for verifying the client's cache.
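One way to honor that guarantee is to derive the HEAD response from the GET path, so status and headers always agree and only the body differs (a hypothetical sketch in Node-flavored JavaScript):

```javascript
// Sketch: compute the full GET response, then strip the body for HEAD.
// Content-Length still reflects the body the client *would* receive.
function respond(method, resource) {
  const body = JSON.stringify(resource);
  const response = {
    status: 200,
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body),
    },
  };
  if (method === 'GET') response.body = body; // HEAD: metadata only
  return response;
}

console.log(respond('HEAD', { id: 1 }).headers['Content-Length']); // 8
```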


You are correct, sorry about that. Removed my comment to not spread incorrect information.


How do you collaborate on a branch using this workflow? The constant rebasing means you can't have multiple devs pushing to the same branch.


The ironic thing about this is that the link they post as a justification for rebasing feature branches onto develop is actually a very explicit reason _not_ to be rebasing in this situation.

The section reads like the author thinks that the problem with rebasing public branches is merge conflicts (it isn't; it's rewritten/diverging histories) and that interactively rebasing somehow resolves this problem (it doesn't).

This smacks of something that I come across all the time: strong opinions on how to use Git held by people who don't actually understand Git.


Create a feature branch. All collaborators branch off the feature branch and PR into the feature branch. When the feature branch is ready, make a PR into develop.

The problem with this workflow is that it prevents rebasing unless someone rebases the feature branch manually. You would have to make a merge commit to follow this workflow safely.
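A rough sketch of that workflow in git commands (branch names are hypothetical; it runs in a throwaway repo, so remotes and PRs are only indicated in comments):

```shell
# Self-contained sketch of the shared-feature-branch workflow.
cd "$(mktemp -d)"
git init -q .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "init"
git branch develop

# the shared feature branch comes off develop
# (with a real remote you would now: git push -u origin feature/payments)
git checkout -q -b feature/payments develop

# a collaborator branches off the feature branch and "PRs" back into it
git checkout -q -b feature/payments-ui feature/payments
git commit -q --allow-empty -m "payments UI"
git checkout -q feature/payments
git merge -q feature/payments-ui

# when the feature is done, merge (rather than rebase) into develop
git checkout -q develop
git merge -q --no-ff feature/payments -m "Merge feature/payments"
```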


"Keep README.md updated as project evolves"

Sure man.


And this is why I like opinionated frameworks like Ember.js so much. Most of the best practices are baked in. Plus you usually do not encounter a random subset of 'best practices' with added personal preferences/exceptions in other peoples projects, making collaboration much easier.


> Place your test files next to the implementation.

Java background is probably biasing me, but I don't like this. It interferes with my ability to find code because the file list in every directory is twice as large.

What are the benefits of putting it together?


If you are creating a separate directory for each "module" then it would just be one more file in each directory.

Component

- index.js

- Component.js

- component.scss

- Component.spec.js

This structure has worked really well for me, you have everything you need in a single directory so you don't have to jump around to another directory to find the test or styling or whatever.


Putting them together is AMAZING. So when I used to do Java development I thought the organization was great but every time I had to find a unit test I had to dig through a folder structure.

Then I tried Go. Go puts them together. I was amazed at how such a simple concept could keep my code so much better organized. I could immediately open both the code and the unit tests for said code. Just don't put a ton of files in any one directory (if you have a lot of files in a directory you're actively coding in, I'd argue your file structure is not optimal).

Now when I do JavaScript development I have adopted 3 extensions:

- .aspec.js is for unit tests that should work in all environments

- .nspec.js is for unit tests that only work in node.js

- .cspec.js is for unit tests that only work on a client like a web browser

If you're bored you can poke at some of my open source projects that use this pattern https://github.com/KrisSiegel/msngr.js


Keeps things that are coupled closer together. This is beneficial for several reasons:

a) forces parity between your code structure and test structure.

b) restructuring takes potentially half the time.

c) easily see and navigate to tests for a file.

d) fewer named things in your project since you aren't duplicating your structure.

If your directories are getting too lengthy to look at with tests in them, you should restructure them.

Edit: formatting and additional reason.


As a Node developer I agree with you; I think the benefit here might be that relative paths are shorter in Node imports?


"While developing a new project is like rolling on a green field for you, maintaining it is a potential dark twisted nightmare for someone else."

There is nothing about this list of best practices that makes any applicable project more maintainable without understanding the practices being followed.

As long as these practices are well-known within the company and enforced over the lifecycle of code/product/project/etc., it will succeed. Without that understanding, this list eventually becomes untrusted.

Good on the folks at hive for following what appears to be solid practices in their JS projects.


Thanks for your comment. I do agree that "best practices" isn't the best word. I probably should have said "Hive common practices". Most developers have an understanding of the process but just have different tastes and styles; we just tried to make things more consistent. We thought having such a document is better than not having it, and even if a developer doesn't understand the practices being followed, this document brings an opportunity to learn them. I know it's not perfect and looks a bit random; that's why we made it public to get some feedback.


Yep and understood -- it's what works best for your team. And yes, consistency is where the value lies.

I've found these efforts to be successful as long as those guidelines are socialized among the team and it's treated in a consistent manner.

Looks like a great list thus far!


I hadn't noticed before but... Please do not recommend that people use "stage-1 features". Maybe if it is your own money, but not if you do it with other people's money.


_facepalm_ ugh! Gitflow again! Get on board with Github Flow people - it's going to save you so much pain


Assuming you have a CD pipeline, sure, but many of us are still stuck with companies following a traditional structured release model, in which case Gitflow still makes a lot more sense.


This is the first time I've heard about GitHub Flow, but from the presentation page it does not address any of the basic needs like having several concurrent branches, releasing versions, or their maintenance... Is this only suited to web services?


These practices aren't even good.


If you want a single best practice for working on maintainable JavaScript projects, here it is:

Use TypeScript.


I don't think the author would disagree. Using a compile-to-JS language with stronger correctness guarantees is valuable for large projects.
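Even a trivial annotation shows the kind of error this catches (a minimal sketch; the function name is hypothetical):

```typescript
// Sketch: the annotation documents intent, and the compiler enforces it.
function total(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

// total(['3', '4']);      // rejected at compile time: string[] is not number[]
console.log(total([3, 4])); // 7
```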


> 9.2 Operating on resources

Sure, let's pretend that GraphQL is not a thing. Yep.


Other than for a proverbial handful of people in the world, it isn't.


I am a little bit more optimistic about it... I mean, GitHub has recently introduced its GraphQL API... I guess other SaaSes will follow: Twitter, Reddit, HN (or probably not), and so on.


Sarcasm is the laziest way to present an idea.

Why not just say what you mean?


On the other hand sarcasm is the gentlest way to criticize an idea :-)


We haven't developed anything with GraphQL yet. I wasn't even sure if there are best practices out there. If you have a link that lists them, I'd happily put it in the doc. You're more than welcome to make a pull request as well.


The very idea of rebasing in git just gives me the creeps.


Well that's fine, cause I think merging without rebasing is gross. I want to commit when component x technically works, when I've improved it, when I've renamed crap, all the time. I only need those to be 1 commit and after that I can write a nice message that covers what it did.

My process for developing the code should be separate from the organization used to store it historically for others to see.


It seems like you're talking about squashing, not rebasing.


> Place your test files next to the implementation.

ok.

> A resource always should be plural in the URL. Keeping verbs out of your resource URLs.

What does it have to do with Javascript projects?

> Avoid js alerts in production

alert is a DOM API; it doesn't have anything to do with JavaScript.

-

Barely anybody pointed out the fact that this "guide" mixes up things that have nothing to do with each others.

How is file organisation related to how one names "subresources" in a REST API, or to browser alerts? It isn't.

This isn't a serious document. It was hastily put together with no real consideration whatsoever and named "best practices" without any sources or links to give weight to these opinions.

Finally, as I have said multiple times, coding style guides and "best practices" that cannot be validated automatically on a CI server are worthless and a waste of time in a business environment.



