
A set of best practices for JavaScript projects - Liriel
https://github.com/wearehive/project-guidelines
======
zer0tonin
> Always use //todo: comments to remind yourself and others about an
> unfinished job

No please don't do this, no one will ever do the todo. The code base I'm
currently working on has more than 2000 todos.

Instead of this, create a ticket in Jira, Asana or whatever issue tracker you
prefer. That way it can easily be taken into account during sprint planning,
and even if no one ever opens the problematic file again, the issue will still
be visible.

~~~
underwater
A variant I've seen is //TODO(#7362) where the number is a ticket and is
enforced by a lint rule.
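As far as I know there's no off-the-shelf ESLint rule that checks for ticket numbers, but the check itself is trivial; here's a standalone sketch (the `TODO(#7362)` format and the function name are made up for illustration):

```javascript
// Flag any // TODO comment that doesn't reference a ticket like TODO(#7362).
const BARE_TODO = /\/\/\s*TODO\b(?!\(#\d+\))/i;

function findUntrackedTodos(source) {
  const problems = [];
  source.split('\n').forEach((line, i) => {
    if (BARE_TODO.test(line)) {
      problems.push({ line: i + 1, text: line.trim() });
    }
  });
  return problems;
}
```

Wired into CI, something like this fails the build on bare TODOs while letting ticketed ones through.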

~~~
CTrom
This is what I have found to be a great compromise for my team. Some like
todos, others want tickets, so we enforce/support both.

~~~
jon-wood
If it works then great, but that sounds like a way to piss everyone off by
forcing them all to do both.

~~~
CTrom
Yeah I find our "TODOs" don't live very long in the code base.

------
znpy
"Tools, not policies".

I must say, this page reminds me of how lightyears ahead Perl was/is in terms
of tooling.

For example: are you writing a package? Before you even start brainstorming,
just bootstrap it with ExtUtils::MakeMaker. Long story short, that tool will
scaffold a mostly blank (but customizable via various command-line switches)
Perl package with most things you will need for development and distribution:
a directory structure, a Makefile (for building and installing), some pre-set
comments, mostly blank test files, etc.

This is something that, for example, the Go programming language team has
rightfully picked up in terms of directory structure conventions and tooling
(gofmt).

~~~
lol768
> I must say, this page reminds me of how lightyears ahead Perl was/is in
> terms of tooling.

Given that Perl's syntax is such a mess that its meaning is ambiguous to any
static parser, I find it difficult to believe Perl has such amazing tooling -
JetBrains, for one, refuses to implement any official support for it (see
[https://youtrack.jetbrains.com/issueMobile/IDEA-87976#](https://youtrack.jetbrains.com/issueMobile/IDEA-87976#)).

~~~
Afton
This seems like an unrelated dig at Perl. There are many things that Perl did
better than any contemporaneous language, all while being ambiguous to static
parsers.

------
rsj_hn
This is a mishmash of coding standards ("at ABC Co. we use tabs!") randomly
sprinkled with best practices. It's important to standardize on things even
when no one way is necessarily better than another. The organization also
seems a bit rushed. For example, not exposing the DB schema in a URL shouldn't
really go in the section on encoding. Moreover, SQL injection is almost never
fixed by using an escaping function, because by far the most common problem
with SQL injection is letting the client specify field and column names, or
things that aren't quoted (e.g. numerical IDs) and so don't benefit from
escaping.
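To make the distinction concrete, here's a hedged sketch (placeholder syntax follows node-postgres; the table, columns, and function are invented): values can be bound as parameters, but identifiers like a sort column cannot, so escaping does nothing for them and a whitelist is the usual fix.

```javascript
// Values are bound as parameters; identifiers must be whitelisted,
// because neither $-placeholders nor escaping work for column names.
const ALLOWED_SORT_COLUMNS = new Set(['name', 'age', 'created_at']);

function buildUserQuery(sortColumn, minAge) {
  if (!ALLOWED_SORT_COLUMNS.has(sortColumn)) {
    throw new Error(`Unsupported sort column: ${sortColumn}`);
  }
  return {
    // minAge travels as a bound parameter ($1), never interpolated.
    text: `SELECT * FROM users WHERE age >= $1 ORDER BY ${sortColumn}`,
    values: [minAge],
  };
}
```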

I don't want to discourage any company from setting and documenting coding
standards, but this particular document feels a bit rushed and could use
adherence to best practices of its own. For example,

1. If making an opinionated mandate, provide a link to an explanation

2. Consistently provide links to examples

3. Maybe do some spell-checking: "Hallo"?

4. Try to maintain a uniform level of specificity

E.g. in the section on URL security, there is this vague comment: "Attackers
can tamper with any part of an HTTP request, including the URL, query string,"

Which, while true, doesn't describe what the programmer should or shouldn't
do. It's just hanging there. Compare that with the weird specificity of
listing git commands in section 1.

That tells me the author knew a lot about checking stuff into git, but was a
bit out of his/her league when it came to Security, so they just made generic
security statements while giving precise instructions for using git.

~~~
th4dv
A lot of companies these days create "best practices" guidelines just to boost
their online recognition. Very often I find these guidelines useless or even
harmful.

~~~
magic_beans
What in this document do you find useless or harmful?

~~~
camus2
> What in this document do you find useless or harmful?

The use of TC39 proposals IN PRODUCTION. The authors of that document aren't
serious.

~~~
magic_beans
Presumably they use a tool like Babel to compile down to present-day ES in
production??
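For what it's worth, that was the standard setup at the time: Babel's stage presets compile proposal syntax down to shipping ES. A minimal `.babelrc` might look like this (preset names here assume Babel 6; the stage presets were removed in Babel 7):

```json
{
  "presets": ["es2015", "stage-2"]
}
```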

------
a_humean
"Use stage-1 and higher JavaScript (modern) syntax for new projects. For old
project stay consistent with existing syntax unless you intend to modernise
the project."

That doesn't seem like a good idea. I can understand recommending Stage 2
and/or Stage 3, but Stage 1 seems premature. Stage 1 in the TC39 process is
still a proposal stage, flagged as expecting "major" changes to the spec and
possibly to the feature's syntax. Being Stage 1 is also a poor guarantee that
a proposal will progress to later stages or become part of the spec. Stage 2
at least signals that the committee thinks it's likely to eventually become
part of the spec with only minor revisions.

~~~
vahidpg
You're actually right. I will change that to stage-2. Thanks for the feedback.

------
drinchev
> Organize your files around product features / pages / components, not Roles:

Oh really? Can someone explain why this is a good thing and what the benefit
is?

[1]: [https://github.com/wearehive/project-guidelines#6-structure-and-naming-](https://github.com/wearehive/project-guidelines#6-structure-and-naming-)

~~~
underwater
I'm not so sure that's a good idea. A component is one possible interface to
the business logic, but there isn't necessarily a 1:1 mapping.

Colocating data logic together makes it easier to build a data model that is
logical and consistent, rather than one that is coupled to UI decisions.

~~~
Illniyar
It's not a black and white thing, you can have a feature structure for your UI
and services, and one folder with just the data models.

Sometimes your frameworks kinda force it on you too.

------
vahidpg
Hi guys, I published this document. Here at
[http://www.wearehive.co.uk](http://www.wearehive.co.uk) we work with
different developers with various levels of experience. We had to come up with
a set of rules to bring some consistency to what we produce and make it easier
for others to pick up and maintain it. It does look random and it's not
perfect, that's why we decided to make it public and receive some feedback.
I've already made notes from some of these comments. If you think something is
wrong and shouldn't be there, make a pull request, or just let me know if
you're too busy to do that.

------
Azeralthefallen
Can someone please explain to me the rationale for shoving your tests in the
same place as your code? I have seen this before, and every time I have tried
it on anything beyond a simple project it has ended up becoming a nightmare.

Oftentimes you have numerous test-specific helper functions and tons of other
test-oriented cruft; where do all these things go? The root of the project?
Also, where do tests that cover logic spanning multiple components go?

~~~
arethuza
That's a problem I always have with these kinds of things - when someone says
"Place your test files next to the implementation" I always find myself
thinking what the reasoning is behind the rule.

For that particular recommendation I can see arguments both ways - mind you I
keep tests separate myself but I'd be interested in reasons for keeping them
together.

~~~
awjr
I suspect it's an audit trail: you see the set of tests right there. One issue
is that I think this conflicts with the recommendation to use
Component/index.js or Component.js, not Component/Component.js.

I find you do want to find components through your editor (e.g. <cmd>+p in VS
Code), and finding lots of index.js files doesn't help. In effect you'll
create a Component/Component.js anyway, and also use Component/index.js to
re-export Component.js.

Also, if you use Jest this sort of falls apart.

------
YCode
One issue that is sort of addressed in this, and that I've been trying to find
a solid answer on, is: how do you strongly ensure a given NPM package and its
dependencies are safe to use?

So far the only real solution I've come up with is to do a signals review
(GitHub? issues? downloads/day? etc.) and then review the code block by block,
carefully examining any complex or potentially dangerous code.

It works, and you learn a lot about the packages and why they depend on each
other, but at the same time this process is exhausting.

What would be a godsend is some major security brand that can vouch (even at a
cost) for a given version of major packages, including their dependency tree.

~~~
joshrivers
Are you looking for something like [https://snyk.io](https://snyk.io) ?

~~~
YCode
That does seem pretty solid, thanks!

I'll have to start testing dependencies there.

------
nscalf
Well, this is definitely one way to do things... The reality is that if you're
making a project, and everyone is on board with the standards for how to do
things, and they don't make things unnecessarily confusing, you're practicing
good coding. For example, if you have your tests isolated in a testing folder
vs. in a product/feature folder and everyone is on board, then it doesn't make
any difference. I'll just look in the test folder for the tests rather than
the feature folder.

Honestly, I don't get the obsession and combativeness around "best practices".
Best practices are whatever your team uses effectively to get shit done.

------
sAbakumoff
A set of best practices? It rather looks like a set of random thoughts.

------
baron816
I can't say if these are the BEST practices for JS, or if even such practices
could exist. But I do think that every company should have a standard set of
written out practices/guidelines and publish them. They should probably even
include them in their job listings. I hope people will comment on the
potential pitfalls of doing so, but a benefit would be that new and
prospective engineers would have a clearer idea of what writing code for the
company would be like.

------
flavio81
> _"Only use nouns in your resource URLs, avoid endpoints like /addNewUser or
> /updateUser. Also avoid sending resource operations as a parameter. Instead
> explain the functionalities using HTTP methods:
>
> GET - Used to retrieve a representation of a resource.
> POST - Used to create new resources and sub-resources.
> PUT - Used to update existing resources.
> PATCH - Used to update existing resources. PATCH only updates the fields
> that were supplied, leaving the others alone.
> DELETE - Used to delete existing resources."_

This used to be hip and cool and "look ma, i'm modern, i'm doing it REST-
style!", but i don't think the fad holds anymore.

In real life, you are going to perform many more "operations" on a "resource"
than just "retrieve", "create", "update", or "delete", and thus you need to
use operation names. It makes more sense to always send the operation name and
to standardize on operation names. Operation names can also be more explicit
and unambiguous.

It is, for me, a very bad idea to mix up your business model semantics
(actions performed on the business model) with the semantics that belong to
the OSI-model application layer (HTTP methods: GET, POST, PUT, PATCH...).

~~~
HatchedLake721
Do both together.

Use GET/POST/PATCH/DELETE for the simple CRUD on the resource.

Then, put your actions under POST /resource/{id}/actions/{action}, e.g.
`/repositories/1/actions/archive`, `/users/1/actions/ban`,
`/customers/1/actions/credit-score-with-bank-of-xyz`, etc
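Sketching that convention with a toy route table (framework-free so it stands alone; in practice these would be Express/Koa routes, and all the resource names here are invented):

```javascript
// Minimal route table: CRUD rides on HTTP verbs, while domain operations
// that don't map onto CRUD get explicit POST endpoints under /actions/.
const routes = new Map();

function route(method, path, handler) {
  routes.set(`${method} ${path}`, handler);
}

function dispatch(method, path) {
  const handler = routes.get(`${method} ${path}`);
  if (!handler) throw new Error(`No route for ${method} ${path}`);
  return handler();
}

// Plain CRUD on the resource:
route('GET', '/repositories/1', () => ({ id: 1, archived: false }));
route('PATCH', '/repositories/1', () => ({ id: 1, updated: true }));
route('DELETE', '/repositories/1', () => ({ id: 1, deleted: true }));

// Domain actions, always POST since they change server state:
route('POST', '/repositories/1/actions/archive', () => ({ id: 1, archived: true }));
route('POST', '/users/1/actions/ban', () => ({ id: 1, banned: true }));
```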

~~~
urethrafranklin
I find the "/actions" slug a bit goofy and unnecessary. Why not just put it at
e.g. "/repositories/1/archive"? Additionally, why not maintain RESTful
semantics by using PUT (or more likely, PATCH) instead of POST (which implies
creating a new resource)? And "/customers/1/actions/credit-score-with-bank-of-
xyz" sounds a lot like a GET request.

~~~
HatchedLake721
Because there could be related resources, e.g. "/repositories/1/users" or
"/users/1/comments".

I agree it's not necessary, but having /actions just makes things clearer. For
example:

"/repositories/1/bans" shows me all bans

"/repositories/1/ban" is the ban action

I know it's a bad example, but you could end up with other similarly named
related resources and actions, which increases the chance of silly mistakes.

------
ricardobeat
How do you collaborate on a branch using this workflow? The constant rebasing
means you can't have multiple devs pushing to the same branch.

~~~
saltedmd5
The ironic thing about this is that the link they post as a justification for
rebasing feature branches onto develop is actually a very explicit reason
_not_ to be rebasing in this situation.

The section reads like the author thinks that the problem with rebasing public
branches is merge conflicts (it isn't; it's rewritten/diverging histories) and
that interactively rebasing somehow resolves this problem (it doesn't).

This smacks of something that I come across all the time: strong opinions on
how to use Git held by people who don't actually understand Git.

------
deadwisdom
"Keep README.md updated as project evolves"

Sure man.

------
okket
And this is why I like opinionated frameworks like Ember.js so much. Most of
the best practices are baked in. Plus you usually do not encounter a random
subset of 'best practices' with added personal preferences/exceptions in other
people's projects, making collaboration much easier.

------
tieTYT
> Place your test files next to the implementation.

My Java background is probably biasing me, but I don't like this. It
interferes with my ability to find code because the file list in every
directory is twice as large.

What are the benefits of putting it together?

~~~
dntrkv
If you are creating a separate directory for each "module" then it would just
be one more file in each directory.

Component

- index.js
- Component.js
- component.scss
- Component.spec.js

This structure has worked really well for me, you have everything you need in
a single directory so you don't have to jump around to another directory to
find the test or styling or whatever.

------
jroseattle
"While developing a new project is like rolling on a green field for you,
maintaining it is a potential dark twisted nightmare for someone else."

Nothing in this list of best practices makes a project more maintainable
unless the practices being followed are understood.

As long as these practices are well known within the company and enforced over
the lifecycle of the code/product/project/etc., it will succeed. Without that
understanding, this list eventually becomes untrusted.

Good on the folks at hive for following what appears to be solid practices in
their JS projects.

~~~
vahidpg
Thanks for your comment. I do agree that "best practices" isn't the best word;
I probably should have said "Hive common practices". Most developers have an
understanding of the process but just have different tastes and styles, and we
tried to make things more consistent. We thought having such a document is
better than not having it, and even if a developer doesn't understand the
practices being followed, this document brings an opportunity to learn them. I
know it's not perfect and looks a bit random; that's why we made it public, to
get some feedback.

~~~
jroseattle
Yep and understood -- it's what works best for your team. And yes, consistency
is where the value lies.

I've found these efforts to be successful as long as those guidelines are
socialized among the team and it's treated in a consistent manner.

Looks like a great list thus far!

------
Principe
I hadn't noticed before but... Please do not recommend that people use
"stage-1 features". Maybe if it is your own money, but not if you do it with
other people's money.

------
cottsak
_facepalm_ ugh! Gitflow again! Get on board with GitHub Flow, people - it's
going to save you so much pain.

~~~
larzang
Assuming you have a CD pipeline, sure, but many of us are still stuck with
companies following a traditional structured release model, in which case
Gitflow still makes a lot more sense.

------
mattkenefick
These practices aren't even good.

------
pdog
If you want a single best practice for working on maintainable JavaScript
projects, here it is:

Use TypeScript.

~~~
Spivak
I don't think the author would disagree. Using a compile-to-JS language with
stronger correctness guarantees is valuable for large projects.

------
sAbakumoff
> 9.2 Operating on resources

Sure, let's pretend that GraphQL is not a thing. Yep.

~~~
weego
Other than for a proverbial handful of people in the world, it isn't

~~~
sAbakumoff
I am a little bit more optimistic about it... I mean, GitHub has recently
introduced the GraphQL API. I guess other SaaSes will follow - Twitter,
Reddit, HN (or probably not) and so on.

------
quotha
The very idea of rebasing in git, it just gives me the creeps

~~~
wry_discontent
Well that's fine, cause I think merging without rebasing is gross. I want to
commit when component x technically works, when I've improved it, when I've
renamed crap, all the time. I only need those to be 1 commit and after that I
can write a nice message that covers what it did.

My process for developing the code should be separate from the organization
used to store it historically for others to see.

~~~
Spivak
It seems like you're talking about squashing, not rebasing.

------
camus2
> Place your test files next to the implementation.

ok.

> A resource always should be plural in the URL. Keeping verbs out of your
> resource URLs.

What does it have to do with Javascript projects?

> Avoid js alerts in production

alert is a DOM API; it doesn't have anything to do with JavaScript.

-

Barely anybody has pointed out that this "guide" mixes up things that have
nothing to do with each other.

How is file organisation related to how one names "subresources" in a REST API
or to browser alerts? It isn't.

This isn't a serious document. It was hastily put together with no real
consideration whatsoever and named "best practices" without any sources or
links to give weight to these opinions.

Finally, as I said multiple times, coding style guides and "best practices"
that cannot be validated automatically on a CI server are worthless and a
waste of time in a business environment.

