
Ask HN: How do you organise and discover your team's internal scripts? - OJFord
As a team we have many scripts for doing various internal tasks - domain-specific things, checks for questions like "what's in prod?" and "how far along is this commit?", etc. ad nauseam.

The problem is that, with a few exceptions, the person who wrote the script knows it exists and how to use it, and might be around to chime in with "hey, I wrote a script for that!" when someone has a problem.

But discoverability is hard; even if they're in the repo, not everyone will have seen or reviewed the PR that added them.

Does anyone have some good tips to share on how to (or not to) organise such scripts or tools, and allow colleagues to better discover them?
======
guruant
We encourage our teams to set their utility scripts as Jenkins jobs
(integrated with source control).

\- The Jenkins job can have a friendly name, a description, a link to a wiki
article explaining how it works, and default inputs.

\- Troubleshooting wiki articles also link back to the job (_"Getting this
error? Try running this job"_)

\- We don't need to worry about people having prerequisites on their machines
(assuming they are installed on the Jenkins runner). They shouldn't even need
to understand the language the script is written in, or how it works.

\- You've got the opportunity to integrate and chain jobs together, or trigger
them based on events.

\- Pretty much everyone knows how to create and run Jenkins jobs

Of course having solved one problem (discoverability and organisation of
scripts) you now have another one: Jenkins.

~~~
neduma
> Of course having solved one problem (discoverability and organisation of
> scripts) you now have another one: Jenkins.

I call this phase 2. Discovery, documentation, and ongoing maintenance are
going to be the problem. We'd need another Jenkins to CI/CD the main tooling
Jenkins.

My other thought is that there is a significant market for this kind of
back-office tooling clean-up and ongoing maintenance. If we could come up
with a process to clean it up and package it, we could sell it. I would
definitely buy it, because it saves a lot of time on day-to-day toil,
especially in the operational space.

~~~
scarface74
There are hosted CI solutions that can orchestrate locally installed agents.
The two that come to mind are Microsoft’s Visual Studio Team Services and AWS
CodeDeploy.

For both, you install a local agent that polls the service for commands. You
can manually kick off a “deployment” that just downloads and runs scripts.
Both have agents for Windows and Linux.

VSTS is free for private environments with up to five users, and around $5 a
month per additional user, for one simultaneously running agent - you can
have as many agents installed as you want, but only one will deploy at a
time. Additional simultaneously running agents are cheap, or free for each
registered MSDN subscriber who is a user.

AWS CodeDeploy is similar. It’s free for AWS-hosted EC2 instances and costs 1
cent for each on-prem deployment.

They both support the concept of deployment groups where you can run the same
deployments on a group of machines.

CodeDeploy uses a simple YAML file (appspec.yml) that tells CodeDeploy which
scripts/batch files to run.

They both support integration with GitHub via webhooks.

------
CodeCube
I can't believe no one here (currently) has mentioned this ... but IMO in an
ideal world, the issue of too many utility scripts means that "your system"
(whatever that means to you) is lacking. This leads to users/administrators
needing to find ways around the system to get what they need.

The (idealistic) answer is that when a given script finds more than a few
users, or even gets used more than a handful of times, it should be
incorporated into, and formalized in, the system. Make an admin
panel/portal/dashboard, put a UI in front of the script for any variables,
and make it so that this function doesn't require cryptic institutional
knowledge to execute.

------
AntiRush
I've found that most scripts can/should belong to a specific project.

We've started adopting the `s/` method described here by Chad Austin:
[https://chadaustin.me/2017/01/s/](https://chadaustin.me/2017/01/s/).

It means you always know where to look and it's easy to see what scripts
exist.
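
As a sketch of the convention (script names here are made up, not taken from the linked post): each repo gets a top-level `s/` directory of executables, and discovery is just listing it:

```shell
# Sketch of the s/ convention, demonstrated in a scratch directory.
# The script name is illustrative; a real one would hold your team's logic.
cd "$(mktemp -d)" || exit 1
mkdir -p s

cat > s/whats-in-prod <<'EOF'
#!/bin/sh
# Placeholder body: a real script would query your deploy target
echo "prod is at: (placeholder)"
EOF
chmod +x s/whats-in-prod

# Discovery is simply listing the directory from the repo root:
ls s/
```

Because the directory name is short and always the same, `ls s/` in any repo answers "what scripts exist here?" with no extra tooling.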

------
SatvikBeri
Version control + code review has worked for me. Even if you don't normally
do code review, it's particularly helpful for making sure that someone else
is aware of new scripts.

------
mosesschwartz
I think the best solution is to use a tool like StackStorm, Rundeck, Wooey
([https://github.com/wooey/Wooey](https://github.com/wooey/Wooey)) or even
Jenkins (as mentioned in some other comments), and standardize on adding new
scripts to that system. This has the benefit of not needing to cater to
individual environments -- you can get it running on a server and call it
good.

I wrote and open-sourced a tool called Scrypture
([https://github.com/mosesschwartz/scrypture](https://github.com/mosesschwartz/scrypture))
some years back that partially addressed this need. It served as a central
place that everyone could add their own scripts to, and it made it easy to
expose the scripts through a web interface. Today I think going with one of
the tools mentioned above would be more robust, but it goes to show that this
is a real need for a lot of teams, and there really isn't a recognized single
best solution.

------
oh_sigh
Have an uber-script which allows the various other scripts to register with it
and provide the namespace, command, help data, args, etc.

Let's say our project is called cob, so typing `cob` would print a help
screen of all the tools it knows about. Typing `cob db` would show all of the
tools in the db namespace. For simplicity there is only one level of
namespaces allowed; anything under that is a command in that namespace - e.g.
`cob db recompute_index --column=foo` to recompute an index, `cob deploy
show_live_version --prod` to show what's in prod right now, etc.

If you're doing a task and think a tool might be available, searching through
a limited set of tools based on the likely namespace and with simple help text
makes it easy to find.
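
A minimal shell sketch of such a dispatcher (the `cob` name is from the comment above; the `$COB_HOME` layout, one directory per namespace and one executable per command, is a hypothetical):

```shell
# Hypothetical dispatcher sketch: assumes each tool is an executable at
# $COB_HOME/<namespace>/<command>. "cob" alone lists the namespaces,
# "cob <ns>" lists that namespace's commands, and anything beyond that
# is forwarded to the underlying script.
COB_HOME="${COB_HOME:-$HOME/.cob/tools}"

cob() {
    if [ $# -eq 0 ]; then
        echo "namespaces:"
        ls "$COB_HOME"
        return
    fi
    ns="$1"; shift
    if [ $# -eq 0 ]; then
        echo "commands in $ns:"
        ls "$COB_HOME/$ns"
        return
    fi
    cmd="$1"; shift
    "$COB_HOME/$ns/$cmd" "$@"
}
```

A real version would also read per-script help text out of the scripts themselves, but even this much gives the browse-by-namespace discovery the comment describes.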

~~~
tdumitrescu
In the Python world, we've had some success using Click
([http://click.pocoo.org/](http://click.pocoo.org/)) this way, and for SSH-
heavy workloads, Fabric ([http://www.fabfile.org](http://www.fabfile.org)).

~~~
neduma
For ssh-heavy workloads, we use Ansible. We have a Jenkins master with
Dockerized slaves, where we throw the scripts into Docker containers.

------
amirathi
Disclaimer: I built [http://nurtch.com](http://nurtch.com) for managing
executable runbooks/playbooks.

There are two core problems here,

\- Discovery: Does the script to do X exist?

\- Documentation: What does this script do? What are the parameters? How do I
execute it?

Writing these scripts as Jupyter notebooks solves both of these problems
effectively:

\- For discovery, you can search by keyword & organize in a hierarchy (e.g.
folder for separate services)

\- For documentation, code snippets and instructions are interleaved in a
single document.

You can host Jupyter remotely; your teammates can log in, search for their
scripts, and execute them from within the browser. No need to worry about
dependencies, environments, etc.

------
lima
\- Everyone runs the same Linux distro. There's a signed Ansible repo that
everyone uses to set it up, and an internal package repo for things missing
from the main repos (for security reasons, nothing is installed that's not in
either the distro or our repos). This ensures that everyone has the same
local environment and scripts can be re-used.

\- Each project has a "scripts" folder with utility scripts and there are
playbooks in Confluence on when and how to use them.

------
nAwYz
First, make sure that the scripts are in a VCS and, if possible, that someone
else looks over them before they are checked in. Often the awesome
"DB_backup_script.sh" is not so awesome. If the scripts are not documented,
document them, so that others can use them and know their limitations. One
way to let other users find scripts is to embed them, grouped and sorted,
into a wiki with instructions on how and when to use them.

------
alexchamberlain
Have you considered using one script? For example, you could have a script
called foo with multiple subcommands that do various jobs; then the process
of discovery is as simple as running `foo help` occasionally. Coupled with
some good docs, the occasional tech talk, and sharing command examples on IM,
it will soon become part of your culture.
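
One hedged way to sketch this in shell is git-style subcommand discovery: any executable named `foo-<cmd>` on PATH becomes a `foo <cmd>` subcommand, so `foo help` can enumerate them automatically (the `foo` name is from the comment above; everything else here is illustrative):

```shell
# Sketch: "foo" dispatches to any executable named foo-<cmd> on PATH,
# git-style, so newly installed subcommands are discovered without
# touching the dispatcher itself.
foo() {
    if [ $# -eq 0 ] || [ "$1" = "help" ]; then
        echo "available subcommands:"
        oldifs="$IFS"; IFS=:
        for dir in $PATH; do
            for f in "$dir"/foo-*; do
                # Skip non-matching globs and non-executables
                [ -x "$f" ] && printf '  %s\n' "${f##*foo-}"
            done
        done
        IFS="$oldifs"
        return
    fi
    cmd="$1"; shift
    "foo-$cmd" "$@"
}
```

The nice property of this design is that adding a subcommand is just dropping a new `foo-whatever` executable into a shared bin directory; `foo help` picks it up with no registration step.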

------
jerluc
Our current solution is to use a "toolchain" git repository that helps to
automate installation of 3rd-party software (so that we're using similar
versions of things like Docker, Python, etc.), but also creates a company-
specific directory with installed scripts that can be added to the
individual's PATH.

------
citilife
The real trick I've found is having regular conversations about your problems
and knowing the domain experts.

For instance, I know this one guy is good at DevOps. I send him a ping asking
a question, and he forwards me to the other guy.

That's the most efficient way I've found after years of development...

~~~
lozenge
Except when you're on the other side: unable to do any development because
people keep coming to you with questions.

~~~
citilife
I do experience it the other way, but it's usually just redirecting someone
or providing a link.

I make sure my management team understands and sees how many people I help.
If I can spend 5 minutes several times a day - each time showing someone
where a script is that saves them hours - well, I've just saved the company a
lot of money.

------
lifeisstillgood
You have three kinds of problems wrapped up in here, and my software-socialist
hat tells me it's all the fault of poor management:

1\. Good software engineering is about writing tools (scaffolding) around the
main project (the Sistine Chapel). If a team is not putting something like 1/3
to 1/2 of its time into tooling, it is probably being pushed too hard, or
showing too many of the other project warning signs we all know and love.

2\. Social contact and pride. If devs have _plenty_ of time to write their
tools, then they get proud of them, make them more generic and usable, and
start to market them ("hey, I have a script that does this" is marketing as
much as OSS is). Brown-bag lunchtime sessions seem the perfect way to
encourage this - at a Big Bank we had a kind of show-and-tell for internal
scripts for a while.

3\. This is what senior devs, tech leads, and the like should be doing -
talking amongst themselves and surfacing the critical missing parts in the
company (besides recruiting and code reviews). Again, if they are not, it's a
sign of too much pressure and too much siloed working.

In the end, I have some code I happily tell people about and encourage them
to use, because I have invested the time to make it work well; and where I
have not, I don't, because it's embarrassing.

Fix the management / time / good engineering problem and you will find the
scripts being shared in the canteen.

Of course fixing that is not something you have authority or influence to do.
This could be a problem.

------
osivertsson
In my experience doing pair programming is a great opportunity to discover
such scripts.

------
jamessantiago
For PowerShell, I've set up a master profile that manages and loads modules
stored on an admin file share. Scripts are then added to individual modules
(e.g. ad, exchange, etc.) on the share as needed, which prompts individuals
to update their local copy. Version control, script signing, and other
features would be nice, but in reality it's only one or two individuals
updating the modules, so it's more of a way to kiddy-ise scripts for other
team members to use.

------
RpFLCL
An internal wiki for engineers that has a list of concerns, with individual
articles for solving known issues/tasks. Plus a culture of encouraging each
other to write or update wiki pages when they hit an issue.

If you have the time and motivation to write an internal tool, you should also
have the time and motivation to write down the how and why.

If you want to make a tool a part of your process, document that process and
where to use the tool.

------
rlopezcc
I have a .local_profile script in my project root and added this[1] to my
.bash_profile

So, each project has its own set of aliases and utilities in the repo.

It's not the cleanest thing to do, but it works pretty well.

[1] -
[https://gist.github.com/rlopezcc/7d545a2b09a9c7a391483608a51...](https://gist.github.com/rlopezcc/7d545a2b09a9c7a391483608a51a3f3f)

Edit: Formatting.
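
For reference, one hedged way to wire this up (an illustrative guess at the approach, not necessarily what the linked gist does) is to wrap `cd` so that a repo-local profile is sourced whenever you enter the project:

```shell
# Sketch: wrap cd so entering a directory that contains a .local_profile
# sources it automatically. This is an illustrative guess at the
# approach, not the contents of the linked gist.
cd() {
    command cd "$@" || return
    # Source the project-local profile, if one exists here
    if [ -f .local_profile ]; then
        . ./.local_profile
    fi
}
```

The usual caveat applies: auto-sourcing `.local_profile` files on `cd` is a trust decision, so it only makes sense inside repos you already trust.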

------
wink
Treat them as runbooks and include them in your troubleshooting guide (or
oncall guide).

Small snippets can live in a wiki or in the docs; bigger scripts preferably
go in a repo from which everyone can run them.

If they provide useful info that's not just for emergencies or
troubleshooting, maybe let them run via cronjob and expose the results in a
dashboard or a dedicated internal "status web page".

------
nunez
Enforce a single Git repository for all common scripts, and enforce pull
requests for every script inserted into that repository, no exceptions. Lock
down `master` to either an approved releaser or a release bot to further
enforce this.

Scripts that aren't in that repository are landmines waiting to explode into
Critical bug reports.

------
abakus
Most scripts are not reusable and do not warrant sharing. If they are, make
them into a library.

------
edoceo
Our projects put all one-off scripts in a ./sbin directory (organized as one
sees fit: task sub-dir, user sub-dir). Each script follows a specific header,
so scripts are discovered both by Doxygen and when we grep the code base.
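
A sketch of what such a header convention might look like (the `@`-field names and the example script are illustrative, not necessarily edoceo's format):

```shell
# Sketch of a grep-able (and Doxygen-friendly) script header convention,
# demonstrated in a scratch directory. Field names are illustrative.
cd "$(mktemp -d)" || exit 1
mkdir -p sbin/db

cat > sbin/db/reindex.sh <<'EOF'
#!/bin/sh
## @file   reindex.sh
## @brief  Rebuild the search index after a bulk import
## @usage  sbin/db/reindex.sh <table>
echo "reindexing $1"
EOF
chmod +x sbin/db/reindex.sh

# Discovery: grep the headers across the whole sbin tree to get a
# one-line summary of every script.
grep -r '@brief' sbin/
```

Because every script carries the same fields, a single grep doubles as a catalogue, and a doc generator can build browsable pages from the same comments.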

------
base1996
Maybe having a dedicated Slack channel would be interesting. Each new message
would point to documentation or a PR, pinging teams that might find it useful
later.

We use a private wiki grouping how to's, scripts, guidelines.

------
tyurok
We are trying out [https://slab.com/](https://slab.com/) to share that kind
of stuff; it's been a great addition.

------
keeler
Using Rundeck is always an option:
[https://www.rundeck.com/](https://www.rundeck.com/)

------
hhsnopek
We have a repository that contains various checklists & tools/scripts for all
of our projects.

------
meowface
We use an internal scripts repo and a chat webhook so our chat is notified
when something is pushed.

------
KaiserPro
There are a couple of ways:

1) a git repo in an obvious place

2) a shared drive in $PATH

Even better: a shared drive controlled by a git repo.

------
zer00eyz
Your problem (just about everyone's problem) is the same problem.

If you did it, document it.

Make sure that whatever tool you're using for documentation supports
full-text search and tagging.

~~~
OJFord
> whatever tool you're using for documentation supports full-text search and
> tagging.

Any suggestions?

~~~
eropple
Basically any wiki should suffice for those requirements. However, integrating
those requirements with other workflows tends to be trickier. Atlassian's
tools (Jira, Confluence, etc.) tend to win by default, but if you're willing
to do some duct-tape work other solutions are fine too.

~~~
OJFord
Harder to keep in sync though, if it's not somehow derived from or embedded in
the source code for the scripts.

