
SQL Is No Excuse to Avoid DevOps - YesThatTom2
https://queue.acm.org/detail.cfm?id=3300018
======
pjungwir
I really enjoyed Thomas Limoncelli's _Time Management for System
Administrators_. I've also got _The Practice of System and Network
Administration_ on my shelf, and hopefully one of these years I'll read it---
maybe once my time management is better. :-) I like that Tom was practicing
and advocating system-administration-by-writing-code way before DevOps, and to
me he represents a tradition of automation that far predates the buzzwords.

I've lived in the world of Rails and Django so long, I sort of take schema
migrations for granted. It's interesting to see such a high-level person
covering this ground; it reminds me of how much these frameworks have given
us, and how so many organizations are still practically emailing around zip
files of source code.

His Technique 2 feels harder to me: write the code to work on both old & new
schemas, and deploy the schema change after the code change. I know this is
what Google does though, and I think it's the only safe way to deploy
rollback-able code changes. I'm curious who else is doing it, and especially
how you are scheduling it. Especially with a CI/CD workflow, I'm used to
getting things integrated and letting them get deployed "whenever". So how do
you make sure the schema changes go out "later", and what gating is required
first, and how do you get buy-in to spend time on that after the feature is
already live? I'm not asking about the technical how-to, but the
organizational/political how-to. For instance if you're using Pivotal Tracker,
do you make a separate PT story? Leave the old one open until the schema
changes are deployed? What does it concretely look like?
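For the article's name-split example, "works on both old & new schemas" can be as small as a read path that tolerates either shape. A minimal sketch (the function and column names are my own invention, not from the article):

```python
def get_full_name(row):
    # Tolerate both schemas: the new first_name/last_name split and the
    # old single name column. Because this code is deployed before the
    # schema change, it is safe to roll back in either direction.
    if "first_name" in row and "last_name" in row:
        return f"{row['first_name']} {row['last_name']}"
    return row["name"]
```

The write path needs the same treatment (write to whichever columns exist), which is usually the harder half of Technique 2.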

~~~
djrobstep
> I've lived in the world of Rails and Django so long, I sort of take schema
> migrations for granted.

And there's lots more room for improvement over Rails and Django migrations.

Some of the problems:

\- Annoying version number management

\- Management of long "chain" of migration files

\- Unnecessary coupling between ORM and migration tool

\- Heavyweight process to generate and run each migration, which slows down
local development

\- Bad testability

Autosyncing local dev databases and automatically generating migration
scripts with a diff tool is a better approach, imho.

~~~
all_factz
Don't disagree with the rest of these points, but I do want to point out that
Django lets you squash long migration "chains" just as you'd squash commits.

------
rhacker
DevOps is about removing the secondary team that you throw something over the
wall to for deployment. Instead, the build pipeline (be it CircleCI, GitLab CI,
Jenkins, GoCD, etc.) controls ALL environments, and things are optimized so
that a single click updates an environment (including prod).
The pipeline "code" and all its related scripts are the actual devops work
products. They CAN be done by a single team that supports multiple other
teams, but truthfully it should be a shared responsibility. Perhaps one team
provides base scripts and answers the questions about where logs and
monitoring data go (and of course those questions are answered with scripts
essentially). So "ops" can still exist if your organization is large enough,
but I definitely say there's no "thing" you throw over the wall anymore.

~~~
nickthemagicman
So it's a secondary team to replace the secondary team.

~~~
camelNotation
Always. The question with any form of process optimization is how much smaller
does the human presence need to be? The idea that we would replace people
entirely is a sales pitch.

------
LaserToy
I always thought that DevOps meant developers who can do ops. But in recent
years I see more and more people calling themselves DevOps even though all
they do is ops with simple scripting.

I think we need a new name for developers who run their own code in prod, as
"DevOps" has been diluted.

So, in this light, the name of the article is utter BS. As devs we deal with a
lot of different databases, be it SQL, NoSQL, or graph, and dealing with them
is part of our job. If all you do is run stuff and release, you don't have a
Dev component, so you're just Ops.

~~~
avitzurel
I have been doing DevOps since before it had a name.

Been running on AWS for the better part of 8 years now and running multiple
production environments for clients and as an employee.

Over the years, I have seen the term "DevOps" slapped on so many things I
don't even know what it means anymore.

For some, it's a culture, for some, it's a team, for some, it's a role.

SRE is now getting the same treatment. No idea what it means most of the time.

~~~
busterarm
IMO, the only places where it's done successfully view it as a culture and
neither of the other two.

That said, the easiest way to make yourself more 'in-demand' right now is to
make it your title.

As someone who does ops and development, when I look around in the market and
see companies hiring full time devops roles, I usually see big red flags. The
only jobs that I tend to want lately are on development side at small
companies or the operations side at larger ones.

~~~
paulie_a
I tend to see DevOps as mostly ops with a background in dev. They can step in
and understand the development environment, but generally they're going to
make life easier for the devs. It's an important role, and having some
developer background is nice while they kick ass getting things done. Your
standard sysadmin will do the job but generally gets a wide-eyed look on their
face when you ask about something outside of their wheelhouse.

------
alexandercrohde
No idea how this made front-page. Sounds like the author met one crazy person
and wrote an article about it.

Totally irrelevant to me, I've never seen a company like this in practice in
my long career.

~~~
enjo
If we're talking about "a company that only deploys rarely and requires many
manual steps with many people involved to do it", then I've worked with a few.

Typically they're shops that got very big very fast. They deal with problems
of scale where schema changes are _very_ expensive and data migrations can
require weeks without heavy optimization.

They typically have tried to automate some stuff, but the crush of feature
work to support the massively expanding business always came first. They
usually have CI, but never enough tests (unit, integration, or functional) to
deploy with any real confidence.

They used to deploy (by hand) every couple of days because it was easy. Then
they just kept getting bigger and it became every week. Then maybe once a
month.

Typically they'll talk about why they don't do continuous delivery or
deployment because deploys are RISKY and the business depends on deploys being
perfect.

I've seen this situation in video games (twice), a mobile app that had reached
the 100 million user threshold, and an old school data marketing company.

It definitely is a thing that exists.

~~~
alexandercrohde
What I would say, if I could rephrase, is this:

Sure, some shockingly-behind-the-times companies exist, and may even be a non-
negligible percentage. But I'm surprised this is front-page HN, because I
assume that the engineers on this website know that you don't need downtime
for a DB migration. It feels very common-knowledge among this community.

------
CaptainZapp
> In the old methodology, any schema change requires the entire application to
> be shut down while a team of experts (or one very overworked DBA) modifies
> the schema _manually_.

 _WHAT_ ?

I'm working in pretty much the environment described in the article. Big bank,
complicated database setup with multiple replicas and databases up to a size
of ~25TB.

On a (pre defined) release weekend there are essentially two entities to be
released: The application (obviously) and the database changes.

For any DB change a patch is created and tested through an environment of half
a dozen instances. If bugs are found on already deployed db patches they are
_never_ fixed in the patch, but a subsequent patch (also tested through all
environments) is created.

Needless to say, _anything_ provided that applies database changes is
version controlled.

The idea of applying database patches _manually_ is so damn outlandish as to
beggar belief.

Anybody doing this on a production database without a strictly controlled (and
tested!) break-fix process should be fired on the spot.

~~~
edoceo
Maybe training rather than termination?

~~~
CaptainZapp
I'd consider this such a gross case of malpractice, on the level of a surgeon
forgetting some instruments in a patient before sewing her shut.

Errors and mistakes happen, no question about that. And any ol' mistake
certainly shouldn't be a firing offense.

But this behaviour is so grossly incompetent that I don't believe it can be
fixed with any amount of training.

~~~
geofft
Surgeons are generally trained with a grueling multi-year academic and
practical program about how to conduct surgery. DBAs aren't. If you're lucky
you have someone who has been trained to be a DBA professionally - but I've
never worked with one, all the SQL I've interacted with has been run by people
with unrelated job titles who learned how to operate databases by Googling
when necessary.

You're welcome to fire people if you think it's going to help. I'm just
curious how you think a skilled, trained DBA is going to show up at your desk
when that happens.

~~~
C1sc0cat
You pay them money :-) Though I think the guy that was our DBA at BT has now
retired - his first boss was Diskaja at NPL labs.

------
peterwwillis
My lame attempt at clarifying DevOps confusion goes something like this:

DevOps is an analogue of Agile and Lean. Lean, which came from the Toyota
Production System, doesn't just exist in one department and in one position.
Saying that _"DevOps is <x position> doing <x thing>"_ is like saying _"Lean
is a factory floor worker pulling an Andon cord when a product is defective"_.
It massively understates all of the aspects of the method, and all the
different roles involved, and how they get work done.

Nothing about these methods prescribes that you _have_ to use X technology, or
that there is just one position that does work in one way. They prescribe a
method, and you choose how you implement that method, and lots of different
kinds of people in an organization are needed to make it all work.

I started a Wiki recently to try to give some more in-depth understanding of
DevOps
([https://devops.yoga/overview.html](https://devops.yoga/overview.html)). It's
still pretty bare-bones, but I think you can quickly get the idea that DevOps
isn't one particular thing, or one particular team, or one set of
technologies.

~~~
saint_abroad
Agile set out to deliver working software frequently.

DevOps set out to deliver working software frequently.

Both optimise for frequent delivery. The theory and practice of how "working"
is measured is where they differ.

------
thoughtexprmnt
Even though I've given in to using the "automated schema updates", or
migrations, approach (Technique 1 in the article), I'm still uncomfortable
with the idea of the application tier having permission/ability to manage
database schema and execute DDL. I feel that it removes a key layer of
defense-in-depth, and opens up the possibility for bugs and exploited
vulnerabilities in application code to have hugely negative impacts.

Even ORMs make me nervous for similar reasons, since they essentially enable
the application tier to pass into the database any/all queries and DML. But
like migrations, I allow ORMs for the convenience they provide during
development.

~~~
aschatten
The live application serving traffic does not need to have DDL permissions.
The migration code can live in the same code base/application, but can be
executed as a separate task with escalated privileges.

Ideally, the same build should be used in every environment, and all
configuration such as the db endpoint, username, and password must be external
to the build and fed from the environment. The migration command can be part
of the application, but only invoked automatically in the dev environment.
Most ORMs let you configure whether schema initialization happens at startup
or not.

During deploy, the migration part can be executed separately in the prod
environment, before the actual deploy. Different tools provide different
"hooks": e.g. Heroku has a release phase
([https://devcenter.heroku.com/articles/release-phase](https://devcenter.heroku.com/articles/release-phase)),
and in Spinnaker you can add a separate stage for db migrations
([https://blog.spinnaker.io/deploying-database-migrations-with-spinnaker-and-kubernetes-6e6761f9d28a](https://blog.spinnaker.io/deploying-database-migrations-with-spinnaker-and-kubernetes-6e6761f9d28a)),
which can use a different db user. In this case, migration functionality is
disabled at startup in the production environment, and the runtime db user
need not have DDL privileges, because by the time it starts up the migration
phase must have finished.
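The same-build-everywhere idea can be sketched in a few lines. This is a toy illustration (the env-var name and table are made up; sqlite3 stands in for a real server where the deploy task would connect as a privileged user and the runtime app user would lack DDL rights):

```python
import os
import sqlite3  # stand-in; a real server would enforce per-user privileges

def maybe_migrate(conn):
    # The same build ships to every environment; whether migrations run is
    # driven by configuration, not code. In prod this step would run as a
    # separate deploy task under a DDL-capable user, before app startup.
    if os.environ.get("RUN_MIGRATIONS", "false").lower() != "true":
        return False
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)")
    return True
```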

------
bcoughlan
Treat a database as any other service. SQL is its API. Don't break the API
contract of a view or procedure; create a new one and deprecate the old one.
Not rocket science.

------
vaultcool
Actually, running integration tests against a SQL RDBMS works really well. It
might be a tad slow, but CI can easily handle that. We have Docker and all of
that for this, and it works cross-platform.

~~~
rodeoclown
You can even do SQL DDL ALTER TABLEs in a Transaction on production, do the
change, run smoke tests in a nested transaction that are all rolled back when
complete, and if anything fails you roll back the DDL changes and are back
where you started clean.
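In engines with transactional DDL this pattern looks roughly like the following. Sketched here with Python's sqlite3 (SQLite also has transactional DDL); the table, column, and variable names are invented for illustration:

```python
import sqlite3

# Manage transactions by hand (autocommit mode, explicit BEGIN/ROLLBACK).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

conn.execute("BEGIN")
conn.execute("ALTER TABLE accounts ADD COLUMN currency TEXT")
cols_during = [row[1] for row in conn.execute("PRAGMA table_info(accounts)")]
# ... smoke tests against the new column would run here; pretend one failed ...
conn.execute("ROLLBACK")  # the DDL change is undone along with everything else
cols_after = [row[1] for row in conn.execute("PRAGMA table_info(accounts)")]
```

As the sibling comment notes, this does not work on MySQL/MariaDB or Oracle, where ALTER TABLE implicitly commits.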

~~~
tzs
Support for DDL in transactions seems to vary quite a bit between DB servers.

In mysql (and I think also mariadb) ALTER TABLE cannot be rolled back, and
also implicitly does a COMMIT on any in-progress transaction before altering
the table.

In PostgreSQL you generally can do ALTER TABLE in transactions, but there are
some restrictions and exceptions. If the alteration is to add a value to an
enum, it cannot be done in a transaction.

MS SQL Server seems to be OK with ALTER TABLE in the midst of a transaction,
according to this article [1]. Oracle seems to be similar to mysql.

Even the ones that support this well, like MS SQL Server, have restrictions on
other DDL, such as creating indexes.

I think I'd rather just assume no DDL in transactions and design my schema
update procedure accordingly, rather than asking whoever is designing the new
schema to try to limit themselves to changes from the old that avoid whatever
statements whatever DB server we are using doesn't allow.

[1] [https://www.mssqltips.com/sqlservertip/4591/ddl-commands-
in-...](https://www.mssqltips.com/sqlservertip/4591/ddl-commands-in-
transactions-in-sql-server-versus-oracle/)

------
C1sc0cat
"rather than using SELECT _. If you do use SELECT_ , don't assume that the
fields are in a particular order."

There are professional developers that use Select * EVER!

Maybe add write a simple Select, Insert and Update Statement to the FizBuzz
test.

~~~
porpoisely
You should never use "SELECT * FROM SomeTable" to retrieve data because the
column order isn't guaranteed. Also, you should always use column names to
access data.

However, "SELECT * FROM SomeTable" can be used to determine if a table has
data (for an "EXISTS" clause) or to "COUNT" the rows of the table. It's
industry standard.

You could also use something like "SELECT 1 FROM SomeTable" but it isn't
natural.

~~~
twic
> You should never use "SELECT * FROM SomeTable" to retrieve data because the
> column order isn't guaranteed. Also, you should always use column names to
> access data.

If you always use column names, why does column order matter?

~~~
0xFACEFEED
It's not just column order. Sometimes the columns you select affect
performance of the query.

~~~
greenleafjacob
More precisely, if you only select indexed columns, the engine can read from
the index alone rather than reading the index and then fetching the remaining
columns from the table. Usually the indices are entirely in memory as well, so
it's buttery smooth, whereas a read from the table will probably require a
disk read.
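You can see this index-only ("covering index") effect directly in SQLite's query plan output; the same principle applies to covering/index-only scans in other engines. The table and index names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")

# Selecting only indexed columns lets the engine answer from the index
# alone; SELECT * forces an extra lookup into the table for `payload`.
covered = conn.execute(
    "EXPLAIN QUERY PLAN SELECT a, b FROM t WHERE a = 1").fetchall()
uncovered = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1").fetchall()
```

The `covered` plan reports `USING COVERING INDEX idx_ab`, while the `uncovered` plan is a plain index search followed by a table lookup.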

------
andmarios
Sometimes devops, admins, DBAs, etc. are there for QA. It's not that they
don't know how to automate everything; it's that they know if you permit
someone to do a dumb thing, they'll probably do it.

It's one thing to talk about good practices and another to build a large
development team that can understand and follow them. By moving the
responsibility to devs, you tax your senior devs, who have to monitor middle
and junior devs more closely and sometimes micromanage them as a consequence.

A few schema changes a year are easier to test, justify, and —most
importantly— track than many small ones going on at the same time. How do you
even manage the schema version when there are multiple branches that require
changes and you don't know the order they will be merged in? Instead of a
version number, you start to use flags. How about deprecated features? How and
when do you clean up? Something else that might be important is performance
optimization.

I agree on the suggestions of the article and it's the way I try to write code
that uses a SQL store, just saying that sometimes in real life you have to
make things harder in order to stay safe.

------
ianamartin
Everything about this article is really weirdly written. It's glib in the way
that preachers are glib from the pulpit and trying to get you to just believe
something while glossing over your particular concerns.

Is "doing DevOps" impossible? "No! It's simply a difficult transition."

Oh, I see now. That makes it all better. Never mind that the author's "friend"
never claimed it was impossible. Rather that it was fundamentally a bad
decision.

It's like a parable, but a really bad one. I wonder when the author is going
to bring up Agile and just assume we're all true believers and that anything
that incorporates Agile must be good without question. Oh, there it is. Right
there.

So the difficult transition of DevOps is presented here as necessary and
appropriate because 1. Agile, 2. Faster, and 3. Because it avoids some
incredibly horrible practices the friend's DBA team does.

1\. I won't get into here. That's a different flame-war for a different time.
But people should know better than to write articles with the unquestioning
premise that Agile automatically justifies any and every difficult transition.

2\. I'm highly allergic to justifications that involve moving faster. Most
companies do not have a problem delivering value fast enough. They have a
problem delivering any value at all, and delivering a reliable product at any
pace. Faster time to value is the worst possible justification if you're
trying to convince me to experience some pain.

I am a big proponent of DevOps, and I bake it in to all my work, both at my
work-work and in my side projects. It's worth the difficulty to me because it
allows me to _reliably_ get my code from point a to point b. That it's perhaps
faster (is it really faster if you add up the person-hours? I don't know. I
don't really care) is a little extra niceness if it's there at all.

3\. DevOps has basically nothing to do with the pain that's causing the
situation described in this parable. DevOps won't fix bad DBAs or sloppy Devs.
DevOps won't fix a manager that allows this kind of situation to happen in the
first place. DevOps won't fix a company culture that allows Management to run
this kind of a dev shop. It's like the "friend" here is dying of dehydration
in the desert and this dude rolls up in his car and says, "Hey man, you really
need to eat more vegetables and less red meat and exercise at least 30 minutes
a day. It will totally change your life! Also, I have these essential oils
that will cleanse toxins from your spirit aura and help you focus on your life
goals. Good luck!"

------
luord
This is interesting. For the very first application I had to create from
scratch, the first thing I learned was about SQLAlchemy and the second was
Alembic. It's almost alien to me to see that there are large organizations
that have never heard of migrations.

------
gregw2
For scripting database changes so they are deployed with app changes... or so
they are automated and auto-tested upon deployment and in version control,
I've had good success with open source Liquibase. Worth looking at Flyway
also.

~~~
clemson20
For those using Microsoft SQL Server, we've had success using SQL Server Data
Tools (SSDT) to automate database deployments for a few years now. Prior to
this, developers would check in 'delta scripts', which were just all the SQL
changes that were manually applied. Now the entire schema for multiple
products is stored in source control as the source of truth, and deployments
themselves generate and apply the deltas.

------
exabrial
Great example of blaming the tool [SQL] to try and bring in hip technology
when the people are actually to blame.

We use Liquibase, which is event-sourcing for database schema changes, and a
database that supports online schema changes.

------
danellis
> There is always some fixed cost to deploy. You should always, in principle,
> be driving down the fixed cost of deployment toward zero. Increasing
> deployment frequency without driving down that fixed cost is detrimental to
> the business and irresponsible.

Surely it's the incremental cost you want to drive down. If there were only a
fixed cost, it wouldn't matter how often you did it.

------
ed_elliott_asc
Microsoft isn’t given enough credit for what they have done investing in SSDT;
it really is a game changer for SQL developers.

~~~
mdpopescu
Yep; discovering - and using - SSDT to replace a 3-day manual migration with a
half-hour automated one is still one of my "most proud moment" stories.

------
giobox
The premise of this article is just too bizarre for me - even the most
inexperienced software engineer today surely can see the fallacy of claiming a
SQL database prevents use of DevOps style practices.

Either the author A. met a crazy person, or B. invented the anecdote for the
sake of what they believed to be a snappy title.

~~~
hobs
context: I work as a DBA.

It is a bizarre premise, and it's alive and well at an insane number of shops
today.

When I walked in the door at my last gig (and a few others), they were hand-
running all deployments, and often manually troubleshooting in production when
it fucked up, because they didn't have a QA environment.

~~~
ianamartin
Everyone has a QA environment. Some people have separate production
environments.

~~~
dvtrn
You and I may be defining "everyone" differently.

~~~
Falling3
What the parent was getting at is if you don't have an environment devoted to
QA specifically, your production environment _is_ your QA environment.

------
unnouinceput
The technique described in the article would fail if you had instances of
different versions of the software, each expecting a different version of the
database, in a distributed database environment. So let me tell you how I did
it. My client was running multiple venues, each having a database with local
records, and the client (meaning the CEO, the company board, and middle
managers too, so in total more than 100 people running the software on their
laptops) would connect to one venue or even to multiple venues at once to get
reports of what was going on. So downtime of the venues was out of the
question.

Also, some of them would have other problems to solve and would fire up the
software on their laptop even a month later, meaning the venues would already
have a different DB schema. So to solve this I did the following:

\- each venue would have a table of SQL statements to upgrade the software's
local database. This table would also specify which schema No is the current
one.

\- each venue would have a table of software upgrades; that is, inside a blob
field sat the software binary for that specific version.

The software on the client laptops would also have a local database, used
mainly for aggregating reports from multiple venues, but also to automatically
perform a venue DB schema upgrade if required:

\- a table with SQL statements to upgrade the connected venue's DB schema to
the latest one.

\- a table of software upgrades, the same blob as described above.

So, when the CTO (usually he was the one receiving the latest and greatest)
would get a new software version from DevOps, he would basically receive it by
connecting his software to my DevOps DB. The software on his laptop would see
that it had a lower number than the connected DB schema No, access the
upgrade fields, get the new software version into a temporary file, and
restart the software. The restart process would upgrade the software from that
temporary file, run the necessary SQL commands to modify the local DB as well
if needed, and launch the software with the latest upgrade. Then the CTO would
connect to multiple venues, usually by just asking for the daily report on
them, and for each connected venue the software would see that the venue's
schema No was older. It would run the necessary SQL commands to upgrade those
venues, pushing the latest software blob as well, and then get the records to
compile the actual report. When the other members would connect to those
upgraded venues, they would receive the latest software as well, same as the
CTO received from my DevOps one. This ripple effect of upgrading software and
schema was done without a single second of downtime for those venues.

Also, the venues ran local software of their own, which was likewise restarted
and upgraded as soon as it detected the changes, in the same fashion as the
software on the client laptops. In practice, since the venues were scattered
around the US and Mexico, from Boston all the way down to California and
Mexico City, each took around 30 seconds, as they were all connected through
VPN. So the most downtime was experienced by the CTO, roughly 2 hours (~100
venues), and usually the guy would do this during lunch, leaving the laptop to
work while he attended other matters. The rest only saw the message "Upgrade
in progress, please wait" for 30 seconds, which was very acceptable. Also, the
local venue software had this message up for less than 5 seconds, since it was
connecting to a local DB, not over VPN.

I did this in 2010, and while I departed from that company in 2015, this
methodology is still in use by them today without hiccups.
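The core of the scheme (compare schema numbers, then apply the pending SQL stored in a table) can be sketched in a few lines. A toy version with sqlite3; the table and column names are my own, not the parent's actual schema:

```python
import sqlite3

def upgrade_venue(venue, client):
    # Compare the venue DB's schema number with the versions the client
    # carries, then apply each pending upgrade statement in order, bumping
    # the venue's recorded schema number as we go.
    (venue_ver,) = venue.execute("SELECT version FROM schema_info").fetchone()
    pending = client.execute(
        "SELECT version, sql FROM upgrades WHERE version > ? ORDER BY version",
        (venue_ver,)).fetchall()
    for version, sql in pending:
        venue.execute(sql)
        venue.execute("UPDATE schema_info SET version = ?", (version,))
    venue.commit()
```

The real system also shipped the new software binary as a blob alongside the SQL, which this sketch omits.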

------
evanelias
Ignoring the rather bonkers anecdote at the start of the article, I'd also
argue against one of the other recommendations here, even though it is quite
common in practice:

> The command in array index n upgrades the schema from version n-1 to n.
> Thus, no matter which version is found, the software can bring the
> database to the required schema version. In fact, if an uninitialized
> database is found (for example, in a testing environment),
> it might loop through dozens of schema changes until it gets to the
> newest version.

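For concreteness, the quoted incremental scheme amounts to something like this (a zero-based toy version; the migration list and version table are my own invention):

```python
import sqlite3

# Index n holds the command that upgrades the schema to version n+1
# (a zero-based variant of the quoted scheme).
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",  # 0 -> 1
    "ALTER TABLE users ADD COLUMN email TEXT",                 # 1 -> 2
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    current = row[0] if row else 0
    # An uninitialized database simply loops from 0 up to the newest version.
    for n in range(current, len(MIGRATIONS)):
        conn.execute(MIGRATIONS[n])
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (len(MIGRATIONS),))
    conn.commit()
```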
There are 3 major problems with traditional incremental migration strategies:

1\. Once you've accumulated years of migrations, it's unnecessarily slow to
bring up a new dev/test db. The tool is potentially running many ALTERs
against each new table, vs a declarative approach which just runs a single
CREATE with the desired final state.

2\. The migration tool does not have a way of handling databases that aren't
in one of the well-defined versioned states. This can happen easily on dev
databases, where a developer just wants to experiment before landing on the
final desired migration.

3\. Most of these tools require a developer to code (in a specific programming
language, or a proprietary DSL) the migration logic, as well as its rollback.
There's plenty of room for human error there, especially since most of these
tools don't automatically validate that the rollback actually correctly
reverts the migration.

In contrast, a declarative approach simply tracks the final desired state (a
repo containing a bunch of CREATE statements), and the _tool_ figures out how
to properly transition from any arbitrary state to the new desired state. This
permits the state to be expressed in pure SQL, with no need for a DSL or
custom programming, nor any need to define rollback logic.

A declarative tool can also be used in an automated reconciliation loop, which
can handle situations like a master failing mid-schema-change in a sharded
environment. This is analogous to other declarative tools, such as k8s.
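The declarative idea, stripped to its essence, is a diff between the desired and actual state. A toy illustration (real tools like Skeema, migra, and sqldef also handle types, indexes, constraints, renames, etc.):

```python
def diff_columns(desired, actual, table):
    """Emit the ALTERs that take the `actual` columns to the `desired` state.

    The repo stores only the desired end state (dicts of column -> type
    here, CREATE statements in a real tool); the tool derives the
    transition from any arbitrary starting state.
    """
    stmts = []
    for col, coltype in desired.items():
        if col not in actual:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {coltype}")
    for col in actual:
        if col not in desired:
            stmts.append(f"ALTER TABLE {table} DROP COLUMN {col}")
    return stmts
```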

There are several recent open source projects using the declarative approach
for schema management. I'm the author of one, Skeema [1], for MySQL/MariaDB.
Others include migra [2] for Postgres, and sqldef [3] for MySQL or Postgres.

Although declarative schema management is only recently picking up adoption,
it isn't a new approach. Facebook has been handling their schema changes using
a declarative approach for over 8 years now.

[1] [https://github.com/skeema/skeema](https://github.com/skeema/skeema)

[2] [https://github.com/djrobstep/migra](https://github.com/djrobstep/migra)

[3] [https://github.com/k0kubun/sqldef](https://github.com/k0kubun/sqldef)

~~~
tracker1
1\. This burden is only for new ground-up environments... you can turn to
snapshots/backups at specific points and start from that instead of running
all migrations from the start.

2\. This is where having a ground-up script is helpful. If you can
destroy/recreate a local dockerized db (as an example, if not specifically)
that's far less of an issue.

3\. Developers writing code against a SQL database should understand the
variant of SQL being used. This is also where things like code reviews can
minimize the issues. I'd also suggest that issues would be caught in QA or UAT
before production, provided decent test coverage prior to the production
release.

As for your contrasting approach, what about the example in TFA where you go
from a composite Name field into separate FirstName, LastName fields?

~~~
evanelias
1\. True, but it depends on the size of your data set. If you just want the
tables but not the full prod data, backups may not be usable as-is for this
purpose.

2\. What if you want to capture the set of local dev changes, rather than
destroying and starting over? Most migration tools that I've seen don't have
diff logic built in, so they have no way of saying "take whatever I did to the
dev db, and capture that state in the filesystem, so that I can turn it into a
pull request". Whereas in Skeema you can simply run `skeema pull development`
and then `git diff` / commit / etc.

3\. Agreed; but the more automated tooling, the better IMO. Otherwise it's
easy for a code review to overlook something like the wrong table name or
column name being used in a migration rollback, due to bad copypasta. And in
an environment where schema changes occur quite frequently, I'd be surprised
if QA always involves testing and confirming every single rollback.

re: your last question, that operation is a mix of DDL and DML... I'd agree
that the declarative approach doesn't natively handle it, since DML isn't
declarative. My personal advice as a database professional is to treat that
example as a data migration, managed separately from pure schema changes.
Migrations of this sort should be less common -- as well as riskier to execute
due to deploy-order considerations.

FWIW, I'm approaching all this from a background of consumer-facing web/mobile
tech companies, where schema changes need to occur quite often, and making
them automated/self-service is game-changing. Processes/requirements will
certainly be a bit different for traditional enterprise, finance, etc.

On a side note, it's really good to run into you here tracker1! It's been a
long time. Are you still involved in the BBS scene?

~~~
tracker1
On your FWIW, I understand. Different environments can have different
challenges. Right now I'm working on enterprise/corporate-style software that
is deployed into government organizations, where the versioning between Agency
A and Agency B may be at different deployments, and we want to bring both to
the current version prior to the next election cycle. Currently using a SQL
project in VS, which I'm not a big fan of, as I think upgrading could be very
challenging. Fortunately there's a long window between necessary releases, and
the other challenges with legal requirements get interesting.

On BBSing, I'm on several BBS groups via Facebook... I've been pretty hands-
off, but would like to get a board up. I put bbs.io on freedns.afraid.org for
dyndns support, and have BBS.land if I ever get around to doing anything with
it. It's hard to find motivation after hours to work on stuff. Also been
wanting to get my board back up.

Didn't realize you open-sourced tournament trivia. Was recently thinking of
reaching out to try and get a bigger doormud instance running.

~~~
evanelias
Awesome, I'll send you an email soon! Don't want to go too far off-topic in
this thread :)

------
djohnston
what?

