
Why Are People into Event Sourcing? - adymitruk
http://adaptechsolutions.net/eventsourcing-why-are-people-into-that/
======
btown
Event sourcing isn't nearly as common knowledge among new programmers as the
CRUD-one-row-per-entity pattern, and it really should be. I liken it to
introducing version control for your data; when immutable updates are your
canonical source, no matter how much the system behind them changes, or the
business requirements change, and no matter how many teams are deriving
different things from them in parallel, they can all work off of the same data
and "merge" their efforts together.

The one downside is that shifting your business logic to read-time means that
you need to have very efficient ways of accessing and memoizing derived data.
For some applications, this can be as simple as having the correct database
indices over your WhateverUpdates tables, fetching all updates into memory and
merging on each request. For others, you'll need to have a real-time stream
processing pipeline that preemptively gets your derived data into the right
shape in a cache. And those are more moving parts than your typical monolith
app, but the payoff in flexibility can be worth it.
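A toy sketch of that read-time fold in Python (the event names and fields here are invented for illustration): derive the current state by reducing over the immutable update log.

```python
from functools import reduce

# Hypothetical update events for one account, oldest first.
events = [
    {"type": "AccountOpened", "owner": "alice", "balance": 0},
    {"type": "FundsDeposited", "amount": 150},
    {"type": "FundsWithdrawn", "amount": 40},
]

def apply(state, event):
    """Fold one immutable event into the derived view."""
    if event["type"] == "AccountOpened":
        return {"owner": event["owner"], "balance": event["balance"]}
    if event["type"] == "FundsDeposited":
        return {**state, "balance": state["balance"] + event["amount"]}
    if event["type"] == "FundsWithdrawn":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state  # unknown events are skipped, keeping replays forward-compatible

current = reduce(apply, events, None)
print(current)  # {'owner': 'alice', 'balance': 110}
```

Memoization then just means caching `current` keyed by the log position you folded up to.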

One benefit to actually using event sourcing with a stream processing system
is that, in many cases, it can be the most effective way to scale both traffic
capacity and organizational bandwidth, much in the same way that individually
scalable microservices can (and fully compatible with that approach!). Martin
Kleppmann at Confluent (a LinkedIn spinoff creating and consulting on stream
processing systems) writes some great and highly-approachable articles about
this. Highly recommended reading.

[http://www.confluent.io/blog/making-sense-of-stream-processing/](http://www.confluent.io/blog/making-sense-of-stream-processing/)

[http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/](http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/)

~~~
blowski
The CRUD one-row-per-entity pattern is common because it's enough for most projects.
It works well with ORMs so you can build quickly and securely. And most of the
time, performance isn't an issue and having a history of an entity is
unnecessary.

I'm worried that event sourcing is going to become this year's over-applied
design pattern with libraries in every language for every database with blog
posts that recommend it be used on every project.

It's a good idea, very useful - in the right hands on the right projects. But
it makes sense that junior devs normally use CRUD because that's normally the
right solution. At least until better tools come along.

~~~
hcarvalhoalves
> The CRUD one-row-per-entity pattern is common because it's enough for most
> projects. It works well with ORMs so you can build quickly and securely.

If by "works well", you mean it works until someone asks for historical data -
then the IT guy has to say w/ a straight face "we lost it". This is unacceptable
considering the value of data and the strategic leverage it can have today.

Considering that immutable fact tables are the most stable data model, that
companies often have to re-invent them (poorly) on top of relational schemas
at some point, that storage is rarely a problem, and that clean historical
data is crucial for data science, there are increasingly few excuses not to
adopt a sane data model from day one.

I agree partially w.r.t. tooling - few implementations aid adopting this
pattern - but I believe the value of historical data, over time, outweighs the
convenience of slapping some quick Rails CRUD together and then being stuck in
a local minimum.

~~~
coldtea
> _If by "works well", you mean it works until someone asks for historical
> data - then the IT guy has to say w/ a straight face "we lost it". This is
> unacceptable_

You'd be surprised.

For tons of projects it's totally acceptable, has worked for years, and nobody
paying to implement them cares about historical data and its leverage. In
fact, the majority of web apps are like this.

I always find it strange when people use "unacceptable" with wild abandon,
like they're generals receiving some demand of unconditional surrender.

~~~
hcarvalhoalves
The full sentence is:

> This is unacceptable _considering the value of data and the strategic
> leverage it can have today._

The last part is important.

Just because it's been true in the past doesn't mean this trend will
continue. Maybe it keeps being true for your run-of-the-mill MVP, but I don't
see it being acceptable for a system in any industry w/ any chance of making
serious money in the mid-term.

~~~
blowski
As long as managers have limited budgets and projects have deadlines, then
tradeoffs will still have to be made.

Event sourcing is an extremely expensive design pattern to implement, and it's
also very easy to get wrong. Implementing it tends to preclude junior
developers from working on the project, makes it harder for database admins to
understand the data, and it requires a lot of thought on how to structure the
events.

So on a project with, say, a £20K budget, it might triple the cost. On a
project that would take 4 weeks to implement with CRUD, it might take 3 months
with event sourcing. You've got to justify that extra cost. It's better to let
a BA decide what they will need, and by all means explain the pros and cons of
different solutions.

But I don't for a second believe that every single project should now be using
event sourcing instead of CRUD.

------
EdSharkey
Here's the term I wish was unfashionable with the kids: reshaping.

Did you spot all those command-to-query-to-event-to-log-to-storage data type
conversions in those pretty diagrams? That's a whole bunch of needless
reshaping of data as it flows through the system.

For each one of those data transformations to be successful, there has to be
accurate communication between people and bug-free code for the data
conversion and routing of messages through the system. All those moving parts
make changing the system extremely painful, lotsa ripple effects - and every
time you have to change your events, you'd have a data migration project for
any running event streams.

Naming things is hard too, and there's a lot more naming of entities needed in
a CQRS-ES system.

I like all the promised benefits of CQRS and ES, but I can't imagine a case
where I'd take the risk of attempting it on anything but a toy project.
Perhaps if I was on the version 5 rewrite project for an insanely profitable
system where the requirements and design are completely understood up-front. I
would need to grok some canonical example of a large, well-architected, well-
implemented representative system before I would ever attempt to implement
one.

Are there any non-toy examples of successful CQRS-ES with open source
available to read? Did those projects go over-budget, and by how much? Would
the authors of those examples still recommend the architecture now that
they've gone through the experience?

~~~
rreppel
Open sourced ones? The largest example I'm aware of is
[https://github.com/MicrosoftArchive/cqrs-journey](https://github.com/MicrosoftArchive/cqrs-journey). There's a pretty
extensive write-up of their experiences too:
[https://msdn.microsoft.com/en-us/library/jj554200.aspx](https://msdn.microsoft.com/en-us/library/jj554200.aspx)

~~~
EdSharkey
I can't tell if this is a toy experiment or not.

------
taeric
As someone who has fallen for the "event sourcing" promise before, the
article does a decent job explaining the promise. Not sure if it will be the
next article, but the actual task of delivering on this work is where things
break. Hard.

The vast majority of the things you will ever program are pretty much
guaranteed to execute from one statement to the next. Hard boundaries, where things can
fail, are often decently understood and actually quite visible in the code.

Moving everything to be an event completely throws this out the window. You
can take a naive view, where you pretend that getting from one event to the
next is safe. However, building the system up to cope when this is not the
case quickly gets complicated - in areas that are decidedly not related to
your business domain. (Well, for most of us.)

Maybe some day there will be a system that helps with this. Until then, my
main advice is to make sure you have solved your system with a naive solution
before you move on.

~~~
rreppel
Agree with the potential for complexity. Here's how we've dealt with it (on a
so-far-so-good basis): it didn't seem necessary to go asynchronous and beef up
on heavy infrastructure, so we went with simple in-thread, in-memory message
buses to start with. I think a lot of the perceived complexity of building
event sourced systems comes from people starting with the heavy plumbing
instead of going YAGNI in order to get the domain model implemented. It's
easier to beef things up as needed once everything works.
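To make the idea concrete, here's a minimal sketch in Python of what such an in-thread, in-memory bus might look like (purely illustrative, not from any framework):

```python
from collections import defaultdict

class InMemoryBus:
    """Synchronous, in-thread message bus: handlers run immediately on publish."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # No queues, no threads: each subscriber is called in order, in-process.
        for handler in self.handlers[event_type]:
            handler(payload)

bus = InMemoryBus()
log = []
bus.subscribe("OrderPlaced", lambda e: log.append(("email", e["order_id"])))
bus.subscribe("OrderPlaced", lambda e: log.append(("ledger", e["order_id"])))
bus.publish("OrderPlaced", {"order_id": 42})
print(log)  # [('email', 42), ('ledger', 42)]
```

Swapping this for an asynchronous bus later only changes `publish`; the domain code stays the same.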

I became disillusioned with doing the naive solution first, for two reasons:

First, I found it to be impossible (from a time/project mgmt perspective) to
ever replace it with the "non-naive" one, so it turns into the usual mess,
because CRUD doesn't work well as you load more functionality onto it over
time.

Secondly ... these are thinking machines. From a business perspective, does it
still make sense to hand-code glorified rolodexes without behaviour? Maybe
Excel does the trick. I see it as a red flag if someone asks me to build dumb
data entry forms in 2016.

Therefore, I always start eventsourced these days. YMMV.

~~~
taeric
This seems solid advice. I guess my main questions are:

* If you are in-memory and in-process, why bother with the events in the first place? (Put simply, why not go with simpler process-based solutions?)

* If you are not testing distributed, how do you know you will be able to distribute?

In particular, there is a very large chance that you will have the same
difficulty in replacing an in-memory/process solution that you would have had
with a naive one.

And don't underestimate the amount of manpower you can get with success. Nor
the number of features that will not help you get there.

~~~
rreppel
I don't really do eventsourcing for technical reasons. To me it's about
creating an environment for project delivery where teams can succeed because
they own more of their stack, from an org chart & organizational perspective.

As far as code is concerned, I see that as coming down to managing coupling &
cohesion in such a way that the various pieces can be designed, built,
supported, deployed and enhanced by the same team, with minimal need to wait
for, coordinate with or be on the same page with any other team.

I think event sourcing & CQRS are great for enabling that because they result
in the lowest form of coupling: Commands are fire and forget. You don't care
who subscribes to your events. You don't care who publishes events you
subscribe to. You never query anything you don't own in order to get
information you need for your business logic. While the system is still small,
we seem to get these advantages just fine when we do in-memory and in-process.
Once package management, performance, units of deployment, etc. become an
issue, we build out to add infrastructure, as needed. Because the pieces are
pretty standalone within the code, it's not all that complicated to do,
compared to "traditional" CRUD systems.

Not testing distributed: It's possible to deduce a lot with testing locally if
it's simple to understand what's coupled to what and if good shared contracts
for commands and events are in place, but ultimately the proof is of course in
the pudding. When the time comes, there is a big advantage: The system is
already fully operational and testing comes down to "does it still work?" If
one were to do the plumbing before the business functionality and test that
in isolation ... does that really amount to knowing that it'll still be OK
once it actually runs what it needs running ...? For starters, there is a
danger of over-engineering, because the required performance characteristics
of the end product are more difficult to assess if there is no end product yet.

~~~
taeric
I think the problem is best seen by considering the statement "You don't care
who subscribes to your events." This is great when you are literally doing
something only on one end of the event creation barrier. This makes sense when
creating something takes a lot of effort.

However, for many many shops, this is an abstraction they will never get to.
They care intimately about who will subscribe to the events they publish,
because they are planning on doing something as the primary subscriber.

------
barrkel
Architecting around events has several ramifications.

For building up a picture of the world, it's pretty good. It's very nice to be
able to replay a log of events and recreate a view of the way things are
expected to be; if there's a bug in your code, you can fix it and repeat the
replay to get back into a good state (with caveats, sometimes later actions
creating events may be dependent on an invalid intermediate state). Whereas
mutating updates erase history, perhaps with some ad-hoc logging on the side
that is more often than not worthless for machine consumption.

For decoupled related action, it's not too bad. If you have some subsystem
that needs to twiddle some bits or trigger an action when it sees an event go
by, it just needs to plug into the event stream, appropriately filtered.

For coordinated action OTOH, e.g. a high-level application business-logic
algorithm, you need to start thinking in terms of explicit state machines and,
in the worst case, COMEFROM-oriented programming[1]. Depending on how the
events are represented, published and subscribed to, navigating control flow
involves repeated whole-repo text searching.

It's best if your application logic is not very complicated and inherently
suitable to loose coupling, IMO.

[1]
[https://en.wikipedia.org/wiki/COMEFROM](https://en.wikipedia.org/wiki/COMEFROM)

------
sanderjd
FYI in case the author reads this, since this seems to be intended as an intro
for people who aren't already familiar with this stuff: I didn't see "CQRS"
defined anywhere in this article or in the two or three links I followed from
it; they all begin with an assumption that you know the acronym, and delve
straight into details. It might be good to define some terms in the front
matter (unless I've misunderstood the target audience).

~~~
rreppel
Always a problem with techies - acronymania. :) Thanks, noted. I'll do an edit.

------
SEJeff
Two must-read documents for those who want to learn more about this method of
building reactive applications:

[https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying](https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying)

[http://martinfowler.com/eaaDev/EventSourcing.html](http://martinfowler.com/eaaDev/EventSourcing.html)

Note that Martin's blog is what inspired the event bus in
[https://home-assistant.io](https://home-assistant.io), an open source home
automation project I occasionally contribute to.

------
kazagistar
I've tried working out how to move to an event sourcing system, but I always
struggle with locking behavior. Do you just have to invent your own locking
mechanisms on top of event sourcing?

~~~
biot
Combining this with the actor model (using Akka or similar) gives you
guaranteed "one message at a time" processing, and you don't have to deal with
locks.
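A toy sketch of that idea in Python (a hand-rolled mailbox, not Akka): because a single thread drains the mailbox, the actor's state is updated one message at a time and never needs a lock.

```python
import queue
import threading

class CounterActor:
    """Toy actor: one thread drains the mailbox, so its state needs no lock."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:        # poison pill: stop processing
                break
            self.count += msg      # only this thread ever touches self.count

    def send(self, msg):
        self.mailbox.put(msg)      # producers only enqueue; they never touch state

actor = CounterActor()
for _ in range(1000):
    actor.send(1)
actor.send(None)
actor._thread.join()
print(actor.count)  # 1000 - no increments lost, no locks taken
```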

~~~
taeric
I question any amount of guarantees around "one message" anything. There might
be this guarantee per actor, but you have no such guarantee per system. And,
assuming a real system, this will be a problem.

So, you get to pick, "at most once" or "at least once." And then you need to
build your system to act accordingly.
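One common way to "act accordingly" under at-least-once delivery is an idempotent handler; a toy Python sketch (in a real system the processed-ID set would be durable, not in-memory):

```python
processed = set()  # would be durable storage in a real system
balance = 0

def handle(event):
    """Idempotent handler: redelivery of the same event id is a no-op."""
    global balance
    if event["id"] in processed:
        return
    processed.add(event["id"])
    balance += event["amount"]

deliveries = [  # at-least-once delivery: event "e2" arrives twice
    {"id": "e1", "amount": 100},
    {"id": "e2", "amount": 50},
    {"id": "e2", "amount": 50},
]
for e in deliveries:
    handle(e)
print(balance)  # 150, not 200 - the duplicate was dropped
```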

~~~
hew
Slightly shooting from the hip here (as I'm still learning).

Low-ish volume: design your system such that data flows [at the relevant
crucial points] through a single actor to ensure proper concurrency.

High volume: trickier, but I think the same idea in principle. The first
thought that comes to mind here is the new GenStage stuff in Elixir.

~~~
taeric
I'm not sure what you are suggesting. There is no getting around "at most/at
least once". You can shift the goalposts somewhat, but at some point you have
to make that choice. This is a good read on the problem:
[http://bravenewgeek.com/you-cannot-have-exactly-once-delivery/](http://bravenewgeek.com/you-cannot-have-exactly-once-delivery/)

~~~
eweise
"Exactly once delivery" is not the same as "one message at a time". Akka
actors process one message at a time. Akka does not provide exactly-once
delivery; it defaults to "at most once".

~~~
taeric
So, then, to go back to the original point:

You may not have to deal with locks at a local level. You absolutely have to
deal with locks at a system level.

------
tofflos
Axon Framework [http://www.axonframework.org](http://www.axonframework.org) is
a great place to start if you're into Java and want to get a feeling for how
event sourcing works.

There's also a great presentation by the developer, Allard Buijze, at
[https://www.youtube.com/watch?v=s2zH7BsqtAk](https://www.youtube.com/watch?v=s2zH7BsqtAk).

------
grandalf
There is a lot that could be done to make event sourcing easier to work
with...

Imagine tooling that allowed an event stream to be used to create state for
testing modules, crudlike helpers to allow crud-familiar developers to think
that way at first, and workflows based on snapshots, rewind, etc.

I think a model that used events that correlated to graph deltas rather than
crud deltas would be the cat's ass, and many queries about the near-current
state could be handled efficiently using ephemeral subgraphs as indexes
located at the network's edges.

If anyone wants to discuss and possibly build some of this stuff, let me know
:)

~~~
karmajunkie
> Imagine tooling that allowed an event stream to be used to create state for
> testing modules, crudlike helpers to allow crud-familiar developers to think
> that way at first, and workflows based on snapshots, rewind, etc.

i know where you're going with this, and i honestly believe it's a terrible
idea (not to be discouraging or rude - just experienced).

if your event streams contain mostly CRUD events (possibly ANY), then you're
most likely applying it incorrectly. It's not just a version history of your
data. The event type itself _is_ data, which provides context and semantics
over and above the notion of writes and deletes. If you're falling back to
CRUD events, all you're doing is creating a lot more work for yourself and
deriving almost no benefit from the use of ES - in that case, you should just
use CRUD and the ORM of your choice.

~~~
dragonwriter
> if your event streams contain mostly CRUD events (possibly ANY), then you're
> most likely applying it incorrectly. It's not just a version history of your
> data. The event type itself is data, which provides context and semantics
> over and above the notion of writes and deletes.

Right. A good way to think about this is that as with rows in an RDBMS, events
in an ES system are facts, and just as tables in an RDBMS define a category of
facts with a particular shape, event-types in ES do the same thing. The
difference is that whereas in an RDBMS the facts represented by rows can be
general (and are often, in many designs, facts about the current state of the
world), events are facts about a specific occurrence in the world rather than
the state of the world (and the "state of the world" is an aggregate function
of the collection of events.)

~~~
Terr_
Right^2: Good events are facts that occur at a higher level of abstraction,
trying to capture more of the "why" behind what goes on. It's not about
describing the effect on data, but the business-decision itself. (Which, when
reapplied to a set of rules, will do the actual data-change.)
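A toy contrast in Python (the event name and fields are invented):

```python
from dataclasses import dataclass

# A CRUD-style delta records only the effect on data:
crud_delta = {"table": "orders", "id": 7, "set": {"status": "cancelled"}}

# A semantic event records the business decision that caused it:
@dataclass(frozen=True)
class OrderCancelledByCustomer:
    order_id: int
    reason: str  # the "why" survives for later consumers

event = OrderCancelledByCustomer(order_id=7, reason="found cheaper elsewhere")
# A pricing team can later ask "how often do we lose on price?" -
# a question the bare row update above could never answer.
```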

------
impostervt
I was looking into event sourcing for a system I built recently, and the
tooling just doesn't seem to be that widespread yet. How do you read the
entire event stream to figure out the current state? While there are tools,
they seem to be .NET focused. There just didn't seem to be a "standard" answer
yet.

We ended up going with microservices that pub/sub events into Kafka, but
maintain their own databases. There's another microservice that lets you query
past events for statistics.

~~~
rreppel
We find that a simple in-memory synchronous message bus + event logging to
files goes a long way. See e.g.
[https://github.com/robertreppel/hist](https://github.com/robertreppel/hist)
for an in-memory bus + file system (and DynamoDB ...) helloworld which isn't
.net.

Scaling that up by adding asynchronicity and more ambitious plumbing when
needed seems reasonably straightforward. For something more out-of-the-box,
see [https://geteventstore.com/](https://geteventstore.com/) . It has clients
in a variety of languages. Comes with a nice HTTP API too.

I wouldn't normally read the entire event stream; usually, only the state of a
particular object (an aggregate, in Domain-Driven Design speak) is of interest,
e.g. the customer with id 12345. Events contain the aggregate ID, so the query
to whatever event store you use would be "give me all events with aggregate ID
12345".
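A toy Python sketch of that query-and-rehydrate step (event shapes invented for illustration; a real store indexes the stream by aggregate ID instead of scanning):

```python
# Toy event store: a flat log of events for all aggregates.
event_log = [
    {"aggregate_id": "12345", "type": "CustomerRegistered", "name": "Ada"},
    {"aggregate_id": "99999", "type": "CustomerRegistered", "name": "Bob"},
    {"aggregate_id": "12345", "type": "CustomerRenamed", "name": "Ada L."},
]

def load_customer(aggregate_id):
    """Rehydrate one aggregate from its slice of the stream."""
    state = {}
    for e in (e for e in event_log if e["aggregate_id"] == aggregate_id):
        if e["type"] in ("CustomerRegistered", "CustomerRenamed"):
            state["name"] = e["name"]
    return state

print(load_customer("12345"))  # {'name': 'Ada L.'}
```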

~~~
burnout1540
Are you using DynamoDB Streams at all? I've been toying with the idea of using
DynamoDB as an event store and having other services listen to a table's
stream, allowing them to update caches/views (the read-side of CQRS), report
analytics, perform asynchronous tasks, etc.

~~~
mbrock
You could quite nicely use AWS Lambda for the materialized views, I think.

------
mamcx
For some months now I have been trying to build a small test case for an
invoice app. I want a good sync strategy, and using ES sounds good. However, I
haven't figured out how to replicate the functionality of a normal app with
it: for example, how to avoid duplicates and, in general, how to do pre-save
validations. Also, I still need RDBMS tables to hold current data, and RDBMSs
don't have a good history of streaming back results.
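One toy way to do such pre-save validation in Python: derive the current state from the stream and check it before appending. (Names are invented; note this naive check is only race-free with a single writer or per-stream serialization.)

```python
invoice_events = []  # the append-only stream

def existing_numbers():
    """Derive the set of used invoice numbers from the stream."""
    return {e["number"] for e in invoice_events if e["type"] == "InvoiceCreated"}

def create_invoice(number):
    """Validate against current derived state before appending the event."""
    if number in existing_numbers():
        raise ValueError(f"duplicate invoice number: {number}")
    invoice_events.append({"type": "InvoiceCreated", "number": number})

create_invoice("INV-001")
try:
    create_invoice("INV-001")  # second attempt is rejected, not stored
except ValueError as e:
    print(e)  # duplicate invoice number: INV-001
```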

------
zarkov99
I have been working with this sort of pattern for a while, but I have yet to
find good texts exploring the topic. Does anyone have book or paper
recommendations for event sourcing? The stuff I have seen is mostly
programmers reporting on something that worked in their particular domain. I
am looking for something more rigorous and comprehensive.

~~~
karmajunkie
Lurk on the CQRS/DDD list [1], lots of good info there. I'm not aware of any
textbooks on ES per se but there are a few good books on areas that overlap.
[2] [3] [4]

[1]
[https://groups.google.com/forum/#!forum/dddcqrs](https://groups.google.com/forum/#!forum/dddcqrs)

[2]
[https://www.amazon.com/Enterprise-Integration-Patterns-Designing-Deploying/dp/0321200683/](https://www.amazon.com/Enterprise-Integration-Patterns-Designing-Deploying/dp/0321200683/)

[3]
[https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577/](https://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577/)

[4]
[https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/](https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/)

------
freditup
As an interesting comparison, some people see the Redux/Flux pattern as a
front-end parallel to event sourcing. [0]

[0]:
[https://github.com/reactjs/redux/issues/891#issuecomment-158693484](https://github.com/reactjs/redux/issues/891#issuecomment-158693484)

------
avodonosov
How strange - just today I heard the name "Event Sourcing" and thought I
didn't know what it was. (Turns out it is an old idea I knew under various
different names.) And on the same day I see Event Sourcing on HN. What's the
buzz?

~~~
karmajunkie
It's been slowly building steam (under that name) for about ten years, first
in .NET and now filtering out to other ecosystems. I think it's kind of
inevitable given the recent popularity of functional programming models.

------
willvarfar
Very curious: if you have multiple datastores, how do you ensure they are
consistent? If you scale sideways, how do you ensure nothing gets lost if
there's a partition? Etc?

~~~
PallarelCoedr
Embrace eventual consistency. A good deal of collaborative domains (things
involving human decisions) are naturally eventually consistent. Meat computers
appear to be particularly good at resolving conflicts and compensating.

------
GundersenM
Having been part of a project to rewrite a monolith e-commerce site into an
event-sourced, domain driven, CQRS system, let me tell you in which situation
that is not possible: when you already have data. Remember that in a DDD, ES,
CQRS system, the event store is the single source of truth. If you already
have data in a relational database, then the existing data is the source of
truth. You can't have two sources of truth, that completely defeats the
purpose. So it's not actually possible to migrate to an event sourced system,
you can only create one from scratch, with no existing data.

~~~
dragonwriter
Conceptually, that's not really true: you just transform the pre-ES state into
one or more events (in a basic accounting system, which is pretty much the
simplest ES system, long predating the name for the model, this is just
creating "starting balance" entries as transactions.)
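A toy Python sketch of that idea (names invented): seed each new event stream with a single fact that captures the legacy state, then let ordinary business events follow.

```python
# Legacy "current state" rows, as they might sit in the old RDBMS:
legacy_rows = [
    {"account": "A-1", "balance": 120},
    {"account": "A-2", "balance": -30},
]

# Seed each new stream with one event capturing that state:
migration_events = [
    {"type": "OpeningBalanceSet", "account": r["account"], "amount": r["balance"]}
    for r in legacy_rows
]
# From here on, the streams grow with ordinary business events,
# and replays fold OpeningBalanceSet like any other fact.
print(migration_events[0])
```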

In practice, that can be challenging, but it doesn't seem fundamentally _more_
challenging than any other legacy data conversion effort.

~~~
GundersenM
Sure, if the existing DB is simple, that is straightforward, but remember that
this is likely a monolith so bad that even management has agreed it needs to
be rewritten. Likely there are lots of DB tables with foreign keys and
relations (sometimes documented and enforced, most often not). This means you
can't really convert the entire database into an event sourced system, as that
means converting all of the tables in one single go instead of a gradual
change. And believe me, in a system like this you want slow, gradual changes!
Also, even if you got it into events, what happened to the domains? There are
so many relations between the different event sources (because you didn't put
everything into just one event source, right? What happened to bounded
contexts?) that you are no better off. And this means you have to prevent
anything else from using the database anymore, and in a legacy system where
you can just join across any two or three tables to extract whatever
information you want, you can be certain there are some analysis engines
feeding directly on the SQL data. And there might be other systems writing to
the database too!

So the first step is to disentangle all the data and encapsulate it, trying to
prevent others from using it, so you have full control over it. This includes
tracking down any other system using this data, and ensuring they too go
through the database. And you have to do this for one subsystem at a time,
often in several iterations.

~~~
dragonwriter
> Sure, if the existing DB is simple, that is straightforward, but remember
> that this is likely a monolith so bad that even management has agreed it
> needs to be rewritten.

Yeah, but that's not a "converting legacy data to ES" problem, that's a
"converting legacy data to any non-broken thing" problem.

> This means you can't really convert the entire database into an event
> sourced system, as that means converting all of the tables in one single go,
> instead of a gradual change.

Whether it's ES or something else you are converting to, you either do a
big-bang conversion and eat the pain of that (which can be tremendous, sure),
or you instead eat the pain of taking the monolith and finding a way to break
out components and do it incrementally, even though that takes not only
building the new components but re-engineering parts of the old monolith to
support that. Which, also, can be tremendous pain. But, again, this isn't
really essentially tied to event sourcing; you face this dilemma even if you
are going from a (broken for current needs, which is why it is being replaced)
classically-designed "current state" RDBMS-backed system to a (meeting current
needs, and hopefully more adaptable to future needs) classically-designed
"current state" RDBMS-backed system.

~~~
GundersenM
Yup, agree, this is the problem of having a legacy monolith RDBMS that needs
to be rewritten and split apart. It's tempting to throw every new fancy
technology at the problem when that is suddenly an option, but it's better to
focus on the goal of splitting it apart only. If you have split it out, and
it's now simple to convert to ES CQRS, then you are probably in a situation
where you don't need to do that, as it works quite well.

~~~
dragonwriter
> It's tempting to throw every new fancy technology at the problem when that
> is suddenly an option, but it's better to focus on the goal of splitting it
> apart only.

Splitting it apart involves:

(1) Dividing the data and functionality into a legacy component and a new-
implementation component,

(2) Making changes to the DB and application code for the legacy component,

(3) Implementing the new-implementation component.

In a monolith that you are breaking apart, the reusability of legacy code for
the new-implementation component is likely to be low (you'll actually likely
have to do extensive changes to the larger "legacy component" as well, but the
reusability should be somewhat higher there.)

You have to use _some_ technology for the new implementation component, and
what you should aim for is whatever is the best fit for the job, whether it is
similar to what existed before or not.

> If you have split it out, and it's now simple to convert to ES CQRS, then
> you are probably in a situation where you don't need to do that, as it works
> quite well.

I disagree. The hard part of converting to ES/CQRS for the components that are
broken out ("new implementation" components, not the "legacy" reduced-
monolith) is done in the analysis phase of what you are breaking out. Once
that is done, _implementation_ in a ES/CQRS manner is fairly straightforward,
since defining the events that the component will handle is a core part of
analysis, as is defining the impacts those events have on stored, reportable
data (the query side of CQRS).

~~~
fuck_google
"The hard part [...] is done in the analysis phase..." smells like big design
up front, which is usually more likely to fail than not, especially for
complex systems.

~~~
dragonwriter
Big design up front would be a complete system replacement, not incremental
replacement by component. An incremental replacement still requires definition
of the components to be replaced with new implementation and the part to be
essentially retained with only the changes necessary to interface with the new
component.

