
Git is Inconsistent - sheffield
http://r6.ca/blog/20110416T204742Z.html
======
davidmathers
Here's the short version:

    
    
      I am the original sentence.
    

Alice commits a change in her repo:

    
    
      I am a different sentence.
    

Bob commits a change in his repo:

    
    
      I am the original sentence.
      I am the original sentence.
    

Now Alice pulls Bob's commit. What should happen?

The argument is that in certain cases it can be known which of Bob's 2
sentences is the original and which is the copy (due to context provided by an
intermediate commit) and that therefore a correct VCS will figure out that the
original is on the bottom:

    
    
      I am the original sentence.
      I am a different sentence.
    

But git doesn't look at history so will always produce:

    
    
      I am a different sentence.
      I am the original sentence.
    

I don't care. If you force me to care then I actually prefer git's behavior.
Git is consistent: a merge will always produce the same result for the same
files. I don't want history to matter.

The problem is not actually solvable. So git doesn't try to solve it. I think
that's why it's called "the stupid content tracker."
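
The ambiguity is easy to demonstrate with any context-free, line-based diff. Here is a minimal Python sketch using the standard library's difflib (not what git uses internally, but it illustrates the same principle):

```python
import difflib

base = ["I am the original sentence."]
bob  = ["I am the original sentence.",
        "I am the original sentence."]

# A context-free, line-based diff describes Bob's edit as "keep the
# first line, insert a copy after it" -- it has no way to know that
# Bob may actually have put the new copy on top.
ops = difflib.SequenceMatcher(a=base, b=bob).get_opcodes()
print(ops)  # [('equal', 0, 1, 0, 1), ('insert', 1, 1, 1, 2)]
```

Because the diff pins the base line to the top copy, a 3-way merge applies Alice's edit there, which is exactly the "different sentence on top" result.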

EDIT: Is there anything worse than "smart" features that only work, say, 80%
of the time? The closer they get to 100% the worse it gets, because then you
start relying on them and they break right when you stop paying attention.

~~~
Peaker
> Git is consistent: a merge will always produce the same result for the same
> files

I thought the point was that if you pull the exact same commits in different
order the merge will produce a different result for the same files, meaning
that in git the history _does_ matter. Whereas darcs/etc will always produce
the same result, such that history does not matter?

~~~
davidmathers
_pull the exact same commits in different order_

Sort of. The OP doesn't write clearly. He's also confused about how git works.
What he means is:

Say Bob has 2 commits (B1-B2) and Alice has 1 (A1)

Scenario 1: Alice merges each of Bob's commits in sequence (i.e. she replays
his commit history onto her repo: A1-B1-B2).

Scenario 2: Alice merges only B2 (A1-B2).

The point is that, with git, Alice's repo will be different in each scenario.
Because in scenario 2 git doesn't examine commit B1 and use that info to try
and figure out what the content in commit B2 "means".

With darcs, on the other hand, her scenario 1 repo will be identical to her
scenario 2 repo.

The flip side is that in scenario 2 git will always produce the same result
for the same B2, because B1 is irrelevant. With darcs a change in B1 will
change the result.

NOTE: "git pull --rebase" actually does "replay commit history" instead of
"merge" when pulling code into your repo (result: B1-B2-A1). I use it as my
default. The outcome is the same as darcs, the difference is that everything
is explicit.

~~~
knowtheory
That's what i don't get.

I don't understand where or how you could encounter a circumstance where this
would matter. This complaint seems to be an abstract theoretical point (maybe
to support git alternatives? dunno) that even esoteric usage of a DVCS would
never come across.

I dunno, maybe i'm not being creative enough in my use of histories.

EDIT: Okay this explains everything in a considerably more concise fashion
than the article does: <http://news.ycombinator.com/item?id=2456529>

~~~
Groxx
It would matter in this situation:

In the beginning:

    
    
      function A(){
        return 1;
      }
    

Now commit this in one branch:

    
    
      function B(){
        return 1;
      }
      function A(){
        return 1;
      }
    

then this:

    
    
      function A(){
        return 1;
      }
      function B(){
        return 1;
      }
      function A(){
        return 1;
      }
    

And then this in another branch off the base:

    
    
      function A(){
        return 2;
      }
    

Now merge the two end points. Which is correct? This, assuming a purely line-
based diff:

    
    
      function A(){
        return 2;
      }
      function B(){
        return 1;
      }
      function A(){
        return 1;
      }
    

or this, assuming knowledge of the history of events?

    
    
      function A(){
        return 1;
      }
      function B(){
        return 1;
      }
      function A(){
        return 2;
      }
    

In JavaScript, where such code is legal (the last declaration of `A` wins),
`A()` now returns 1 or 2 depending on which merge result you got.

In Git, or by applying patches manually, it depends on the order in which you
merge. If you merge the `B()A()` branch with the `return 2` branch and then
the `A()B()A()` one, you'll get the second result. But if you merge the
`A()B()A()` branch directly with the `return 2` branch, you'll get the first
one. The same set of changes produces different outcomes.

In Darcs, the history between `A()`, `B()A()`, and `A()B()A()` is checked,
and it's seen that the second `A()` is the "original" one, so the `return 2`
is applied to that one.

Which means that you won't necessarily get the same behavior merging two Darcs
patches as you would merging them within the repository, where there is a
history. Git behaves exactly as if you were dealing with patches. I side with
Git on this, personally, but it's a valid point - you have history, why not
use it?
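
For what it's worth, a context-free line diff really does pin the base's `A()` to the top copy here. A small sketch with Python's difflib (a stand-in for git's diff; the alignment idea is the same):

```python
import difflib

base = ["function A(){", "  return 1;", "}"]

# The A()B()A() endpoint, ignoring the intermediate B()A() commit.
end  = ["function A(){", "  return 1;", "}",
        "function B(){", "  return 1;", "}",
        "function A(){", "  return 1;", "}"]

# With no history, the diff matches base's A() to the FIRST copy and
# treats B()A() as an insertion after it -- so a 3-way merge lands the
# `return 2` change on the top A().
ops = difflib.SequenceMatcher(a=base, b=end).get_opcodes()
print(ops)  # [('equal', 0, 3, 0, 3), ('insert', 3, 3, 3, 9)]
```

A history-aware tool, replaying the intermediate commit, can instead conclude that the bottom copy is the original.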

~~~
saalweachter
You know, I suspect that in many production cases, neither merge is "correct".
The example involves a lot of code duplication, and a change to the block of
code which was duplicated.

The probable case is something like:

    
    
      function foo(){
        do_something_complex_but_not_correct();
      }
    

with one person making the change to:

    
    
      function foo(){
        something_else();
        do_something_complex_but_not_correct();
      }
    

and then:

    
    
      function foo(){
        do_something_complex_but_not_correct();
        something_else();
        do_something_complex_but_not_correct();
      }
    

in the stated two-step change, while another author makes the change to:

    
    
      function foo(){
        do_something_complex_and_also_correct();
      }
    

The correct "merge" is going to be to apply the second change to _both_ blocks
of code, not just the first or the second:

    
    
      function foo(){
        do_something_complex_and_also_correct();
        something_else();
        do_something_complex_and_also_correct();
      }

~~~
Groxx
Which is why I side with explicit, patch-like behavior. Interpreting a
`move-and-copy` as a `move` when there's a chunk of duplicate data that could mess
things up means it's essentially doing a primitive semantic analysis of what
you _meant_ to do. It may be correct _more_ of the time, but it can't be
correct _all_ of the time.

What I "meant to do" could have been as you stated, where both should have
changed. Or I could have copied the internals of a function to a new one, and
made minor changes around it, and actually do wish to use that _new_ copy as
the official version. There is no way to 100% accurately detect such intent
without being explicit about it, so I'd prefer something dumb and therefore
extremely predictable.

------
tytso
I've contributed a tiny amount to git (the high-level "git mergetool"), so I
can't speak for all of the git developers, but I've spent enough time hanging
around them to say that the general feeling is that git's algorithm, which is
"3-way merge, and then look at the intervening commits to fix any merge
conflicts", is good enough.

You can always try to spend more time using more data, or deducing more
semantic information, but past a certain point, it's what Linus Torvalds has
called "mental masturbation".

For example, you could try to create an algorithm that notices that in branch
A a method function has been renamed, and in branch B, a call to that method
function was introduced, and when you merge A and B, it will also
automatically rename the method function invocation that was added in branch
B. That might be closer to "doing the right thing". But does it matter? In
practice, a quick trial compile check of the sources before you finalize the
merge will solve the problem, and that way you don't have to start adding
language-specific semantic parsers for C++, Java, etc. So just because
something _could_ be done to make merges smarter, doesn't mean that it
_should_ be done.

Something similar is going on here. Yes, if you prepend and append identical
text, a 3-way merge can get confused. And since git doesn't invoke its extra
resolution magic unless the merge fails, the "wrong" result, at least
according to the darcs folks, can happen. But the reason why git has chosen
this result is that Linus wanted merges to be fast. If you have to examine
every single intermediate node to figure out what might be going on, merges
would become much slower, since in real life there will be many, many more
intermediate nodes that darcs would have to analyze. Given that this situation
doesn't happen much in real life (notwithstanding SCM geeks who spend all day
dreaming up artificial merge scenarios), it's considered a worthwhile
tradeoff.

~~~
adambyrtek
Good point, and another argument for maintaining reasonable test coverage.
I'd even argue that a merge strategy that is too clever (like the one you
described) could be riskier than a dumb one. It could lead to a resolution
that is valid from a compiler standpoint, but semantically wrong, which makes
it even harder to discover.

------
yuvadam
To quote Johannes Schindelin [1] :

    
    
      This all just proves again that there can be no perfect merge strategy;
      you'll always have to verify that the right thing was done.
    

[1] - <http://thread.gmane.org/gmane.comp.version-control.git/105748>

~~~
Andys
Amen. There's no way I'd ever do this in a real code base without checking
that the result is what I intended.

~~~
ams6110
Yes, I also always look at diffs after a merge, and also before I commit.
Several times I've caught changes that I really didn't want going back into
the branch.

------
saalweachter
Is there any reason to assume that merges should be associative? Hell, of the
four normed division algebras, only three are associative; just because you
can say "operations on octonions should be associative" doesn't mean that you
can necessarily create a system of octonions where it's true.

For what it's worth, "git pull --rebase" does enforce a specific order to
changes (local changes always happen after remote changes) which will produce
the same results regardless of when user Bob pulls user Charlie's changes:
regardless of whether Bob pulls change c1 after committing both b1 and b2 or
after committing b1 and before committing b2, the final commit order will
always be "a, c1, b1, b2".

Of course, if Bob commits and pushes b1 before Charlie commits and pushes c1,
the final commit order will be "a, b1, c1, b2", but how could it ever be
otherwise?

~~~
pjscott
There are ways of making a DVCS that allow all merges to be associative, and
all patches commutative except when there's a causal dependency between them,
e.g. if patch A creates a file, and patch B edits that file, then they cannot
commute. I believe darcs makes these guarantees, and making a correct
implementation is relatively straightforward. (Making it _fast_ is more
complicated, but definitely doable.)
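
The commutation idea can be sketched for the simplest possible case, patches that only insert lines. The representation and helper names below are made up for illustration and cover none of the subtleties of real darcs patch theory:

```python
def apply(doc, patch):
    """Apply an insert patch (pos, lines) to a list of lines."""
    pos, lines = patch
    return doc[:pos] + lines + doc[pos:]

def commute(p, q):
    """Rewrite (p then q) as (q' then p') with the same net effect.

    p and q are insert patches; q's position is expressed in the
    context that exists after p has been applied.  Returns None when
    the patches overlap, i.e. q causally depends on p.
    """
    (pi, pl), (qi, ql) = p, q
    if qi >= pi + len(pl):          # q is entirely after p's insertion
        return (qi - len(pl), ql), (pi, pl)
    if qi <= pi:                    # q is entirely before it
        return (qi, ql), (pi + len(ql), pl)
    return None                     # overlapping: cannot commute

base = ["x", "y"]
p = (0, ["a"])          # insert "a" at the top
q = (3, ["b"])          # insert "b" at the end (position after p)

one_way = apply(apply(base, p), q)
q2, p2 = commute(p, q)
other_way = apply(apply(base, q2), p2)
print(one_way == other_way, one_way)  # True ['a', 'x', 'y', 'b']
```

Merging is then, roughly: to bring q into a branch that already has p, commute q into p's context. Since commutation is a deterministic rewrite, the result cannot depend on the order in which patches arrive.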

Ultimately, though, what you really want is for the VCS to just do what you
mean. That's a lot trickier than providing mathematical guarantees about patch
reordering and convergence.

------
KirinDave
Not to be grumpy about it, but git's shortcomings are well-known and most
people don't run into them on a daily basis.

Some DVCS, like Darcs, might behave better, but they all seem almost comically
slow even for medium-sized repos. If I have to sacrifice git's speed for
certain types of correctness (that don't trouble me on a daily basis), I will
be VERY reluctant to make that choice.

------
nevinera
> There are still some people who still think nothing is wrong with git; that
> it is okay for the result of a merge to depend on how things are merged
> rather than on only what is merged; that is it okay for two git repositories
> that pull the same patches to have different contents depending on how they
> pulled those patches. I don’t know what to say to those people. Such a view
> seems like insanity to me.

Git merges _files_ , not file-histories. Git's behavior is simple, clear, and
easy to understand.

I can see why you might _expect_ merges to be associative like this (it would
be an elegant property, if it were true), but why does it matter to you? In
what way do you use merges that could rely on this expectation?

~~~
Peaker
Here's a quote from the article explaining what would rely on this
expectation:

> There are still some people who still think nothing is wrong with git; that
> it is okay for the result of a merge to depend on how things are merged
> rather than on only what is merged; that is it okay for two git repositories
> that pull the same patches to have different contents depending on how they
> pulled those patches. I don’t know what to say to those people. Such a view
> seems like insanity to me.

~~~
jerf
I think I kind of know what the author was getting at, but I'm not sure, and
ending with the moral equivalent of "If you disagree with me, I guess you're
just stupid" was a bit disappointing.

I _think_ the idea is the potential problems with this could emerge if you
have two people simultaneously doing somewhat larger complicated merges that
have this core problem perhaps more than once. I think that may be true, but
the probability of this occurring is well below that of plain old-fashioned
human screwups, and the solution to both ("laborious history comparison,
examination, and a reset --hard to a hash by somebody") is the same. I really
don't see how fixing this would solve any real-world problem.

~~~
copper
I thought the author was trying to get at the old Babbage quote on confusion
of ideas.

FWIW, I use git-svn to handle complex merges in svn because git has a better
merge algorithm. While this particular situation doesn't affect that use case
- I think it could, but it should be rare with (svn) branch discipline - the
fact that it might is something to keep in mind.

------
ob
There are two things most commenters in this thread have missed:

1) The article talks about auto-merges. If the code is "too close" by some
definition of close, you get a conflict that needs to be manually merged. The
article does NOT talk about manual merges.

2) The article is titled "Git is Inconsistent", it doesn't claim Git is WRONG,
it claims it is INCONSISTENT. It does different things depending on how you
merge and when.

I think consistency in a DVCS is a desirable goal. It should not matter
whether you pull A then B, or pull B then A, or whether given a series of
commits, you pull after each one, or just once at the end. The end result
should be the same.

That it is a rare occurrence only makes it worse. You will mostly trust the
auto-merge algorithm until you hit the corner case and it will be _very_
expensive in terms of time/money to fix the mistake.

Git's brilliance/stupidity is precisely that it _only_ tracks contents, so
although it _could_ get the right answer, that design makes it very expensive
to do so.

~~~
davidmathers
_The article is titled "Git is Inconsistent", it doesn't claim Git is WRONG,
it claims it is INCONSISTENT._

Ok. The claim that git is inconsistent is wrong. From OP:

 _The problem with git’s merging is that it doesn’t satisfy the “merge
associativity law” which states that merging change A into a branch followed
by merging change B into the branch gives the same results as merging both
changes in together in one merge._

There is no such concept in git as "merging both changes in together in one
merge".

 _I have modified a shell script written by Simon Marlow that illustrates,
using git, how merging two patches separately can give different results than
merging two patches together._

The shell script doesn't do what is claimed. It can't because git has no
facility for "merging two patches together". Git can only do 2 things with
patches:

1. generate a patch

2. apply a patch

But! git has a function which is equivalent to combining 2 patches in a single
merge:

git pull --rebase

The shell script does not use this command. It first applies 2 patches
separately. It then applies 1 patch separately.

 _There are still some people who still think nothing is wrong with git; that
it is okay for the result of a merge to depend on how things are merged rather
than on only what is merged; that is it okay for two git repositories that
pull the same patches to have different contents depending on how they pulled
those patches. I don’t know what to say to those people._

This is just incoherent. I have no idea what to say in response because I have
no idea what the intended meaning is.

~~~
ob
If you _never_ merge, but only use "git pull --rebase", you will have a
straight line history and thus lose all of the "distributed" nature of the
history. That's fine, but limiting. Any system that allows distributed
development has to deal with parallel work that gets merged in stages.
Otherwise you are no better than diff/patch. (FWIW, rebase merges before
rebasing, so it is also vulnerable to this problem: rebasing just A and then
rebasing B is NOT the same as rebasing A + B.)

See: <http://pastebin.com/SxmwpFkY>

~~~
davidmathers
OP is saying something like "when I cook things with my freezer they don't get
hot." It's that nonsensical.

Git can't do (at all) what he wants to accuse it of doing wrong (because it
has nothing to do with what git does). So I'm just pointing out the closest
approximation to what he's aiming at is to use pull --rebase.

Personally I like to have a straight line history as a default and only merge
when required. Rather than always merge by default.

Edit: Ok, I'm not sure I understand the point of the pastebin. Maybe. If you
want the lower C to become X you need to git checkout master and then git
rebase c. Not the other way around. Is that it?

~~~
ob
> OP is saying something like "when I cook things with my freezer they don't
> get hot." It's that nonsensical.

No, OP is saying "when I cook my food in the microwave for 3 minutes, I get it
to a _very_ different temperature than if I cook it for 1.5 minutes first and
then another 1.5 minutes"

------
Groxx
Super-simple-summary:

Git doesn't use history to determine merge behavior (edit: in this
circumstance). Git behaves like applying patches. Darcs uses the history to
make "intelligent" patches.

It's a matter of taste. If you look at Git as _having_ a history, and
therefore as something that should _use_ that history, then yes, it's
incorrect. But if you look at it as a patch manager, it's behaving as it
should, and Darcs is frighteningly unpredictable - the line numbers in a patch
might not match the lines it actually modifies.

I side with Git on this. I can generate patches from Git that will work
anywhere, and use them 100% identically within Git as manually applying them.
The same cannot be said for Darcs.

~~~
ob
Of course Git uses history. It doesn't _have_ to, but it does. As a matter of
fact, as soon as you use diff3, you are using history (that's where the GCA
comes from).

~~~
Groxx
Know which situations it does use it, similar to this setup? Apparently not
for moves, any other potential gotchas? I prefer patch-like behavior, because
it can be predicted by looking at the patch.

------
__david__
After reading this it strikes me that git is imperative--it stores files as
they were when you checked them in and merges what you tell it in the order
you tell it.

Darcs, however, is more declarative--it stores patches. And not just patches
but patches with dependencies. This set of patches describes how the current
state of the repository is constructed. So when you merge you're really just
adding new patches to the repo and it knows exactly what to do to make it
work.

The interesting thing is that git _has_ all the information there... It
_could_ go through the relevant history, diff everything and put the resulting
patches in a darcs-like data structure and then commute patches with darcs'
patch theory.

But in the end I'm not sure I'm ready to call darcs' style _right_ and git's
_wrong_. Both of them have fairly easy-to-understand object models, and they
both have merges that act in accordance with the internal philosophies of
those object models.

------
etherealG
I agree with you completely, but I want to know how this can be fixed in git.
Surely there has to be something about the merging algorithm that can be
changed to fix this, and if that's the case we can just patch it and move on.

What is the specific problem with the algorithm that causes this?

~~~
pmjordan
I assume this will reduce the quality of the merge algorithm from a
stand-alone point of view, which is presumably a very hard sell.

~~~
etherealG
I don't know that this is true for sure; perhaps introducing the patch would
increase its quality. If someone offered such a patch we could discuss it;
instead the article only shows the broken test case. It's almost a darcs plug
without any reasoning.

~~~
etherealG
see the link below posted by tonfa, seems this patch isn't worth it anyway :)

------
daviddavis
I wonder how mercurial compares in this aspect. Also, I'll keep using git
because for sure, it's a helluva lot better than SVN or CVS (which my company
was using when I got there).

~~~
tonfa
Same as git, and you'll probably get the same reactions.

"""

In other words, we're already at the point of significantly diminished,
possibly negative returns on effort. The last few percent will always require
some level of human-equivalent intelligence. I think effort here is much
better spent elsewhere, like researching general AI or playing on waterslides.

""" [http://thread.gmane.org/gmane.comp.version-
control.mercurial...](http://thread.gmane.org/gmane.comp.version-
control.mercurial.general/26109/focus=26110)

~~~
etherealG
Thanks so much for this link, this is exactly the kind of analysis I was
hoping for. Clearly this is all a bit FUD, and darcs, which gets this right,
is trying too hard. I wonder how fast the general merge algo that darcs uses
to get this right is? <trollface>

~~~
tonfa
Matt's point is that while some algorithms will fix this particular case, you
can still come up with a different edge case that makes them break. The whole
"perfect merge tool" idea was very popular five years ago (during git's and
mercurial's infancy), but it didn't lead anywhere.

Simple merge strategies are "good enough" in practice.

~~~
ob
Matt's point is that they've chosen a system that makes it really hard to get
that last 10%.

"We have tried to draw spirals using cartesian coordinates, what we have gets
us 90% there, but there are infinities and edge cases involved in getting a
perfect spiral. The equations describing them would get so complicated it's
just not worth it."

What we have in BitKeeper is the equivalent of polar coordinates... it makes
drawing spirals much, much easier ;)

~~~
tonfa
Do you have a page describing how that would differ? The bk website seems
awfully outdated: there's no mention of the existence of other DVCSes, there's
a quote from MySQL being happy with bk (they switched to bazaar two years
ago), etc.

It would be nice if you could give some examples where bk gets the merge right
while git doesn't.

~~~
ob
Yeah, the website is awfully outdated and information free. BitMover is
working on it.

One example that bk gets right and git doesn't is precisely the one explained
in the article.

------
jojo1
Hmmm, nobody seems to care: <http://article.gmane.org/gmane.comp.version-control.git/105748/>

------
tzs
The article mentions that some systems do have the associativity property--
that is, extra rungs in the merge ladder do not affect the result.

I can see how that can be achieved in the case of fully automatic merges. When
merging B2 into C1+B1, you'd effectively un-merge C1+B1, merge B1 and B2, and
then merge C1 and B1+B2.

But how would that work if C1+B1 had a conflict that had to be manually
resolved? Assuming merging B1+B2 into C1 has the same problem (a fair
assumption), will I have to do the same manual fixes again?

Or are they smart enough to look at the failed automatic C1+B1 merge, and
generate a patch to that from the manual fixes I did, and then try to use
those to resolve the merge of C1 and B1+B2?

I suspect there will be cases where this is just not going to work well.

------
dmoney
Off topic, but the link to the shell script and the images in the article use
Data URIs, which I didn't know existed:
<http://en.wikipedia.org/wiki/Data_URI_scheme>

------
gnosis
Does anyone know how bazaar would handle this?

~~~
tonfa
Just try it or check the source. If they use patience diff or some kind of cdv
merge, I expect they would get the same merge in both directions.

------
mml
hmm. i was hoping the article discussed git's mind-bogglingly horrible user
interface.

can't have everything i guess.

~~~
hasenj
git's UI is great; as long as you understand how it works.

The good thing is: "how it works" is really simple.

You should treat it like a language (just like all system/unix tools), not an
"app".

~~~
Peaker
I think git is one of the best tools we have, but its UI is really bad:

_checkout_ and _reset_ do completely different things when given files or
when not given files.

_reset_ on files should really have been called _unadd_. _reset_ on refspecs
should really have been _jumpto_, _moveto_, or something else indicative that
the current branch ptr is moved to a new refspec. _--soft_ and friends could
have been _--no-update-index_ or _--no-update-files_.

_checkout_ on files should really have been called _overwrite_. _checkout_ on
branch names should have probably been _switch_, _setcurrentbranch_, or a name
indicative that the current branch is being changed.

_pull_ and _push_ are symmetric names for asymmetric behavior. _pull_ could
have been a flag for merge (_-f_ meaning _fetch first_).

_reset --hard_ was for a long time the only way to move a branch ptr to a new
position along with the files, but it has the potentially _unintended_
consequence of also irreversibly deleting working tree changes. If you use it
to delete, that's fine, but since you _had_ to use it to move the branch ptr,
it is simply wrong to have irreversible damage as a side effect. Especially in
an RCS which is used by many as the fail-safe against their own user mistakes.

There's no easy way to see which branches are tracking what. And until
recently it was a big PITA to even make the current branch track a remote
branch.

Deleting remote branches has awkward syntax (pushing an empty string to a
branch name), and then you have to use a specialized command (_remote prune_)
if you want the deletion to be propagated to other repositories.

Another annoyance: Git doesn't let you push a detached head to a new remote
branch, so you have to create a temp branch ptr to the detached head position
and later delete it.

Git also doesn't have good support for versioned sub-projects. submodule is
sub-par, and requires a multitude of extra commands even in the cases that
should have been seamless.

~~~
cmurphycode
"checkout and reset do completely different things when given files or when
not given files. reset on files should really have been called unadd. reset on
refspecs should really have been jumpto, moveto or something else indicative
that the current branch ptr is moved to a new refspec. --soft and friends
could have been --no-update-index or --no-update-files."

I can understand your confusion, given the seemingly separate use cases for
reset, but in fact, it makes perfect sense. Reset always does what it says it
does. Let's break it down:

git reset --mixed <commit> will make your current HEAD point to <commit>,
reset the index to <commit>, and leave your working tree alone. This is useful
for "uncommitting" the last commit, e.g. so you can split it up into smaller
commits. Example:

    
    
      git commit -am "lots of changes"
      # realize you should really do better
      git reset --mixed HEAD~1
      git add myfile.py
      git commit -m "implemented feature x"
      git add yourfile.py
      git commit -m "bugfix #3182"
    

Handy. Now let's look at the "unadd" scenario:

    
    
      git add dontstage.py
      # same as: git reset --mixed HEAD dontstage.py (--mixed is the default)
      git reset HEAD dontstage.py
    

git doesn't touch your commits, since you are already on HEAD. Git does reset
the index to HEAD, which is before you added dontstage.py. If you had other
changes that you added, it won't reset those, since you provided the limiter
of dontstage.py. Git does not touch your working tree, so dontstage.py stays
modified. The end result? Your working tree, index, and commits look exactly
like before you ran git add dontstage.py.

Now, if someone (e.g. easy git: <http://people.gnome.org/~newren/eg/>) wants
to rename git reset HEAD to unadd, that's fine by me. I'm speculating here,
but I imagine that the Linus/git dev point of view is: why call it anything
other than exactly what it is? It's just nice and elegant that it happens to
suffice for multiple use cases.

The more you get into git, the more you start to realize why some of the
commands that seemed arcane in the beginning are simple and elegantly named.

~~~
Peaker
Even after your explanation, the names "reset" and "--mixed" make no sense to
me. "reset" is not indicative of what's being reset. "--mixed" is almost
meaningless. "--soft" and "--hard" are also mostly meaningless.

I'm OK with having a low-level primitive like "reset" that doesn't have a
simple meaning so cannot have a meaningful name. But then, it should be
wrapped with meaningful commands such as "moveto" with flags to avoid touching
index or working tree, and "unadd" on top of "reset". Then, I don't think
anyone would ever use reset directly, so it would probably be phased out :-)

