
Tumblr Has Lost Almost a Quarter of Its Value Under Yahoo’s Ownership - prostoalex
http://www.buzzfeed.com/williamalden/tumblr-has-lost-almost-a-quarter-of-its-value-under-yahoos-o
======
tptacek
This is a misleading, bait-y title. Yahoo took a write-down on Tumblr, but
that doesn't mean it "lost value" post-acquisition. A far more likely reason
for the write-down is that Tumblr was overvalued at acquisition. Paying too
much for something is not the same thing as reducing its value.

The title leaves the impression that the way Yahoo has managed Tumblr has
harmed the site. It may have, but the article presents no evidence to back
that argument up.

~~~
Marneus68
What did you expect from a fair, balanced and accurate source like Buzzfeed?

~~~
tptacek
Nothing, but this warrants a better title. My suggestion:

"Yahoo writes down 230 million dollars on Tumblr".

------
ikeboy
>Yahoo, which bought Tumblr in 2013, said it had reduced its valuation of the
blogging service by $230 million, or about 23%. The move was basically an
acknowledgment that Yahoo overpaid in the $1.1 billion deal.

Either it lost value, or Yahoo overpaid. Both can't be true at the same time.
Make up your mind, buzzfeed. Was Tumblr worth $1.1 billion and has now
declined, or did Yahoo overpay?

~~~
elthran
If the "correct" valuation was halfway between the two valuations, then they
could have overpaid at acquisition, and it could still have declined in value
since.

Of course, value is relative and only based on what anyone will pay for it.
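
To make that concrete with invented numbers (only the $1.1B price and the
$230M write-down come from the article; the split between overpayment and
decline is purely hypothetical):

```python
paid = 1.10e9          # acquisition price (from the article)
fair_at_buy = 1.00e9   # hypothetical "correct" value at acquisition
now = 0.87e9           # implied value after the $230M write-down

overpaid = paid - fair_at_buy  # overpayment at acquisition
declined = fair_at_buy - now   # genuine decline since the purchase
print(overpaid / 1e6, declined / 1e6, (overpaid + declined) / 1e6)
# prints 100.0 130.0 230.0 -- both claims hold at once
```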

~~~
ikeboy
The title implies the entire decrease was after the purchase. That's
inconsistent with the actual article.

------
Grue3
Yeah, and how much value has Twitter lost since the IPO? Or Snapchat since not
selling to Facebook for $6B. Surely nobody believes these valuations were
reasonable in the first place.

------
CPLX
This is a write-down; they could change the value on the books to basically
any number they want, assuming they can get an accountant to say it's not
crazy.

This has everything to do with Yahoo messing with its balance sheet and
manufacturing an earnings number and very little to do with the value of an
independent Tumblr.

~~~
jliptzin
Once again, journalists are writing about things they know nothing about and
drawing conclusions they shouldn't from standard Wall St
shenanigans/accounting games (not expecting much from Buzzfeed). Another
example: every tech IPO that pops on day one is spun as a company in huge
demand and doing so well, when in reality the whole IPO was orchestrated in
the first place for a huge first-day pop so bankers could hand hefty returns
to their most valuable clients.

------
tacos
Prepping for a sale or spinoff. Yahoo's hoping that by writing Tumblr down by
$230 million they can slip the new valuation past auditors.

------
p4wnc6
When companies lose value (or are overvalued), why don't we ever highlight it
when they use Agile / Scrum? Since those techniques are expressly marketed as
value- and productivity-enhancing, I think it's very fair to call attention to
cases where they aren't working.

Of course there are confounding factors. Maybe Agile/Scrum are working well at
Tumblr, but other things are keeping it from achieving a higher value. Or
maybe Agile/Scrum bleed away productivity and fail to extract value from
engineers. Or maybe it's a mixture.

In any case, it seems useful to point it out, ask questions about it, and so
on. We shouldn't let Agile-adhering failures go uncalled-out, whatever the
post-mortem analyses might show.

~~~
bluejekyll
What methodology do you subscribe to?

Agile, when implemented and executed correctly, really just shortens delivery
cycles and allows for small corrective changes along the development path.
What it's best at is delivering a product that aligns more closely with needs
than something like Waterfall does.

In many cases the overall delivery time might be longer with Agile, but the
correctness of the project is better, which means less QA at the end of the
cycle and more throughout the entire process.

~~~
p4wnc6
I've only experienced Agile lengthening delivery cycles, but more critically,
Agile also greatly reduced quality, because the lack of global context and
long-term planning caused the products to suffer. All of the micro-planning to
make tinier-scale adjustments merely _sounds like_ it should incrementally
improve quality. But when your short-term sprints aren't informed by long-term
planning, what happens is more like a random walk: some sprints randomly add
quality, some randomly reduce it, and overall quality grows slowly, like
sqrt(n), due to this random-walk effect.

I also see a lot of the No True Scotsman fallacy here: by saying "when
implemented correctly," Agile, seemingly by definition, cannot fail.

The teams I've seen be most effective at delivering quality products quickly
always just created their own methodologies, and they would scrap them and
change them often particularly based on changing needs of the project, new
personnel, and so forth.

Instead of the Agile approach of trying to say there is a one-size-fits-all /
cookie-cutter solution (e.g. just use this timeframe for every sprint, just
use this way of dividing up work for every project, just use this way of
measuring progress for every team, ...), productive teams seem to negotiate
this stuff organically, and with common sense.

Even teams that nominally used Agile but still managed to be productive did so
precisely by only adhering to Agile as a ritual to appease managers, and
outside of what managers wanted to see, the teams just eschewed Agile and
found ways to be productive _in spite of_ it. That may not be true everywhere,
but it informs my experience in three different Agile organizations of varying
sizes and ages.

~~~
bluejekyll
> saying "when implemented correctly" that Agile, seemingly by definition,
> cannot fail

Poor choice of words on my part; you are correct. And I actually agree with
what you're saying in the rest of that response. Allowing scrum teams to
decide how to work together is huge, and, as you point out, has real benefits.
Rigid, inflexible systems are horrible.

One thing that I know puts me generally in conflict with most Agile purists is
that a sprint should only have a time-estimate component, not a firm end date.
Shoot for two weeks, but if it takes three, so be it.

~~~
p4wnc6
Sometimes things need to be on a different order of magnitude than a few
weeks. Sometimes a single sprint should be months, especially when the
subtasks involved require novel research and pilots / prototypes (which
happens way more often, I find, than typical "Agile thinkers" believe is true
of business projects). You just cannot know what will be hard, what reasonable
time estimates are, or how to divide up the work until you have been trying
things for a significant amount of time and hitting the unanticipatable
failure cases.

Agile is great in small, well-controlled cases where the path to a solution is
easily seen by everyone to consist of commoditized, been-done-a-million-times-
before tasks. But the issue is that once you use the Agile hammer, now
everything looks like a we-must-solve-it-in-only-the-been-done-a-million-
times-before-and-fits-into-sprints nail.

The other big thing is that so, so few Agile implementations are remotely like
the Agile ideal that we should probably stop pretending there even is an Agile
ideal. If everyone using a given tool consistently messes it up in the same
ways, we should just stop trying to defend the tool and admit it's the tool's
fault for being intrinsically easy to misuse/abuse, and seek to invent better
tools that make it very hard to misuse/abuse them.

~~~
bluejekyll
I don't think I can agree that sprints should be months long. You build
discovery into the project at different points, aka spikes. It should be
possible to come up with tasks that can be accomplished in roughly two weeks.

If you break it down, you probably already do this: 1) get the build for the
project going, 2) get the initial config/entrypoint for the system up, 3)
implement stubs for the API, etc. You can break it down, and each of those is
a constrained set of work. It might be part of a very large project that can't
"ship" at the end of two weeks, but you can verify different components along
the way. It's also a helpful exercise in that it allows multiple people to
collaborate on different portions of the problem.

~~~
p4wnc6
The trouble with this is that it assumes very particular kinds of projects.
For example, I once worked on a user-facing mobile and web application that
did forecasting and prediction to dynamically steer students through online
college course materials.

The portions of the app that were very "customer facing" were exactly as you
describe -- things like the user interface, APIs for educators who made use of
the tool, etc.

But the statistical modeling and "science" code at the core of it was
completely different. You could not propose a series of step-wise changes that
could be pre-planned over short-term horizons. You might propose something and
after 12 days of working you might discover it was _mathematically_ not going
to work, regardless of engineering effort, and you could not have foreseen
this (or at least our experienced team of stats people could not).

If you say something like "solve this hard cryptography problem" or "find a
way to forecast X with Y% accuracy" or "determine an architecture that will
boost performance by Z%" -- these questions are hilariously ill-suited to
someone just pre-specifying a set of steps for a 2-week period. There's
nothing magical about two weeks that implies you can _always_ make meaningful
progress on _every_ problem in that time frame. Sometimes you can't, and the
extra constraints of pretending you can, and structuring work as if you can,
act as significant hindrances to the real process of discovery underneath.

I agree this is _somewhat_ rarer than the types of work that do fit neatly
into 2-week chunks. But a lot of Agile evangelists want to politically recast
the engineering efforts as if it's _always_ that way, and that _any_ request
for more of a research approach to a problem must just be coming from whiny
engineers who don't want to deal with real-world deadlines. And, predictably,
this leads people to try to solve totally non-Agile-suited problems in rigid
Agile-only ways, which ends up bad for everyone.

~~~
bluejekyll
> You might propose something and after 12 days of working you might discover
> it was mathematically not going to work

This is perfect though! In two weeks, you discovered the wrong thing. That's
as valuable, IMO, as finding the right thing. The goal I find in my work is to
try to fail fast... in other words, to discover as quickly as possible whether
or not a particular solution is feasible.

For truly complex issues, I have found that it often can take upwards of three
failures before you discover the 'right' way to solve complex problems. You
might have worked in an environment that didn't allow for failure and/or
changing of dates. That definitely sucks.

~~~
p4wnc6
> This is perfect though! In two weeks, you discovered the wrong thing.

Maybe I didn't describe it very well. Twelve days was perhaps a poor choice
because it is conveniently close to a sprint cycle, but in my comment it was
meant to function as a random placeholder.

This type of work happens like Poisson shocks, or like lightning strikes. You
work and work and it seems like you don't get a single thing done, zero story
points completed, etc., and then boom, in one fell swoop you figure out a huge
advancement in the problem. That's the defining aspect of the work I am
talking about. The burndown graphs will look like (and _should_ look like) a
flat line that suddenly drops to zero near the end.
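
As a toy sketch of that "lightning strike" dynamic (the 2% daily probability
and ten-day sprints are invented purely for illustration):

```python
import random

def sprints_until_breakthrough(p_per_day=0.02, sprint_days=10, seed=None):
    """Each working day has a small independent chance of the single
    insight that finishes the task; return the sprint it lands in."""
    rng = random.Random(seed)
    day = 0
    while rng.random() >= p_per_day:
        day += 1
    return day // sprint_days + 1

# The expected wait is about 1/p = 50 working days: several two-week
# sprints report "zero story points," then one sprint suddenly closes
# everything at once -- the flat-then-cliff burndown.
counts = [sprints_until_breakthrough(seed=s) for s in range(1000)]
print(sum(counts) / len(counts))
```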

It has nothing to do with "failing fast" because it's not "failing." It is the
process of discovering what the actual problem is. This is very different from
a situation where you, say, try out an MVC architecture but later realize it's
not optimal for a business use case and need to change it to MVVM or
something. There, you can _always_ make measured progress on the first
attempt, and if it doesn't work, that constitutes valuable failure-case
knowledge.

But with more abstract questions it's more like you are saying this: we tried
to solve it, and we are not even sure whether or not we've made any progress
yet, and we won't be sure until right at the moment we've either solved the
problem completely or demonstrated why our whole approach cannot be used to
solve it.

That's perhaps a very idealized version; most problems don't fall that far on
the research spectrum. But the point is that many problems still do fall
_pretty far_ on that spectrum, and if you set up your approach to them based
on doing Step A then doing Step B then reporting about metric C, etc., it
actually _hinders_ your ability to solve it, since the very nature of the
problem is not amenable to that sort of thing.

~~~
bluejekyll
At this point we should probably agree to disagree on some of these finer
points.

But I do think that you are tending to blame Agile for what really sounds like
poor management. Management wants to see what progress is being made, so they
use burndown graphs. Development, as you point out, goes through spurts,
except on the most basic tasks, so to management it can look like no progress
is being made.

What I'm trying to point out is that dividing the problems into as small a set
of isolated components as possible may help get to a solution. But that
doesn't mean you're wrong: it might take months to find that solution, and the
problem is that you need to explain to someone over that time what you're
doing, otherwise the money and the project will be cut. This is where it can
be good to try things out and show what you've learned along the way; that's
important for managing up.

~~~
p4wnc6
I agree with this, except that I think in the presence of Agile what you are
calling "poor management" is the norm, largely _because of_ Agile and the way
it structures thought and planning. I think it's very important to make that
distinction, so that people stop defending Agile as if it wasn't to blame.
When a tool is politically subverted by management and results in this problem
over and over, almost everywhere it is used, it's time to just admit it's
mostly _the tool's fault_ for not having codified ways of avoiding that
management subversion.

Basically, what I'm looking for in an engineering process is one that raises
engineering concerns above manager and executive political concerns. In any
organization you always have a struggle over who gets to use the phrase
"business priority" -- do engineers get to say that the engineering work is
the priority, or do managers get to say that political circumstances are the
priority? Agile makes it _easier_ for managers to promote political concerns
_while at the same time acting and talking as if they promote engineering
concerns more_. I'm perfectly happy being shown real business evidence about
the cases when the engineering concerns are not primary -- but Agile offers
ways for managers to _never_ provide such evidence, to keep people busy
generating more and more progress metrics that give them more political
surface area with which to manipulate.

I guess you could say this is just "poor management" but then I think we'd
have to agree that 99.9%+ of management is "poor management" because this all
repeats itself constantly in organizations of all ages, sizes, and stripes.
And Agile is a hallmark indicator of such a dysfunctional engineering culture.

~~~
bluejekyll
Politics... is there anything that prevents that other than working at a
company that is too small for them to exist?

For me, Agile is the least bad solution; it's definitely not ideal, but it's a
million times better than Waterfall.

> agree that 99.9%+ of management is "poor management"

I'm not sure about the percentage, but good managers are rare.

