
Why Self-Track? The Possibility of Hard-to-Explain Change - harscoat
http://quantifiedself.com/2012/08/why-self-track-the-possibility-of-hard-to-explain-change
======
doktrin
I have to admit I cringed at some of the examples provided, as well as the
associated causal evidence.

1\. Eating Fruit for Breakfast => Reduced Sleep Quality [in fact: "... _I
figured out that any breakfast, if eaten early, disturbed my sleep_ "].

2\. Removal of Mercury Fillings => Improved Brain Function. The evidence was
surprisingly lacking:

"The evidence for causality — removal of mercury amalgam fillings improved my
arithmetic score — rests on three things: 1. Four other explanations made
incorrect predictions. 2. The improvement, which lasted months, started within
a few days of the removal. Long-term improvements (not due to practice) are
rare — this is the only one I’ve noticed. 3. Mercury is known to harm neural
function (“mad as a hatter”). As far as I’m concerned, that’s plenty."

I am all for self-observation, but I can't help but think that some of the
causal links the OP has established may not be particularly well founded.

~~~
beagle3
If this were the only writeup Roberts had ever produced, the cringing would
be well deserved. However, he has a long history of correctly quantifying
changes in himself and identifying their sources.

Specifically, his (on its face, ridiculous) Shangri-La diet works well for
about 80% of the people who try it -- probably a better rate than any other
diet (though it should be noted that the remaining ~20% report it does not
work at all).

Also, his flax seed, butter and vitamin D results have been replicated
consistently by many (though again, not all). You can wait 15 years until
these things get verified/shown wrong, or just try them yourself.

And about "well founded" -- it's easy to dismiss some guy on the internet
(regardless of his track record and academic qualifications; this particular
writeup was not done in the context of those qualifications). But you should
be aware of your own confirmation bias -- e.g., ask yourself whether you ever
considered any of the examples I give here
<http://news.ycombinator.com/item?id=4427910> to be well founded.

------
rubidium
"Professional scientists almost never use this method."

That's because of confirmation bias. All of his examples seem to be:

1) Experience a 1 to 2-sigma variation in normal performance.

2) Look for something different in what you did that day.

3) Keep doing that, and voila, a minor improvement.

You'll find yourself doing some pretty goofy things if you follow his
suggestion too regularly.
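The pattern described above is regression to the mean, and it can be
demonstrated with pure noise: pick the "bad days" out of a series with no
intervention at all, and the following day looks like an improvement
regardless of what you changed. A minimal sketch, with all numbers invented
for illustration:

```python
import random
import statistics

random.seed(1)
# Daily scores as pure noise: mean 100, sd 10, nothing ever changes.
days = [random.gauss(100, 10) for _ in range(10000)]

# Select every "bad day" (more than 1 sigma below the mean) and look
# at the score on the following day.
next_after_bad = [days[i + 1] for i in range(len(days) - 1) if days[i] < 90]

# The day after a bad day sits right back near the long-run mean, so
# any remedy tried on the bad day appears to "work".
print(round(statistics.mean(next_after_bad), 1))
```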

Edit: misspelling/layout.

~~~
npsimons
_You'll find yourself doing some pretty goofy things if you follow his
suggestion too regularly._

Isn't that what science is? It may be biased, and it's very hard to isolate
variables -- by the very nature of self-experimentation it's very difficult
to run double-blind tests. But at least some people are trying to live the
examined life and look for causes, rather than blaming everything on genetics
or luck.

~~~
kennywinker
He gets to the "hypothesis forming" stage, and draws a conclusion. That's
fine if you're trying to figure out how best to live your life. The negative
consequences of removing his mercury fillings are probably nil, so he can
accept his hypothesis as true and move on. But this is _bad science_, without
question. We don't know that his mental function improved because he removed
the fillings... it's a potential hypothesis, sure, but a headline like
"Mercury Damage Revealed by Brain Test" seems a bit bold for a sample size of
one, with no control and a bunch of other confounding variables at play.

~~~
npsimons
Yes, he does seem to pull out the "jump to conclusions" mat a bit much. That
being said, I can't help but think this would only justify collecting more
data: log more variables, have someone review your methodology, etc. I've
always wondered, how do you test your own intelligence? If you're writing and
scoring the test, doesn't that invalidate the scores you get? Ditto for
taking the same test more than once. Maybe I'm just not smart or creative
enough to think of ways around this (is that a test? :)

~~~
beagle3
> I've always wondered, how do you test your own intelligence?

He is not measuring intelligence; he's measuring reaction times (how long it
takes him to solve a very simple arithmetic problem, or to spot which letter
in a four-letter string appears twice) and fine balance/motor function (how
long he can keep his balance on an unstable platform he has built).

He went over these things in his blog in great detail over the years. He keeps
trying new tests, keeps using those that are statistically stable in a
"stable" environment (without changing nutrition / sleep / etc, and after a
learning period) - and after he has a statistically significant baseline, he
measures how e.g. eating more butter or flax oil changes the results of these
tests.
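For what it's worth, that baseline-then-intervention scheme maps onto a
simple two-sample comparison. A minimal sketch of the idea -- the means,
spread, and sample sizes below are invented for illustration, not Roberts's
actual data:

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (
        (va / len(a) + vb / len(b)) ** 0.5
    )

random.seed(42)
# Hypothetical reaction times (ms): a noisy baseline period, then an
# intervention period whose true mean is 30 ms faster.
baseline = [random.gauss(600, 40) for _ in range(60)]
intervention = [random.gauss(570, 40) for _ in range(60)]

# A |t| well above ~2 marks the shift as unlikely to be baseline noise.
print(round(welch_t(baseline, intervention), 1))
```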

Yes, this is drawing conclusions from n=1, but for himself, these conclusions
are quite scientific -- and we humans are not so unique that they can't have
wider applicability.

No, you cannot deduce from n=1 to the entire human race. Yes, his conclusions
are plausible and most of them are verified by others. Read e.g. gwern.net on
vitamin D.

~~~
drostie
I'm sorry, but science is neither "we measure the things" nor is it "we look
for the cause." Yes, those are things which scientists do, but no, that is not
the essence of science.

There is no attempt being made to come to any wholesome understanding. That
is, a scientist is trying to model nature, and this consists in two parts: (a)
develop a model, (b) test it against nature. Seth Roberts has perhaps hit upon
a way to develop models, but he doesn't seem to test them. That is, _cum hoc,
ergo propter hoc_ -- "with this, therefore because of this" -- is not
actually a _test_ for causation, but merely a _guess_ at causation.

To do a proper test, you need _variable isolation_ , and the _stability_ of
test results is no guarantee that it is an isolated variable. A good example
of an isolated variable is a simple light switch. A terrible example of an
isolated variable would be your time of waking up, because that time cannot be
isolated from your own thoughts and beliefs -- that is, many people,
especially if they're not sleep-deprived, can wake up earlier simply by
telling themselves "I'm going to wake up early tomorrow." (I myself usually
wake up before my alarm clock goes off.)

Humans in general are not light switches. As has been documented repeatedly in
medicine, dual effects of placebo and hypochondria plague us; things we expect
to be medicinal often placate us even when they have no medicinal content and
you can suddenly feel the symptoms of a malady soon after reading about it on
Wikipedia.

So even if you want to say "for himself, these conclusions are very
scientific," you're going to have to account for the problem that he is
probably going to confirm whatever he already expects. That is, any "follow
up" tests after the first guess are already tainted by the fact that _Seth
knows what's being tested_.

~~~
beagle3
> There is no attempt being made to come to any wholesome understanding

Do you actually follow Roberts, or is your understanding based on this one
short summary? Because, e.g., "What Makes Food Fattening" (77-page pdf here
[http://media.sethroberts.net/about/whatmakesfoodfattening.pd...](http://media.sethroberts.net/about/whatmakesfoodfattening.pdf)
) is very much an attempt to come to a wholesome understanding: develop a
model, test it against nature. (The test is "putting it out in the wild", and
the result is "works beautifully for 80% of people who try it, not at all for
the remaining 20%". He's not funded to test this, nor is he making any money
off it -- I don't think there's a better route for him to take.)

> To do a proper test, you need variable isolation, and the stability of test
> results is no guarantee that it is an isolated variable

Yep. Except, in the real world, NO published result related to nutrition, and
almost no published result regarding medicine, is a proper test with isolated
variables. Including almost every guideline your doctor works by.

> that is, many people, especially if they're not sleep-deprived, can wake up
> earlier simply by telling themselves "I'm going to wake up early tomorrow."
> (I myself usually wake up before my alarm clock goes off.)

That is true. And yet, a lot of people want diets to work, and they don't. A
lot of insomniacs want a placebo to work, and it doesn't. Placebo is powerful,
but is not all-powerful. It is stronger with some people, weaker with others.

Discarding results just because they were not the result of a double-blind
placebo-controlled test is not rational. Neither is accepting results just
because the author thinks that they are double-blind placebo-controlled:

e.g., almost all placebos today are sugar pills. If the tested-against
material has a side effect, such as causing flushing or a dry tongue, but
does not produce the wanted outcome (which is very often the case), then the
trial is not in fact double-blind: the patient knows that they did not get a
placebo, and the whole test is useless. Yet this is the gold standard.

Furthermore, if you read the NEJM / BMJ / Nature / Science medical papers,
you'd notice that their results are tested on (and are therefore only valid
for) a small ethnic group, a small age group, etc. But then, very
unscientifically, the results are assumed (by others -- usually not by the
authors of said paper) to apply much more widely.

> That is, any "follow up" tests after the first guess are already tainted by
> the fact that Seth knows what's being tested.

Yes, but that does not make the results any less true or useful: butter makes
Seth faster at arithmetic, in a consistent and statistically significant way.
That may not be true for others. And it may be entirely a placebo effect.
Nevertheless, if Seth wants to be faster at arithmetic, he can do that by
eating butter, regardless of the cause. Is that not science?

~~~
drostie
(1) Indeed, I am commenting on _this particular mode of understanding_ , where
one keeps a journal, waits for a deviation of a test from normal, and then
tries to correlate the journal with the deviations after the fact.

(2) "NO published result related to nutrition" is overbroad. Obesity for
example is clearly related to nutrition and there are many published results
where many variables which could affect obesity are quite well-isolated --
twin studies to isolate genetics, studies of how obesity rates vary based on
physical location in various cities, and so forth.

(3) I aim to "discard the results" only insofar as they claim that they have
measured an _agentic relationship_ , which these results have indeed claimed
to measure. Seth's official conclusion is that this was "data that suggested
butter improved my mental function." If the variable is not isolated, then
attributing the agency to butter is worthless. It might have been an otherwise
random variation at around the same time as he switched to butter -- the data
he's describing has a 40ms variance as well as certain long-term patterns and
the effect that he's describing is a 30ms improvement, so it's quite possible
that in an autoregressive model you would just see one substantial "step down"
of 80ms which doesn't get compensated due to hysteresis. Or perhaps the
correlation is indeed correct but wrongly attributed -- perhaps Seth has not
noticed but he tends to use butter to fry peppers and pork fat to fry meats,
and now that he is using butter his diet contains 2% more vegetables. Perhaps
now that he uses more butter, he eats more toast and gets 2% more fiber in his
diet, without having written _that_ down in his journal.
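That worry is easy to quantify: with noise on the order cited above and no
real change at all, scanning a short series for its most favorable
before/after break point often produces an apparent step as large as the
reported effect. A rough simulation sketch (the series length and all numbers
here are illustrative assumptions):

```python
import random
import statistics

def best_split_gap(series):
    """Largest |mean(before) - mean(after)| over all split points."""
    return max(
        abs(statistics.mean(series[:k]) - statistics.mean(series[k:]))
        for k in range(3, len(series) - 3)
    )

random.seed(0)
# 2000 runs of 20 sessions of pure noise (sd 40 ms) around a constant
# 600 ms mean -- no intervention anywhere.
trials = 2000
hits = sum(
    best_split_gap([random.gauss(600, 40) for _ in range(20)]) >= 30
    for _ in range(trials)
)

# A sizable fraction of these no-change runs still shows an apparent
# step of 30 ms or more at some break point.
print(f"{hits / trials:.0%}")
```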

The problem is precisely the word "makes" in your sentence "butter makes Seth
faster at arithmetic." You have absolutely no evidence that it's the butter
which is making Seth faster at it. And that is why the conclusion "butter
makes Seth faster at arithmetic" is not a scientific conclusion.

~~~
beagle3
> Obesity for example is clearly related to nutrition and there are many
> published results where many variables which could affect obesity are quite
> well-isolated -- twin studies to isolate genetics, studies of how obesity
> rates vary based on physical location in various cities, and so forth.

Care to show me, for example, a single nutrition study where "skin color of
participant" was a variable controlled for? It is known that the metabolism
of cholesterol (eventually) into vitamin D depends greatly on skin color --
and yet it does not appear in a single nutrition study, despite the
well-known significance of both serum cholesterol and serum vitamin D.

I'm taking this ad absurdum in an attempt to show you that if you apply this
standard rigorously (as one should), then no result really holds up.

And by the way, "obesity is clearly related to nutrition" is only true in the
sense that "everything is related to nutrition". There are no results of
things that affect obesity that are "quite well isolated" in humans that I'm
aware of.

> Or perhaps the correlation is indeed correct but wrongly attributed --
> perhaps Seth has not noticed but he tends to use butter to fry peppers and
> pork fat to fry meats, and now that he is using butter his diet contains 2%
> more vegetables ...

If you actually followed him, you'd know that's not the case. He is very
diligent about isolating variables as much as possible, and about thoroughly
documenting his day, including food intake, how much TV he watched, etc. --
and the butter hypothesis was actually an attempt to strengthen a hypothesis
about animal-derived fats (which, among other things, included bacon). It
wasn't just observational -- it was part of a variable-isolation experiment
that (within limits) was comparably rigorous to any other non-blinded test
(which are far more common than you'd think; in fact, many if not most
supposedly double-blinded tests aren't).

> You have absolutely no evidence that it's the butter which is making Seth
> faster at it.

He has much more evidence for it than you'd expect from this short posting,
if you care to look. This conclusion is indeed supported by data, and is
scientifically valid.

~~~
drostie
> Care to show me, for example, a single nutrition study where "skin color of
> participant" was a variable controlled for?

<http://www.ncbi.nlm.nih.gov/pubmed/19656435> .

> I'm taking this ad absurdum in an attempt to show you that if you apply this
> standard rigorously (as one should), then no result really holds up.

Then you should be willing to do what one does when a reductio ad absurdum
fails: give up and admit you are wrong.

> And by the way, "obesity is clearly related to nutrition" is only true in
> the sense that "everything is related to nutrition".

Uh, no. Obesity is a _nutritional disorder_ , as distinct from other things
like having cats, which are neither caused nor hampered by good nutrition.
What the hell are you smoking?

> He has much more evidence for than you'd expect from this short posting, if
> you care to look at it. This conclusion is indeed supported by data, and is
> scientifically valid.

I searched his website and all I could find was one particular crappy-looking
graph with a bunch of discussion about his responses to vague questions, and
an ad-hoc explanation (a competing omega-3 deficiency) when the data did not
fit his expected pattern.

~~~
beagle3
> <http://www.ncbi.nlm.nih.gov/pubmed/19656435>

Thanks. For some reason, I was unable to find this when I last looked (might
have been before this came out).

> Uh, no. Obesity is a nutritional disorder, as distinct from other things
> like having cats, which are neither caused nor hampered by good nutrition.
> What the hell are you smoking?

<http://www.medicalnewstoday.com/articles/249289.php>

Are antibiotics with no nutritional value considered nutrition these days?
There are hundreds of substances that seem to cause obesity independent of
nutrition (that is, when _isolated as a variable_ compared to nutrition), most
prominently psychiatric drugs, but -- as is shown by this study -- also
antibiotics, which are supposed to be "nutritionally inert".

Also, this was not shown in Humans and might not apply, but
[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjo...](http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001212)
shows that obesity and type-2 like diabetes can result from gut flora. Are gut
flora considered nutrition these days?

> I searched his website and all I could find was one particular crappy-
> looking graph with a bunch of discussion about his responses to vague
> questions, and an ad-hoc explanation (a competing omega-3 deficiency) when
> the data did not fit his expected pattern.

He has not yet bothered summarizing these results (the way he did for his
weight-loss theories, in
[http://media.sethroberts.net/about/whatmakesfoodfattening.pd...](http://media.sethroberts.net/about/whatmakesfoodfattening.pdf)
-- but there's a lot more discussion of it through the years); I suspect that
is because he does not believe he has nailed anything down yet. However, he asks
people to "try at home", and several of his results (mostly regarding dietary
fat, but also about mood and sleep) have been confirmed by other people as
well.

He's under no "publish or perish" pressure like the authors of most papers
you see, so he isn't trying to produce anything publication-quality. I've
been following him for 7 years now, and while his work isn't blinded or
double-blinded, it's otherwise as good as (and usually better than) the
observational studies I find in top-rated journals.

------
npsimons
Most serious athletes already know about this (in fact, on that same site:
[http://quantifiedself.com/2011/03/personal-data-visualizatio...](http://quantifiedself.com/2011/03/personal-data-visualization/)).
Keeping a record of your resting heart rate (plus blood pressure,
temperature, etc.) every morning will tell you many things. For example, I
managed to get my resting heart rate down to the mid 40s when I was training
regularly (and heavily). Now that I'm more sedentary, my resting HR is in the
mid 50s to low 60s, and I definitely don't feel as good or post times as
good. Spikes during training were also usually indicative of overtraining.
"The Hacker's Diet" also mentions that if you weigh yourself every day but
pay attention to the moving average, the process is much more encouraging.
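For reference, the trend line in "The Hacker's Diet" is an exponentially
weighted moving average: each day the trend moves a fixed fraction (the book
uses 10%) of the way toward that day's weigh-in, which damps day-to-day
water-weight noise. A minimal sketch with made-up readings:

```python
def trend(weights, smoothing=0.1):
    """Exponentially smoothed trend of daily weigh-ins."""
    t = weights[0]
    series = [t]
    for w in weights[1:]:
        t += smoothing * (w - t)  # move 10% of the way to today's reading
        series.append(round(t, 2))
    return series

# Hypothetical daily weigh-ins: noisy readings around a slow decline.
readings = [180.0, 181.2, 179.1, 180.4, 178.6, 179.8, 177.9]
print(trend(readings))
```

The smoothed series swings far less than the raw readings, which is exactly
why the book finds it more encouraging to watch.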

------
cunninghamd
But HOW do you track all that data? Today, with a smartphone, it seems easier,
but even then I’d like something akin to a master spreadsheet that I can store
it all in.

Seth: what format are you tracking all that data in?

Of course, now that I type that, I’m worried the answer is “pen and paper.”

~~~
mahyarm
One of my someday projects is to create a generic self-tracking app that
presents a dashboard for quick-entry tracking and visualization and syncs
with a website. It would also export to a spreadsheet and let you visualize
the data with graphs, with a bunch of presets for special graph types, like
blood pressure. You could also specify zones for numbers, so a danger-zone
glucose level could show 'red', for example.

Managing spreadsheets on a smartphone is very kludgey.

~~~
raelshark
That's basically a project I'm actively developing now - generic self-tracking
(manual and via sensors and imported from other platforms), with presets and
intuitive visualizations. This is a big gap in most tracking tools that are
out there - letting you track anything you want, while also making sense of
all the information in a useful way.

My motivation is to create something that's effective for sufferers of obscure
chronic illnesses, since most tracking tools out there focus on major well-
known illnesses, or simply aren't flexible enough to track weird symptoms that
are so obscure that the developers have never heard of them. But the side-
product is that the design will therefore be flexible enough to be useful for
anyone who wants to track any element of their health.

~~~
sinak
Sounds really interesting -- would you mind sending me an email? I'm
organizing the SF QS group and it'd be great to have you come and demo if
you're local; even if not, we're looking for a tool to help us perform group
QS tracking. My email is sina dot khanifar at gmail.

------
asher_
I love the QS movement, but I sometimes worry for it. It is young, and there
is a wide variety of methodologies being used to draw conclusions; some are
solid, others are far from it.

This can be expected from a group of people that have no requirements for
joining, and this is a strength as well as a weakness. I have noticed that QS
has attracted some "alternative [to] medicine" types who seem to be happy to
have the validation, as well as some seriously interesting projects from hard
core science types.

I am hoping that as this movement evolves, it will mature in the right way.
Having amateurs participating is incredibly worthwhile, but guidance and best
practices will hopefully develop in the community so that when people want to
draw conclusions about causality, they can do it legitimately.

~~~
freshhawk
I feel the same as you, but the movement is evolving in the alt-
med/pseudoscience direction really quickly. Amateurs are collecting tons of
cool data (amazing!) and then doing their amateur analysis (Ancient Aliens
level depressing!).

Without a very strong internal pressure to get their shit together and act
like smart people who understand that some problems require expertise, the
movement is screwed.

------
mthoms
Are we sure this isn't trolling? I mean _come on_ :

"Sleep and breakfast. I changed my breakfast from oatmeal to fruit because a
student told me he had lost weight eating foods with high water content (such
as fruit). I did not lose weight but my sleep suddenly got worse. I started
waking up early every morning instead of half the time. From this I figured
out that any breakfast, if eaten early, disturbed my sleep."

~~~
beagle3
It isn't trolling. It's a short summary piece. If you are into these things,
his blog and semi-formal papers ("what makes food fattening" and others) are a
gold mine.

------
A1kmm
This is a method for hypothesis generation, but is not a valid way to draw
inferences.

The reason why scientists don't draw conclusions from single data points
(anecdotes) is that you can come up with flawed conclusions as a result: the
changes could be purely random, or due to some other explanation that wasn't
thought of.

In order of reliability:

1\. Do an experiment where an intervention is assigned at random, and scored
by a method that is blind to the intervention (preferably on subjects who are
also blind to the intervention -- though that is hard in some cases).

2\. Observe historical data where there have been many instances of each of
the variables under study, avoiding correlation between variables and things
like time.

3\. Use anecdotes.

