Last year, upon Iteration 27 of this same subject, I threw something like this together in jest:
Quality of HN Comments Over Time
| . .
| . .
q| . . . .
u| . . . . . .
a| . . . . .
l| . . . . .
i| . . . . .
t| . . . you are here -->. .
y| (that's all)
M J J A S O N D J F M A M J J A S O N D J F M A M J J A
I must have been on to something, because so many didn't realize it was a joke. What fun that was. All I have to do is shift the x-axis every n months: some things never change.
Wouldn't it be nice if we could actually keep track of the quality by some measure and thus validate your model? Things said in jest are often true, and I think you actually have a valid model.
It would be interesting to have some live feedback on the current quality level, and to see whether that affects the community's response, but this would require some sort of metric for quality, which is quite subjective.
I believe a while ago there was a submission about parsing text to guess whether the author was male or female, and I wonder if there is a similar algorithm to check whether a comment is positive or negative, technical or shallow, and thus create some objective measure of quality.
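To make the idea concrete, here's a toy sketch of such a scorer. Everything here is invented for illustration: the word lists, the weights, and the function name are all assumptions, and a real system would use a trained classifier rather than hand-picked lexicons.

```python
# Toy comment scorer: a lexicon-based sketch, not a real classifier.
# All word sets below are made up for demonstration purposes.
POSITIVE = {"great", "insightful", "interesting", "thanks", "helpful"}
NEGATIVE = {"stupid", "wrong", "boring", "spam", "troll"}
TECHNICAL = {"algorithm", "compiler", "latency", "database", "protocol"}

def score_comment(text):
    """Return (sentiment, technicality).

    sentiment is in [-1, 1]: +1 all positive words, -1 all negative.
    technicality is in [0, 1]: fraction of words from the technical list.
    """
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    tech = sum(w in TECHNICAL for w in words)
    sentiment = (pos - neg) / max(pos + neg, 1)
    technicality = tech / len(words)
    return sentiment, technicality
```

Aggregating such scores over all comments per month would give the x-axis of the joke chart an actual y-value, for whatever a bag-of-words measure is worth.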