Interesting to see this posted, as I'm one of the authors. The citation for the eventually published paper is at [0].
My coauthors are the experts on beef markets. They asked me to join for the econometrics. Possibly of interest to programmers is that this paper got me started using the D programming language. I needed something that was fast and easy to interoperate with R. Been using it since.
[0] Pozo, Veronica F., Lance J. Bachmeier, and Ted C. Schroeder. "Are there price asymmetries in the US beef market?" Journal of Commodity Markets 21 (2021): 100127.
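For the programmers: here's a minimal sketch of the kind of D/R interop I mean. This is not code from the paper; the file names and build flags are made up for the example. R's .C foreign-function interface passes everything as pointers, which maps directly onto an extern(C) D function.

    // ols_slope.d -- illustrative only, not code from the paper.
    // Build as a shared library, e.g.:
    //   dmd -shared -fPIC -of=ols_slope.so ols_slope.d
    // Anything that touches the GC would also need the D runtime
    // initialized (rt_init/rt_term); this function doesn't.
    extern(C) void ols_slope(double* x, double* y, int* n, double* slope)
    {
        // OLS slope of y on x, regression through the origin
        double sxy = 0, sxx = 0;
        foreach (i; 0 .. *n)
        {
            sxy += x[i] * y[i];
            sxx += x[i] * x[i];
        }
        *slope = sxy / sxx;
    }
    // From R:
    //   dyn.load("ols_slope.so")
    //   .C("ols_slope", as.double(x), as.double(y),
    //      as.integer(length(x)), slope = double(1))$slope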
That's a reasonable summary of the motivation behind the literature. If we have this kind of asymmetry, it's because consumers are getting ripped off, is how the argument usually goes.
I think it's important to recognize that if consumers are temporarily buying lower quality beef, they're still hurt (but probably not enough that anyone cares).
So, basically, there are price asymmetries (BLS sticker price data), but consumers adjust the mix of cuts of beef they buy in favor of cheaper cuts (scanner data), so the overall effect is to produce something that looks like symmetry?
I asked my coauthors this question when they asked me to join. Consumer beef purchases are only one of the markets. If there's a reduction in the amount of beef on the market, that might be absorbed at the high end by consumers, so that others (particularly restaurants) continue to get the good cuts. Consumers substitute into product that would otherwise be used for something like hot dogs.
That's a fascinating and complex system. That suggests that restaurant consumption is relatively fixed as well but quality of cut is flexible at the lower margin. I wonder if there's any way to analyze the whole picture. I suppose that would require insight into the total sales of processors, which may or may not be proprietary.
I also wonder if a quality-adjusted analysis of consumer consumption would show a different price sensitivity to shocks for high-end cuts in the market.
I've been buying whole and half cows for years. It allows me to know the origin of my food, and get to know the ranchers and their ethics. Being cheap is almost secondary.
How long does it take you to consume a half cow? I can't fathom buying that much beef at one time. Aside from storage, it would take us years to eat it all, even if we switched all our animal protein consumption to beef.
My friend's family did this when they had growing boys. They might have split with another family, down to 1/4 of a cow, and it still took them the better part of a year to get through it.
I think half a cow is like ~250 lbs of beef, so with an 8 oz serving size per person that's 125 meals for a family of four, so maybe a year eating beef every third dinner?
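Back-of-envelope check of that, with the obvious caveat that every input is a rough guess:

    // all inputs are the rough numbers from above, none of them exact
    void main()
    {
        import std.stdio : writeln;
        double totalOz = 250.0 * 16;        // ~250 lb of beef -> 4000 oz
        double mealOz  = 8.0 * 4;           // 8 oz x 4 people = 32 oz per meal
        double meals   = totalOz / mealOz;  // 125 meals
        double years   = meals * 3 / 365;   // every third dinner -> ~1.03 years
        writeln(meals, " meals, ~", years, " years");
    }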
Form a small, informal coop with friends and family.
My grandparents were part of what they called the meat circle. It was a bunch of farmers that took turns slaughtering a cow of their own to share with the group so they didn't have too much meat to deal with at home.
My in-laws did this (I think they got 1/4), but they ended up with a bunch of cuts they weren't used to or didn't like, and ended up throwing away a significant amount.
Yeah, I think a lot of first-time buyers forget how few of the premium cuts are on one cow. With a 1/4 purchase you're gonna get half a tenderloin roast, half a rib roast, etc.
It can vary wildly depending on numerous factors, but you can get a 1/4 cow online right now for $900; if you find a local supplier it can be less. I see prices of $5 a pound + $60 butchering fee.
Note that you get all the various types of meat, not just one type.
That's maybe typical of a butcher? After the 'named' cuts are taken, there's plenty of meat left close to the bone. Trim that off, grind it up, maybe it's 1/3 to 1/2 of the total.
Yeah, ground beef and sausages often play a significant role, and I'm not really sure I'd recommend it to most people unless they have other reasons to do it. It's not some top-secret method to save big money.
>> Beef packers have flexibility to store beef in their coolers when short run wholesale prices decline and as such they can buffer wholesale beef price changes through adjustments to beef inventories.
>> The inventory flexibility beef packers have is a probable explanation
Rather than conjecture, why couldn't they just phone up each packer and ask if that's what's happening? Why do we need pontification in this case when we could just go look?
The language in the paper just makes me think the whole thing should be ignored. E.g. there are sentences like:
>> This indicates that farm prices generally respond similarly to downstream market price increases and decreases
Which I can't help but read as "indicates" but doesn't prove, "generally" but not always, "similarly" but with differences.
> Rather than conjecture, why couldn't they just phone up each packer and ask if that's what's happening? Why do we need pontification in this case when we could just go look?
We know meat packers do this. The question is whether this behavior explains price asymmetry. There are reasons why it might not (e.g., other market actors in the beef market, goods substitution, cost of demand smoothing, shelf life, etc.).
The details matter, and no sane packer is going to release the models/algorithms/decision criteria they use to determine when and how to release stored inventory.
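To give a flavor of what testing for this even looks like: the toy sketch below is not the paper's impulse-response-based test, just the classic trick of splitting price changes into rises and falls and letting a regression estimate a separate pass-through coefficient for each. Everything in it is simulated.

    // Toy asymmetric-pass-through illustration; all data is simulated.
    import std.math : fmax, fmin;
    import std.random : Random, uniform;
    import std.stdio : writefln;

    void main()
    {
        auto rng = Random(42);
        enum n = 5000;
        auto dw = new double[n]; // wholesale price changes
        auto dr = new double[n]; // retail price changes
        foreach (i; 0 .. n)
        {
            dw[i] = uniform(-1.0, 1.0, rng);
            // true process: rises pass through at 0.8, falls at 0.3, plus noise
            dr[i] = 0.8 * fmax(dw[i], 0.0) + 0.3 * fmin(dw[i], 0.0)
                    + 0.1 * uniform(-1.0, 1.0, rng);
        }
        // OLS of dr on (rises, falls) with no intercept; the two regressors
        // are never nonzero at the same time, so it splits into two projections
        double upNum = 0, upDen = 0, dnNum = 0, dnDen = 0;
        foreach (i; 0 .. n)
        {
            double up = fmax(dw[i], 0.0), dn = fmin(dw[i], 0.0);
            upNum += up * dr[i]; upDen += up * up;
            dnNum += dn * dr[i]; dnDen += dn * dn;
        }
        // a formal test would compare these two estimates statistically
        writefln("pass-through on rises: %.2f, on falls: %.2f",
                 upNum / upDen, dnNum / dnDen);
    }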
> The language in the paper just makes me think the whole thing should be ignored. E.g. there are sentences like:
I hate this.
It's researchers accurately reporting their findings. They aren't "pontificating". They are explaining the nuances of what they discovered when they tried to answer this question.
These days, you can't just ask a real question in plain English and then accurately report your findings. If it's not TED Talkified bullshit, you'll get pilloried by the commentariat for being mealy-mouthed. Our appetite for nuance is dead, and with it our appetite for truth.
Also, given the point you're trying to make, you picked a weird sentence to focus on when this sentence is in the intro:
> We do not reject the null hypothesis of symmetric responses to shocks at other points in the distribution chain when models are estimated using scanner retail price data.
Asking a question and finding that the answer is "yes, but not as bad as previously believed" is a contribution to our knowledge about the true state of the world.
>> Asking a question and finding that the answer is "yes, but not as bad as previously believed" is a contribution to our knowledge
Wait, that is not an accurate characterisation of the paper. "maybe yes, but maybe no" is more accurate:
Maybe yes:
- BLS data supports the proposition, but possible BLS data flaws are highlighted.
Maybe no:
- using other data instead of BLS allows a different interpretation
- perhaps asymmetric in favour of feedlot owners ("this effect is negligible, or cannot be captured by our impulse response-based test")
- perhaps symmetric ("Our preferred interpretation is simpler, with the relationship being symmetric").
>> It's researchers accurately reporting their findings. They aren't "pontificating". They are explaining the nuances of what they discovered when they tried to answer this question.
I'm not convinced that's what's going on here. Could it be that someone(s) has decided the existing literature leans too heavily in favour of a view they don't like? If the existing literature is broadly correct, one thing you could do is publish another paper which broadly agrees but leaves room for some doubt.
If it broadly agrees, it might be less likely to come under the kind of robust scrutiny that could lead to outright public rejection by others.
Introducing a small amount of doubt with the broad agreement could have 2 effects:
1. There now exists some doubt when summarising multiple papers on the topic; a contrarian view has been injected into the discourse. The contrarian view is not backed by hard facts, it's reasonable conjecture, but nonetheless, when the available papers on a topic are summarised, its view will be included, unfairly skewing the summary: a view built on conjecture ends up sitting next to views more directly evidenced in underlying data.
2. You can count on someone misrepresenting your findings. In our conversation you said "yes, but not as bad as previously believed", something the paper doesn't actually give hard evidence for but does try to suggest.
To be clear, I don't claim that this is what's going on here, but going back to my original comment, the lack of weight of evidence for these findings does lead me to dismiss the paper.
>> our appetite for truth
I can only speak for myself here but my comment - misguided or otherwise - was in search of truth.
> It's researchers accurately reporting their findings
In a low-confidence, hedged way. A "good" article has the balls to come in with a real question, and use statistics as tools to answer it. If the answer is messy or nuanced, so be it, but at least try.
Leading with the methods used and prior studies etc. etc. is the data equivalent of cowering behind cover.
Science and economics isn't about having big swinging balls, and thank god it's not.
It's about slow, incremental progress towards understanding. Yes, being low-confidence and hedged at first because that's truthfully accurate. Methods and prior studies place your work into context. The idea that this is "cowering behind cover" is completely incorrect.
Science, and the social sciences, aren't sports competitions to win the championship -- and again, thank god they're not. It's not that you're supposed to be swinging for the fences or else you're worthless -- what a counterproductive idea. It's about slowly building up knowledge.
In other words, if one doesn't care for the grind, hip deep in the nuance for years, decades, even centuries, then don't look to science for a TikTok'ified revelation; wait for the swing-for-the-fences headlines in the mainstream news instead. Flashes of leaps forward in science are exceedingly rare; we don't say "on the shoulders of giants" for nothing. Those giants are the towering, innumerable generations that came before us, and it is utterly humbling to see how long basic technologies like mastering control of fire took to spread across the world [1].
I see this outsized appetite for the easy button in a lot of programmers as well. There comes a time when one has to grind through small shavings of parsimoniously parceled-out pico-packets of understanding to progress in one's engineering maturity. Categorically denying that, by insisting there must be an easy, quick, simple way or that the ones doing the grinding are somehow character deficient, closes off one possible way to grow. Our work is already challenging enough, so I'm always open to any way to grow myself, though YMMV and one could even do better closing off that avenue for all I know.
Just to perform junior-level programming acceptably well takes a stupendous amount of grinding that most non-programmers cannot fathom doing themselves, though many will happily grind just as much if not more at other pursuits. I cannot, for example, fathom watching so much sports that I could quote on demand specific games and winning shots/goals on specific days by specific players. And yet there is an NBA star who is a walking encyclopedia of every game they've ever played in their NBA career, who knows not only the winning shots but from where on the court, who was covering them, the relative positions of nearby players, and who they were. The grind to get into the game, so to speak, is penny-ante table stakes compared to the grind the pros put in. There are more such personal eras of grinding to accomplish successive levels of mastery in these pursuits, and programming is no exception in my personal experience.
This isn't some personal character issue; it's a cognitive bias (or two or more, depending on how you look at it) causing us to overestimate what we can accomplish in the short term and underestimate what we can accomplish in the long term. Projection bias [2] and the planning fallacy [3] are my favorites for explaining this effect.
Wow, no. Just no. That's not how research is done. What you're describing sounds more like writing a clickbait article than writing a research paper. Writing a research paper in that manner would damage the researcher's credibility. A good research paper does lead with a discussion of related research and methodology and is careful to not accidentally make unfounded conclusions. In fact, some papers will include a "threats to validity" section, which I think is a good exercise in skepticism.
You'd be surprised what a conversation with someone who knows someone can yield. I used to cold contact authors and sources on interesting papers, including CRS, and had a better than 50% response rate.
Same with farmers. Whenever I called to talk about alternative crops to corn, wheat, and soybeans, once they realized I wasn't trying to get them to buy anything from me, they were happy to tell me about everything they were dealing with. Granted, it may have mostly been the coffee kicking in at 4 or 5am, but they didn't hold back about every pain in their ass from city hall to the capitol.
> You'd be surprised what a conversation with someone who knows someone can yield.
This is definitely true. But it's also true that key decision-making processes at large corporations -- particularly those that are publicly traded and/or concerned about regulatory oversight -- tend to be more controlled and process-guarded than your run of the mill farmer.
Sure, but those interviews can't be included in a research paper, because the moment you ask whether what they say can be on the record, or whether they can be named as the source of a statement, they would shut up.