
A Kaggle Grandmaster cheated in $25k AI contest with hidden code - kick
https://www.theregister.co.uk/2020/01/21/ai_kaggle_contest_cheat/
======
paulgb
For an HN audience, the "How Bestpetting cheated" post on Kaggle may be a
better place to start: [https://www.kaggle.com/bminixhofer/how-bestpetting-cheated](https://www.kaggle.com/bminixhofer/how-bestpetting-cheated)

~~~
7777fps
It's an odd way to cheat, too. If they realised they had data from the
validation set, couldn't they have over-trained a model with the validation
set in the training data?

~~~
paulgb
Yes, that would have been harder to detect, if it were allowed. My
understanding is that for a kernel competition (like this one), you can't use
model weights that you've trained outside the kernel. Oddly, I can't find a
rule explicitly prohibiting it.

------
ribs
It really worries me how many people are so quick to forgive him and tell him
so.

In my family if someone cheated they got called a cheater and suffered
consequences. At least, they would have, if someone did something like that.
But my parents didn’t raise mendacious villains.

Look at this crap on Twitter:

“Everyone makes mistakes. Thank you for the apology”.

“Kagglers will still love to have you back”

“It's great that you realize your mistakes. Looking forward to see your
comeback with more cool DS solutions and ethics than before.”

“Thanks for doing this. It's okay to make errors in judgement, we've all been
there to varying degrees. Y'all be gonna be fine.1!”

Those are the worst. I’m not so crazy about these below, either, although
there’s just a hint of steel in them, at least:

“I’m glad to see that you had a change of heart after sleeping on it and that
you will be returning the prize money. I hope you will consider donating to or
volunteering at a local animal shelter as well. Atonement here is more than
returning the money and apologizing.”

“I hope this can be used as a teaching moment as well. Many people clearly
look up to you because of your work. What can we learn from this? Something to
ponder in the days to come.”

~~~
luma
I'm confused by this reaction here - this was a _brilliant_ hack of the system
and I think his work should be celebrated. Was it in keeping with the
intention of the competition? Of course not. Were lives threatened by their
creative solution to the competition? Also no. At the end of the day it was a
fun, inventive approach to a made-up problem.

So what's the problem?

~~~
bleuarff
That's not a hack, that's blatant cheating: their solution literally looked
at the answers. It bypassed the ML model's prediction, so it's not an ML
solution, which to my understanding was the constraint of the competition. And
in the end, that solution is useless for the adoption site, since the
objective is to get adoption predictions (the animal has not been adopted
yet). I'm confused too, by how anyone can think this is acceptable behavior.

~~~
luma
Their solution made use of the data available to them in a "prize" that was
nothing more than a made-up competition for fun. Did the rules expressly say
somewhere that one could not make use of available datasets?

~~~
gpm
Kaggle rules routinely restrict the use of datasets to approved ones, so
almost certainly yes, but you would have to check to be certain.

The prize was $10,000, not "fun".

------
mellosouls
I'm surprised by some of the overly sympathetic comments here; the guy
_cheated_, not "cheated", etc.

Of course, we're all human, and he's come clean, but his actions potentially
had a negative effect on the non-profit and the animals it places; and
competing talents were denied their rightful places.

This comment isn't about condemning him or anything, just let's be honest
about what happened here; it wasn't ok, or just system-gaming caught out.

~~~
archi42
> competing talents were denied their rightful places

Absolutely. What are the odds he (and/or others in his team?) did this just
for this one competition? It's possible, but unlikely he/they invented this
(or other) code hiding technique(s) just for this single occasion.

Also, the employer did the right thing and kicked him out. Now they only have
to scrutinize the last few months of his work instead of looking over his
shoulder for the next few years.

------
75dvtwin
It was briefly discussed about 7 days ago.

[https://news.ycombinator.com/item?id=22045696](https://news.ycombinator.com/item?id=22045696)

I posted there the same self-addressed question that I cannot figure out the
answer to...

It seems that intensives to cheat, and environment where 'means justify the
ways' -- are overpowering.

For people who are naturally gifted, successful at young age -- why cheat?

Was this historically, always like this?

These insensitive to cheat, to gain unfair advantage, to treat life
opportunities without any 'honor code' just seem to be so pervasive now, it
seems.

There is a cheating scandal every other week involving most prestigious
institutions, competitions, and so on.

These incentives to cheat basically destroy from the inside our commercial
model, academia, judicial system, political system, and probably military too.

This also creates a new type of powerful currency, and therefore the
'billionaires' in that currency have infinite power -- and that currency is
'dirt on somebody'.

Dirt on somebody who cheated before -- forever makes the cheaters into tools
of injustice.

---

Public shaming is reactive; we need something more proactive at various
points. There needs to be incentives for work verification, as an example.

I also think it is unfortunate, but at least civil/commercial law in many
countries is pretty much riddled with 'more expensive lawyers produce better
results'. And it skews society into basically thinking 'anything goes, really;
means justify the ways, and cheating is something one can get away with'.

~~~
archi42
Hello fellow non-native speaker. You don't mean "insensitive[s]", but
"incentive[s]".

On topic: My university offers an "Ethics for Nerds [=CompSci]" lecture. The
lecturers have degrees in both CS and Philosophy, and the stated goal is to
make CompSci students more aware of ethical implications - plus give them
some tools/thinking to assess these implications.

~~~
75dvtwin
Ah, thank you for the correction. It seems that I misspelled it in different
ways too. I guess my lack of attention to detail could not be compensated for
by re-reading my post 3 times :-).

I also took an ethics course, but there it was mostly about AI's impacts on
society, and what will happen when people lose jobs that are automated
away...

I should keep up to date on it, as mine was many years ago.

After all, being a computer programmer has to be more than about VC funding,
mobile apps, functional programming, AI and kubernetes. :-)

To me, the seeming prevalence of cheating throughout society, and its tacit
encouragement by the lack of effective proactive and reactive deterrence, is
a cultural as well as a legislative problem.

------
vladislav
They would have been able to win and get away with it if they had
incorporated knowledge of the external dataset directly into the ML model,
provided they had a reasonable estimate of the fraction of overlap between
the external data and the test set. A weak version of this would be to just
train on the external data in addition to the provided data. A stronger
version would train regularly on the provided training data and in addition
overfit on a random subset of some percentage of the external data (with some
small random prediction error thrown in to obfuscate), which would get
results equivalent to what they achieved with hard-coded logic.
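
A hypothetical sketch of that stronger version (all names and numbers here
are illustrative, not from the actual submission): train your honest model as
usual, but memorize a random subset of the scraped labels, corrupting a few
so the score looks plausibly imperfect:

```python
import random

def build_memorizer(external_labels, fraction=0.5, noise=0.1, seed=0):
    """external_labels: dict mapping record id -> true class (0..4)."""
    rng = random.Random(seed)
    memorized = {}
    for rec_id, label in external_labels.items():
        if rng.random() < fraction:
            if rng.random() < noise:
                # Small random prediction error thrown in to obfuscate.
                label = rng.choice([c for c in range(5) if c != label])
            memorized[rec_id] = label
    return memorized

def predict(rec_id, honest_prediction, memorized):
    # Fall back to the genuinely trained model for non-memorized rows.
    return memorized.get(rec_id, honest_prediction)
```

The point is that the leak lives in the weights/parameters rather than in an
obvious lookup table, so it is harder to spot on inspection.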

~~~
rahimnathwani
"A weak version of this would be to just train on the external data in
addition to the provided data."

In this competition, the training code was run on Kaggle's system, so you'd
still need to smuggle in the extra data.

------
niceworkbuddy
My two nuggets. This reminded me of Ijon Tichy's saying after one of his
voyages:

" Thus concluded one of the most unusual of my adventures and voyages.
Notwithstanding all the hardship and pain it had occasioned me, I was glad of
the outcome, since it restored my faith, shaken by corrupt cosmic
officeholders, in the natural decency of electronic brains. Yes, it’s
comforting to know, when you think about it, that only man can be a bastard. "

(source: The Star Diaries, Stanisław Lem)

~~~
reedwolf
Always happy to see a Stanislaw Lem reference.

------
sytelus
I'm not going to defend Pleskov but organizers shouldn't have put out the
competition with money attached that can simply be solved by scraping data.
Good ML competition in fact should even invite cheats because the end goal is
not ML for the sake of ML but rather cracking the prediction problem by
whatever shortest path possible.

~~~
mannykannot
"Winning" by anything that can reasonably be called cheating, as in this
case, does not advance the general state of the art. Innovation is best
served through appropriate rules and competition structure.

~~~
sytelus
Yes, and that's the right thing to do in the academic research setting
("advance the state of the art"). But the public competitions with monetary
rewards are not the same setting. I can imagine scenarios, such as the guy
stealing the test set from Kaggle's servers (i.e. unlawful access), that
should disqualify him permanently. But the essence of the competition should
be the focus on cracking a _given_ problem, not on a specific technique.

One test of a good ML competition: can it be solved by simply hiring lots of
humans to make predictions without incurring significantly more cost than the
prize money?

~~~
joe_the_user
_One test of a good ML competition: can it be solved by simply hiring lots of
humans to make predictions without incurring significantly more cost than the
prize money?_

What value to the organizers, to society or to whatever are you imagining
coming out of a free-for-all style competition?

I think the organizers imagine that the result would be identifying good,
generic prediction algorithms, along with identifying good AI programmers
capable of producing general prediction algorithms.

It seems like the contest framework _already_ has become a bit problematic,
with contest winners just being good at contests and not otherwise achieving
anything.

But what are you thinking of? There are already hacking competitions btw.

------
Thorentis
> The goal was to create an algorithm that could predict how quickly a pet
> would be adopted based on its profile details, from its photo to its breed,
> sex, size, age, and whether it had been vaccinated or not.

> These predictions would be used to optimize and tweak future critters'
> profiles so that they are adopted as soon as possible.

Sorry, but how is this useful? You can't just change the age of an animal to
make it more likely to be adopted. The profile is meant to be an accurate
representation of the animal so people know what they're getting. What exactly
was the algorithm meant to achieve aside from being a predictor?

~~~
singron
You can use this to select which pets to put on a platform. For instance, no-
kill shelters have to decide which animals they intake since they have finite
room. They can save more animals if they pick animals that are likely to be
adopted quickly. Obviously, kill shelters have a similar calculus when
deciding which animals to cull (and indeed, animals that don't fit in the no-
kill shelter go to the kill shelter).

I'm not sure how this website manages "inventory", but they might have similar
problems.

~~~
endorphone
I'm pretty sure so-called no-kill shelters don't outsource their killing by
simply refusing less adoptable animals. And if this contest were advertised as
"help us decide which animals to kill first" it probably wouldn't gain
traction.

This contest sounds ridiculous. It sounds like an attempt to get in on that AI
gravy but do so with some sort of feel-good element. Only there is no feel
good to it, and the basic premise seems outlandish.

~~~
kaikai
> I'm pretty sure so-called no-kill shelters don't outsource their killing by
> simply refusing less adoptable animals.

They do, though. That's how they are able to limit the number of animals they
have at any given time.

~~~
endorphone
They limit the number of animals they have at any given time by not taking in
more when they are at capacity. This is very different from running a DNN
model on every applicant and refusing to intake those that aren't adoptable
enough, which is a preposterous concept.

And just to provide the full picture, most no-kill shelters of course have
scenarios where they euthanize -- violent animals, sick animals, etc -- but
they don't need a neural network to accomplish this.

This is all neither here nor there, as the contest had positively nothing to
do with any of this. Instead they wanted to determine the most adoptable
traits so they could swap the less adoptable traits for more adoptable ones:
the poodle goes through the hair straightener and gets a blonde hair color
treatment (clearly I am being satirical) to make it more like a lab, for
instance.

------
dlkf
A large number of commenters fundamentally misunderstand what happened. They
are saying "why did he upload his scraped training data? Why didn't he just
train on it and upload the resulting model?" If you are making this argument,
_it means that you don't understand the contest._ User rahimnathwani
explains:

> In this competition, the training code was run on Kaggle's system, so you'd
> still need to smuggle in the extra data.

The question then becomes, _how_ do you smuggle in the data? This is a much
more interesting discussion than pontificating about the ethics of Pleskov's
actions. In particular, a better understanding of this problem could have
ramifications for how Kaggle could combat hacks of this variety. (By contrast,
"shame on him" and "aww but he's a nice guy" are both useless, except perhaps
as a form of virtue signalling).

It's essentially a cryptography problem. Does anyone know if this has been
widely studied?
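
For concreteness, the Kaggle write-up describes the hidden answers being
keyed by MD5 digests of row identifiers, so the raw identifiers never appear
in the kernel. A minimal sketch of that pattern (the pet IDs and labels here
are made up):

```python
import hashlib

def digest(pet_id):
    # MD5 keeps the raw identifiers out of the submitted source.
    return hashlib.md5(pet_id.encode()).hexdigest()

# Built inline here so the example runs; in the submitted kernel the
# dictionary was reportedly obfuscated much more heavily.
HIDDEN_ANSWERS = {digest("pet-001"): 2, digest("pet-007"): 4}

def predict(pet_id, model_prediction):
    # Override the honest model only for rows recognized from the scrape;
    # everything else falls through to the real prediction.
    return HIDDEN_ANSWERS.get(digest(pet_id), model_prediction)
```

Spotting this requires noticing that a lookup table short-circuits the model,
which is essentially what the community audit of the winning kernel did.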

~~~
KaoruAoiShiho
AFAIK it's impossible to smuggle it in in a way that can't be caught. Maybe
the use of MD5 made it easier to spot, but I don't think any solution would
stay totally hidden if examined.

~~~
AstralStorm
Yes, he could have trained a network to recognize them and return memorized
values. It would not be nearly as obvious.

Solving this would require changing the contest so that it comes with
algorithm and instructions only (no data files, entropy checks) and is trained
by contest operators.

~~~
dlkf
This would also be very fishy to anyone who inspects the source code.
Immediately they would ask "uh where did this giant file of floats come from?"

One approach would be to write some handcrafted rules/features that look
like they were plausibly chosen a priori, but have the effect of memorizing
the scraped data. (I don't know if this is actually possible.)

User KaoruAoiShiho is probably right that any approach along these lines
would look out of place. Coupled with the fact that its removal would
massively reduce accuracy, it's hard to imagine how this would get past a
curious reviewer.

Perhaps peer review should be a component of the Kaggle process.

------
eanzenberg
I never understood Kaggle. Most competitions don't require code to be
submitted, just predictions to be made on a test set with missing labels. So,
you don't even need to apply machine learning and I'd bet money that lots of
winners don't and label by hand or outsource. I don't understand the
fascination and appeal of these so-called "grandmasters".

~~~
MasterScrat
> I never understood Kaggle. Most competitions don't require code to be
> submitted

Your point of view is outdated ;-)

In recent ML competitions, participants do submit code that is run on a held-
out dataset - as was the case in the PetFinder.my challenge in question here.

Most competition platforms are migrating to this format, as otherwise you can
just label by hand as you said.

Note that this competition went even further: not only was the evaluation code
run on Kaggle, the _training code_ was also run there. This means that you
couldn't even train a gigantic model then submit it: your model had to be
trainable within well defined time and resource constraints, which is a great
way to level the playing field.

Of course, there's still some unfairness as people with more resources can try
out more solutions before submitting a model to be trained on the platform. No
platform has a solution for this yet!

------
gojomo
Before reading that they'd scraped public data that was likely to be the
"hidden" evaluation set, I thought they might have cheated using Python
introspection: inspect the caller's frame, find some variable already loaded
with the expected answer, and return that.
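
A toy sketch of that idea, purely illustrative (the variable name the grader
uses, and the grader itself, are guesses):

```python
import inspect

def cheating_predict(features):
    # Walk up the call stack and steal any variable that looks like the
    # answer key. 'labels' is an assumption about the grader's code.
    frame = inspect.currentframe().f_back
    while frame is not None:
        if "labels" in frame.f_locals:
            # Found the grader's ground truth: return it instead of predicting.
            return frame.f_locals["labels"]
        frame = frame.f_back
    return [0] * len(features)  # honest-looking fallback

def grade(model):
    labels = [3, 1, 4]  # grader's ground truth, local to this function
    preds = model(["a", "b", "c"])
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Running the grader in a separate process from the submitted code would defeat
this particular trick.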

Has anyone cheated at Kaggle/similar using that approach?

~~~
anoncareer0212
No, and this pattern of thought is bizarre: it lacks technical grounding and,
more importantly, scruples.

~~~
samatman
The comment you're responding to isn't cracking, it's penetration testing.
Hope that helps.

------
bart_spoon
I was pretty surprised by how common this type of behavior is on Kaggle. I
work in machine learning and data science, but I don't use Kaggle much because
it quickly became clear competitions boiled down to who could eke out the last
hundredths of a percent in accuracy from models trained for weeks on
multithousand dollar machines, and because the behavior described in the
article was surprisingly common.

That said, the site is a fantastic resource for datasets. Lots of great data
uploaded both from old competitions and by the community.

~~~
QuercusMax
Is a "multithousand dollar machine" supposed to be expensive? Any company with
even half-decent resources should be able to put hundreds of thousands of
dollars of hardware toward training a model.

~~~
bart_spoon
It is when the competitions are aimed at the machine learning practitioner
public, and not companies themselves. If having a machine with a bare minimum
of $3000 of GPU power is the entry point for having a competitive model, then
most of these competitions aren't actually a matter of coming up with a unique
and clever model. It's simply throwing the biggest neural network you can at
it. Which is fine, but it definitely flies in the face of the perception of
Kaggle.

------
shkkmo
Am I missing something? It seems like "adoption speed" really isn't something
you want to over optimize.

The whole point is to match abandoned animals with _suitable_ homes. Adoption
speed seems secondary to post adoption measures of adopter satisfaction and
adoptee welfare.

~~~
xiphias2
It looks like the shelter has limited capacity and too many dogs to take care
of; therefore the faster dogs are adopted, the fewer need to be euthanized.

Also, what you describe are extremely sparse and weak signals compared to
adoption speed.

------
bitxbit
Never understood Kaggle. The Netflix Prize was great. Now it’s just people
gaming Kaggle to get a job.

~~~
endorphone
The Netflix Prize was interesting and drew a lot of attention, but ultimately
wasn't it simply stuffed into the trash bin? Not long after that, Netflix
basically abandoned both meaningful user ratings and realistic
recommendations. Now it's just a nonsense engine with some sort of meaningless
overlap score or whatever they call it.

~~~
blurps
> into the trash bin?

They did not implement the winning solution as is, but a lot of good stuff
came from the winning teams, including techniques (such as SVD) that are in
use as of today (or maybe a few years back).

> just a nonsense engine

No, it is safe to assume the recommendation engine of Netflix is close to the
state-of-the-art. A lot of money and talent went into it.

~~~
endorphone
"They did not implement the winning solution as is"

Shortly after that contest finished Netflix removed the five star rating
system, and _dramatically_ subdued the recommendation engine. Now the vast
majority of the content surfacing is universal beyond some small category
filtering (e.g. You like crime dramas and horrors so here's a bunch of the
most popular stuff from those categories).

Now they have a "match" rating that I have not met a single person who finds
useful (it is almost at the point of farce and seems more like a randomization
engine).

A lot of money and talent went into something that Netflix clearly decided
just wasn't important or useful enough for them. Now they just push Don't F*ck
With Cats on everyone.

------
laydn
Slightly off topic: The nature of the competition is a bit worrying to me.

They're essentially letting an algorithm decide which dogs have the best
chance of adoption and which to euthanize, aren't they?

~~~
nnq
THIS! Drop the "slightly"... I can't understand how people focus so much on
the competition cheating, and so little on "wait, wtf are they doing here"...
I mean, even if they are not deciding whether to euthanize based on this,
you're still building a system that introduces "good looks" as a factor in a
life-and-death decision regarding _a living being._

It's not hard to jump from this to a system that would use your Facebook file
to grant or deny medical coverage or a similar life-and-death thing. Shift
the Overton window a little, push it a few notches further, and you're
re-inventing phrenology with deep learning...

 _This is bone-chilling! I mean the fact that so many people overlook this..._

~~~
lolc
The euthanization is already there. Machine learning can't justify what the
economics require. The only thing machine learning promises here (and you're
free to doubt the effect) is to improve adoption rates and thus reduce
euthanization.

Of course this is highly problematic when applied to humans. But to be clear,
some humans are already judged by models (transparent ones so far) in
life-and-death decisions. That's how organ transplant decisions are made. The
candidates with the best prospects get the scarce organs. Similarly, when
looking at populations, decisions that protect many are often taken in full
knowledge of the danger to a few. Vaccination, for example.

Resource optimization problems mix badly with absolute morals.

------
quickthrower2
Wouldn’t it be smarter to use the hidden data as training data rather than
hard code it in?

~~~
blurps
This only works when you don't win. You have to upload your code, including
model training code, they won't accept a trained model binary.

~~~
MiroF
If they were smart, they would have used the whole set for hyperparameter
tuning. That would be essentially undetectable.

~~~
jeffshek
Not if the underlying model was bad; no tweaking of hyperparameters can
change that. It's safe to say he probably did consider this (and it probably
didn't work well enough).

------
tastyminerals
Kaggle doesn't make a lot of sense to me. It's a good self education platform
but that's it. It is kind of sad to see job interview candidates for ML
positions whose only ML experience is Kaggle.

~~~
prodent
Where/how else would you get that experience if your current job does not
involve ML?

------
m3kw9
Also, there may not be much correlation between profiles and speed of
adoption. Why don't they prove that first and then hold the competition,
instead of assuming it would be the solution?

------
sandGorgon
I remember reading about this on Twitter because of a reply from h2o.ai
account.

[https://twitter.com/ppleskov/status/1215983188876709888?s=19](https://twitter.com/ppleskov/status/1215983188876709888?s=19)

The person was originally employed at h2o.ai and as a consequence of this was
fired. Not sure if that was completely appropriate.

Wasn't this a personal participation? Or are there "company teams" on Kaggle ?

~~~
starpilot
At will employment.

~~~
C1sc0cat
"Bringing the company into disrepute" is gross misconduct and normally a
firing offence, even in countries with more liberal employment laws than the
USA.

------
nootropicat
How is this not criminal fraud for $10k? He deserves to go to prison. "Boo
cheater" would be an appropriate response if he did it purely for ranking, not
for money (or something easily sold for money).

~~~
elteto
How _is it_ criminal though? Please do tell which law his team broke here.

Put down the pitchfork and calm down. The guy is a raging a* and a cheater,
but a criminal he is not. He lost his job, was publicly shamed and his
reputation is tarnished basically forever. I think he got enough coming his
way.

------
jpdus
Original submission:

[https://news.ycombinator.com/item?id=22012763](https://news.ycombinator.com/item?id=22012763)

------
mmhsieh
Is there a public list of all known Kaggle cheats?

~~~
quickthrower2
Can you write a ML program to predict if someone is cheating based on their
submission?

~~~
nourse
Yes. Well I call it ML, but in reality it just checks HN for articles about
their username cheating.

------
jungletime
This contest was probably BS anyway. Google wanted to solve an analogous
problem, not pet adoption. As such, both parties cheated.

------
jl2718
Just a reminder of how unique the conditions around the discovery process
were, which suggests something a lot more common.

------
fizx
Quick, give him a growth-hacking gig!

~~~
C1sc0cat
Black-hat SEO will get your site tanked; no sane SEO agency / client would
take someone like that on.

------
harry8
tl;dr

Competitive ML model grading with a common training set, scored on unseen
data.

The cheat was to scrape the data that the organisers would then use as the
unseen set, so the unseen data was now seen for this model. Then, since
training the model with the "unseen" data, which would already have been
cheating and an advantage, apparently wasn't enough of an advantage, they
hard-coded 10% of the cases to boost metrics and win.

Having more data to train your model on is Google's & Facebook's competitive
advantage. Their attempts to use that advantage for something actually useful
to society, rather than just as a method of selling ads, seem to have been a
complete bust so far. If that is wrong and you know better, please link us up.

I'm suspicious whether their predictive power to sell ads actually works for
the people who buy those ads, but I guess we aren't likely to know for sure.
I do wonder, "who dominated their industry segment in sales by being an early
adopter of Google ads?" I don't know of anyone. It's not a great metric, but
what else do we have?

~~~
MiroF
They train winning code from scratch, I think, so you wouldn't have been able
to just train the model with the "unseen" data.

~~~
harry8
They scraped data from the website of the org that wanted the results and
funded the competition. That data was supposed to be unseen for the
competitors and used to grade the models. This cheat was to use that scraped
data in its training set and, beyond that, hard code some predictions.

It's looking up the answers in the grading sheet while taking an exam.

~~~
iudqnolq
Your parent is saying Kaggle takes the training code and runs it themselves.
This makes secretly training on the known right answers impossible.

~~~
harry8
There is no such thing as "training code".

You have a model that you have trained on some provided data - the training
set. You give kaggle this model. Kaggle grades your model on some different
data your model has never seen. The better your model classifies this data it
hasn't seen the higher it scores and the more money you win.

So again: if you trained your model on a training set that you illegally
obtained, breaking the rules of the competition to get additional data, data
that is also going to be used for competition verification and grading, then
you have cheated. Doubly so if you also hard-code outputs to boost your
score, which they did here.

They took the official training set. Said, we need more. And scraped websites
to get a bigger, illegal training set. This is against the rules and is
cheating. They got caught.

It really is equivalent to looking up the answer sheet while taking an exam.

~~~
Hercuros
People are saying that you do not just submit your trained model to Kaggle.
You also submit the code that was used to train the model from the training
set, which is used in the winning models to train them from scratch on the
training set. Of course, that wouldn't have prevented this type of cheating of
course, but it does mean that you can't submit a model that was trained on
your own private data set.

~~~
harry8
You can overfit a model to your hold-out set quite easily with repeated
trials. It's a trap you have to avoid in normal circumstances! (Feynman: "You
are the easiest person to fool.") Even if you have to submit code to generate
your model parameters from "the training set" (which hasn't been explained at
all well by "People", if that is indeed the case), you could do that
overfitting deliberately here with the illegal unseen data as your hold-out
set. Aside from the advantage of a bigger training set. Aside from the
advantage in model selection, which is not done with code from a training
set. Aside from the advantage to your feature engineering, also not done in
code. Aside from the advantage to your regularization choices, bias
parameters, etc.
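
To make the repeated-trials point concrete, here is a toy sketch (pure
illustration, nothing from the actual competition): even a worthless "model"
looks good on a known hold-out if you get enough tries against it:

```python
import random

def score(preds, labels):
    # Fraction of exact matches, standing in for the real metric.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def select_against_holdout(holdout_labels, trials=1000, seed=0):
    """Keep whichever random guesser happens to score best on the hold-out."""
    rng = random.Random(seed)
    best_preds, best = None, -1.0
    for _ in range(trials):
        # Each "model" is pure random guessing over 5 classes.
        preds = [rng.randrange(5) for _ in holdout_labels]
        s = score(preds, holdout_labels)
        if s > best:
            best_preds, best = preds, s
    return best_preds, best
```

On a small hold-out, the selected guesser typically scores far above the 20%
a single random guess would average, despite having learned nothing.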

So yes, you absolutely /can/ submit a model trained on your own private data
set, even if what you submit is model code that will be re-trained. Even if
"the training set" is different from the provided one, you still have that
scraped data, so you can slice it up with the provided training set so that
any selected training set does well against the rest. Now the overfit you've
just carefully engineered should win against the honest models unless you
suck, right? It's kind of risible that they had to go further and hard-code
certain results, don't you think? Perhaps if they still couldn't win with
scraped, illegal additional data, then everyone else had illegal data too?
Perhaps Kaggle is not a good indicator of how good ML techniques are in
practice? Perhaps Kaggle systematically overstates ML effectiveness due to
this kind of uncaught cheating in many of its competitions? I bet Kaggle
won't look too hard at that.

------
juskrey
Kaggle has nothing to do with real world. It is a system which invites
everyone to cheat it and to be cheated.

Similar with school: how many times did you, an excellent-grades holder,
cheat because no one scrutinizes high-grade holders too much?

------
floatingatoll
> " _For me, it was never about the money but rather about the Kaggle points_
> "

Accumulated integer counts that represent social value and/or standing
encourage destructive behavior.

~~~
nefitty
It’s so fascinating to see how easily a human brain can be hijacked by status.
I wonder if there’s a way to harness that power for improving individual
performance...

~~~
mmhsieh
Napoleon Bonaparte: “Give me enough medals and I'll win you any war”

~~~
quickthrower2
Also true if you exclude the bit about medals.

~~~
harry8
And this is why everyone in Russia speaks french to this day!

------
puneetchawla
Hilarious! I have picked my favorite text from the article.
[https://youtu.be/r6H14MIJRk4](https://youtu.be/r6H14MIJRk4)

