
The Oxford maths interview, shaking up admissions - fjmubeen
https://higheredrevolution.com/oxford-admissions-needs-a-shake-up-471f4e989918#.ifg7bzn0t
======
Smaug123
I wonder to what extent there is anyone who could cope with the Oxford course
but who could not breeze through A-level Further Maths. I entirely understand
that the converse is false: there are people who are extremely good at exams
but who simply couldn't cope with the Oxford course.

Almost all of those on the Cambridge maths course (which I'm taking to be
similar) found A-level maths/further maths very easy. Of course, the
admissions process selected for such people, but I think "finds A-level maths
content very easy" is a prerequisite for the course.

~~~
technotony
My Cambridge interview tested for this explicitly. They gave you 45 minutes to
prepare answers to 8 'further maths' type questions, then you had to solve
some creative problems at the end. These creative problems required knowledge
of further maths concepts but were unusual so you needed time to think about
them and develop a mathematical framework for the solution. The format
therefore tested both that you could do the further maths questions easily and
quickly, and that you could solve new problems.

The 'creative' problem I had was this: A mouse walks along an elastic band at
2 meters per minute. The elastic band stretches by 1 kilometer every hour. How
far along the band does the mouse get? If you got that right, then they gave
you a stretch question where the band doubles in length every hour.

~~~
cgio
Am I wrong to think that the stretching of the band has nothing to do with
"how far along the band" the mouse will travel? Maybe it's my English as a
second language, but my intuition is that the mouse still travels along the
band as it expands, and you would probably need an external point of
reference, e.g. "how far along the gap that the elastic band bridges", to make
the question more challenging.

~~~
tfgg
The mouse is walking along the band, while the end of the band moves away,
however the mouse is also being carried along by the expansion of the band. I
think the best approach is to think in terms of the fractional coordinate of
the mouse along the band, and work out its effective velocity in this
coordinate.
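The fractional-coordinate approach can be made concrete. A minimal sketch in Python, assuming an initial band length of 1 km (the puzzle as quoted doesn't specify one) and linear stretching:

```python
import math

v = 2.0           # mouse speed relative to the band, metres per minute
L0 = 1000.0       # assumed initial band length, metres (not given above)
k = 1000.0 / 60   # stretch rate: 1 km per hour, in metres per minute

def fraction_along(t):
    """Fraction of the band covered after t minutes.

    In the fractional coordinate f = x / L(t), the mouse's effective
    velocity is df/dt = v / L(t) with L(t) = L0 + k*t; integrating
    gives f(t) = (v/k) * ln(1 + k*t / L0).
    """
    return (v / k) * math.log(1 + k * t / L0)

# ln grows without bound, so f eventually hits 1: under linear stretching
# the mouse always reaches the far end, at t = (L0/k) * (e^(k/v) - 1).
t_end = (L0 / k) * (math.exp(k / v) - 1)
print(f"reaches the end after ~{t_end / 60:.0f} hours")

# In the doubling variant, L(t) = L0 * 2^(t/60), the integral of v / L(t)
# converges to 60*v / (L0 * ln 2), which is about 0.17 < 1 for this L0,
# so there the mouse never reaches the end.
```

With these numbers t_end comes out to roughly 4,160 hours; a different assumed L0 changes that figure but not the qualitative answer (the mouse always finishes under linear stretching, and may never finish under doubling).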

~~~
cgio
The elastic band could expand in both directions or either. If I assume the
band expands so fast that I remain at the same point with regard to an
external reference point, have I not still moved 2m along the band after a
minute? I guess it's more about the semantics of how "moving along the band"
applies.

Also, how the expansion carries the mouse depends on how it moves. E.g. if
it's galloping, the band would not carry it.

------
Houshalter
This is just speculation but I have some ideas of what effects might be in
play here.

First, the exams test knowledge, not problem-solving ability, which is what he
wants. It _may_ be possible to design a test for problem solving ability.
That's basically what IQ tests are designed to do. As I understand it, the SAT
test used to be like an IQ test and highly correlated with IQ.

Second, exams probably do correlate with IQ (IQ correlates with literally
everything), but correlation isn't enough. You can't just take the top n
results from a test. You can use a test as a filter, e.g. sampling the top 5%
of results or whatever. But when you set the filter too high, you just get
outliers.

E.g. the top 100 people who are freakishly good at taking tests, and nothing
else. Maybe they have abnormally good memories and just remember everything
they read in the textbook. Or maybe they have parents that pushed them to
study excessively, or just did so on their own.

You should only use tests as a filter, a minimum standard, not optimize for
them directly.
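The filter-vs-optimise point can be illustrated with a toy simulation (all numbers hypothetical): model each score as true ability plus independent test-day noise, then look at the top 100 scorers. Their mean ability turns out to be well below their mean score, because extreme scores are increasingly selected for noise.

```python
import random
import statistics

random.seed(0)

# Hypothetical model: score = ability + noise, both standard normal.
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
by_score = sorted(people, key=lambda p: p[0] + p[1], reverse=True)

top_100 = by_score[:100]  # "take the top n results"
mean_score = statistics.mean(a + n for a, n in top_100)
mean_ability = statistics.mean(a for a, _ in top_100)

# The extreme scorers owe roughly half their edge to noise: luck,
# memorisation, test-taking skill, whatever isn't ability.
print(f"top 100: mean score {mean_score:.2f}, mean ability {mean_ability:.2f}")
```

Using the test as a filter (say, the top 5%) and then selecting on something else avoids conditioning so heavily on the noise term.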

------
yomly
From speaking to people who have gone to Oxford and Cambridge, these
universities are not necessarily letting in just the brightest minds. Nor do I think
they truly are trying to find them - there is a definite "fit" for these
places. If I had to characterise it, I'd say "smart, and very
academic/scholarly". They all seem to have a deep appreciation of some subject
or topic and it seems like their admissions process is very good at uncovering
it.

The other thing everyone from there talks (whines) about is the sheer volume
of work they are expected to complete, and for that, exams seem like a good
(albeit imperfect) proxy for gauging this.

------
reillyse
I didn't do the A-levels but from my understanding of them I have always
thought that they don't seem to discriminate between candidates well enough.
That is to say, there just are not enough grades. I think the passing
grades you can get are A, B, C, or D. Most people who get into the top
universities get straight A's. I think if each of these grades was split in 3
it would make the result a lot more meaningful and would go a long way towards
ensuring that the best candidates got invited to study at the university (I
think the entire interview process is fraught with huge bias problems).

~~~
tfgg
They recently (a few years ago) introduced A* grades for 90+/100 scores, where
A was 80/100 before and so, as you note, not very discriminating. I'm not
totally sure, but I think a standard offer for one of the elite universities
is A*AA these days.

I'm not sure I like the idea. Only having to hit 80 reduces stress quite a
bit, and it gave me time to do extracurricular things like programming, and
way more maths modules than anyone should :) If I had to hit 90+ in
everything, I would probably have been risk averse and cut down. It also
feels like it selects a bit too much for people who overperform in exams.
That said, I did have to send in my raw module marks anyway when I applied ~10
years ago, and did hit 90+ anyway.

~~~
OJFord
[At least when I took them a few years ago] the admissions offices get your
raw results anyway - so even if it was just 'A', they could still have their
own internal A* for 90+.

Surprised nobody's mentioned STEP: three exams, at least two of which are
required for the mathematics tripos (and, I think, engineering at some
colleges), and which are significantly more likely to require preparation.

I don't think any applicant would be worried about FM - because if they were
worried about anything it would (should) be STEP.

But then, uh, I may be biased...

~~~
aninhumer
>the admissions offices get your raw results anyway - so even if it was just
'A', they could still have their own internal A* for 90+.

But as I understand it, they can't make an offer based on that, so they're
already obliged to accept you by the time they get those results.

------
dbcurtis
This article speaks directly to the problem that Art of Problem Solving is
trying to address.
[https://www.artofproblemsolving.com/](https://www.artofproblemsolving.com/)
Richard Rusczyk wants to teach kids how to think and how to solve novel
problems, and hates the normal approach of teaching rote formulas. No student
steeped in the
AoPS way would crash during the Oxford interviews. An Oxford interview is
their "middle school normal".

The AoPS textbooks are the best math textbooks I've seen anywhere. The online
classes are great, but the pace is blistering; most kids would do better to
use the AoPS books at a more suitable pace. (And by blistering, I mean my
daughter was very seriously challenged to keep up with her peers all the way
through single variable calculus, where AoPS tops out. She then enrolled in
multi-variable calculus at a local engineering university at age 13 and blew
away the curve.)

AoPS is an example of where learning can go. AoPS teaches how to think, not
how to take tests.

------
murkle
If he thinks exams are a bad indicator... why does Oxford assess its students
with... exams?

~~~
fjmubeen
Very fair question - alas, I did not have the authority to change the system
(and, for selfish reasons, would not want to as my exam scores flattered my
actual ability).

Worth noting that written exams do not feature at all in research. There's a
recognition at this level that mathematical understanding cannot be captured
through blunt testing instruments.

~~~
harry8
Do you think you're confusing existing "bad exams" with all possible exams?

I don't recall ever sitting an exam with 10 questions where if you got 2 of
them out you'd come first. You'd mark such a "hard" paper based on "this is a
reasonable avenue of exploration", "this is a good idea that won't work",
"this approach will prove ultimately futile but given the student hasn't seen
anything like it before it's definitely worth some points."

Why are you trying to achieve just that in an interview?

Research isn't done in interviews any more than research is done by sitting
exams. However, sitting down and exploring ideas with pen and paper by
yourself is likely closer to research than "creating an impression" in an
interview - an interview where you, the interviewers, are saddled with the
irksome task of accounting and adjusting for your bias as far as you can even
be aware of it (does she really look like a mathematician? I didn't like
sportsman Ben's approach as much as bespectacled whoever's - is the impression
based on the maths alone, or is the form of its presentation fairly
important?).

The huge advantage of grading an exam is that you can exclude all of that guff
we all carry (with differing expressions of it) by simply not knowing who
wrote it. Not their name or anything.

The other big win is that if you publish the blessed thing you can make such
exams more normal. Budding mathematicians might practise such skills and get
better at them, younger.

As it is, you're selecting for the children of mathematicians, who get such
practise at home, and excluding the very smart who honed their exam skills
alone because that's all they encountered.

I also got very flattering scores on mathematics exams but I wouldn't mistake
that for anything beyond my ability to act the student performing seal at that
particular game. Any exam where it is remotely possible to get 100% does not
test anything like creativity - creativity is the thing that is most
important, no?

------
knughit
Ed Frenkel told a great story about how Soviet Russian oral exams were used to
discriminate against Jews by giving them harder questions and demanding more
sophisticated answers.

[http://www.npr.org/2014/03/28/295789948/the-real-problem](http://www.npr.org/2014/03/28/295789948/the-real-problem)

------
PhantomGremlin
I was curious to see the plot of log(log x).

It's weird. Fortunately we have Wolfram Alpha so it was trivial to quickly see
it:
[http://www.wolframalpha.com/input/?i=plot+log(log+x)](http://www.wolframalpha.com/input/?i=plot+log\(log+x\))

~~~
ivan_ah
I was going to comment that Google search does plots too, but in this case it
seems to get it wrong:

[https://www.google.ca/search?q=plot%20ln(ln(x)](https://www.google.ca/search?q=plot%20ln\(ln\(x\)))

~~~
jamessb
How so?

The important parts to include on a sketch are that the function is
monotonically increasing, has an asymptote at x=1 [0], intercepts the x-axis
at x=e (approx 2.7) and quickly becomes almost flat. Both graphs agree on
these points.

Wolfram Alpha is using the complex logarithm
([https://en.wikipedia.org/wiki/Complex_logarithm](https://en.wikipedia.org/wiki/Complex_logarithm))
to extend to x < 1; Google isn't. I'm fairly sure an interviewer would expect
a sketch that looks like Google's, perhaps with a comment that evaluating
ln(ln(x)) for x < 1 is possible (only) if the ln function is allowed to take
complex values.

[0]: For values of x < 1, ln(x) < 0, so ln(ln(x)) is not a real number.
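The sketch features above are easy to check numerically; Python's `math.log` is real-valued, so it behaves like Google's plot rather than Wolfram Alpha's:

```python
import math

def lnln(x):
    """Real-valued ln(ln(x)); only defined for x > 1."""
    return math.log(math.log(x))

print(lnln(math.e))         # x-intercept at x = e: ln(ln(e)) = ln(1) = 0
print(lnln(1.000001))       # vertical asymptote at x = 1: large and negative
print(lnln(10), lnln(1e6))  # monotonically increasing, but nearly flat

# For x <= 1, ln(x) <= 0 and the real logarithm is undefined:
try:
    lnln(0.5)
except ValueError:
    print("not real for x <= 1")
```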

~~~
ivan_ah
You're right, the plot is correct. I didn't look closely and thought the
vertical asymptote was at x=0, but actually it is at x=1 as you mentioned.

------
rickhanlonii
Link is 500

Edit: looks like Medium is having issues everywhere.

------
at-fates-hands
_The rise of continuous assessment models and the ability to track students’
every interaction with digital learning content may allow for broader, more
holistic evaluations of student potential._

This is merely a complex way of saying "metrics". Welcome to the real world
Oxford kiddies.

~~~
stuxnet79
Academese, the English disease.

