Matt Clifford just wants to hoover up the Turing’s funding for his buddies at Entrepreneur First. The same Matt Clifford whose interests weren’t declared [1] when he wrote the UK’s AI policy. Why the clowns in government let him do it is beyond my understanding.
Your understanding is quite limited in that case. There are strikingly few people in the UK who can provide a shred of legitimacy to the government's AI plans. It’s precisely because of his “interests” that he knows what he’s talking about.
Avoiding conflicts of interest is key, but lacking interests is worse imo. Having someone with experience in the role has value, and I generally think his AI policy is excellent.
This was inevitable and we'll see it playing out all over Europe.
You have a desire to be relevant in an important technological shift.
On one side, you have big tech companies laser-focused on attracting the best talent and putting them in a high-pressure cooker to deliver real business outcomes, under a leadership group that has consistently proven effective for the last XX years.
On the other side, you have universities, led by the remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution. Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies or lack the DNA to thrive in them, actively avoiding that environment. Sprinkle on top several layers of governmental bureaucracy and diluted leadership, just to ensure everyone gets a fair slice of the extra funding.
I don't think universities should become industry. I mean, that is exactly what we have industry for. If you want to be put in a pressure cooker under leadership focused on business outcomes, great, do industry.
The problem really is that universities are treated as if they have the same mandate as industry. Government people shouldn't tell a professor what kind of research is interesting. They should let the best people do what they want to do.
I remember an acquaintance becoming a professor, promoted from senior reader, and he was going to be associated with the Alan Turing Institute. I congratulated him, and asked him what he was going to do now with his freedom. He answered that there were certain expectations of what he would be doing attached to his promotion, so that would be his focus.
This way you don't get professors, you turn good people into bureaucrats.
Yes. The demand for increasing control, driven by the "taxpayer's money!" lot evident in this thread, strangles almost all state-funded research because it demands to know up front what the outcome will be. Which instantly forces everyone to pick only sure-bet research projects, while trying to sneak off to do actual blue-sky research in the background on "stolen" fractions of the funding. Like TBL inventing the WWW at CERN: that wasn't in his research brief, and I'm sure it wasn't something that was funded in advance specifically for him to do.
Mind you, it was evident to me even twenty years ago when briefly considering a PhD that CS research not focused on applying itself to users would .. not be applied and languish uselessly in a paper that nobody reads.
I don't have a good answer to this.
(also, there is no way universities are going to come up with something which requires LLM like levels of capital investment: you need $100M of GPUs? You're going to spend a decade getting that funding. $10bn? Forget it. OpenAI cost only about half of what the UK is spending on its nuclear weapons programme!)
That doesn't sound like a fair appraisal of university research at all. How much do we rely on day to day that came out of MIT alone? A lot of innovation does come from industry, but certain other innovation is impossible with a corporation breathing down your neck to increase next quarter's profits.
Europe also seems to hand out PhDs like candy compared to the US (you can earn one faster, and you're less prepared for research), and there's a lot more priority put on master's degrees, which are largely a joke in the US outside a few fields like social work and fine arts.
Where I'm from, master's was the traditional undergraduate degree. Bachelor's degrees were introduced later, but the society was reluctant to accept them. For a long time, the industry considered people with a bachelor's degree little more than glorified dropouts.
Our PhDs also used to take really long, being closer to a habilitation in some European countries than what is currently typical for a PhD. But starting in the 90s, there was a lot of pressure towards shorter American-style PhDs.
These days, the nominal duration of studies is 3 years for a bachelor's, 2 years for a master's, and 4 years for a PhD, but people usually spend at least a couple of years more. Which is pretty comparable to how things are done in the US.
The other end of the spectrum is the British system, where you can do a 3-year PhD after a 3-year bachelor's. But they also have longer PhD programs and optional intermediate degrees.
I would argue a European PhD prepares you for research better than a US one. You're expected to hit the ground running with required prior research experience and you have no classes or teaching obligations which explains why they're typically 3-4 years long.
I think when talking about university research output it's pretty clear that the objective of university research is to produce output that is much earlier in the stack of productisation than something that comes out of a corporate entity. The vast majority of university research probably won't impact people on a day-to-day basis the way that running a product-led company will, but that's not to say it isn't valuable.
Take mRNA vaccines for instance - the initial research began in university environments in the 80s, and it continued to be researched in universities (including in Europe) through the 00s until Moderna and BioNTech were started in the late 00s. All the exploratory work that led to the covid vaccine being possible was driven through universities up to the point where it became corporate. If that research hadn't been done, there would have been nothing to start a company about.
It's the same in computing - the modern wave of LLMs was set off by "Attention Is All You Need", sure, but the building blocks all came from academia. NNs have been an academic topic since the 80s and 90s.
I suspect that in 2050, there will be plenty of stuff being built on the foundations of work conducted in academia in the 00s and 10s.
I wouldn't expect to see that many groundbreaking innovations being useful in day-to-day life coming out of contemporary university research. You have to wait several decades to see the fruits of the labour.
The problem is the "desire to be relevant in an important technological shift".
There's loads of worthwhile research to do that has nothing to do with LLMs. A lot of it will not or cannot be done in an industrial environment because the time horizon is too long and uncertain. Stands to reason that people who thrive in a "high-pressure cooker" environment are not going to thrive when given a long-term, open-ended goal to pursue in relative solitude that requires "principles and philosophical opinions" that aren't grounded in "actual execution". That's what makes real (i.e. basic) research hard and different as opposed to applied research. Lots of people in industry claiming to be researchers or scientists who are anything but.
A lot of businesses basically just steal from their future selves in perpetuity until the interest they’ve accumulated is too great and they implode.
It’s the GM and Intel school of business. Constantly choose what makes money now, and avoid advancements and infrastructure. Wait, now the competition is 20 years ahead? So you have 20 years’ worth of infrastructure to pay off, right now? Oh…
Everyone is just faking it until the chickens inevitably come home to roost. Then they walk away from the explosion and go to another company and say “see? Look how much shareholder value I made! Never mind that the company got destroyed shortly after I left!”
> For example, neither the key advance of transformers nor its application in LLMs were picked up by advisory mechanisms until ChatGPT was headline news. Even the most recent AI strategies of the Alan Turing Institute, University of Cambridge and UK government make little to no mention of AGI, LLMs or similar issues.
Almost any organisation struggles to stay on task unless there's a financial incentive or another driver, such as exceptional staff/management in place. Give them free money - the opposite of financial incentive - and the odds drop further.
I’m sorry to read this — it just doesn’t feel grounded in my own lived experience.
Many of the best Engineering and Computer Science departments, around the world, operate a revolving door for people to go in and out of industry and academia and foster the strongest of relationships bridging both worlds.
Look at Roger Needham’s Wikipedia page and follow his academic family tree up and down and you’ll see what I mean.
> remnants of that talent pool—those who were left behind in the acquisition race—full of principles and philosophical opinions but with little to no grounded experience in actual execution.
I do believe that these people at universities do have experience in the actual execution - of doing research. What they obviously have less experience in is building companies.
> Instead, you find a bunch of PhD students who either didn’t make the cut to be hired by the aforementioned tech companies
Or because they live in a country where big tech is not a thing. Or because these people simply love doing research (I am rather not willing to call what these AI companies are doing "research").
Jesus… are you this judgmental about everyone in society? Some people just value the university environment. It doesn’t mean they’re incompetent and had no other options. Not everyone values money above all else, nor does choosing to opt out of the private sector mean people are “remnants”.
From my perspective it's almost exactly opposite. Almost all of the people I consider exceptionally talented are vying for positions in academia (I'm in mathematics), and the people who don't make it begrudgingly accept jobs at the software houses / research labs.
I'm frequently and sadly reminded when I visit this website that a lot of (smart) people can't seem to imagine any form of success that doesn't include common social praise and monetary gain.
"Academia is an absolute fucking cesspool of political corruption, soul crushing metrics gaming and outright fraud functioning mostly as a jobs program for nerds that only produces valuable science completely in spite of itself, not thanks to it, because it manages to trap some genuinely smart and hard working people there like a venus fly trap its prey and keeps them alive to suck more “citations” and “grants” out of them."
This is not my experience with academia. Rather, my experience was that a lot of very idealistic people tried to make their best out of the complicated situation set up by incompetent politicians.
Have met lots of professors who are glorified managers doing no actual research, taking sabbaticals for a fat paycheck. I doubt very much they do any real work during these sabbaticals either. If I had to guess, I would bet that these sabbatical positions are frequently sinecures.
"Considering how the UK treated Alan Turing while he was alive, he deserved a better institute to honour his memory."
Ouch. Almost certainly nothing like the insult his memory endured from "The imitation game" film though. Everyone associated with that film should feel a bit ashamed.[1]
I'm a UK based software engineer, but I almost never heard anything about this organization. Has anything useful come out of it?
[1] I could only bear to watch the first 20 minutes. But I can't imagine it got any better.
"GCHQ Departmental Historian Tony Comer went even further in his criticism of the film's inaccuracies, saying that "The Imitation Game [only] gets two things absolutely right. There was a Second World War and Turing's first name was Alan".
If you are making a fictional film, knock yourself out. But when you are using a real person's name, have some respect for them and their work.
The main complaint seems to be the minimisation of Turing's homosexuality and the focus instead on Joan Clarke. There's also a whole list of historical inaccuracies on Wikipedia[0], including changing the name of the Enigma-breaking machine from "Victory" to "Christopher" and stating that Turing invented "The Computer".
The worst for me is the John Cairncross subplot that implied Turing might have committed some (light) treason because of his homosexuality. From Wikipedia[1]:
> Turing and Cairncross worked in different areas of Bletchley Park and there is no evidence they ever met. Alex Von Tunzelmann was angered by this subplot (which suggests that Turing was for a while blackmailed into not revealing Cairncross as a spy lest his homosexuality be revealed), writing that "creative licence is one thing, but slandering a great man's reputation – while buying into the nasty 1950s prejudice that gay men automatically constituted a security risk – is quite another."
I agree with the parent poster and a ton of people do. Anyone who actually knows the story of the breaking of Enigma and Turing in general would know that that film is terrible.
I know a great deal about both, Turing being my scientific idol. I thought The Imitation Game was great, though it gave not nearly enough attention to the postwar events and his trauma. I realize the title The Imitation Game was meant to carry a double meaning, about both Bletchley's cryptographic attempts and Turing's attempts to feign heterosexuality, but the movie ultimately was about breaking encryption, not a study of homosexual life in a bigoted nation. I wouldn't personally call it a biopic. It's a sensationalized version of actual events, because the real thing would've been boring. No one wants to watch me, by analogy, sit at a whiteboard beside a computer staring at symbols for eight straight hours.
I'm sorry you didn't get the biography you wanted.
The main criticisms appear to be that the film inflated Turing's contribution to the conception and development of Enigma. "Horribly skewed" seems like a bit of an overstatement.
I think one of the core observations in this article is that the ATI is essentially just a funding programme for existing university research organisations.
At least in places like Rutherford Appleton you have a distinct institutional identity and culture, in a place where it feels like stuff is going on (Harwell campus is a pretty cool place for both private and public research). There's an organisation that people work for with distinct aims and objectives. I don't really see how the ATI is much more than a funding brand under EPSRC.
If you set up a research institute, you can do good research in specific focus areas and take credit for it. If you just have a funding body, you're going to feel like a funding body, and very little is going to be attributable to your new institute.
I think the UK could do with setting up a few more proper national labs like the US has with LLNL, Oak Ridge and friends. That's not to say we don't have any, but if we're going to set up an institute, that's the model we should follow.
Having been involved in UK research labs before moving to industry, none of this surprises me. The government is more interested in short-term headlines than in actually doing something long-term.
They'd rather make capital investments (which aren't a recurring budget line, so the same money can make different headlines next year) than pay for the sort of ongoing spend on paying experts competitive salaries to actually build a real capacity.
Prior to the Turing Institute there was the Hartree Centre, which was hobbled from hiring anyone significant by being tied to a non-technical civil-service pay spine that struggled to reach even "competent"-level salaries for these positions, as the posts didn't manage hundreds of minions.
The Turing was my first major experience of the UK research community. It was immediately apparent to me that it was a bit of a snakepit---resources were denied or otherwise withheld between teams as a part of internal competition, service to the organisation was routinely forgotten or ignored, and changes in direction seemed to occur every quarter, if not every month. It could have been a very cool thing, and the community that was fostered there during the stable periods was extremely productive. But, there's now so much ill will around the latest purges that I doubt it'll ever regain that vibe.
> resources were denied or otherwise withheld between teams as a part of internal competition, service to the organisation was routinely forgotten or ignored, and changes in direction seemed to occur every quarter, if not every month.
Government funded
Loosely defined goals
Little oversight
No accountability
Yup, given the inputs, your description checks out.
Government contracts (ultimately tax money) are the primary way a lot of companies get their money, though. You don't sell fighter aircraft or massive rockets to the public. Even a lot of AI companies like Palantir primarily serve the government.
I could be totally wrong (please correct me if so), but IMO there are two things happening that weren't totally discussed.
1) My understanding is that most of the "researchers" and even students at ATI have a main role at a home institution. Really the question should be something more like "did ATI funding help advance UK research generally?" not "was ATI able to claim its own successes?"
2) Nobody has really mentioned how PhDs are a little different in the UK. E.g. you propose a topic -> research for three years -> receive PhD. In the US it's more like research for 2+ years -> find a promising area -> do groundbreaking work over 1+ years -> receive PhD. In other words, it's significantly easier to change your research direction in the US, and under the UK model it's of course easier to miss out on things like the LLM explosion.
Yes, I remember somebody doing a PhD in the US on theorem proving for a few years, but in his last year he focused on deep learning work, did his PhD thesis on it, and got hired by OpenAI.
> really the question should be something more like "did ATI funding help advance UK research generally?" not "was ATI able to claim its own successes?"
This is a reasonable objection, but imho misses the point.
Any entity that's unable to account for its positive externalities will underestimate its value. This happened to the Turing. Universities did not want to lose their staff to secondments or buyouts, and Turing cancelled major independent programs like their PhD offerings. Leads fought for their universities, as the article notes---not necessarily to make Turing itself better!
While recent cuts/"realignments" are painful, they also have re-focused support for projects at scale---projects that are substantially more than an afternoon a week for two academics. It's still, in many cases, subsidized consulting for UK businesses w/ dubious scientific merit, but that's a broader issue with the UK's research culture.
I have no comment on any of the general whining about the ineffectiveness of academia and the public sector, but what struck me about this article is how tiny the funding amounts are. They're talking about a £40 million endowment like it's impressive? Maybe I'm just too American, but it's baffling that Cambridge or UCL cares about a £1 million grant.
While this is an excellently written piece and really insightful into the state of higher education funding, what seems to be missing from the debate is concrete ideas of what should be done differently (either in 2014 or today). A lot of US innovation success comes from deep pockets of private venture capital, which is just missing in the UK. So if you're a politician/bureaucrat with a (let's face it) relatively small budget and much politics to deal with, the best strategy to take is not obvious (at least to me).
It's well written but completely unjustified in its criticism of UK universities or their role given the resources required to train SOTA models. Are any US universities training SOTA models? No. Your point about the need for private venture capital is exactly correct. I think some kind of new funding stream needs to be identified for doing this. The US is forcing China to sell TikTok's US arm for national security reasons. We could try to do something similar in return for granting US Big Tech companies access to Europe - I guess the digital tax is a step in this direction. But it seems challenging to enforce that given the current power dynamics.
This is no different than what happened in Germany. Despite having tons of funding, tens if not hundreds of AI research institutes, an abundance of talent and enough cluster time to go around for everyone, they were also blindsided by LLMs and are still to this day completely out of the race.
Here in Germany, they tended to bet on "AI for science", because it sounds "sciency" and that somehow appears more substantive, even though a lot of these "AI for science" or ML4Science projects are utter bullshit. They also invested substantially in Quantum Computing, again because it sounds "sciency".
I have the feeling that the UK is almost nonexistent in the field of AI, despite its previous position in the tech sector along the lines of neobanking and crypto.
It probably doesn't help that software engineer (and I guess researcher) salaries there are shameful and probably don't attract talent.
For years I have had headhunters from the UK contacting me, living in Europe, about positions there requiring relocation. And I felt pity for them, advertising jobs that look nice on paper but with salaries for senior developers equivalent to entry-level positions here...
I don't think Americans know the extent of missed opportunity.
- The phone in your pocket? ARM chip. Was a British company.
- Google DeepMind? British too, but they weren't the Google Research/Google Brain people who wrote the "Attention Is All You Need" paper that gave us today's LLMs. Still sucks though.
Most successful British teams I know are located in Miami, or funded from Palo Alto.
What an utter bullshit take. As if the ATI could ever be competitive at LLMs given the resource requirements needed to train foundational models. If we want UK universities to continue to make contributions in this sphere we either need to massively reduce the cost of training SOTA LLMs, create some national shared GPU infrastructure, or resource universities to access cloud infra. Unfortunately the latter is just introducing further dependencies on overseas (and increasingly problematic) private clouds. If things continue as they are then having the capability to train SOTA models will be a strategic imperative for every nation.
I see. Let's just assume that DeepSeek's V3/R1 budget of ~$5.5M was a lie and the Alan Turing Institute is just too poor to compete with their nine-figure sums. I guess I have no further questions.
> the $5-6M cost of training is misleading. It comes from the claim that 2048 H800 cards were used for one training, which at market prices is upwards of $5-6M. Developing such a model, however, requires running this training, or some variation of it, many times, and also many other experiments (item 3 below). That makes the cost to be many times above that, not to mention data collection and other things, a process which can be very expensive (why? item 4 below). Also, 2048 H800 cost between $50-100M. The company that deals with DC is owned by a large Chinese investment fund, where there are many times more GPUs than 2048 H800.
Oof, sounds like the budget for V3/R1 was exactly what I said. Having access to compute and running experiments is kind of the bare minimum for a supposed AI lab. And since this is a Western lab, their options for training are far more advantageous. But even if I were to accept all of those ridiculous fudged numbers, that's still within the Alan Turing Institute's budget.
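For what it's worth, here's the arithmetic behind both figures as a rough sketch: the ~2.79M H800 GPU-hours and the $2/GPU-hour rental rate are DeepSeek's own reported numbers for the final V3 training run, and the ~$30k per-card purchase price is my assumption, picked to land inside the $50-100M range the quote above gives.

    # Rough sketch of the two cost figures being argued about.
    # Inputs are reported/assumed numbers, not verified facts.
    gpu_hours_final_run = 2_788_000   # H800 GPU-hours DeepSeek reports for the final V3 run
    rental_rate_usd = 2.00            # assumed rental price per H800 GPU-hour
    headline_cost = gpu_hours_final_run * rental_rate_usd
    print(f"Headline 'training cost': ${headline_cost / 1e6:.1f}M")        # ~$5.6M

    # The objection in the quote: owning the cluster (and re-running experiments) costs far more.
    gpus = 2048
    price_per_h800_usd = 30_000       # assumed purchase price per card, within the quoted $50-100M total
    print(f"Hardware alone: ${gpus * price_per_h800_usd / 1e6:.0f}M")      # ~$61M

Either way, the disagreement is about which of these two numbers counts as "the budget", not about the arithmetic itself.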
Of course their anti-deep learning "most senior scientist" hasn't heard of DeepSeek, lol.
[1] https://democracyforsale.substack.com/p/revealed-starmer-ai-...