Before the post-ChatGPT boom, we used to talk about "catastrophic forgetting"...
Make sure the new training dataset is "large" by augmenting it with general data (think of it as a sample of the original dataset), use PEFT techniques (freezing most weights => less risk), and use regularization (e.g. elastic weight consolidation), as sketched below.
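To make the elastic weight consolidation idea concrete, here is a minimal sketch of the penalty term in PyTorch. It assumes you have already saved the pre-fine-tuning weights (old_params) and a diagonal Fisher estimate computed on the original data (fisher); both names are illustrative, not a library API.

    import torch

    def ewc_penalty(model, old_params, fisher, lam=0.1):
        # Quadratic pull back toward the pre-fine-tuning weights, scaled by a
        # diagonal Fisher estimate, so parameters the old task relied on move less.
        penalty = 0.0
        for name, p in model.named_parameters():
            if name in fisher:
                penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return lam * penalty

    # Inside the fine-tuning loop (task_loss is whatever your new objective is):
    #   loss = task_loss + ewc_penalty(model, old_params, fisher)
    #   loss.backward()

The penalty makes it expensive to move the parameters the original task depended on most, which is exactly what limits the forgetting.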
Fine-tuning is fine, but it will be more expensive than you think and should be led by more experienced ML engineers. You probably don't need to fine-tune models anyway.
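And if you do go down the fine-tuning route, a PEFT setup keeps almost all weights frozen. A minimal sketch, assuming the Hugging Face peft library and gpt2 purely as a placeholder base model:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # The base model's weights stay frozen; only the small LoRA adapter
    # matrices are trained on the new data.
    model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the parameters

Because the original weights are untouched, the risk of catastrophic forgetting (and of wrecking the base model) is much lower, and you can simply discard the adapter if the experiment doesn't pan out.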
The only valid argument I see is that some brutalist buildings are historically important, and even if we don't like them today we should keep some of them for future generations to visit and see what we tried, and disliked.
But using the environment as an excuse is silly. There are always things that can be done about that: recycle the concrete, rebuild to reduce car dependency and improve energy efficiency, choose more sustainable materials (like wood) that can easily be replaced in the future (instead of more concrete), etc.
I understand wanting to leave a few of them as warnings to future generations, but I think the surviving WWII-era bunkers and flak towers in mainland Europe capture the same basic style and are enough of a representative sample.
> And comparatively the French train network is excellent
I don't think we're talking about the same country here.
Are you a regular user?
"Comparatively"?
Sure, if you compare it to the US train network or to Uganda's... "comparatively", you can always find something to compare to that makes you look excellent.
> I don't think we're talking about the same country here.
> Are you a regular user?
Not OP, but I am. The French railway system suffers from a few faults (like the fact that it's Paris first, or not enough capacity on some routes, especially around popular times like Friday afternoon), but is otherwise extremely good. There's very fast coverage between all big cities aligned with Paris (so Lille - Lyon - Marseille is super fast, mostly because they are destinations linked to Paris); Marseille - Bordeaux is slow because it isn't aligned with Paris and has to pass through bad terrain in the middle of nowhere.
In terms of the % of the population covered by regional or high-speed rail, or hell, low-cost low- or high-speed rail, it's among the best in the world. Its only serious challengers are Spain, Italy, Japan, and China.
ML has been around for decades, DL for more than a decade.
In 2019, I had to explain to executives that 95% of AI projects fail (based on some other survey); the #1 reason is bad or missing data and #2 is misaligned internal processes. I probably still have the slides somewhere.
One project I worked on was impossible because the data was so bad that after cleaning we went from 4M rows to 10k usable rows across 4 languages. We could have salvaged a lot more if we had restricted the use case, but then the benefits of the project would no longer have been interesting. The internal sponsor understood the problem and gave up. Instead, they decided to train everyone on how to improve data entry and quality! Within just 6 months I could see the data was indeed getting better.
But I had to leave that company; the IT department was too toxic.
So I think the author is right. According to Scale, we'd have gone from 95% failures to 95% successes in just 4-5 years, purely thanks to LLMs? That is of course ridiculous, given that the problem was never poor models.
Started a career in ML/AI years before ChatGPT changed everything.
At the time, we only used the term AI if we were referring to more than just machine/deep learning techniques to create models or research something (think operations research, Monte Carlo simulations, etc.).
But that was already starting to change.
I think startups and others will realise that to make a product successful you need clean data and data engineers; the rest will follow. Fundamentals first.
All the startups trying to sell "AI" to traditional industries: good luck!
I've worked as an AI engineer for a big insurance company and as a contractor with a bank, and oh gosh!
I bet the old guard will refuse to change course and new companies using the new tools will displace the old ones. Like what is happening with online banks and similar. Like what happened with low-cost airlines.
I actually think the example of banks is a good one, because what's happened in places with a competitive banking landscape is just that the big players have upped their game and the benefit of the challenger banks has diminished, with them struggling to become profitable.
Monzo is the biggest new player in the UK and it's not making much of a profit. Revolut doesn't have a banking license because it can't comply with the regulatory requirements. Starling has taken a much more conservative path and is being led by an ex-Barclays person, but even it is being investigated by the FCA for having poor controls around financial crime. All of those giving loans have an unacceptably high % of defaults from an investor perspective.
I think hot new upstarts will find that a combination of intricate regulation, physical reality, and institutional inertia will only allow them to make a tiny dent over the years.