AllenNLP started before transformers, so it provided high-level abstractions for experimenting with model architectures, which is where much of NLP research was happening at the time. Transformers definitely changed the playing field, as they became the basis for most models!
I'll give you specific examples where AllenNLP overdid it while HuggingFace did better just by keeping things simple.
Vocabulary class. HuggingFace just used a Python dictionary. I can't think of one person who said they needed a higher-level abstraction. Turns out a Python dictionary is picklable and saving it to a text file is one line of code, while the AbstractSingletonProxyVocabulary is neither, and no one wants to deal with it in the first place.
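To make that concrete, a rough sketch (the tokens and filenames here are made up, just illustrating the shape of the thing):

    import pickle

    # A plain-dict vocabulary, roughly the shape HF used (tokens are made up)
    vocab = {"[PAD]": 0, "[UNK]": 1, "hello": 2, "world": 3}

    # Picklable out of the box
    with open("vocab.pkl", "wb") as f:
        pickle.dump(vocab, f)

    # And saving to a text file really is one line (one token per line, line number = id)
    open("vocab.txt", "w").write("\n".join(vocab))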
Tokenizer class. HuggingFace just used a Python dictionary to map between strings and integers. I can't think of one person frustrated by it. It's printable, picklable, and everything in between; people can fiddle with it. And boy, where do I even start on AllenNLP's overengineering of Tokenizers.
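For example (the model name is just an illustration), the whole round trip is strings, integers, and plain dicts:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    encoded = tokenizer("hello world")
    print(encoded["input_ids"])    # e.g. [101, 7592, 2088, 102]
    print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
    print(tokenizer.get_vocab())   # a plain dict of str -> int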
Trainer class vs. HuggingFace example scripts. The scripts are just much more readable, tweakable, debuggable, etc. HF didn't bother with any AbstractBaseTrainer-class BS.
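The whole idea is something like this sketch (placeholder model and data, not any actual HF script); every line is right there to tweak or step through in a debugger:

    import torch
    from torch.utils.data import DataLoader

    def train(model, dataset, epochs=3, lr=1e-4):
        loader = DataLoader(dataset, batch_size=32, shuffle=True)
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        for _ in range(epochs):
            for batch in loader:
                optimizer.zero_grad()
                # HF models return an object with .loss when labels are in the batch
                loss = model(**batch).loss
                loss.backward()
                optimizer.step()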
It just shows they never understood the playing field.
- First, I don't think anyone thought AllenNLP was a good choice for high-performance production systems. Again, HuggingFace clearly understood the problem and built its fast tokenizers in Rust.
- A math, physics, linguistics, or even CS PhD student who knows the basics of coding would prefer bare-bones scripts. They just want to hack something together and focus on research. Writing good code is not their objective.
AllenNLP was written for research, not for production. Many of the design choices reflect that.
As far as the vocabulary goes, a lot of AllenNLP components are about experimenting with ways to turn text into vectors. Constructing the vocabulary is part of that. When pre-trained transformers became a thing, this wasn't needed anymore. That's part of why we decided to deprecate the library: very few people experiment with how to construct vocabularies anymore, so we didn't want to keep living with the complexity.
Hugging Face's APIs really aren't that great; I hear lots of people complain about them. All HF did was make transformers very accessible and shareable, with a neat UI.
"We're not in a recession, we're in the early stages of a depression. A depression is a self-feeding liquidity crisis - it's a cash-flow squeeze that occurs when the economy turns down, inventories are being sold, borrowings increased, and liquidity reduced."
- Raymond T. Dalio, New York Times, Jun 27, 1982
"For decades the numbers have told us that each time the capacity utilization figure moves over 85 percent, the inflation rate rises, and also that each inflation cycle tends to peak at a higher level than the preceding one. During this inflation cycle, we expect capacity utilization to cross the 85 percent line by early 1985, when the CPI should be running at an annual rate of 7.8 percent and well on its way to a cyclical peak of about 11.5 percent sometime in 1986."
- Raymond T. Dalio, New York Times, Jun 17, 1984
If you google "ray dalio 1980" you'll see he's talked about that being his biggest mistake and a massive learning opportunity [1, 2, 3]. He even dedicated a chapter to it in his book (chapter 3) [4]. It's sort of hard to fault someone for being wrong 40 years ago and conclude they have nothing new to contribute.
I suspect we'd all be completely screwed if we were wrong, never learned from it, and just continued on. No idea if he's right or wrong, but I guess time will tell. What you're implying here, though, is that because someone was wrong at one point in time, learned from it, evolved, and changed their way of thinking, all their future work is somehow tainted. That seems pretty strange.
I'm not impressed with his story about what he learned from it. He basically says he learned that sometimes he can be wrong. No indication of whether he's learned when he's more likely to be wrong or right.
If you must simp for billionaires, Jim Simons, Bill Gates, and the Collison brothers are somewhat more interesting people IMO.
Since you mentioned him: Jim Simons gave a really good interview too, and it was pretty cool to hear his story [1]. Worth a watch if you haven't seen it and you're into RenTec.
While it's sleazy not to be forthcoming, I have a feeling that private competition is, in general, better than the US government. Sometimes it's hard to create the right private marketplaces, and healthcare is one of those areas.
She was at the Institute for Advanced Study in Princeton. It's -the- most elite intellectual society. The top 1% of the 1% of academics have to compete for admission. If that doesn't wash away ill-formed biases, I don't know what can! What she accomplished is no joke!! Huge respect, tbh.