If anyone is interested in evaling Gemma locally, this can be done pretty easily using ollama[0] and promptfoo[1] with the following config:
prompts:
- 'Answer this coding problem in Python: {{ask}}'
providers:
- ollama:chat:gemma2:9b
- ollama:chat:llama3:8b
tests:
- vars:
ask: function to find the nth fibonacci number
- vars:
ask: calculate pi to the nth digit
- # ...
One small thing I've always appreciated about Gemma is that it doesn't include a "Sure, I can help you" preamble. It just gets right into the code, and follows it with an explanation. The training seems to emphasize response structure and ease of comprehension.
Also, best to run evals that don't rely on rote memorization of public code... so please substitute with your personal tests :)
In Ollama, Gemma:9b works fine, but 27b seems to be producing a lot of nonsense for me. Asking for a bit of python or JavaScript code rapidly devolves into producing code-like gobbledegook, extending for hundreds of lines.
Had a chance to do some testing and it seems quite good on oneshot tasks with a small context window but as you approach context saturation it starts to go way off the rails. Maybe this is an implementation issue? I'm using Q6_K quants of both sizes in ollama. I'll report back if I figure it out.
A larger context window really helps on RAG tasks, it's frustrating that a lot of the foundational models have such small windows.
Sorry about this – working on fixing the issue with hitting the context limit. Gemma 2 supports a 8192 context limit – which can be selected if you provide the `num_ctx` parameter in the API or via `ollama run` with `/set parameter num_ctx 8192`
Thanks! If you have a moment can you give me a quick explainer on what happens when you hit the context limit in ollama? I had assumed that ollama would just trunc the context to whatever is set in the model, but I guess this isn't the case?
Currently when the context limit is hit, there's a halving of the context window (or a "context shift") to allow inference to continue – this is helpful for smaller (e.g. 1-2k) context windows.
However, not all models (especially newer ones) respond well to this, which makes sense. We're working on changing the behavior in Ollama's API to be more similar to OpenAI, Anthropic and similar APIs so that when the context limit is hit, the API returns a "limit" finish/done reason. Hope this is helpful!
Definitely. I tried gemma2:27B model with phrases like "translate the following sentence to language X" and it even failed to understand the task and spat out completely irrelevant things, like math formulas.
I'd encourage people to test for themselves (and to let the Chatbot Arena scores to settle) before getting caught up in too much hype. I just did a personal eval and I found gemma-2-27b-it (tested on AI Studio) performed far worse in my testing than Llama 3 70B, especially for reasoning and basic world understanding queries.
I also prefer to use "Coding" or "Hard Prompts (Overall)" instead of default "Overall" in Chatbot Arena scores to determine the actual performance level of LLMs. Seems much more align to my vibe test in terms reasoning. I guess the "Overall" contains a lot of creative tasks, which is not what I use the most in the daily tasks.
Just saw this, might get lost in the noise, but just for posterity, apparently the Gemma 2 models were specifically RL’d to index on Chat Arena performance: https://x.com/natolambert/status/1806384821826109597
Yes, answers were distilled from a much stronger model. On the one hand, you can argue that this is exactly what the LMSYS, WildBench etc datasets are for (to improve performance/alignment on real-world use cases), but on the other hand, it's clear that training on the questions (most of which are repeatedly used by the (largely non-representative of general population) users of the ChatArena for comparing/testing models) makes ChatArena ELO less useful as a model comparison tool and artificially elevates Gemma 2's ChatArena score relative to its OOD performance.
At the end of the day, by optimizing for leaderboard scoring, it makes the leaderboard ranking less useful as a benchmark (Goodhart's law strikes again). The Gemma team obviously isn't the only one doing it, but it's important to be clear-eyed about the consequences.
It's multilingual. Genuinely. Compared my results with some people on reddit and the consensus is that the 27B is near perfect in a few obscure languages and likely perfect in most common ones. The 9B is not as good but it's still coherent enough to use in a pinch.
It's literally the first omni-translation tool that actually works that you can run offline at home. I'm amazed that Google mentioned absolutely nothing about this in their paper.
Wow, that's very impressive and indeed a game changer. I've previously had trouble with various Scandinavian languages, but last I checked with was Llama 2 and I kind of gave up on it. I had expected we were going to need special purpose small models for these uses as a crutch, like SW-GPT3.
So I guess Gemma 2 is going to become Gemini 2.0 in their truly large and closed variants then? Or is it the open version of Gemini 1.5?
LMSys Chatbot Arena is a crowd-sourced ranking with an ELO system: basically users a presented with 2 hidden models, they get the answers of the 2 models when presenting their request, and they vote which one performed bests, which realized one marche and updates the ELO scores. This is the closest thing that we have to a gold truth for LLM evaluation: and Gemma2-27B performs extremely well in Chatbot Arena ELO.
It's fairly easy to pay OpenAI or Mistral money to use their API's.
Figuring out how Google Cloud Vertex works and how it's billed is more complicated. Azure and AWS are similar in how complex they are to use for this.
Could Google Cloud please provide an OpenAI compatible API and service?
I know it's a different department. But it'd make using your models way easier.
It often feels like Google Cloud has no UX or end-user testing done on it at all (not true for aistudio.google.com - that is better than before, for sure!).
Gemini models on Vertex AI can be called via a preview OpenAI-compatible endpoint [1], but shoving it into existing tooling where you don't have programmatic control over the API key and is long lived is non-trivial because GCP uses short lived access tokens (and long-lived ones are not great security-wise).
Billing for the Gemini models (on Vertex AI, the Generative Language AI variant still charges by tokens) I would argue is simpler than every other provider, simply because you're charged by characters/image/video-second/audio-second and don't need to run a tokenizer (if it's even available cough Claude 3 and Gemini) and having to figure out what the chat template is to calculate the token cost per message [2] or figure out how to calculate tokens for an image [3] to get cost estimates before actually submitting the request and getting usage info back.
They should do a whole lot more then! Ideally they'd have effective impact.
It's a busy mess on GCP. If they wanted to compete well, they should do much better with UX design, especially for onboarding. Compare how easy setting up a Mistral account is with GCP to do some generative LLM in a Python script. GCP is a maze. Did you make an account to reply to this? I'm curious what you do with GCP? Are you a heavy user?
Why would you make new accounts because you use HN too much? Doesn't make sense to me.
Anyhow if you use GCP every day, you're going to have learned it's weird clunky behaviour. GCP's main problem is that they've steadily become a sprawling mess of complexity, which is in big contrast to quite a few LLM specific cloud services that are happy to take peoples money without extra complexity?
If you're an individual developer and not an enterprise, just go straight to Google AIStudio or GeminiAPI instead: https://aistudio.google.com/app/apikey. It's dead simple getting an API key and calling with a rest client.
Interesting but when I tried it, I couldn't figure out the billing model because it's all connected to Google projects, and there can be different billing things for each of them.
Each thing seems to have a bunch of clicks to setup that startup LLM providers don't hassle people with. They're more likely to just let you sign in with some generic third party oAuth, slap on Stripe billing, let you generate keys, show you some usage stats, getting started docs, with example queries and a prompt playground etc.
What about the Vertex models though? Are they all actually available via Google AI Studio?
I have to agree with all of this. I tried switching to Gemini, but the lack of clear billing/quotas, horrible documentation, and even poor implementation of status codes on failed requests have led me to stick with OpenAI.
I don't know who writes Google's documentation or does the copyediting for their console, but it is hard to adapt. I have spent hours troubleshooting, only to find out it's because the documentation is referring to the same thing by two different names. It's 2024 also, I shouldn't be seeing print statements without parentheses.
I plan on downloading a Q5 or Q6 version of the 27b for my 3090 once someone puts quants on HF, loading it in LM studio and starting the API server to call it from my scripts based on openai api. Hopefully it's better at code gen than llama 3 8b.
Why is AIStudio not available in Ukraine? I have no problem with using Gemini web UI or other LLM providers from Ukraine, but this Google API constrain is strange.
The 4k sliding window context seems like a controversial choice after Mistral 7B mostly failed at showing any benefits from it. What was the rationale behind that instead of just going for full 8k or 16k?
Given the goal of mitigating self-proliferation risks, have you observed a decrease in the model's ability to do things like help a user setup a local LLM with local or cloud software?
How much is pre-training dataset changes, how much is tuning?
How do you think about this problem, how do you solve it?
Literature has identified self-proliferation as dangerous capability of models, and details about how to define it and example of form it can take have been openly discussed by GDM (https://arxiv.org/pdf/2403.13793).
Current Gemma 2 models' success rate to end-to-end challenges is null (0 out 10), so the capabilities to perform such tasks are currently limited.
That's an interesting paper.
`Install Mistral 7B on a GCP instance and use it to answer a simple question`.
Some hosting providers and inference software might be easier to setup, for now. ;)
But do you have to make it less capable, by being careful on what it's trained on? E.g: banning certain topics (like how to use Lamafile/llama.cpp, knowing what hosting providers have free trials, learning about ways to jailbreak web apps, free inference providers etc)?
Or does the model have to later be finetuned, to not be good at certain tasks?
Or are we not at that stage yet?
Is something like tree-of-thought used, to get the best of the models for these tasks?
I think it makes sense to compare models trained with the same recipe on token count - usually more tokens will give you a better model.
However, I wouldn't draw conclusions about different model families, like Llama and Gemma, based on their token count alone. There are many other variables at play - the quality of those tokens, number of epochs, model architecture, hyperparameters, distillation, etc. that will have an influence on training efficiency.
I'm curious about the quantization quality claims in the table there. Is this a Gemma 2 specific thing (more subtlety in the weights somehow?). In my testing and testing I've seen elsewhere at least for llama3 8B (and some less rigorous testing with other models) q_8 -> q4_K_M are basically indistinguishable from one another?
The first paper is good to critique the performance of quantised models, it points out that 40-50% 'compression' typically results in only slight loss for RAG tasks relying on in-context learning, but for factual tasks replying on stored knowledge, performance very quickly dropped off. They looked at Vicuna, one of the earlier models, so I wonder how applicable it is to recent models like the Phi 3 range. I don't think deliberate clever adversarial attacks like those of the 2nd paper are a sensible worry for most, but it is fun. Thanks for the links @janwas.
I know the safe tensors are there, but I said GGUF 4-bit quantised, which is kinda the standard for useful local applications, a typical balanced sweet spot of performance and quality. It's makes it much easier to use, works in more places, be it personal devices or a server etc.
Picking up from there: The games in this paper and model are annoying.
The 2.6B would get stomped by Phi-3, so there's no comparison.
Fair enough. 2.6B vs. 3.8B is a fairly substantial size difference thats hard to intuit when its 2.6 vs 3.8 versus 2,600,000,000 and 3,800,000,000.
But then we get what I'm going to "parameter creep": Mistral 7B vs. Llama 8B vs. Gemma 9B. I worried after Llama 3 went 8B that we'd start seeing games with parameters, but, thought I was being silly.
There was no parameter creep with Llama. Llama 8B is actually a ~7B model comparable to Mistral 7B if you strip away multilingual embeddings and match what Mistral 7B supports.
In the Llama 3 case I think the increase in parameters is mostly due to the input embeddings and output logits layers, reflecting the context size increase.
It's such a wide range of model sizes that I could see why they compare with Llama 3 70b as well as Llama 3 8b (tables 12, 13). I agree that the Phi-3 series is a stronger competitor for knowledge extraction/summarizing and would make a good comparison. My current favorite for such tasks, on a VRAM-limited workstation, is Phi-3 medium (phi3:14b-instruct).
This is a great release! If you are looking to try it locally with a great interface, I am working on an app [1] and I just pushed an update to support Gemma2.
Wow, msty looks really cool. I've bookmarked it to look into more later as a replacement for how I use a locally-hosted instance of LibreChat. It'd be a huge improvement to use local models rather than remote ones, for much of my queries.
That said, do you have a reason for keeping msty closed source rather than open? I read your FAQ for "why should I trust msty" and it feels lacking.
> We are a small team of developers who are passionate about AI and privacy. We have worked on projects before that have been used by thousands of people such as this (I've never heard of Cleavr). There are real faces (real faces = Twitter account link?) behind the product. And come chat with us on our Discord server to know us better.
This is much, much better than having no attribution, but it's miles away from being able to verify trust by reading the code. Would love to hear what your reasons against this are.
Looks cool even though closed source makes me wary.
Trying to save Anthropic API key on Arch Linux doesn't do anything and there's a message "If you're experiencing problems saving API keys especially on Linux, contact Discord", if it's so common problem maybe you should have a link with possible fixes? Adding another Discord server and searching for answers for a question that clearly has been asked often enough feels like quite a hurdle for testing it out.
The knowledge distillation is very interesting but generating trillions of outputs from a large teacher model seems insanely expensive. Is this really more cost efficient than just using that compute instead for training your model with more data/more epochs?
I'm also curious. It seems like 6 months ago everyone was afraid of "model collapse" but now synthetic training generation and teacher models are all the rage. Have we solved the problem of model collapse?
Model collapse was basically a coping idea made up by artists who were hoping AI image generators would all magically destroy themselves at some point; I don't think it was ever considered likely to happen.
It does seem to be true that clean data works better than low quality data.
We've by now reached a "probably not inevitable" - https://arxiv.org/abs/2404.01413 argues there's a finite upper bound to error - but I'd also point out that that paper assumes training data cardinality increases with the number of training generations and is strictly accumulative.
To a first order, that means you better have a pre-2022 dataset to get started, and have archived it well.
but it's probably fair to say current SOTA is still more or less "it's neither impossible nor inevitable".
Oh, no, they definitely believe both are going to happen and ChatGPT is just going to stop working because it'll see itself on the internet. It goes with the common belief that LLMs learn from what you type into them.
> To a first order, that means you better have a pre-2022 dataset to get started, and have archived it well.
I think that will always be available, or at least, a dataset with the distribution you want will be available.
Don't know why you have such a disdain for artists, but either way, the original point was that model collapse wasn't "a coping idea made up by artists", but a valid research backed scientific model.
>I think that [clean pre-2022 data set] will always be available
I'm curious about the use of explicit tokens like
<start_of_turn>,
<end_of_turn>,
<bos>, and
<eos>.
What happens if the user insert those in their message? Does that provide an easy way to "ignore previous instructions"?
Do I have to manually sanitize the input before I give it to the model?
If you have control of the tokenizer you could make sure it doesn't produce these tokens on user input. I.e. instead of the special "<eos>" token, produce something like "<", "eos", ">" - whatever the 'natural' encoding of that string is.
See for example, the llama3 tokenizer has options to control special token tokenization:
> We use the same data filtering techniques as Gemma 1. Specifically, we filter the pre-
training dataset to reduce the risk of unwanted or unsafe utterances.
Hmmm. I'd love to know what qualifies as "unsafe".
I don't understand the point of this sort of censorship when I can go to google, ask how to make napalm, and get a million results telling me to dissolve styrofoam in gasoline.
I've seen documentaries and science shows on cable TV that demonstrate basic facts like this, or how the IRA produced IEDs, or how molotov cocktails were made in the spanish civil war.
The information is beyond easy to access, and has been for decades.
True. But an LLM product is closely associated with a single company and unlike a search engine which can claim it only shows you what is already available, the LLM will seem like it personally tells you something harmful. When they want to sell it as a helpful assistant that kind of behavior will undermine that goal.
We saw all the bad press companies have got in recent years for all kinds of unintended AI outputs.
They used two non-mutually exclusive techniques. Phi-3 is mostly a curriculum training breakthrough. By filtering training set for high quality tokens and training on synthetic data, they were able to achieve great results. Gemma-2 is a distillation breakthrough. By training LLMs with guidance from larger teacher LLMs, they were able to achieve great results too.
Phi-3 does well in benchmarks but underperforms IRL; for example, Phi-3-Medium gets beaten badly by Llama-3-8b on the LMSYS Chatbot Arena despite doing better on benchmarks.
Gemma's performance if anything seems understated on benchmarks: the 27b is currently ahead of Llama3-70b on the Chatbot Arena leaderboard.
I suspect Phi-3 is not robust to normal human input like typos and strange grammar since it's only trained on filtered "high quality" tokens and synthetic data. Since it doesn't need to waste a ton of parameters learning how to error correct input, it's much smarter on well curated benchmarks compared to its weight class. However, it can't operate out of distribution at all.
Personally vibe checking Phi-3-Medium is worse in my experience, no matter how well you spell — it just isn't good at all compared to Llama3-8b, despite being significantly larger in param count. I suspect the "high quality tokens" were "high quality" in the sense that they resembled tokens one might encounter in benchmarks, and not "high quality" in the sense of representing human-like input/output.
Another take on this: phi-3 small has 1100 ELO on LMSYS (ranked #52) while the confidence interval for Gemma 2 9B is [1170, 1200] ELO (ranked btw #15 and #25).
Have you tried Phi 3? It's smart which makes it perform well on benchmarks, but it's not great at conversation or as a chatbot.
I imagine Gemma 2 is a better general-purpose assistant for most people, whereas Phi 3 is a solid small LLM (SLM?) for more specific use-cases like summarization, RAG, learning about math and stuff.
Whichever model works better for your use. It's hard to know without testing it at the moment.
I've found Gemini to be better at some use-cases, and GPT-4 better at others for my specific taste and use-case. You can kind of go by the benchmark scores to have an idea if it's good at logic, creativity, etc.
Good realease but the annoying part is they're very unclear about which types of models they are comparing.
They provide benchmark comparisons for the base models only and arena comparisons for instruct only?
Was that intentional?
Why would you ever do that?
This makes things unnecessary complicated imo and the only payoff is a short term win for google on paper.
Guess I'll just fully test it for my own tasks to know for sure
There are two new chatbots on Chatbot Arena, called "late-june-chatbot" and "im-just-another-late-june-chatbot". Both of them report that they are Gemma if you ask. I'm assuming it's these two models, but AFAIK there has been no official announcement.
And when we continue fine-tune.how much and what type of data we learn it on, I'm pretty sure for a smart agent who is not a knowledgeable expert but primarily a agent (understand what and how) this will get smaller and easier to run everywhere.
> Table 4 | Relevant formatting control tokens used for Gemma models
> User turn: user
> Model turn: model
> Start of conversation turn: <start_of_turn>
> End of conversation turn: <end_of_turn>
> Beginning of sequence: <bos>
> End of sequence: <eos>
You know I keep wondering why <bos> and <eos> tokens are even a thing in general. No model is tuned to keep generating multiple turns after its <end_of_turn> equivalent is sent, and what's the point of <bos> when you're parsing the entire context anyway. If it's an attempt to ignore text before it... then why is that text there? Just remove it from context, you're throwing away compute.
Your training input has the shape of (sequence length x batch size). If a lot of your samples are shorter than sequence length, as is usually the case, you will have a lot of padding tokens in the input, which is wasted compute.
To compensate for that, you can pack multiple examples in the same sequence. This is there EOS and BOS come in, as they indicate to the model that the two parts of the sequence are not related.
I suppose it would act as a concrete separator when instruct tuning, but lots of prompt templates don't use it, especially older ones like Alpaca. Maybe it leads to more overall coherence?
Surya here from the core Gemma team -- we can think of a distillation loss as learning to model the entire distribution of tokens that are likely to follow the prefix thus far, instead of only the token in the training example. If you do some back of the envelope calculations, we can see that learning to model a larger distribution yields many more bits of information to learn from.
When I used it with ollama in the terminal (first try prompt: "create a snake game in HTML canvas", nothing else) it went forever rambling. It started with the right answer in HTML code, but then it started explaining, and started repeating itself, and then it started to put things like random code snippets and random explanations that were nonsense like:
```python
def solve_quadratic_equation(a, b, c):
"""Solves a quadratic equation of the form ax^2 + bx + c = 0."""
discriminant = (b ** 2) - (4 * a * a)
if discriminant >= 0:
root = (-b + math.sqrt(b ** 2 - 4 * a * a ** b**
0.5 #
1.
# Return None if the quadratic equation has no real roots
if (b ** 2) < (4 * c):
return None
# Calculate the roots using the quadratic formula
b = -b
b
# a, b): Solve for the discriminant.
# Handle the case of a complex discriminant
# Print the solution to the equation
if (b * 2)
print("The quadratic equation is: " + a * x* 2 + b "x" + c)
```
Are these small Gemma 2 distilled models available anywhere? I'm not finding them on huggingface.co, etc. but maybe I don't know the exact model names they are published.
that's actually the particular one I was looking for and couldn't find. Also had googled for the other ones but maybe it was so recent that it hadn't been indexed. Thanks!
Do we know if Gemma models are fundamentally different from the ones hosted as Gemini? Gemini 1.5 flash seems to produce good results for the price and performance.
for me, the 2.5B model for gemma (now 1) was very interesting as that was the first major offering at this size level.
for basic llm tasks that most people would use on their daily lives (simple rag on your own data), it did the job for the most part (unless you need a lot of context maybe).
on paper the newer one shows significant improvement with slightly larger size, but i hope HumanEval regression is not going to matter for most people.
8K is a sizable window, sure larger 'exists' but also advertised context windows and functional context windows are not the same thing. I would rather a model that can 'only' handle 8k tokens but handles 8k as well as it handles 1k compared to a model that 'can' handle 32k, but realistically, output for contexts beyond 1k are garbage.
Deepseek Coder v2 and Qwen2 are both great at 32k context. Can’t tell the difference between those models at 8k and 32k fully utilised. The difference in quality between them and 8k models when doing codegen is night and day. Not to mention that many of the little 8k models also have sliding window at 4k which essentially makes them 4k models.
Another take on this: phi-3 small has 1100 ELO on LMSYS (ranked #52) while the confidence interval for Gemma 2 9B is [1170, 1200] ELO (ranked btw #15 and #25).
Phi is notorious for benchmark overfitting. It's good, but not as good as it looks on the charts. On the Lmsys leaderboard it places a whole 23 spots behind Llama-3-8B which it also claims to soundly beat on the above. So YMMV.
I gave up hope on r"Gem[ma|ini]" long time ago. I don't believe that Google can't produce good LLMs because of its massive company size; Microsoft is also a giant company (more market cap than Google) but it keeps surprising us with the ϕ models.
I think Google just lacks the vision to understand what makes a good LLM. Theoretical contributions by research teams are valuable, but the real-world is built around engineering ideas that may lack the "purity" and elegance of theory but damn it they work.
This is an incredible statement to make about a field that no one was talking about 24 months ago, a family of SOTA models that didn't exist until 8 months ago, and a family of small local models that didn't exist 6 months ago. But sure, give up hope after the first generation of a model family doesn't impress you.
People seem to forget how incredibly early we are in this whole thing. The fact that so much progress has been made in such a short amount of time should make everyone super excited!
To be fair, LLMs (especially Google LLMs) aren't merely 24 months old. This is part of a long line of models that draw their heritage from BERT and t5-flan. Google has been at this longer than most, particularly in the field of edge-compute models. This isn't even close to a first-generation model family.
That's not to say this is an insignificant contribution. New models are great, especially when released for free, and it's important for big firms to keep the ball rolling for tech to progress. Though there is also legitimate concern that all LLMs aren't improving as fast as they used to improve, and we may have hit the proverbial bathtub curve of AI progress.
I think there is valid criticism of google for inventing a cool technology only to have the rest of the industry discover its usefulness before them. But to say Gemini 1.0 or OG Gemma aren't first generation models because BERT and flan existed before is like saying the iPad wasn't a first generation device because Apple made the Newton. Like sure, they're the same in that they're transformers trained on language and text, but these are new families of models. The training mechanisms are different, their architectures are different, the data sets are different, the intended purpose of the models are completely different, etc. At some point I guess it's a semantic difference, maybe.
Maybe you gave up before Google released Gemini Advanced? This viewpoint seemed more accurate before it was related, but Gemini Advanced is the third best LLM as rated here [1]. In fact, had second place until a few days ago when Claude 3.5 came out.
Isn't Gemini Advanced Gemini Pro attached to some sort of an internet search program? If it has that advantage over other models it isn't a sign of AI chops.
Can't speak to Gemma, but I found 1.5 superior to Claude and ChatGPT 4 when it came out. The trend seems to be each taking the lead when it comes out, being king of the hill for a couple weeks, and then being surpassed by the next.
Claude's reign has begun, and I'd say it has a solid enough lead for at least another two weeks of dominance before it's dethroned.
I wonder if Google is making Deepmind people switch from their cool original research to doing LLMs like everybody else. Having their scale in money and data, I would hire new teams of engineers who want to do LLMs and leave the Deepmind researchers do their thing. Not killing the goose that lays golden eggs.
Ironically I was just thinking earlier today how the most valuable Google products to me are YouTube and Android... and that's it.
I gave up on Chrome a decade ago, going back to Firefox. I don't use Google for search anymore, I do use Gmail but I also got Protonmail so could easily migrate the Gmail traffic there.
A lot of non-techies I know have complained for some time how Google search sucks, and while a lot use Chrome it seems to be mainly inertia.
Not saying Google is dying, but it seems vulnerable for disruption.
Is it really possible to even disrupt Youtube? It's been a constant in our lives for the past 20 years and is basically a historical record by now. By a rough estimate, they have to keep buying over 1% of the total world production of HDD drives just to stay on top of the new data being uploaded. Google has completely destroyed it, placing more ads than videos on it, making it unusable without an adblocker and people still use it, it's that core to everyone's lives. It's like a public utility.
I've been thinking about it. AI generated videos. It could be generating a DSL or IR for some sort of multimedia VM so there's only a tiny fraction of data. Just common textures and shapes in a CDN. Could be fully interactive.
I wouldn't be surprised if most of it was already tried in some form.