Why are you ruling out the possibility that training on the material confers an advantage when the data is also presented in the context window, even if that advantage alone isn't strong enough to pass the test when the data is absent from the context?
- electricity can be generated many different ways
- many generation sources aren't dependent on resupply. Spiking the price of lithium doesn't prevent existing batteries from working, it only makes new ones more expensive. Solar, wind, hydro and, to a lesser extent, nuclear run without continuous fuel deliveries.
- electricity supply is heavily regulated, for better or worse.
What does matter, though, is whether you can affect the distribution with your vote. I would guess it is harder to influence oil companies, which are often headquartered in countries other than your own.
It's not that it currently is immune, it's that there's a compelling story for the energy sector to become immune from it as we reduce our dependence on fossil fuels.
With which model are you getting 100k-token responses? The models are limited and cannot produce output that long (4k max). I have already made this point three times in my previous messages. GPT-4 is too slow via the API to be useful.
As expected, you do not know anything about its API limits. The maximum is 4,096 tokens with any GPT-4 model. I am getting tired of HN users bs'ing at every opportunity.
1. Your original wording, "getting a response _for_ n tokens", does not parse as "getting a response containing n tokens" to me.
2. Clearly, _you_ don't know the API, as you can get output up to the total context length of any of the GPT-4 32k models. I've received output up to 16k tokens from gpt-4-32k-0613.
3. I am currently violating my own principle of avoiding correcting stupid people on the Internet, which is a Sisyphean task. At least make the best of what I am communicating to you here.
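The arithmetic behind point 2 can be sketched in a few lines. This is an illustration, not the API itself: the context sizes are the publicly documented ones for those model variants, and the bound assumes no separate per-request output cap.

```python
# Output tokens available = total context window - tokens consumed by the prompt.
# Context window sizes as publicly documented for these GPT-4 variants.
CONTEXT_WINDOW = {
    "gpt-4": 8192,
    "gpt-4-32k-0613": 32768,
}

def max_output_tokens(model: str, prompt_tokens: int) -> int:
    """Upper bound on completion length for a given prompt size."""
    return CONTEXT_WINDOW[model] - prompt_tokens

# A 16k-token prompt to the 32k model still leaves room for a 16k-token
# response, consistent with the output sizes described above.
print(max_output_tokens("gpt-4-32k-0613", 16384))
```

The point is that "maximum output" is a property of the context window shared between prompt and completion, not a flat 4k ceiling across every GPT-4 model.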
You bullsh*t, saying "I dunno, I get a response back for 100k tokens regularly" about a model that doesn't even exist, and then you talk about a non-public 32k API. Stop lying. It's just the internet; you don't need to lie to people. Get a life.
Because if the client specifically requests GPT-3.5, but is silently being served something else instead, the client will rely on having GPT-3.5 capabilities without them actually being available, which is a recipe for breakage.
"Insecure mode" sounds a lot better than "default mode". If I didn't know what any of the options meant, I'd feel safe using BlockCipherMode.Default, but I wouldn't feel safe using BlockCipherMode.Insecure.
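The naming argument can be made concrete with a toy enum. This is a hypothetical API sketched purely for illustration (the `BlockCipherMode` name comes from the comment above, not from any real library):

```python
from enum import Enum

class BlockCipherMode(Enum):
    """Toy illustration of the naming argument; not a real crypto API."""
    GCM = "gcm"  # authenticated encryption; a reasonable modern choice
    CBC = "cbc"  # usable with care, but easy to misuse
    ECB = "ecb"  # leaks plaintext structure; almost never what you want

# Naming the footgun honestly: a caller who types BlockCipherMode.INSECURE
# knows exactly what they opted into.
INSECURE = BlockCipherMode.ECB

# By contrast, an alias like the following would let the same unsafe choice
# slip past code review, because the name reads as a blessed default:
# DEFAULT = BlockCipherMode.ECB   # reads as safe, is not
```

The API surface is identical either way; only the name changes whether the danger is visible at the call site.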
You are just describing a (good) recommendation algorithm. TikTok's is infamously good at figuring out your niches and catering to your taste by looking at your minute interactions with the content it shows you. My TikTok "for you" page has absolutely 0 mainstream politics, rage bait, or any other "normie" topics. It's mostly technically fascinating stuff and good absurd humor that caters to my absurd taste.
Optimizing for engagement is not inherently bad, nor does it necessarily result in socially suboptimal outcomes. My TikTok feed is very engaging without having to resort to triggering my anger.
A recommendation algorithm that only sticks to a handful of given topics (rage bait and furry porn?) is not a very good one.
> I'm not very interested in this "right to repair" stuff - it revolves around demanding modular parts for quick and easy replacement. People who are actually close to the metal, who actually get their hands dirty, have been repairing those devices forever.
It also involves demanding access to proprietary ICs and information like schematics. A component level repair might become impossible if you don't have access to a vendor-specific replacement for some burnt battery charging IC. You can't really fix up a silicon die like you can a dead pixel.