
Why are you ruling out the possibility that training on the material may confer an advantage when the data is presented, even if the advantage may not be strong enough to pass the test without the data present in the context window?

What makes you think that the energy sector overall is immune to this while oil isn't?


Not immune, but more resistant:

- electricity can be generated many different ways

- many generation sources aren't dependent on resupply. Spiking the price of lithium doesn't prevent existing batteries from working; it only makes new ones more expensive. Solar, wind, hydro, and (to a lesser extent) nuclear are all examples.

- electricity supply is heavily regulated, for better or worse.


While all of this is true, there is a monopoly on distribution. Doesn’t matter where it comes from if one entity owns the pipes.


What does matter, though, is whether you can affect the distribution with your vote. I would guess it is harder to affect oil companies, which are often located in countries other than your own.


Because there are multiple ways to generate electricity, including at-home options for many people.

There’s also an interesting factor in the timing of grid load: peak usage is typically mid-afternoon, while the lowest usage is overnight.

There’s essentially excess capacity during the period when most people charge their cars.


More and smaller players in electricity production.

Half the oil and gas production comes from an official cartel, so price fixing is kind of in the oil sector's DNA.


More opportunity for substitutions when your fuel is electrons instead of a specific blend of fossil fuels processed in a specific way.

That said, for-profit electric monopolies are indeed a scourge.


It's not that the energy sector currently is immune to this; it's that there's a compelling story for it to become immune as we reduce our dependence on fossil fuels.


Edge has this implemented in a pretty decent way.


Brave as well.


> But if I'm a massive landlord with multiple units, I have the same advantage?

The entire point is that monopolization confers the same advantage that this scheme does.


What monopolization? The company in question is not the only one offering such services, and it certainly doesn't have a monopoly on data.


Or, put your files under html/ in your repo, and push to /var/wwwroot (using the common /var/wwwroot/html setup)
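A minimal sketch of the kind of setup that comment seems to describe, as a server-side post-receive hook written in Python. The bare-repo path, branch name, and hook wiring are all assumptions here, not something the comment specifies.

    #!/usr/bin/env python3
    # Hypothetical post-receive hook on the server (assumed layout):
    # pushing to the bare repo checks the repo's html/ directory out
    # under /var/wwwroot, giving the common /var/wwwroot/html setup.
    import subprocess

    GIT_DIR = "/var/wwwroot/site.git"   # assumed bare repo you push to
    WORK_TREE = "/var/wwwroot"          # repo's html/ lands at /var/wwwroot/html

    subprocess.run(
        ["git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
         "checkout", "main", "--", "html"],
        check=True,
    )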


I dunno, I get a response back for 100k tokens regularly. What is the point you are trying to make?


With which model are you getting 100k-token responses? The models are limited and can't produce output that long (4k max). The point I am trying to make is written three times in my previous messages: GPT-4 is too slow over the API to be useful.


As expected, you do not know anything about its API limits. The maximum output is 4096 tokens with any GPT-4 model. I am getting tired of HN users bs'ing at any given opportunity.


1. Your original wording, "getting a response _for_ n tokens", does not parse as "getting a response containing n tokens" to me.

2. Clearly, _you_ don't know the API, as you can get output up to the total context length of any of the GPT-4 32k models. I've received output up to 16k tokens from gpt-4-32k-0613 (see the sketch after this list).

3. I am currently violating my own principle of avoiding correcting stupid people on the Internet, which is a Sisyphean task. At least make the best of what I am communicating to you here.
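For reference, a minimal sketch of the kind of call point 2 describes, assuming the pre-1.0 openai Python client and access to the limited-availability gpt-4-32k-0613 model; the prompt and the 16k output budget are only illustrative.

    import openai

    openai.api_key = "sk-..."  # placeholder

    # max_tokens caps the *output*; on the 32k-context models it can be set
    # well above 4096, as long as prompt + output fit in the context window.
    response = openai.ChatCompletion.create(
        model="gpt-4-32k-0613",
        messages=[{"role": "user", "content": "Write a chapter-by-chapter "
                   "outline of a very long book on distributed systems."}],
        max_tokens=16000,
    )

    print(response["usage"]["completion_tokens"])  # can exceed 4096 here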


You might want to see a specialist about your behavioral issues. Also, gpt-4-32k is not open to the public.


I've had access for many, many months now.


Skill issue.


You bullsh*t, saying "I dunno, I get a response back for 100k tokens regularly" about a model that doesn't even exist, then you talk about a non-public 32k API. Stop lying. It is just the internet; you don't need to lie to people. Get a life.


> which is a really bad idea.

Why?


Because if the client specifically requests GPT-3.5, but is silently being served something else instead, the client will rely on having GPT-3.5 capabilities without them actually being available, which is a recipe for breakage.
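A small illustration of that failure mode, assuming the pre-1.0 openai client pointed at a hypothetical self-hosted, OpenAI-compatible endpoint (the URL is made up). The client can compare the model field the server reports against what it asked for, but a server that substitutes silently can just as easily echo the requested name back, which is exactly why silent substitution is risky.

    import openai

    # Hypothetical local endpoint that speaks the OpenAI chat-completions API.
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "unused-locally"

    requested = "gpt-3.5-turbo"
    response = openai.ChatCompletion.create(
        model=requested,
        messages=[{"role": "user", "content": "ping"}],
    )

    served = response["model"]  # the model the server claims handled the request
    if served != requested:
        # An honest server exposes the substitution; a dishonest one won't,
        # and then the client's GPT-3.5 assumptions break with no warning.
        raise RuntimeError(f"asked for {requested!r}, got {served!r}")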


You do understand that the client will be written by the same people setting up the inference server?


Because it's lying to the client?


And why is that bad?

Your mindset would mean that Windows would have next to no backwards compatibility, for instance.


"Insecure mode" sounds a lot better than "default mode". If I didn't know what any of the options meant, I'd feel safe using BlockCipherMode.Default, but I wouldn't feel safe using BlockCipherMode.Insecure.


You are just describing a (good) recommendation algorithm. TikTok's is infamously good at figuring out your niches and catering to your taste by looking at your minute interactions with the content it shows you. My TikTok "for you" page has absolutely 0 mainstream politics, rage bait, or any other "normie" topics. It's mostly technically fascinating stuff and good absurd humor that caters to my absurd taste.

Optimizing for engagement is not inherently bad, nor does it necessarily result in socially suboptimal outcomes. My TikTok feed is very engaging without having to resort to triggering my anger.

A recommendation algorithm that only sticks to a handful of given topics (rage bait and furry porn?) is not a very good one.
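A toy sketch (entirely made up, and nothing like TikTok's proprietary system in scale or sophistication) of what "optimizing for engagement from minute interactions" can mean mechanically: rank candidates by per-user affinity built from fine-grained signals rather than by global popularity or outrage potential.

    from dataclasses import dataclass

    @dataclass
    class Interaction:
        topic: str
        watch_fraction: float   # share of the clip actually watched, 0.0-1.0
        liked: bool
        shared: bool

    def topic_affinity(history: list[Interaction]) -> dict[str, float]:
        """Aggregate fine-grained interaction signals into per-topic scores."""
        scores: dict[str, float] = {}
        for it in history:
            signal = it.watch_fraction + 0.5 * it.liked + 0.8 * it.shared
            scores[it.topic] = scores.get(it.topic, 0.0) + signal
        return scores

    def rank(candidate_topics: list[str], history: list[Interaction]) -> list[str]:
        affinity = topic_affinity(history)
        return sorted(candidate_topics, key=lambda t: affinity.get(t, 0.0), reverse=True)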


Ah. I don’t use TikTok because I strongly dislike video content.


> I'm not very interested in this "right to repair" stuff - it revolves around demanding modular parts for quick and easy replacement. People who are actually close to the metal, who actually get their hands dirty are repairing those devices since ever.

It also involves demanding access to proprietary ICs and information like schematics. A component level repair might become impossible if you don't have access to a vendor-specific replacement for some burnt battery charging IC. You can't really fix up a silicon die like you can a dead pixel.

