Cheaper. Every month or so I revisit the models in use and check whether each can be replaced by the cheapest, smallest model that still handles the same task. Some people do fine-tuning to achieve this too.
Those bigger windows come with lovely surcharges on compute, latency, and prompt complexity, so "just wait for more tokens" is a nice fantasy that melts the moment someone has to pay the bill. If your use case is tiny or your budget is infinite, fine, but for everyone else the "make the window bigger" crowd sounds like they're budgeting by credit card. Quality still falls off near the edge.
Context windows getting bigger doesn't make the economics go away. Tokens still cost money. 50K tokens of schemas costs the same in dollars at a 1M context as at a 200K context; you just have more room left over.
The pattern with every resource expansion is the same: usage scales to fill it. Bigger windows mean more integrations connected, not leaner ones. Progressive disclosure is cheaper at any window size.
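To make the arithmetic concrete, here's a minimal sketch. The per-token price is a hypothetical placeholder, not any provider's real rate; the point is just that input cost depends on tokens sent, not on the model's maximum window.

```python
# Hypothetical pricing to illustrate the point: input cost is charged
# per token actually sent, and the max window size never enters the formula.
PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000  # assumed: $3 per 1M input tokens

def prompt_cost(tokens_sent: int) -> float:
    """Dollar cost of a prompt; the model's context limit doesn't appear anywhere."""
    return tokens_sent * PRICE_PER_INPUT_TOKEN

# 50K tokens of schemas costs the same whether the window is 200K or 1M:
cost = prompt_cost(50_000)
print(f"${cost:.2f}")  # $0.15 either way; the bigger window just leaves more room
```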
It helps with cost, agreed. But caching doesn't fix the other two problems.
1) Models get worse at reasoning as context fills up, cached or not. Right?
2) Usage expansion problem still holds. Cheaper context means teams connect more services, not fewer. You cache 50K tokens of schemas today, then it's 200K tomorrow because you can "afford" it now. The bloat scales with the budget...
Caching makes MCP more viable. It doesn't make loading 43 tool definitions for a task that uses two of them a good architecture.
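A minimal sketch of the alternative being argued for here, i.e. loading only the tool definitions a task actually needs. The registry, tool names, and schema sizes are made up for illustration; real MCP servers expose tools through their own protocol.

```python
# Hypothetical tool registry: dozens of tools are *known* to the system,
# but only the definitions the current task references get put in the prompt.
FULL_REGISTRY = {f"tool_{i}": f"<schema for tool_{i}, ~1K tokens>" for i in range(43)}

def load_definitions(task_tools: list[str]) -> dict[str, str]:
    """Pull in only the schemas the current task references."""
    return {name: FULL_REGISTRY[name] for name in task_tools}

# A task that uses two tools ships two schemas, not 43:
prompt_tools = load_definitions(["tool_3", "tool_7"])
print(len(prompt_tools))  # 2, versus len(FULL_REGISTRY) == 43
```

Caching makes the 43-schema prompt cheaper to resend; it doesn't make it smaller, and the model still has to reason past the 41 irrelevant ones.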
Regardless of the intentions behind this, it is very, very illegal (if you are in the US) and could open the company up to serious liability down the line.
It seems like the previous browser antitrust ruling in Europe was in 2010, with the browser ballot screen being required until 2014.
Given the scale of the fine Microsoft received after breaking that agreement by dropping the ballot screen so users never saw it (over €500M, I believe), it seems strange to me that Microsoft is so willing to get back into making it harder to switch browsers, given the ample precedent.
Is user data gathered from the web browser under default settings really valuable enough to justify the risk?
Obviously €500M wasn't enough of a deterrent; antitrust authorities need to be able to issue exponentially increasing fines to produce either compliance from, or the dissolution of, the repeat offender.
The only question up for debate is what the base of the exponential function should be.
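For example, with a purely illustrative base of 2 and the €500M figure above as the starting point, the schedule would look like this (both numbers are assumptions for the sake of the sketch, not a proposal):

```python
# Purely illustrative: the fine doubles with each repeat offence.
BASE = 2            # the debatable base of the exponential
FIRST_FINE = 500    # in millions of EUR, matching the figure above

def fine(offence_number: int) -> int:
    """Fine in €M for the n-th offence, counting from 1."""
    return FIRST_FINE * BASE ** (offence_number - 1)

print([fine(n) for n in range(1, 5)])  # [500, 1000, 2000, 4000]
```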
Honestly, I don't expect antitrust to trigger here. With Apple's treatment of browsers on their phones being considered perfectly acceptable somehow (and the same can be said about pretty much every type of app store category, to be honest), I don't think there are any government bodies that even care about this type of antitrust anymore.
The fight for free access to app stores has replaced the fight for browser bundling.
Yes, it would be a definite red flag if there were no clear CEO in the company.
However, nothing says these titles have to map in any way to actual responsibilities inside the company. Just make someone CEO on paper, at random if you need to, and disregard the title internally.
I fully agree with you. I'm an absolute skeptic about anything Facebook, Twitter, and other multi-billion-dollar tech companies support, especially because if the whole project of a decentralized social network succeeded and really became mainstream, these same companies would lose A LOT of power.
It would be easier to compete with them, and they would lose their role as gatekeepers of acceptable debate on a huge part of the internet. So there's deep scepticism on my part, a feeling that "they will find some way to taint and contaminate this whole project... somehow". Twitter would never actually embrace this idea, certainly not voluntarily.
It looks like the language is really maturing. The changes are all fairly minor optimizations and improvements to the stdlib.
Impressed with the Go team’s discipline for simplicity and stability.
Between the Go 1 release in 2012 and the 1.13 release in 2019, this has been the development model of the language. So it's nothing new in Go's evolution.