Dunno why I can't reply to your other comment explaining what you mean but hot damn. False evaluation of a cheap painting to save on taxes? That's mental.
Different tax loopholes depending on region etc, but basically like this:
I’m a billionaire earning $100M this year.
I owe $40M as taxes for that. (Too much!)
I find a dumb banana painting by a starving artist.
I buy it from him for $1000.
I wait 6 months.
I go to a museum to get it appraised by “professionals”.
I pay the professional appraiser’s wife $50K as a gift.
The appraiser says the painting is now worth $30M!
Wow that’s awesome, I have such a keen eye for art.
You know what, I’m gonna donate this painting to a museum instead because I’m such a patron of art and culture.
Oh, look at that, I get a tax rebate for the value of my donated painting ($30M).
Now I only have to pay $40M - $30M = $10M in taxes on my $100M income.
There’s more nuance to it in practice, but that’s the gist of it.
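The steps above reduce to a quick calculation. Here's a sketch using the figures from the example, where the appraised value is treated as a rebate against the tax bill itself (which, as the replies point out, is not how deductions actually work):

```python
# All figures are the hypothetical numbers from the example above.
income = 100_000_000
tax_rate = 0.40
tax_owed = income * tax_rate          # $40M owed

purchase_price = 1_000                # what the painting actually cost
appraised_value = 30_000_000          # the friendly appraisal

# The scam as described: subtract the appraised value from the tax bill itself.
tax_after_donation = tax_owed - appraised_value
print(f"${tax_after_donation / 1e6:.0f}M")  # $10M
```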
-----
Edit: For some reason I can't reply to the comments below so I'm gonna do it here.
> That wouldn't explain the price here, since in your scam the whole idea is to buy cheap and donate dear, not buy for $139M.
Now we're getting into the details, but it's very suspicious for an appraiser to appraise a work of art from an unknown artist at millions. It's not that suspicious if they take Van Gogh's Starry Night, previously appraised at $500M, and now value it at $1B. This way the deca-billionaire still gets to save on his taxes while the appraiser avoids suspicion.
> As far as I know, that's not how taxes work. You can't get a rebate for the amount of taxes you would have paid, you can get a deduction for the amount of money you made.
There are a lot of loopholes in the complicated tax system for the ultra-wealthy, not for us. This video (still a simple explanation in an animated way) covers a few more of them: https://www.youtube.com/watch?v=dHy07B-UHkE
As far as I know, that's not how taxes work. You can't get a rebate for the amount of taxes you would have paid, you can get a deduction for the amount of money you made.
So:
You made $100M and owe $40M in taxes.
Your painting is worth $30M! You have such a keen eye for art.
Now you made $130M and owe $50M in taxes.
You donate the painting, you're back at having made $100M and owing $40M.
Otherwise we'd all choose not to pay tax and donate our tax money to charitable institutions instead.
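The distinction this comment is drawing, sketched with the same numbers (a rebate comes off the tax bill; a deduction comes off the income being taxed):

```python
income = 100_000_000
rate = 0.40
painting = 30_000_000

# "Rebate" model (the scam as described): the appraised value
# comes straight off the tax bill.
rebate_model = income * rate - painting                   # $10M in taxes

# Deduction model (this comment's argument): if the painting's
# value counts as income, donating it just cancels that out.
deduction_model = (income + painting - painting) * rate   # $40M in taxes
```

Under the deduction model you're back where you started, which is this comment's point.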
I’m pretty sure he’s right about how taxes work. There’s no moment where the value of the painting is realized, but you are allowed to deduct the FMV if you make enough and if the donation goes to the charity’s exempt use (which it will if it’s a museum or whatever).
So if you buy a painting for a dollar and wait a year, then the next year you make $3M and the painting is now worth $1M. If you donate it, your AGI is reduced to $3M − min($1M, 30% of income) = $3M − $900K.
You don’t count the appreciation of the painting as income. You don’t even count it as LTCG if you don’t sell it.
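The FMV deduction with the 30%-of-AGI cap described above, as a sketch (the rule and the 30% figure are taken from the comment; this is an illustration, not tax advice):

```python
def agi_after_donation(income, fmv, cap_fraction=0.30):
    """Deduct the donated item's fair market value, capped at a fraction of income."""
    deduction = min(fmv, cap_fraction * income)
    return income - deduction

# Buy the painting for $1, wait a year; you make $3M and it's now worth $1M.
# The deduction is capped at 30% of $3M = $900K, so AGI drops to $2.1M.
agi = agi_after_donation(3_000_000, 1_000_000)
print(agi)  # 2100000.0
```

Note that the $1 basis and the appreciation never show up anywhere, which is the point being made: the gain is never realized as income.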
I think it also applies to stock option awards. When the startup I was at was acquired some people were talking about it.
Yes, there are lots of “loopholes” available if you are willing to commit tax fraud! But that’s something anyone can do, it’s not particularly harder to lie about the value of charitable donations if you’re not ultra-wealthy.
Correct. Fraud is fraud, loopholes are loopholes. One is legal, the other is not.
Or put another way - a loophole in law/regulations is found, then the law/regulation gets changed to close the loophole. If it were not legal this change would not be necessary - you would just prosecute.
Interesting that they didn’t post any benchmark results (lmarena, Artificial Analysis, etc.). I would’ve thought they’d be testing it behind the scenes the same way they did with Gemini 3.
Large tech molochs don't care about any name, it seems. Their power and weight make the name point to them. Search for "Amazon" and find that, oh, the 7th Wonder of Nature, the "Amazon rainforest", is ranked second after some random Big Tech company run by a guy named Jeff. The "lungs of the earth" vs. cheap package delivery and AWS dashboards.
I mean, yeah. What percentage of searches for "Amazon" in today's world do you think are not about acquiring cheap shit very quickly? I would expect the tech company to be a better answer than most when someone searches for Amazon. Searching for "the amazon" gives the expected results, as that's how the rainforest is more commonly referred to. So it does seem like your search query as performed was just a bad search.
Amazon does not need to pay Google for this. There is no world where Google puts an organic result about the rainforest in the top spot, because it's not what most users are looking for.
At most there might be a world where Google puts someone else's ad above the organic results.
Well, we also know Google isn't trying to help the user leave Google's site as quickly as possible, because they get more ad money when the user clicks on a few pages or does a few searches before finding what they want.
You'll probably find a Google expense for the same value of Amazon services, so that no money ever changes hands, but both companies' valuations are inflated.
The website does label some relatively harmless elements as ‘dark patterns’, but out of your ‘really bad ones’, I don’t see ‘Competition’ as being a dark pattern.
Competition is a fundamental part of Play. Humans (and other animals) are social creatures and learn via playing and competing with others.
Can people play games by themselves? Yes.
Is competitive play bad or a dark pattern? Not at all.
Counterpoint: Humans visualize stuff in their minds before trying new things or when learning new concepts. An AI system with an LLM-based language center and a world model to visualize during training and inference would help it overlap more of human intelligence. Also, it looks cool.
Edit: After seeing your edited (longer) comment, I have no idea what you’re talking about.
Those two words only describe AI models, as they are models. A "world model" is worse than those two words, as it is oxymoronic.
The idea that words and space are being conflated as a formula for spatial intelligence is fundamentally absurd as our relationships to space have no resolution, both within any one language and worse, between them, as language is arbitrary. Language and thought are entirely separate forms. Aphasia has proved this since 2016.
AI developers have to face the music, these impoverished gimmicks aren't even magic, they are bunk. And debunkable once compared to the sciences of the brain.
Is that a more convoluted way to say that a next thing predictor can't exhibit complex behavior? Aka the stochastic parrot argument. Or that one modality can't be a good enough proxy for the other. If so, you probably have to pay more attention to the interpretability research.
But actually most people should start with strong definitions. Consciousness, intelligence, and other adjacent terms have never been defined rigorously enough, even if a ton of philosophers think otherwise. These discussions always dance around ill-defined terms.
Neurobio is built from the base units of consciousness outwards, not from intuited interpretation. E.g., prediction has nothing inherently to do with consciousness directly. That’s a process imposed on brains post hoc.
Easily refute prediction or error prediction as fundamental.
The path to intelligence or consciousness isn’t mimicry of interpretation.
In terms of strong definitions, start at the base, coders: oscillation, dynamics, topologies, sharp wave ripples, and I would say roughly 60 more strongly defined material units and processes. This reverse intuition is going nowhere, and it’s pseudoscientific nonsense for social media timeline filling.
I started writing the counterargument, but somehow I think you have a weird idea of what both interpretability in ML and neurobiology are, especially given how you're dealing in such absolutes with things nobody has a full picture of.
Seems reductive. Some applications require higher context length or fast tokens/s. Consider it a multidimensional Pareto frontier you can optimize for.
It's not just that some absolutely require it, but a lot of applications hugely benefit from more context. A large part of LLM engineering for real world problems revolves around structuring the context and selectively providing the information needed while filtering out unneeded stuff. If you can just dump data into it without preprocessing, it saves a huge amount of development time.
Depending on the application, I think “without preprocessing” is a huge assumption here.
LLMs typically do a terrible job of weighting poor quality context vs high quality context and filling an XL context with unstructured junk and expecting it to solve this for you is unlikely to end well.
In my own experience you quickly run into jarring tangents or “ghosts” of unrelated ideas that start to shape the main thread of consciousness and resist steering attempts.
It depends to the extent I already mentioned, but in the end more context always wins, in my experience. If, for example, you want to provide a technical assistant, it works much better if you can put an entire set of service manuals into the context instead of trying to assemble relevant pieces via RAG.