I think it's partly because Google and Apple controlled the contactless bits of the phones for many years, so non-OS-makers like WeChat and Alipay made use of the open technology of QR codes. Theoretically you could now build equivalent things with NFC on those platforms, but on the other hand, being able to set up a “POS” with nothing more than a printer does have an appeal to it: even if writable NFC stickers cost 5 cents, you still have to go buy some.
I think there is also something about how easy it is for a business to adopt a QR code: you just print one out instead of going out and buying a whole payment terminal.
QR payments in China were already pervasive before contactless payments became pervasive in the West. And as others say: not all phones supported NFC at the time. Remember iBeacons on the iPhone 5? WeChat and Alipay were already everywhere by then.
Having been there recently, it's about as annoying as taking out your phone to pay for anything else. Some systems also support NFC now, though QR is still the most common. It also helps that their QR scanning and transaction processing are really fast; in my experience many transactions were as fast as, or even faster than, paying with a card.
(Also, if you want to talk about annoying payments, don't get me started on how insane it is that the US still expects me to hand over a physical card at most restaurants for them to take over to their register... sorry, I just can't help but get annoyed by this lol)
Also in adjacent countries like Vietnam etc., where even ragtag street food vendors have a QR code sticker on their stall/cart.
It's so common that people pay without even talking or confirming; I've seen customers just take their phone out, point at the QR, and walk away, and the shopkeeper says nothing. I'm assuming the shopkeeper gets a notification on their phone and trusts regular customers,
but how easy would it be to secretly place your own bank account's QR code on top of a shop's QR? Shopkeepers who wait for a confirmation notification will catch it immediately, but by then the customer has already paid the attacker, and the transaction can't just be reversed. Repeat it in several places, and a thief could snatch quite a few payments before the parasite stickers are all taken down.
I agree. The intent is sacred. This should be the default, and CLIs should make use of the available history (and while preserving inputs, you need to preserve outputs too, because context matters).
The idea of having to repeat something to your computer is ridiculous.
If you're committed to Anthropic at an organizational level, there's no point in having a 'standard' AGENTS.md with a CLAUDE.md layer on top. Just commit the CLAUDE.md.
How do you envision the short-term and long-term target usage of it?
And do you guys communicate with other browsers when doing something like this, to try to settle on something common? I don't mean through the W3C but practically; it's a small world after all.
I can't speak for "you guys" anymore, as I'm retired, but from my personal perspective/recollection:
The target usage for the prompt API is anything that would benefit from the general capabilities of a language model, and can't be encompassed by the more-specific APIs for summarization/writing/rewriting. Realistic use cases currently are things like sentiment analysis, keyword extraction, etc. I have a number of ideas on how to integrate it into my current retirement project around Japanese flashcards, e.g. generating example sentences. If the small (~10 GiB) model class keeps getting smarter, the class of things possible on-device in this way gets larger and larger over time.
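To make the flashcard idea concrete, here's a minimal sketch, assuming roughly the LanguageModel shape from the public explainer; the API is still experimental and its names have shifted between Chrome releases, so treat this as illustrative rather than a stable surface:

```ts
// Sketch only: the prompt API is experimental, and this ambient declaration is an
// assumption based on the explainer, not a guaranteed browser interface.
declare const LanguageModel: {
  availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
  create(options?: {
    initialPrompts?: { role: "system" | "user" | "assistant"; content: string }[];
  }): Promise<{ prompt(input: string): Promise<string> }>;
};

// Hypothetical flashcard helper: generate an example sentence for a vocabulary word.
async function exampleSentence(word: string): Promise<string | null> {
  // The on-device model may be unavailable or still downloading.
  if ((await LanguageModel.availability()) === "unavailable") return null;

  // A session carries its own context; the system prompt steers the small model.
  const session = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "You write one short, natural Japanese example sentence." },
    ],
  });

  // Free-form prompting covers whatever the summarizer/writer/rewriter APIs don't.
  return session.prompt(`Write an example sentence using 「${word}」.`);
}
```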
But overall, yeah, the goal with the prompt API, as with all web APIs, is to put something out there for discussion as early as possible, and get input from the broad community, especially including other browsers, to see if it's something that they are interested in collaborating on. https://www.chromium.org/blink/guidelines/web-platform-chang... (which I also wrote) goes into how the Chromium project thinks about such collaboration in general.
Code changes are cheaper to make now and kind of more expensive to verify.
So you can still contribute; you just don't need to provide the code, only the issue.
Which isn't as bad as it sounds: it kind of feels bad to rewrite somebody's code right away when it is theoretically correct, but opinionated codebases seem to work very well if the maintainer's opinions are sane.
And what if the maintainer doesn't understand something about how the exploit works? Also, code changes aren't cheaper; it's just that you can watch YouTube instead of putting in effort now. But time still passes, and that costs the same. Reviewing the code is far more expensive now, though, since the LLM won't use libraries.
PS The economics of software haven't really changed; it's just that people (executives) wish they had changed. They misunderstood the economics of software before LLMs and they misunderstand the economics of software now.
PPS The only people that LLMs benefit are the segment of devs who are lazy.
It also lets me see all the relevant associations easily when revealing a card in the built-in SRS; you add cards to the SRS as you browse, so they are related to what you already know or are currently exploring.
Mind you, all the visible data is collected from different reputable sources. When you click "explain" there's a clearly marked LLM explanation, but my explanation-generation pipeline pushed every generated explanation through 5 different models, including all the top Chinese-first ones, for verification, and on average it took a few iterations back and forth to iron out any information that could potentially mislead the learner.
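For illustration, that loop looks roughly like the sketch below. It's hypothetical: the actual models, prompts, and call signatures aren't public here, so the generator and verifier callbacks are placeholders to be wired up to real LLM APIs.

```ts
// Placeholder result type from one verifier model's pass over an explanation.
type Verdict = { ok: boolean; issues: string[] };

// Hypothetical verify-and-regenerate loop: regenerate the explanation, feeding back
// the issues the verifier models raised, until every verifier signs off.
async function explainWithVerification(
  character: string,
  generate: (character: string, feedback: string[]) => Promise<string>,
  verifiers: ((explanation: string) => Promise<Verdict>)[], // e.g. 5 different models
  maxRounds = 4,
): Promise<string> {
  let feedback: string[] = [];
  for (let round = 0; round < maxRounds; round++) {
    // Regenerate with whatever issues were flagged in the previous round.
    const explanation = await generate(character, feedback);
    const verdicts = await Promise.all(verifiers.map((v) => v(explanation)));
    feedback = verdicts.flatMap((v) => v.issues);
    // Accept only once every verifier is satisfied; otherwise iterate again.
    if (verdicts.every((v) => v.ok)) return explanation;
  }
  throw new Error(`no clean explanation for ${character} after ${maxRounds} rounds`);
}
```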
This looks incredible and exactly like something I've been wanting. Is there the same amount of depth for the 9k+ characters? If this is open source, I'd love to build on it; I was wondering if OP had posted it on GitHub.
Only about 5K explanations for now; I'm still trying to polish the pipeline before covering more. Due to all the verification and the associated regenerations, the cost is quite high.
It's not open source, but it is completely free. Open-sourcing is on the table, but currently it would be additional work and a distraction. The licenses alone seem like a headache: from a quick poke, even when data is OK to use non-commercially, redistributing it may not be OK. So not any time soon, but if you are working on something similar I can at least share the detailed data sources; finding good ones was not easy, and LLMs integrate them fast.
Mostly the datasets, but who knows about the LLM stuff too, since I'm creating datasets with them[0]. Long story short, it's a potential headache, and I don't want that headache for now. Plus there are basically two options after open-sourcing: people don't use it (time and effort wasted), or people do use it and then it's a chore. But it's still on the table. I'm just currently not close to the table.
In my experience, saying "this is not X, it will not be used for Y" vastly increases the chances of it being classified as X. Anybody can write "this is authorized research". Instead use something like "evaluate security" / "verify security", "make sure this cannot be (...)", etc.
Of course these models are pretty smart so even Anthropic's simple instructions not to provide any exploits stick better and better.
Been using it daily for work and personal projects since release. The first few weeks were rough, but it's probably the best AI code editor out right now. That's largely due to the models just being superior, though.
Antigravity CLI or the Gemini one? When I tried the latter about 2 months ago it was shockingly bad, though I was a free user. I assume it's better if you're a paying customer?
It doesn't appear that there is an Antigravity CLI, so the latter. I'm using a paid account, though.
For the last few months I was using paid versions of CC, Codex and Gemini CLI, and found them more or less equivalent for my uses. I'm just building web apps though.
I mean, yes, but LLMs have been making me more cognitively active. I've learned how to do more stuff than I would have without them, and it's a decent multiplier, not some rounding error.
Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.