Unlikely. The reason GrapheneOS doesn't run on non-Pixels even today is that it depends on certain hardware features that most vendors (besides Google) lack.
Security theater; it has absolutely no use. If you can't trust your hardware not to actively listen to the microphone without your knowledge and permission, then what are you even doing with that device?!
I do trust my device. However, in specific circumstances where privacy may be critical, an additional protection might save me even from a state-sponsored attack.
And even then they still don't live up to their promises: it's still not open hardware. These devices carry a bunch of proprietary firmware and, more importantly, proprietary silicon.
That's just security theater. If you can't trust the very CPU/OS to use the camera/microphone only when the notification is on, then what are you even doing with that device?
Mr. Rich Guy sells me the personal device he used last year because he wants a shiny new phone, but there's the slightest chance he's a super evil genius? The government selling tampered phones on eBay, when they could just go directly to the vendors and put their backdoors into new phones/software?
Sorry for the light snark, but this attack vector seems way too complicated for too little benefit, unless you are a VIP being personally targeted.
Interesting project, but I believe the base assumption is already slightly wrong. Why do we assume that LLMs know what kind of language would benefit them? This information is not knowable without doing proper research, and even if such research exists, it would have to be part of the training data. Otherwise it's just hallucination.
I agree, it's mostly a silly whim taken too far. Too much time on my hands.
In particular, the whole stack-based thing looks questionable.
In fact, the very first answer from Gemini proposed an APL-like encoding of the primitives to save tokens, but when I started the implementation, Claude Code pushed back on that, saying it would need to keep some sane semantics around the keywords to be able to understand the programs.
The very strict verification story seems more plausible, and tracks with the rest of the comments here.
What has surprised me is that the language works at all; adding todo items to a web app written in a week-old language felt a bit eerie.
Have the LLMs generate tests that measure the “ease of use” and “effectiveness” of coding agents using the language.
Then have them use these tests to get data for their language design process.
They should also smoke test their own "meta process" here. E.g., write a toy language that should obviously be much worse for LLMs, and then verify that the effectiveness tests produce a result agreeing with that.
For many (most) types of objects, lifetime can be a runtime property just fine. For, e.g., a list, in Rust/C/C++ you have to make an explicit decision about how long it should be "alive", whereas a managed language's assumption that an object is alive as long as it's reachable is completely correct, and it has the benefit of adapting fluidly to future code changes, lessening maintenance costs.
Of course there are types where this is not true (file handles, connections, etc.), and managed languages usually don't have features as good as C++/Rust's (RAII) to deal with these.
You basically compose a description of the side effects and pass this value representing them to the main handler, which is special in that it can actually execute the side effects.
For the rest of the codebase this is simply an ordinary value you can pass on, store, etc.
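A minimal sketch of that idea, with all names (`Effect`, `greet`, `run`) invented for illustration: pure code builds a plain data value describing the effects, and only one handler function ever interprets it.

```rust
// Hypothetical sketch: side effects as plain data, executed only by a handler.
#[derive(Debug, PartialEq)]
enum Effect {
    Print(String),
    Log(String),
}

// Pure code just composes a description; nothing runs here.
fn greet(name: &str) -> Vec<Effect> {
    vec![
        Effect::Print(format!("Hello, {name}!")),
        Effect::Log(format!("greeted {name}")),
    ]
}

// The main handler is the only place the effects are actually performed
// (here stubbed to return strings so the behavior is observable).
fn run(effects: &[Effect]) -> Vec<String> {
    effects
        .iter()
        .map(|e| match e {
            Effect::Print(s) => format!("stdout: {s}"),
            Effect::Log(s) => format!("log: {s}"),
        })
        .collect()
}

fn main() {
    let program = greet("world"); // an ordinary value: store it, pass it on
    for line in run(&program) {
        println!("{line}");
    }
}
```

Until `run` is called, `program` is just data, so the rest of the codebase can inspect, transform, or discard it like any other value.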
> Lifetimes are a global property and LLMs are not particularly good at reasoning about them compared to local ones.
Huh? Lifetime analysis is a local analysis, the same as any other kind of type checking. The semantics may have global implications, but exposing them locally is the whole point of having dedicated syntax for it.
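To illustrate the point (a standard textbook-style example, not from the thread): the lifetime annotation on a function signature states the global relationship between inputs and output, so both callers and the function body can be checked locally against that one signature.

```rust
// The annotation 'a says locally: the returned reference lives no longer
// than either input. Callers are checked against this signature alone.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("hello");
    let result;
    {
        let b = String::from("hi");
        // Copying out before `b` drops satisfies the checker locally;
        // returning the reference itself out of this block would not.
        result = longest(&a, &b).to_string();
    }
    println!("{result}");
}
```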
> Lifetime analysis is a local analysis, same as any other kind of type checking
That's what the compiler is doing.
The developer (or LLM) is supposed to do the global reasoning so that what they end up writing down makes semantic sense.
Sure, throwing a bunch of variants at it and seeing what sticks is certainly an approach, but "lifetimes check out" only proves that the resulting code is memory safe, not that it actually makes sense.
I wouldn't think this applies to Motorola.