AI agents want your API keys. Don't give them your API keys.
I built agentgate - a proxy that lets AI read my GitHub, calendar, Bluesky, etc. But any writes get queued for me to approve first. I review them over coffee.
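The read-through / write-queue idea can be sketched roughly like this (class and field names are my own invention, not agentgate's actual implementation): reads pass straight through, while anything mutating is parked in a queue until a human approves it.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalProxy:
    """Toy model of a read-through / write-queue proxy (hypothetical API)."""
    pending: list = field(default_factory=list)

    def request(self, method: str, path: str, body=None):
        # Reads pass straight through to the upstream API.
        if method.upper() in ("GET", "HEAD"):
            return {"status": "forwarded", "path": path}
        # Writes are queued for later human review instead of executing.
        self.pending.append({"method": method, "path": path, "body": body})
        return {"status": "queued", "id": len(self.pending) - 1}

    def approve(self, queue_id: int):
        # After review, the queued write would be released upstream.
        return {"status": "approved", "request": self.pending[queue_id]}

proxy = ApprovalProxy()
print(proxy.request("GET", "/repos"))          # forwarded immediately
print(proxy.request("POST", "/repos/issues"))  # queued for review
print(proxy.approve(0))
```

The point of the design is that the agent never holds a credential that can write; it only ever talks to the proxy.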
Made an open source MCP (Model Context Protocol) server that lets an AI coding agent like Claude Code see its canvas output and receive errors and debug logs, for quickly iterating on a codebase.

Video here: https://www.youtube.com/watch?v=z2on3KelaH4
HA does have a REST API, but with the way PageNodes works you'd have to hardcode the HA password right into the PN workflow. Have you considered adding the equivalent of environment variables, which can be set in a PN account and used as placeholders in workflows?
That's a good question. Our storage is local IndexedDB, and the site is served over HTTPS, so no one should see your flow if you don't share it.
That said there's nothing stopping you from reaching out to another secure service or plugin before making requests.
Ok, so less "prototype in the browser, then offload to a server once it works", but more for local "app" type things? Interesting idea, has some limitations but also opens up tons of interactions that are harder for a server-based solution (webcam, ...)
Bingo! It's always evolving too. We do use a lot of experimental flags from the browser, this helps work with up and coming features as well for learning about new APIs very easily.
Of course this was more of a fun example of something you can do with the Page Nodes platform than an actual echo replacement. And a way of getting started connecting services.
Definitely move on to some dedicated hardware if you're serious about this sort of thing.
I'd also love other performance hints with regards to optimizing for games or other things that require heavy animation.
- Which layers/elements are being hardware-accelerated, and which aren't? Just as importantly, why or why not?
- How much VRAM is being used by which element/etc, and how much total RAM/VRAM am I using at a given moment?
The Chrome beta on ICS is great, but it's only accessible to <4% of all Android devices. ICS has been out for months, and the providers are completely dragging their feet on it.
Chrome Beta needs to be backported to at least 2.3
AFAIK it also doesn't make any difference to apps that use webviews. I mean, it's just as well: if you changed the underlying engine without notifying devs, insanity would ensue. But it would be good to be able to set a flag that says "use the decent engine".
"ICS has been out for months, and the providers are completely dragging their feet on it."
Can anyone explain why this is such a problem? Does Android not have a proper hardware abstraction layer or driver model that would allow OS updates immediately, as long as these layers remained compatible and a build existed for your processor family?
Windows has supported disparate hardware configurations for many, many years. Yet, generally, you can run out and buy the new version and install it. Why can't Android do this (albeit with a version for each processor)?
Many providers have customized the OS's look and feel and added their own apps and settings, including locking out tethering. One of the big advantages of 3.0 was that it was supposed to provide an easier way to separate the look and feel from the OS, so the providers didn't need to create a custom OS. This customization makes pushing out updates very difficult.
For example, I worked on an app to be included with the device by the OEM. The OEM had to roll a custom OS and kernel to handle their hardware and to replace/update the mail and calendar apps, because those are very tied to Google by default. We needed hooks into the mail and calendar apps in ways that Google doesn't support, so the OEM also had to add those hooks for us. At this point we were deep into territory unsupported by Google, though supported by the OEM. These things change unexpectedly with minor OS updates from Google, which makes them brittle to maintain. An update to a newer OS would be a hell of a lot of work.
All that said, if we could have based off of 3.0 instead of 2.3, many of these issues would not exist because of some abstraction layers added just for this purpose. I expect 4.0 to make that even nicer.
For one thing, last I looked, Chrome Beta required the GL_OES_EGL_image extension for off-thread texture upload, and had support for the ICS-only SurfaceTexture (although judging by the WebKit source that's ifdef'd out). GL_OES_EGL_image support is very spotty on pre-ICS devices.
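The kind of capability gating involved can be sketched like this: check whether the driver advertises the extension and fall back to the slow path when it doesn't. (Illustrative only; real code would query `glGetString(GL_EXTENSIONS)` through EGL/GLES bindings rather than take a Python string.)

```python
def supports_async_upload(extension_string: str) -> bool:
    """Return True if the driver advertises GL_OES_EGL_image,
    which the off-thread texture-upload path depends on."""
    return "GL_OES_EGL_image" in extension_string.split()

# A pre-ICS device missing the extension would take the slow path:
gingerbread = "GL_OES_compressed_ETC1_RGB8_texture GL_OES_depth24"
ics = gingerbread + " GL_OES_EGL_image"
print(supports_async_upload(gingerbread))  # False
print(supports_async_upload(ics))          # True
```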
If you're calling i18n and a11y "plugins", you're missing the point. These types of considerations are handled throughout the framework in a consistent manner.