This is not a product or an agent proposal. It’s an attempt to reframe deployed AI systems (feeds, copilots, assistants) as cybernetic systems embedded in human attention and behavior loops.
I’m interested in feedback from people working in ML, control theory, alignment, or platform governance on whether this framing is useful, flawed, or already well-covered elsewhere.
This is a web-based dashboard for real-time brain–computer interface research.
It connects to a backend via WebSocket, streams neural data (142 channels at 40 Hz), runs decoders (JavaScript or TensorFlow.js), and visualizes ground truth vs. predictions live.
The goal is to make it easy to prototype and compare neural decoders with immediate feedback.
You can bring your own decoders, or even generate one directly in the app via a prompt.
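To make the "bring your own decoder" part concrete, here is a rough sketch of what a hand-rolled decoder and its wiring could look like. The function shape, WebSocket URL, and message format below are my own assumptions for illustration; the app's actual decoder interface may differ.

  // Hypothetical decoder shape: one frame of binned spike counts in,
  // a predicted 2D cursor velocity [vx, vy] out.
  function linearDecoder(weights, bias) {
    return (spikes) => {
      const out = [bias[0], bias[1]];
      for (let c = 0; c < spikes.length; c++) {
        out[0] += weights[c][0] * spikes[c];
        out[1] += weights[c][1] * spikes[c];
      }
      return out;
    };
  }

  // Placeholder weights: 142 channels -> 2D velocity, all zeros until fitted.
  const decode = linearDecoder(
    Array.from({ length: 142 }, () => [0, 0]),
    [0, 0]
  );

  const ws = new WebSocket("ws://localhost:8080"); // URL is a guess
  ws.onmessage = (ev) => {
    const frame = JSON.parse(ev.data); // assumed shape: { channels: number[142] }
    const [vx, vy] = decode(frame.channels);
    // ...compare [vx, vy] against the ground-truth velocity in the live plot
  };

In practice the zero weights would be replaced by fitted ones, or the whole function swapped for a TensorFlow.js model wrapped in the same shape.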
I’ve been working on a research project exploring BCI decoding. I realized I lacked intuition for what 142 channels of motor cortex spikes actually “feel” like, so I built a tool to sonify them in real time.
It's built with WebAudio + WebSockets, with real-time MIDI mapping. The data comes from motor cortex recordings (142 channels, 40 Hz update rate) streamed from my other open source project, PhantomLink: https://github.com/yelabb/PhantomLink
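For a sense of how the sonification half can work, here is a minimal WebAudio sketch that maps a handful of channels to oscillator loudness. The stream URL, message shape, and channel-to-pitch mapping are assumptions of mine, and the actual tool maps channels to MIDI rather than driving raw oscillators.

  // Eight voices, one semitone apart; each channel's activity controls a gain.
  const ctx = new AudioContext();
  const voices = Array.from({ length: 8 }, (_, i) => {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = 220 * Math.pow(2, i / 12); // pitch mapping is arbitrary
    gain.gain.value = 0;
    osc.connect(gain).connect(ctx.destination);
    osc.start();
    return gain;
  });

  const ws = new WebSocket("ws://localhost:8080"); // stream URL assumed
  ws.onmessage = (ev) => {
    const frame = JSON.parse(ev.data); // assumed shape: { channels: number[142] }
    voices.forEach((gain, i) => {
      // Map each channel's spike count to loudness; normalization is arbitrary here.
      const level = Math.min(frame.channels[i] / 10, 1) * 0.1;
      gain.gain.setTargetAtTime(level, ctx.currentTime, 0.02);
    });
  };

Using setTargetAtTime rather than setting the gain directly smooths the 40 Hz updates so the audio doesn't click on every frame.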