AphantaZach's comments

Modern encoders (especially Opus) are indeed impressive at preserving the stereo image at high quality settings. If you are on a stable connection getting the full 192kbps+ stream, the phase error is likely negligible.

The issue is that we can't control the delivery.

Streaming platforms use adaptive bitrate. If a user's bandwidth dips on mobile, the player might switch to a lower tier where the encoder aggressively quantizes the Side channel (L-R) to save space.

Since the binaural effect relies entirely on that Side channel difference, I wanted to remove the variable entirely.

By generating it client-side with the Web Audio API, we get mathematical certainty regardless of the user's connection speed.
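For anyone curious, the core of the client-side approach is roughly this: two pure tones at slightly different frequencies, each hard-panned to one ear. The numbers below are illustrative, not the app's exact parameters:

    // Two sine oscillators whose frequency difference is the binaural beat.
    // Note: browsers require a user gesture before audio will actually start.
    const ctx = new AudioContext();

    function makeTone(freq, pan) {
      const osc = ctx.createOscillator();
      osc.type = 'sine';
      osc.frequency.value = freq;

      const panner = ctx.createStereoPanner();
      panner.pan.value = pan;            // -1 = hard left, +1 = hard right

      const gain = ctx.createGain();
      gain.gain.value = 0.2;             // leave headroom

      osc.connect(gain).connect(panner).connect(ctx.destination);
      return osc;
    }

    const left  = makeTone(200, -1);     // 200 Hz in the left ear
    const right = makeTone(210,  1);     // 210 Hz in the right ear
    left.start();
    right.start();                       // perceived beat: 210 - 200 = 10 Hz

Because the two tones never share a channel, there is no Side signal for a codec to throw away in the first place.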


I think if you pitched this as "binaural beats that always work regardless of network conditions" that would be better received. I don't think there's much of a scenario where streaming services aren't able to consistently deliver high enough quality for 3D positional audio. If there were, ASMR artists would be pushing much harder for alternate platforms.


You make a great point. 'Determinism > Bandwidth' is definitely the stronger argument.

The main difference with ASMR is that it uses multiple spatial cues (reverb, tone) which survive compression well.

Entrainment is more fragile. For a cognitive tool, I wanted to engineer that risk out of the system entirely.


OP here. I built this engine because I realized auditory entrainment (for focus and my own Aphantasia) is often compromised by standard streaming platforms.

Most codecs (MP3/AAC) use "Joint Stereo" (Mid/Side coding) to save bandwidth. Since binaural beats rely entirely on the subtle phase difference between the Left/Right channels, compression algorithms tend to treat that data as redundant and smooth it over.
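For illustration, the Mid/Side transform is just a sum and a difference, and the entire binaural cue sits in the Side term. A rough sketch, not any specific codec's implementation:

    // Joint stereo stores Mid/Side instead of Left/Right (per sample).
    function toMidSide(left, right) {
      return {
        mid:  (left + right) / 2,   // what both ears share
        side: (left - right) / 2,   // what differs between ears (the binaural cue)
      };
    }

    function toLeftRight(mid, side) {
      return { left: mid + side, right: mid - side };
    }

    const { mid, side } = toMidSide(0.50, 0.48);
    console.log(toLeftRight(mid, side));   // exact round trip
    console.log(toLeftRight(mid, 0));      // Side quantized to zero: the L/R difference is gone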

I didn't want to leave it to chance, so I moved the stack to the Web Audio API. It generates the sine waves in real-time (client-side), ensuring mathematical precision and perfect channel isolation.

Happy to answer questions about the physics or implementation.


This is awesome feedback, thank you. Some of this I have been trying to dial in, and I just pushed an update to almost everything you mentioned:

Fixes deployed:

Ghost Typing: You were spot on. I was defining the modal inside the main render loop. Moved it out, so the input is stable now.

Audio Focus: Removed the visibilitychange listener. It should now persist on multi-monitor setups without cutting out.

Texture Toggle: Added a specific button to toggle between "Neural Grain" (noise) and "Pure Light" (solid strobe) so you can customize the pattern.

Volume Taper: Switched the gain slider to logarithmic scaling so the jump from 0-10% isn't deafening (rough sketch of the mapping after this list).

UI Clutter: The "Tap to Minimize" overlay now fades out automatically after 3 seconds.
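For the volume taper, the idea is roughly this kind of dB mapping; the -60 dB floor below is just an example, not necessarily the curve the app ships:

    // Map a 0..1 slider position to linear gain via a dB curve, so equal
    // slider steps feel like roughly equal loudness steps.
    function sliderToGain(position, floorDb = -60) {
      if (position <= 0) return 0;            // true mute at the bottom
      const db = floorDb * (1 - position);    // -60 dB near 0, 0 dB at 1
      return Math.pow(10, db / 20);           // dB -> linear amplitude
    }

    console.log(sliderToGain(0.1));   // ~0.002 (about -54 dB) instead of 0.1
    console.log(sliderToGain(1.0));   // 1.0 (full scale)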

On Vibe Coding... guilty as charged.

This project evolved rapidly from a hacky passion project to help myself focus into an attempt to build something worth sharing with others. I built the initial engine to prioritize the DSP/audio math, and the React architecture definitely suffered for it.

Would love to know if the multi-monitor issue is resolved on your end now.


Fair critique. The wellness space is definitely flooded with 'quantum energy' nonsense, so the skepticism is appreciated.

I look at this tool less like a 'magic brain pill' and more like a metronome for a musician. A metronome doesn't make you play better, but it mechanically forces you to stick to a tempo. This app just saturates your visual cortex with a steady rhythm so your brain stops scanning the room for distractions. It's essentially distraction cancellation for focus.

As for the 'emotional regulation' bit: I admit that comes off as marketing speak (not my background!). It really just refers to wavelength impact: red light avoids triggering melanopsin (good for winding down), while cyan/blue light triggers wakefulness (good for mornings). No magic stones involved.


Apologies to anyone who got flash-banged by the white screen earlier. I pushed a hotfix that accidentally nuked my Tailwind config and the animation engine. The site should be fully restored now. Thanks for your patience.


Totally fair question. "Higher performance" is definitely a subjective claim, so I should clarify the mechanism I'm relying on.

The core concept is Photic Driving (the "Frequency Following Response"). There is decent literature (e.g., Herrmann, 2001) showing that the visual cortex effectively synchronizes its firing rate to match high-amplitude external flickers (like a 14Hz strobe).
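As a rough sketch of what driving a flicker at a target rate looks like in a browser (the element id and the requestAnimationFrame timing below are illustrative, not this app's exact code):

    // Toggle an element's opacity at roughly 14 Hz. requestAnimationFrame is
    // capped at the display refresh rate, so this is an approximation.
    const TARGET_HZ = 14;
    const halfPeriodMs = 1000 / (TARGET_HZ * 2);      // on -> off -> on = one cycle

    const panel = document.getElementById('strobe');  // hypothetical element
    let lastToggle = performance.now();
    let on = false;

    function tick(now) {
      if (now - lastToggle >= halfPeriodMs) {
        on = !on;
        panel.style.opacity = on ? '1' : '0';
        lastToggle = now;
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);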

My goal with this tool is to induce Transient Hypofrontality (down-regulation of the prefrontal cortex), which is often associated with "Flow States" (Dietrich, 2003).

To be clear: I haven't run a clinical trial. I built it to replicate the "Ganzflicker" effects (Reeder, 2021) in a browser environment to help with my own Aphantasia. Subjectively, it helps me clear cognitive noise faster than silence, but I'm releasing it for free to see if that holds true for others or if it's just placebo.


All good - just interested in what the theory is behind it, and how effective it is purported to be.

Thanks for sharing your experience!


Good catch. That’s an unguarded speechSynthesis call in the cleanup function, likely crashing on browsers that block the Speech API (or in privacy mode).

Just pushed a fix to wrap it in a conditional check. It should be live in ~2 minutes. Thanks for the stack trace!
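For reference, the guard is roughly this shape (a simplified sketch, not the exact cleanup code):

    // Only touch the Speech API if the browser actually exposes it.
    function stopSpeech() {
      if (typeof window !== 'undefined' && 'speechSynthesis' in window) {
        try {
          window.speechSynthesis.cancel();   // stop any queued utterances
        } catch (err) {
          // Some privacy modes throw even when the property exists; ignore.
          console.warn('speechSynthesis unavailable:', err);
        }
      }
    }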

