Hacker News | Tooster's comments

Cap your HTML body to 75ch width for comfortable reading (`max-width: 75ch`). Minimalism doesn't conflict with a nice layout, and it's one line of CSS.


I want to build a local-first, offline/P2P realtime multiplayer prototype app soon, with a reactive/signal data model and a frontend-agnostic design (considering SolidJS/Svelte). I'm at the tech-research stage. How does it compare to RxDB, TinyBase, and Zero sync? For reference, right now I'm considering TinyBase/RxDB.


For P2P multiplayer with Svelte/SolidJS, SyncKit might not be your best fit because:

- It's client-server (not P2P)

- No Svelte adapters yet (coming in v0.2.0)

- Multiplayer games usually need P2P for lower latency

Better options for your use case:

- Jazz (jazz.tools) - Purpose-built for P2P collaborative apps

- TinyBase - Perfect signal model for Svelte/Solid, but you'd add your own sync

- Yjs - Mature CRDT with good P2P support

- RxDB - Heavier but has everything (queries, reactive, plugins)

If you went client-server instead of P2P, SyncKit would work once v0.2.0 adds Svelte support.

What's your preference? P2P or client-server? That'll determine the best fit.


Currently I'm leaning more toward P2P for the zero-effort setup on the user's side, although I was considering whether a hybrid P2P/client-server approach is feasible: a free, lite P2P tier vs. a paid, managed SaaS for user convenience and improved performance.


I was sure it must have been invented already! I've been trying to find this idea without knowing it's called "spectral rendering", searching for "absorptive rendering" or similar instead, which led me to dead ends. The technique is very interesting and I would love to see it applied to semi-transparent materials; I have suspected for some time that a method like this could allow cheap OIT out of the box.


I’m not sure carrying wavelength or spectral info changes anything with respect to order of transparency.

It seems like OIT is kind of a misnomer when people are talking about deferred compositing. Storing data and sorting later isn’t exactly order independent, you still have to compute the color contributions in depth order, since transparency is fundamentally non-commutative, right?

The main benefit of spectral transparency is what happens with multiple different transparent colors… you can get out a different color than you would when using RGB or any 3 fixed primaries to compute the transmission color.


The main benefit I see is being able to more accurately represent different light sources. This applies to transmission but also reflectance.

sRGB and P3, what most displays show, by definition use the D65 illuminant, which approximates "midday sunlight in northern Europe." So when you render something indoors, you are either changing the RGB of the materials, changing the emissive RGB of the light source, or tonemapping the result, all of which can only approximate other light sources to some extent. Spectral rendering allows you to approximate these other light sources more faithfully.


Whether the benefit is light sources or transparency or reflectance depends on your goals and on what spectral data you use. The article’s right that anything with spiky spectral power distributions is where spectral rendering can help.

> sRGB and P3, what most displays show, by definition use the D65 illuminant

I feel like that’s a potentially confusing statement in this context since it has no bearing on what kind of lights you use when rendering, nor on how well spectral rendering vs 3-channel rendering represents colors. D65 whitepoint is used for normalization/calibration of those color spaces, and doesn’t say anything about your scene light sources nor affect their spectra.

I’ve written a spectral path tracer and find it somewhat hard to justify the extra complexity and cost most of the time, but there are definitely cases where it matters and is useful. There’s also probably more physical spectral data available now than when I was playing with it.

I’m sure you’re aware and this is what you meant, but it might be worth spelling out that it’s the interaction of multiple spectra that matters in spectral rendering. It does nothing for the rendered color of a light source viewed directly; it only matters when the light is reflected off or transmitted through materials whose spectra differ from the light source’s. That’s where wavelength sampling gives you a different result than a 3-channel approximation.
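To make that "interaction of multiple spectra" point concrete, here's a toy sketch. The 4-bucket spectra and the RGB projection are entirely made up (not real CIE data); the point is only that multiplying per wavelength and then projecting to 3 channels can differ wildly from projecting first and multiplying channel-wise, when the spectra are spiky:

```python
# Toy 4-bucket spectra (hypothetical values, not measured data).
light = [0.2, 0.1, 1.0, 0.1]   # spiky source: almost all power in bucket 2
filt  = [0.9, 0.9, 0.1, 0.9]   # filter with a notch exactly at that peak

def to_rgb(spectrum):
    # Hypothetical projection: bucket 0 -> blue, 1 -> green, 2 + 3 -> red.
    return (spectrum[2] + spectrum[3], spectrum[1], spectrum[0])  # (R, G, B)

# Spectral: interact per wavelength, THEN project to 3 channels.
spectral = to_rgb([l * f for l, f in zip(light, filt)])

# 3-channel: project both to RGB first, then multiply channel-wise.
rgb = tuple(a * b for a, b in zip(to_rgb(light), to_rgb(filt)))

print([round(c, 2) for c in spectral])  # red ~0.19: the notch killed the peak
print([round(c, 2) for c in rgb])       # red ~1.10: the RGB product misses it
```

With smooth spectra the two paths agree much better; it's the spiky SPDs (as the article notes) where the 3-channel shortcut breaks down.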


Conventional RGB path tracing already handles basic transparency, you don't need spectral rendering for that.


Not exactly what the parent poster was saying (I think?), but absorption and scattering coefficients for volume handling, together with the mean free path, are very wavelength-specific, so using spectral rendering there (and for hair as well, although that's normally handled via special BSDFs) generally models volume scattering more accurately (if you model the properties correctly).

Very helpful for things like skin, and light diffusion through skin with brute-force (i.e. Woodcock tracking) volume light transport.
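As a minimal sketch of that wavelength dependence, here's per-band Beer-Lambert attenuation with made-up absorption coefficients (loosely skin-like only in that red penetrates far deeper than blue):

```python
import math

# Hypothetical absorption coefficients per wavelength band, in 1/mm
# (made-up values; loosely skin-like: blue absorbed strongly, red least).
sigma_a = {"blue": 1.2, "green": 0.7, "red": 0.15}

def transmittance(sigma, depth_mm):
    """Beer-Lambert attenuation for one band: T = exp(-sigma * d)."""
    return math.exp(-sigma * depth_mm)

depth = 2.0  # mm of tissue
for band, sigma in sigma_a.items():
    # The mean free path is the reciprocal of the coefficient,
    # so it is wavelength-specific as well.
    print(band, round(transmittance(sigma, depth), 3),
          "mfp:", round(1.0 / sigma, 2), "mm")
# At 2 mm, red still transmits ~0.74 while blue is down to ~0.09.
```

A 3-channel renderer can bake three coefficients in, but sampling the actual spectra captures the in-between wavelengths that dominate how diffusion through skin looks.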


I might be misunderstanding parts of the comment above, although I think it aligns with what I had in mind. Here’s what I meant:

If a ray carries full spectral information, then a transparent material can be described by its absorption spectrum — similar to how elements absorb specific wavelengths of light, as shown here: https://science.nasa.gov/asset/webb/types-of-spectra-continu...

In that view, transparency is just wavelength-by-wavelength attenuation. Each material applies its own absorption/transmission function to the incoming spectrum. Because this is done pointwise in the spectral domain, the order doesn’t matter:

OUT = IN × T₁ × T₂ (or in a subtractive representation: OUT = IN − ABS₁ − ABS₂).

So whether one material reduces 50% of the red first and another reduces 50% of the green second, or vice versa, doesn't change the result. Each wavelength is handled independently, making the operation order-independent.
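A minimal sketch of that pure-absorption case (hypothetical 4-bucket spectra; note this covers absorption only, not scattering or surface contributions):

```python
# Each material applies its own transmission function to the incoming
# spectrum, wavelength bucket by wavelength bucket (hypothetical values).
incoming = [1.0, 0.8, 0.6, 0.9]   # 4-sample input spectrum
t1 = [0.5, 1.0, 1.0, 0.7]         # material 1: halves the first bucket
t2 = [1.0, 0.5, 1.0, 0.9]         # material 2: halves the second bucket

def attenuate(spectrum, transmission):
    return [s * t for s, t in zip(spectrum, transmission)]

out_a = attenuate(attenuate(incoming, t1), t2)  # material 1 first
out_b = attenuate(attenuate(incoming, t2), t1)  # material 2 first

# Pointwise multiplication commutes, so traversal order is irrelevant.
assert all(abs(x - y) < 1e-12 for x, y in zip(out_a, out_b))
```

The same commutativity already holds for channel-wise RGB attenuation, which is why pure transmission is the easy case; it's the blending of reflected/emitted contributions that stays order-dependent.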


I’d also add:

* [Difftastic](https://difftastic.wilfred.me.uk/) — my go-to diff tool for years
* [Nu shell](https://www.nushell.sh/) — a promising idea, but still lacking in design/implementation maturity

What I’d really like to see is a *viable projectional editor* and a broader shift from text-centric to data-centric tools.

The issue is that nearly everything we use today (editors, IDEs, coreutils) is built around text, and there’s no agreed-upon data interchange format. There have been attempts (Unison, JetBrains MPS, Nu shell), but none has gained real traction.

Rare “miracles” like the C++ → Rust migration show paradigm shifts can happen. But a text → projectional transition would be even bigger. For that to succeed, someone influential would need to offer a *clear, opt-in migration path* where:

* some people stick with text-based tools,
* others move to semantic model editing,
* and both can interoperate in the same codebase.

What would be needed:

* Robust, data-native alternatives to [coreutils](https://wiki.archlinux.org/title/Core_utilities) operating directly on structured data (avoiding serialize ↔ parse boundaries). Learn from Nushell’s mistakes, and aim for future-compatible, stable, battle-tested tools.
* A more declarative-first mindset.
* Strong theoretical foundations for the new paradigm.
* Seamless conversion between text-based and semantic models.
* New tools that work with mainstream languages (not niche reinventions) and enforce correctness at construction time (no invalid programs).
* Integration of the semantic model with existing version control systems.
* Shared standards for semantic models across languages/tools (something on the scale of MCP or LSP — JetBrains’ are better, but LSP won thanks to Microsoft’s push).
* Dual compatibility in existing editors/IDEs (e.g. VSCode supporting both text files and semantic models).
* Integrating knowledge across many different projects to distill the best way forward: learn from Roslyn’s semantic vs. syntax model, look into Tree-sitter, check how Difftastic does tree diffing, find tree regex engines, learn from S-expressions and LISP-like languages, check Unison, adopt the Helix/Vim editing model, see how it can be integrated with LSP and MCP, etc.

This isn’t something you can brute-force — it needs careful planning and design before implementation. The train started on text rails and won’t stop, so the only way forward is to *build an alternative track* and make switching both gradual and worthwhile. Unfortunately, that is nearly impossible for any entity without enough influence.



And that's a great thing! I look forward to them becoming more mature and more widely adopted; I have tried both Zed and Helix, and for day-to-day work they are not there yet, which they would need to be to gain traction. Neither of them, however, intends to be a projectional editor as far as I am aware. As for the Vims and Emacses out there: I don't think they are mainstream tools that can tip the scale. Even now Vim is considered a niche, quirky editor with a very high barrier to entry. And still, they operate primarily on text.

Without support in mainstream editors, I don't see how this can push us forward instead of staying a niche barely anyone knows about.


"Many communities depend on AI tools to detect and ban AI content" interesting...

If not depending on AI tools, then depending on... a hunch? So, like modern-era witch hunting?


Someone committing poor-quality LLM-generated code and deeming it appropriate for review could produce equally bad, if not worse, handwritten code. By extension, anyone who merges poor-quality LLM code could merge equally poor handwritten code. So ultimately it comes down to their judgement and to trust in the contribution process: if poor-quality code ended up in the product, it's the process that failed. Just because someone can hit you with a stick doesn't mean we should cut down the trees — we should educate people to stop hitting others with sticks instead.

"Banning LLM content" is, in my opinion, effort spent on the wrong thing. If you want to ensure the quality of the code, focus on making the code review and merge process more thorough at filtering out subpar contributions, instead of wasting time trying to enforce unenforceable policies, which only give a false sense of trust and security. Would an "[x] I solemnly swear I didn't use AI" checkbox give anything more than a false sense of security? Cheaters gonna cheat, and trusting them would be naive, to put it politely...

Spam... yeah, that is a valid concern, but it's also something that should be solved at the organizational level.


Cheaters are gonna cheat, but filtering out the honest/shameless LLM fans is still an improvement. And once you do find out that they lied, you now have a good reason to ban them. Win/win.


Well, Torvalds says in the interview ‘we already have tools such as linters and compilers which speed up the work we do as part of software development’

I get the impression he agrees this road to LLM content is inevitable, but also kind of emphasises the role of the reviewer who takes the final decision.

