Hacker News: alex_w_systems's comments

I’ve been working on something I call the EMIS Framework v0.5.2 (Energy–Matter–Information–Spacetime): Modeling Economics as a Low-Dimensional Energy System.

The core idea:

Treat the economy as a constrained 2D energy manifold where money is a bookkeeping layer over energy allocation, and macro structure emerges holographically from boundary-level discrete transactions.

Very roughly:

L1 — 2D Manifold Hypothesis: Social/economic systems behave like low-dimensional constrained surfaces rather than high-entropy volumetric systems. This reframes growth, inflation, and inequality as curvature problems rather than equilibrium problems.

L2 — JT-Gravity Analogy: Introducing a Jackiw–Teitelboim-like action to model macro constraints. Policy acts like boundary-condition manipulation rather than “force injection.”
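For readers who haven’t seen it, the standard JT action being borrowed here (Euclidean AdS₂, up to overall normalization and boundary counterterms) couples a dilaton $\phi$ to the 2D metric:

$$
S_{\mathrm{JT}} \;=\; -\frac{1}{2}\int_{M} d^2x\,\sqrt{g}\,\phi\,(R + 2)\;-\;\int_{\partial M} du\,\sqrt{h}\,\phi_b\,K
$$

The bulk term forces constant negative curvature ($R = -2$), so all dynamics live in the boundary term. How $\phi$, $g$, and the boundary would map onto economic quantities is exactly the open question in formalizing the analogy.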

L3 — Holographic Mapping: Discrete micro-transactions at the boundary construct macroeconomic structure in the bulk. This attempts to dissolve the century-old micro-vs-macro divide.

L4 — Random Matrix Regime: In high-complexity phases, the system transitions to random-matrix statistics (crises, bubbles, phase shifts). Stability becomes a spectral property.
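A minimal sketch of what “stability as a spectral property” could mean in practice: the consecutive-spacing ratio statistic distinguishes uncorrelated (Poisson-like) spectra from strongly coupled (GOE) ones without any unfolding. Mapping a real correlation matrix of economic indicators onto either regime is an assumption of the framework, not something shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def spacing_ratio(eigs):
    """Mean consecutive-spacing ratio r = min(s_i, s_{i+1}) / max(s_i, s_{i+1})."""
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# "Calm" regime stand-in: independent levels (Poisson spacings) -> <r> ~ 0.386
poisson = spacing_ratio(np.cumsum(rng.exponential(size=2000)))

# "Crisis" regime stand-in: strongly coupled system (GOE) -> <r> ~ 0.531
A = rng.normal(size=(1000, 1000))
goe = spacing_ratio(np.linalg.eigvalsh((A + A.T) / 2))

print(poisson, goe)
```

The gap between ~0.386 and ~0.531 is what would make a transition detectable, if the mapping holds.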

I’m currently working on:

- Formalizing the action functional (so it’s not just metaphor)

- Defining the admissible ensemble boundary for economic RMT

- Building a small simulation engine to test curvature vs liquidity stress
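As a sketch of what that simulation engine might look like at its smallest: boundary-level discrete transactions (random pairwise transfers), a hypothetical curvature proxy over the wealth density, and a liquidity-stress readout. Every name here (`curvature_proxy`, the 0.1 illiquidity threshold) is an illustrative assumption, not part of the framework.

```python
import numpy as np

rng = np.random.default_rng(42)

N_AGENTS, STEPS = 200, 500
wealth = np.ones(N_AGENTS)  # uniform initial "boundary" allocation

def curvature_proxy(w, bins=20):
    """Hypothetical curvature measure: mean |second difference| of the
    log wealth-density histogram (a stand-in for geometric curvature)."""
    hist, _ = np.histogram(w, bins=bins)
    return np.abs(np.diff(np.log1p(hist), n=2)).mean()

for _ in range(STEPS):
    # discrete boundary transaction: transfer 10% of i's wealth to j
    i, j = rng.integers(0, N_AGENTS, size=2)
    amount = 0.1 * wealth[i]
    wealth[i] -= amount
    wealth[j] += amount

# crude stress readout: fraction of near-illiquid agents
liquidity_stress = (wealth < 0.1).mean()
print(curvature_proxy(wealth), liquidity_stress)
```

Total wealth is conserved by construction, so any curvature the proxy picks up comes purely from redistribution, which is the kind of effect the curvature-vs-liquidity test would need to isolate.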

This is still early and probably wrong in 20 different ways. But if the geometry holds, it could provide:

- A unification layer between econ and complex systems physics

- A compression model for macro indicators

- A path toward AI-native economic modeling (neuro-symbolic)

Would love to hear from:

- Theoretical physicists willing to sanity-check the gravity mapping

- Quant folks familiar with RMT edge cases

- Anyone who thinks this is obviously nonsense

Happy to share drafts if there’s interest.


I think the interesting tension here is between capability and trust.

An agent that can truly “use your computer” is incredibly powerful, but it's also the first time the system has to act as you, not just for you. That shifts the problem from product design to permission, auditability, and undoability.

Summarizing notifications is boring, but it’s also reversible. Filing taxes or sending emails isn’t.

It feels less like Apple missing the idea, and more like waiting until they can make the irreversible actions feel safe.


> An agent that can truly “use your computer” is incredibly powerful, but it's also the first time the system has to act as you, not just for you. That shifts the problem from product design to permission, auditability, and undoability.

Or rather, just reveals that the industry never bothered to properly implement delegation of authority in operating systems and applications, opting instead to first guilt-trip people for sharing their passwords, and later inventing solutions that make it near-impossible to just casually let someone do something for you.

Contrast with how things in real life function, whether at family level or at the workplace.


Half-agree. As an industry, we do suffer from not-invented-here syndrome, so we have a lot of mediocre attempts to implement it that don't learn enough from historical human-to-human examples.

Delegation can also be scary with other humans. "Power of attorney" and all that. Or even just micro-management.


Clicking `Submit` is the easiest step of sending an email or filing taxes.

All the steps before it are reversible and reviewable.

The bigger problem is an attacker tricking your agent into leaking the emails and financial data it has access to.


I worry we'll click "Submit" as fast as we click "I accept the terms and conditions."


Of course we would!

How in the world can you double-check the AI-generated tax filing without going back and preparing your taxes by hand?

You might skim an AI-written email.

