Hacker News: zc2610's comments

We encountered that issue and fixed it, plus added a separate SSE event type to signal the start/end of summarization for UI/UX.

We also built a lot on top of it, like more accurate token estimation, a customized offloading mechanism, etc.


Oh nice, that separate SSE event seems like a big improvement for UX.

This is the kind of quality critique I was hoping for.

We currently persist logs for every single line of code the agent executed in the sandbox. We also persist the agent trajectory with infra from the LangChain ecosystem.

That said, I share the same belief that a lot of work needs to be done on compliance before deploying this at any firm on the Street.

>Curious whether LangAlpha has thought about signed execution logs per session.

This is definitely on our roadmap, but it feels a little heavy to introduce at such an early stage.


There will always be people who lose money regardless; that's part of the stock market. I hope that at least with tools like this, people can make investment decisions more systematically and with discipline, relying on research rather than impulse or memes.

Thanks for the feedback; I am working on that already.

It should be easy to self-host with Docker, though.


Exactly, this is especially important for agents given the limited effective context window.

We do have something like a personal- or workspace-level investment wiki on the roadmap.

For now, it works more like how a SWE works on a codebase, building things incrementally through commits. We are taking a workspace-centric approach where multiple agent sessions can happen in one workspace and build on previous work.


Great advice!

For demo purposes and to attract attention, I primarily picked cases with cool visuals (like the screenshot of the AI supply chain you mentioned). We have some internal evals and will try to add more cases to the public repo for reference.


More signs of the AI bubble. Completely unprofessional behavior ("cool visuals" not "real results"). And don't give me that "hacker culture" bullshit, these people are targeting Wall Street as paying customers.

Would it be more professional, in your opinion, if I claimed I make $xxxxx via this tool? I thought I had clearly stated that the cool visuals were for "demo purposes and to attract attention." I do not want to post dramatic statements to trick people into using it. This is an early-stage open source project to help investors and traders organize their thoughts, not an automatic money-making machine that guarantees profit. It's the mind using the tool that decides whether they profit from the market.

>And don't give me that "hacker culture" bullshit

I couldn’t help but be genuinely curious: if you believe AI is a bubble and aren’t a fan of hacker culture, then why are you here on Hacker News?

great to hear your input anyway!


First of all this project is great and finance is ready for a disruption like this. I'm sure a lot of good research and development went into this.

Quality research indeed doesn't always make money, so I agree that it doesn't make sense to present these types of metrics. But at the same time, it will be hard to trust this sort of thing immediately without a way to validate its output. At the very least I would like to know that the financial metrics it calculates (esp. those based on 20/30 data points) are correct. Looks like there is some transparency built in, and that's a good thing.

But people who are not pros in investment research wouldn't know that it messed up a certain metric and that the output is therefore different from what it claims. Or maybe it is not messing up entirely, but a certain sector-specific detail doesn't get picked up, making a signal less strong than the output led you to believe. Maybe you already have it, but if not, you could add some sort of validation layer; that could also serve as a customisable calculation engine. I'd use it right away.


Thanks, very valid point. We are building towards a benchmark as well; hope we can share more quantitative metrics soon.

"Cool visuals" are "dramatic statements". Neither have any substance nor basis in reality.

What would this possibly add over existing AI chatbots if all it's for is "organizing thinking"? There is no value add here.

I love hacker culture. This isn't hacking, this is the exact opposite of that.


Hi HN. We built LangAlpha because we wanted something like Claude Code but for investment research.

It's a full stack open-source agent harness (Apache 2.0). Persistent sandboxed workspaces, code execution against financial data, and a complete UI with TradingView charts, live market data, and agent management. Works with any LLM provider, React 19 + FastAPI + Postgres + Redis.


Some technical context on what we ran into building this.

MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; the schemas alone can eat 50k+ tokens before the agent does anything useful. So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library. Only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.
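To make the idea concrete, here's a minimal sketch of turning an MCP tool schema into an importable typed Python wrapper. The names (`generate_module`, `call_mcp`, the `daily_prices` tool) are illustrative assumptions, not LangAlpha's actual code:

```python
# Sketch: emit a typed Python function per MCP tool so the agent can
# `import market_data` instead of paying schema tokens per tool call.
import json

# Map JSON Schema primitive types to Python annotations.
TYPE_MAP = {"string": "str", "integer": "int", "number": "float", "boolean": "bool"}

def generate_module(server_name: str, tools: list[dict]) -> str:
    """Emit Python source wrapping one MCP server's tools."""
    lines = ["from mcp_client import call_mcp  # hypothetical transport helper", ""]
    for tool in tools:
        props = tool.get("inputSchema", {}).get("properties", {})
        params = ", ".join(
            f"{p}: {TYPE_MAP.get(s.get('type'), 'object')}" for p, s in props.items()
        )
        lines += [
            f"def {tool['name']}({params}):",
            f'    """{tool.get("description", "")}"""',
            f"    return call_mcp({server_name!r}, {tool['name']!r}, locals())",
            "",
        ]
    return "\n".join(lines)

# One tool schema as it might arrive from a server's tools/list response.
schema = [{
    "name": "daily_prices",
    "description": "Five years of daily OHLCV for a ticker.",
    "inputSchema": {"properties": {"ticker": {"type": "string"},
                                   "years": {"type": "integer"}}},
}]
print(generate_module("market_data", schema))
```

The generated source gets written into the sandbox once at workspace init, so only the one-line server summary ever occupies the prompt.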

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing, that's day one. You update the model when earnings drop, re-run comps when a competitor reports, and keep layering new analysis on old. But try doing that across agent sessions: files don't carry over, and you re-paste context every time. So we built everything around workspaces. Each one maps to a persistent sandbox, one per research goal. The agent maintains its own memory file with findings and a file index that gets re-read before every LLM call. Come back a week later, start a new thread, and it picks up where it left off.
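The memory-file-plus-file-index pattern can be sketched in a few lines. File names and the `build_context` helper are assumptions for illustration, not LangAlpha's internals:

```python
# Sketch: per-workspace memory file + file index, re-read before each
# LLM call so a new session picks up where the last one left off.
import tempfile
from pathlib import Path

def build_context(workspace: Path) -> str:
    """Assemble the persistent context injected before every LLM call."""
    memory_file = workspace / "MEMORY.md"
    memory = memory_file.read_text() if memory_file.exists() else "(no findings yet)"
    # Index every artifact produced so far in this workspace.
    index = "\n".join(
        f"- {p.relative_to(workspace)}"
        for p in sorted(workspace.rglob("*"))
        if p.is_file() and p.name != "MEMORY.md"
    )
    return f"## Findings so far\n{memory}\n\n## Workspace files\n{index}"

# Usage: a later session in the same workspace sees earlier work.
ws = Path(tempfile.mkdtemp())
(ws / "MEMORY.md").write_text("NVDA comps updated after Q2 earnings.")
(ws / "comps.csv").write_text("ticker,ev_ebitda\nNVDA,35\n")
print(build_context(ws))
```

Because the context is rebuilt from disk on every call rather than carried in conversation history, a brand-new thread inherits the workspace state for free.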

We also wanted the agent to have real domain context the way Claude Code has codebase context. Portfolio, watchlist, risk tolerance, financial data sources, all injected into every call. Existing AI investing platforms have some of that but nothing close to what a proper agent harness can do. We wanted both and couldn't find it, so we built it and open-sourced the whole thing.


You can make MCP tools work for any type of data by using a proxy like https://github.com/lourencomaciel/sift-gateway/.

It saves the payloads into SQLite, maps them, and exposes tools for the model to run Python against them. Works very well.
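The core of that proxy pattern, stashing large tool payloads in SQLite and returning only a handle so the model queries instead of ingesting, can be sketched like this (table and function names are illustrative, not sift-gateway's actual schema):

```python
# Sketch: park big MCP payloads in SQLite; the model only sees a
# handle plus small query results, never the raw payload.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payloads (id INTEGER PRIMARY KEY, tool TEXT, body TEXT)")

def stash(tool: str, payload: object) -> int:
    """Store a raw tool response; return a handle instead of the data."""
    cur = conn.execute("INSERT INTO payloads (tool, body) VALUES (?, ?)",
                       (tool, json.dumps(payload)))
    conn.commit()
    return cur.lastrowid

def query(payload_id: int, key: str):
    """Run a small projection over a stored payload (here: key lookup)."""
    (body,) = conn.execute("SELECT body FROM payloads WHERE id = ?",
                           (payload_id,)).fetchone()
    return json.loads(body).get(key)

# Five years of prices stays in the database; only "NVDA" reaches context.
handle = stash("daily_prices", {"ticker": "NVDA", "rows": list(range(1250))})
print(query(handle, "ticker"))
```

A real gateway would expose `query` itself as the single MCP tool and accept SQL or Python rather than a key lookup, but the context-window economics are the same.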


How does it insert itself between the agent and the MCP servers? Does it edit the MCP config file to add itself as an MCP?

It becomes the only MCP. All other MCPs are registered within the tool.

You shouldn't dump data in the context, only the result of the query.

Exactly, context should call queries and analyze results. TBH the more I develop my MCP the less context-window anxiety I have even on a basic (non-enterprise) plan. Of course I'm not dealing with the deluge of data the FA appears to handle.

Their charts and UI look pretty, too.


Yes, that's the idea and exactly what we did.
