Hacker News | past | comments | ask | show | jobs | submit | mvkel's comments

"as an ai safety company, we only believe in -partially- autonomous weaponry"

Ads are coming.


I'll be glad if they could open their platform enough so that it could run on ads and not 200 dollar subscriptions

for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability

These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."

The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.

The "safeguards" you are referring to are contractual, i.e. words. There are no technical safeguards, per the article.

The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.


It’s a contract dispute. Contracts are more than just talk.

While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.


Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.

NSA and other three-letter agencies happily do it under cloak and dagger.


I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.

What's the US history around nationalization? Would "confiscation" ever be a likelihood as an escalation?

On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump’s New Industrial Playbook"

https://thefulcrum.us/trump-state-control-capitalism


Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government, because they don't have the keys.


I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.

Are the guardrails not part of their core? Isn't that the whole premise of their existence?

If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.

Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.


That's my point. They formed anthropic under the sole mandate of "guardrails first," now seemingly don't have them at all. So they're just another ai company with different marketing, not the purely altruistic outfit they want everyone to believe

The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.

Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.

I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.


A little pessimistic of a take, IMO. You may very well be right, though.

Convenience trumps everything, including privacy and security.

Telling the average person that they have to install their own model is a deal breaker at the outset.

As for 99% of capabilities being on device, battery life makes it a non-starter.


Regardless of their intent, this is the opening move in a ratchet that only turns one direction.

Platforms almost always begin with "90% of users will never notice," then gradually expand scope as regulatory pressure, liability, and competition increase. We saw similar dynamics with real-name policies at Facebook: initially narrow safeguards become infrastructure.

Auditability, fraud prevention, enforcement reporting, etc., historically push platforms toward more persistent verification regimes.

"On-device only" promises often erode once regulatory audits demand server-side attestations.

This is the only thing these days that Apple gets right: they make it technically impossible to backdoor, not just promise that they won't.

The second-order effect is normalization. Once large platforms operationalize age assurance, regulators point to them as proof that stricter mandates are "feasible," accelerating a global compliance cascade. Smaller platforms then adopt similar systems to avoid liability.

In past cycles (COPPA, GDPR, cookie banners, payment KYC) the burden disproportionately favored incumbents who could afford compliance. The likely long-term equilibrium is a stratified internet where meaningful participation in adult spaces increasingly requires some portable proof-of-age token, whether nominally anonymous or not.

It's not that Discord is acting in bad faith, they are simply the first domino to fall in what will be yet another GDPR cookie banner, a further erosion of privacy, and another nail in the coffin of a free internet.

Discord should have just "poasted through it," Huberman-style, and the mob would move on to the next platform that will inevitably be forced to enact similar policies.


Good question! It serves as a bootstrap for each module design file (effectively a deeper expansion of what is defined in the PRD). Once you have (at least) design files for each module, the PRD is no longer necessary. It usually ends up being inaccurate after the brainstorming phases, which change fundamental bits of the wiring that the initial brainstorm wouldn't cover.

Excited for the inevitable "Windows 3.11 - 98K MRR" in pieter's X bio.

Love this forum UI/UX. It's clear, snappy, intuitive. How far we have fallen.

That said, allowing posts with no auth is a choice.


It's nostalgic, but good lord does it need a bit of contrast...

Seems pretty readable to me. The information density is high, there are slight box shadows in interactive elements. We need more like this.

A dark red LIVE against a military green background... Again, it's nostalgic, I loved steam back then, but this isn't winning any design award.

> this isn't winning any design award

Good. I prefer interfaces that care more about being concise and usable than about winning design awards.


You can be concise and have good contrast. "Winning any awards" is a figure of speech.

Try another theme, looks great I think. Black or dark blue is crisp

I had to scroll all the way down for the bottom panel to appear. Only then was I able to change the theme. Good that it has themes, but it just highlighted another problem in the design of the website.

The dark blue theme is indeed neat.


You can always turn it off.

> This isn't another trading bot. It's an autonomous agent with memory, intuition, and the authority to act.

The AI smell sure is strong with this one.

Overall, this bot will fail to generate a profit; it is trading what is already confirmed, using data that everyone has, ergo there is no edge.


> the workflow I’ve settled into is radically different from what most people do with AI coding tools

This looks exactly like what anthropic recommends as the best practice for using Claude Code. Textbook.

It also exposes a major downside of this approach: if you don't plan perfectly, you'll have to start over from scratch if anything goes wrong.

I've found a much better approach in doing a design -> plan -> execute in batches, where the plan is no more than 1,500 lines, used as a proxy for complexity.

My 30,000 LOC app has about 100,000 lines of plan behind it. Can't build something that big as a one-shot.
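A line budget like that is easy to enforce mechanically before handing a plan to the agent. A minimal sketch, assuming plans live as `.md` files in a single folder (the folder layout, function name, and budget constant are my invention, not anything from the parent's setup):

```python
from pathlib import Path

PLAN_LINE_BUDGET = 1_500  # proxy for per-batch complexity


def oversized_plans(plans_dir: str) -> list[tuple[str, int]]:
    """Return (filename, line_count) for any plan file over the budget."""
    flagged = []
    for plan in sorted(Path(plans_dir).glob("*.md")):
        n = len(plan.read_text().splitlines())
        if n > PLAN_LINE_BUDGET:
            flagged.append((plan.name, n))
    return flagged
```

Anything this flags is a candidate for splitting into a smaller batch rather than executing as-is.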


> if you don't plan perfectly, you'll have to start over from scratch if anything goes wrong

This is my experience too, but it's pushed me to make much smaller plans and to commit things to a feature branch far more atomically so I can revert a step to the previous commit, or bin the entire feature by going back to main. I do this far more now than I ever did when I was writing the code by hand.

This is how developers should work regardless of how the code is being developed. I think this is a small but very real way AI has actually made me a better developer (unless I stop doing it when I don't use AI... not tried that yet.)


I do this too. Relatively small changes, atomic commits with extensive reasoning in the message (keeps important context around). This is a best practice anyway, but used to be excruciatingly much effort. Now it’s easy!

Except that I’m still struggling with the LLM understanding its audience/context of its utterances. Very often, after a correction, it will focus a lot on the correction itself making for weird-sounding/confusing statements in commit messages and comments.


> Very often, after a correction, it will focus a lot on the correction itself making for weird-sounding/confusing statements in commit messages and comments.

I've experienced that too. Usually when I request correction, I add something like "Include only production level comments, (not changes)". Recently I also added special instruction for this to CLAUDE.md.


LLMs are really eager to start coding (as interns are eager to start working), so the sentence “don’t implement yet” has to be used very often at the beginning of any project.

Most LLM apps have a 'plan' or 'ask' mode for that.

I find that even then I often need to be clear that I'm just asking a question and don't want them running off to solve the larger problem.

We're learning the lessons of Agile all over again.

We're learning how to be an engineer all over again.

The author's process is super-close to what we were taught in engineering 101 40 years ago.


It's after we come down from the Vibe coding high that we realize we still need to ship working, high-quality code. The lessons are the same, but our muscle memory has to be re-oriented. How do we create estimates when AI is involved? In what ways do we redefine the information flow between Product and Engineering?

It always feels like I'm in a fever dream when I hear about AI workflows. A lot of it is what I've read in software engineering books and articles.

I'm currently having Claude help me reverse engineer the wire protocol of a moderately expensive hardware device, where I have very little data about how it works. You better believe "we" do it by the book. Large, detailed plan md file laying out exactly what it will do, what it will try, what it will not try, guardrails, and so on. And a "knowledge base" md file that documents everything discovered about how the device works. Facts only. The knowledge base md file is 10x the size of the code at this point, and when I ask it to try something, I ask Claude to prove to me that our past findings support the plan.

Claude is like an intern coder-bro, eager to start crushin' it. But, you definitely can bring Claude "down to earth," have it follow actual engineering best practices, and ask it to prove to you that each step is the correct one. It requires careful, documented guardrails, and on top of it, I occasionally prompt it to show me with evidence how the previous N actions conformed to the written plan and didn't deviate.

If I were to anthropomorphize Claude, I'd say it doesn't "like" working this way--the responses I get from Claude seem to indicate impatience and a desire to "move forward and let's try it." Obviously an LLM can't be impatient and want to move fast, but its training data seem to be biased towards that.


Be careful of attention collapse. Details in a large governance file can get "forgotten" by the LLM. It'll be extremely apologetic when you discover it's failed to follow some guardrails you specified, but it can still happen.
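One cheap mitigation is to watch how big the governance/knowledge files have grown relative to the context you want them to occupy. A rough sketch; the ~4-characters-per-token heuristic and the budget number are assumptions of mine, not anything Anthropic documents:

```python
def rough_token_count(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English prose."""
    return len(text) // 4


def fits_context_budget(text: str, budget_tokens: int = 8_000) -> bool:
    """True if the file plausibly fits the slice of context reserved for it."""
    return rough_token_count(text) <= budget_tokens
```

If a guardrail file blows past the budget, splitting it or summarizing the stable parts tends to help more than repeating instructions.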

Developers should work by wasting lots of time making the wrong thing?

I bet if they did a time and motion study on this approach they'd find the classic:

"Thinks they're more productive, AI has actually made them less productive"

But lots of lovely dopamine from this false progress that gets thrown away!


> Developers should work by wasting lots of time making the wrong thing?

Yes. In fact, that's not emphatic enough: HELL YES!

More specifically, developers should experiment. They should test their hypothesis. They should try out ideas by designing a solution and creating a proof of concept, then throw that away and build a proper version based on what they learned.

If your approach to building something is to implement the first idea you have and move on, then you are going to waste so much more time later refactoring things to fix architecture that paints you into corners, reimplementing things that didn't work for future use cases, fixing edge cases that you hadn't considered, and just paying off a mountain of tech debt.

I'd actually go so far as to say that if you aren't experimenting and throwing away solutions that don't quite work then you're only amassing tech debt and you're not really building anything that will last. If it does it's through luck rather than skill.

Also, this has nothing to do with AI. Developers should be working this way even if they handcraft their artisanal code carefully in vi.


>> Developers should work by wasting lots of time making the wrong thing?

> Yes. In fact, that's not emphatic enough: HELL YES!

You do realize there is prior research and there are well-tested solutions for a lot of things. Instead of wasting time making the wrong thing, it is faster to do some research to see whether the problem has already been solved. Experimentation is fine only after checking that the problem space is truly novel or there's not enough information around.

It is faster to iterate in your mental space and in front of a whiteboard than in code.


I've been doing this a long time and I've never had to do that, and I've delivered multiple successful products used by millions of users. Some of them ran for years after we stopped doing any maintenance at all, with no bugs, problems, or crashes.

There are only a few software architecture patterns because there's only a few ways to solve code architecture problems.

If you're getting your initial design so wrong that you have to start again from scratch midway through, that shows a lack of experience, not insight.

You wouldn't know this, but I'm also a bit of an expert at refactoring, having saved several projects which had built up so much technical debt the original contractors ran away. I've regularly rewritten 1,000s if not 10,000s of lines into 100s of lines of code.

So it's especially galling to be told not only that somehow all code problems are unique (they almost never are), but my code is building technical debt (it's not, I solve that stuff).

Most problems are solved, and you should be using other people's solutions to solve the problems you face.


> Developers should work by wasting lots of time making the wrong thing?

Yes? I can't even count how many times I worked on something my company deemed was valuable only for it to be deprecated or thrown away soon after. Or, how many times I solved a problem but apparently misunderstood the specs slightly and had to redo it. Or how many times we've had to refactor our code because scope increased. In fact, the very existence of the concepts of refactoring and tech debt proves that devs often spend a lot of time making the "wrong" thing.

Is it a waste? No, it solved the problem as understood at the time. And we learned stuff along the way.


That's not the same thing at all, is it, and not what's being discussed.


> design -> plan -> execute in batches

This is the way for me as well. Have a high-level master design and plan, but break it apart into phases that are manageable. One-shotting anything beyond a todo list and expecting decent quality is still a pipe dream.


This is actually embarrassing. His "radically different" workflow is... using the built-in Plan mode that they recommend you use? What?

It's not, to be fair.

> I use my own `.md` plan files rather than Claude Code’s built-in plan mode. The built-in plan mode sucks.


From Claude docs: Planning is most useful when you’re uncertain about the approach, when the change modifies multiple files, or when you’re unfamiliar with the code being modified. If this isn't true, skip the plan.

Can you easily version their plans using git?

"Write plan to the plans folder in the project"

Can that be saved somewhere so it's done for every session?

You can just read the docs man

> if you don't plan perfectly, you'll have to start over from scratch if anything goes wrong.

You just revert what the AI agent changed and revise/iterate on the previous step - no need to start over. This can of course involve restricting the work to a smaller change so that the agent isn't overwhelmed by complexity.


100,000 lines is approx. one million words. The average person reads at 250wpm. The entire thing would take 66 hours just to read, assuming you were approaching it like a fiction book, not thinking anything over
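The back-of-envelope math holds (the ten-words-per-line figure is my assumption for prose-style plan files):

```python
lines = 100_000
words_per_line = 10               # rough average for prose-style plan lines
words = lines * words_per_line    # 1,000,000 words

reading_speed_wpm = 250           # average adult reading speed
hours = words / reading_speed_wpm / 60

print(round(hours, 1))  # → 66.7
```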

wtf, why would you write 100k lines of plan to produce 30k loc.. JUST WRITE THE CODE!!!

That's not (or should not be) what's happening.

They write a short high level plan (let's say 200 words). The plan asks the agent to write a more detailed implementation plan (written by the LLM, let's say 2000-5000 words).

They read this plan and adjust as needed, even sending it to the agent for re-dos.

Once the implementation plan is done, they ask the agent to write the actual code changes.

Then they review that and ask for fixes, adjustments, etc.

This can be comparable to writing the code yourself but also leaves a detailed trail of what was done and why, which I basically NEVER see in human generated code.

That alone is worth gold, by itself.

And on top of that, if you're using an unknown platform or stack, it's basically a rocket ship. You bootstrap much faster. Of course, stay on top of the architecture, do controlled changes, learn about the platform as you go, etc.


I take this concept and I meta-prompt it even more.

I have a road map (AI generated, of course) for a side project I'm toying around with to experiment with LLM-driven development. I read the road map and I understand and approve it. Then, using some skills I found on skills.sh and slightly modified, my workflow is as such:

1. Brainstorm the next slice

It suggests a few items from the road map that should be worked on, with some high level methodology to implement. It asks me what the scope ought to be and what invariants ought to be considered. I ask it what tradeoffs could be, why, and what it recommends, given the product constraints. I approve a given slice of work.

NB: this is the part I learn the most from. I ask it why X process would be better than Y process given the constraints and it either corrects itself or it explains why. "Why use an outbox pattern? What other patterns could we use and why aren't they the right fit?"

2. Generate slice

After I approve what to work on next, it generates a high level overview of the slice, including files touched, saved in a MD file that is persisted. I read through the slice, ensure that it is indeed working on what I expect it to be working on, and that it's not scope creeping or undermining scope, and I approve it. It then makes a plan based off of this.

3. Generate plan

It writes a rather lengthy plan, with discrete task bullets at the top. Beneath, each step has to-dos for the llm to follow, such as generating tests, running migrations, etc, with commit messages for each step. I glance through this for any potential red flags.

4. Execute

This part is self explanatory. It reads the plan and does its thing.

I've been extremely happy with this workflow. I'll probably write a blog post about it at some point.
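The slice and plan artifacts in steps 2-3 are just persisted markdown files, so scaffolding them can be automated. A minimal sketch; the file names, section headings, and function name are my own invention, not the skills.sh skills the workflow above uses:

```python
from pathlib import Path

SLICE_TEMPLATE = """# Slice: {title}

## Scope

## Invariants

## Files touched
"""

PLAN_TEMPLATE = """# Plan: {title}

## Tasks

## Per-step to-dos (tests, migrations, commit messages)
"""


def scaffold_slice(root: str, title: str) -> list[Path]:
    """Create the slice and plan markdown skeletons for the next unit of work."""
    slug = title.lower().replace(" ", "-")
    created = []
    for kind, template in (("slice", SLICE_TEMPLATE), ("plan", PLAN_TEMPLATE)):
        path = Path(root) / f"{slug}.{kind}.md"
        path.write_text(template.format(title=title))
        created.append(path)
    return created
```

The point is just that the human-approval checkpoints (approve slice, glance at plan) map onto files you can diff and version like any other artifact.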


If you want to have some fun, experiment with this: add a step (maybe between 3 and 4):

3.5 Prove

Have the LLM demonstrate, through our current documentation and other sources of facts, that the planned action WILL work correctly, without failure. Ask it to enumerate all risks and point out how the plan mitigates each risk. I've seen on several occasions, the LLM backtrack at this step and actually come up with clever so-far unforeseen error cases.


That's a good thought experiment!

This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.

This dead internet uncanny (sarcasm?) valley is killing me.

Are you suggesting HN is now mostly bots boosting pro-AI comments? That feels like a stretch. Disagreement with your viewpoint doesn't automatically mean someone is a bot. Let's not import that reflex from Twitter.

> This is a super helpful and productive comment. I look forward to a blog post describing your process in more detail.

The average commenter doesn't write this kind of comment. Usually it's just a "can you expand/elaborate?". Extra politeness is kind of a hallmark of LLMs.

And if you look at the very neat comment it's responding to, there's a chance it's actually the opposite type, an actual human being sarcastic.

I can't tell anymore.

Edit: I've checked the comment history and it's just a regular ole human doing research :-)


Now I'm just confused. Maybe LLMs really do change how humans communicate.

Yep, with a human in the loop to process these larger, sprawling plan docs (iteratively inflated with the designer's intent).

Some get deleted from repo others archived, others merged or referenced elsewhere. It's kind of organic.


[flagged]


> Haven't seen a single useful thing produced by this garbage process you describe

By using it first-hand or by a colleague? And useful to whom, you, or the person writing it? There are plenty of people in this thread who have actually used this "garbage process," myself included, to produce stuff we, and our colleagues, find is useful.


[flagged]


These kinds of attacks lead nowhere.

https://news.ycombinator.com/newsguidelines.html


This response would be appropriate if the process we used only worked once.

What genuinely new thing have you produced?

Well, I'm actually producing, not having an LLM do things for me and frying my brain in the process. If you're building things with the process described above you're not producing anything; Dario's or Altman's GPUs are. You're simply a slot machine user.

Have fun paying for "think-for-me SaaS".

2025-2026: the years everyone became the mental equivalent of obese and let their brains atrophy. There are no shortcuts in life that don't come at a huge cost. Remember how everyone forgot how to navigate without a maps app? That's going to be you with writing code, reading code, and thinking about code.


I must say I 100% agree about brain atrophy when it comes to writing code etc. But I think I am gaining extreme brain training when it comes to architecting, predicting outcomes, and iterating. We will see in the long run which of these skills the market values more.

By that logic, compilers ruined programming and calculators killed math.

If someone’s brain atrophies, that’s a user problem, not a tool problem.


They didn't write 100k plan lines. The LLM did (99.9% of it, at least). Writing 30k lines by hand would take weeks if not months. LLMs do it in an afternoon.

Just reading that plan would take weeks or months

You don't start with 100k lines, you work in batches that are digestible. You read it once, then move on. The lines add up pretty quickly considering how fast Claude works. If you think about the difference in how many characters it takes to describe what code is doing in English, it's pretty reasonable.

And my weeks or months of work beats an LLM's 10/10 times. There are no shortcuts in life.

I have no doubts that it does for many people. But the time/cost tradeoff is still unquestionable. I know I could create what LLMs do for me in the frontend/backend in most cases as well or better - I know that, because I've done it at work for years. But creating a somewhat complex app with lots of pages/features/APIs etc. would take me months if not a year++, since I'd be working on it only on the weekends for a few hours. Claude Code helps me out by getting me to my goal in a fraction of the time. Its superpower lies not only in doing what I know but faster, but in doing what I don't know as well.

I yield similar benefits at work. I can wow management with LLM-assisted/vibe-coded apps. What previously would've taken a multi-man team weeks of planning and executing, stand-ups, jour fixes, architecture diagrams, etc. can now be done within a single week by myself. For the type of work I do, managers do not care whether I could do it better if I'd code it myself. They are amazed however that what has taken months previously, can be done in hours nowadays. And I for sure will try to reap benefits of LLMs for as long as they don't replace me rather than being idealistic and fighting against them.


> What previously would've taken a multi-man team weeks of planning and executing, stand ups, jour fixes, architecture diagrams, etc. can now be done within a single week by myself.

This has been my experience. We use Miro at work for diagramming. Lots of visual people on the team, myself included. Using Miro's MCP I draft a solution to a problem and have Miro diagram it. Once we talk it through as a team, I have Claude or codex implement it from the diagram.

It works surprisingly well.

> They are amazed however that what has taken months previously, can be done in hours nowadays.

Of course they're amazed. They don't have to pay you for time saved ;)

> reap benefits of LLMs for as long as they don't replace me

> What previously would've taken a multi-man team

I think this is the part that people are worried about. Every engineer who uses LLMs says this. By definition it means that people are being replaced.

I think I justify it in that no one on my team has been replaced. But management has explicitly said "we don't want to hire more because we can already 20x ourselves with our current team +LLM." But I do acknowledge that many people ARE being replaced; not necessarily by LLMs, but certainly by other engineers using LLMs.


I'm still waiting for the multi-year success stories. Greenfield solutions are always easy (which is why we have frameworks that automate them). But maintaining solutions over years is the true test of any technology.

It's already telling that nothing has staying power in the LLMs world (other than the chat box). Once the limitations can no longer be hidden by the hype and the true cost is revealed, there's always a next thing to pivot to.


That's a good point. My best guess is the companies that have poor AI infrastructure will either collapse or spend a lot of resources on senior engineers to either fix or rewrite. And the ones that have good AI infrastructure will try to vibe code themselves out of whatever holes they dig themselves into, potentially spending more on tokens than head count.

> but in doing what I don't know as well.

Comments like these really help ground what I read online about LLMs. This matches how low performing devs at my work use AI, and their PRs are a net negative on the team. They take on tasks they aren’t equipped to handle and use LLMs to fill the gaps quickly instead of taking time to learn (which LLMs speed up!).


This is good insight, and I think honestly a sign of a poorly managed team (not an attack on you). If devs are submitting poor quality work, with or without LLM, they should be given feedback and let go if it keeps happening. It wastes other devs' time. If there is a knowledge gap, they should be proactive in trying to fill that gap, again with or without AI, not trying to build stuff they don't understand.

In my experience, LLMs are an accelerator; it merely exacerbates what already exists. If the team has poor management or codebase has poor quality code, then LLMs just make it worse. If the team has good management and communication and the codebase is well documented and has solid patterns already (again, with or without llm), then LLMs compound that. It may still take some tweaking to make it better, but less chance of slop.


Might be true for you. But there are plenty of top tier engineers who love LLMs. So it works for some. Not for others.

And of course there are shortcuts in life. Any form of progress, whether it's cars, medicine, computers or the internet, is a shortcut. It makes life easier for a lot of people.


How can you know that the 100k-line plan is not just slop?

Just because plan is elaborate doesn’t mean it makes sense.


Dunno. My 80k+ LOC personal life planner, with a native Android app and an e-ink display view, still one-shots most features/bugs I encounter. I just open a new instance, let it know what I want, and 5 min later it's done.

Both can be true. I have personally experienced both.

Some problems AI surprised me immensely with fast, elegant efficient solutions and problem solving. I've also experienced AI doing totally absurd things that ended up taking multiple times longer than if I did it manually. Sometimes in the same project.


If you wouldn't mind sharing more about this in the future I'd love to read about it.

I've been thinking about doing something like that myself because I'm one of those people who have tried countless apps but there's always a couple deal breakers that cause me to drop the app.

I figured trying to agentically develop a planner app with the exact feature set I need would be an interesting and fun experiment.


Same as you, I tried all sorts of apps: Todoist, Habitica, Fantastical, Wunderlist, Karakeep, and more, for all sorts of things. All are OK for most users, but none was GREAT for me. That's why I thought to give LLMs a proper spin: get all the features I want, in the way I want. And anything I can come up with or see on the web over time, I can add myself within minutes instead of waiting months for some 3rd party to perhaps add it. Plus no subscription fee.

You can see my app in action here: https://news.ycombinator.com/item?id=47119434

While I can share the code and it might make a good foundation for forks, creating one from scratch with claude's $100 subscription will take ~2-3 weeks to get it into the state you see in the video. And that is me prompting the LLM for ~30-60 minutes most days of those 2-3 weeks.


That's awesome and a real motivator for me to try it myself. Especially since my employer is giving me a ton of credits for exploration I haven't been maximising to the point of hitting a usage limit.

Very cool and thanks for sharing.


In 5 min you are one-shotting smaller changes to the larger code base, right? Not the entire 80k lines, which was the other comment's point AFAICT.

Yeah, then I guess I misunderstood the post. It's smaller features one by one, of course.

What is a personal life planner?

Todos, habits, goals, calendar, meals, notes, bookmarks, shopping lists, finances. More or less that, with Google Calendar integration, Garmin integration (auto-updates workout habits and weight goals), family sharing/gamification, daily/weekly reviews, AI summaries and more. All built by just prompting Claude for feature after feature, with me writing 0 lines.

Ah, I imagined actual life planning as in asking AI what to do, I was morbidly curious.

Prompting basic notes apps is not as exciting, but I can see how people who care about that also care about it being exactly a certain way, so I think I get your excitement.


Is it on GH?

It was when I MVP'd it 3 weeks ago. Then I removed it as I was toying with the idea of somehow monetizing it. Then I added a few features which would make monetization impossible (e.g. how the app obtains ETF/stock prices live, and some other things). I reckon I could remove those and put it on GH during the week if I don't forget. The quality of the web app is SaaS-grade IMO: keyboard shortcuts, cmd+k, natural-language parsing, a great UI that doesn't look like it was made by AI in 5 min. Might post the link here.

Would love to check it out too once you put it up.

Here's a sneak peek at how it looks and what it is. If there's still appetite for the source code, I'll probably drop a GH link by the end of the week: https://streamable.com/amdz92
