Hacker News | lyjackal's comments

it's Friday, and I'm avoiding doing work


Ha! Whelp, it is my weekend.


I notice that the Python and TypeScript versions are pretty different. Python is sort of class based, with decorator magic:

    class MyWorkflow(Workflow):
        @step
        async def start(self, ctx: Context, ev: StartEvent) -> MyEvent:
            num_runs = await ctx.get("num_runs", default=0)
whereas TS is sort of builder/function based:

    import { createWorkflow, workflowEvent } from "@llamaindex/workflow-core";
    
    const convertEvent = workflowEvent();
    
    const workflow = createWorkflow();
    
    workflow.handle([startEvent], (start) => {
      return convertEvent.with(Number.parseInt(start.data, 10));
    });

Is there a reason for this?

Yea, good callout -- Python workflows came first, and while we could have directly translated them, the ergonomics around classes in Python are not exactly what JS/TS devs expect.

So instead, the goal was to capture the spirit of event-driven workflows, implement them in a more TS-native way, and improve the dev UX for those developers. This means it might be harder to jump between the two, but I'd argue most people aren't doing that anyway.


Agree, thanks for the link. I was wondering what actually changed. The resource links and elicitation look like useful functionality.


I've been curious to see the process for selecting relevant context from a long conversation. Has anyone reverse engineered what that looks like? How is the conversation history pruned, and how is the latest state of a file represented?


We didn't look into that workflow closely, but you can reproduce our work (code is on GitHub) and potentially find some insights!

We plan to continue investigating how it works (+ optimize the models and prompts using TensorZero).


I recently did something similar. Using uv workspaces, I used the uv CLI's dependency graph to analyze the dependency tree, then conditionally triggered CI workflows for the affected projects. I wish there were a better way to access the uv dependency tree other than parsing the `tree`-like output.
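FWIW, the parsing itself doesn't have to be much code. A sketch in the spirit of `uv tree`'s box-drawing output -- the sample text and the 4-characters-per-level assumption are mine, and the exact glyphs may differ between uv versions:

```python
import re

# Hypothetical sample in the style of `uv tree` output.
SAMPLE = """\
app v0.1.0
├── lib-a v1.0.0
│   └── lib-c v2.0.0
└── lib-b v1.2.0
"""

def parse_tree(text):
    """Return {package: [direct dependencies]} from tree-style output.

    Depth is inferred from the width of the box-drawing prefix:
    each level adds 4 characters ("├── ", "│   ", etc.).
    """
    deps = {}
    stack = []  # package name at each depth
    for line in text.splitlines():
        m = re.match(r"^([│├└─ ]*)(\S+) v\S+$", line)
        if not m:
            continue
        prefix, name = m.groups()
        depth = len(prefix) // 4
        deps.setdefault(name, [])
        if depth > 0:
            deps[stack[depth - 1]].append(name)
        del stack[depth:]  # drop deeper entries from the previous branch
        stack.append(name)
    return deps
```

Brittle against format changes, which is exactly why a machine-readable output (e.g. JSON) from uv would be nicer.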


I use GitHub Actions triggers to pass flags to a monorepo Dagger script that builds and tests the affected components. For example, if a commit touches both the front end and back end, rebuild both. If it only touches the front end, run integration tests against the latest back end without rebuilding it.

edit: spell out GHA


Yea, this definitely makes sense for smaller monorepos. For us, we ended up writing our own dependency-graph parser to figure out which tests to run (which is easy enough with a single language like Python, honestly)
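For anyone curious, the heart of that kind of parser is just reverse reachability over the import graph. A minimal sketch with a made-up graph (the module names and the `tests/` naming convention are hypothetical):

```python
from collections import deque

# Hypothetical import graph: module -> modules it imports.
# In a real repo this would come from parsing import statements.
IMPORTS = {
    "tests/test_api": ["app/api"],
    "app/api": ["app/core"],
    "app/core": [],
    "tests/test_cli": ["app/cli"],
    "app/cli": ["app/core"],
}

def affected(changed):
    """Return every module that transitively imports a changed module."""
    # Invert the graph: module -> modules that import it.
    rdeps = {m: [] for m in IMPORTS}
    for mod, imports in IMPORTS.items():
        for dep in imports:
            rdeps[dep].append(mod)
    # BFS outward from the changed modules.
    seen = set(changed)
    queue = deque(changed)
    while queue:
        for dependent in rdeps.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Only run the test modules that are actually affected by the change.
tests_to_run = {m for m in affected({"app/core"}) if m.startswith("tests/")}
```

Anything `affected` returns that looks like a test module gets run; everything else is skipped.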


Was Bazel an option?


We used Pants initially (which I believe is similar to Bazel). The dependency graphing it does was indeed very helpful, but other parts of the tool motivated us to create something more bespoke and debuggable (we were only using maybe 20% or less of the features Pants offers)


GHA - GitHub Actions, right?


I agree! I hope uv introduces more tools for monorepos or refines the workspaces concept.

I saw that workspaces require all dependencies to agree with each other, which isn't quite possible in our repo


This is cool and something that I’ve wanted, but I don’t see how listing requirements inline foregoes the need for a lock file to maintain reproducibility. What about version ranges? Or the versions of transitive dependencies?
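If this refers to inline script metadata (PEP 723, the format tools like uv read for single-file scripts), it looks roughly like this sketch -- the `requests>=2.31` range is illustrative:

```python
# A hypothetical script header using PEP 723 inline metadata:
#
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "requests>=2.31",  # a range, not an exact pin
# ]
# ///
#
# The header can express version ranges, but it says nothing about
# which exact versions (or transitive dependencies) get installed on
# a given day -- which is the reproducibility gap a lock file closes.
INLINE_RANGE = "requests>=2.31"
print(INLINE_RANGE)
```

So the question stands: ranges live in the header, exact pins would still need a lock file somewhere.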



With mobile 4G USB sticks you can usually rotate your IP address by reconnecting. I tried it on a Pi; it was inconsistent. This was just with some random test mobile plan from a rando carrier renting off Verizon, I think


An idea for more complete coverage: have two of them and invert their intervals, so that exactly one is always on


This will work for a while, until thieves learn to check for multiple AirTags. Better that none are detectable in the first place


Yes, that's also a good strategy.


I’ve wondered whether they use thanks as a signal of a conversation well done, for the purposes of future reinforcement learning


I'd speculate that they use slightly more complicated sentiment analysis. This has been a thing since long before LLMs.


I don't know whether they do, or whether it's efficient, but it's possible.


It's more the content creators who bear the brunt of the toxic rage. Who you follow doesn't solve that problem.


> the content creators

This is IMO the problem. I don't use these sites to follow "content creators". For the most part I'm following normal people who happen to say things I find interesting.


I don't think they were saying it's a problem for people following content creators. It's more a problem for the content creators themselves, because they usually want the greatest reach possible, so they want to be on platforms that people use, which requires them to put up with the emotional swings of those platforms' userbases.

If you want to say you don't care about having content creators on your platform, that's at least a coherent take. But you still have to think about the business models of the platforms that keep them around-- short of collecting payments from every ordinary user, there needs to be buy-in from someone wanting reach, whether that's corporate accounts, individual content creators, or someone else. And do you actually know all of those "normal people who happen to say things you find interesting" in real life, or did you find some of them online, i.e. they're basically influencers/content creators with you as an audience member?


That is indeed what I'm saying. I treat social media more like I treated Usenet back in the day. To me that's a superior model to the influencer model.

