In big orgs, 'agents can build it' rarely changes the buy-vs-build decision. The pragmatic moat I see isn't the code; it's turning AI work into something finance and security can trust. If you can't measure and control failure cost at the workflow level, you don't have software.
I'm building an OTel-based SDK that wraps the billable edges (entrypoint, LLM/tool clients, async publish/consume) and emits both traces for debugging and a lightweight event ledger for the run/attempt lifecycle and call boundaries. I define the workflow and its possible outcomes up front, then attribute every run and attempt to the final outcome event to get the cost per outcome.
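To make that concrete, here's a minimal Python sketch of the shape, using only the OpenTelemetry tracing API. Everything else (record_event, the in-memory ledger, fake_llm_call, the span and attribute names) is a made-up placeholder to illustrate the pattern, not the actual SDK:

    import uuid
    from opentelemetry import trace  # pip install opentelemetry-api

    tracer = trace.get_tracer("cost-demo")
    ledger: list[dict] = []  # stand-in for a durable event store

    def record_event(run_id: str, kind: str, **attrs) -> None:
        ledger.append({"run_id": run_id, "kind": kind, **attrs})

    def fake_llm_call(prompt: str) -> tuple[str, float]:
        # placeholder for a wrapped LLM client; returns (text, cost in USD)
        return f"summary of: {prompt}", 0.0021

    def run_workflow(prompt: str) -> str:
        run_id = str(uuid.uuid4())
        record_event(run_id, "run.started", workflow="summarize")
        with tracer.start_as_current_span("workflow.summarize") as span:
            span.set_attribute("run.id", run_id)
            # billable edge: capture cost at the call boundary
            with tracer.start_as_current_span("llm.call") as llm_span:
                text, cost_usd = fake_llm_call(prompt)
                llm_span.set_attribute("llm.cost_usd", cost_usd)
                record_event(run_id, "llm.call", cost_usd=cost_usd)
            # outcome is one of the values declared when defining the workflow
            record_event(run_id, "run.outcome",
                         outcome="success" if text else "failure")
        return text

    def cost_per_outcome(outcome: str) -> float:
        # join every attempt's spend to runs whose final outcome matches
        runs = {e["run_id"] for e in ledger
                if e["kind"] == "run.outcome" and e["outcome"] == outcome}
        return sum(e.get("cost_usd", 0.0) for e in ledger
                   if e["run_id"] in runs and e["kind"] == "llm.call")

    run_workflow("q3 board deck")
    print(cost_per_outcome("success"))  # 0.0021

The point is the join at the end: because every billable call carries the run_id, the final outcome event lets you roll all spend (including failed attempts) up to the outcome it ultimately bought.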
Hi John,
I saw you were looking for interesting startup opportunities! We're looking for people who share our passion for creating and fostering relationships and self-growth. We have a small team of engineers and are looking to grow in the next few months. We have a strong core team culture of authentic caring, and we're looking for engineers who are interested in being mentored and mentoring, working with our product team to design larger features, and helping us grow our engineering culture.
We're a dating service that uses human matchmakers. Founded by Stanford alumni in 2012, we've built a successful platform that generates over $30m/year with little capital raised. Our team is diverse, with a leadership team that's 50/50 men and women, and backgrounds ranging from life coaching to writing to engineering.
Are you free for a phone call sometime this week or next to learn more about the company? If so, please schedule time on my calendar using the link below. https://calendly.com/erica-gacon
Take care, Erica
The original Flajolet–Martin probabilistic counting algorithm requires each input to pass through multiple independent hash functions, which is computationally expensive. The workaround is to use a single hash function and take part of its output to route each value into one of m buckets, simulating a situation in which we had m different hash functions. This costs almost nothing in accuracy but saves computing many independent hashes. The procedure is called stochastic averaging, and it has a predictable bias toward larger estimates. Durand and Flajolet corrected this bias in an algorithm called LogLog. HLL uses a different type of averaging: instead of the geometric mean used in LogLog, Flajolet et al. proposed the harmonic mean, which is far less sensitive to a few outlier buckets.
https://engineering.fb.com/2018/12/13/data-infrastructure/hy...
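To make the bucketing concrete, here's a toy HyperLogLog in Python: one SHA-1 hash per item, the top B bits pick the bucket, the leading-zero rank of the remaining bits feeds the register, and the estimate combines registers with a harmonic mean. The constants (64 buckets, alpha = 0.709) follow the HLL paper, but the small- and large-range corrections are omitted, so treat it as a sketch rather than a faithful implementation:

    import hashlib

    B = 6                # 2**6 = 64 buckets
    M = 1 << B
    registers = [0] * M

    def add(item: str) -> None:
        # one 64-bit hash per item (first 8 bytes of SHA-1)
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        bucket = h >> (64 - B)             # top B bits choose the bucket
        rest = h & ((1 << (64 - B)) - 1)   # low 58 bits estimate the rank
        # rank = 1-indexed position of the leftmost 1-bit (59 if rest == 0)
        rank = (64 - B) - rest.bit_length() + 1
        registers[bucket] = max(registers[bucket], rank)

    def estimate() -> float:
        alpha = 0.709  # bias-correction constant for m = 64 buckets
        # harmonic mean of 2**register across buckets, per Flajolet et al.
        return alpha * M * M / sum(2.0 ** -r for r in registers)

    for i in range(10_000):
        add(f"user-{i}")
    print(round(estimate()))  # ~10000; std error is ~1.04/sqrt(64) = 13%

One hash call per item does all the work that m separate hash functions would have done, which is exactly the stochastic-averaging trick.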