Just like with everyuuid.com, you too can explore the space of all possible Claude code buddies. And find legendary shiny wise ducks. And install them by patching your binary on macOS.
We use produce and groceries we were already selling as parts of our kits, as well as partially prepared food we prep/cook in-house. 80% of what we sell is made within 200 miles of SF.
We also assume you have things like olive oil. Hah.
Also worth mentioning is that we hold onto produce for usually less than 24 hours from the farm to your door.
Let me know if you'd like a beta invite to try the kits out. (Or to talk about Operations Engineering, my team; it's fascinating and full of classic compsci problems.)
Super interesting. It's a shift from straight meal kits that are more "healthy meals that you don't have time to shop for" to more holistic kitchen foodstuffs management.
It's more like a personalized Instacart in that way. Could see this going the way of "Trunk Club for the kitchen"
I saw Good Eggs on Nextdoor and learned a little bit about you guys there. I think I'll definitely check out the service. One thing I'd like to know is the average calorie count of the meals in the dinner kits. I couldn't find any information about this. I like my dinners to be on the higher-calorie side - anywhere in the general range of 700-1000. If you guys hit that consistently, then the dinner service would be pretty awesome for me.
That's a great question. We haven't done the calorie calculations yet but they're full of healthy fats. Each meal is about three servings, if that helps any.
This is wonderful. I read the whole series in one sitting. It actually made video codecs feel way more approachable, rather than some patented black box magic I'll never understand.
It also reminded me of a recent article talking about how you can break audio codecs by guessing which quantizer was used by the packet, then using it in reverse to produce speech! Which I suppose is obvious in retrospect, that lossy codecs are trying to compress data by making it perceptually similar, whatever the domain.
I also appreciated the ties to video game networking. Gaffer on Games has had a long-running series on designing multiplayer networking protocols with UDP and you two approach bit-shaving very similarly (unsurprisingly I suppose - it's a very specific process with its own tools).
It was a blast to write. Glenn is a smart guy with some great content around game networking. There are good ways to do networking for games and other real-time applications and TCP isn't really one of them.
This is a variant of "should you compress or encrypt first?"
Compression relies on pattern matching, and the compressed size will leak details about what you compressed, even if the result is encrypted. (Unless you then pad the encrypted output, but then what was the point of compressing? There are more or less secure ways to do this, like establishing a compression-ratio/bandwidth/entropy budget, then padding to meet that constraint so each encrypted payload looks roughly the same, but latency sensitivity makes this difficult.)
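A toy sketch of that size side channel (everything here is made up for illustration: the secret, the guesses, and the assumption that the attacker can observe ciphertext lengths and that encryption preserves plaintext length, as a stream cipher does):

```python
import zlib

# Hypothetical scenario: a secret value is compressed together with
# attacker-controlled input before encryption. Encryption hides the
# content but not the length, and a guess that matches the secret
# compresses better (the idea behind the CRIME attack on TLS).
SECRET = b"session=hunter2"

def observed_length(attacker_guess: bytes) -> int:
    # The attacker only ever sees this number on the wire.
    payload = SECRET + b"&q=" + attacker_guess
    return len(zlib.compress(payload))

right = observed_length(b"session=hunter2")   # duplicates the secret
wrong = observed_length(b"ABCDEFGHIJKLMNO")   # same length, no repetition
print(right < wrong)  # the correct guess yields a shorter ciphertext
```

Padding every payload to a fixed size would close this particular leak, at the cost of the bandwidth the compression was supposed to save.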
In the case of VoIP, the codec uses a lookup table of distinct parts of speech (tch, sp, buh, etc.). Then "all it has to send" is table indices (a simplification, certainly). On the receiving side, you just look up the entries in your speech table and reconstruct the audio.
These values have distinct output patterns, particularly when compressed. If you can guess better than 70% of the time (I forget the exact number they achieved) which table value was used, then reconstruct it, you can listen in on what's being said without having to break the underlying encryption.
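The mechanism above can be sketched in a few lines. This is a toy model, not a real codec: the codebook entries and frame sizes are invented, and real attacks work statistically over noisy length distributions rather than an exact size-to-entry map.

```python
# Hypothetical VBR-style speech codec: each codebook entry happens to
# produce a frame of a distinct size. Length-preserving encryption
# hides the bytes but not the sizes, so an eavesdropper who learns
# the size profile can read the sequence back off the wire.
CODEBOOK = {"tch": 3, "sp": 5, "buh": 7, "aa": 9}  # entry -> frame bytes
SIZE_TO_ENTRY = {size: entry for entry, size in CODEBOOK.items()}

def transmit(phonemes):
    # All the attacker observes is the length of each encrypted frame.
    return [CODEBOOK[p] for p in phonemes]

def eavesdrop(frame_lengths):
    # Map observed sizes back to codebook entries.
    return [SIZE_TO_ENTRY[n] for n in frame_lengths]

lengths = transmit(["buh", "aa", "tch"])
print(eavesdrop(lengths))  # recovers ['buh', 'aa', 'tch'] from sizes alone
```

This is why padding frames to a constant bitrate (as constant-bitrate codec modes do) is the usual mitigation, despite the bandwidth cost.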
Voice codecs are also awful at encoding music, which may explain why, when you're on hold, the hold music sometimes just gets dropped and replaced with white noise once it hits some bandwidth cap. Cf. video encoding and falling snow.
Hey, cool. Always nice to see this work show up on HN. But I don't think this is the paper you're looking for. In '08, we could only spot phrases that we knew in advance, and they had to be at least a certain length.
The most impressive results -- going from encrypted VoIP to text -- were done by Andy White and others, a couple years after the paper you linked above. It's this one:
A. M. White, A. R. Matthews, K. Z. Snow, and F. Monrose. "Phonotactic Reconstruction of Encrypted VoIP Conversations: Hookt on fon-iks." In Proceedings of IEEE S&P, 2011.
http://www.cs.unc.edu/~fabian/papers/foniks-oak11.pdf
On second thought, that makes some sense. Regional offices mostly do localization, sales (with support), and legal adaptation. Any company that wants to sell in several countries needs that.
More than a hundred employees per office looks oversized, but not extremely so.
The only question still open is whether Twitter needed all this infrastructure to become a viable business, or if they could have opened smaller offices and grown to that size organically. But even if Twitter didn't need it, it's reasonable to imagine that some business would.
I guess my question of what a company could spend 1 billion of investment on is answered.
Esper is really curious. I looked at it while getting a feel for the general landscape of streaming data tools in late 2013 - to be clear, it was a very cursory look, on the timescale of a few hours at most. I quote what my final take-away was at the time:
> Seems good but... such a weird project. Codehaus, svn, not high activity, but consistent, stable releases for five years. Maybe just not the kind of thing webdevs get into? Not sure if that's a strike against or not.
With the demise of Codehaus, it looks like they've moved to GitHub:
But oddly they don't seem to have migrated their svn history, and the README implies they don't plan to... I certainly hope it wasn't lost when Codehaus shut down. As noted, there were already nine years' worth of code changes in that repo. That would be unfortunate.
Recently I've been trying to find people to do IT tasks, but we're not at the full-time IT level, we're still a small business.
Stuff like:
- Setting up a RAID0 dev Ubuntu box and transferring my data to it
- Setting up a VPN to EC2 for both our office (using our router that does VPN) and our personal machines (for when we're away from the office)
- Diagnosing office internet issues
There are usually local companies that do IT services if you live in or near any major city. Check with other people you know or check Yelp for references and reviews.
This fits very neatly with Casey Muratori's "Compression-Oriented Programming," which is essentially: write your usage code first, keep YAGNI in mind, then refactor.
I was once asked in an interview whether, on a new project, I would start with a monolithic app or some sort of SOA. ("Microservices Architecture" is the new SOA)
I answered that I would start with a monolith because usually you're trying to find product/market fit as quickly as possible, and having an SOA would likely slow you down due to its upfront cost. (How many services? Any redundancy? What do they do? How many databases? How do you keep them up? How do you diagnose when they're failing? etc)
They responded that they always do SOA, that the benefits are clear, that monolithic apps are idiotic, etc.
I didn't want the job... but I was confused about our differing opinions.
What I've learned since then is basically: are you constructing a building, or making an art installation?
Construction is thousands of years old. Contractors have huge tables of how long each part of the process takes, in what order, down to the quarter-hour (in some cases), and they're fairly accurate in their estimates. (They're still hilariously wrong on occasion, either in time or budget; building anything is pretty difficult.)
In that world, architecting an SOA up front works out, because you've done it before. You know roughly how long it takes, what the pitfalls are, what support infrastructure you need, etc.
When you're making art, making something new, with new materials, without a manual, with only some best practices in mind, time becomes essentially unbounded.
Upfront architecture in this case would be poorly suited to the situation - you'll probably end up changing it a lot, and each time you introduce a bit of "rigor" to the system it becomes a bit more difficult to change. Especially over socket boundaries and different API versions.
I also feel like bad code has a survivorship bias - you only hear about it because the company took off. To get the company to take off, the code perhaps was necessarily rushed just so the company could stay around long enough to make money.
"Ah, but if we could have done it right in the first place!"
You don't hear about the companies who die, no matter the quality of their code.