Amazing outcome, congratulations! Back in 2015, working at Gusto as the first growth engineer, I remember reading the VWO blog to understand how A/B testing works.
If you're an e-commerce brand or tech startup: I'm the CEO of Afternoon.co (YC F25), providing the same services as Bench, including year-end tax filing. We're ready to onboard you ASAP; just email me at roman@afternoon.co
Amazing post, read through all the blog posts in one sitting. It would be great to have a final blog post on the assembly process.
On a different note, it's mind-blowing that today one person can do small-scale design and manufacturing of a consumer electronics product. Super inspiring.
Not just the complexity, but the absurd amount of human effort behind every object around us.
As John Collison tweeted: "As you become an adult, you realize that things around you weren't just always there; people made them happen. But only recently have I started to internalize how much tenacity everything requires. That hotel, that park, that railway. The world is a museum of passion projects."
During the hackathon the team only did a simulated flight, not a real one, so take the effectiveness results with a grain of salt. In any environment with significant seasonal changes, localization based on Google Maps imagery will be much harder.
Every 5 days, a satellite from the Sentinel mission takes a picture of your location; it's 8 days for the Landsat mission. That data is publicly available (I encourage everyone to use it for environmental studies; I think any software person who cares about the future should).
It's obviously not the same precision as Google Maps imagery, and it needs updating, but it's enough to take into account seasonal changes and even brutal events (floods, war, fire, you name it).
I don't know where you live, but the default search at https://apps.sentinel-hub.com/eo-browser/ uses Sentinel-2 (true color), 100% maximum cloud coverage (which means any image), and the latest date.
So you should be able to find the tile of your region at a date close to today, definitely not 4-6 months old.
It occurred to me that in a war, or over water, this wouldn't be useful. But I think it would be a useful technology (one that, to be fair, likely already exists), alongside highly accurate dead reckoning systems, as a secondary fallback for navigation when GPS is knocked out or unreliable.
Why do you say that? Navigational techniques like this (developed and validated over longer timeframes, of course) are designed precisely for war, where your enemies try to stop you by jamming GPS.
This is not just an idea; systems have already been fielded.
> over the water this wouldn’t be useful
What is typically done with cruise missiles launched from sea is that a wide sweep of the coast is mapped where the missile is predicted to make landfall. How wide this zone has to be depends on the performance of the inertial guidance and the quality of the fix it starts out with.
For the human eye, maybe. For a computer using statistics, less so. Extracting signals from under a mountain of noise is a long-solved problem; all of our modern communication is based on it.
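A toy illustration of that statistics point (nothing here is from the original discussion, just a generic matched-filter sketch with synthetic data): a known reference pattern buried well below the per-sample noise floor can still be located reliably by cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A known 512-sample reference pattern (e.g. a feature template).
template = rng.standard_normal(512)

# Bury it in noise ~3x its amplitude (per-sample power SNR ~1/9),
# so it is invisible to the eye in the raw trace.
signal = rng.standard_normal(8192) * 3.0
true_offset = 2000
signal[true_offset:true_offset + 512] += template

# Matched filter: cross-correlate with the template and take the peak.
corr = np.correlate(signal, template, mode="valid")
estimated = int(np.argmax(corr))
print(estimated)  # recovers true_offset
```

The correlation averages the noise down across the whole template length, which is why the peak stands out even when each individual sample is dominated by noise.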
That is all really interesting speculation, but I'm not describing a system that could be, rather one that is already available and fielded. In cruise missiles it is called DSMAC.
Basically inertial guidance enhanced by terrain matching. Which is great, but terrain matching as a stand-alone is pretty useless. And it still requires good map data. Fine for a cruise missile launched from a base or a ship; it becomes an operational issue for cheap throwaway drones launched from the middle of nowhere.
Well, if you combine it with dead reckoning, I guess even a war-torn field could be referenced against a pre-war image?
I mean, a prominent tree along a stone wall might be sufficient to be fairly sure, if you at least have some idea of the area you're in via dead reckoning.
And dead reckoning has been standard in anything military for decades anyway.
As an added data source to improve navigation accuracy, the approach sure is interesting (I am no expert in nav systems, just remotely familiar with some of them). Unless the approach is tried in real-world scenarios and developed to proper standards, though, we won't see it used in a military context. Or in civilian aerospace.
Especially since GPS is dirt cheap and works just fine for most drone applications (GPS, Galileo, GLONASS, it doesn't matter).
For a loitering drone, I imagine dead reckoning would cause significant drift unless corrected by external input. GPS is great when it's available, but it can be jammed.
I was thinking along the lines of preprocessing satellite images to extract prominent features, then using modern image processing to try to match against the observed features.
A quite underconstrained problem in general, but if you have a decent idea of where you should be thanks to dead reckoning, then perhaps quite doable?
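A minimal sketch of that idea, purely illustrative (synthetic random data stands in for a real preprocessed satellite map, and plain correlation stands in for a real feature matcher): the dead-reckoning estimate bounds the search window, which turns the underconstrained global problem into a small local one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: a preprocessed satellite feature map, and the
# drone camera's view (the map patch at the true position plus noise).
sat_map = rng.random((512, 512))
true_y, true_x = 300, 180
patch = sat_map[true_y:true_y + 32, true_x:true_x + 32] \
    + rng.normal(0.0, 0.05, (32, 32))

# Dead reckoning gives a rough prior; search only +/-40 px around it.
prior_y, prior_x = 290, 170
r = 40
best, best_score = None, -np.inf
for y in range(max(prior_y - r, 0), min(prior_y + r, 512 - 32)):
    for x in range(max(prior_x - r, 0), min(prior_x + r, 512 - 32)):
        window = sat_map[y:y + 32, x:x + 32]
        score = float((window * patch).sum())  # plain correlation score
        if score > best_score:
            best, best_score = (y, x), score
print(best)  # position estimate within the prior window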
You can't use visual key-points to navigate over open water.
You can use other things like visual odometry, but there are better sensors/techniques for that.
What it can do, if you have a big enough database onboard and descriptors trained on the right thing, is give you a location fix when you hit land.
And for a little less, you can buy the original, from Analog Devices.[1]
Those things are getting really good. The drift specs keep getting better - a few degrees per hour now. The last time I used that kind of thing it was many degrees per minute.
Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly. A drone with a downward looking camera can get a vector from simple optical flow and use that to correct the IMU. Won't work very well over water, though.
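A rough sketch of that optical-flow correction with synthetic frames (phase correlation over the whole frame rather than a full flow field, just to show the principle): two downward-camera views of the same ground texture, shifted by the drone's motion, yield the displacement vector as a correlation peak.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground texture seen by a downward-looking camera.
frame0 = rng.random((256, 256))
# Next frame: the same texture shifted by the drone's motion.
shift = (7, -3)
frame1 = np.roll(frame0, shift, axis=(0, 1))

# Phase correlation: the inverse transform of the normalized
# cross-power spectrum peaks at the translation between frames.
F0 = np.fft.fft2(frame0)
F1 = np.fft.fft2(frame1)
cross = F1 * np.conj(F0)
corr = np.fft.ifft2(cross / np.abs(cross)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Wrap FFT indices to signed displacements.
dy = int(dy - 256 if dy > 128 else dy)
dx = int(dx - 256 if dx > 128 else dx)
print(dy, dx)  # 7 -3
```

This is also why it fails over water: featureless or moving texture gives no stable correlation peak to lock onto.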
> Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly.
An INS will usually need some kind of sensor fusion to become accurate anyway. Like how certain intercontinental ballistic missiles use stars (sometimes only a single one) as a reference. But all of these rely on a clear line of sight, and even this Google Maps image-based navigation will fail in bad weather.
As with Stable Diffusion, text prompting will be the least controllable way to get useful output from this model. I can easily imagine MIDI being used as an input with ControlNet to essentially get a neural synthesizer.
Yes. Since working on my AI melodies project (https://www.melodies.ai/) two years ago, I've been saying that producing a high-quality, finalized song from text won't be feasible or even desirable for a while, and it's better to focus on using AI in various aspects of music making that support the artist's process.
Text will be an important input channel for texture, sound type, voice type, and so on. You can't just use input audio; that defeats the point of generating something new. Nor can you use only MIDI: the model still needs to know what sits behind those notes, what performance, what instrument. So we need multiple channels.
Emad hinted here on HN, the last time this was discussed, that they were experimenting with exactly that. It will come, whether from them or from someone else, and quickly.
Text prompting is just a very coarse tool to quickly get some base to stand on; ControlNet is where human creativity enters again.
I think it would be ideal if it could take an audio recording of humming or singing a melody, together with a text prompt, and spit out a track that resembles it.
It's crazy that nobody cares. It seems to me that ML hype trends focus on denying skill and disproving creativity by denoising random noise into outputs indistinguishable from human work, and to me this whole chain of negatives doesn't seem to have proven its worth.
LLMs allow people without certain skills to be creative in forms of art that would otherwise be inaccessible to them.
With DALL-E, I can get an image of something I have in my head without investing hundreds of hours in watching Bob Ross (which I do anyway).
With audio generators, I can produce the music that is in my head without learning to play an instrument or paying someone to do it. I still have to arrange it correctly, but I can put out a techno track without spending years learning the intricacies.
Has anyone analyzed where the employees laid off over the last three years went? According to layoffs.fyi, nearly 500k people have been laid off since 2021. It would be interesting to see whether people mostly reshuffled within FAANG or whether there was a more structural talent migration, e.g. from megacorps to startups.
No, you still count as unemployed even if you haven't had a job for 12 months, as your own link attests. The question is whether you've looked for a job at any point in the past year, not whether you've held one.
More broadly, the US government publishes six different measures of unemployment, all of which can be seen here: https://www.bls.gov/news.release/empsit.t15.htm . The media usually focuses on U-3 (currently 3.7%), whereas the broadest metric is U-6 (currently 7.2%).
Your comment is needlessly pedantic, as I linked the full answer and said "~12 months," which is "about twelve months" and not incorrect, though I'm sure the average is probably closer to 18-20 months for being removed from the U-3 stat.
Or didn't seriously try to. I know people who almost certainly did pretty well over the past 15 years or so, got caught up in some layoff, and retired or semi-retired--probably a few years earlier than they would have done in different circumstances.
That's kind of what I've done. I looked around some last year, didn't see much out there, and had a few interviews here and there that usually ended in ghosting. After the unemployment ran out, I stopped looking for a while. Then I started looking again recently because I'm not sure I'm ready to be retired yet.
There are volunteer or more-or-less volunteer things you can do but you need to find the right open source project (or start one), find a channel where people will actually read what you write and do it regularly enough, have connections to people who actually want your advice, etc. Once you're not connected to an organization some of those things are harder than they seem unless you have specific, still-relevant credentials.
Or who ended up underemployed. I know one guy who ended up stocking shelves at Home Depot to keep food on the table for the family until the job market turns around. Technically employed, but not where he ought to be.
Depends on the other industries, I guess. It's pretty easy to find service jobs right now. The other thing I'm noticing is that the hourly rates being offered for contracts are a lot less than they were 2 years ago.
In 2021, Sequoia switched its strategy to hold on to stock in its portfolio companies for a few more years after IPO.
"Sequoia is abandoning the 10-year venture fund, in which limited partners, the outside investors that contribute to the fund, expect to get paid back over a decade. The firm said it’s establishing a single fund, the Sequoia Fund, that will raise money from LPs and then funnel that capital down to a series of smaller funds that invest by stage.
Proceeds from those funds will feed back into the Sequoia Fund. With no time horizon, Sequoia can hold public stock for longer stretches, rather than distributing those shares to LPs. Investors who want liquidity can pull money out instead of waiting for distributions."
Unfortunately that's not true. Most beautiful stuff has been destroyed at one point or another.
The laws of economics and war and necessity really don't care the slightest about beauty.
The pyramids of Giza were covered with fine, smooth, white limestone that gleamed in the sunlight -- as beautiful as you could imagine. That didn't prevent people from stealing most of it later for their own purposes.
I’ll take the other side on this. OP said “surest” which you seem to interpret as “certain.”
Of course not _all_ beautiful artifacts persist - no one takes that position. But it is reasonable to say that the artifacts which receive attention today are mostly those that were formed with special care and whose beauty was preserved, probably by accident, for us.