romanzubenko's comments

Amazing post, read through all the blog posts in a single sitting. It would be great to have a final blog post on the assembly process.

On a different note, it's mind-blowing that today one person can do small-scale design and manufacturing of a consumer electronics product. Super inspiring.


The President of Iran is not the head of state of Iran; you might be thinking of the Supreme Leader of Iran, Ali Khamenei, who is very much alive.


Not just the complexity but the absurd amount of human effort required to produce every object around us.

As John Collison tweeted: "As you become an adult, you realize that things around you weren't just always there; people made them happen. But only recently have I started to internalize how much tenacity everything requires. That hotel, that park, that railway. The world is a museum of passion projects."

https://twitter.com/collision/status/1529452415346302976


During the hackathon the team only did a simulated flight, not a real flight, so take the results on effectiveness with a grain of salt. In any environment with significant seasonal changes, localization based on Google Maps imagery will be a lot harder.


Every 5 days, a satellite from the Sentinel mission takes a picture of your location; it's 8 days for the Landsat mission. This data is publicly available (I encourage everyone to use it for environmental studies; I think any software people who care about the future should use it).

It's obviously not the same precision as Google Maps, and it needs updating, but it's enough to take into account seasonal changes and even abrupt events (floods, war, fire, you name it).


Where can you find this data?



There is also https://apps.sentinel-hub.com/eo-browser/ which groups Landsat and Sentinel under the same interface.



Hmm, I used the links shared below but the picture of my home is at least 4-6 months out of date. What am I missing?


I don't know where you live, but the default search at https://apps.sentinel-hub.com/eo-browser/ uses Sentinel-2 (true color), 100% maximum cloud coverage (which means any image), and the latest date.

So you should be able to find the tile of your region at a date close to today, definitely not 4-6 months old.


Satellite images can be taken on a dependable schedule, but the weather doesn't always provide a clear view of the ground.


It occurred to me that in a war or over water this wouldn't be useful. But I think it will be a useful technology (that, to be fair, likely already exists), in addition to highly accurate dead reckoning systems, as a secondary fallback navigation when GPS is knocked out or unreliable.


> in a war … wouldn’t be useful

Why do you say that? Navigational techniques like this (developed and validated over longer timeframes, of course) are precisely for war, where you want to cause mayhem for enemies who try to prevent it by jamming GPS.

This is not just an idea but we have already fielded systems.

> over the water this wouldn’t be useful

What is typically done with cruise missiles launched from sea is that a wide sweep of the coast is mapped where the missile is predicted to make landfall. How wide this zone has to be depends on the performance of the inertial guidance and the quality of the fix it starts out with.


Well, landmarks have a tendency to change quickly in a war zone, making whatever map material you have useless, or close to useless.

All the navigational methods predating GPS still work perfectly fine, though.


For the human eye, maybe. For a computer using statistics, less so. Extracting signals from under a mountain of noise is a long-solved problem; all our modern communication is based on it.


You can get new satellite imagery every day… (be it military imagery if you're a major power, or just commercial imagery like your average OSINTer)


Sure. And then you have to upload those new, and vetted, images to all your drones. Other nav data is much more stable.

Mind you, military hardware is not your smartphone; OTA updates are usually not a thing, for various reasons.

The approach is for sure interesting, though.


That is all really interesting speculation, but I'm not describing a system which could be, but one which is already available and fielded. In cruise missiles it is called DSMAC.

Here are some papers: https://secwww.jhuapl.edu/techdigest/Content/techdigest/pdf/...

https://apps.dtic.mil/sti/tr/pdf/ADA315439.pdf


Basically inertial guidance enhanced by terrain matching. Which is great, but terrain matching as a stand-alone is pretty useless. And it still requires good map data. Fine for a cruise missile launched from a base or ship. Becomes an operational issue for cheap throw-away drones launched from the middle of nowhere.


Yes, that's also how it works right fucking now.


Well, if you combine it with dead reckoning, I guess even a war-torn field could be referenced against a pre-war image?

I mean, a prominent tree along a stone wall might be sufficient to be fairly sure, if you've at least got some idea of the area you're in via dead reckoning.


And dead reckoning is already standard in anything military anyway. For decades.

As an added data source to improve navigation accuracy, the approach sure is interesting (I am no expert in nav systems, just remotely familiar with some of them). Unless the approach is tried in real-world scenarios and developed to proper standards, though, we won't see it used in a military context. Or civilian aerospace.

Especially since GPS is dirt cheap and works for most drone applications just fine (GPS, Galileo, GLONASS, doesn't matter).


For a loitering drone I imagine dead reckoning would cause significant drift unless corrected by external input. GPS is great when it's available but can be jammed.

I was thinking along the lines of preprocessing satellite images to extract prominent features, then using modern image processing to try to match against the observed features.

A quite underconstrained problem in general, but if you have a decent idea of where you should be due to dead reckoning, then perhaps quite doable?
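That combination can be sketched in a few lines of NumPy. This is a minimal, hypothetical example (the function name and the normalized cross-correlation matcher are my own choices, standing in for real feature descriptors): dead reckoning supplies a coarse position guess, and we only search a small window of the preprocessed satellite tile around it.

```python
import numpy as np

def ncc_locate(reference, patch, guess, radius):
    """Find `patch` in `reference` near a dead-reckoning position `guess`.

    reference: 2D array, the preprocessed satellite tile
    patch:     2D array, the observed (downward-camera) feature patch
    guess:     (row, col) position predicted by dead reckoning
    radius:    half-size of the search window, in pixels
    Returns the (row, col) with the best normalized cross-correlation.
    """
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)  # zero-mean, unit-variance
    best, best_pos = -np.inf, guess
    r0, r1 = max(0, guess[0] - radius), min(reference.shape[0] - ph, guess[0] + radius)
    c0, c1 = max(0, guess[1] - radius), min(reference.shape[1] - pw, guess[1] + radius)
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            w = reference[r:r + ph, c:c + pw]
            wn = (w - w.mean()) / (w.std() + 1e-9)
            score = float((p * wn).mean())  # NCC score in [-1, 1]
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Restricting the search to the dead-reckoning window is what makes the otherwise underconstrained matching problem tractable.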


You can't use visual key-points to navigate over open water.

You can use other things like visual odometry, but there are better sensors/techniques for that.

What it can do, if you have a big enough database onboard, and descriptors that are trained on the right thing, is give you a location when you hit land.


That's exactly what the comment you replied to was describing.


> You can't use visual key-points to navigate over open water.

No, but you can use the stars. Even during the day.


True, but that requires an accurate clock and specialised hardware. Ideally you also need to be above the clouds.


For only $300 plus shipping from AliExpress you get a high-accuracy inertial navigation system. Only weighs 10 grams.

The future is scary. It is now straightforward and inexpensive for lots of folks to construct jam-resistant Shahed-style drones. https://www.aliexpress.com/item/1005006499367697.html


And for a little less, you can buy the original, from Analog Devices.[1]

Those things are getting really good. The drift specs keep getting better - a few degrees per hour now. The last time I used that kind of thing it was many degrees per minute.

Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly. A drone with a downward looking camera can get a vector from simple optical flow and use that to correct the IMU. Won't work very well over water, though.

[1] https://www.analog.com/en/products/ADIS16460.html
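The optical-flow correction described there can be approximated with plain phase correlation: recover the per-frame pixel shift between two consecutive downward-camera frames, then convert it (via altitude and camera intrinsics) into a ground-velocity vector that bounds the IMU error. A toy NumPy sketch with invented names, assuming pure translation between frames:

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the (dy, dx) pixel shift between two camera frames.

    Assumes the second frame is (circularly) translated relative to
    the first; the normalized cross-power spectrum then has a single
    sharp peak at the shift.
    """
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices wrap around, so map them back to signed shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Dividing the shift by the frame interval and scaling by the ground-sampling distance gives a velocity estimate to feed back into the IMU filter; as the parent notes, this won't work very well over water, where there is little texture to correlate.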


>Linear motion is still a problem, because, if all you have is accelerometers, position and velocity error accumulates rapidly.

An INS will usually need some kind of sensor fusion to become accurate anyway. Like how certain intercontinental ballistic missiles use stars (sometimes only a single one) as a reference. But all these things rely on the assumption of a clear line of sight, and even this Google Maps image-based navigation will fail if the weather is bad.
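The simplest concrete instance of such fusion is a one-axis complementary filter (a toy sketch, names invented): integrate the gyro every step for short-term accuracy, and blend in an absolute reference, e.g. a star fix, whenever one is available, which keeps the bias-driven drift bounded.

```python
def fuse_step(angle, gyro_rate, dt, ref_angle=None, alpha=0.98):
    """One complementary-filter step for a single rotation axis.

    angle:     current estimate (degrees)
    gyro_rate: measured rate (degrees/second), including any bias
    ref_angle: absolute fix (e.g. a star sighting), or None if occluded
    alpha:     weight given to the gyro-integrated prediction
    """
    pred = angle + gyro_rate * dt  # dead-reckoning prediction
    if ref_angle is None:
        return pred                # no fix available: drift accumulates
    return alpha * pred + (1 - alpha) * ref_angle
```

With a gyro bias of 0.01 deg/s and a true angle of zero, pure integration drifts linearly without bound, while blending in a fix each step caps the error near alpha * bias * dt / (1 - alpha). That cap is exactly the clear-sight dependence above: no star, no correction.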


10^-5 degrees/hour drift was achieved in the 1970s, for ICBMs, at very high cost.


The laser ring gyro? It'll be fun when those start showing up on Aliexpress.


Sold by "Peace Dove Grocery Store."


Sounds expensive.


“High accuracy”


For oceans, they could use juvenile loggerhead turtles: https://www.reed.edu/biology/courses/BIO342/2011_syllabus/20...


Being able to navigate using only a map stored locally sounds extremely useful in a war.


Don’t we have basically this but it looks at stars?


I guess if it's really a possibility for military use they won't use Google Maps...


So the article is a fraud.


As with Stable Diffusion, text prompting will be the least controllable way to get useful output with this model. I can easily imagine MIDI being used as an input with ControlNet to essentially get a neural synthesizer.


Yes. Since working on my AI melodies project (https://www.melodies.ai/) two years ago, I've been saying that producing a high-quality, finalized song from text won't be feasible or even desirable for a while, and it's better to focus on using AI in various aspects of music making that support the artist's process.


Text will be an important input channel for texture, sound type, voice type and so on. You can't just use input audio; that defeats the point of generating something new. You also can't use only MIDI; it still needs to know what sits behind those notes: what performance, what instrument. So we need multiple channels.


Emad hinted here on HN the last time this was discussed that they were experimenting with exactly that. It will come, by them or by someone else quickly.

Text-prompting is just a very coarse tool to quickly get some base to stand on, ControlNet is where the human creativity again enters.


Yeah, we built ComfyUI so you can imagine what is coming soon around that.

Need to add more stuff to my Soundcloud https://on.soundcloud.com/XrqNb


For music, perhaps. For sound effects I think text prompting is a rather good UI.


Controlnet/img2img style where you can mimic a sound with your mouth and it then makes it realistic could also be usable.


I think it would be ideal if it could take an audio recording of humming or singing a melody together with a text prompt and spit out a track that resembles it.


1. Do your humming and pass it to something like Stable Audio with ControlNet

2. Convert/average the tone for each beat to generate something resembling a music sheet

3. Use vocaloid with LLM generated lyrics based on your prompt (or just put in your lyrics) and pass in the music file

4. Combine 1-3

Would love to see this


But works great when you don’t need much control, prompt example: “Free-jazz solo by tenor saxophonist, no time signature.”


What other inputs besides text prompting are there for SD? Are you referring to img2img, ControlNet, etc.?


It's crazy that nobody cares. It seems to me that ML hype trends focus on denying skills and disproving creativity by denoising randomness into outputs indistinguishable from human work, and to me this whole chain of negatives doesn't seem to have proven its worth.


LLMs allow people without certain skills to be creative in forms of art that are inaccessible to them.

With DALL-E I can get an image of something I have in my head, without investing in watching hundreds of hours of Bob Ross (which I do anyway).

With audio generators I can produce music that is in my head, without learning how to play an instrument or paying someone to do it. I have to arrange it correctly, but I can put out a techno track without spending years learning the intricacies.


Did anyone perform an analysis of where the laid-off employees of the last 3 years went? According to layoffs.fyi nearly 500k were laid off since 2021; it would be interesting to see whether people mostly reshuffled within FAANG or there was a more structural talent migration, e.g. from megacorps to startups.


And how many haven't landed anywhere yet.


And after ~12 months or so, they are removed from the "unemployment" numbers as "no longer looking for work" so that the numbers look better.

ref: https://www.bls.gov/cps/cps_htgm.htm#nilf


No, you still count as unemployed even if you haven't had a job for 12 months, as your own link attests. The question is whether you've looked for a job at any point in the past year, not whether you've held one.

More broadly, the US government publishes six different measures of unemployment, all of which can be seen here: https://www.bls.gov/news.release/empsit.t15.htm . The media usually focuses on U-3 (currently 3.7%), whereas the broadest metric is U-6 (currently 7.2%).


Your comment is needlessly pedantic, as I linked the full answer and said “~12 months”, which is “about twelve months”, which is not incorrect, though I’m sure the average is probably closer to 18-20 months for being removed from the U3 stat.


Or didn't seriously try to. I know people who almost certainly did pretty well over the past 15 years or so, got caught up in some layoff, and retired or semi-retired--probably a few years earlier than they would have done in different circumstances.


That's kind of what I've done. I looked around some last year, didn't see much out there, and had a few interviews here and there that usually ended in ghosting. After the unemployment ran out I stopped looking for a while. Then I started looking again recently because I'm not sure I'm ready to be retired yet.


There are volunteer or more-or-less volunteer things you can do but you need to find the right open source project (or start one), find a channel where people will actually read what you write and do it regularly enough, have connections to people who actually want your advice, etc. Once you're not connected to an organization some of those things are harder than they seem unless you have specific, still-relevant credentials.


Or who ended up underemployed. I know one guy who ended up stocking shelves at Home Depot to keep food on the table for the family until the job market turns around. Technically employed, but not where he ought to be.


As a software engineer, finding a job is harder than it's ever been for me, but it still seems _easier_ than my peers in other industries.


Depends on the other industries, I guess. It's pretty easy to find service jobs right now. The other thing I'm noticing is that the hourly rates being offered for contracts are a lot less than they were 2 years ago.


In 2021 Sequoia switched their strategy to hold on to the stock of their portfolio companies for a few more years after IPO.

"Sequoia is abandoning the 10-year venture fund, in which limited partners, the outside investors that contribute to the fund, expect to get paid back over a decade. The firm said it’s establishing a single fund, the Sequoia Fund, that will raise money from LPs and then funnel that capital down to a series of smaller funds that invest by stage.

Proceeds from those funds will feed back into the Sequoia Fund. With no time horizon, Sequoia can hold public stock for longer stretches, rather than distributing those shares to LPs. Investors who want liquidity can pull money out instead of waiting for distributions."

https://www.cnbc.com/2021/10/26/sequoia-changes-fund-structu...


Ah permanent capital vehicles... forgot Sequoia did this too


Making something so extremely beautiful is the surest way to optimize for longevity.


Unfortunately that's not true. Most beautiful stuff has been destroyed at one point or another.

The laws of economics and war and necessity really don't care the slightest about beauty.

The pyramids of Giza were covered with fine, smooth, white limestone that gleamed in the sunlight -- as beautiful as you could imagine. That didn't prevent people from stealing most of it later for their own purposes.


I’ll take the other side on this. OP said “surest” which you seem to interpret as “certain.”

Of course not _all_ beautiful artifacts persist - no one takes that position. But it is reasonable to say that most artifacts which receive attention today are just the artifacts which were formed with special care and whose beauty was preserved, probably by accident, for us.


FWIW, according to Russian news[0], the workers knew he was Hungarian and reached out to Hungarian officials to claim him, but didn't hear back. According to the article, at the time Hungary was prepping to join the EU, and an ex-Nazi PoW was not a priority.

He was only "rediscovered" when Russian hospital personnel got him speaking Hungarian on camera for Russian news; eventually the news segment was picked up in Hungary and 80 Hungarian families came forward to claim him as a missing relative.

[0] https://ren.tv/news/lifestyle/887400-poslednii-plennyi-vtoro...


The article also says the Hungarian side later confirmed he did have a mental illness, but it's RenTV, which is not a very trustworthy source, so one should take it with a grain of salt.


After fifty years locked up, of course he had a mental illness. It's amazing he was still sapient.


Unfortunately, I think it's likely that many of us would have mental problems if we went fifty years without talking to another human.


Reminds me of analytics.js, which later turned into Segment and a $3B acquisition.


I do think this is an apt analogy. I've heard a counterpoint that there won't be enough "destinations" for this to work, but then it's not hard to imagine a single order of magnitude more LLM "destinations" than today (all of which were launched in the last 12 months).

There's also the fact that the data being sent to these LLM "destinations" could be significantly more valuable (or contain significantly more sensitive information) than the average Segment identify or track objects.


Thanks - that's very kind.

