mNovak's comments

Given that most (all?) of the US already collects property tax, that this information is usually public, and that (in my experience at least) it includes separate land and property valuations, I'm curious whether anyone has attempted a basic estimate of the LVT rate needed to fund some percentage of government spending? Many variables, sure, but I'm talking order of magnitude.


Is there a human-play benchmark (even informally) for this style of interface? Not saying it's necessary or even relevant, I'm just curious to know what programmatic Factorio feels like -- I imagine spatial reasoning around text prompts would be fairly challenging for human players to navigate as well.

Human benchmarks for Factorio are speed runs — rushing to launch the first rocket. The current record is just over 4 hours for one player, and 90 minutes for a team. Just from that, you can see that a multi-tasking LLM has room to outperform humans.

The current 4h12m record is for 100% (where you have to get every single achievement in the game in a single run); any% (where you just need to launch a rocket) is under 2 hours (1h42m for the latest Factorio v2.x, 1h18m for v1.x). There are a few other differences between the categories regarding map selection and blueprint use as well.

Records and specific rules for all categories can be found at https://www.speedrun.com/factorio


I think he is talking about a human using the programmatic API the LLMs are using to play the game. I think that would be a whole lot slower than a normal playthrough.

We were able to pass all the early lab tasks manually - although it took a lot longer than using the UI!

A lot of fun! I enjoy the subtle coloring and changing background texture.

It's a bit confusing at first to grasp that one upgrade is applied randomly at the start of the game (I think?). The first few times I played it felt like the rules were changing and I couldn't figure out what contributes to the score multiplier.

Also, when you get multiple upgrades, the text reads something like "You caught 0 coins out of 0 in 0 seconds (+1 upgrade and choice). You missed 0 times (+1 upgrade and choice)." which is a bit confusing.


Thanks, I put a lot of thought into the neon effects. Yeah, I'll have to explain that in some clearer way. The "You caught 0 coins out of 0 in 0 seconds" is definitely a bug, happened to someone else too. Do you mind trying to reproduce it?


I see the buggy text too (You caught 0 coins out of 0 in 0 seconds (+1 upgrade and choice). You missed 0 times .). Usually on the 2nd+ item selection screen.


I think there's a lot of interest in this topic generally, so I hope this continues to develop. I for one pay for ExpanDrive, and am also aware of Mountain Duck in this space.

In my experience, those options are functional for infrequent operations, but not for the constant daily interaction you get with Dropbox, for instance. Offline caching is of course one major aspect, but I also presume there's a lot of error handling hidden in the background for smooth operation.


Thank you for your comment. I'm surprised to see such interest in what I'm doing. As this is just a personal hobby, it's difficult to make rapid progress, but I will try to develop it as much as possible.


Now this is an interesting idea... a wall clock display showing the closest time-relevant meme.


Yes, if you can coordinate and hold their positions to ~1/10th of the wavelength (which gets challenging above a few GHz), synchronize their transmitters/receivers with picosecond accuracy (feasible if there's no GPS interference), and somehow exchange the raw signal components amongst all the platforms.

The parabolic distribution is not at all needed in this case; you would just adjust the phase at each unit based on the distribution you have.

It is something that has been considered, mostly for very low frequencies (1-100 MHz), where all the above challenges are vastly simplified and where large antennas (potentially kilometers across) are needed for any kind of directionality.
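
For a rough feel of those tolerances, here's a back-of-the-envelope sketch (pure arithmetic, frequencies chosen for illustration) of the ~lambda/10 position hold at a few carrier frequencies:

    # Position tolerance (~lambda/10) for a coherent distributed array.
    C = 299_792_458.0  # speed of light, m/s

    for f_hz in (10e6, 100e6, 1e9, 6e9):
        lam = C / f_hz
        print(f"{f_hz/1e6:7.0f} MHz: lambda = {lam:8.3f} m, "
              f"hold position to ~{lam / 10 * 1000:8.1f} mm")

At 10 MHz you only need to hold to about 3 m; at 6 GHz it's 5 mm, which is why the low-frequency case is so much easier.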


No need to hold their positions that accurately; you just need to know them, ideally in real time.


At 6 GHz the wavelength is 5 cm, so 1/10th would be 5 mm. I think that might be a bit tighter than regular RTK accuracy.

But at lower frequencies, maybe. Or you can do several passes with the same drone and process later :D


If the drones use e.g. laser rangefinders to measure the distance to 2-3 known reflectors, and maybe to each other, with sub-mm accuracy, that may be enough.

Deploying an antenna that's effectively 50 or 100 m wide by lifting 10-20 drones, after some simple ground preparations, could be invaluable in many scenarios, especially for the military, of course.


The same technique he uses here for synchronizing the receivers would work. Common reception and triangulation of a strong local beacon would enable positioning at much better than GNSS precision, both because of the SNR advantage itself and because you'd have more options for dealing with carrier-phase ambiguity.
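
A minimal sketch of the carrier-phase idea (the beacon frequency and ranges are made up): the measured phase pins down the range very precisely, but only modulo one wavelength, and that unknown integer is the ambiguity you'd have to resolve.

    import numpy as np

    C = 3e8
    f_beacon = 433e6             # hypothetical local beacon frequency
    lam = C / f_beacon           # ~0.69 m wavelength

    true_range = 152.1234                            # m, receiver to beacon
    phase = (2*np.pi * true_range / lam) % (2*np.pi) # what you can measure

    # Range is only known modulo lam: r = (N + phase/2pi) * lam
    N = int(true_range // lam)   # integer ambiguity (unknown in practice)
    est_range = (N + phase / (2*np.pi)) * lam
    print(est_range - true_range)  # ~0 once N is resolved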


Does UWB provide enough accuracy to be useful or would you need something else?


This site [1] has pretty approachable content generally, though it looks like the SAR page is pretty short.

I find it easiest to understand in the context of a conventional phased array, but that might not help if you don't already know how those work.

[1] https://www.radartutorial.eu/20.airborne/ab07.en.html


Yeah, I've seen this kind of kindergarten explanation. It makes a bunch of simplifications that aren't helpful.

With a phased array you're beam forming and sweeping an area of space. Your signal returns are from the beam or side lobes. You can passively beam form on Rx as well.

But with SAR you're not beam forming. You're illuminating everything - the whole ground below you. And you get a return from everywhere all at once. Two equidistant reflectors will return signals simultaneously. If your flight path is between these two points, and the distance is always equal, how can you differentiate them?

You're digitally beam forming on the Rx somehow, but I think there is more to it.


> But with SAR you're not beam forming. You're illuminating everything - the whole ground below you. And you get a return from everywhere all at once. Two equidistant reflectors will return signals simultaneously. If your flight path is between these two points, and the distance is always equal, how can you differentiate them?

There are a couple conceptual ways to think about SAR. One is, in fact, as beamforming. Each position of the radar along the synthetic aperture is one element in an enormous array that's the length of the synthetic aperture itself: that's your receive array.

Regarding your question about scatterers that are equidistant along the entire synthetic aperture length: typically, SAR systems don't use isotropic antennas. And they're generally side-looking. So you would see the scatterer to one side of the radar, but not the equidistant scatterer on the other side.

If you had an isotropic antenna that saw to each side of the synthetic aperture, then the resulting image would be a coherent combination of both sides. Relevant search terms would be iso-range and iso-Doppler lines. Scatterers along the same iso-range and iso-Doppler lines over the length of the synthetic aperture are not distinguishable.

As to your question earlier in the chain, my preferred SAR book is Carrara et al. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Given the title, it is of course geared toward spotlight (where you steer the beam to a particular point) rather than strip map or swath (where your beam is pointed at a fixed angle and dragged as you move along). It has decent coverage of the more computationally efficient Fourier-based image formation algorithms but does not really treat algorithms like the back projection that Henrik uses (I also think back projection is easier to grasp conceptually, particularly for those without a lot of background in Fourier transforms). But my book preference might just be because that's what I first learned with.
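
For a sense of how little machinery back projection needs, here's a toy sketch (not the author's or the book's code; function and parameter names are illustrative). For each image pixel, it samples the range-compressed return at that pixel's round-trip delay from every platform position, removes the expected carrier phase, and sums coherently:

    import numpy as np

    def backproject(pulses, positions, t_fast, xs, ys, fc, c=3e8):
        # pulses:    (n_pos, n_samp) complex range-compressed returns
        # positions: (n_pos, 3) platform positions along the track
        # t_fast:    (n_samp,) fast-time sample instants, seconds
        img = np.zeros((len(ys), len(xs)), dtype=complex)
        for pulse, pos in zip(pulses, positions):
            for iy, y in enumerate(ys):
                for ix, x in enumerate(xs):
                    r = np.linalg.norm(pos - np.array([x, y, 0.0]))
                    tau = 2 * r / c  # round-trip delay to this pixel
                    # sample the return at tau (linear interpolation)
                    s = np.interp(tau, t_fast, pulse.real) \
                        + 1j * np.interp(tau, t_fast, pulse.imag)
                    # undo the carrier phase so the sum is coherent
                    img[iy, ix] += s * np.exp(1j * 4*np.pi * fc * r / c)
        return np.abs(img)

Only pixels whose range history matches the phase correction add up in phase; everything else averages toward zero, which is the beamforming happening implicitly.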


>> Your signal returns are from the beam or side lobes

You're skipping a step -- where does that beam come from? For simplicity let's think about a scene illuminated uniformly (i.e. from a single element) so that we don't get hung up on the transmit beam. I think we agree you could still sweep a receiving phased array beam across that scene. Let's further assume it's digital beamforming, so you're storing a copy of the signal incident _at every element of the array_. Not a 'beam' yet, just a bunch of individual signals.

>> you get a return from everywhere all at once

Yes! Think about each of those elements of the phased array -- they're also receiving signals from everywhere all at once.

It only becomes localized into a beam when you combine all the elements with specific phase weights. That process of combining element returns to form the beam is mathematically identical to what you do in SAR as well -- combine all your individual 'element' (individual snapshot in space) responses with some phase weights to get the return in one direction. Repeat the math in all directions to form one dimension of the image (the second dimension is the radar time-of-flight bit, which is unrelated to the beamforming).

Maybe not you specifically, but I think people don't understand the 'synthetic aperture' part. Specifically, that you can ignore the time between snapshots (because the transmitter and receiver are synchronized) and act like all the snapshots the platform took across the line of flight happened simultaneously. What you're left with is the element responses to a big phased array, and you can 'beamform' using those responses.
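
To make that digital step concrete, here's a toy numpy sketch (all parameters made up): one complex snapshot per element, then phase weights per look direction. Swap the element positions for the platform's positions along the flight path and the same math forms the SAR cross-range dimension.

    import numpy as np

    lam = 0.05                           # wavelength, m (illustrative)
    k = 2 * np.pi / lam
    x = np.arange(16) * lam / 2          # element (or snapshot) positions
    theta_src = np.deg2rad(20)           # actual arrival direction

    # one complex sample per element; "from everywhere at once",
    # here reduced to a single plane-wave source for clarity
    snap = np.exp(1j * k * x * np.sin(theta_src))

    scan = np.deg2rad(np.linspace(-90, 90, 361))
    # phase weights that align a plane wave from each candidate direction
    W = np.exp(-1j * k * np.outer(np.sin(scan), x))
    beam = np.abs(W @ snap)              # peaks near 20 degrees

    print(np.rad2deg(scan[np.argmax(beam)]))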


You can't differentiate them in that case. You'd have to fly orthogonally across the surface for maximum effect.


I find there's a two-step process: first overlapping the images (which makes them blurry), then letting my eyes refocus so the middle image is crisp. Only then does the 3D or shimmer effect happen. It takes some practice for me to merge the images while maintaining focus.


I recall someone using one of the image generation models for pretty impressive (lossy) compression as well -- I wonder if AI data compression/inflation will be a viable concept in the future; the cost of inference right now is high, but it feels similar to the way cryptographic functions were more expensive before they got universal hardware acceleration.


At a startup where I worked many years ago, they trained a model that took the image and screen size as input and output the JPEG compression level to use so that the image appeared the same to people. It worked so well that a major software company offered to acquire the startup just for that. Alas, the founders were too ambitious/greedy and said no. It all burned down.


That seems like a fun project to replicate independently. You didn't want to rebuild it?


Would love to see the files or precut sheets released as the ultimate scale model kit. It'd be really awesome to take these files to a laser cutter and make the model out of thin aluminum sheets.

