
Hi from the OpenStreetMap Foundation. Please don't add AI-detected features directly to the database.

The algorithms have problems with false positives, and with mapping straight or rectangular objects as wobbly, as shown in the second-to-last screenshot.

As a helper to detect missing features, this is a valuable tool. But we still need human intervention to make sure the detected objects are drawn correctly.

See also: https://wiki.openstreetmap.org/wiki/Import/Guidelines and https://wiki.openstreetmap.org/wiki/Automated_Edits_code_of_...




> But we still need human intervention to make sure the detected objects are drawn correctly.

Hi, I am the author.

The demo app and all provided code examples include a step asking a human to verify the detected features. You can't upload them automatically unless you modify the source code.
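
Roughly, the flow is (a simplified sketch; `show_overlay`, `upload_osm`, and `detected_features` stand in for the real names, not the repo's exact API):

    # Simplified sketch of the human verification gate.
    # show_overlay / upload_osm / detected_features are illustrative names.
    kept = []
    for feature in detected_features:
        show_overlay(feature)  # render the predicted polygon over the imagery
        if input("Keep this feature? [y/N] ").strip().lower() == "y":
            kept.append(feature)
    # Only explicitly approved features are passed to the uploader.
    upload_osm(kept)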

I reiterate the need for human verification across the docs, the linked post, and the code samples.

I haven't ever uploaded features automatically. In fact, I manually edited and labeled hundreds of swimming pool samples myself before even training the first version.

Happy to hear and implement any ideas on how to improve the process to prevent automated features from being uploaded.

I know some people might say: just don't publish the tool. I think we can do better at embracing AI and having an open discussion.


I don't understand the 'human verification' aspect.

Your docs show a simple image where the user can choose to keep a new object or not. [0] Afterwards it says: "The ones you chose to keep will be uploaded to OpenStreetMap using upload_osm." This is uploading features automatically. The fact that it asks 'are you sure' is just silly. We all know that if humans have to click yes 90% of the time and no 10% of the time, they'll miss a lot of no's.

The image also proves that:

- You don't see the polygons properly. You just see an image of where the pool is. Already from the image I can see that if the polygons align to it, the result will be a total mess.

- You don't see any polygons further away from the object.

Both these points match stereo's reply that the resulting data was a mess.

Please consider pulling the project. This will generate a lot of data that volunteers will have to check and revert.

[0] https://github.com/mozilla-ai/osm-ai-helper/blob/main/docs/s...


> This will generate a lot of data that volunteers will have to check and revert.

This is just not true. The data can be easily identified with the `created_by` tag, and I have been reviewing any data uploaded with the demo myself (albeit with clearly different criteria on what is good enough).


If the upstream project thinks there may be a potential problem with this, that is a problem in itself. Try not to get defensive about it; just pull the project and have another go at the problem in collaboration with upstream. Perhaps parts of the project can be useful for upstream? Perhaps another workflow could make the project better?

We all strive for better open data. If upstream feels there is a risk that this project makes automated uploads easier, creating more tedious work for them (which is already enough of a problem), that animosity will be a net negative for everyone in this space. Technical solutions such as new tags or opt-out schemes will not solve the problem.


[flagged]


The OSM community has had extremely clear rules around automated edits for most of its existence. Every experienced mapper has seen first-hand the sorts of problems they can cause. The fact that it's using AI in this instance does not give any sort of exception to these rules. To emphasize, there are already AI-assisted tools that are allowed,[0] this isn't a blanket application of "no AI ever," it's about doing so properly with the right community vetting.

[0] Most notably: https://wiki.openstreetmap.org/wiki/Rapid

Edit to add: To be clear, they have since taken steps to try to resolve the concerns in question, the discussion is ongoing. I suspect at the end of this we will get a useful tool, it's just off to a rocky start.


The "good reason" is that OSM is supposed to contain real data, not wobbly AI guesswork data.


[flagged]


That's not an assertion I'm going to take seriously without evidence.


[flagged]


Link the evidence, then. Numbers, not vibes.


Mozilla engineers think they are way better than the rest of us though. Don’t need to follow the same rules.


Can I suggest also adding a tag for ML-originated features that's not specific to your demo/app? Maybe this can help put extra eyes on them and/or help prevent them from polluting the DB wholesale. Maybe client apps could have a toggle to allow/reject them.


adding a "created_by" tag is not opt in. it's opt out. you are de facto choosing how OSM volunteers must approach their work, without their consent.


> Happy to hear and implement any ideas on how to improve the process to prevent automated features from being uploaded.

Idea: do not automatically create features that a human can simply approve, instead require them to draw the polygon themselves.


Long-time editor of OSM here. What you describe is what the Rapid [1] editor from Meta does, where the user is forced to manually select objects overlaid on satellite imagery. It is limited to 50 objects before the user must push. A great method, I think.

[1] https://rapideditor.org/


That is what I will implement before bringing back the demo: https://news.ycombinator.com/item?id=43448649


That kind of defeats the point surely?


Well it can still detect features in the satellite image that are missing on the map. That would already be a large help, no?


The point of what? Slamming a bunch of contributions in? Or actually improving the product for end users?


Adding missing features. I'm pretty sure if users are searching for swimming pools they'd rather have a swimming pool with slightly bent sides than none at all.


I am afraid your “human in the loop” isn't [0].

You acknowledge the problem of AI slop up-front, but seem to have chosen to plow forward anyway. Please do consider your actions and their effects carefully [1]. You are in a position of potential influence; try not to squander it.

All the best,

-HG

[0]: https://pluralistic.net/2024/10/30/a-neck-in-a-noose/

[1]: https://web.cs.ucdavis.edu/~rogaway/papers/radical.pdf


"plow forward" and implying they didn't consider their actions and their effects seem ungenerous, given the list of precautions in the comment you're replying to.

You can disagree on whether the measures are effective, of course, but they're clearly not thoughtless.


It was the equivalent of vibe coding:

> The polygons the algorithm had drawn were consistently of poor quality with stray nodes and nodes far outside the pool boundaries, and the imports hadn't been discussed with local communities.

https://news.ycombinator.com/item?id=43448498


That too seems an ungenerous characterisation, and the GP could not have deduced that from the OP. I'm glad the author is constructively incorporating criticism and working to turn this into a useful tool that OSM users will benefit from, because they wouldn't have been the first to get overly defensive after their work was interpreted in the worst light possible.


Tracing satellite data is really boring to do but easy to check. I would describe the AI as acting as a centaur here, or perhaps one half of a pairing of equals.


I believe your critique would be more valuable if you had actually uncovered cases where the human in the loop missed these issues and bad or "wobbly" data ended up in the OSM database: did this happen? (The other comment from another person confirms it has.)

Otherwise, you are discounting their effort based on a prejudice — others might be unable to supervise an AI, but someone who's actually developed it might have a better chance of success.


The screenshot in the article shows wobbly data. You may need to zoom in and look closely to notice.


I know at this point it's almost one of Mozilla's mottos to be horrible at communication, but how come nobody felt the need to talk about stuff like this before publishing it?


> straight or rectangular objects as wobbly, as shown in the second-to-last screenshot.

This is because the polygon is drawn as a mask in order to overlay it on the image. The actual polygon being uploaded doesn't have the wobbly features.

It is true there are cases where the predicted polygon is wobbly, and I encourage people to discard them. However, I didn't publish this demo until I got a first version of the model that reached some minimum quality.

There is logic in the code to simplify the shape of the predicted polygon in order to avoid having too many nodes.
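
For context, the usual mask-to-polygon route looks something like this (a sketch with OpenCV; the exact parameters and helpers in the repo may differ):

    import cv2
    import numpy as np

    # Sketch: extract the largest contour from a binary mask and apply
    # Douglas-Peucker simplification to keep the node count low.
    def mask_to_polygon(mask: np.ndarray, epsilon_frac: float = 0.01) -> np.ndarray:
        contours, _ = cv2.findContours(
            mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        largest = max(contours, key=cv2.contourArea)
        epsilon = epsilon_frac * cv2.arcLength(largest, True)  # True = closed curve
        simplified = cv2.approxPolyDP(largest, epsilon, True)
        return simplified.reshape(-1, 2)  # (N, 2) pixel coordinates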


Hi! The Data Working Group had a look at the data, and decided to revert the two pool changesets. The polygons the algorithm had drawn were consistently of poor quality with stray nodes and nodes far outside the pool boundaries, and the imports hadn't been discussed with local communities.


Hi, thanks for the feedback.

I have disabled the hosted demo for now, and will remove the uploading part from the code in favor of showing a URL that will open the editor at the location.

If it's of any help, you can find any contributed polygon with the tag `created_by=https://github.com/mozilla-ai/osm-ai-helper`. Feel free to remove all of them (or I can do it myself once I have access to a PC).
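
For example, anyone can list them with an Overpass query (a sketch; a global tag scan is slow, so consider adding a bounding box):

    import requests

    # Sketch: find every way the demo uploaded, via its created_by tag.
    QUERY = """
    [out:json][timeout:120];
    way["created_by"="https://github.com/mozilla-ai/osm-ai-helper"];
    out center;
    """
    resp = requests.post(
        "https://overpass-api.de/api/interpreter", data={"data": QUERY}
    )
    resp.raise_for_status()
    for way in resp.json()["elements"]:
        print(way["id"], way.get("center"))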

I will be happy to continue the discussion on what is a good prediction or not. I have mapped a lot of swimming pools myself and edited and removed a lot of (presumably) human-contributed polygons that looked worse (to my eyes) than the predictions I approved to be uploaded.


Hi, thanks for replying! I was looking at your source code, and wondering how easy it would be to create a .osm file instead of uploading the data. The JOSM editor’s todo list plugin would make it easy to plough through all the polygons or centroids, and do any refinement necessary. For example, I’m curious to try this out to detect crosswalks, and those need to be glued to the highway being crossed.


> and wondering how easy it would be to create a .osm file instead of uploading the data. The JOSM editor’s todo list plugin would make it easy to plough through all the polygons or centroids, and do any refinement necessary. For example, I’m curious to try this out to detect crosswalks, and those need to be glued to the highway being crossed.

Hi, I didn't know about this possibility. I should have researched the different options better. I will take a look at implementing this approach.
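
For anyone reading along, the idea would be roughly this (a sketch; negative IDs mark new objects for editors like JOSM, and `polygons` is assumed to be a list of (lat, lon) rings):

    # Sketch: write detections to a .osm file for review in an editor,
    # instead of uploading them directly.
    def write_osm(polygons, path="detected_pools.osm"):
        node_id, way_id = -1, -100_000
        out = ["<?xml version='1.0' encoding='UTF-8'?>",
               "<osm version='0.6' generator='osm-ai-helper-sketch'>"]
        rings = []
        for ring in polygons:
            refs = []
            for lat, lon in ring:
                out.append(f"  <node id='{node_id}' lat='{lat}' lon='{lon}' />")
                refs.append(node_id)
                node_id -= 1
            rings.append(refs + refs[:1])  # repeat the first node to close the ring
        for refs in rings:
            out.append(f"  <way id='{way_id}'>")
            way_id -= 1
            out.extend(f"    <nd ref='{r}' />" for r in refs)
            out.append("    <tag k='leisure' v='swimming_pool' />")
            out.append("  </way>")
        out.append("</osm>")
        with open(path, "w") as f:
            f.write("\n".join(out))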


> I will be happy to continue the discussion on what is a good prediction or not. I have mapped a lot of swimming pools myself and edited and removed a lot of (presumably) human-contributed polygons that looked worse (to my eyes) than the predictions I approved to be uploaded.

Something else you need to be mindful of is that the Mapbox imagery may be out of date, especially for the super zoomed-in stuff (which comes from aerial flights). So e.g., a pool built 2 years ago might not show up.

https://docs.mapbox.com/help/dive-deeper/imagery/


This is a general problem when trying to compare OSM data with aerial imagery. I've worked a lot with orthos from Open Aerial Map, whose stated goal is to provide high quality imagery that's licensed for mapping. If you try and take OSM labels from the bounding boxes of those images and use them for segmentation labels, they're often misaligned or not detailed enough. In theory those images ought to have the best corresponding data, but OAM allows people to upload open imagery generally and not all of it is mapped.

I've spent a lot of time building models for tree mapping. In theory you could use that as a pipeline with OAM to generate forest regions for OSM and it would probably be better than human labels which tend to be very coarse. I wouldn't discount AI labeling entirely, but it does need oversight and you probably want a high confidence threshold. One other thought is you could compare overlap between predicted polygons and human polygons and use that as a prompt to review for refinement. This would be helpful for things like individual buildings which tend to not be mapped particularly well (i.e. tight to the structure), but a modern segmentation model can probably provide very tight polygons.
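
A sketch of that overlap check with shapely (the thresholds are placeholders, not tuned values):

    from shapely.geometry import Polygon

    # Sketch: flag a prediction for human review when it overlaps an
    # existing human-drawn polygon but disagrees on the exact outline.
    def iou(pred: Polygon, existing: Polygon) -> float:
        union = pred.union(existing).area
        return pred.intersection(existing).area / union if union else 0.0

    def needs_review(pred: Polygon, existing: Polygon, lo=0.5, hi=0.95) -> bool:
        # Mid-range IoU: the shapes agree roughly but not tightly,
        # which is exactly where a refinement pass pays off.
        return lo < iou(pred, existing) < hi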


Can you link the (now reverted) changesets? I can't seem to find them.


https://www.openstreetmap.org/changeset/163855992 and https://www.openstreetmap.org/changeset/163863954 are the ones I've reverted. There are more in daavoo's changeset history.


I tried this out like a week ago and I was wondering the same, so I tried to upload and... it's definitely uploading crap. I don't know what to tell you, but all the clearly square ones I saw have bends on the straight lines.

It's useful for finding ones that haven't been mapped, but not for drawing them. It can get the 4 corners pretty accurately for pools that are square; many are half-round at the ends, though.


Hi, sorry if the project or narrative gave the wrong impression, but my idea was to show the potential, not to provide a polished solution.

As disclaimed in the demo and code, the example model was trained only with data from Galicia on a Google Colab. A robust enough model would require more data and compute.

> it's definitely uploading crap.

What was uploaded was what a human approved.

> It's useful for finding ones that haven't been mapped, but not for drawing them. It can get the 4 corners pretty accurately for pools that are square; many are half-round at the ends, though.

I couldn't dedicate enough time to finding the best way to refine the predictions, but I'm happy to hear and discuss any ideas.

Ideas I have are:

- Try an oriented bounding box model instead of detection + segmentation. It will not be useful for non-square shapes, but will definitely generate more accurate predictions.

- Build some sort of RANSAC (https://es.wikipedia.org/wiki/RANSAC) that tries to fit rectangles and/or other shapes as a step to postprocess the predicted mask (see the sketch below).
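
A sketch of the second idea, using OpenCV's minimum-area rotated rectangle as a cheap stand-in for a full RANSAC fit (the fill threshold is a placeholder):

    import cv2
    import numpy as np

    # Sketch: snap the predicted mask to its minimum-area rotated rectangle
    # when the mask already fills most of that rectangle (i.e. it is square-ish).
    def snap_to_rectangle(mask: np.ndarray, min_fill: float = 0.9) -> np.ndarray:
        contours, _ = cv2.findContours(
            mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
        )
        contour = max(contours, key=cv2.contourArea)
        rect = cv2.minAreaRect(contour)   # ((cx, cy), (w, h), angle)
        box = cv2.boxPoints(rect)         # the 4 corner points
        (w, h) = rect[1]
        fill = cv2.contourArea(contour) / (w * h + 1e-9)
        return box if fill >= min_fill else contour.reshape(-1, 2)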


> What was uploaded was what a human approved.

Yes, I hit approve on the best one because I was curious to see the actual final polygon. (I then went and fixed it.) You wrote above / I was responding to:

>> This is because the polygon is drawn as a mask in order to overlay it on the image. The actual polygon being uploaded doesn't have the wobbly features.

Now you're saying it's my fault for selecting a wonky outline. What's it gonna be, is the preview bad or the resulting polygons? (And the reviewer is bad for approving anything at all?)

> my idea was to show the potential, not providing a polished solution

I can appreciate that, but if you're aware of this then it shouldn't have a button that unauthenticated users can press to upload the result to the production database. OSM has testing infrastructure if you want to also demo that part (https://master.apis.dev.openstreetmap.org/ is a version I found on https://wiki.openstreetmap.org/wiki/API_v0.6).
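
For example, with the osmapi library the uploader can be pointed at the dev instance via the `api` argument (a sketch; dev-server accounts are separate from production, and newer osmapi versions may require OAuth instead of basic auth):

    import osmapi

    # Sketch: target the OSM dev server instead of the production database.
    api = osmapi.OsmApi(
        api="https://master.apis.dev.openstreetmap.org",
        username="you@example.com",  # a dev-instance account
        password="...",
    )
    changeset = api.ChangesetCreate({"comment": "osm-ai-helper demo test"})
    # ... api.NodeCreate / api.WayCreate calls go here ...
    api.ChangesetClose()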


> You wrote above / I was responding to:

I apologize. I read `it's uploading` and misunderstood it as you saying the tool itself was uploading things.

> is the preview bad or the resulting polygons? (And the reviewer is bad for approving anything at all?)

It can be one, the other, or both.

I was replying to a reference about a specific example in the blog post.

In that example, I see wobbly features due to the rendering along the edges, which makes it look like the polygon is going to have dozens of nodes. Then there is an over-simplification of the polygon around the top-right corner (which I didn't consider an error, based on my criteria from reviewing manually created pools).

> And the reviewer is bad for approving anything at all?

I didn't say that. I was trying to assert that the UI/UX can be improved to better show what will be uploaded.

> but if you're aware of this then it shouldn't have a button that unauthenticated users can press to upload the result to the production database

You are right. I was manually reviewing the profile created for the demo every day, but I didn't realize the impact/reach until I saw the first comments here. As soon as I read the first comment, I shut down the demo.

As I said in other comments, I will make alternative changes to the demo.

> if you want to also demo that part (https://master.apis.dev.openstreetmap.org/ is a version I found on https://wiki.openstreetmap.org/wiki/API_v0.6)

Thanks for the suggestion; I don't know why I didn't think of that earlier.


As I replied to daavoo: can I suggest adding a tag for ML-originated features? As other comments have stated, it is likely that these tools are already being used (potentially semi-automatically), and this could help prevent them from polluting the DB wholesale.


Hi there! Feel free to jump into https://community.openstreetmap.org/t/about-mapping-features... , I think it is a good point to discuss.



