I kinda doubt GHC stands for GitHub Comments, but if it does, I just gotta say thank goodness GitHub started saving comments in a local cache, so when you accidentally pull to refresh you don't lose your whole comment. Idk when they added this, but it must have been in the past 6 months or so. The same applies if you try to submit a comment while offline.
This sounds so fantastic. Thanks for sharing. I wonder how well it can be incorporated into existing apps vs. building new ones from scratch. I've been using Splitt.app a lot lately while traveling, and it drives me nuts that it doesn't have better offline/low-data support. I'd like to improve it but haven't dug into the site yet to see what it would take.
If you start on a free plan but then ultimately switch to another provider do you have any idea of how hard it would be to export and import all your tasks, files, etc?
I have moved twice now: first from my Raspberry Pi to the cloud, and the second time between cloud providers.
There might be other ways but you can share folders between Nextcloud instances.
I shared my whole Nextcloud from the old instance into a folder in the new one. Then, in the new instance, you copy folders from the shared drive into your own storage. For ~300-400 GB it takes a while and I did some spot checks, but after half a day it's done. You don't actually do much yourself; you just wait for a folder copy to finish, check it, and then start the next one.
There might be more automated ways, but this worked for me.
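If you want to script the manual part, here is a rough sketch of the idea using Nextcloud's OCS share API and a server-side WebDAV COPY. All hostnames, usernames, app passwords, and folder names below are made up, and the exact mount name of the accepted share will differ on your setup:

```python
import requests

# Hypothetical instances and credentials (use app passwords, not your login).
OLD = "https://old-cloud.example.com"
NEW = "https://new-cloud.example.com"
OLD_AUTH = ("olduser", "old-app-password")
NEW_AUTH = ("newuser", "new-app-password")

# 1) On the old instance, offer a folder to the account on the new instance
#    as a federated cloud share (OCS shareType 6).
resp = requests.post(
    f"{OLD}/ocs/v2.php/apps/files_sharing/api/v1/shares",
    auth=OLD_AUTH,
    headers={"OCS-APIRequest": "true"},
    data={
        "path": "/Documents",
        "shareType": 6,
        "shareWith": "newuser@new-cloud.example.com",
    },
)
resp.raise_for_status()

# 2) After accepting the share in the new instance's web UI, copy it into
#    your own storage with a server-side WebDAV COPY.
src = f"{NEW}/remote.php/dav/files/newuser/Documents"
dst = f"{NEW}/remote.php/dav/files/newuser/Documents-migrated"
requests.request(
    "COPY", src, auth=NEW_AUTH, headers={"Destination": dst}
).raise_for_status()
```

The nice part of doing it this way is that the COPY is handled entirely by the new server, so the data doesn't have to pass through your own connection.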
Files are files so you can download them to your computer and upload them to the new provider. Unfortunately I am not aware of any direct provider-to-provider sync.
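If you'd rather script the download-and-reupload route, a minimal sketch over WebDAV could look like the following. Everything here is a placeholder (hostnames, credentials, folder name), it only handles one flat folder at a time, and it pulls each file through your machine:

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical old and new providers, one user, one folder.
SRC = "https://old-provider.example.com/remote.php/dav/files/alice"
DST = "https://new-provider.example.com/remote.php/dav/files/alice"
SRC_AUTH = ("alice", "old-app-password")
DST_AUTH = ("alice", "new-app-password")
FOLDER = "Documents"

# List the folder's immediate children via WebDAV PROPFIND.
resp = requests.request("PROPFIND", f"{SRC}/{FOLDER}",
                        auth=SRC_AUTH, headers={"Depth": "1"})
resp.raise_for_status()
ns = {"d": "DAV:"}
names = [
    r.find("d:href", ns).text.rsplit("/", 1)[-1]
    for r in ET.fromstring(resp.content).findall("d:response", ns)
    if not r.find("d:href", ns).text.endswith("/")   # skip sub-folders
]

# Create the destination folder (ignore the error if it already exists),
# then pull each file down and push it back up.
requests.request("MKCOL", f"{DST}/{FOLDER}", auth=DST_AUTH)
for name in names:
    data = requests.get(f"{SRC}/{FOLDER}/{name}", auth=SRC_AUTH).content
    requests.put(f"{DST}/{FOLDER}/{name}", data=data,
                 auth=DST_AUTH).raise_for_status()
```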
Application data depends on the app. For example, Notes [0] saves your notes as Markdown files, so you can move them (along with your files) wherever you want. However, News [1] doesn't, and it doesn't have export/import features at the moment either [2].
Nextcloud as a file storage solution and a non-collaborative office suite is great, but I cannot recommend its apps the same way. They are very convenient to install, but the quality varies a lot in my opinion so evaluate before you adopt.
On this note, does anyone know how Cursor scrapes websites? Is it just fetching locally and then feeding in the raw HTML, or doing some type of preprocessing?
Lately I've been thinking more about how to get better POI (like business) data into OSM. Apps like everydoor work okay but I feel it's still annoying to type it in and get the tags right.
I think it could be a really good use of AI to let me, for example, snap a photo of the menu and then have it automatically generate the OSM tags. Then someone just has to review if it all is appropriate.
Just being able to walk down the street, snap a bunch of menu or sign pics, then go home and drop pins and confirm tags from the photos would be great.
Heck it could even scrape their website to verify the information too!
Does anyone know if there is a project like this? Or have any thoughts on if this is a reasonable approach? I think as long as there's a human in the loop checking things it should be fine by OSM.
In terms of scraping, there's already a huge project that collects business data in an OSM-compatible format: http://alltheplaces.xyz/
The main trouble is licensing and change tracking. Most of the scraped data is protected by copyright or database rights so it can't be imported.
And even if the licensing is solved, you have the problem of matching scraped data to OSM data and what to do when changes disagree. For example, a store might be scraped as a point in the middle of a shopping mall, but then an OSM editor would come by and move it to the correct section of the mall - the next import round shouldn't undo that. Or maybe a store changes opening times but forgets to update their website - an editor can fix that, but the next import would break it again.
I have a sort of "grey area" idea for this, but I haven't had the time to try it. Basically, I would track changes in AllThePlaces and create "change reports" such as "store X changed open times from AAA to BBB". Then, I'd make a UI that would show you the changed website alongside an OSM editor and a convenient "copy change" button.
This way, a human is still the one looking at the website and entering info into OSM, which is essentially the same as in-person surveying. The copy button is "just a convenience".
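A minimal sketch of the change-report half (assuming two downloaded snapshots of a single AllThePlaces spider's GeoJSON output; the file names and the field list are made up) could be as simple as:

```python
import json

# Properties worth reporting when they change between snapshots.
FIELDS = ["opening_hours", "phone", "website", "addr:full"]

def index(path):
    """Index a snapshot's features by their per-spider 'ref' property."""
    with open(path) as f:
        features = json.load(f)["features"]
    return {feat["properties"].get("ref"): feat["properties"] for feat in features}

old = index("spider_2024-06.geojson")
new = index("spider_2024-08.geojson")

reports = []
for ref, props in new.items():
    name = props.get("name", ref)
    if ref not in old:
        reports.append(f"{name}: new location")
        continue
    for field in FIELDS:
        before, after = old[ref].get(field), props.get(field)
        if before != after:
            reports.append(f"{name}: {field} changed from {before!r} to {after!r}")

for line in reports:
    print(line)
```

The UI on top would then show each report next to the live website and an OSM editor, with the "copy change" button only filling in what the human has just verified.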
Still, I think this is too messy from a legal standpoint and the OSM editors wouldn't allow it out of caution...
Thanks for the great answer. Based on what I'm reading, scraping a business's own website is probably fine in general, but using data from Google Maps/Yelp/etc. is generally not gonna fly.
What I think could work: if everydoor allowed you to create notes with photos (https://github.com/Zverik/every_door/issues/184), then it would be pretty easy to later go back, drop those photos (plus the businesses' websites) into an AI tool, and try to create some tags for review. This could also work with Streetcomplete, but there it's not easy to see if a POI already exists.
In any case, I might experiment with this idea further.
Some very basic testing shows me that Claude 3.5 Sonnet is pretty great at taking a photo of a menu and turning it into decent tags.
So if I could run around taking photos of menus and the outsides of businesses, then quickly turn them into tags later, that would be a nice workflow for me (and hopefully others).
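As a sketch of what that step could look like with the Anthropic Python SDK (the file name and prompt are made up, and the output is only ever a suggestion for a human to review before anything goes into OSM):

```python
import base64
import anthropic

# Hypothetical photo of a restaurant menu taken on a survey walk.
with open("menu.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64}},
            {"type": "text",
             "text": "This is a photo of a restaurant menu. Suggest OSM tags "
                     "for this POI (amenity, cuisine, name, diet:*, etc.) as "
                     "key=value lines. Only include tags you are confident "
                     "about; a human will review them before upload."},
        ],
    }],
)

print(message.content[0].text)  # review by hand before adding to OSM
```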
The presentation we made in July at the Open edX 2024 conference with my colleague Faqir Bilal should be available on YouTube soon-ish. The tl;dw is that we should focus on a different market vertical: residential/blended/hybrid learning.
Yeah, I keep hearing this but it never pans out; in my experience a lot of people don't know they might have to turn off an extension or two (uBlock, built-in tracking protection, etc.) to get a website to work.
Huh? I use YouTube all the time on Firefox and it's fine. Better than fine, really, thanks to the YouTube improvement extension I have loaded. Never heard of the other two though.
Google is essentially using A/B testing to slow it down for one group of Firefox users while keeping it absolutely fine for another. Funnily enough, I've been placed in this "slowdown" group even though I've been a Premium subscriber ever since it launched (post-Red renaming), while another channel on the same Google account has zero issues in the same browser on the same PC, etc.
I like the idea of Signal a lot. It works pretty well. Unfortunately, not that many of my friends are on it. However, I have quite enjoyed using Matrix, specifically in the form of Beeper, which has nice bridges for Signal, WhatsApp, FB, IG, and so on. Ultimately, I can talk to my friends where they are and keep it all in one place.
Beeper has been making great improvements but unfortunately it's still not as good as Signal in terms of polish.