oregoncurtis's comments | Hacker News

I'm getting the same, it's a 400 response.


So far, I've only generated the graph for Europe (using Graphhopper). Even just Europe required 128GB of RAM and around 10 hours of computation time (the entire planet would likely need 384GB of RAM). I plan to add North America in a separate Docker container soon, though. I started with Europe because I'm familiar with some of the bike trails here, which makes it easier for me to check if the routing makes sense.


> the entire planet would likely need 384GB of RAM

Unlikely. Even with turn costs enabled, 256GB (or less) is sufficient. You could also try disabling CH, since bike routing often doesn't require long routes. Here we have written down a few more details: https://www.graphhopper.com/blog/2022/06/27/host-your-own-wo...


Hey karussell, I really appreciate all the hard work you've put into Graphhopper. I wouldn't have been able to create this project without GH. I have a question about memory usage during the import stage (specifically in the OSM Reader's preprocessRelations function). I'm using a HashMap<Long, List<Long>> to map way IDs to OSM bike route relation IDs, which means allocating lots of arrays. Could this be causing me to run out of heap memory faster, or am I off base here?

I thought I would be able to compute the graph with 64GB of RAM, but it kept crashing before the CH and LM stages. After switching to a 128GB instance, it finally worked, hitting around 90GB at peak memory usage. For context, I was using 3 profiles (one with CH and two with LM) plus elevation data, and used all of the tips from deploy.md.
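
Roughly what I'm doing in preprocessRelations, as a simplified sketch (not the actual code, and the names are made up):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Simplified sketch of my bookkeeping: every way that is a member of a bike
    // route relation gets a list of relation IDs. Each Long is a boxed object and
    // each ArrayList carries its own header plus backing array, so for tens of
    // millions of ways the overhead adds up quickly.
    public class RelationIndexSketch {
        private final Map<Long, List<Long>> wayToRouteRelations = new HashMap<>();

        public void addRelationMember(long wayId, long relationId) {
            wayToRouteRelations
                    .computeIfAbsent(wayId, id -> new ArrayList<>())
                    .add(relationId);
        }

        public List<Long> getRelations(long wayId) {
            return wayToRouteRelations.getOrDefault(wayId, List.of());
        }
    }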


Love your project!

Maybe you've already considered this, but there are a number of collection libraries out there that are optimized for holding Java primitives and/or very large data sets, which could help you save significant memory. Eclipse Collections [0] and fastutil [1] come to mind first, but there are many out there [2].

[0] https://github.com/eclipse-collections/eclipse-collections [1] https://fastutil.di.unimi.it/ [2] https://github.com/carrotsearch/hppc/blob/master/ALTERNATIVE...
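
As a rough, untested sketch of what that could look like with fastutil for the way-ID-to-relation-ID mapping you described (class and field names are just illustrative):

    import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
    import it.unimi.dsi.fastutil.longs.LongArrayList;
    import it.unimi.dsi.fastutil.longs.LongList;

    // Untested sketch: same mapping as a boxed HashMap<Long, List<Long>>, but keys
    // and list elements are primitive longs, so there is no Long boxing and much
    // less per-entry overhead.
    public class PrimitiveRelationIndex {
        private final Long2ObjectOpenHashMap<LongList> wayToRouteRelations =
                new Long2ObjectOpenHashMap<>();

        public void addRelationMember(long wayId, long relationId) {
            LongList relations = wayToRouteRelations.get(wayId);
            if (relations == null) {
                relations = new LongArrayList();
                wayToRouteRelations.put(wayId, relations);
            }
            relations.add(relationId);
        }
    }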


Thank you! I'm a total Java noob - actually, this is the first project where I've written any Java code (I had to slightly modify the Graphhopper source code to suit my needs). Those libraries look very interesting. I'm saving this post for another battle with processing many GBs of OSM data :D


We already use carrotsearch (HPPC) internally, so you could replace the java.util classes like HashMap and ArrayList with it to reduce memory usage a bit. But it won't help much. E.g. all data structures (in any standard library, btw) usually double their size at some point as they grow and then copy from the old internal array to the new internal array, which means you need roughly 3x the current size; if that happens near the end of the import process, you have a problem. For that reason we developed DataAccess (in-memory or MMAP possible), which is basically a large List, but 1. it increases only segment by segment and 2. it allows more than 2 billion items (beyond the range of a signed int).

Another trick for planet-size data structures could be to use a List instead of the Map, with the OSM ID as the index. The memory overhead of a Map compared to a List is huge (and you could use DataAccess for this too), and the OSM IDs for the planet are nearly adjacent, or at least don't have that many gaps (as those gaps get refilled, I think).

All these tricks (there are more!) are rather fiddly and low-level, but necessary for memory efficiency. A simpler way for your use case could be to just use a database for this, like MapDB or SQLite. But that might be (a lot) slower compared to the in-memory approach.
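
A very rough sketch of the segment and ID-as-index ideas combined (not our actual DataAccess class, just an illustration with plain arrays):

    import java.util.ArrayList;
    import java.util.List;

    // Rough illustration, not GraphHopper's DataAccess: values are addressed
    // directly by OSM ID instead of going through a Map, and storage grows
    // segment by segment, so growing never copies existing data.
    public class SegmentedIdStore {
        private static final int SEGMENT_SIZE = 1 << 20; // ~1M entries per segment
        private final List<int[]> segments = new ArrayList<>();

        public void put(long osmId, int value) {
            int segment = (int) (osmId / SEGMENT_SIZE);
            int offset = (int) (osmId % SEGMENT_SIZE);
            while (segments.size() <= segment)
                segments.add(new int[SEGMENT_SIZE]); // only new segments get allocated
            segments.get(segment)[offset] = value;
        }

        public int get(long osmId) {
            int segment = (int) (osmId / SEGMENT_SIZE);
            int offset = (int) (osmId % SEGMENT_SIZE);
            return segment < segments.size() ? segments.get(segment)[offset] : 0;
        }
    }

Gaps in the ID space just waste a few unused slots per segment, which is usually much cheaper than the per-entry overhead of a Map.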


> Could this be causing me to run out of heap memory faster

Yes, definitely.

> I thought I would be able to compute the graph with 64GB of RAM, but it kept crashing before the CH and LM stages.

For normal GraphHopper and just the EU, 64GB should be more than sufficient.


You should at least add some kind of error handling so I'm not sitting there like a dope clicking over and over with no result.


Apologies, just added a popup with region availability info


Cool project, best of luck! I agree with the other commenter that some kind of error handling or warning about outside-of-EU availability would help.


I'm in Portland, and I say that anyone who lives on the Eastside within 82nd and works within those bounds should just get a bike; it's sooo much faster to get around the city. We have infamously small city blocks, so all those intersections make driving a car through the city very slow, and for good reason, since it's safer. If you hop on a bike and take a greenway that has all the stop signs facing the perpendicular traffic, you can zoom around the city no problem. It's the exact same amount of time (from door to desk) for me to drive to work as it is to bike to work (including the shower). It's just shy of 7 miles. No brainer.


Really cool. The main thing I wish is that the transitions between "slides" were slower and not just a quick crossfade. The current transition is a bit disorienting and means I have to look around again to see where I am relative to where I was. Looking forward to seeing something like this on a VR headset.


Thanks, good to know -- that's something I can change


Davinci Resolve is miles better than Premiere. I don't do a lot of compositing, but I know more and more people are starting to use it over After Effects as well.


Resolve is better than Premiere on its own (hence why I list Premiere as having competition), but Fusion's compositing isn't really a competitor to After Effects; it's more comparable to something like Nuke.

While After Effects does some compositing (and it's decent at it, but poor in comparison to Nuke/Fusion), its stronghold is motion graphics. There's very little other than Cavalry to compete with it.

And with that comes the benefit of Premiere: live updates to my edit when using After Effects.


Steens Mountain summit is at 9,738′ and is pretty much smack dab in the middle of the darkest region. It's pretty awesome up there.


How do you fix the routes? I just checked out the suggested route from my work to home and it doesn't stick to the bike greenways or streets with bike lanes. It actually suggested I take a very dangerous road with no bike traffic at all. How can I help improve that?


A lot of the routing comes down to the presence of the `bicycle` tag. If a road is illegal for bikes, tag it `bicycle=no`. But if it is merely unsafe, the best way to fix it is to better document designated routes, as well as cycleways.

OSM's philosophy on routing is that you should not try to fix routing by translating your opinion that a street is unsafe to ride on into tags. Instead, the routing algorithm should improve, or the data should improve to the point where an alternative route can be suggested based on data.

https://wiki.openstreetmap.org/wiki/Bicycle


> Instead the routing algorithm should improve

Where does the feedback to the routing algorithm go? I thought GraphHopper / OSM was shipped to users without a feedback channel.


Also curious. I was looking locally for some sort of meetup/show where I could see people's builds, ask about cost, etc. Sadly, I just missed an expo the previous weekend.


Audio that went along with the physical issue of the NY Times Magazine, also in digital form. I'm a big fan of audio tours, so I thought it was a cool format to travel the world.


I actually used Anki cards to study LeetCode problems when preparing for interviews, and it seemed to help. After doing a problem and solving it, I created the card like this:

- Front of card is the entire LC problem statement

- Back is a bulleted list of the steps or key points (i.e., first I notice this list is unsorted, so I would sort first; next I would do blah blah...)

- Back also contains the code solution that I might just glance through or look at a particular part of it.


I also benefitted a bunch from using Anki for LC problems -- I described the details in https://news.ycombinator.com/item?id=35517232


I would actually advise against it, or at least take the approach of removing cards that are too easy. I remember reading some article about spending your time learning stuff that is "just hard enough". When you study things that are easy, you are kind of wasting time; you want the material to be +1 in difficulty over what you already know, not +0, not +250. While the easy questions give you satisfaction, they aren't helping you actually learn. I would argue that multiple cards on the same subject end up equating to a bunch of time-wasting easy cards.

The caveat to this is that I also don't think you should spend a lot of time figuring out how to create cards. There is some payoff in optimizing the process, but focus on just making the cards and reviewing them so you are learning the actual target subject.

All that said, my current approach is to create cards for concepts that I think are a little hard to understand or that I know I won't see enough repetition of in daily work/tasks. If I find out after a few weeks that the cards are too easy or too similar, I usually just delete them.


I don't think this really makes much of a difference. At least for me it didn't. Sometimes I remove cards that are too easy; sometimes I see that the next time to review them would be 6 months down the line and I leave them in, because the cost of leaving them is so small.

What has made a difference, though, is thinking about whether I still actually want to remember the contents of a card. Sometimes a card comes up that I haven't seen in months and I think, "You know what, I haven't thought about this at all outside of Anki and I don't think that'll change," and then I just remove it. Sometimes I create a nearly identical card again later on; sometimes I really didn't need to know something.

