I think what matters here is the novelty of what an LLM can do; accuracy can presumably be improved over time. Compared to using gmaps to search for places, it seems a bit better.
I think with LLMs, the tighter the constraints and bumpers you give them, the better they perform. Something you could build with this is a Jackbox-style game that generates its own content. However, I think it's dangerous to rely on it too heavily without writing something to gate or filter the inputs.
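Something like this, as a rough sketch of the input gate; the allow/block patterns are made-up examples, not anything from the actual app:

```python
# Minimal sketch of gating inputs before they ever reach the LLM.
# The patterns below are illustrative placeholders.
import re

LOOKS_LIKE_MAP_QUERY = re.compile(r"\b(show|find|where|near|map|in)\b", re.I)
BLOCKLIST = re.compile(r"\b(drugs|weapons)\b", re.I)  # example blocklist only

def gate(prompt: str) -> bool:
    """Return True only for prompts worth spending an LLM call on."""
    return bool(LOOKS_LIKE_MAP_QUERY.search(prompt)) and not BLOCKLIST.search(prompt)
```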
Pre-record video demos of it, and have your site swap the live demo out for the pre-recorded videos once visitors have burned through $x of API spend.
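A minimal sketch of that fallback, assuming you already track cumulative API spend somewhere; the cap value and names are placeholders:

```python
# Hypothetical spend gate: serve the live demo until a budget cap is hit,
# then fall back to pre-recorded videos. The cap stands in for the "$x" above.
SPEND_CAP_USD = 200.0  # placeholder budget

def demo_mode(total_api_spend_usd: float) -> str:
    """Pick which demo to serve based on spend so far."""
    return "live" if total_api_spend_usd < SPEND_CAP_USD else "prerecorded"
```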
Not too bad. I had some gas in the tank from an old project sitting on my API account, so it used all of that up. Expect something in the few-hundred-dollar range.
I'm considering switching to DeepSeek since it's way cheaper; I'll swap once I'm done testing out the API. You could use your own hosted LLM, but it's not worth it at the moment.
Woah, a few hundred just to demo the thing. How could you keep this afloat?
That’s the tricky part about building LLM apps. I’d love to hear more from indie devs, because money is absolutely a bottleneck here.
For fun:
You don’t need an LLM for some of your calls, I think. In “Where is the Eiffel Tower”, “Eiffel Tower” is a named entity that small NLP libraries can extract via NER. Then it’s a simple long/lat lookup. You might be able to re-route 20% of your calls to a no-cost backend call.
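A rough sketch of that routing with spaCy (assuming its small English model; the geocode table is a stand-in for whatever location database is already in place):

```python
# Sketch: answer simple "where is X" queries without an LLM call.
import spacy

nlp = spacy.load("en_core_web_sm")

# Stand-in lookup table; in practice this would be a gazetteer or geocoding DB.
GEOCODE = {"Eiffel Tower": (48.8584, 2.2945)}

def try_cheap_route(query: str):
    """Return (lat, lon) for simple place lookups, or None to fall back to the LLM."""
    for ent in nlp(query).ents:
        # GPE = countries/cities, FAC = buildings/landmarks, LOC = other locations
        if ent.label_ in {"GPE", "FAC", "LOC"}:
            name = ent.text.removeprefix("the ").removeprefix("The ")
            hit = GEOCODE.get(name)
            if hit:
                return hit
    return None

print(try_cheap_route("Where is the Eiffel Tower"))  # (48.8584, 2.2945) if NER fires
```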
never really thought about monetizing this at all, so maybe offer pro features down the line? idk
I'm used to just putting stuff out there for people to try out. Could be spent on worse things all things considered haha
CPU inference for LLMs takes forever (you'll get something like 1 token/s) and limits you significantly in terms of model size and quality. You'll lock up all of your cores to serve a single user at a snail's pace.
I don't think it should even be considered as an option
Llama 3.2 3B or Qwen2.5 3B quantized to 4-bit runs CPU inference quite fast. You can get a beefier VM and still save a ton of money. Depending on the context length this solution needs, it's either fast or slow: below 1024 tokens per request you get around a 10-second delay; at around 128 tokens I'd guess you'd be somewhere around 1 second for time to first token...
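For what it's worth, a minimal sketch of that setup with llama-cpp-python; the GGUF filename and thread count are assumptions, and the latency numbers above are the commenter's estimates, not benchmarks of this code:

```python
# Sketch: running a 4-bit quantized 3B model on CPU via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-3b-instruct-q4_k_m.gguf",  # placeholder GGUF filename
    n_ctx=1024,    # short context: prompt length dominates time-to-first-token on CPU
    n_threads=8,   # roughly one thread per physical core
)

out = llm("Return the lat/lon of the Eiffel Tower as JSON.", max_tokens=128)
print(out["choices"][0]["text"])
```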
LLMs are notoriously terrible at spatial thinking — if you can solve that with RAG and a database of actual locations, that’s promising. Or was there another approach?
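One way that RAG-style grounding could look, as a sketch (`poi_db` and `ask_llm` are stand-ins, not the app's actual code): the LLM only selects from rows retrieved from a real location database, so coordinates are never invented.

```python
import json

def grounded_search(query: str, poi_db, ask_llm):
    """Let the LLM pick from real candidates; coordinates come from the DB."""
    candidates = poi_db.search(query, limit=50)  # real names + lat/lon rows
    menu = "\n".join(f"{i}: {c['name']}" for i, c in enumerate(candidates))
    picks = json.loads(ask_llm(
        f"Query: {query}\nPick the matching indices from:\n{menu}\n"
        "Answer with a JSON list of integers only."
    ))
    return [candidates[i] for i in picks]
```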
Sadly, with my prompt it replies with completely invented data.
The same prompt, run three times, gives 3-4 different results that are simply not true.
Prompt: show me the best glass factories in valsassina, italy.
(There are no glass factories there, so it suggests glass workers with wrong coordinates and invented names.)
I tried "show me Portugal" and it showed me Madeira, which is an island that is indeed part of Portugal, but I was expecting it to go for the mainland.
Repeating the prompt alternates between Madeira and the Açores, which is again technically correct, but in this case not the best kind of correct.
I asked it to find me soccer fields in <insert my suburb> and it showed me results, but all 5 of them were misplaced. One of the fields was shown where a train station is, and there are no parks nearby.
Same for my question about bars in Thessaloniki: it found 5 in the whole city, and the locations were off by kilometers. I fear we're missing the point, though, because the good bit here isn't "it can do what Google Maps can do", but we aren't asking the right questions to show that off.
It seems you can make it hang by inputting something that triggers the LLM's guardrails. For example, I entered "where they sell drugs", and the API endpoint hung every time I submitted it. On a few occasions, though, it returned a response immediately (with an error in the API response). I suspect there are too many retries when the structured JSON output isn't formatted correctly after the guardrails activate.
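If that theory is right, a bounded retry loop along these lines would fail fast instead of hanging (a sketch; `call_llm` is a stand-in for the real client):

```python
import json

MAX_ATTEMPTS = 3  # assumed cap, not taken from the actual service

def query_json(call_llm, prompt: str):
    """Retry a bounded number of times when the reply isn't valid JSON."""
    for _ in range(MAX_ATTEMPTS):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # guardrail refusals come back as prose, not JSON; retry
    raise ValueError("no valid JSON after retries (possible guardrail refusal)")
```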
Really cool idea and fast response times, nice job! I noticed after a few searches the results seem to get less accurate, however. Pins dropping in the ocean, etc.
Not sure why this got nuked from the front-page all of a sudden but thanks everyone for checking it out. I'll be sure to incorporate the features mentioned here.
e.g. "show me all IT companies in <suburb where I live>" should show my company (and only my company) - but it shows two other companies that aren't actually in the same city and draws the POI on a sugar beet field near <suburb>
Is the LLM perhaps only trained on English-language geo data?
It does; I can certainly add it. I made it because I'm looking to open a new location for a brick-and-mortar business, so I needed a way to look at the competitive landscape, high-level demographics, etc.
It would be cool for planning travels! I'll look into it.
awesome. i can see future iterations of this becoming really useful.
e.g. "i'm traveling to Tokyo this summer, show me good areas to live in if i want to run 10k every day through nature, as well as work out of highly rated cafes.".
i want to see good areas highlighted on the map. and even better would be integrating with Airbnb / Yelp / Google business ratings, etc. to show places i can rent in those areas.
another e.g.: "best times to land in XYZ city if i want to avoid traffic getting to ABC" - to check this now i have to toggle some dropdowns in Google Maps. natural language is a much better "interface" for most of what i want to do with maps.
Just re-used a domain name. FYI, "godview" is not exclusive to Uber; it's mostly used for internal tools with super-admin privileges. I'm not 100% certain the term originated with Uber, because I remember using it internally in the early 2010s, before the Uber incident.
I asked it:
"Show me all the train yards in New York."
It only identified seven of them when there are many more:
https://en.wikipedia.org/wiki/List_of_New_York_City_Subway_y...
Then, when I tried to copy and paste my prompt from the history, it did not display the full prompt and had no option to copy it to the clipboard.