Another reason to golf. It's not quite right, but it's a fun goal to fit an app within the first few TCP roundtrips.

https://endtimes.dev/why-your-website-should-be-under-14kb-i...
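For a quick sanity check, here's a rough sketch (Node/TypeScript) of the budget the 14 kB rule implies: an initial congestion window of 10 segments (RFC 6928) at roughly 1,430 bytes of payload each, so if the compressed HTML fits under that, it can plausibly arrive in the first round trip. The exact numbers vary by server and network, so treat them as assumptions:

    // Rough sketch: does a page's gzipped HTML fit in one initial congestion window?
    // INIT_CWND of 10 segments comes from RFC 6928; MSS_BYTES is an approximation.
    import { gzipSync } from "node:zlib";

    const INIT_CWND_SEGMENTS = 10;
    const MSS_BYTES = 1430;
    const FIRST_RTT_BUDGET = INIT_CWND_SEGMENTS * MSS_BYTES; // ~14.3 kB

    async function fitsInFirstRoundTrip(url: string): Promise<boolean> {
      const res = await fetch(url);            // Node 18+ has global fetch
      const html = await res.text();
      const gzipped = gzipSync(Buffer.from(html)).length;
      console.log(`${url}: ${gzipped} B gzipped, budget ${FIRST_RTT_BUDGET} B`);
      return gzipped <= FIRST_RTT_BUDGET;
    }

    fitsInFirstRoundTrip("https://example.com").catch(console.error);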


PinkBike (an online mountain bike community) has a good tech blog post (2010) on increasing their slow-start settings:

https://www.pinkbike.com/u/radek/blog/pinkbike-speed.html


Impressive. A bit sad that out of 5 million visits, they have articles with 13 comments. Is all that traffic really organic?


PinkBike used to be the go-to place for uploading media for further embedding in off-site forums (SouthernDownhill, SingletrackWorld, RideMonkey, NSMTB, VitalMTB) - for example, race reports, sales (when forums had limited storage capacity for users), etc.

Your media was watermarked and made available in different resolutions (a unique feature at the time on a bike specific website!)

I would bet a large chunk of those 5M visitors were hotlinks, as I don't remember their forums being too busy, but I do remember them being very fragmented, with too many subforums.

Unfortunately Facebook Groups has killed off the almost tribal online MTB communities now :(


It actually felt weird that the site loaded so fast. I'm not used to that!


Do you view passenger drones as a competitor?

Photography and hobbyist drones are easy enough to control, and I would expect a manual passenger drone to follow that consumer model.


If you're using TypeScript then you already have a build step and a package manager like npm, which means packages and libraries are published to the npm registry.

This website was from/for an era where you hotlinked scripts directly.


We need to move towards “zero trust” for APIs.

SaaS can provide “open core” or better yet simply sell a hosted version of their fully open source code. If the provider fails to provide, you can fall back to self hosting.

The API equivalent would be open sourcing the data. This is the OpenStreetMap model. If the API provider fails to provide, you can fall back to the underlying data.


That's asking too much. SaaS should give you an option to export all your own data in a simple, parseable format (like JSON). That's about it. They don't owe anyone their source code, and they don't owe anyone the data they ethically sourced themselves (such as data their employees researched and entered manually).

API access needs better terms. Like guaranteed access for X years at $Y price with Z days notice if there's a change, where Z > 3 months or so.


This was the hope behind the StackExchange data dumps, that the community at large could always take their contributions elsewhere if the service jumped the shark.

Well, before the SE organization tried to kill off the data exports in an attempt to commercialize them for AI companies, but that's a whole other issue.


For all of these companies' lip service to A.I., mapping still requires a high degree of human involvement to normalize.


I accumulated 8,000+ visited locations on Google Maps, but they've been increasingly abandoning "power users". You might want to reconsider before "investing" in or getting emotionally attached to Google Maps's saved lists:

[1] https://www.reddit.com/r/GoogleMaps/comments/1cfqk52/did_i_j...

[2] https://techissuestoday.com/google-maps-limits-entries-into-...


From that reddit thread:

Sharing some perspective on this (I was involved in the fix).

Basically what started happening at some point was that Maps had built in a “timeout” for the fetch of Saved lists. When the lists weren’t able to be downloaded/fetched in under X seconds, the system stopped trying, assuming that lists in general would always load in under X seconds.

For users with huge lists, and for some users with very slow connections, it would timeout and not show anything. The way to notice it was typically when sharing the list, because the receiving user would fall into that timeout trap. The owner of the list usually didn’t notice immediately because the places were cached on their devices.

It’s still being worked on, and being rolled out slowly.. Some changes will come though, not sure how it’ll be announced.


My perspective as an ex-Google SWE:

It is someone's quarterly performance review and OKR to improve an SLO. One such SLO is average load time. If you set such metrics as a goal, and financially reward people for hitting them, it's easy to meet the goal: arbitrarily prune out the long tail, especially if the distribution of saved locations per user follows a Pareto/power-law distribution.

They've since rolled back that experiment with a payload-size limit and instead use this count-based limit. I guess it makes for a cleaner explanation and internal documentation from a product sense this way, as opposed to an arbitrary engineering limit.

The issue is still apathy and laziness from the engineers... they could keep these per-request limits, but cache previously loaded pinned locations and incrementally append more to the list on subsequent requests. Instead, they attempt to reload the source of truth from the server every time. There is a specific behavior where I can see this is the case: they will load 3,000, you add another location which is now 3,001 locally, but after a few minutes they reconcile with the server and your local machine is back to 3,000. This is especially ironic considering the Google interview's emphasis on dynamic programming and building up a larger solution from previous solutions...
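A minimal sketch of what I mean, assuming a hypothetical paginated fetch (fetchSavedPage and the Pin shape are made up, not a real Maps API): keep a local cache keyed by pin id and merge each page into it, instead of replacing local state with whatever the server managed to return.

    // Hypothetical sketch: merge fetched pages into a local cache instead of
    // clobbering it. `fetchSavedPage` and `Pin` are assumptions for illustration.
    interface Pin { id: string; name: string; lat: number; lng: number; }
    type Page = { pins: Pin[]; next?: string };

    const cache = new Map<string, Pin>(); // persists across requests/sessions

    async function loadMorePins(
      fetchSavedPage: (cursor?: string) => Promise<Page>,
      cursor?: string
    ): Promise<string | undefined> {
      const page = await fetchSavedPage(cursor);
      for (const pin of page.pins) {
        cache.set(pin.id, pin);           // merging never drops already-loaded pins
      }
      return page.next;                   // caller paginates further on demand
    }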

On Desktop, you can still load all your locations. There is some weird and counterintuitive behavior here in how they reconcile edits to existing pins. The caveat on Desktop is that they will fail to render entire regions when there are too many pins (somewhere above 3,000 but below 8,000). I've noticed in recent weeks they've rolled out a quadtree implementation where, in a given rectangular region, they limit and cap how many saved locations they attempt to render, versus before, where it would be seemingly depth-first until they reached the 3,000 limit.
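If you're curious what that region capping might look like, here's a loose sketch of a quadtree where each cell holds at most a fixed number of pins and subdivides past that, and rendering only walks cells intersecting the viewport. This is just my guess at the shape of it, not Google's actual code, and the cap value is made up:

    // Loose sketch of region-capped rendering via a quadtree. CAP is an assumed number.
    const CAP = 50;

    interface Pt { lat: number; lng: number; }
    interface Box { minLat: number; maxLat: number; minLng: number; maxLng: number; }

    class QuadCell {
      pins: Pt[] = [];
      children: QuadCell[] = [];
      constructor(public b: Box) {}

      insert(p: Pt): void {
        if (this.children.length) { this.childFor(p).insert(p); return; }
        this.pins.push(p);
        if (this.pins.length > CAP) this.split();   // subdivide once a cell overflows
      }

      private split(): void {
        const { minLat, maxLat, minLng, maxLng } = this.b;
        const mLat = (minLat + maxLat) / 2, mLng = (minLng + maxLng) / 2;
        this.children = [
          new QuadCell({ minLat, maxLat: mLat, minLng, maxLng: mLng }),
          new QuadCell({ minLat, maxLat: mLat, minLng: mLng, maxLng }),
          new QuadCell({ minLat: mLat, maxLat, minLng, maxLng: mLng }),
          new QuadCell({ minLat: mLat, maxLat, minLng: mLng, maxLng }),
        ];
        for (const p of this.pins) this.childFor(p).insert(p);
        this.pins = [];
      }

      private childFor(p: Pt): QuadCell {
        const mLat = (this.b.minLat + this.b.maxLat) / 2;
        const mLng = (this.b.minLng + this.b.maxLng) / 2;
        return this.children[(p.lat >= mLat ? 2 : 0) + (p.lng >= mLng ? 1 : 0)];
      }

      // Collect at most CAP pins from cells that intersect the viewport.
      visible(view: Box, out: Pt[] = []): Pt[] {
        if (out.length >= CAP || !intersects(this.b, view)) return out;
        for (const p of this.pins) if (out.length < CAP) out.push(p);
        for (const c of this.children) c.visible(view, out);
        return out;
      }
    }

    function intersects(a: Box, b: Box): boolean {
      return a.minLat <= b.maxLat && a.maxLat >= b.minLat &&
             a.minLng <= b.maxLng && a.maxLng >= b.minLng;
    }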


Was the fix to use a paginated API?

Also, as an aside, do you have any political sway over this decision?

https://www.theguardian.com/technology/article/2024/jun/06/g...

Location history is extremely valuable to remember trips and retrace steps. I don't want this living locally on a device where it can go missing.


That change is hilarious. So many potentially goofy reasons for it happening, but my best take is Google deciding not to host any data it can't mine or use. The privacy policy on location history was quite good and effectively granted users privacy.


Privacy policies don't help your users against subpoenas.


If they cared about privacy they could have easily hosted the data encrypted with a key known only to the user, rendering subpoenas useless for that service.
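A minimal sketch of that idea using the standard Web Crypto API: derive the key from a passphrase on the client and only ever upload ciphertext, so the service never holds anything a subpoena could read. The passphrase-derived-key design here is my assumption, not any vendor's actual scheme:

    // Sketch: client-side encryption with a key derived from the user's passphrase.
    // Uses Web Crypto (browser or Node 19+); the overall design is an assumption.
    async function encryptForUpload(passphrase: string, plaintext: string) {
      const enc = new TextEncoder();
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));

      const baseKey = await crypto.subtle.importKey(
        "raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]);
      const key = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 310_000, hash: "SHA-256" },
        baseKey, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);

      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv }, key, enc.encode(plaintext));

      // The server only ever stores salt + iv + ciphertext.
      return { salt, iv, ciphertext: new Uint8Array(ciphertext) };
    }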


Or they could just not upload it. No one trusts E2EE systems; you can see that when Apple tries it, everyone just accuses them of putting secret backdoors in.


Why are mapping services so stingy with custom lists of pinned locations? This announcement from Apple made me excited because Google Maps's saved list has become unusable for me. I have 8,000+ favorites (used to mark places I've visited) on Google Maps, and behavior above 500 is undefined. On Mobile, Google Maps loads an arbitrary list of 3,000 pins [1]. Unfortunately, Apple is even worse, with a limit of 100 [2].

[1] https://www.reddit.com/r/GoogleMaps/comments/1cfqk52/did_i_j...

[2] https://support.apple.com/en-us/103188



You can put items in Guides; I'm not sure what the limit is on items per guide, or how many guides you can have, though.


"Total number of places across all Guides: 300"

These limits wouldn't be an issue if Google Maps API actually allowed fetching saved list features.


I have over 2,000 places saved in Guides in the Apple Maps app. Not sure what that 300 place limit is referring to, but I haven't had any issues.


For Google Maps, they have an "official" limit of 500, where anything beyond that is not guaranteed. In practice, the current limit is 3,000.

I'm wondering if Apple Maps is doing something similar, where they set a low official limit that they can walk back to in the future, so as to avoid legal responsibility.


AFAIK the API doesn’t even let you log in as your Google user account.


> Deny that might feel good, but society didn't mandate 10 years of public education for shits and giggles.

K-12 was created by industrialists who wanted obedient workers and well-behaved citizens who won't cause crime and instability that threatens the ruling class's assets.

School prepares workers, not the actual people changing the world.

Most high school grads will know more about how the mitochondria is the powerhouse of the cell than they will about taxes and personal finance. That's by design.


I was constantly failing classes in high school, but I ended up as an engineer at a FAANG company without breaking a sweat. My cofounder didn't finish high school and worked as an HFT trader.

People here who are judging people based on their high school accomplishments must be coming from a sheltered middle class background.


>People here who are judging

Shouldn't be surprising; in any group there will be a normal distribution. A person who didn't finish high school and succeeded isn't shocking, but they fall way to the right of the distribution.

Of course they exist, but until you come across one yourself, you'd tend to think the same.


This only works because of the fairly license-free environment we have in software development and stock market investing.

It won't work in fields like medicine or the military, where human lives are at stake.

There is a progression you have to work through to get to serious positions; otherwise bad things can happen to both you and many people around you.


Having worked in GCP, I'm not at all surprised that Google, despite having the resources, can't productize an LLM.

It feels like OP's question stems from a belief that all there is to training a good model is throwing lots of compute at it. As with any product, there's polishing and fine-tuning you need to do, both before and after training. Google can't do that. You also have to accept imperfection and clever tricks to this end, a.k.a. the startup/hacker mentality, which Google is also not positioned for. I think Meta has a good chance, though.

