PinkBike used to be the go-to place for uploading media to embed in off-site forums (SouthernDownhill, SingletrackWorld, RideMonkey, NSMTB, VitalMTB) - for example, for race reports, sale posts (back when forums gave users limited storage), etc.
Your media was watermarked and made available in different resolutions (a unique feature at the time on a bike-specific website!)
I would bet a large chunk of those 5M visitors were hotlinks, as I don’t remember their forums being too busy - but I do remember them being very fragmented, with too many subforums.
Unfortunately Facebook Groups has killed off the almost tribal online MTB communities now :(
If you're using TypeScript then you have a build step and a package manager like NPM, which means packages and libraries are published to the NPM registry.
This website was from/for an era when you hotlinked the scripts.
SaaS can provide “open core” or better yet simply sell a hosted version of their fully open source code. If the provider fails to provide, you can fall back to self hosting.
The API equivalent would be open sourcing the data. This is the OpenStreetMap model. If the API provider fails to provide, you can fall back to the underlying data.
That's asking too much. A SaaS should give you an option to export all of your own data in a simple, parseable format (like JSON). That's about it. They don't owe anyone their source code, and they don't owe anyone the data they sourced ethically themselves (such as data their employees researched and entered manually).
API access needs better terms. Like guaranteed access for X years at $Y price with Z days notice if there's a change, where Z > 3 months or so.
This was the hope behind the StackExchange data dumps, that the community at large could always take their contributions elsewhere if the service jumped the shark.
Well, that was before the SE organization tried to kill the data exports off in an attempt to commercialize them for AI companies, but that's a whole other issue.
I accumulated 8,000+ visited locations on Google Maps, but they've been increasingly abandoning "power users".
You might want to reconsider before "investing" or being emotionally attached to Google Maps's saved lists:
Sharing some perspective on this (I was involved in the fix).
Basically, what started happening at some point was that Maps had built in a “timeout” for the fetch of Saved lists. When the lists couldn’t be downloaded/fetched in under X seconds, the system stopped trying, on the assumption that lists in general would always load in under X seconds.
For users with huge lists, and for some users with very slow connections, it would time out and not show anything. The way to notice it was typically when sharing the list, because the receiving user would fall into that timeout trap. The owner of the list usually didn’t notice immediately because the places were cached on their devices.
It’s still being worked on and being rolled out slowly. Some changes will come, though; not sure how it’ll be announced.
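To make the failure mode concrete, here is a rough TypeScript sketch (endpoint and timeout value are made up, not the real implementation) of a hard client-side deadline on the list fetch - anything that can't finish in time is treated as if the list doesn't exist:

    // Illustration only: URL and timeout are invented.
    const LIST_FETCH_TIMEOUT_MS = 5_000; // the "X seconds" above; real value unknown

    async function fetchSavedLists(listUrl: string): Promise<unknown[] | null> {
      try {
        const res = await fetch(listUrl, {
          signal: AbortSignal.timeout(LIST_FETCH_TIMEOUT_MS),
        });
        if (!res.ok) return null;
        return (await res.json()) as unknown[];
      } catch {
        // A TimeoutError ends up here, so the caller just sees an empty Saved tab
        // instead of a retry - exactly the trap huge lists and slow connections hit.
        return null;
      }
    }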
It is someone's quarterly performance review and OKR to improve an SLO. One such SLO is average load time. If you set such metrics as a goal, and financially reward people for hitting them, it's easy to meet the goal: arbitrarily prune out the long tail, especially if the distribution of saved locations per user follows a Pareto/power-law distribution.
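A toy TypeScript simulation (all numbers invented) of why pruning the tail is such an easy win for that metric: if list sizes are roughly Pareto-distributed and load time grows with list size, capping the biggest lists moves the average a lot while touching only a small fraction of users.

    // Toy model: per-user saved-list sizes drawn from a Pareto distribution,
    // load time assumed to grow linearly with list size. Numbers are made up.
    function samplePareto(xMin: number, alpha: number): number {
      // Inverse-CDF sampling: x = xMin / U^(1/alpha), with U in (0, 1]
      return xMin / Math.pow(1 - Math.random(), 1 / alpha);
    }

    const listSizes = Array.from({ length: 100_000 }, () => samplePareto(10, 1.2));
    const loadTimeMs = (size: number) => 50 + 0.5 * size; // assumed linear cost
    const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

    const before = mean(listSizes.map(loadTimeMs));
    const after = mean(listSizes.map((s) => loadTimeMs(Math.min(s, 3_000)))); // prune the tail

    console.log({ before, after }); // the average drops sharply; only the heaviest users notice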
They've since rolled back that experiment with a payload-size limit, and instead use this count-based limit. I guess it makes for a cleaner explanation and cleaner internal documentation from a product perspective this way, as opposed to an arbitrary engineering limit.
The issue is still apathy and laziness from the engineers... they can have these per-request limits, but cache previously loaded pinned locations and incrementally append more to the list on subsequent requests. Instead, they attempt to reload the source of truth from the server every time. There is a specific behavior where I see this is the case: they will load 3,000, you add another location, which is now 3,001 locally, but after a few minutes they reconcile with the server and your local machine is back to 3,000. This is especially ironic considering the Google interview's emphasis on Dynamic Programming and building up a larger solution from previous solutions...
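Rough sketch of what I mean, in TypeScript (the paging API here is hypothetical): keep what has already been loaded in a local cache and merge server pages into it, so a capped reconciliation pass can't silently evict pins you just added.

    interface Pin {
      id: string;
      lat: number;
      lng: number;
    }

    const pinCache = new Map<string, Pin>();

    // Merge a page of server results without dropping pins we already know about.
    function mergePage(page: Pin[]): void {
      for (const pin of page) pinCache.set(pin.id, pin);
    }

    // Locally added pins go straight into the cache; a later capped server
    // response must not shrink the cache back to 3,000.
    function addLocalPin(pin: Pin): void {
      pinCache.set(pin.id, pin);
    }

    // fetchPage is a hypothetical cursor-based endpoint with a per-request limit.
    async function loadAllPins(
      fetchPage: (cursor?: string) => Promise<{ pins: Pin[]; next?: string }>,
    ): Promise<Pin[]> {
      let cursor: string | undefined;
      do {
        const { pins, next } = await fetchPage(cursor);
        mergePage(pins);
        cursor = next;
      } while (cursor);
      return [...pinCache.values()];
    }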
On Desktop, you can still load all your locations. There is some weird and unintuitive behavior here in how they reconcile edits to an existing pin. The caveat on Desktop is that they will fail to render entire regions when there are too many locations (somewhere above 3,000 but below 8,000). I've noticed in recent weeks that they've rolled out a quadtree implementation where, in a given rectangular region, they limit and cap how many saved locations they attempt to render, versus before, where it would seemingly go depth-first until it reached the 3,000 limit.
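My guess at what that looks like, as a TypeScript sketch (cap and depth values are invented): subdivide the visible region as a quadtree and cap how many pins each cell renders, instead of spending one global budget depth-first.

    interface SavedPin { id: string; lat: number; lng: number; }
    interface Rect { minLat: number; maxLat: number; minLng: number; maxLng: number; }

    const CAP_PER_CELL = 50; // made up
    const MAX_DEPTH = 6;     // made up

    function inRect(p: SavedPin, r: Rect): boolean {
      return p.lat >= r.minLat && p.lat < r.maxLat && p.lng >= r.minLng && p.lng < r.maxLng;
    }

    // Split a region into its four quadrants.
    function split(r: Rect): Rect[] {
      const midLat = (r.minLat + r.maxLat) / 2;
      const midLng = (r.minLng + r.maxLng) / 2;
      return [
        { minLat: r.minLat, maxLat: midLat, minLng: r.minLng, maxLng: midLng },
        { minLat: r.minLat, maxLat: midLat, minLng: midLng, maxLng: r.maxLng },
        { minLat: midLat, maxLat: r.maxLat, minLng: r.minLng, maxLng: midLng },
        { minLat: midLat, maxLat: r.maxLat, minLng: midLng, maxLng: r.maxLng },
      ];
    }

    // Recursively subdivide; each cell renders at most CAP_PER_CELL pins.
    function pinsToRender(pins: SavedPin[], region: Rect, depth = 0): SavedPin[] {
      const inside = pins.filter((p) => inRect(p, region));
      if (inside.length <= CAP_PER_CELL || depth >= MAX_DEPTH) {
        return inside.slice(0, CAP_PER_CELL); // cap rather than render everything
      }
      return split(region).flatMap((cell) => pinsToRender(inside, cell, depth + 1));
    }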
That change is hilarious. So many potentially goofy reasons for it happening, but my best take is Google deciding not to host any data it can’t mine or use. The privacy policy on location history was quite good and effectively granted users privacy.
If they cared about privacy, they could have easily hosted the data encrypted with a key known only to the user, rendering subpoenas useless for that service.
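As a sketch of what that could look like client-side (Web Crypto in TypeScript; a hypothetical design, not how Google stores anything today): derive a key from a passphrase the server never sees, and only ever upload ciphertext.

    // Derive an AES-GCM key from a user passphrase; the passphrase and the
    // derived key never leave the device.
    async function deriveKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
      const material = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"],
      );
      return crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
        material,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"],
      );
    }

    // Encrypt location history locally; the server would store only this blob,
    // which it cannot read and cannot usefully hand over to a subpoena.
    async function encryptLocationHistory(passphrase: string, history: object) {
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const key = await deriveKey(passphrase, salt);
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(JSON.stringify(history)),
      );
      return { salt, iv, ciphertext };
    }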
Or they could just not upload it. No one trusts E2EE systems: Apple tries it, and everyone just accuses them of putting secret backdoors in.
Why are mapping services so stingy with custom lists of pinned locations? This announcement from Apple made me excited because Google Maps's saved lists have become unusable for me. I have 8,000+ favorites on Google Maps (used to mark places I've visited), and behavior above 500 is undefined. On mobile, Google Maps loads an arbitrary subset of 3,000 pins. Unfortunately, Apple is even worse, with a limit of 100 [2].
For Google Maps, they have an “official” limit of 500, where anything beyond that is not guaranteed. In practice, the current limit is 3,000.
I’m wondering if Apple Maps is doing something similar, setting a low official limit that they can walk back to in the future, so as to avoid legal responsibility.
> Denying that might feel good, but society didn't mandate 10 years of public education for shits and giggles.
K-12 was created by industrialists who wanted obedient workers and well-behaved citizens who won’t cause crime or instability that threatens the ruling class’s assets.
School prepares workers, not the actual people changing the world.
Most high school grads will know more about how the mitochondria is the powerhouse of the cell than about taxes and personal finance. That’s by design.
I was constantly failing classes in high school, but I ended up as an engineer at a FANG company without breaking a sweat. My cofounder didn’t finish high school and worked as an HFT trader.
People here who are judging people based on their high school accomplishments must be coming from a sheltered middle class background.
Shouldn't be surprising: in any group there will be a normal distribution. A person who didn't finish high school and succeeded isn't shocking, but falls way to the right of the distribution.
Of course they exist, but until you find one yourself one would think the same.
Having worked in GCP, it is not surprising at all to me that Google, despite having the resources, can’t productize an LLM.
It feels like the OP's question stems from a belief that all there is to training a good model is throwing lots of compute at it. As with any product, there's polishing and fine-tuning you need to do, both before and after training. Google can't do that. You also have to accept imperfection and clever tricks to that end, a.k.a. the startup / hacker mentality, which Google is also not positioned for. I think Meta has a good chance, though.
https://endtimes.dev/why-your-website-should-be-under-14kb-i...