phineyes's comments

This isn't unique to Vodafone. Google has also been slowly withdrawing from IXes globally in favor of PNIs and "VPPs" (verified peering providers). This only makes it harder for smaller networks to establish a presence on the internet and feels pretty anti-competitive.

On the flip side, IXes are becoming harder and less desirable to participate in: port fees are going up, useful networks are withdrawing, and low-quality network participants are joining and widening the blast radius. I'm not sure what the answer to this is, but this has not been a great year for the "open" internet.


Google gave a presentation on this that I think is helpful context for "why": https://nanog.org/events/nanog-94/content/5452/


Direct YouTube link: https://youtu.be/Yg-qV6Fktjw


> low quality network participants are joining

(Genuinely curious because I truly don't know in this context) What is a low quality network participant? One of the "bulletproof" hosts?


Malware, flapping, bogons, remote peers, etc


I thought Google was _always_ like this. At least going back to 2015 when I left the ISP game, peering with them was notoriously difficult if you didn't have the traffic volumes required. Our network suffered from asymmetric routing to Google and Netflix for years because they refused to accept our routes despite our checking all the boxes they require. Customers eventually left because other (larger) ISPs didn't have this issue.

I get why the enshittification of IXPs is occurring. Over the years, many small and careless ISPs have caused issues for IXPs (and their peers), based on what I've seen on mailing lists. It's hard work managing many hundreds or thousands of peers, to say nothing of the equipment cost, with multi-100Gbit ports becoming the norm for larger providers.


Why did your company expect Google to readily accept peering?

If there was such a large difference in volume, they would be intentionally making things more difficult for themselves.


Google publishes a peering policy. It's reasonable to expect that they will peer with you if you hit all the requirements in the policy.

Afaik, their requirements have never been judgement-based: just bandwidth minimums, port types, and locations. I would expect that they prioritize new connections in some way, so if you barely hit the criteria and are somewhere well served by transit, you'll be low priority. The requirements might also change before your connection gets set up, in which case you might not get connected because you no longer meet them. Otherwise, it seems like if you meet the requirements, send in the application, and have some patience, the peering connection should turn up eventually.

It's not like they have a mostly-balanced-flows requirement like Tier 1 ISPs usually do. Also, even in their current peering policy, they don't require presence in multiple metros; just substantial traffic (10 Gbps), fast ports (100G), and two PoPs in the same metro.


But it’s still Google’s choice?

They clearly didn’t publish a guarantee or an obligation that they will peer with anyone who meets the criteria.


Certainly, it's their choice. But I expect them to mostly follow through and peer with those networks that apply and meet their published requirements.


I know… I was the one who asked why that expectation formed…?


What sort of verification are they doing? Is this trend being pushed by the lack of proper security on BGP?


Notably, Chris Sacca (now a famous VC), when working at Google in the early 2000s, got in trouble for mentioning that telecoms were preventing the Internet from being net neutral, and instead of being fired, he was promoted by Larry Page and given a huge budget to work on it. I can't find a reference but I'm almost certain this is correct.


> IXes are becoming harder and less desirable to participate in

Could this be due to the rise of services like Equinix Fabric and Inter.link? Google doesn't need to peer directly with most networks anymore because there is always a middleman somewhere who can handle it, and for many businesses the convenience of a point-and-click web GUI outweighs whatever it costs?


21G on tc egress is slightly surprising to me. I'd like to see the program used for the benchmark. Was GSO accounted for? If you pop/pull headers by hand, you'll often kill GSO, which will result in a massive loss in throughput like this.


According to the FCC doc [1], the measurements may be conducted "between two defined points on a network, such as between a user’s interface device and the ISP’s network core or between the user interface device and the nearest internet exchange point where the ISP exchanges traffic with other networks". They require these measurements to be an average. I think this makes sense - it at least guarantees your speeds within the service provider's network.

[1] https://docs.fcc.gov/public/attachments/FCC-22-86A1.pdf


>They require these measurements to be an average. I think this makes sense - it at least guarantees your speeds within the service provider's network.

It is useful that the remote end is the point where the ISP exchanges traffic with other ISPs.

It is way less useful to talk about averages. The averages could be great, and that won't mean much when the connection suddenly turns slow or stops working altogether.

The important information (and it's missing here) is what the guaranteed speed is and what the guaranteed response times are for incidents.

E.g.: outside the US, many countries mandate minimums for guaranteed bandwidth, set as a percentage of the advertised speed. Similarly, they enforce maximum response times for incidents. There are of course different minimums for residential vs. business service, and ISPs will offer better tiers with better guaranteed bandwidth.

The FCC has the power to fix the insanity that is US broadband, but appears to be unmotivated to do so.


Let's not forget that we're also feeding all of our code into OpenAI Codex.


Many people like me like to paste stuff into an editor to strip the formatting.

And GitHub Copilot will send that tiny amount of data to who knows where.


Which I assume also means sensitive code that may be .gitignore'd is still being pushed up to OpenAI, i.e. secrets, passwords, API keys.


Parliament Hill has great views of the city - probably the most elevated viewpoint in North London. I went there yesterday; it gets super quiet after 10pm, unlike Primrose Hill.


I had to add the trailing period for HN to accept it as a valid URL. However, for me (using Chrome on macOS Monterey), Chrome ignores the period and renders it as "https://16777217/". I thought it would be the same on other browsers. Interesting.


I was more pointing out the numeric hostname, not the service itself.

16777217 is just the lowest number that corresponds with a routed IP address :)
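
For anyone who wants to check the arithmetic, the mapping is easy to verify with the standard library (Python here purely for illustration):

    import ipaddress

    # 16777217 == 0x01000001 == 1.0.0.1: the first host address above the
    # reserved 0.0.0.0/8 block, hence the lowest integer that maps to a
    # publicly routed IP.
    print(ipaddress.ip_address(16777217))        # -> 1.0.0.1
    print(int(ipaddress.ip_address("1.0.0.1")))  # -> 16777217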



I mentioned a "routed" address, i.e. an address that actually appears in the DFZ. 0.0.0.0/8 is not a range that IANA has assigned to any RIR - it's reserved for special use.


It's not just Apple Maps either! Even with location services disabled, try opening AirDrop (even on another device you own) while running a ping and you'll see the latency on your en0 interface spike while Apple tries to divide traffic between awdl0 and en0.


Hey HN - I created this new ID system, which combines Stripe-style (developer-friendly) IDs with Snowflake IDs (proven, scalable, and timestamped). I believe it combines the best of both worlds and allows for functionality to be added (e.g. secure, signed tokens) while still remaining short and identifiable. The main disadvantage (when compared to Snowflakes, for example) is that they're not sequenceable in a db, which is a trade-off we were happy to make.

Let me know your thoughts. :)


I created a similar project recently! Main difference is that I use base64 to encode unique Snowflake IDs, which are timestamped: https://github.com/hopinc/pika :)
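
Roughly the shape of it, as a toy Python sketch (simplified - the epoch, field widths, and exact encoding here don't match what pika actually ships; treat them as placeholder assumptions):

    import base64
    import struct
    import time

    EPOCH_MS = 1_640_995_200_000  # placeholder custom epoch: 2022-01-01 UTC

    def make_id(prefix: str, node_id: int, seq: int) -> str:
        # Classic Snowflake layout: 41 bits of ms-since-epoch, 10 bits of
        # node id, 12 bits of per-millisecond sequence, in one 64-bit int.
        ms = int(time.time() * 1000) - EPOCH_MS
        snowflake = (ms << 22) | ((node_id & 0x3FF) << 12) | (seq & 0xFFF)
        # URL-safe base64 of the 8 big-endian bytes, padding stripped,
        # behind a Stripe-style resource prefix.
        body = base64.urlsafe_b64encode(struct.pack(">Q", snowflake)).decode().rstrip("=")
        return f"{prefix}_{body}"

    print(make_id("user", node_id=1, seq=0))  # -> "user_" + 11 url-safe base64 chars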


In your example, the token contains the timestamp.

With prefixed-api-key, the hash of each token is stored in the database, so the timestamp can easily be added there.

Most of the time I don't see the utility in the client having the timestamp, outside of a scenario where you have a third party validate tokens on their own (e.g. JWT with RSA keys). The best way to see if a token has expired is to try to use it.


Yeah, pika is more of an ID system, vs. prefixed-api-key which seems to be oriented around just API keys - which makes sense. However, an advantage of timestamps in IDs is that you can derive uniqueness from the current timestamp, using bits which, in other ID systems, are usually just random - which feels like a waste imo. Also, knowing when a resource was created is very helpful when debugging.
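
That debugging benefit is concrete: with a Snowflake-style layout like the toy one sketched in my earlier comment, you can read the creation time straight back out of the ID (again just a sketch, assuming the same made-up epoch and field widths):

    import base64
    import struct
    from datetime import datetime, timezone

    EPOCH_MS = 1_640_995_200_000  # must match the epoch the ID was minted with

    def created_at(token: str) -> datetime:
        # Strip the prefix, re-pad the base64 body, and pull the 41 timestamp
        # bits back out of the 64-bit Snowflake.
        body = token.split("_", 1)[1]
        raw = base64.urlsafe_b64decode(body + "=" * (-len(body) % 4))
        snowflake = struct.unpack(">Q", raw)[0]
        return datetime.fromtimestamp(((snowflake >> 22) + EPOCH_MS) / 1000, tz=timezone.utc)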


I've used MongoDB quite a bit and I don't like that it has the timestamp in the ID. That's unnecessary information.

With UUIDs I prefer UUID4 to UUID1 most of the time.

I prefer to include only the relevant information, and for API keys the client doesn't usually need to know when the key was created.
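
For context, this is the bit I'm talking about: a default ObjectId carries its creation time in its first four bytes, and pymongo's bson package exposes it directly (quick sketch):

    from bson import ObjectId  # ships with pymongo

    oid = ObjectId()              # a driver-style, client-generated _id
    print(oid.generation_time)    # creation time as a UTC datetime
    print(int(str(oid)[:8], 16))  # the same value as a raw Unix timestamp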


Of course, if you don't like the IDs that MongoDB generates by default, you can always supply your own. The only constraint on the _id field is that it be unique, as we automatically apply a unique index. Effectively, if you have a unique key for your data, you should use _id for it, as it saves you an index. (We always index _id.)

(I work for MongoDB).


I don't agree. Your examples (UUID1, UUID4) are much longer strings and contain no useful information: UUID1 contains the device's MAC address, and UUID4 is just random bits... vs. a functional ID system like pika or Snowflakes, which make use of those bits by embedding a timestamp - something that might actually come in useful for some.

