
The Topics API doesn't seem to offer any abuse opportunity, since it's enforced entirely by the browser itself. There's nothing the JavaScript API can do that could give Google an advantage here, given that, as far as I know, there's only a single function that can be called in the first place.

I think more research would need to be conducted to see whether this change is actually anti-competitive or not.


I don't play Roblox, but I'm aware of their history of exploiting children.

Based on my preliminary research, it appears that these websites just ask you either for your Roblox username/password or for your Roblox cookie to authenticate.

I really doubt they'd actually be apathetic about removing these. Even though they do get richer off of it, if discovery finds evidence that they tried to cover this up, the damages will be endless...


> They charge an arm and a leg because they want to keep you and your data on their platform. When you move it you are breaking free.

This isn't remotely true.

The bandwidth alliance exists, and a lot of cloud companies are on the list: https://www.cloudflare.com/en-gb/bandwidth-alliance/

The actual answer is much more complicated. For example, Google Cloud offers two different bandwidth tiers, premium and standard. The calculation in the OP assumes premium, since that's the default option, but it's obviously much more expensive.

Google Cloud's "premium" bandwidth is much akin to AWS Global Accelerator, since it uses Google's own backbone network for as long as possible before exiting at the peering point nearest to whatever ISP your end user is on. AWS Global Accelerator has some other options available that make it fundamentally a different product, but its routing characteristics are much more similar to GCP premium bandwidth than anything else AWS offers.


That’s an impressive word salad, but sorry, no. Egress costs are high strictly to keep you there.

This is why egress is dirt cheap on other platforms outside the big 3 cloud providers.

This is also why ingress is free.


Ingress is free because it helps them balance their pipes, and it would be really shitty to charge for DDoS attacks. As far as I can tell, with the exception of some really expensive network environments (e.g. China), nobody has ever charged for ingress.

With the exception of OVH, none of the cheap providers the article lists have any kind of backbone network; they rely entirely on transit providers. It turns out backbone networks are expensive to operate!

OVH is the sole example of a provider with a backbone network, and admittedly it's pretty good. However, it's nowhere near as expansive as the big three's, and it falls flat in Asia (which is the hardest region to route traffic in). Also, OVH builds its datacenters so cheaply that one of them burnt to the ground in recent years...

(Cloudflare has a backbone too, but you have to pay a lot extra to use it. Linode uses the Akamai backbone now but that's a very recent acquisition and it's expected that Akamai will eventually raise costs significantly)

Yes, bandwidth is way too expensive on cloud providers. AWS Lightsail is proof of that. However, I see no reason to believe that this is purely for vendor lock-in, and nobody has been able to give any evidence of causation between the two beyond "well, it's so expensive!!!"


Again with the word salad.

It is very simple. If you move your data off cloud provider X, cloud provider X is losing revenue because you are doing things with your data off their platform.

They therefore charge high fees to move your data off the platform to discourage this behavior. Meaning you now need to use cloud provider X’s services to do anything with the data.

Attempts at vendor lock-in have been core to software service companies since they were born.


> It is very simple. If you move your data off cloud provider X, cloud provider X is losing revenue because you are doing things with your data off their platform.

Right, but if that were the case, why does the Bandwidth Alliance let you move data at a much lower cost for two of the three major cloud providers? If they _really_ cared so much about not letting you do processing with a third party, the Bandwidth Alliance wouldn't exist!

AWS is the sole hold-out here, and I think the way Cloudflare worded this makes it pretty clear that the Bandwidth Alliance is more a middle finger to AWS than anything else. But it also seems clear that the cloud companies aren't actively trying to make it costly to do data processing with a third party.

In fact if you want to move off GCP right now, Google will waive all egress fees to do so: https://cloud.google.com/blog/products/networking/eliminatin...


Egress toll is not about preventing migration off-platform. It's about preventing operation off-platform. They don't want you to come to GCP for a single product like Spanner or BigQuery or some high-tech ML/AI offering while most of your infra runs in big dumb baremetals at the Hetzner or OVH datacenter down the street. If you're coming for Spanner, you also have to buy their overpriced VMs, object storage, log storage and whatever else you need. That's where the real money is made.


Bandwidth alliance looks to be a political tool for cloud providers to save face. Not dissimilar to public companies paying token tribute to ESG which is all the rage these days.


> Ingress is free because it helps them balance their pipes

That isn't how business works. Companies maximize their profits and "balance" isn't a profit center. If it didn't benefit them in some customer leveraging way, they would charge for ingress.

You pay for everything. Either directly or indirectly. Indirectly often turns out to be much more expensive.


> That isn't how business works. Companies maximize their profits and "balance" isn't a profit center. If it didn't benefit them in some customer leveraging way, they would charge for ingress.

What I'm referring to is the practice of balancing peering ratios. That is, when you make transit/peering arrangements with other ISPs, some ISPs will charge more if the amount of data you're sending to them vs the amount of data you're receiving from them is not balanced. It is in Google's best financial interest to at least try to balance their pipes in this way.
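To make the ratio idea concrete, here's a toy sketch in Python. The 2:1 threshold is a made-up example for illustration; real settlement-free peering policies vary by ISP and are usually private.

```python
# Toy illustration of peering-ratio balance. The 2:1 threshold is an
# assumption for the sake of example, not any ISP's actual terms.

def ratio_ok(egress_tb: float, ingress_tb: float, max_ratio: float = 2.0) -> bool:
    """Return True if traffic is balanced enough to stay settlement-free."""
    if min(egress_tb, ingress_tb) == 0:
        return False
    return max(egress_tb, ingress_tb) / min(egress_tb, ingress_tb) <= max_ratio

# A content-heavy network sends far more than it receives...
print(ratio_ok(900, 100))  # 9:1, badly out of balance -> False
# ...which is one reason free ingress helps: it pulls the ratio back.
print(ratio_ok(900, 500))  # 1.8:1, within the assumed 2:1 -> True
```

Under this (hypothetical) policy, any ingress a content-heavy network can attract for free moves it toward the settlement-free side of the line.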


Google's "standard" bandwidth pricing is about 15%-45% cheaper than "premium", which is admittedly a significant discount, but it's still an order of magnitude more expensive than some of the other options on the list.
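For a rough sense of scale, a back-of-the-envelope comparison (the per-GB prices here are illustrative assumptions picked to match the rough ratios above, not current quotes from anyone's pricing page):

```python
# Back-of-the-envelope egress cost for 10 TB/month. Prices per GB are
# illustrative assumptions for the sake of comparison, not real quotes.
PRICES_PER_GB = {
    "gcp_premium": 0.12,
    "gcp_standard": 0.085,   # roughly 30% cheaper than premium
    "budget_host": 0.001,    # e.g. a ~1 EUR/TB transit-only provider
}

TB = 1000  # GB, decimal units, as cloud billing uses

for tier, price in PRICES_PER_GB.items():
    print(f"{tier}: ${10 * TB * price:,.0f}/month")
```

Even the cheaper standard tier still comes out well over an order of magnitude above the budget option under these assumed prices.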


> This isn't remotely true.

Nothing in your comment rejects or disproves the claim that egress costs are vendor lock-in.

Your link to the Bandwidth Alliance explicitly states that their justification for network costs is unloading infrastructure costs onto end users as data fees. That's their one and only justification. This is clearly a business decision that has no bearing on operational costs.

Some cloud providers charge nothing, others only start charging after hitting a high threshold from a single instance. Do they not operate infrastructure?

It's their business, it's their business model. Some restaurants charge you for a glass of tap water too. Let's not pretend they do it because of infrastructure costs.


> Some cloud providers charge nothing, others only start charging after hitting a high threshold from a single instance. Do they not operate infrastructure?

Yes, you do pay for the rest of their infrastructure when you rent servers from them...

I'm not saying the fees aren't extremely overpriced. I know what a gigabit port costs. But saying it's for vendor lock-in is just not true, and nobody has offered any actual proof that it is.


The bandwidth alliance exists to try to cut into AWS’ business. They could always have unilaterally cut rates closer to their cost but that margin was appealing, until they realized that they were never going to catch up with AWS without being cheaper.


This is also a fair take, but not a very compelling reason why bandwidth costs are vendor lock-in...


High egress makes it expensive both to leave and to use other services: if you use S3, you're probably putting processing and analysis in AWS, because using someone else's service would incur hefty egress charges.


> For example, Google Cloud offers two different bandwidth tiers: premium and standard. The calculation on the OP assumes premium since that's the default option, but obviously it's much more expensive.

Of course, the non-premium tier is IPv4-only, and only available in some locations.


If you're on GCP because you want v6 support you're probably in the wrong place :^)


It's better than AWS at IPv6, from what I can tell?

I didn't pick the hosting I'm working with, but GCP IPv6 for instances seems to work fine, other than it costs more?


Deno wasn't originally designed to be Node-compatible, but I think they realized nobody would want to switch to it, because node is already so prevalent...


I think the main appeal of projects like Bun and Deno is the built-in tooling for building/bundling modern typescript applications without requiring dozens of dependencies for even a basic hello world app.

If node.js decided to include functionality similar to what is available on Bun/Deno, both projects would probably lose traction quickly.


Being an old dog in the prairie, I see the outcome of these projects being like egcs, and io.js.

They create some rift, make the key incumbent improve itself, and then the world moves on as if nothing happened.


> If node.js decided to include functionality similar to what is available on Bun/Deno, both projects would probably lose traction quickly.

I believe this too. The big appeal for me is not having to install typescript, eslint, jest AND then set up all the configs.

deno has nice defaults, and the importing via URLs and the browser-compatible APIs do make deno very tempting


that feels like a really weak value prop to me. how often do you have to install that stuff? how hard is it actually? can you really not use, e.g. for react, the typical vite starter and it's done?


The other side of it is if you want to distribute your code not as a server. If you write a CLI in Node + TS + ... then it might be pretty fiddly for someone to clone that repo and get it running locally. You'll certainly have to document exactly what's needed.

Whereas with Deno you can compile to a single binary and let them install that if they trust you. Or they can `deno install https://raw.githubusercontent.com/.../cli.ts`, or clone the repo and just run `deno task install` or `deno task run`. For those they need to install Deno, but nothing else.


> then it might be pretty fiddly for someone to clone that repo and get it running locally

with node + TS, it is straightforward (and common) to generate JS output at publish time for distribution. then, using the CLI tool or whatever is only a `npm install -g <pkg>` away, no extra steps.

sure it's not a single binary, but I'd argue _most_ users of a general CLI utility don't necessarily care about this.


For one-off scripts: every time.

So Deno is better at small scripts written in Typescript than Node. Then, the question becomes, if you're going to have Deno installed and if it works well enough to replace Node, why keep Node?


then you have to define "works well enough to replace Node"

i was excited about bun too, until v1's "drop-in node replacement"

that was in no way a drop-in node replacement. using that would be the fastest way to kill a business with its terrible bugs and rough edges.

i used to be really excited about deno, but now i think the tradeoffs aren't going to be worth it for mass adoption. i sometimes write servers in go. now that i have go installed, should i use it for all my servers? no, it's just another tool with different trade-offs. most times, node will suit my project better.


It is more a matter of trust than effort, e.g. being less exposed to supply chain attacks.


> how often do you have to install that stuff?

> how hard is it actually?

> can you really not use, e.g. for react, the typical vite starter and it's done?

I have to install that stuff every time I'm starting a new project, switching to a new project, or creating a one-off script.

It's hard when creating a new project: there's always at least one flag that needs to be found and set differently from the previous project, for some random reason, every single time.

It's hard when switching to a new project, because you have to figure out which version of node you're supposed to be running; dependencies behave differently across node versions and across computers. It might even silently work for you without the right version, meaning you keep working on it, and then your commits don't work for yourself later, or for others now or later. This leads to one of two possibilities:

1. A long job of unwinding everything to figure out what the versions should have been the whole time.

2. A lot of trial and error with version numbers in the package and lock files trying to figure out which set of dependencies work for you, work for others, and don't break the current project.

We also can't use the typical community templates because they always become unmaintained after 2 years or so.

---------------------------

Why I like Deno:

- Stupid easy installation (single binary) with included updater

- Secure by default

- TS out of the box (including in repl making one-off-scripts super easy to get started)

- Settings are already correct by default.

-- and if you ever need to touch settings for massive projects, they all sit in one file, so no more tsconfig/package.json/package-lock/yarn.lock/prettier/babel/eslintrc/webpack/etc... And since the settings are already sensible by default, you only need to provide overrides, not entire files, so the end result for a complex project is usually small (example link: https://docs.deno.com/runtime/manual/getting_started/configu...)

- Comes with builtin STD meaning I don't need to mess-around with dependencies

- Builtin utilities are actually good so I don't need to mess-around with the rest of the ecosystem. No jest/vitest, no webpack/rollup, no eslint/prettier/biome (but you can keep using your editor versions just fine).

- Since it came after the require -> import transition, basically everything you're going to be doing is already using the more sensible es modules.
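For illustration, a hypothetical minimal deno.json holding only overrides (the option names follow the Deno docs, but treat the exact values as made up):

```jsonc
{
  // Only overrides live here; everything else uses Deno's defaults.
  "fmt": { "lineWidth": 100, "singleQuote": true },
  "lint": { "rules": { "exclude": ["no-explicit-any"] } },
  "tasks": { "dev": "deno run --watch main.ts" }
}
```

That one small file stands in for the whole tsconfig/eslintrc/prettier/jest config pile.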


it's not that much of an issue; i created a template repo on github that i can make new projects from, but now i have to maintain it.


Except they are playing catch up with what Microsoft says Typescript is supposed to mean.

I'd rather have pure JavaScript, or use TypeScript from source, without having to figure out whether a type analysis bug comes from me or from a tool that's catching up to TypeScript vlatest.


> without having to figure out if a type analysis bug is from me, or the tool

Deno uses regular TypeScript for static type checking, it's just built-in [1]. Bun also doesn't do type checking by itself; they recommend using tsc [2].

[1] https://docs.deno.com/runtime/manual/advanced/typescript/faq...

[2] https://bun.sh/docs/runtime/typescript#running-ts-files


Until they bundle Microsoft's compiler, it isn't the same thing.

It is like tracking down if a C bug in GCC relates to developer, or GCC understanding of ISO C documentation.

Just this alone proves it isn't the same thing:

=> Deno tries to keep up to date with general releases of TypeScript, providing them in the next patch or minor release of Deno.


The “Microsoft compiler” is exactly what tsc is, and is exactly what is bundled with Deno. Sorry if that wasn't clear.


Not sure what modern typescript means, but you only need one or two dependencies (esbuild and tsc), unless you're doing something more involved, in which case deno alone might not work either.


OTOH the ways that you can improve upon node's shortcomings while staying compatible with it are limited. Bun is taking the pragmatic approach of providing fast drop-in replacements for node, npm and other standard tools, while Deno was the original creator of node going "if I started node today, what would I do differently?". So, different approaches...


CloudFlare domains is a bit of an underbaked product. It's not bad, but you should be aware that you cannot change your root nameservers; they must be CloudFlare's. The only way to change them is to switch registrars.


UU Booster, the service I currently use for gaming, is operated by NetEase, a giant in the Chinese online gaming space. It's fully legal, no issues whatsoever.

Also you can get roaming SIM cards or even eSIMs, which connect to APNs overseas.

You can also get Alibaba Cloud private networking connection between a region inside of China and a region outside. They use private lines so there's no GFW involved. My understanding is that you need an international real name verified account to do this, but after that you basically have an uncensored line that's also much more stable than connections that have to go through the GFW. I know of a US company that uses this to connect their Chinese workers to their central office, and again it's fully legal once you get an ICP license.


I'm sorry, but this article is BS, and its associated "source" seems to be an SEO-stuffing website that smells very much like it was written by AI.

Is there any actual evidence that this is happening? This NYPost article and their "source" material seems to be the only things that reference such a practice.


Some quick googling does seem to lend at least some basic credibility: https://www.reddit.com/r/Tiktokhelp/s/Y4RCSS6jqk


I was searching mostly under the News section of Google, although this seems to just be a TikTok feature ("restricted mode") that TikTok may have accidentally enabled for some users?

Regardless, the article is pretty misleading...


Nobody wants to assume fraud in science; unless you have unassailable evidence that someone has committed fraud, you really cannot risk your own reputation by dishing out allegations. The solution to this is trust-but-verify, but verifying data would require twice the amount of work, which not every lab has the funding for, even for breakthrough research.

The sad truth is that a lot of science is built on trust. The psychology replication crisis is proof of that: there are so many ways to manipulate data to make it look like you have a result, and so much pressure to get a result, that people resort to the former to get the latter.

There are plenty of historical scandals where scientific fraud took way too long to investigate; Victor Ninov and Hwang Woo-suk are two examples of just how far fraud can go before it's found.

The article here is comparatively tame next to how far Hwang and Ninov got.


Academia is funded to the tune of billions of dollars a year. If they don't have the funds to check their own work then it is a damning indictment of the funding process, one that suggests governments are utterly unqualified to be funding research at all.

But in this case funding doesn't seem to be the problem. The author says why she didn't check: she wanted to be famous.


Don't you think it's funny that we pay academics a fraction of what we pay tech workers, and yet we expect so much more from them?


No, because "we" don't pay academics; universities do. It's their decision to hire more professors instead of paying the existing ones more, their decision to create siloed departments, etc. The system has plenty of money, but universities and grant agencies allocate it very badly, despite being mostly staffed by academics and ex-academics.


The internet isn't really decentralized and realistically it cannot be.

Submarine cables are owned by companies, Tier 1 ISPs provide the majority of routing, and you really cannot prevent any of this.

Centralized control is somewhat required, because submarine cables cost money, transit costs money, and small companies simply do not have the capital for that.


Theoretically you don't need to reveal your identity to prove that you're human. You can use a zero knowledge proof instead, likely attached to something like an EU Digital ID, which would allow you to remain anonymous and also prove that you're human.
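As a toy sketch of the general idea, here's the flow using blind RSA signatures rather than a full zero-knowledge system, with textbook-sized, utterly insecure parameters. Everything here (the key, the message, the blinding factor) is made up for illustration; a real deployment would use something like Privacy Pass tokens or BBS+ credentials, not this.

```python
# Toy blind-signature sketch of an anonymous "I'm human" credential.
# Textbook RSA with tiny insecure numbers, purely to show that the
# issuer can sign a credential without ever seeing it, so the final
# signature can't be linked back to the issuing session.

n, e, d = 3233, 17, 2753      # toy RSA key (p=61, q=53); issuer holds d

# 1. User picks a credential value and blinds it with a random factor r.
m = 1234                      # stand-in for a hash of the user's credential
r = 7                         # random blinding factor, coprime to n
blinded = (m * pow(r, e, n)) % n

# 2. Issuer signs the blinded value -- it never learns m.
blind_sig = pow(blinded, d, n)

# 3. User unblinds, recovering a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any verifier can check the signature against the public key,
#    and the issuer cannot connect sig to the session it signed in.
print("credential verifies:", pow(sig, e, n) == m)
```

The unlinkability is the point: the issuer (say, a government ID system) attests "this is a real human" once, but the token the user later presents reveals nothing about which human.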


How could renting out one's ID to provide access to bots for spamming/manipulation be avoided then?


A simple zero-knowledge credential system isn't sufficient. It would need to embed some kind of protection to limit how often a credential could be used, and to detect usage of the same credential from multiple (implausibly far apart) IP addresses. There would need to be extremely sophisticated reputation scoring and blocklisting to quickly catch people who built fake identities or stole them. And even with every one of those protections, a lot of them would still be stolen and abused.


Yes, and I wonder how feasible it is to do that while still staying anonymous.

And if you do develop this very sophisticated reputation-scoring system, what happens when bad actors find a way to abuse it anyway, e.g. they pay desperate people for their IDs and then stay just barely within the limits?

Would you be able to easily iterate on the system when that happens to make it more secure?

But if you also track IP addresses, doesn't that already mean a loss of anonymity?

And ultimately, with something like an IP address, a bad actor could get you to download an app and then simply use your IP address to post content/propaganda under your ID and IP.

It would be more expensive for bad actors, but there was also a period when Facebook accounts were bought and sold, with a very active market for them. I imagine teenagers, for example, are really easily tricked into selling their credentials.

Reddit and other social media accounts are also sold a lot, so there would definitely be a market for this.


There are a lot of risks here and I think it’s very challenging to build something anonymous that can deal with (say) Google’s current level of fraudulent behavior, let alone what we’re likely to see in the future.

Regarding the IP address question, I’d assume you could decouple the IP address verification portions from the “know who the person is” portions with some clever multi-party computation. Someone always has to know your IP address, but it doesn’t have to be the same person you’re talking to. (Think of Tor as an inspiration here.)


Slap on the wrist from the stage director.

