Hacker News | henry700's comments

Cloudflare doesn't cost money, you pay the price with your soul.


You can personalize the CSS of any website with extensions. The correct comparison here would be talking about disabling the capability to do this.


I can't hear you over the literal backbone of the internet running in Rust inside Cloudflare right now


Funny, since their GitHub has more Go and C++ than Rust, yet those languages get no credit when Cloudflare is held up as a Rust shop by loud Rustaceans.

On that topic, Go (which is of similar age to Rust) is the backbone of Docker and Kubernetes. C++ is the backbone of Unreal Engine, Google Search, HFT, Nvidia, etc. For everything Rust is used for, there are a dozen alternatives written in languages with less annoying fans, languages that don't force you to do [manual name mangling](https://docs.rs/serde/latest/serde/trait.Deserializer.html) or [copy pasting](https://github.com/nautechsystems/nautilus_trader/blob/maste...) and macro nonsense to compensate for poor ergonomics. Turns out "rewrite it, you're doing it wrong" and "<convenient feature from other language> is an antipattern" are not good answers when real money is on the line. Perhaps Rustaceans should stick to what they're good at (GitHub surveys and spamming threads discussing unrelated topics).


Cool initiative. Just watch out for bullshit. Redis's reply from a few months ago to those benchmarks: https://redis.com/blog/redis-architecture-13-years-later/

(Originally posted 3 months ago on https://news.ycombinator.com/item?id=34231033)


Yep. Dragonfly compares a single-threaded Redis with a multi-threaded Dragonfly. It’s an extremely misleading benchmark.


How is it misleading when the whole point is that Redis can only be single threaded†? That's why Dragonfly (claims) to scale better. If anything, it's the Redis rebuttal that comes across as misleading; the posted announcement is very up front that Dragonfly's value proposition is that you get vertical scaling for free without having the additional ops overhead of a Redis cluster, which is very much not free in terms of maintenance and opportunity cost.

†: Redis 6 added threads, but AFAIK this is only for handling connection I/O. Actual database access is still single threaded. The only way I'm aware of to scale Redis is via clustering.
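The clustering route partitions the keyspace rather than the work: each key maps to one of 16384 hash slots (CRC16 of the key, modulo 16384), and the slots are divided among the nodes. A minimal Python sketch of that slot computation, following the Redis Cluster spec; `key_hash_slot` is just an illustrative name:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots; a non-empty '{...}' hash tag
    means only the tag is hashed, pinning related keys to one slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # tag must be non-empty
            key = key[start + 1 : end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land on the same slot (and thus the same node):
print(key_hash_slot("{user42}.name") == key_hash_slot("{user42}.cart"))  # True
```

This is also where the ops overhead comes from: multi-key operations only work within a single slot, which is why hash tags exist at all.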


> How is it misleading

It's misleading because the right comparison would be Redis Cluster vs. Dragonfly. There's no speed-up if the Redis user isn't fully saturating a single core. The real question is why it's only 25x faster on a 64-vCPU machine. Why isn't it 64x? Does this mean it's 60% slower when the request volume is below the needs of a single-threaded Redis?
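The back-of-envelope math behind that question, using only the numbers quoted in the benchmark claim (25x on 64 vCPUs):

```python
# How much of ideal linear scaling does a 25x speedup on 64 vCPUs achieve?
cores = 64
speedup = 25.0
efficiency = speedup / cores  # fraction of ideal 64x scaling
print(f"{efficiency:.0%}")    # prints "39%"
```

So each additional core is delivering well under half of linear scaling, which is what the parent is poking at.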

> Dragonfly's value proposition

Dragonfly has zero value proposition other than a ticking-time-bomb of pricing fuckery when they're forced to yield a return on that $21M investment.


It compares a process with a listening port to another process with a listening port. To give another example: nobody compares MinIO with a bunch of disks you can write to separately, and probably more efficiently.


hmm, Redis Labs are setting up a cluster of 40 Redis processes on the same instance. It would be extremely difficult for anyone else to do that with Redis OSS.


But

"For the last 15 years, Redis has been the primary technology for developers looking to provide a real-time experience to their users. Over this period, the amount of data the average application uses has increased dramatically, as has the available hardware to serve that data. Readily available cloud instances today have at least 10X the CPUs and 100X more memory than their equivalent counterparts had 15 years ago. However, the single-threaded design of Redis has not evolved to meet modern data demands nor to take full advantage of modern hardware."

That's not what they are saying is wrong with Redis. Is Redis really 'antique tech'? Arguably, concurrent processing with a scale-up-only approach is a poor fit for "modern hardware".

So yes, you are correct: Redis from GitHub requires knowledge and (your) code to make n instances work together (whether on the same node or not). But the claim that this is the case for "anyone else [but Redis Labs]" is questionable.

From a certain architectural camp, the pin-to-core, process-in-parallel approach is optimal for [scaling on] "modern hardware". Salvatore can correct me on this, but I don't recall that being a consideration in the early days; it turned out to be a good choice regardless. Some of the Redis APIs, however, require dataset ensemble participation (any kind of total-order semantics over the partitioned set), which is what is "difficult" to do effectively.

So basically any startup that can do that should theoretically be able to squeeze more performance from their SaaS infrastructure than by running a Dragonfly-type architecture. Bonus, as pointed out by Redis Labs: lots of parallel k/v processes can bust out of the max-jumbo-box should you ever need that to happen (for 'reliability', for example).


They chose those numbers because they wanted a fair comparison with their benchmark instance of AWS c6gn.16xlarge. Says so in the 4th paragraph.


I think using the word "misleading" is itself "misleading". Dragonfly hides complexity. Docker hid the complexity of managing cgroups and deploying applications. S3 hid the complexity of writing to separate disks. But you don't call S3 or MinIO misleading because they store stuff similarly to how a disk stores files. Dragonfly hides the complexity of managing a bunch of processes on the same instance, and the outcome is a cheaper production stack. What do you think has higher effective memory capacity on c6gn.16xlarge: a single process using all the memory, or 40 processes you need to provision independently?


It's misleading because, practically speaking, the type of people who are after the performance you advertise are running clusters to begin with. So what you are selling is just a simplified stack that lets you avoid managing one more "system". That's fair, but you could mention that? Or at least acknowledge that if you repeat these tests with Redis Cluster the results will be wildly different and you won't have those crazy-looking charts.

For example, it's like me claiming that my new Python web framework is X times faster than Flask because it comes bundled with uWSGI. Yes, technically mine is faster, but it's not a fair comparison.


What's odd is that they probably saw the reply but they still chose to re-iterate their misleading claims rather than not mentioning anything.


It's decent if you've been in the loop enough to use it. It's not built-in. It's a good practice, for sure, but it not being built-in at the language level makes it insanely easy for a newcomer to just... Not use virtualenvs at all.

Contrast that with JavaScript/Node.js/npm/Yarn/whatever-you-want-to-call-server-JS, which maintains a local folder of dependencies for your project instead of installing everything globally by default.

Heck, a virtual env is literally a bundled Python version with the path variables overridden so that the global folder is actually a project folder, basically tricking Python into doing things The Correct Way.
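That "tricking" is visible from inside the interpreter: in a venv, `sys.prefix` is redirected to the project-local directory while `sys.base_prefix` still points at the real installation. A small sketch (`in_virtualenv` is just an illustrative name):

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the env directory,
    # while sys.base_prefix keeps the original installation path.
    # Outside a venv the two are equal.
    return sys.prefix != sys.base_prefix

print("venv active:", in_virtualenv())
print("packages install under:", sys.prefix)
```

Which is also why forgetting to activate one silently sends `pip install` to the global site-packages instead.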


Virtualenvs have been part of the standard library since v3.3[0], and most READMEs do reference them, btw.

[0]: https://docs.python.org/3/library/venv.html


Well this text at least accomplishes a purpose... The purpose of showing us that not every philosopher is particularly smart


Cool, but terrifyingly bad FPS



Thanks Henry!



Sometimes it's useful to review the basics; this was a good read. It's geared towards actual network routers, but it kind of applies to software routing infrastructure as well, depending on the protocol.

