
While I really appreciate the Workers platform, the "eliminated cold starts" advertising has always bothered me.

This is a curl request from my machine right now to an SSR React app hosted on a CF Worker:

```
DNS lookup:     0.296826s
Connect:        0.320031s
Start transfer: 2.710684s
Total:          2.710969s
```

Second request:

```
DNS lookup:     0.002970s
Connect:        0.015917s
Start transfer: 0.176399s
Total:          0.176621s
```

2.5 seconds difference.


Does this app make any network requests that might have their own cold start or caching effects?

2.5 seconds seems way too long to be attributed to the Worker cold start alone.


It makes requests to an API server deployed on k8s, which doesn't have a cold start. Clearly, some caching by the runtime and framework is involved here.

My point is that "cold start" is often more than just booting VM instance.

And I've noticed not everybody understands this. I used to have conversations in which people argued that there is no difference between deploying a web frontend to Cloudflare and a stateful solution, because of this confusing advertising.


Likely reusing http keep-alive connections.


I'm not well versed in curl's design, but curious: is your first connection handling the TLS handshake while the second relies on the previously established session?


I'm not very well versed in curl's design either, but AFAIK it does reuse connections, though only within the same process (e.g., downloading 10 files with one command). In this case it shouldn't be reusing them, as I ran two separate commands. I should have included TLS handshake time in the output, though. You can see it here (overall time is lower because I hit a preview env that is slightly different from staging/prod):

First hit:

```
DNS Lookup:             0.026284s
Connect (TCP):          0.036498s
Time app connect (TLS): 0.059136s
Start Transfer:         1.282819s
Total:                  1.282928s
```

Second hit:

```
DNS Lookup:             0.003575s
Connect (TCP):          0.016697s
Time app connect (TLS): 0.032679s
Start Transfer:         0.242647s
Total:                  0.242733s
```

Metrics description:

time_namelookup: The time, in seconds, it took from the start until the name resolving was completed.

time_connect: The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed.

time_appconnect: The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed.

time_starttransfer: The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes time_pretransfer and also the time the server needed to calculate the result.
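For reference, timings like the ones above can be captured with curl's `-w`/`--write-out` option. A minimal sketch; `https://example.com/` is a placeholder for the actual Worker URL:

```shell
# Print per-phase timings for a single request, discarding the body.
# The format variables match the metric descriptions above.
fmt='DNS lookup: %{time_namelookup}s
Connect (TCP): %{time_connect}s
App connect (TLS): %{time_appconnect}s
Start transfer: %{time_starttransfer}s
Total: %{time_total}s
'
out=$(curl -s -o /dev/null -w "$fmt" https://example.com/)
echo "$out"
```

Running it twice in separate processes, as above, forces a fresh TCP and TLS handshake each time.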


TLS handshakes (outside of embedded hardware) should be measured in milliseconds, not seconds.

Edit: you can kind of tell this from the connect timings listed above. TLS is faster the second time around, but not enough to make much difference to the overall speedup


I think you're right that TLS doesn't explain the difference shown above, but for completeness: TLS 1.3 can reduce the round trips from 3 to 2 on session resumption. [1] Depending on your Internet connection, that could be a lot more than milliseconds. I don't think `curl` uses it by default though.

[1] https://blog.cloudflare.com/introducing-0-rtt/


I would say Cloudflare "eliminated cold starts" in the context of bringing the server online, not in the form of rendering + caching the SSR page.


I like using postgres for everything, it lets me simplify infrastructure. But using it as a cache is a bit concerning in terms of reliability, in my opinion.

I have witnessed many incidents where the DB was considerably degraded. However, thanks to the cache in Redis/Memcached, a large part of the requests could still be served with a minimal increase in latency. If I were serving the cache from the same DB instance, I guess it would degrade too whenever there are problems with the DB.


Select by id is fast. If you’re using it as a cache and not doing select by id then it’s not a cache.
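For illustration, a Postgres-as-cache setup typically keeps every lookup on the primary key. A minimal sketch (table and column names are made up):

```sql
-- UNLOGGED skips WAL writes, trading durability for speed,
-- which is usually acceptable for a cache.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- A cache hit is a single primary-key lookup.
SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();
```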


Absolutely. But when PG is running out of open connections or has already consumed all available CPU, even the simplest query will struggle.


You can have a separate connection pool for 'cache' requests. You shouldn't have too many PG connections open anyway, on the order of the number of CPUs.
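One way to sketch this is with PgBouncer, giving cache traffic its own pool against the same physical database. Names and pool sizes below are made up for illustration:

```ini
[databases]
; regular application traffic
app   = host=127.0.0.1 port=5432 dbname=app pool_size=20
; same physical database, but a separate pool, so cache
; lookups aren't starved when the app pool is exhausted
cache = host=127.0.0.1 port=5432 dbname=app pool_size=10

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type   = md5
pool_mode   = transaction
```

Clients doing cache lookups connect to the `cache` logical database, everything else to `app`.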


> But when PG is running out of open connections or has already consumed all available CPU even the simplest query will struggle.

I don't think it is reasonable to assume or even believe that connection exhaustion is an issue specific to Postgres. If you take the time to learn about the topic, you won't need to spend too much time before stumbling upon Redis and connection pool exhaustion issues.


> But using it as a cache is a bit concerning in terms of reliability, in my opinion.

This was the very first time I heard anyone even suggest that storing data in Postgres was a concern in terms of reliability, and I doubt you are the only person in the whole world who has access to critical insight into the matter.

Is it possible that your prior beliefs are unsound and unsubstantiated?

> I have witnessed many incidents when DB was considerably degrading.

This vague anecdote is meaningless. Do you actually have any concrete scenario in mind? Because anyone can make any system "considerably degrading", even Redis, if they make enough mistakes.


No need to be so combative. Take a chill pill, zoom out and look at the reliability of the entire system and its services rather than the db in isolation. If postgres has issues, it can affect the reliability of the service further if it's also running the cache.

Besides, having the cache on separate hardware can reduce the impact on the db on spikes, which can also factor into reliability.

Having more headroom for memory and CPU can mean that you never reach the load where it turns into service degradation on the same hardware.

Obviously a purpose-built tool can perform better for a specific use-case than the swiss army knife. Which is not to diss on the latter.


> No need to be so combative.

You're confusing being "combative" with asking you to substantiate your extraordinary claims. You opted to make some outlandish and very broad sweeping statements, and when asked to provide any degree of substance, you resorted to talk about "chill pills"? What does that say about the substance of your claims?

> If postgres has issues, it can affect the reliability of the service further if it's also running the cache.

That assertion is meaningless, isn't it? I mean, isn't that the basis of any distributed systems analysis? That if a component has issues, it can affect the reliability of the whole system? Whether the component in question is Redis, Postgres, doesn't that always hold true?

> Besides, having the cache on separate hardware can reduce the impact on the db on spikes, which can also factor into reliability.

Again, isn't this assertion pointless? I mean, it holds true whether it's Postgres and Redis, doesn't it?

> Having more headroom for memory and CPU can mean that you never reach the load where ot turns to service degradation on the same hw.

Again, this claim is not specific to any specific service. It's meaningless to make this sort of claim to single out either Redis or Postgres.

> Obviously a purpose-built tool can perform better for a specific use-case than the swiss army knife. Which is not to diss on the latter.

Is it obvious, though? There is far more to life than synthetic benchmarks. In fact, the whole point of this sort of comparison is that for some scenarios a dedicated memory cache does not offer any tangible advantage over just using a vanilla RDBMS.

This reads as some naive auto enthusiasts claiming that a Formula 1 car is obviously better than a Volkswagen Golf because they read somewhere they go way faster, but in reality what they use the car for is to drive to the supermarket.


> You opted to make some outlandish and very broad sweeping statements, and when asked to provide any degree of substance, you resorted to talk about "chill pills"?

You are not replying to the OP here. Maybe it's time for a little reflection?


> You're confusing being "combative" with asking you to substantiate your extraordinary claims. You opted to make some outlandish and very broad sweeping statements, and when asked to provide any degree of substance, you resorted to talk about "chill pills"?

What are these "extraordinary claims" you speak of? I believe it's you who are confusing me with someone else. I am not the GP. You appear to be tilting at windmills.


> what are these "extraordinary claims" you speak of?

The claim that using postgres to store data, such as a cache, "is a bit concerning in terms of reliability".


Can you point to where I made that claim?


> This was the very first time I heard anyone even suggest that storing data in Postgres was a concern in terms of reliability

You seem to be reading "reliability" as "durability", when I believe the parent post meant "availability" in this context

> Do you actually have any concrete scenario in mind? Because anyone can make any system "considerably degrading", even Redis

And even Postgres. It can also happen due to seemingly random events like unusual load or network issues. What do you find outlandish about the scenario of a database server being unavailable/degraded and the cache service not being?


Inferring one meaning for “reliability” when the original post is obviously using a different meaning suggests LLM use.

This is a class of error a human is extremely unlikely to make.


That is exactly the service I was hoping Cloudflare would provide. Simple binding using wrangler is a real quality-of-life upgrade when starting new projects.


I tried it a few days ago; a privacy-focused Chrome that doesn't break the browsing experience resonates with me. The team seems to care about quality, and the browser felt good overall.

A few things that made me go back to Brave:

- No vertical tabs
- Sync with mobile
- I haven't found how to show the bookmarks bar only on a blank page
- Little customization for the new tab page (might be fixable using extensions)

I'll definitely follow the project as Brave is far from perfect too.


Regarding the bookmarks bar, Settings / Appearance / Show Bookmarks Bar. If the setting is off, the bar only appears on new tabs. I found that by accident.


Thank you! Indeed, it is shown on a new tab in that case!

