
I'm not sure there is a consensus on ice baths at this stage? Some athletes swear by them, some say they're useless, and last I looked the science was ambivalent (effects ranging from small to none, either positive or negative).


There was a meta study published in February.

https://onlinelibrary.wiley.com/doi/10.1002/ejsc.12074


Fwiw this is how I use Loki most of the time. Pick an app label, pick a time period, look at raw logs. The LogQL for this ends up something like `{app="workload-foo"}`. Loki is excellent at that.

Then, if I know which pod, I'll filter down to it with `{pod="workload-foo-1234"}`; sometimes I'll search for a specific term (an error message, etc.) with `{pod="workload-foo-1234"} |= "error message"` and then look at the logs around that. There's really no point writing complicated queries unless you need to.
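
If you want the same workflow outside Grafana, here's a minimal sketch of hitting Loki's HTTP query_range API with the same progressively narrower LogQL selectors; the Loki URL and label values are assumptions for illustration, not anything specific to a real setup:

```python
# Minimal sketch: run progressively narrower LogQL queries against Loki's
# query_range endpoint. The Loki URL and label values are illustrative.
import time

import requests

LOKI_URL = "http://localhost:3100"  # assumed local Loki

def query_loki(logql, minutes=60, limit=1000):
    """Return the raw log streams matching `logql` over the last `minutes`."""
    now = time.time()
    resp = requests.get(
        f"{LOKI_URL}/loki/api/v1/query_range",
        params={
            "query": logql,
            "start": int((now - minutes * 60) * 1e9),  # nanosecond epoch
            "end": int(now * 1e9),
            "limit": limit,
            "direction": "backward",  # newest lines first
        },
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# 1. Everything for the app label.
streams = query_loki('{app="workload-foo"}')
# 2. Narrow down to a specific pod.
streams = query_loki('{pod="workload-foo-1234"}')
# 3. Narrow further with a line filter.
streams = query_loki('{pod="workload-foo-1234"} |= "error message"')
```

Each entry in `result` is a stream: its label set plus the matching (timestamp, line) pairs.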


That will, if I understand correctly, get the logs for one pod, not for one process. For example if the pod restarted 10 times you will not get 10 separate files from that query.


You'd have the label shown in the output that indicates the log line in question is from a different process/pod/container/host/whatever.


How so? The pod, container, and host labels should be the same for a process that crashes and is automatically restarted, no?


Even more than that, if you are running multiple instances of the app in multiple pods concurrently, then all of those logs will be joined together.


I'm not sure I really understand this.

If you mean one instance in each pod, then each should be labelled differently and you can filter down to one instance.

If you mean running multiple instances in each pod (and container?), then the standard kubectl log output will also have them all joined together. For both of those cases, you would need to add another unique identifier to each line, or run each instance in a separate container so that the combination of pod name and container name acts as the unique identifier (e.g. `{pod="workload-foo-1234", container="app"}`), as sketched below.
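
As a rough illustration of how those labels let you pull a mixed result apart client-side, here's a sketch against Loki's HTTP API; it assumes the log agent attaches `pod` and `container` labels, and the endpoint, selector and label values are illustrative:

```python
# Rough sketch: split a mixed Loki result into per-(pod, container) buckets.
# Assumes the shipping agent attaches "pod" and "container" labels; the
# endpoint and selector are illustrative.
from collections import defaultdict

import requests

resp = requests.get(
    "http://localhost:3100/loki/api/v1/query_range",  # assumed local Loki
    params={"query": '{app="workload-foo"}', "limit": 5000},
)
resp.raise_for_status()

lines_by_instance = defaultdict(list)
for stream in resp.json()["data"]["result"]:
    labels = stream["stream"]  # the label set identifying this stream
    key = (labels.get("pod", "?"), labels.get("container", "?"))
    for ts_ns, line in stream["values"]:
        lines_by_instance[key].append((int(ts_ns), line))

# One bucket per pod/container combination, i.e. the same split you'd get
# by querying {pod="...", container="..."} directly in LogQL.
for (pod, container), lines in sorted(lines_by_instance.items()):
    print(f"{pod}/{container}: {len(lines)} lines")
```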


That's definitely false


Why? If the pod is defined to spawn multiple containers, and each container runs the same application, then this seems true to me? Unless you would add an additional filter on the container name.


Well, yes, obviously you have to filter on container if you want a single container (just like `kubectl logs -l <...>`). The parent comment was phrased as a limitation of Loki, but of course if you request all logs for an application you'll get all of its containers, and if you request all logs for an application or a namespace you'll get exactly that.

Not being able to filter between multiple processes or multiple restarts of a container was a genuine issue; not being able to filter between pods of a deployment is not.


I actually didn't understand it being phrased as a limitation. It could also be a feature - maybe one would prefer to look at logs for multiple services within a single query?

Anyhow, the nice thing about the system is that you can get whatever view you prefer, as long as the logs are annotated correctly (with pod and container IDs).


No, once again, the trouble is that you can't get the logs for a specific execution. If a container in your pod restarts, that is invisible to Loki; you have to look for whatever the container writes on startup and cut there manually. If you want a specific process in your container, its logs are mixed with the rest.


It's certainly true in my environment, maybe not in others though? Apologies!


Yeah, but Prometheus has a web UI where you can run PromQL queries and it'll give you basic graphs back, which is handy for throwing a quick query at it before putting it into something more long-term like a Grafana dashboard or an alerting rule.
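
The same quick sanity check also works from a script against Prometheus's instant-query API before a query graduates to a dashboard or alert; a minimal sketch, where the Prometheus URL and the example metric are assumptions:

```python
# Minimal sketch: sanity-check a PromQL expression via Prometheus's
# instant-query API. The Prometheus URL and the metric name are illustrative.
import requests

PROM_URL = "http://localhost:9090"  # assumed local Prometheus

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'sum by (job) (rate(http_requests_total[5m]))'},
)
resp.raise_for_status()

for sample in resp.json()["data"]["result"]:
    # Each vector sample carries its label set and a [timestamp, value] pair.
    print(sample["metric"].get("job", "<none>"), sample["value"][1])
```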


Wow, I always thought that the Prom UI was just a tacked-on part of Alertmanager or something, because it's so rudimentary. In my experience, everyone just uses Grafana Explore since that's what Grafana was originally purpose-built for and it's crazy easy to set up. Just pull down a container or helm chart or whatever.

Since Grafana built Loki, it wouldn't make sense for them to create a separate querying UI app for Loki when they already have Grafana Explore. Prometheus (and I assume its UI) was created by Google [edit: sorry, created by SoundCloud, inspired by a Google Borg tool] before Grafana became the de facto Prometheus query UI, so it's not really analogous.


Prometheus was created by SoundCloud, not Google, but was inspired by the Google Borgmon tool.


You're in a lot of trouble if your _hooker_ is feeding the scrum!


Gah! Brain, I was thinking about which player usually strikes the ball first (the hooker) rather than who feeds the scrum (the scrum-half). The point still stands, though; I don't think I've seen a straight feed in years, at least in the Six Nations matches.


It is indeed used quite a lot in France, and it's great, but... For extra confusion, my parents' street starts with sequential numbers (1, 2, 3...), _then_ switches to distance-based numbering! So on one side of the road we get houses 43, 45, 47, then suddenly 947, and on the other side 64, 66, 68... then 920.


If you're interested in diving deeper, the History of Aotearoa New Zealand podcast ( https://historyaotearoa.com/ ) is a great resource, very well researched and engaging.



They have 30GW of renewables in the AEMO connection pipeline at the moment, ~7GW greater than the entire continent's coal generation (and all of it will add to current renewables capacity) [1] [2]. Batteries are scaling up to consume grid services revenue and drive out thermal generation [3].

Also of note: "NEM total emissions declined this quarter to the lowest Q2 level on record, of 28.7 million tonnes of carbon dioxide, 6.6 per cent lower than Q2 2022, whilst emissions intensity dropped 4.3 per cent to 0.61 tCO2-e/MWh."

"“Rooftop solar generation increased 30 per cent from Q2 2022, which reduced electricity demand from the grid. Coupled with higher renewable output, wholesale prices were zero or negative nine per cent of the quarter throughout the NEM, a new Q2 record,” Ms Mouchaileh said." This bears repeating: 9 percent of the time, the wholesale cost of power is zero or negative.

[1] https://www.energymagazine.com.au/aemo-quarterly-energy-repo...

[2] https://opennem.org.au/facilities/au/?status=operating

[3] https://www.bloomberg.com/news/features/2023-04-04/how-tesla... | https://archive.is/egMXl


> They have 30GW of renewables in the AEMO connection pipeline at the moment, ~7GW greater than the entire continent's coal generation (and all of it will add to current renewables capacity) [1] [2]. Batteries are scaling up to consume grid services revenue and drive out thermal generation [3].

Yet they are still digging coal out of the ground - for export. Australia exported USD 75 billion worth of coal in 2022.


If they discovered a new source of coal, as large, clean and easy to mine as their best ever existing coal seams, they'd be financially better off putting solar PV over the top of it than mining the coal seam. So that problem should solve itself if they let the market do its thing and don't tilt it to help out coal mine owners.


We should try to keep it that way.


Service mesh. Istio, for example, injects an Envoy sidecar into each pod and manages everything "meshy" (routing, retries, mTLS, etc.) via this sidecar.

This is also how Linkerd works, though it uses its own purpose-built sidecar rather than Envoy.
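
As a rough sketch of what that looks like from the Kubernetes side (using the official Python client; the namespace and pod names are made up, and this assumes Istio is already installed with automatic injection via the standard `istio-injection` namespace label):

```python
# Rough sketch of sidecar injection from the Kubernetes API's point of view,
# using the official Python client. Namespace/pod names are illustrative and
# Istio is assumed to be installed with its mutating admission webhook.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Opting a namespace into automatic injection: Istio's webhook watches for
# this label and adds the Envoy sidecar to pods created afterwards.
v1.patch_namespace(
    "my-app",  # illustrative namespace
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)

# A pod created in that namespace then carries an extra container next to
# the application one ("istio-proxy" for Istio, "linkerd-proxy" for Linkerd).
pod = v1.read_namespaced_pod("workload-foo-1234", "my-app")
print([c.name for c in pod.spec.containers])
```

(Linkerd's equivalent trigger is the `linkerd.io/inject: enabled` annotation rather than a label.)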


Isn't the new ambient mesh, available in Istio 1.18, sidecar-less?


Yep, it's also alpha, under intense development, and by every account (including those of the vendors who are chomping at the bit to start selling it to customers) absolutely not production-ready.


Oh, good to know. I was about to pack a spike into an upcoming sprint.


It seems to be exchange-specific; I assume the main reason for this article is:

> For example, on the New York Stock Exchange (NYSE), if a security's price closed below $1.00 for 30 consecutive trading days, that exchange would initiate the delisting process.

It can be avoided via a "reverse stock split", which seems to be easy enough to get approved (no owners of the stock want to be delisted!). For example, a 1-for-10 reverse split turns ten $0.50 shares into one $5.00 share, lifting the price back above the threshold without changing anyone's stake. So nothing hugely dangerous... but it's not a good sign for your company's general health, I guess.


Nitpick, but I don't think this is necessarily correct - if the end client has _no_ UK presence, then the "old rules" apply for IR35, i.e. the contractor is responsible for determining their own status.

If the end client has some UK presence (a branch, etc), then yes the "new rules" apply and the end client is responsible for determination.

