renonce's comments

In Chrome you can F12 and go to "Network" tab and then refresh the page. Choose the first file in the list (that's the HTML itself) and you will find "Response Headers" in the "Headers" panel, which includes Last-Modified. It's a bit deep, which makes sense as it's rarely useful.
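If you just want that header without digging through DevTools, a one-liner in the console works too (assuming the server actually sends Last-Modified for the page):

    fetch(location.href, { method: 'HEAD' })
      .then(r => console.log(r.headers.get('last-modified')));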


> That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program.

I’m not sure how an alarm for “high white cell count” could have had so much impact. Here in China, once the doctor prescribes a finger blood test, we give a finger-prick sample after lining up for 15 minutes, and the result is available within 30 minutes. The patient prints the results from a kiosk, and any patient who cares about their own health will see the exceptionally high white cell count and request an urgent appointment with the doctor for diagnosis right away. Even in routine cases the doctor usually sees the report within two hours. Why wait several hours?

> While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand.

> But in health care, he stressed, these tools have immense potential to combat the staff shortages plaguing Canada's health-care system by supplementing traditional bedside care.

It sounds like the deaths this tech prevents are caused by delays and staff shortages, and what the tech actually does is prioritize the patients with serious issues. While I appreciate using new tools to cut deaths, isn't the elephant in the room the staff shortage itself?


> I can't log in to stackoverflow.com, then go to superuser.com and already be logged in.

I would expect a popup like “This site wants to share cookies with stackexchange.com; press Allow to sign in, press Reject to refuse forever, or press Ignore to decide later”. A single click and you enjoy the benefits of both worlds. The mechanism should ensure that every website has a single “first-party domain” shared across all its subsites, and that this first-party domain never shares cookies with any site other than itself, to minimize confusion.


Instead you will get a "we and our 3789 partners value your privacy", and people will blame GDPR/whatever regulation for it.


And that would be annoying to people who aren't already logged in to a related site.

Also, there is no way to know which related site the user is logged in to, so they would have to prompt for every one of their sites.


> Also, there is no way to know which related site the user is logged in to, so they would have to prompt for every one of their sites.

This is not how it works. The mechanism is about letting a cluster of websites choose a single first-party domain and share cookies among themselves, not about sharing arbitrary cookies across arbitrary domains; otherwise chained grants would form connected components that bring back the downsides of third-party cookies. What you mention should be done with SSO.

After thinking about it a bit more, I have a clearer picture of how it should work in my mind:

* All cookies are double-keyed: the primary key is the origin of the top-level page and the secondary key is the origin of the page that sets the cookie, just like how partitioned cookies work right now.

* stackoverflow.com uses a header, meta tag or script to request changing its primary key domain to “stackexchange.com”

* The browser makes a request to https://stackexchange.com/domains.txt and makes sure that “stackoverflow.com” is in the list, authorising this first-party domain change (see the sketch after this list)

* When the user agrees to the change, the page is reloaded with stackexchange.com as the primary key, thus stackoverflow.com can obtain login details from stackexchange.com via CORS or cross-site cookies.

* A side effect is that all cookies and state are lost when switching the first-party domain. Should stackoverflow.com be acquired by a new owner, say x.com, and change its first-party domain to x.com, all cookies on stackoverflow.com are lost and the user has to log in on x.com again, perhaps using credentials from stackexchange.com. It’s unfortunate, but it works around the issues mentioned in the post in a clean way, avoiding loopholes that transfer cookies by switching the first-party domain frequently.
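A minimal sketch of the browser-side authorisation step described above. The domains.txt name and its one-hostname-per-line format are inventions of this comment, not any real spec:

    // Returns true if `target` (e.g. stackexchange.com) authorises
    // `requesting` (e.g. stackoverflow.com) to adopt it as its first party.
    async function verifyFirstPartyChange(
      requesting: string,
      target: string,
    ): Promise<boolean> {
      const res = await fetch(`https://${target}/domains.txt`);
      if (!res.ok) return false; // no list published, no change allowed
      const allowed = (await res.text())
        .split('\n')
        .map(s => s.trim())
        .filter(Boolean);
      return allowed.includes(requesting);
    }

    // verifyFirstPartyChange('stackoverflow.com', 'stackexchange.com')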


Looks like a perfect use case for LLMs: generate that JSON-LD metadata from HTML via an LLM, either on the website owner's side or on the crawler's side. If crawlers do it, website owners don't need to do anything to enter the Semantic Web, and each crawler can specify whatever metadata format it wants to extract. This promises an appealing future for Web 3.0: defined not by crypto or hand-written metadata, but by LLMs.
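The crawler-side version could be as small as this sketch, where llmComplete() is a hypothetical stand-in for whatever model API the crawler actually uses:

    // Hypothetical LLM call; replace the body with a real model provider's API.
    async function llmComplete(prompt: string): Promise<string> {
      throw new Error('wire up a model API here');
    }

    async function extractJsonLd(html: string): Promise<unknown> {
      const prompt =
        'Extract schema.org JSON-LD metadata from the following HTML. ' +
        'Respond with a single JSON object and nothing else.\n\n' + html;
      return JSON.parse(await llmComplete(prompt));
    }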


> The settlement, announced Tuesday, does not act as an admission of guilt and Meta maintains no wrongdoing.

> In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton’s office, the feature was turned on by default and ran facial recognition on users’ photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people’s individual facial recognition data.

> The 2022 lawsuit

> We are pleased to resolve this matter, and look forward to exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers

Each statement makes it harder to view this as a fine rather than a tax. An offence that lasted 11 years and got prosecuted only a year after it ended can hardly be explained as anything other than an excuse dug out of the ground to extract a ransom.


> This whole HN post could just be bots all the way down and it'd still be an interesting read through the comments.

The assumption is that the comments are purely a function of the post and the publicly available information. Real-world comments can disclose private information (my Google account got banned), make real impact (the S*e/C*e support site), or connect you to celebrities (I'm Karpathy, ask me anything). And even if the assumption holds and the site is a sample from a probability distribution, that particular sample can be referenced on other sites, so it makes sense to check what everyone is viewing right now.


I feel many security researchers like to overemphasize certain security practices (the most common being "longer, random passwords with symbols and upper-case letters") without considering their costs, the trouble they cause, and humans' lazy nature. Forcing long passwords pushes people toward repetitive or easy-to-remember words, and enforcing Secure Boot doesn't work if it gets in the way of normal boots. Making these security mechanisms "just work" is as important as enforcing rules like these.

A natural question is whether Secure Boot is the right place to protect against the type of attack mentioned in the post. Given that we've already invested a lot of effort in fixing kernel privilege escalations, and that any program able to install a BIOS rootkit can already access all data and modify any program, what justifies the extra complexity of Secure Boot (including all the extra design needed to make it meaningful, such as an OS robust to tampering even from kernel privileges)? I mean, why invest so much in Secure Boot when you could harden the kernel to prevent BIOS tampering in the first place?


Real security researchers know that requiring symbols and upper case letters actually reduces security. Those requirements are explicitly rejected by the latest NIST recommendations:

https://pages.nist.gov/800-63-3/sp800-63b.html

So I'm basically agreeing with you, that a lot of people "in security" are just cargo culting.


For me it cannot be justified. A corporate environment might be different though.

Still, as a consumer I reject it for personal use: I believe boot malware is rare, since other forms of attack have been vastly more effective, and I also don't have an evil maid.

I just hope we don't get to a ridiculous situation where my shitty bank panics if I root my phone and wants to extend that behavior to PCs. "Trusted computing" is a failure in my opinion, and "security" on mobile devices is an example where it significantly impacts the usefulness of the devices themselves. This may be driven more by ambitions to lock down phones than by real security, but still.

Secure boot might be useful for devices you administer remotely. But secure boot validation doesn't mean anything to me: the system could be infected without secure boot noticing anything. It probably only gets in the way of OS installations.


The idea is to stop you getting rootkits that can never be removed. You want to feel safe knowing you can just wipe your computer and start again.


Except in very rare cases, you can flash the BIOS while wiping your computer, the same way malware does. Also, Secure Boot doesn't remove the kind of rootkit that survives a storage wipe, since the machine still has to boot from your hard drive anyway.


It’s going to be significantly faster very soon; we have seen how AlphaGo evolved into KataGo, which is orders of magnitude more compute-efficient.


The main difficulty in scaling AlphaProof is finding theorems to train it on. AlphaGo didn't have that problem because it could generate its own data.


So if I understand correctly, this puts x.com under the same entity as twitter.com, so third-party cookies are allowed between x.com and other Twitter sites?


Firefox partitions (and will soon block) third-party cookies by default. That means they can't be used for cross-site tracking, not even across different top-level sites belonging to Twitter / X. The entity configuration for ETP does not change anything here.

The bug we fixed was in ETP, an older mechanism in Firefox that blocks cookie access for known trackers based on a list. Only for that mechanism do we consider a handful of domains belonging to Twitter / X to be the same party. This is so we don't, for example, block their CDN when you're on x.com. We still block them on sites not belonging to Twitter / X. This is especially relevant for ETP strict, which not only blocks cookie and storage access but also blocks third-party loads from known trackers altogether. If we blocked Twitter's CDN on x.com, the site would break.
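To illustrate the same-party exception (a rough sketch of the idea only; the real entity-list format and Firefox internals differ):

    // Assumed entity list shape for illustration; not the actual ETP data.
    const entities: Record<string, string[]> = {
      'Twitter/X': ['twitter.com', 'x.com', 'twimg.com'],
    };

    // Tracker blocking is skipped only when both domains fall under one entity.
    function sameParty(topLevel: string, requested: string): boolean {
      return Object.values(entities).some(
        domains => domains.includes(topLevel) && domains.includes(requested),
      );
    }

    // sameParty('x.com', 'twimg.com')        -> true  (don't block the CDN)
    // sameParty('example.com', 'twimg.com')  -> false (still treated as a tracker)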


This website is indeed hosted by Cloudflare.

