The right operator for unit conversions is "in"; "to" is used for ranges, so it's doing 100 euros _goes to_ 1 US dollar and giving you back the answer in dollars.
But the parser seems to break on the inverse anyway: "100 usd in eur" seems to parse "eur" as "e*ur" and gives "Values must be converted to units."
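To illustrate the distinction, here's a minimal sketch (a hypothetical parser with made-up rates, not the app's actual implementation): "in" is treated as a conversion operator between two unit tokens, so both directions go through the same code path.

```python
# Illustrative rates only; a real app would fetch live data.
RATES_TO_USD = {"usd": 1.0, "eur": 1.08}

def convert(expr: str) -> float:
    """Parse '<amount> <unit> in <unit>' and convert via USD."""
    amount, src, op, dst = expr.lower().split()
    if op != "in":
        raise ValueError("expected '<amount> <unit> in <unit>'")
    # Normalizing through a common base means the inverse direction
    # works with the exact same parsing logic.
    return float(amount) * RATES_TO_USD[src] / RATES_TO_USD[dst]

print(convert("100 eur in usd"))
print(convert("100 usd in eur"))  # inverse parses identically
```

With a single conversion operator like this there's no ambiguity with "to" ranges, and no way for the unit token to be misread as an expression.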
I think you're missing part of the author's point.
Yes, as adults, we are bound to "cause some pain" as you put it in some mundane situations, such as cancelling a plan that someone else has been looking forward to.
But her point is that what matters is expressing how you feel and discussing it with your friend in that scenario:
- Tell them you don't feel like going out after all, maybe you're drained by work and need some time to cool off
- They could answer that it's fine, they don't mind going out alone
- Or maybe they'll propose to just stay in at your place for a quick dinner, just to catch up for a bit and let you rest
- Or they could let you know that they really need to go out with you, as they are going through a rough patch
- At that point you have a better idea of the options you both have, and you can make an informed decision either way, weighing your needs against your friend's.
- etc...
Obviously if that friend is important to you and you've already cancelled 3 times then maybe suck it up a bit. It's all a matter of context.
The point is that you should start by not avoiding that interaction with your friend for silly reasons, and relying on tech/tricks is not going to help for long.
It's not really a good idea to restrict crawlers at the directory level, since it doesn't prevent any of those pages from being indexed. You can get weird behaviour such as Googlebot trying to crawl a page but not being allowed to. If this happens a lot, they are going to penalise you for it, because they don't want a bunch of pages in their index whose content they don't know. If that's a significant percentage of your site, then you are in real trouble.
Much better to use an X-Robots-Tag with noindex, nofollow for pages you don't want publicly indexed.
If you run a neutral site (one that caters to a lot of people, especially in Asia, where Google as a search engine is a hit-or-miss proposition), this might backfire, since other crawlers won't understand X-Robots-Tag. You can mitigate this by knowing which spiders do understand it (Google, Bing and Yandex, AFAIK) and whitelisting them, while still using a Disallow directive for the rest.
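That whitelist approach could be sketched like this in robots.txt (the `/private/` path is hypothetical, and the user-agent tokens are the commonly documented ones):

```
# robots.txt -- hypothetical layout: let the crawlers that understand
# X-Robots-Tag fetch the pages (they'll see the noindex header),
# and disallow everyone else at the directory level.
User-agent: Googlebot
Allow: /private/

User-agent: Bingbot
Allow: /private/

User-agent: Yandex
Allow: /private/

User-agent: *
Disallow: /private/
```

The `X-Robots-Tag: noindex, nofollow` response header would then be set server-side for everything under `/private/`, so the whitelisted crawlers can fetch those pages but keep them out of their indexes.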
It might be true for the fact checking groups described here, but to state that as a general case is incorrect.
> Fact checking is better referred to as "authority enforcement" or "mainstream validation".
Fact checking at its core is about analysing assertions that can be objectively verified.
Fact checking must bring forth strong references/sources or proofs that either support or rebut the initial assertions.
> Here's how you know the fact checkers are actually propagandists: you can't find their name or their credentials, or who they work for/paid them.
If a fact checker does not provide this kind of info, then yes, they do not seem very trustworthy. Although you could still evaluate them based on the sources they use for their fact checking.
But, obviously, it does not have to be the case. Fact checkers can and will provide info about their funding, affiliations, composition, etc...
As you can see, they clearly state that they received funds from Google News to set things up. They also share details about their partners, members, etc.
> Fact checking at its core is about analysing assertions that can be objectively verified.
"Missing context" does not meet this standard of objective verification. Many fact-checking desks and organizations dish out Pinocchios in cases where the "proper context" is political/ideological.
If the proper ideological context is "we want to go to war with Iraq", then the statement "Iraq has WMDs" switches from false to true.
Similarly, "we want to go to war with China" vs "we don't want to go to war with China" switches "the virus is a lab leak" to "the virus is not a lab leak"
There are definitely lineups like that, yes. It's a bit unfair to put all lab leak stuff in the China sabre-rattling camp, but there's plenty like that too.
Havana syndrome is another one. I was accused (on HN) of being an unwitting stooge for Putin for doubting "Havana syndrome", or at least for saying it's also really quite plausible that there's nothing there: no weapon, no attacks. Now the CIA itself says it does not exist. Which is an interesting position to be in, because based on historical precedent you should never trust a word the CIA says, unless they're admitting "we did that"!
>"about analyzing assertions that can be objectively verified."
Knowing what I know about BS-ing corporate PowerPoints and how data can be spun multiple different ways using statistics, objectively verified information is not enough to prove or disprove an assertion. It always comes down to subjective interpretation and meta-analysis of the context behind the facts themselves.
> Yes, because no network issue ever happened in the past with the "greybeards".
These issues, one might think, should have been turned into tomes of extensive information on why they happened and how to avoid them, and become an integral part of showing the ropes to new sysadmins, operations people, you name them. It seems, however, that by and large the knowledge of how the systems actually work gets irretrievably lost once some Kevin retires.
P. S. Would you be just as cavalier if lives were lost as a result of such incidents?
- Wired charging is now a pretty well understood topic, and USB-C is considered a good standard by the vast majority of the industry. So let's make sure that everyone uses this standard, to simplify life for users.
- Wireless charging is still a hot topic with no clear winner, so there is time for the industry to settle on a semi-standard. We'll wait until then to see whether legislation is required.
I was getting started on this year's Advent of Code and, seeing all the ruckus around ChatGPT, decided to try to solve the problems using it.
It has been so much fun that I decided to publish an article about it, with some explanations, examples, ...
Hope you find it interesting =]