I'm assuming you and the son of the other commenter are US citizens? Quite frankly, the way the US operates on most things absolutely baffles me. In the UK, were the same thing to happen to you, the police would be paying the bill, but obviously we have the NHS, so it actually pays. The NHS might be broken, but I am thankful every time I hear an American health story!
Well, the NHS might be better than what the US has, but that doesn't mean it's good.
So Britain as a whole spends roughly half of what the US spends on healthcare (as a percentage of GDP). That includes both public and private expenditure. At similar health outcomes.
Now Singapore spends roughly half of what Britain spends (in terms of percentage of GDP), and our population is no worse off for it.
You need to look at PPP-adjusted per-capita stats, and also accept that there are limitations in a simple measure of "health outcomes" (e.g. average life expectancy).
What you can see though, is a cluster that vaguely fits "spend more, better life expectancy", with two outliers:
1. The USA, massively outspending every other country, but with the same life expectancy as China, which spends a tenth as much
2. South Africa, spending roughly as much as Mexico or Colombia, but with roughly 10 years' lower life expectancy. I suspect it needs more targeted spending on its HIV crisis, rather than being measured by average spend vs average life expectancy
Another thing to consider about South Africa is that wealth inequality is insanely high; it seems very plausible to me that most money is spent on, say, the richest 30% of the population, and the majority of the country is basically on Namibian levels of care.
Public healthcare expenditure is also likely to be wasteful; governmental corruption and languishing infrastructure are comparatively big problems there (compare the power infrastructure, rail network, postal service), so the pure dollar value spent on healthcare is systematically off.
Thanks for the link, btw. I would not have expected such a clear trend in this, especially given how noisy metrics like life expectancy are; very interesting.
You can also get cheap dental work and haircuts in Mexico: what's Singapore's purchasing power parity compared to the UK? Maybe not an extreme difference, since Singapore has such a high GDP (or it could even be worse), but if it is higher, that can explain a lot, even for advanced services.
Purchasing power parity depends a lot on what basket you compare.
E.g. owning a car or buying cigarettes is very expensive here. But eating out starts much cheaper than in the UK. (There's no upper limit in either place, of course.)
The GDP per capita in Singapore is roughly double that of the UK, so health spending per capita is similar. I can't find the latest figures, but it seems the UK's spend is somewhere around 20% higher.
On the other hand, there are EU countries such as Lithuania and Estonia that spend less than half per capita of what Singapore spends, and are ranked with a higher healthcare index.
Depends on how you measure. Lots of health spending is labour costs, and our labour costs are higher. So percentage of GDP relatively closely tracks percentage of total working time.
Also perhaps our more enlightened policies are helping us achieve that higher per capita GDP?
> Also perhaps our more enlightened policies are helping us achieve that higher per capita GDP?
Singapore's GDP per capita is likely fairly inflated, as it doesn't correct for the effect of multinational tax planning by large corporations on the GDP statistics, unlike, say, Ireland.
Well, I'm not measuring anything. I'm just saying that the spend per capita is roughly equivalent for Singapore and the UK.
No idea why you are equating enlightenment with per-capita GDP; I don't quite understand that equation. Singapore may have a high per-capita GDP, but that isn't resulting in higher median individual incomes. Given the extremely high cost of living, the purchasing power of an average Singaporean is actually comparable to that of someone in the less affluent EU countries that have 1/4 of the GDP per capita and equivalent (or better!) healthcare. So while the GDP figure looks impressive, it doesn't fully reflect the financial reality for most residents. Is that your "enlightened policies" at work?
Hmm... there are other countries on that list... that spend more relative to their GDP. So not exactly "the world's ATM".
And of course the rest of the world finances the US economy and US debt by virtue of the US dollar being both the currency of international trade and the reserve currency. And it is the reserve currency by virtue of being the currency of international trade.
That is a far, far greater monetary value than the aid given out.
Which you can also tell by what happens to you if you start to use another currency for trade. "Would you like some regime change to go with that?" Or how the US fights the Euro tooth and nail, including sabotage.
The emulator these days just uses an x86_64 image, and with even not-that-modern VM hardware extensions the overhead is pretty small.
So I'd expect CPU performance to be very similar if not basically identical.
GPU performance would be the bigger question, and the emulator has put a lot of work in that direction. Whether this is better or worse than that would be the biggest difference performance-wise.
Waydroid runs Android services and applications natively on your machine in a container, with apps launching in windows side by side with your other applications, whereas the Android emulator runs a VM with regular smartphone-esque hardware.
The emulator is just KVM. Yes, VMs vs. containers is a difference, just like it is in endless cloud discussions, but it's not particularly big by any objective metric.
This is like suggesting that running applications in a Windows VM and running them under WINE isn't a big difference.
(One might say "but it's the same OS!" - but it's only really a similar kernel, and even that is heavily customized and not at all the same as what your laptop runs. The graphical stack is also nothing like either X or Wayland.)
Also, rant: it's not KVM, but QEMU. KVM is a small API to deal with virtualization instructions, but while that makes VMs much faster than without (similar to how floating point is much faster with hardware support), it does very little on its own. The virtualization instructions are basically a way to let QEMU run code as is but intercept everything important - syscalls, faults, you name it - whereas without, it would have to instrument or outright interpret the executed program.
It's QEMU that micro-manages setup, execution and importantly is responsible for emulating all the VM devices - KVM just exposes memory mapped into the virtual CPU's memory space.
> This is like suggesting that running applications in a Windows VM and running them under WINE isn't a big difference.
False equivalence. WINE reimplements Windows libraries and higher level APIs to convert to Linux-native ASAP.
Waydroid/containers operate at a kernel level. The entire userspace is "virtualized".
> One might say "but it's the same OS!" - but it's only really a similar kernel, and even that is heavily customized and not at all the same as your laptop runs.
Exactly my point. The only difference is whether or not the kernel is shared, which is a very, very small fraction of the OS.
WINE reimplements windows libraries and higher level APIs to convert them to something compatible with your host kernel, providing as full a glue Windows user-space as needed to support Windows user-space applications integrating into your host system. This includes a full, Windows-centric emulated filesystem hierarchy.
Waydroid reimplements Android libraries and higher level APIs to convert them to something compatible with your host kernel, providing as full a glue Android user-space as needed to support Android user-space applications integrating into your host system (such as by providing a Wayland-backed implementation of SurfaceFlinger or HWComposer). This includes a full, Android-centric emulated filesystem hierarchy.
They are equivalent, even if their targets are different. And most importantly, they are both the polar opposite of a VM setup, as the user-space is not the user-space of the VM, but a dedicated user-space that presents Windows and Android behaviors respectively, implementing them in terms of your host system's window management, input management, audio management, etc.
(That Waydroid relies heavily on namespaces is a technical detail, not a relevant difference between WINE and Waydroid. WINE, if implemented again today, would likely do the same.)
> Exactly my point. The only difference is whether or not the kernel is shared, which is a very, very small fraction of the OS.
No, the main difference is whether the user-space is deeply integrated through various glue, or whether you're emulating the whole system as-is, with its original user-space and kernel, with just a virtual screen to interact with.
> Waydroid reimplements Android libraries and higher level APIs to convert them to something compatible with your host kernel,
This is simply incorrect, though, negating the entirety of the rest of your argument.
> No, the main difference is whether the user-space is deeply integrated through various glue, or whether you're emulating the whole system as-is
Okay, sure, but then again Waydroid and a VM are kissing cousins going by that definition anyway. Both emulate the whole system; the only difference is whether or not the kernel is replaced.
> That waydroid relies heavily on namespaces is a technical detail, not a relevant difference between WINE and Waydroid. WINE if implemented again today would likely do the same.
No, it wouldn't. WINE takes the approach it does to avoid redistributing any Microsoft binaries. Simply containerizing and emulating the kernel API would be useless for WINE as a result.
Never counted, but I can touch type accurately until I become conscious that I'm doing it, at which point I get in my own way. Generally I start to appreciate how well I'm doing, at which point my 7-year-old becomes better than me.
This is all well and good, but unless you have networking experience and know what makes a good router, you're still stuck.
What router should I be using in place of the ISP one? Can I trust its manufacturer? How can I make sure it definitely is a one-to-one replacement and I don't need to use my ISP router as a bridge?
One cannot trust manufacturers; it's common practice to put backdoors in them. That's why you simply get an OpenWRT-compatible router and flash it.
It does not require networking experience, just a bit of curiosity and following a bunch of well documented steps.
Knowledge is required to build a decent setup, though. It doesn't end there for a proper environment: you also want a VPN, which can be configured at the router level. Oh, and what about an ad blocker, perhaps blacklisting all known ad-serving hosts?
Given the time we spend hooked online, it's worth gaining what really is vital knowledge for decent internet access, or the internet will gain most of your precious attention.
Considering the audience: get whatever x86 hardware (ARM if you have a more enthusiastic vibe and don't mind some independent research), install your Linux/BSD distro of choice (it doesn't need to be a "router" distro if you're already handy with some other base system; setting up from vanilla can be easier than getting into the idiosyncrasies of OpenWrt/pfSense/etc.) and configure it yourself. It will be valuable and useful even if your ISP requires their own gateway in the middle. Get two of whatever it is so you have a spare/staging box ready if it becomes necessary later.
Intel NICs are generally preferred over Realtek if available.
For any site not offering an RSS feed, I have an alias email address (similar to: use.rss.idiot@myemails.com) used solely for newsletters, with a rule to move any emails to that address into a newsletter folder.
My only experience of C#/.NET programming is by way of PowerShell. But the comments here make me think it would be a good idea to invest some time in learning C# beyond what it can provide over Add-Type in PowerShell. I've mostly avoided it because I thought it was Windows-only; I've learned Dart and Flutter for cross-platform programming. Can C# run on mobile?
Yes. MonoGame (written in C#) for example supports iOS, iPadOS, Android, macOS, Linux, PlayStation 4, PlayStation 5, PlayStation Vita, Xbox One, Xbox Series X/S and Nintendo Switch.
Can someone ELI5 or point me to an idiot's guide as to why macros are desirable? As a non-professional programmer who works on small side projects, I'm struggling to understand what benefit they bring.
Generally meta features like macros are _far_ more useful to library authors that need to do some sort of "reflection" on your application code.
Serialization libraries are frequently the largest beneficiary.
In my ideal world I don't need to write macros or use language meta/reflection features while (only) writing code to solve business problems, but to get to that ideal, the libs you use often do need reflection capabilities.
I'd argue that it's entirely reasonable for the average application/service developer not to appreciate the usefulness of macros. Not everyone needs to be a foundational library author; honestly, we probably need fewer (looking at you, npm/cargo micropackages).
There's also the Lisp philosophy which emphasizes that you are your own libraries' author, too! Third-party libraries can only help you with common business code - anything specific to your domain, or your business, you have to model yourself. Using macros is fundamentally not different than using classes or functions to create an environment[0] where expressing your business logic is straightforward, and invalid states are not representable. Metaprogramming just lets you go further in this direction than "traditional" tools languages offer.
--
[0] - The correct term here is Domain-Specific Language, but by now it's badly overloaded and people have knee-jerk reactions to it...
Your language doesn't have pattern matching? It doesn't have Python's "with" or do...while? You want to generate an enum from a CSV file? Want to add a useful abstraction specific to your domain? How do you implement short-circuiting yourself (i.e. `A && B` only evaluating B when A succeeds) without thunking?
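For a concrete taste, here is a minimal sketch in Scheme of how short-circuiting `and` can be built as a macro (`my-and` is a made-up name, not a standard form):

    ;; A function version would evaluate all arguments before the call;
    ;; the macro rewrites the call site instead, so `b ...` is only
    ;; reached when `a` is true.
    (define-syntax my-and
      (syntax-rules ()
        ((_)         #t)
        ((_ a)       a)
        ((_ a b ...) (if a (my-and b ...) #f))))

    ;; (my-and #f (error "never evaluated"))  =>  #f, no error raised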
One of the main use-cases for compile-time metaprogramming (like macros) has been to write performant code that does not type-check correctly in a typed language. Library writers encounter this issue frequently; e.g. the C++ standard library is heavily based on template metaprogramming. One example of code we want to write in a generic way is map and flatMap operations, which you can define on lists, binary trees, hashmaps, rose trees and many other container-like data structures. But many type systems do not let you write the map and flatMap abstraction in a type-safe way once and for all. In dynamically typed languages, there is no such issue.
Some modern languages (Haskell, Scala) overcome this lack of expressivity for library writers with higher-kinded types and principled support for ad-hoc polymorphism (e.g. typeclasses), thus reducing the need for metaprogramming. Notably, Haskell and Scala also have unusually principled support for metaprogramming.
As a heuristic, I would suggest that using metaprogramming for small or medium-sized normal ("business") code is a sign that something may be suboptimal, and it might be worth considering a different approach (either to the implementation of the business logic or the choice of programming language).
Macros are capable of much the same thing as reflection, but without the reflection part. Anything you would use an annotation/decorator/attribute/etc. for can be done as a macro instead, eliminating the runtime performance impact and the vector for action-at-a-distance. They also remove internal copy-pasting/templating when you just can't quite replace it with an interface.
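As a sketch of that idea in Scheme (the names here are mine, not from any particular library): where a reflective language would attach a decorator and inspect the function at runtime, a macro rewrites the definition at compile time, so the expansion is ordinary code with no runtime metadata involved:

    ;; Wraps a function definition with call logging at expansion time.
    (define-syntax define/traced
      (syntax-rules ()
        ((_ (name arg ...) body ...)
         (define (name arg ...)
           (display (list 'calling 'name (list arg ...)))
           (newline)
           body ...))))

    (define/traced (add2 x y) (+ x y))
    (add2 1 2)  ; prints (calling add2 (1 2)), returns 3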
lexing -> parsing/macro expansion -> compilation of core language
The macro expansion phase removes all uses of macros and produces a program in the core language (which has no macros).
The idea is that a user can extend the language with new features. As long as a program that uses the new feature can be transformed into an equivalent program that doesn't use the feature, it can be implemented with a macro transformation.
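A tiny hypothetical example (assuming a Scheme-like macro system): `my-unless` is not part of the core language, but every use of it can be rewritten into one that only uses `if`:

    (define-syntax my-unless
      (syntax-rules ()
        ((_ test body ...) (if test #f (begin body ...)))))

    ;; (my-unless done? (retry))  is transformed at compile time into
    ;; (if done? #f (begin (retry)))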
Macros are needed when the user wants a construct that:
- uses a non-standard evaluation order
- introduces new bindings
- removes boilerplate (not covered by functions)
- analyzes program elements at compile time
Non-standard evaluation order
What the programmer can use macros for depends on how powerful the macro system is.
Reasonable macro systems can be used to implement pattern matching (non-standard evaluation order) and object systems. Implementing, say, pattern matching as a macro has the advantage that it can be done within the language without changing the compiler. General constructs such as pattern matching are usually provided by the standard library of a language - but users are free to experiment with their own versions if they have special needs.
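Here is a toy sketch of the shape (my own illustration, not a real pattern-matching library): each clause's body must only be evaluated when its pattern applies, which a plain function cannot promise, since function arguments are evaluated up front:

    (define-syntax match-list
      (syntax-rules (empty cons)
        ((_ e (empty empty-body) (cons h t cons-body))
         (let ((v e))
           (if (null? v)
               empty-body
               (let ((h (car v)) (t (cdr v)))
                 cons-body))))))

    ;; (match-list '(1 2 3)
    ;;   (empty 'nothing)
    ;;   (cons x xs (cons (* 2 x) xs)))  =>  '(2 2 3)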
Removing boilerplate
In my 6502 emulator I have two macros `define-register` and `define-flags`.
This allows me to define the registers of the CPU as:
(define-register A) ; accumulator ( 8 bit)
(define-register X) ; index register ( 8 bit)
(define-register Y) ; index register ( 8 bit)
(define-register SP) ; stack pointer ( 8 bit)
(define-register PC) ; program counter (16 bit)
And the individual flags of the status register are defined as:
(define-flags
  (C 0 carry)        ; contains the result affecting the flag (set if result < #x100)
  (Z 1 zero)         ; contains the last byte affecting the flag
  (I 2 interrupt)    ; boolean
  (D 3 decimal-mode) ; boolean
  (B 4 break)        ; boolean
  (U 5 unused)       ; true
  (V 6 overflow)     ; 0 or 1
  (S 7 sign))        ; contains the last byte affecting the flag
Note that macro expansion happens at compile time. Thus there is no overhead at runtime.
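For the curious, here is my guess at the shape such a macro might take in Racket - the emulator's actual definition may well differ. The point is that it derives accessor names from the register name at compile time, which a plain function cannot do, since it needs the identifier itself:

    #lang racket
    (require (for-syntax racket/syntax))

    (define-syntax (define-register stx)
      (syntax-case stx ()
        ((_ name)
         (with-syntax ((getter (format-id #'name "~a-ref" #'name))
                       (setter (format-id #'name "~a-set!" #'name)))
           #'(begin
               (define reg 0)  ; fresh per expansion, thanks to hygiene
               (define (getter) reg)
               ;; 8-bit registers only, for brevity; PC would mask with #xFFFF
               (define (setter v) (set! reg (bitwise-and v #xFF))))))))

    (define-register A)
    (A-set! 300)
    (A-ref)  ; => 44, masked to 8 bits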
Notes
The convention in languages with macros is to use them only if functions can't do the job.
Since macros can follow non-standard evaluation order, they need to be documented carefully.
Over time one finds the balance of when to use them and when not to.
[In the beginning everything looks like a nail.]
Language exploration
An often overlooked positive effect of macros is that the entire community can experiment with new language features.
It's no longer just the "compiler team" that has the ability to add new features. This means that iteration on language design happens faster.
> An often overlooked positive effect of macros is that the entire community can experiment with new language features.
Sure, that sounds positive - but with enough macros, you can turn "your" version of the language into something completely unrecognizable to people who are only familiar with the "basic" version (or, put differently: congratulations, you've got yourself a DSL!), and I would say that's a rather negative effect...
Texinfo is a macro package on top of plain TeX that rewrites the parser to a completely new syntax (`@` commands instead of `\`), whilst maintaining the ability to import and interoperate with other TeX code. It is extensively used, notably for the Emacs documentation.