In Canada, that's the legal definition of engineering. You may not call yourself an engineer without accreditation and such accreditation will be rescinded if you make severe enough engineering mistakes.
I don't disagree with you, but there are a LOT of people in Canada calling themselves software engineers or network engineers who don't have a degree that qualifies them to wear the iron ring.
That's about being a Professional Engineer, which legally entitles you to certain actions (e.g. sign off on design docs). Anyone can call themselves engineers of any kind as long as they don't pretend to be P.Engs.
I am not sure though that being P.Engs in software means anything in practice, unlike in e.g. structural engineering.
The whole point of insurance is to distribute risk. At some point, if coverage granularity becomes too high, insurance will just become pointless, right?
I think it's more subtle than that. It's also about symmetry of information.
If, for example, you do 23andMe, you might find that you have a much increased likelihood of soon developing a fatal disease. Thinking about your family's financial future, you might decide to get life insurance because of this new information. Your individual risk in this case is very different from the average; you know this, and that's why you are getting the insurance.
If this scenario became common, it would break the insurance model: the insured pool would no longer contain people of average risk but would skew toward higher risk. That would raise costs for everyone insured. In an extreme scenario the cost could climb so high that life insurance would only make sense for people who know they are likely to die soon. The likely outcome is less extreme, of course, but this shows the principle nicely.
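The adverse-selection spiral described above can be sketched numerically. All the numbers here are made up for illustration: two risk groups, a premium set to the pool's average expected loss, and members who drop out once the premium exceeds their own expected loss.

```rust
// Toy adverse-selection spiral (all numbers are invented assumptions).
// Each pool entry is (probability of loss, number of members).
fn spiral(mut pool: Vec<(f64, u32)>, payout: f64) -> f64 {
    loop {
        let total: u32 = pool.iter().map(|&(_, n)| n).sum();
        // Premium = pool's average expected loss per member.
        let premium = pool
            .iter()
            .map(|&(p, n)| p * payout * n as f64)
            .sum::<f64>()
            / total as f64;
        let before = pool.clone();
        // Anyone whose own expected loss is below the premium leaves.
        pool.retain(|&(p, _)| p * payout >= premium);
        if pool == before {
            return premium; // stable: everyone remaining can justify the price
        }
    }
}

fn main() {
    // 900 low-risk (1%) and 100 high-risk (20%) people, payout of 100.
    let end = spiral(vec![(0.01, 900), (0.20, 100)], 100.0);
    // The premium starts at the mixed average (2.90), the low-risk
    // members leave, and it ends at the pure high-risk rate.
    println!("final premium: {end:.2}"); // → 20.00
}
```

The one-step collapse here is the extreme version; with more risk tiers the same loop prices out one tier after another, which is the gradual spiral the comment describes.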
I don't think so, no. If I know for a fact that someone has a 40% chance of dying tomorrow, it still makes sense for me to sell them insurance valid tomorrow only, charging 40% of payout plus some margin. It also probably makes sense for them to buy it, to guarantee the higher payout for their family if they die.
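The arithmetic in that argument is just expected value plus margin. A minimal sketch, with an invented payout and margin:

```rust
// Pricing a policy when the insurer knows the buyer's risk.
// The payout and margin figures are invented for illustration.
fn premium(p_loss: f64, payout: f64, margin: f64) -> f64 {
    // Expected payout plus the insurer's margin: viable for the seller
    // even when the probability of loss is known to be high.
    p_loss * payout + margin
}

fn main() {
    let price = premium(0.40, 100_000.0, 1_000.0);
    println!("premium: {price}"); // 0.40 * 100_000 + 1_000 = 41000
}
```

The point being that known high risk doesn't make insurance impossible, it just makes it expensive; what breaks pooling is the *buyer* knowing the risk and the insurer not.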
The fact that this relies on overloading an operator for every enum makes this a complete non-starter as is in any professional codebase unfortunately. That can be fixed with a trait type though.
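For context on the trait-based fix: instead of overloading the operator separately on every enum, a small trait plus one generic wrapper can carry the operator for all of them. A sketch in Rust with invented names (`Flag`, `FlagSet`, and `WindowFlag` are illustrative, not from the project being discussed):

```rust
use std::ops::BitOr;

// One trait exposes each flag enum's raw bits...
trait Flag: Copy {
    fn bits(self) -> u32;
}

// ...and one generic wrapper implements `|` once for all of them.
// (A direct blanket `impl<F: Flag> BitOr for F` is ruled out by
// Rust's coherence rules, hence the local wrapper type.)
#[derive(Clone, Copy, PartialEq, Debug)]
struct FlagSet(u32);

impl<F: Flag> From<F> for FlagSet {
    fn from(f: F) -> Self {
        FlagSet(f.bits())
    }
}

impl<F: Flag> BitOr<F> for FlagSet {
    type Output = FlagSet;
    fn bitor(self, rhs: F) -> FlagSet {
        FlagSet(self.0 | rhs.bits())
    }
}

// Each new enum only implements `Flag`; no per-enum operator impls.
#[derive(Clone, Copy)]
enum WindowFlag {
    Resizable,
    Fullscreen,
}

impl Flag for WindowFlag {
    fn bits(self) -> u32 {
        match self {
            WindowFlag::Resizable => 1,
            WindowFlag::Fullscreen => 2,
        }
    }
}

fn main() {
    let flags = FlagSet::from(WindowFlag::Resizable) | WindowFlag::Fullscreen;
    assert_eq!(flags, FlagSet(3));
}
```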
OpenGL is "learnable" by someone in the process of learning. Vulkan and Metal are much less approachable. This will put a huge damper on low-level graphics programming as a hobby.
I would disagree with that, especially with regards to Metal. It's a very approachable and well-designed API. It might not have the volume of resources that OpenGL does, but the docs themselves are good, and I have seen plenty of intro-level tutorials that are decent enough. Debuggability is also much better than OpenGL, which I think is important for newcomers. Debugging OpenGL issues is very, very painful, especially with macOS's lack of debug extensions. Metal is described as "low-level", but it's not quite at the level of Vulkan -- things are simpler and more "streamlined".
There's also the problem that a large chunk of OpenGL learning materials out there are hopelessly outdated, and IMO actively detrimental to learning modern graphics techniques. Judging from the types of questions I see around various forums, it seems to be VERY hard for newcomers to distinguish between "bad" and "good" OpenGL tutorials. In general, there's too much cruft for learners to focus on the stuff that is actually part of "good OpenGL".
> OpenGL is "learnable" by someone in the process of learning.
If this is the case, it's only because there is much more material and tutorials written. Not because OpenGL is simpler or better.
I know because I watch noobs stumble with OpenGL all the time over at ##opengl on freenode. It usually takes them a week or two to get a triangle on screen, and they're super confused about semi-opaque concepts such as "vertex array objects" (they're well documented in the OpenGL wiki, but reading documentation seems to be out of fashion).
It would certainly help (them) if they had good knowledge of 3D graphics in general before stumbling into OpenGL. But if they had, they'd be able to do it using Vulkan or Metal with no great difficulty. OpenGL isn't at all better here.
I believe (hope) that OpenGL continues to be that developer-friendly API. Building OpenGL on top of Vulkan shouldn't be too hard, and it means we don't have to pointlessly deprecate and recreate the huge number of OpenGL resources out there.
OpenGL isn't friendly to developers on either side. Tutorials are generally not trustworthy, there's no cross platform debugging tools, and errors just get you a "something happened vOv" code.
If you care about performance it will just do something slow at uncontrollable points in the background, like copy CPU-GPU-CPU memory, synchronize against the GPU, do format conversions, etc.
If you're the one implementing GL, it's gotten gigantic again since they simplified it. GL 4.3/4.6 core has compute shaders, which means you have to implement OpenCL twice but different this time.
Right, but it still involves several gigabytes of largely unused functionality to get a “hello world” and it pigeonholes you into a specific ecosystem. Unity is an entirely different offering from a graphics api.