Only iPad Pro has M4? Once upon a time during the personal computer revolution of the 1980s, little more than a decade after man walked on the moon, humans had developed sufficient technology that it was possible to compile and run programs on the computers we bought, whether the computer was an Apple (I, II, III, Mac), PC, Commodore, Amiga, or whatever. But these old ways were lost to the mists of time. Is there any hope this ancient technology will be redeveloped for iPad Pro within the next 100 years? Specifically within Q4 of 2124, when Prime will finally offer deliveries to polar Mars colonies? I want to buy an iPad Pro M117 for my great-great-great-great-granddaughter, but only if she can install a C++ 212X compiler on it.
Consider a simpler example from basic math. Is 1/x infinite when x==0? The answer is that 1/x is undefined when x==0. In calculus one can take limits as x "approaches" 0 but x==0 is still undefined. Likewise, the Lorentz length contraction is undefined when traveling at c.
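In symbols (taking the Lorentz factor γ as the quantity that blows up in the relativistic case):

```latex
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty, \qquad
\text{yet } \frac{1}{0} \text{ itself is undefined.}
```

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \longrightarrow \infty
\ \text{as}\ v \to c, \qquad \text{undefined at } v = c.
```

The one-sided limits even disagree in sign, so the two-sided limit does not exist either; only the limiting behavior is defined, never the value at the singular point.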
How were these drawings made? It seems the only accurate way would be to painstakingly excavate soil around the root systems a few tiny clumps at a time so as to record how the root system really is shaped prior to any disturbance. This would mean slowly observing the root system from shallower to deeper levels, then reconstructing the side views seen in the drawings.
Growing the plants in some sort of 2D glass observation vessel in order to observe the roots from the side would cause the roots to grow differently than they would in nature.
This appears to be more limited than what CBMC[1] (the C Bounded Model Checker) can do. CBMC can do function contracts. CBMC can prove memory safety and even the absence of memory leaks for non-trivial code bases that pass pointers all over the place which must eventually be freed. Applying all the annotations to make this happen, though, is something like 10x the work of getting the program running in the first place. CBMC definitely makes C safer than even safe Rust for projects that can invest the time to use it. There is an experimental Rust front end to CBMC called Kani[2] that aims to verify unsafe Rust (thus making unsafe Rust safe), but it is far from the speed and robustness of the C front end.
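As a rough illustration of the flavor (my own minimal sketch, not taken from the CBMC docs: CBMC checks ordinary assert() calls, and with its memory-leak check it must prove every allocation is freed on every path):

```c
#include <assert.h>
#include <stdlib.h>

/* consume() takes ownership of p and must free it on every path --
   the kind of property CBMC's --memory-leak-check can prove. */
static int consume(int *p) {
    int v = *p;
    free(p);
    return v;
}

/* A harness like this is what CBMC explores exhaustively over all
   reachable paths; under a normal compiler it just runs once. */
static int harness(void) {
    int *p = malloc(sizeof *p);
    if (!p) return 1;       /* allocation failure: nothing to leak */
    *p = 42;
    assert(consume(p) == 42);
    return 1;
}
```

Running something like `cbmc harness.c --memory-leak-check --pointer-check` then either proves the assertion and the absence of leaks, or emits a counterexample trace. The annotation burden I mentioned comes from scaling this discipline (contracts, loop invariants) to real code bases.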
An EPA ban can't do anything about natural mineral asbestos that occurs near many residential areas. Floods continue to contaminate residential areas with natural asbestos and have done so for millennia. Recent report from Washington state:
I never said the EPA ban was bad. My point was that natural asbestos sources remain a threat. I was quickly and massively downvoted, though, because readers assumed I was criticizing the EPA when in fact I was not. I will have to be more careful in the future, understanding that many readers don't carefully read what one actually writes, but rather read what they think someone might be implying.
Are you sure that the multiple people who are assuming your implication and downvoting as a result are wrong?
I think it’s quite clear and obvious that your comment’s intention was to downplay the importance of the EPA’s ban and frame it against or redirect it toward the topic of naturally occurring threats.
If that’s NOT what you were doing, then your comment was essentially irrelevant.
Again, my intention wasn't to downplay the importance of the EPA's ban. My intention was to point out that natural sources of pollution remain, as natural sources of pollution are something I have been fascinated with for a long time. For example, certain forests can emit hydrocarbons at unhealthy enough levels that the smog becomes visible in the air, kinda like Los Angeles before catalytic converters. But I certainly think eliminating man-made smog is a great thing, even if tree-made smog remains. And again, I have come to see now that my wording to the effect of "An EPA ban on pollutant X can't prevent natural sources of pollutant X" has been interpreted as a statement logically identical to "EPA bans of pollutant X are pointless because natural sources of X still remain." But those two statements are not, in fact, logically equivalent. And again, I need to better learn that humans frequently suspect implications and insinuations in words that the speaker did not intend.
I still think there’s more suggestiveness in your words than you’re giving them credit for:
It’s in the very first sentence: “An EPA ban can't do anything about natural mineral asbestos that occurs near many residential areas.”
You’re saying an EPA ban can’t do something. That’s a place where you’ve pointed out a weakness of the ban, right in the first sentence. None of the remainder of your comment mentions any redeeming value to this EPA policy, so we have to assume that your main objective is to criticize it. After all, your very first sentence is a criticism of the policy’s effectiveness.
If you recall persuasive writing from grade school, we put our overarching opinion/objective right in the first sentence/paragraph as an introduction and then follow it up with supporting evidence throughout the rest of the piece. That’s exactly how your comment was structured.
It’s not some kind of flaw of human nature to assume that your main objective was to criticize the merit of this EPA policy, because you essentially directly stated that it was.
For many years the XOR sprite cursor was patented, but the patent seems to have expired in 2013. One of us should make the request to the GNOME/KDE/Wayland/DE people and link the expired patent:
EDIT: The linked patent seems to only cover 1 bit inversion, not full color inversion. I can't find the patent for the full color, but recall reading that a patent stopped the adoption of a nice color inverting cursor for Linux that Windows has had for so long. If Windows has had it for as long as you say, then perhaps any relevant patents have expired?
That patent is only for MSB XOR though and seems to be only for hardware. That's not the only way to do it if someone wanted to achieve the result of better visibility.
This video mentions that Tolkien started a sequel to LotR titled The New Shadow but quickly abandoned it after realizing the story would inevitably be about politics and the evil of Men, about the descendants of Aragorn becoming like Denethor, or worse. And Dune is full of politics and of far-future men becoming much more sinister than Denethor. LotR is about good vs. evil, about little people having the power to change the world. In Dune almost everyone is self-interested and Machiavellian; the powerful cruelly use the powerless for their own ends.
That's probably physically impossible for a charged-particle beam (like electrons). It would defocus itself from its own self-interaction, as well as from interaction with intervening magnetic fields.
On this theme: we know several types of natural, astrophysical accelerators of charged particles—but none of those are observed as a localizable source of charged particles, from the perspective of astronomy. We just see secondary photons.
A lot of effort is currently put into tracking missile launches and predicting ballistic trajectories. The goal is to give some warning if a nuclear strike is launched.
It would be obvious if a strike came from an extraterrestrial source, there would be no terrestrial launch detection.
In addition, such an attack is unlikely to succeed. It would take a long time to arrive at Earth, and by the time the result of the attack was known there could be angry Earthlings counterattacking.
> It would be obvious if a strike came from an extraterrestrial source, there would be no terrestrial launch detection.
There would be no launch detected, that is true. That doesn't mean it would be obvious that the attack was extraterrestrial. The alternative hypothesis would be that known terrestrial enemies developed some technique to confuse your sensors, or cloak themselves, or bribe your watchdogs, or pre-position warheads in space, or some similar deception. People would sooner think that their enemies smuggled nukes in overland than that aliens attacked them.
> A lot of effort is currently put into tracking missile launches and predicting ballistic trajectories. The goal is to give some warning if a nuclear strike is launched.
Less advanced nations such as North Korea and Pakistan have nukes. Do you think that their monitoring systems are really that good?
No, but they won't destroy the Earth with what they have. Untold destruction of a couple of other nations, with millions dead and extreme humanitarian and economic catastrophe, sure, but not the extinction of life on Earth.
I don't see why or how step 3 would happen. Even if NK nuked South Korea, and even if the US theoretically responded with a nuclear strike (and that is already really stretching it), why would other superpowers respond with their own nuclear strikes? China would be super unhappy, sure, but would they attack the US with nukes over the destruction of North Korea? I don't see it.
I can see Pakistan and India exchanging nukes, but why would anyone join in that?
NK nukes SK, USA nukes NK, CCP needs to show that they are not weak and nukes something America-adjacent (ex: Guam). I think all bets are off at that point.
There is a contingent of generals in every nuclear capable country which is itching to use them. Historically, it has only been a rational civilian leader who has kept them in check.
There was recently a US President who was talking about nuking a hurricane.[0] Once nuclear weapons are in play, we just need one irrational reactionary leader, in the wrong place, at the wrong time, and Fin.
Yeah that's true, I just worry that things could escalate quickly once there is any exchange of nuclear weapons. It would certainly be destabilizing to a great extent.
My concern is that it would possibly snowball and draw in other actors.
The aliens would have superior tech, so hacking into our systems would be trivial. They could just launch an actual nuke rather than lobbing one in from space.
People assume this and I never understand why. Even if we postulate highly complex realtime computing as a necessity for controlling a superluminal engine, why assume an entire technological history unrelated to ours would make it trivial, or for that matter possible, for even FTL-capable extraterrestrials to compromise earthly systems?
'The thing about aliens is they're alien.' - why assume this only works in one direction?
There's at least one science fiction writer who had similar thoughts and wrote a story [1] in which FTL is pretty much the only technology where the aliens who visit Earth are ahead of us.
There's a good deal of Turtledove in my perspective on this, yeah. His academic study of history lends a perspective few if any other authors in my experience share; my own study of the same subject, though entirely amateur, tends to make his counterfactuals seem more plausible rather than less, by which I infer both they and their consequences in his stories are drawn with scholarly care.
Of course it is not a guarantee. It is possible that the alien invasion fleet arrives, they land, unload their main battle tanks and a passing puppy laps up their whole fleet accidentally.
What we do know is that, by virtue of their being here, they are either very good at faster-than-light travel, or they are good at traveling slow.
If they are good at FTL, what else are they good at? We currently think FTL is impossible. What other things we think impossible do they practice?
If they traveled slow, they must also be good at maintaining their equipment over crazy long timeframes. It also shows that they are the patient sort who plan and execute things on timelines over which our empires crumble. How long have they been with us, then? How much preparation did they do beforehand? Did Pham Nuwen code the Intel Management Engine?
But sure, it is possible that the aliens arrive. They broadcast a TV signal threatening us, but unfortunately the sync is a bit off so basically no one understands it. Then they enter our atmosphere, the high oxygen concentration rusts their equipment, and they all die.
Seeing something occur that we thought was impossible tells us our understanding of physics is incomplete, which we already know. Seeing how it is done would probably tell us more, but we haven't. Until we do, we're guessing, and to assume FTL mastery confers godlike powers is as much an assumption as any other. Turtledove addresses this in the story that a sibling commenter mentions; I can also recommend that story, which rewards the reader with considerable entertainment while making its point about what can reasonably be assumed in the total absence of information.
The same goes for the sublight option, only a lot more so. Unless they have FTL communications, which I believe we also consider effectively impossible, by the time they get here anything they think they know about us will be wildly outdated, in technological terms at the very least. Possibly also in terms of the dominant terrestrial species, but we can be generous here.
We can be generous about their information latency because that only matters if they want it to. Any species which wants us dead and doesn't care what state the planet's left in - a reasonable assumption, if we're talking about them popping our nukes at us - doesn't need to come close to landing, or even to orbit, to do it; a kinetic bombardment in passing will amply suffice to depopulate Earth to more or less any degree desired. For subluminal interstellar travel to be even remotely feasible, even for an individually long-lived species, implies access to the kind of delta-V budget where the only limiting factor in such a bombardment is the time it takes to accelerate impactors, which may be zero if those are released before or during deceleration to match velocity with our solar system.
In the former or FTL case, we don't know how FTL works or even could, and we therefore can assume anything we like - with all assumptions at equal risk of bankruptcy. In the latter or high-sublight case, they don't need to be more clever to kill us if that's all they're after, and it may be unreasonably charitable to assume we would even get a chance to see it coming.
That does not seem so far fetched to me. So many of the purported alien sightings are beings with bilateral symmetry, two eyes, two arms, humanoid face and so on. The only way I could see that happening is visitors from a distant future.
Or, more likely, the alien "creators" have created them more or less in our image.
Another possibility is that the Galactic Federation has a rule that when they need to visit a planet that is not yet aware of aliens the crew must entirely consist of beings that have the same general form as the people of that planet.
That makes it harder for someone who sees them to convince others that it was aliens and not just a trick of bad lighting or someone with deformities or injuries that give them an unusual appearance.
It also makes it less likely for them to be mistaken for some other species on the planet. Suppose the Federation sent an expedition to ancient Earth that included crew from a species that looked a lot like our cats, and accidentally let Earth people get too good a look at them and their technology.
Those Earth people might think those are Earth cats, and conclude that Earth cats are a lot more powerful and advanced than they thought, and that they had better stop treating cats like animals, lest the cats decide to wipe them out, and start treating them as superior beings instead. Next thing you know, that entire civilization is treating cats as magical beings of great value, or even worshiping them as gods.
It's quite possible that the humanoid lifeform is optimal to have a technological species that can travel between stars. An aquatic species would have huge difficulties just building technology and civilization, because of the habitat. A species without arms and thumbs would have a hard time manipulating its environment (just look at all the 4-legged animals now). A species with more than 4 limbs would likely either be too small (insects) to accomplish much, or would need too much energy (and probably evolve to lose the extra limbs over time).
There's good reasons to think that alien species might not look all that different from animals on this planet, simply due to physics. Animals here didn't simply spring to life in their current form; they evolved from single-celled organisms to best suit their environments.
The word 'parochialism' positively vaults to mind. Not to mention your flagrantly unjustifiable opening assumption, given the number of bipedal technological species known to us to travel among the stars is currently zero.
Speaking of Golden Age twists! And not fully thought-through ones, at that. It requires two assumptions: first that there exists a yahweh-style creator deity, and second that Genesis 1:26 is accurate to fact. Even taking both as axiomatic, this approach still further assumes that this likeness, namely the one in which we as humans are made, must also be the only likeness in which a mortal could be made after its creator.
Given the assumptions of faith under which we here labor, it may also be wise to heed 1 Cor. 2:11, in which the convert Roman makes one of his few worthy statements in warning men against imagining they can know the mind of God. In that light, the proposition lacks soundness even under its own axioms.
You're kidding right? It's all SciFi. In case you're confused, the Fi is short for Fiction. Stuff that's not real. So of course we're making assumptions on the entire thing. Including The Book as the greatest selling book of fiction of all time.
You're also now assuming that we Earthlings are the original source. Some scifi tropes have us as Martians fleeing their dying planet, or seeded by things like panspermia. I like the scifi where everyone is searching for the nearly mythical planet that turns out to be Earth. Ice Pirates is a goofy one.
You may labor under a misapprehension here; if I met Yahweh on the road, I would do my level best to kill it. But I was raised with that book, and still remember enough to play with the toys in it when I want to; if we're talking 1950s sf twists like "the aliens were fellow children of God all along!" then those are the toys with which we're playing.
That aside, of course we're making assumptions. But if we don't choose to either be bound by the assumptions we've already made or re-evaluate them, then we're playing with dolls rather than worldbuilding. Your pastimes are of course your own business, but it's been a long time indeed since I graduated from the former to the latter.
(Not that I mind space opera, when it focuses on the character-driven stories it's best suited to tell - trying to figure out how a TARDIS works misses the point entirely, while "The Doctor's Wife" is beautiful. But you mentioned science fiction, and my current standard there is set by Children of Time and Blindsight.)
They probably can do that much cheaper: just come up with two opposing conspiracy theories, and people will naturally divide into 2 camps and will eagerly kill each other.
Nowadays, how well are old-school X11 window managers doing the necessary interop with the desktop bus to keep modern GTK/Qt apps rendering correctly on 4K HiDPI screens? I was forced to give up on FVWM because tweaking my X resources (Xft.dpi) and GDK_SCALE settings for one app such as Firefox, in order to get it rendering correctly on a 4K screen, would break other apps. Firefox only seemed to do HiDPI correctly when run under a full GNOME desktop that provides the dbus messages it assumes are there to set the 200% font size and UI scale correctly. Is there a way to run an old-school X11 WM now that allows snaps/flatpaks/GNOME-bloat dbus apps to run without issue at 200% scaling on 4K screens?
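For reference, these are the kind of per-toolkit knobs I was juggling (a sketch with example values; the right numbers depend on your screen and toolkit versions, and the whole point is that per-app tuning like this breaks other apps):

```shell
export GDK_SCALE=2          # GTK3: integer UI scale factor
export GDK_DPI_SCALE=0.5    # GTK3: counteract font double-scaling in some apps
export QT_SCALE_FACTOR=2    # Qt apps have their own knob
# Old-school Xft-based toolkits instead read DPI from X resources, e.g.:
# echo "Xft.dpi: 192" | xrdb -merge
```

Under a full GNOME session none of this is needed because the scale is negotiated over dbus/gsettings, which is exactly what a bare WM doesn't provide.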
Does WebGPU enable a pure 2D game with many sprite animations to skip packing a texture atlas without losing performance? I.e., can I tell the GPU, "Draw these 1000 quads using the following 1000 distinct textures in just this one draw call. I'll be changing the 1000 textures each frame."? My experience with OpenGL and D3D11 is that an atlas is the only way to do this. (I've found stb_rect_pack.h to be the least-hassle route to packing the atlas.)

I started looking at D3D12 and saw that it had command recording, but it wasn't clear to me whether this is any more efficient for a 2D game than just using D3D11/OpenGL to send 1000 separate pairs of set-this-texture/draw-this-quad commands. With D3D12 the CPU is still performing thousands of function calls per frame to "record" these commands, and I don't see how this is cheaper than having D3D11 do thousands of draw calls. D3D11 just puts a draw call into a command queue and immediately returns, so isn't this effectively kinda the same thing as ID3D12CommandQueue "command recording"? I never got around to benchmarking or learning more, so I'm sure I must be misunderstanding the advantages.

I've also noticed that despite D3D12 launching back in 2015, the #1 most used engine, Unity, still defaults to D3D11 and has struggled to make D3D12 as performant/stable. So it seems I'm not the only one who can't figure out how the newer APIs offer more performance.
The "max number of sampled textures per shader stage" is a runtime device limit, and the minimum value for that seems to be 16. So texture atlases are still a thing in WebGPU.
WebGPU has render bundles, which let you pre-record command sequences, but even with those you don't want to change resource bindings thousands of times per frame (or even hundreds of times).
It might make sense, though, to build texture atlases dynamically (basically use one very big texture as a "tile cache") and update it via writeTexture() calls (just don't rebuild the entire atlas each frame).
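For the dynamic-atlas approach, even a naive "shelf" allocator goes a long way before anything fancier is needed. A minimal sketch (my own illustration; the returned rects would drive the writeTexture() uploads and the quad UVs):

```c
/* Naive shelf atlas allocator: fill the current row left to right,
   start a new row when the current one runs out of width. */
typedef struct { int x, y, w, h; } AtlasRect;
typedef struct {
    int width, height;   /* atlas dimensions in texels */
    int cur_x, cur_y;    /* cursor within the current shelf */
    int row_h;           /* height of the tallest rect on this shelf */
} Atlas;

/* Returns 1 and fills *out on success, 0 if the rect doesn't fit. */
static int atlas_alloc(Atlas *a, int w, int h, AtlasRect *out) {
    if (a->cur_x + w > a->width) {   /* current shelf full: start a new one */
        a->cur_x = 0;
        a->cur_y += a->row_h;
        a->row_h = 0;
    }
    if (w > a->width || a->cur_y + h > a->height) return 0;
    out->x = a->cur_x; out->y = a->cur_y; out->w = w; out->h = h;
    a->cur_x += w;
    if (h > a->row_h) a->row_h = h;
    return 1;
}
```

A shelf packer can't free individual rects, so for a tile cache you'd pair it with coarse eviction (e.g., reset whole shelves, or ping-pong between two atlas halves) rather than per-sprite frees. Sorting incoming rects by height before packing also noticeably reduces waste.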
The new modern APIs should be understood not as graphics APIs but as GPU APIs; using them directly is more akin to writing a graphics device driver than a rendering engine.
Most people are better served by using GL 4.6, DX 11 and such, or a middleware engine.
Even console vendors have multiple APIs because of that; not everyone needs every little detail of the GPU.
Unity is supposed to have good DirectX 12 support coming, by the way.
"Achieving Real Time Ray Tracing on Xbox with Unity and DirectX12"
> The new modern APIs are not to be understood as graphics APIs, rather as GPU APIs
I've heard this repeated a lot in internet forums, but my experience working through hundreds of pages of Frank Luna's 800+ page DX12 book, before concluding it pointless for 2D, was that DX12 is actually fundamentally very similar to DX11, with most of the API focused on graphics rather than general compute. Compute shaders are just one chapter (13) of Luna's book, roughly 40 of the 800+ pages. I did some of the LunarG Vulkan tutorial and browsed the Khronos reference pages and reached a similar conclusion for Vulkan.

I played with CUDA a bit, and that's what real GPU programming looks like: almost no mention of graphics in much of the documentation. The "hello world" program isn't drawing a triangle, it's adding two arrays. Whereas a great deal of DX12, Vulkan, etc. is all about pixel formats, pixel shading, swap chains, geometry and tessellation, blending, depth and stencil, mipmaps and cube maps, clip coords, triangle winding and culling, the perspective Z divide, viewports, indexed and instanced draw calls... you know, graphics.

But in chapter 9 of the DX12 book, end of 9.4, Luna writes, "Texture atlases can improve performance because it can lead to drawing more geometry with one draw call." So the conclusion I reached is that DX12 doesn't offer some fancy GPU-compute way of writing a GPU program that can use 1000 distinct textures to draw 1000 distinct quads with only one CPU function call to launch it.
Now I've been doing more research, and there is a newer feature called bindless textures, not covered in Luna's DX12 book, that might accomplish what I want (I'm not sure), but it seems to be Win11 only, WDDM 3.0 only, shader model 6.6 only, very new cards only. With this feature I might be able to set up 1000 distinct integer IDs for my 1000 distinct textures and then, with one single CPU draw call, have those 1000 textures applied to the correct 1000 quads, with no need to pack them into an atlas. Doing more web searching just now, this can possibly also be done in OpenGL on cards that support NV_gpu_shader5, but only semi-recent nVidia cards might support this. (I'm finding it difficult to get quick, quality answers to these sorts of questions using either web searches or LLMs.) Anyway, a gamedev forum or DX-focused reddit might be a better place for me to ask these sorts of technical questions.
If I understand correctly, what you are looking for are mesh shaders and shader work graphs, which basically allow one to do most of the compute work on the GPU without the CPU steering anything beyond setting up the whole chain.
You will need DirectX 12 Ultimate or Vulkan for them.
So I read through the materials on mesh shaders and work graphs and looked at sample code. These won't really work (see below). As I implied previously, it's best to research/discuss these sorts of matters with professional graphics programmers who have experience actually using the technologies under consideration.
So for the sake of future web searchers who discover this thread: there are only two proven ways to efficiently draw thousands of unique textures of different sizes with a single draw call that are actually used by experienced graphics programmers in production code as of 2023.
Proven method #1: Pack these thousands of textures into a texture atlas.
Proven method #2: Use bindless resources, which are still fairly bleeding edge and will require falling back to atlases if targeting the PC rather than only high-end consoles (Xbox Series S|X...).
Mesh shaders by themselves won't work: These have similar texture access limitations to the old geometry/tessellation stage they improve upon. A limited, fixed number of textures still must be bound before each draw call (say, 16 or 32 textures, not 1000s), unless bindless resources are used. So mesh shaders must be used with an atlas or with bindless resources.
Work graphs by themselves won't work: This feature is bleeding edge shader model 6.8 whereas bindless resources are SM 6.6. (Xbox Series X|S might top out at SM 6.7, I can't find an authoritative answer.) It looks like work graphs might only work well on nVidia GPUs and won't work well on Intel GPUs anytime soon (but, again, I'm not knowledgeable enough to say this authoritatively). Furthermore, this feature may have a hard dependency on using bindless to begin with. That is, I can't tell if one is allowed to execute a work graph that binds and unbinds individual texture resources. And if one could do such a thing, it would certainly be slower than using bindless. The cost of bindless is paid "up front" when the textures are uploaded.
Some programmers use Texture2DArray/GL_TEXTURE_2D_ARRAY as an alternative to atlases but two limitations are (1) the max array length (e.g. GL_MAX_ARRAY_TEXTURE_LAYERS) might only be 256 (e.g. for OpenGL 3.0), (2) all textures must be the same size.
Finally, for the sake of any web searcher who lands on this thread in the years to come: to pack an atlas well, a good packing algorithm is needed. It's harder to pack triangles than rectangles, but triangles use atlas memory more efficiently, and a good triangle packing will outperform the fancy new bindless rendering. Some open source starting points for packing: