My thoughts as well. I was holding a board the other day and it just seemed, forgive me, aerosol-ized. Like those Aero chocolates that are essentially full of bubbles. I thought, "This new wood doesn't feel like wood used to," and shook my fist at the passing cloud.
I have high hopes for this product as a leg of sustainability.
This is kind of how plywood is made - take wood chips and glue and press them together. I feel it wouldn't work well with sawdust, even with the chemical and heat+pressure process, since there would be little natural cohesion between the particles (larger pieces = more strength, up until you have entire logs/boards).
Large chips glued together is Oriented Strand Board (OSB). Small chips glued together is Particle Board. Sawdust glued together is Medium Density Fiberboard (MDF). Plywood is layers of veneer--thin sheets of wood--glued together in alternating orientations.
Plywood can be nice. It doesn't expand with temperature changes like planks and doesn't have a grain direction that it can split along.
The others, I hate. Any small amount of moisture and they delaminate. OSB is so ugly and rough that you need to hide it because you'll never be able to apply enough primer to cover the chip pattern. I'd rather just use regular plywood at that point. Particle board is the same, but I'm okay with the kind coated on both sides with melamine. It's pretty hard to get a much flatter surface than melamine particle board without spending ridiculously more on granite.
But MDF is the worst. A lot of people like MDF because it's easy to work and can be fairly structural, if you use it right. But it's very, very easy to damage, has absolutely zero edge strength, and it makes a super-fine, extremely carcinogenic sawdust that is extremely difficult to clean up completely. Yes, all sawdust is carcinogenic, always wear a mask in the wood shop, but MDF sawdust never goes away.
Frankly, it's just easier to get a bunch of sheets of birch plywood and southern pine dimensional lumber shipped direct to my house and not worry about it.
I personally find pleasure in reading my old notes, even ones that contain outdated ways of thinking, incorrect assumptions, etc. If anything, it helps me reflect on the growth that's occurred. I agree it's not necessarily productive to log everything all the time, though.
Me too. But again, it's nice to re-read old notes that are "lost to time". The author of this piece is clearly finding the past is actively influencing the present:
> At least for me — and most of the people I know — we got a garbage dump full of crufty links and pieces of text we hardly ever revisit. And we feel guilty and sad about it.
It'll never work if you can't leave things behind.
> Oh yeah, and if you didn't bleed when building a PC, you didn't really build a PC
Can confirm - I recently built an AT system from fully disassembled case, as the case was gross and all parts/screws/plastics needed a bath. My hands were very rough by the end, lots of small cuts!
Although not perfect, I added a couple features to help ensure uptime:
* LAN components are on a UPS, which helps keep continuity through power blips and breaker flips
* Dynamic DNS: cron runs a script 4x per day to ensure a DNS name points to my IP, even if the ISP issues me a new one
* Rebooting everything occasionally to ensure the network and services come back up on their own and I didn't make a mistake with some config that loads at boot, etc.
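The dynamic-DNS check above can be sketched roughly like this. This is a minimal sketch: the hostname is hypothetical, the IP lookup uses the public ipify service, and the actual update call is a placeholder since every DNS provider has its own API.

```python
import json
import socket
import urllib.request

def current_public_ip():
    # Ask a public "what is my IP" service (ipify here) for our WAN address.
    with urllib.request.urlopen("https://api.ipify.org?format=json") as r:
        return json.load(r)["ip"]

def resolved_ip(hostname):
    # What the DNS name currently points to.
    return socket.gethostbyname(hostname)

def needs_update(public_ip, dns_ip):
    # Only touch the record when the ISP has actually issued a new address.
    return public_ip != dns_ip

def update_record(hostname, ip):
    # Placeholder: substitute your DNS provider's update API call here.
    print(f"would update {hostname} -> {ip}")

# Usage (run from cron a few times a day):
#   host = "home.example.com"  # hypothetical hostname
#   ip = current_public_ip()
#   if needs_update(ip, resolved_ip(host)):
#       update_record(host, ip)
```

Only updating when the addresses differ keeps the script idempotent, so running it 4x a day is harmless.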
Indeed, that and missing a pretty serious-sounding email from Apple.
I 100% commiserate with the author and have missed important stuff before, but yeah, this article reminds me of the importance of making a habit of reading the emails from my "important" accounts each morning. I set up some guardrails to help save myself from my own humanity, e.g. monthly "pay CC" calendar events.
> sounds like the apple support person never thought to ask or check if they had unpaid bills for those frozen services.
Yeah, that was a little surprising to me too. One would imagine that one's iCloud or Apple account would have everything in one place on Apple's side. The support person should have been able to pull up the account and see some reference number related to the incomplete trade-in, the bounced payment, and some status message about the account being auto-locked due to missing payment.
If the support person saw a normal, unaffected account, that makes me think visibility into the account is restricted by support tier, and the person should have had the ability to escalate or request (logged) access to more details. It's a shame it took the author multiple days and calls into different departments to find resolution for what should have been a very obvious payment problem.
It's very impressive to see "realistic" graphics on the N64. The demo reminds me of "ICO" for the PS2.
I've always wondered if it would be possible to create an SDK to abstract the N64 graphics hardware and expose some modern primitives, lighting, shading, tools to bake lighting as this demo does, etc. The N64 has some pretty unique hardware for its generation, more details on the hardware are here on Copetti.org:
Note that the N64 was designed by SGI, and seeing how influential SGI was for 3D graphics, I sort of assume the reverse: that the N64 probably has the most standard hardware of its generation. I would be vaguely surprised if there were not an OpenGL library for it.
However, there are two large caveats: 1. you have to think of the system as a graphics card with a CPU bolted on, and 2. the graphics system is directly exposed.
Graphics chip architectures end up being an ugly, hateful, incompatible mess, and as such the vendors of said accelerators generally avoid publishing reference documents for them, preferring to publish intermediate APIs instead: things like OpenGL, DirectX, CUDA, Vulkan. Mainly so that under the hood they can keep them an incompatible mess (if you never publish a reference, you never have to maintain hardware backwards compatibility; the upside is they can create novel designs, the downside is no one can use them directly). So when you do get direct access to one, as in that generation of game console, you sort of instinctively recoil in horror.
Footnote on graphics influence: OpenGL came out of SGI, and Nvidia was founded by ex-SGI engineers.
> that the n64 probably has the most standard hardware of it's generation
The Reality Coprocessor (or RCP) doesn't look like any graphics cards that previously came out of SGI. Despite the marketing, it is not a shrunk down SGI workstation.
It approaches the problem in very different ways and is actually more advanced in many ways. SGI workstations had strict fixed-function pixel pipelines, but the RCP's pixel pipeline is semi-programmable. People often describe it as "highly configurable" instead of programmable, but it was the start of what led to modern pixel shaders. The RCP could do many things in a single pass that would require multiple passes of blending on an SGI workstation.
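For a concrete sense of what "highly configurable" means here: the RDP's color combiner evaluates a fixed equation of the form (A - B) * C + D per pixel, with each of the four inputs selected from sources like the texel color, shade (vertex) color, primitive color, or a constant. A rough sketch, using floating-point values for readability rather than the hardware's fixed-point math:

```python
def combine(a, b, c, d):
    """One N64-style color-combiner stage: (A - B) * C + D.

    Inputs are color channel values in [0, 1]; in hardware each of
    A, B, C, D is selected from sources such as texel color, shade
    color, primitive color, or environment color.
    """
    return (a - b) * c + d

# Texture modulated by vertex lighting: (tex - 0) * shade + 0
texel, shade = 0.8, 0.5
lit = combine(texel, 0.0, shade, 0.0)   # same as texel * shade

# Linear blend between two colors by factor t: (x - y) * t + y
x, y, t = 1.0, 0.2, 0.25
blended = combine(x, y, t, y)
```

Because the input selections (and a second combiner cycle) can be reprogrammed per primitive, one equation covers modulation, blending, fog, and more in a single pass.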
And later SGI graphics cards don't seem to have taken advantage of these innovations either. SGI hired a bunch of new engineers (with experience in embedded systems) to create the N64, and then once the project was finished they made them redundant. The new technology created by that team never had a chance to influence the rest of SGI. I get the impression that SGI was afraid such low-cost GPUs would cannibalise their high-end workstation market.
BTW, the console that looks most like a shrunk-down 90s SGI workstation is actually Sony's PlayStation 2: a fixed-function pixel pipeline with a huge amount of blending performance to facilitate complex multi-pass blending effects. Though SGI wouldn't have let programmers have access to the Vector Units and DMAs like Sony did; SGI would have abstracted it all away with OpenGL.
------------------
But in a way, you are kind of right. The N64 was the most forward-looking console of that era, and the one that ended up closest to modern GPUs. Just not for the reason you suggest.
Instead, some of the ex-SGI employees who worked on the N64 created their own company called ArtX. They were originally planning to create a PC graphics card, but ended up with the contract to first create the GameCube for Nintendo (the GameCube design shows clear signs of engineers overcompensating for flaws in the N64 design). Before they could finish, ArtX was bought by ATI, becoming ATI's west-coast design division, and the plans for a PC version of that GPU were scrapped.
After finishing the GameCube, that team went on to design the R3xx series of GPUs for ATI (Radeon 9700, etc).
The R3xx is most noteworthy for having a huge influence on Microsoft's DirectX 9.0 standard, which is basically the start of modern GPUs.
So in many ways, the N64 is a direct predecessor to DirectX 9.0.
Both use a unified memory architecture, where the GPU and CPU share the same pool of memory.
On the N64, the CPU always ends up bottlenecked by memory latency. The RAM latency is quite high to start with: the CPU sits idle for ~40 cycles whenever it misses the cache, assuming the RCP is idle. If the RCP is not idle, contention can sometimes push that well over 150 cycles.
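Some back-of-envelope arithmetic shows how badly those stalls hurt. The miss rate and base CPI below are purely illustrative assumptions, not measured N64 figures; only the 40- and 150-cycle penalties come from the discussion above.

```python
def effective_cpi(base_cpi, miss_rate, miss_penalty):
    # Average cycles per instruction once cache-miss stalls are included:
    # each instruction pays the full penalty with probability miss_rate.
    return base_cpi + miss_rate * miss_penalty

# Assume 5% of instructions miss the cache and a base CPI of 1.
quiet = effective_cpi(1.0, 0.05, 40)    # RCP idle: ~40-cycle penalty
busy  = effective_cpi(1.0, 0.05, 150)   # heavy contention: ~150 cycles
# quiet = 3.0, busy = 8.5 -> the same code runs almost 3x slower
# when the RCP is hammering the bus.
```

The point is that even a modest miss rate lets memory latency dominate, which is why the GameCube changes below all attack caching, latency, and bus traffic.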
The GameCube fixed this flaw in multiple ways. They picked a CPU with a much better cache subsystem: the PowerPC 750 had multi-way caches instead of direct-mapped ones, plus a quite large L2 cache. Their customisations added special instructions to stream graphics commands without polluting the caches, resulting in far fewer cache misses.
And when it did miss the cache, the latency to main memory was under 20 cycles (despite the GameCube's CPU running at 5x the clock speed). The engineers picked main memory that was super low latency.
To fix the issue of bus contention, they created a complex bus arbitration scheme and gave CPU reads the highest priority. The GameCube also has much less traffic on the bus to start with, because many components were moved out of the unified memory.
---------------------------
The N64 famously had only 4KB of TMEM (texture memory). Textures had to fit in just 4KB, and to enable mipmapping, they had to fit in half that. This led to most games on the N64 using very small textures stretched over very large surfaces with bilinear filtering, and kind of gave N64 games a distinctive design language.
Once again, the engineers fixed this flaw in two ways. First, they made TMEM work as a cache, so textures didn't have to fit inside it. Second, they bumped the size of TMEM from 4KB all the way to 1MB, which was massive overkill, way bigger than any other GPU of the era. Even today's GPUs only have ~64KB of cache for textures.
---------------------------
The fillrate of the N64 was quite low, especially when using the depth buffer and/or doing blending.
So the GameCube got a dedicated 2MB of memory (embedded DRAM) for its framebuffer. Now rendering doesn't touch main memory at all. The depth buffer is now free, with no reason not to enable it, and blending is more or less free too.
Rasterisation was one of the major causes of bus contention on the N64, so this embedded framebuffer had the side effect of solving the bus-contention problem too.
---------------------------
On the N64, the RSP was used for both vertex processing and sound processing. Not exactly a flaw, since it saved on hardware, but it did mean any time spent processing sound was time that couldn't be spent rendering graphics.
The GameCube got a dedicated DSP for audio processing. The audio DSP also got its own pool of memory (once again reducing bus contention).
As for vertex processing, that was all moved into fixed-function hardware. (Not many GPUs did transform and lighting in fixed-function hardware: earlier GPUs often implemented it in DSPs, like the N64's RSP, and the industry was very quickly switching to vertex shaders.)
The RCP was actually two hardware blocks: the RDP, which as you say did the fixed-function (but very flexible) pixel processing, and the RSP, which handled command processing and vertex transformation (and audio!).
The standard API was pretty much OpenGL, generating in-memory command lists that could be sent to the RSP.
However, the RSP was a completely programmable MIPS processor (with parallel SIMD instructions).
One of my favorite tricks in the RDP hardware was that it used the parity bits in the Rambus memory to store coverage bits for antialiasing.
Good point. The software APIs are where you do see the strong SGI influence. It's not OpenGL, but it's clearly based on their experience with OpenGL. The resulting API is quite a bit better than those of the other 5th-gen consoles.
It's only the hardware (especially RDP) that has little direct connection to other SGI hardware.
https://brenthull.com/article/old-growth-wood