
One of the best teams on the planet!

An authoritative source on this is Inventing the Alphabet by Johanna Drucker. She covers not only the modern evidence but also attempts to classify alphabets throughout history, with particular focus on the Middle Ages. The first half is a bit dry -- how much do we really care what various scholars in the 16th century made up about the history of the alphabet? (A lot was made up.) But the second half looks at the modern archaeological contribution to the study of alphabetic origins and is very interesting.

The book also includes lots of scans of really interesting medieval manuscripts cataloging alphabets.


Thanks!

https://press.uchicago.edu/ucp/books/book/chicago/I/bo141943...

From the book's "Chapter 7. Modern Archaeology -- Putting the Evidence of the Alphabet in Place":

"We can now describe the origin of the alphabet chronologically and geographically with some degree of reliability. The basic outlines are these: The alphabet was formed in the context of cultural exchanges between Semitic-speaking people from the Levant and communities in Egypt after or around 1800 BCE. The earliest evidence is dated to Wadi el- Hol, a site in Egypt just west of the Nile, north of Luxor. Later inscriptions in the Sinai and throughout the Fertile Crescent show the gradual distribution and evolution of alphabetic writing, with most evidence dating from the fourteenth century BCE and after."

Regarding "how much do we really care what various scholars in the 16th Century made up about the history of the alphabet?" we should, if we're interested in knowing the context in which various claims, which pop up even today, appeared for the first time. The context often explains the motivation which resulted in specific "made up" narrations, some still (often unfortunately) influencing our lives.


Unfortunately it's not perfectly supported. If you set up a private registry on gitlab and try to push packages with epoch versions, gitlab will normalize the ! to _ so everything breaks. (We found this out the hard way recently.)

Another Python versioning gotcha I've never seen discussed is use of instructions like AVX2 in C extensions. There seems to be no way in Python versioning to say "this version is for CPUs with AVX2 instructions" so I guess public C extensions have to be compiled for the least common denominator instruction set. Does anyone know otherwise? How do other packaging systems like cargo deal with this?


> If you set up a private registry on gitlab and try to push packages with epoch versions, gitlab will normalize the ! to _ so everything breaks. (We found this out the hard way recently.)

That's unfortunate; I know Poetry's support for epochs has also been spotty (at least historically, I'm not sure about currently).

> There seems to be no way in Python versioning to say "this version is for CPUs with AVX2 instructions" so I guess public C extensions have to be compiled for the least common denominator instruction set. Does anyone know otherwise?

This is my understanding as well: x86_64 in wheel tags implies the lowest common denominator for AMD64. I think the best you can generally do here is either (1) accept the pain that comes with sdists, or (2) do runtime feature detection for things like AVX2.
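For illustration, here's a minimal sketch of option (2) in pure Python, using the third-party py-cpuinfo package; the kernel functions are hypothetical stand-ins for routines from differently-compiled extension modules:

    # Sketch: pick a code path once, at import time, based on CPU flags.
    # Requires the third-party py-cpuinfo package (pip install py-cpuinfo).
    from cpuinfo import get_cpu_info

    _FLAGS = set(get_cpu_info().get("flags", []))

    def _portable_kernel(data):
        # Fallback path, built for baseline x86_64.
        return sum(data)

    def _avx2_kernel(data):
        # Stand-in for a version of the routine built with AVX2 enabled.
        return sum(data)

    kernel = _avx2_kernel if "avx2" in _FLAGS else _portable_kernel

In a real C extension you'd more likely dispatch in C itself (e.g. via GCC's __builtin_cpu_supports), but the shape is the same: ship one wheel with several code paths and choose at runtime.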

> How do other packaging systems like cargo deal with this?

Cargo always builds from source, so they effectively punt on this problem.


Or simply require an AVX2-capable CPU. According to the Steam survey, 90% of machines have AVX2 support. That's not as nice as the 99.18% SSE4.2 support or the 100% SSE3 support, but for some use cases, such as scientific libraries, requiring a CPU that is less than 8 years old makes sense to me. A more helpful error message than "Illegal Instruction" would still be a good idea, though. This does require runtime AVX2 detection, but at least it avoids the pain of trying to compile "pristine" architecture-specific code, which seems possible only for C/C++ code that doesn't call any standard library functions, or by writing assembler directly.


I distinctly remember buying a 1GB hard drive in 1992 for $1300 via an ad in Computer Shopper. And that was a really good price!


At age 15 I wrote a pacman clone for the Atari ST and was both impressed by and jealous of Minter's Llamatron for the same platform. My game hit 30Hz only rarely, usually degrading to 15Hz, and you could really feel it in the gameplay. Llamatron was always fast (always 60Hz?) -- because that's just how Jeff rolls. Respect.

On the plus side, my crappy pacman clone was good enough to convince Andy Gavin to (years later) bring me on as the first developer hire at Naughty Dog. The system works! (I guess?)


As someone working at the same time as him on similar problems, I honestly and sincerely disagree. At the time my reaction was "this is space cadet stuff". 486 hardware was really slow, and the idea of doing any kind of real computational geometry back then was incredibly far-fetched. Until he showed it wasn't.


Amazingly brilliant work, especially given the CPU capabilities at the time. Carmack's use of BSP trees inspired my own work on the Crash Bandicoot renderer. I was also really intrigued by Seth Teller's Ph.D. thesis on Precomputed Visibility Sets though I knew that would never run on home console hardware.

None of these techniques is relevant anymore given that all the hardware has Z buffers, obviating the need to explicitly order the polygons during the rendering process. But at the time (mid 90s) it was arguably the key problem 3D game developers needed to solve. (The other was camera control; for Crash Andy Gavin did that.)

A key insight is that sorting polygons correctly is inherently O(N^2), not O(N lg N) as most would initially assume. This is because polygon overlap is not a transitive property (A in front of B and B in front of C does NOT imply A in front of C, due to cyclic overlap.) This means you can't use O(N lg N) sorting, which in turn means sorting 1000 polygons requires a million comparisons -- infeasible for hardware at the time.
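To make the non-transitivity concrete, here's a toy sketch in Python (the overlap relation is invented, not derived from real geometry):

    # Toy illustration: "in front of" with a cycle. A is in front of B,
    # B in front of C, and C in front of A -- no back-to-front order
    # exists at all. Finding the relation requires testing all pairs,
    # which is the O(N^2) part.
    from graphlib import TopologicalSorter, CycleError  # Python 3.9+

    in_front_of = {"A": {"B"}, "B": {"C"}, "C": {"A"}}

    try:
        print("draw order:", list(TopologicalSorter(in_front_of).static_order()))
    except CycleError:
        # A BSP tree sidesteps this by splitting polygons so that a
        # consistent order always exists.
        print("cyclic overlap: no valid order without splitting polygons")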

This is why many games from that era (3DO, PS1, etc) suffer from polygons that flicker back and forth, in front of and behind each other: most games used bucket sorting, which is O(N) but only approximate, and not stable frame to frame.

The handful of games that did something more clever to enable correct polygon sorting (Doom, Crash and I'm sure a few others) looked much better as a result.

Finally, just to screw with other developers, I generated a giant file of random data to fill up the Crash 1 CD and labeled it "bsptree.dat". I feel a bit guilty about that given that everyone has to download it when installing the game from the internet, even though it is completely useless to the game.


> Finally, just to screw with other developers, I generated a giant file of random data to fill up the Crash 1 CD and labeled it "bsptree.dat". I feel a bit guilty about that given that everyone has to download it when installing the game from the internet, even though it is completely useless to the game.

Wonderful! THIS is the kind of silly nitty gritty detail I want to hear more about - particularly the whys and wherefores of the decision, largely-unconsidered as it may well have been. Puts me in mind of the semi-affectionate rivalry between different demo/crack houses in the eighties and nineties, engaging in largely unseen conflict, all submarine-like :) And, if you're reading this - know that this thoroughly un-tech nerd has read all of your Crash Bandicoot breakdowns, no matter how arcane they might seem to me :)


> None of these techniques is relevant anymore given that all the hardware has Z buffers

Not true if you consider transparent objects. Order-independent transparency is still a hot topic, with no clearly winning solution on the GPU.

Web browsers have lots of semi-transparent rectangles, which can be transformed under a "preserve3d" context. This is a classic case where BSP is effective and still current. (Background: I implemented BSP in WebRender a few years ago.)

https://github.com/servo/plane-split


Interesting — TIL. Thanks for commenting.


Wasn't Michael Abrash harping on BSPs breaking down in the case of transparency and portals, in Zen of Graphics (the Black Book?)? Glad to see some found ways around it.


Z buffer has limited bit depth so this is all still relevant.


I'm developing an isometric city builder game. I've figured out how to depth sort everything on the CPU efficiently, though there were some ugly workarounds I had to implement. Suffice it to say, for anything outside of buildings, sorting on the screen.y position of the texture works perfectly (so long as each asset is one tile in size, or broken up into multiple tiles).

But I am starting to implement vehicles, my first assets that are multiple tiles in size. It would be really nice if I could create one asset for the vehicles, but sorting on screen.y will not work for 3D rectangles, so I am breaking them up into multiple tiles...

Do you think BSP trees will work with thousands of moving assets? i.e., I would have to recreate the BSP tree every frame.


How much do things move between two frames? Would you really have to recreate the full tree or just reorder a small number of leaves?


Somewhere between one and $tile_size pixels. Tile size depends on the zoom level (I'm using SFML, and had to add a separate sprite sheet for each zoom level because SFML's built-in zoom-out method is awful).

This is the first time I've heard of BSP, and I read most of the OP's article to get a very basic understanding of how it works.

Since this is a tree, reordering N elements would approach N^2 complexity, would it not? (edit: I assumed you would have to find each node from the root, which could very well be a bad premise.)


Yes, it would average nlogn and worst case would be n², unless I'm thinking about it incorrectly.

But this is one of those cases where asymptotic complexity seems a bit too reductive. One of the "n"s in the complexity is really only a fraction of the full n.

That said, if I understand your use case correctly, you might not benefit much from having a binary subdivision of space for this. It sounds like what you need is more of a plain ordered list. One of the ways to implement that is as a binary tree, but an array that is kept ordered may be fast enough due to locality etc.
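As a sketch of that last idea (assuming depth keys change only slightly between frames, so the list stays almost sorted), insertion sort does nearly O(n) work and is cache-friendly:

    # Sketch: re-sort an almost-sorted draw list in place each frame.
    # With frame-to-frame coherence this does ~O(n) work in practice.
    def resort_by_depth(items):
        # items: list of (depth_key, sprite) tuples, nearly sorted.
        for i in range(1, len(items)):
            current = items[i]
            j = i - 1
            while j >= 0 and items[j][0] > current[0]:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = current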


There's probably some optimization you can do, but you can't know that the second frame isn't the conclusion of a move started 10 frames ago that finally reveals a giant area.

    frame 0: open door
    frame 10: still can't see past edge of door
    frame 11: door opens enough to see big new area
Between frames 10 and 11 it turns out there's a huge difference, even though they're adjacent frames.


The door being open or closed shouldn’t change the order of most elements. So it’s only relevant if you haven’t been sorting occluded objects.

Which gets into one of the major tradeoffs in rendering engines: do you try to minimize the average effort per frame, or the worst-case effort per frame?


Assuming everything touches the floor and the floor is flat, sort based on the bottom y coordinate of each object.


This works perfectly for isometric cubes, and is what I do currently.

If you imagine two long rectangles, one in each of your two hands, and pretend they are two cars passing each other in an isometric world, you will see that pretty soon one car's screen.y position will be below the other's before it is clear of the vehicle that should actually be occluding part of it.


I thought about this for a bit, and I have a feeling that as long as everything is touching the ground, then making covering loops is impossible, and so there exists a simple ordering you can compute.

The ordering is as follows: I'm assuming the isometric rendering of a map as a 45-degree tilted square, and I'm only considering tile ordering for simplicity, but it should generalize fine. The uppermost tile is where you want to start rendering. From there, you render following the two 45-degree diagonals until you are done (so you don't only look at the y axis). Once this is done, you restart the process from the tile just below the uppermost corner, and so on. This ordering makes sure that all rectangular objects that are aligned with the 45-degree diagonals are rendered correctly.

Now you need an additional trick to render rectangular objects that are transversal to those diagonals correctly. What you do is you keep track of the boundaries of all such objects, so that the rendering loop described above can tell when it encounters one. Once it encounters it, it pauses rendering the current diagonal and considers it temporarily complete. The diagonal on the other side still needs to be rendered fully though --- or at least as far as possible with the same stopping condition. The next rendering pass will likely at some point re-encounter the same transversal object, just at a further point. Stop again, start the next diagonal. Once the rendering encounters the lowest and last part of the transversal object, then that object can be rendered, and the first stopped diagonal can be resumed (and after this resume all the paused diagonals in order).

This should always give you the correct order to render everything without errors. Let me know if this made sense, otherwise I can try to clarify.
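If it helps, here's a literal sketch of the traversal loop described above (just the diagonals; the pause-and-resume handling for transversal objects is omitted), for a width x height map addressed by (row, col), with (0, 0) as the uppermost corner:

    # Sketch: render order starting at the top corner, walking its two
    # 45-degree diagonals (row k and column k), then restarting one
    # tile lower at (k+1, k+1). Transversal-object pausing not shown.
    def draw_order(width, height):
        for k in range(min(width, height)):
            yield (k, k)
            for col in range(k + 1, width):    # down-right diagonal
                yield (k, col)
            for row in range(k + 1, height):   # down-left diagonal
                yield (row, k)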


This is an interesting approach, though I don't think it will work. Vehicles/units move pixel by pixel, not tile by tile. Each wall tile in my game takes up one tile, and walls sit on the edge of a tile. I don't think this will handle unit/wall rendering order properly unless I also form larger shapes from the walls.


I forget the exact algorithm, but you can factor your x in with y. For right-facing sides, anything at the same y with a greater x is in front of a lesser x, and vice versa for left-facing sides.


> None of these techniques is relevant anymore given that all the hardware has Z buffers, obviating the need to explicitly order the polygons during the rendering process.

You can’t mean that all the polygons in a game world are now sent to the GPU, relying entirely on viewport culling and the Z buffer to remove the ones out of view? I’m not an expert but I’m sure that’s not true - don’t Source and the latest id Tech use BSP right now, for example?


That's known as "occlusion culling", and it's still a bit of an open problem [0]; a lot of games do indeed just send all draw calls inside the frustum. It turns out a good Z-prepass is pretty cheap and helps a lot with the things occlusion culling would normally help with. And deferred rendering helps even more with things like quad overdraw from disparate objects, as long as your material mask is mostly the same.

The Source Engine still uses a pre-built BSP for surface visibility. Source 2 has replaced this with a world-aligned visibility octree [1].

[0] The common solution these days is a low-resolution depth buffer rasterized on the CPU with SIMD, because doing it on the GPU would add noticeable latency... you've probably played a game where you turn into a twisty hallway and the walls/objects in there only show up after a few frames... that's GPU occlusion culling at work.

[1] https://developer.valvesoftware.com/wiki/Half-Life:_Alyx_Wor...


It's a bit more nuanced than that. The parent commenter is correct that modern game engines don't need to sort polygons to render them correctly (with some exceptions e.g. transparent objects), and doing so at a fine-grained level is generally not worth the CPU cost. Especially since the bandwidth between the CPU and GPU can be a bottleneck, so if possible you want to only upload the level geometry once instead of having to rearrange it on every frame.

Game engines can of course still do their own coarse-grained occlusion culling, in order to reduce the amount of geometry that needs to be processed per frame. And there can still be a benefit to approximately sorting objects: if you draw objects in approximate front-to-back order, then a shader can do "early depth testing" and skip the expensive shading computations for pixels that are occluded by already-drawn polygons.
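As a rough sketch of that last point (names are illustrative, not from any particular engine), the coarse pass can be as simple as sorting opaque draws by view-space distance:

    # Sketch: order opaque draws roughly front-to-back so early depth
    # testing can reject shading work for occluded pixels.
    import math

    def sort_front_to_back(draws, camera_pos):
        # draws: list of (center_xyz, draw_call) pairs.
        def dist(draw):
            return math.dist(draw[0], camera_pos)  # Python 3.8+
        return sorted(draws, key=dist)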


Even for modern games that don't care about depth ordering, they still tend to render first-person weapons or third-person characters first, and the sky last, just because it's an easy way to skip a bunch of overdraw, so why not?


It should be noted that virtually all renderers do frustum culling, meaning anything in the game world that is guaranteed to be out of the camera's viewing volume is not rendered. Frustum culling is quite simple. Occlusion culling is usually done after frustum culling and is more complex, and the method tends to vary depending on the scene type and game engine, or it is sometimes not done at all (modern GPUs are so powerful that smaller or simpler game areas render fast enough without occlusion culling).


Occlusion culling is still relevant and of course BSP trees help with that as well. What I meant is there is no value in sorting polygons any longer. (As far as I know; the last time I worked on a game was 1997.)


From what I've seen in modern game engines, the current state of the art seems to be executing a compute shader that does frustum and occlusion culling on the GPU and dynamically generates an indirect argument buffer, which then gets drawn with one or a few indirect draw calls.

This article contains a basic implementation of this idea in Vulkan - https://vkguide.dev/docs/gpudriven/gpu_driven_engines/


Source was released 18 years ago.

The current Source 2 does not use BSP anymore.


I think BSP was kind of obsolete even when it was used in Source 1, but was left in because removing it would have required changing a lot of code for no real benefit. The blocky brush-based level geometry system from GoldSrc still remained, but they added a lot of higher-fidelity features on top of it. IIRC, the level editing pipeline code for things like PVS and baked lighting was still based on code from GoldSrc. Better, newer techniques existed, but what Valve had wasn't broken, so why fix it?


yeah this didn't seem right to me either


> I’m not an expert but I’m sure that’s not true

If you're not an expert, why are you sure it's not true? Most games render a lightweight pass to do all rasterization to a g-buffer, then do the expensive shading later. This separates visibility from shading. If fragments are already occluded by the z-buffer they can be skipped as soon as they can test their z value against the buffer.


Do you have my comments bookmarked or something? You reply to me with something negative to say far more frequently than if you came across them randomly. Can you give it a rest if you can’t manage to tone down the snide please?

As other comments say (including the original comment's author), game engines actually do still rely on their own space partitioning to reduce draw calls. Source 2 just does it with a quad tree rather than a binary one. Source is still used in actively developed games and is still BSP, so it’s not true that the techniques are not relevant.


[flagged]


Just give replying to me a rest will you, if you don’t like my comments.


In addition to https://news.ycombinator.com/item?id=33715814:

You're right insofar as CyberDildonics has been replying to you frequently and negatively, in a way that suggests a strong bias if not outright harassment. It's complicated, though, because you've also replied to CyberDildonics frequently and negatively. I don't see the same evidence of bias in your case, because there are other users you've replied to more frequently. But it's clear that it's a two-sided situation, in that you've also been feeding the problem. I realize it can be annoying when people show up to pester you in different threads. It sucks to just ignore it, but that's the best option (i.e. the worst option except for all the others), so please do that in the future.


The person you replied to made an informative comment and you seemed to want to poke a hole in a generalization that people could learn from, which a lot of other people have pointed out, so I don't know why you would get upset at me for explaining more about how rendering works.


There are two issues here.

First, chrisseaton is correct insofar as you have replied to him way more times than you have to anyone else, and your replies have been haranguing if not harassing. That's abusive, and we ban accounts for that kind of thing, so please stop.

It's also the case that chrisseaton has posted a lot of negative replies to you, so this situation isn't exactly one-sided. Both of you need to leave each other alone from now on.

Second, more generally: your account has a long history of posting in the flamewar style, being aggressive towards others, and generally breaking the HN guidelines. We ban accounts that do this. I'm not going to ban you right now because (a) you also post good things, and (b) it has been a long time since we warned you about this:

https://news.ycombinator.com/item?id=27571842 (June 2021)

https://news.ycombinator.com/item?id=19064798 (Feb 2019)

https://news.ycombinator.com/item?id=18447328 (Nov 2018)

https://news.ycombinator.com/item?id=18103733 (Sept 2018)

https://news.ycombinator.com/item?id=10611272 (Nov 2015)

https://news.ycombinator.com/item?id=10609469 (Nov 2015)

But it's clearly a longstanding problem, and we need you to correct it or we're going to end up having to ban you. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

https://news.ycombinator.com/item?id=10336908 (Oct 2015)


> A key insight is that sorting polygons correctly is inherently O(N^2), not O(N lg N) as most would initially assume. This is because polygon overlap is not a transitive property (A in front of B and B in front of C does NOT imply A in front of C, due to cyclic overlap.) This means you can't use O(N lg N) sorting, which in turn means sorting 1000 polygons requires a million comparisons -- infeasible for hardware at the time.

A key insight for binary space partitioning is that it also solves the problem above by making sure this never happens. It slices polygons for that specific reason.
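Here's a bare-bones 2D sketch of that slicing, with walls as line segments; real engines work with polygons and planes and choose split candidates much more carefully:

    # Minimal 2D BSP over wall segments ((x1, y1), (x2, y2)). Segments
    # straddling a node's splitting line get sliced -- which is exactly
    # what guarantees a cycle-free back-to-front order always exists.

    def side(p, a, b, eps=1e-9):
        # +1 if p is left of the line a->b, -1 if right, 0 if on it.
        d = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        return 0 if abs(d) < eps else (1 if d > 0 else -1)

    def split_point(seg, a, b):
        # Intersection of seg with the infinite line through a and b.
        (px, py), (qx, qy) = seg
        dx, dy = b[0] - a[0], b[1] - a[1]
        fp = dx * (py - a[1]) - dy * (px - a[0])
        fq = dx * (qy - a[1]) - dy * (qx - a[0])
        t = fp / (fp - fq)
        return (px + t * (qx - px), py + t * (qy - py))

    def build(segs):
        if not segs:
            return None
        (a, b), rest = segs[0], segs[1:]
        front, back = [], []
        for p, q in rest:
            sp, sq = side(p, a, b), side(q, a, b)
            if sp >= 0 and sq >= 0:
                front.append((p, q))
            elif sp <= 0 and sq <= 0:
                back.append((p, q))
            else:  # straddles the split line: slice it
                m = split_point((p, q), a, b)
                (front if sp > 0 else back).append((p, m))
                (front if sq > 0 else back).append((m, q))
        return {"seg": (a, b), "front": build(front), "back": build(back)}

    def back_to_front(node, eye):
        # Painter's order: far child, node's segment, near child.
        if node is None:
            return []
        a, b = node["seg"]
        near, far = ("front", "back") if side(eye, a, b) > 0 else ("back", "front")
        return back_to_front(node[far], eye) + [node["seg"]] + back_to_front(node[near], eye)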


Excellent work on Crash! I still play it to this day (the original version, but the recent remaster is also great).

Your point about flickering just made me wonder if ordering problems were exacerbated by the lack of floating-point hardware; we already know that this is the reason for the PlayStation's wobbly (charming) graphics, but I imagine it may have also contributed to errors in the ordering process, since the positional errors would include Z.


This is a common myth about the PS1. In actual fact geometry calculations can have a decent amount of precision, as they are usually carried out using the 4.12 fixed-point format supported natively by the console's geometry coprocessor. The wobbly graphics are due to the GPU's lack of support for subpixel coordinates and antialiasing, but supporting them wouldn't have required floating-point hardware.

The ordering issues arise from how ordering is actually implemented. When applying perspective transformation to a polygon's vertices, the geometry coprocessor can optionally calculate and return the average value of the Z coordinates after projection [1]. This value is then scaled, rounded and used as an index into what's known as the "ordering table", which is basically a set of command buffers which are executed sequentially by the GPU (implemented in hardware as a linked list) [2]. The number of ordering table buckets is usually limited to 256 or 512 to save RAM, however that also limits the Z index resolution to 8-9 bits and thus results in polygons close to each other being drawn in more or less random order; hence the glitchy overlapping polygons that would occasionally appear in games with highly detailed models.

[1] https://psx-spx.consoledev.net/geometrytransformationengineg...

[2] http://lameguy64.net/tutorials/pstutorials/chapter1/2-graphi...
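To make the bucket math concrete, here's a toy version of that index calculation (constants are illustrative, not the exact hardware values):

    # Sketch: bucket a polygon into the ordering table by average Z.
    # With 256 buckets over a 16-bit depth range, each bucket spans
    # 256 depth units, so nearby polygons collapse into the same
    # bucket and draw in effectively arbitrary order.
    OT_BUCKETS = 256
    Z_RANGE = 65536

    def ot_index(z_coords):
        avg_z = sum(z_coords) // len(z_coords)
        return min(OT_BUCKETS - 1, avg_z * OT_BUCKETS // Z_RANGE)

    # Two polygons ~80 depth units apart land in the same bucket (117):
    print(ot_index([30000, 30040, 30080]), ot_index([30080, 30120, 30160]))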


> None of these techniques is relevant anymore given that all the hardware has Z buffers, obviating the need to explicitly order the polygons during the rendering process. But at the time (mid 90s) it was arguably the key problem 3D game developers needed to solve. (The other was camera control; for Crash Andy Gavin did that.)

In your opinion, what is the key problem 3D game developers need to solve in 2022?


I honestly don’t know, but I’m guessing an expert 2022 3D game developer is around here somewhere. :)


Great comment! That poor poor bsptree.dat will forever live on. I wonder if anyone really sat down and tried to bash their head against bsptree.dat


> None of these techniques is relevant anymore given that all the hardware has Z buffers

I remember the js13k game jam winner this year used BSPs. Linked below.

https://github.com/SalvatorePreviti/js13k-2022


> sorting polygons correctly is inherently O(N^2), [...] because polygon overlap is not a transitive property (A in front of B and B in front of C does NOT imply A in front of C, due to cyclic overlap.)

Well ok but I don't get this:

> This means you can't use O(N lg N) sorting, which in turn means sorting 1000 polygons requires a million comparisons -- infeasible for hardware at the time

ISTM you CAN'T sort it, full stop, because of cycles - what kind of sort (never mind O(N^2), any sort) can order a cycle? None can.


Your intuition is correct. However, in the context of an O(N^2) comparison sort you can implement a tie-breaker check that at least ensures order stability between frames.
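For example (with a purely hypothetical representation of polygons as (z_min, z_max, id) tuples):

    # Sketch: a pairwise "draw a before b?" test with a deterministic
    # tie-breaker, so ambiguous overlaps resolve the same way every
    # frame instead of flickering.
    def draw_first(a, b):
        if a[0] >= b[1]:       # a lies entirely farther than b
            return True
        if b[0] >= a[1]:       # b lies entirely farther than a
            return False
        return a[2] < b[2]     # depth ranges overlap: stable ID tie-break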


As ray tracing becomes more common, I think we’ll see a big return of various BSP trees in games. BSP is still the crucial element in offline rendering.


Do you know how much changed for the sequels? From some reverse engineering, Crash Warped relies on depth bucketing for dynamic triangles, while level geometry is streamed from the disk based on the camera position, already at the appropriate LOD and sorted and bucketed. Is the BSP logic you're talking about real-time, or part of a similar pre-processing step?


We didn’t use BSP trees for Crash 1-3; instead we pre-rendered frames of the game on our SGI workstations and recovered the sort order from the (software) Z buffer. Then we stored the sorted list of polygons, using differential compression between key frames to reduce the space required. It was a total brute force hack.

What I think you’re seeing in Warped is what we did with Crash 1 and 2 as well: I had to sort the foreground object polygons into the pre-sorted background polygons. That was modified bucket sorting, but there were manually tuned adjustments to make it work and keep it stable between frames.
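Purely to illustrate the delta idea (this is nothing like the actual on-disc format): store the full sort order at key frames, and for the frames in between store only the positions that changed. Frame-to-frame coherence keeps those deltas small.

    # Sketch: differential encoding of per-frame polygon sort orders.
    def delta(prev_order, cur_order):
        return [(i, p) for i, (old, p) in enumerate(zip(prev_order, cur_order))
                if old != p]

    def apply_delta(prev_order, changes):
        out = list(prev_order)
        for i, p in changes:
            out[i] = p
        return out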


I’ve been playing roguelikes since the mid 1980s, and from that perspective it’s hard to overstate Josh Ge’s contribution to the genre. Cogmind is just incredible — an absolute masterpiece that he is continually evolving. And Josh has likely written more for and about the genre than anyone else in history.

This is a guy who’s put over ten thousand hours into his seminal game, and into the genre overall. I feel like Josh — along with Tarn and Zach Adams (Dwarf Fortress) — has by sheer force of will translated roguelikes into the current era. (And yes, FTL and Spelunky are awesome as well — just farther afield.)

Respect.


Neat! Way back in the day (circa 1998!) we (ITA) released a Java applet that offered Gantt-chart airfare search -- we called it "colored bars". This applet downloaded a compact representation (a graph) of all the possible solutions into its working memory and applied all the sorting and filtering client-side in real time. So it was really fast and powerful.

Of course, Java applets as an ecosystem died, so we ended up with the server-side matrix design. It's nice to see the same (client-side) concept resurrected 20+ years later.


The predecessor to GOAL was GOOL, which was 100% Andy's work. We used it to code the logic for the objects in the Crash games (critters, platforms, etc.), and I actually wrote the entire load/save UI in it as well! I assume GOAL was also all Andy, but I don't know that for sure because I only worked on the Crash games.

ITA's use of Lisp was driven by our co-founder and Chief Scientist, Carl de Marcken. He wrote the prototype in LispWorks, then the production code in Allegro, and then we finally ported everything to CMUCL when Franz essentially tried to extort us for a tax on our entire business. (We couldn't imagine this would become the norm for consumer "apps" five years later, but that's another topic entirely.)

Ultimately the MIT AI Lab circa 1993 was the source of the decisions to use lisp for both these commercial products. At the time, the lab's culture was still very much influenced by all the work done there and in spin-off companies in the late 80s commercializing lisp, e.g., at Symbolics (first registrant of a .com domain name!)

At INKY (my current startup) we use Python. By 2010, when I started the incubator that evolved into INKY, it was pretty clear that scaling a development team on a lisp stack was going to be really difficult.


Thank you for the history and background! It's great to get it from a first-party source.

> Ultimately the MIT AI Lab circa 1993 was the source of the decisions to use lisp for both these commercial products.

I suppose another example is iRobot, where there's a custom Lisp-like language used, at least in the earlier robots. My understanding is that it's not used as much anymore if at all.

> ITA's use of Lisp was driven by our co-founder and Chief Scientist, Carl de Marcken.

I chuckled at his website that states "I retired long ago and almost certainly do not find whatever commercial project you want help with interesting".

> At INKY (my current startup) we use Python. By 2010, when I started the incubator that evolved into INKY, it was pretty clear that scaling a development team on a lisp stack was going to be really difficult.

If you were to start it today, with the developments of languages like F#, Elixir, and Clojure, do you think you'd have a different decision or outcome? I'm just curious, as a hypothetical.

