> The device you're on doesn't generate text. It looks like text, but it's really just red, green and blue dots.
> Printers don't generate text. It looks like text, but it's really just ink on paper.
The point is that real labyrinths have properties these pseudo-labyrinths don't. To be specific:
> In colloquial English, labyrinth is generally synonymous with maze, but many contemporary scholars observe a distinction between the two: maze refers to a complex branching (multicursal) puzzle with choices of path and direction; while a single-path (unicursal) labyrinth has only a single, non-branching path, which leads to the center. A labyrinth in this sense has an unambiguous route to the center and back and is not designed to be difficult to navigate.
"In colloquial English, labyrinth is generally synonymous with maze" means that yes, this is a labyrinth because "labyrinth" is synonymous with "maze". "Many contemporary scholars observe a distinction between the two" means "most people don't observe any distinction between the two."
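That said, the scholars' distinction is at least mechanically checkable: in graph terms, a unicursal labyrinth's passage network is a single non-branching path. A quick sketch of the check (my own, assuming passages are given as an adjacency map; names are illustrative):

```python
def is_unicursal(adj):
    """True iff the passage graph is a single non-branching path:
    no junctions, exactly two dead ends, and everything connected.
    `adj` maps each cell to the set of cells its passages reach."""
    if any(len(n) > 2 for n in adj.values()):
        return False                      # a junction: it's a maze
    if len(adj) == 1:
        return True                       # a single cell is trivially a path
    ends = [c for c, n in adj.items() if len(n) <= 1]
    if len(ends) != 2:
        return False                      # a closed loop, or disconnected bits
    # Walk from one end; a true path visits every cell exactly once.
    seen, cur, prev = {ends[0]}, ends[0], None
    while True:
        nxt = [c for c in adj[cur] if c != prev]
        if not nxt:
            break
        prev, cur = cur, nxt[0]
        seen.add(cur)
    return len(seen) == len(adj)
```

A branching maze fails on the junction test; a ring corridor fails on the dead-end count.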
This is an interesting long-running debate in procedural generation more generally. There are a lot of approaches, hybrids between them, and orthogonal decisions, but two ends of one axis of what I'd call "procedural faithfulness" are:
1. Attempt to faithfully simulate an underlying process. An example: procedurally generate canyons by simulating water flow and erosion. Dwarf Fortress tries to take this one to its logical conclusion, doing things like simulating thousands of years of history in order to decide where to place things.
2. Produce an algorithm that mimics a desired result without necessarily using a process even close to what produced the original. Often this involves attempting to capture patterns in the original without attempting to capture why those particular patterns arose. Many grammar-based methods take this approach, such as the classic "shape grammars" used in architecture, as do data-mining approaches.
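To make #2 concrete, here's a minimal grammar-based sketch (my own illustration, not any particular published system): an L-system rewrite rule captures a plant-like branching *pattern* without simulating anything about how plants actually grow.

```python
# Approach #2 in miniature: a rewrite rule that mimics branching
# structure without modeling the process that produces it.
# The rule below is the classic plant-like example; purely illustrative.

RULES = {"F": "F[+F]F[-F]F"}

def expand(axiom: str, generations: int) -> str:
    """Apply the rewrite rules to every symbol, `generations` times."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

print(expand("F", 2))
```

Feed the resulting string to a turtle-graphics interpreter (`[`/`]` push and pop state, `+`/`-` turn) and you get something plausibly plant-shaped, even though nothing here knows what a plant is.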
This book looks at something closer to #2 from the perspective of procedural faithfulness. But it's a variant that was particularly popular in early algorithmic art: pick an algorithm that has interesting outputs, tweak it, and see what you can do with the results. So in a sense it's closer to #1 in that its macro-properties are emergent, rather than being specifically optimized for (it's not doing things like solving for path reachability).
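A classic example of that "pick an algorithm and tweak it" style (my own illustration, not the book's actual technique) is an elementary cellular automaton: the rule fits in one line, and the macro-scale texture of the output is entirely emergent. Changing the rule number changes the result drastically.

```python
def step(row, rule):
    """One generation of an elementary cellular automaton (wrapping
    edges). `rule` is the Wolfram rule number, 0-255."""
    out = []
    for i in range(len(row)):
        left, center, right = row[i - 1], row[i], row[(i + 1) % len(row)]
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

# Tweak `rule` and watch the emergent pattern change completely:
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row, 30)  # try 90, 110, 184...
```

Nothing in the rule table mentions triangles or chaos; those properties emerge, which is exactly the sense in which this style is closer to #1.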
If generating a city, for example, you could attempt any of these approaches. You could build a little artificial agent society where you simulate agents going to work, having families, buying houses, etc., and their actions produce a city. Or, you could play with hand-coded city-generation algorithms that produce interesting patterns, and go with one. Or, you could choose specific targets or constraints (maybe culled from databases of real city geometry) and use constraint-solving or genetic algorithms or generative machine-learning algorithms to produce the desired results. Each approach has different pros and cons.
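A toy version of the agent-style option (entirely my own sketch, with made-up rules) might grow a city by having each settler claim a lot next to an existing building, an Eden-growth-like process whose overall city shape is emergent rather than specified anywhere:

```python
import random

def grow_city(n_buildings, seed=0):
    """Toy agent-based city growth: each new settler picks a random
    existing building and claims an adjacent lot. No one decides the
    city's shape; it emerges from the local rule."""
    rng = random.Random(seed)
    city = {(0, 0)}  # the founding building
    while len(city) < n_buildings:
        x, y = rng.choice(sorted(city))
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        city.add((x + dx, y + dy))  # no-op if the lot is taken
    return city
```

Swap the "pick any building uniformly" rule for "prefer buildings near the center" or "prefer riverbanks" and the macro-shape changes, which is the whole game in this style.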
It's something I've been thinking a bit about lately, because I've been trying to understand the landscape of systems that claim to do "automated game design". Much of the work takes a simulation-based approach: it tries to come up with a theory of what makes a game "balanced" or "fun", and optionally also simulates a broader theory of game design, playtesting, and revision. But another approach is to view it as crafting generative spaces of game variants, which is more a theory of how game structure can vary than of why it varies, and gives a different (not clearly better or worse) angle on the problem.
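The "generative space" view can be sketched very simply (my own hypothetical example, not any real system): describe a family of games by its structural parameters and enumerate the variants, with no theory of fun anywhere in sight.

```python
from itertools import product

# A hypothetical generative space of Nim-like game variants, described
# purely structurally: how big the heap is, how much you may take,
# and whether taking the last object wins (normal) or loses (misere).
variant_space = {
    "heap_size": [10, 15, 21],
    "max_take": [2, 3, 4],
    "last_move": ["wins", "loses"],
}

def variants(space):
    """Enumerate every game in the space as a dict of rule choices."""
    keys = sorted(space)
    for combo in product(*(space[k] for k in keys)):
        yield dict(zip(keys, combo))

games = list(variants(variant_space))
print(len(games))  # 3 heap sizes * 3 take limits * 2 win conditions
```

Deciding which of those 18 games is any good is then a separate problem; the point is that the space itself is a theory of how the structure varies.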