Black Triangles (2014) (rampantgames.com)
267 points by andric on Dec 26, 2023 | 67 comments



> It wasn’t just that we’d managed to get a triangle onto the screen. That could be done in about a day. It was the journey the triangle had taken to get up on the screen.

If anyone wants to experience something like that today, just follow an introductory tutorial on Vulkan programming. Vulkan is an API that hands so much low-level control directly to the programmer that its "Hello Triangle" example became (in)famous: you have to implement the skeleton of a full 3D engine, thousands of lines of code, before you can render a single triangle on the screen.

https://vulkan-tutorial.com/Drawing_a_triangle/Setup/Base_co...
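
For a sense of the scale involved, here is a sketch of just the first step of that tutorial, creating a VkInstance; physical device selection, logical device, swapchain, render pass, pipeline, command buffers, and synchronization all still lie ahead. (A minimal sketch assuming the Vulkan SDK headers and loader are installed; error handling mostly omitted.)

    // A rough sketch of step one of the tutorial: creating a VkInstance.
    // Everything after this (device, swapchain, pipeline, ...) is still
    // ahead of you before a triangle appears.
    #include <vulkan/vulkan.h>
    #include <cstdio>

    int main() {
        VkApplicationInfo appInfo{};
        appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        appInfo.pApplicationName = "Hello Triangle";
        appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
        appInfo.pEngineName = "No Engine";
        appInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
        appInfo.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo createInfo{};
        createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        createInfo.pApplicationInfo = &appInfo;
        // A real program would also enable window-system extensions
        // (e.g. from GLFW) and, in debug builds, validation layers here.

        VkInstance instance;
        if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
            std::fprintf(stderr, "failed to create instance\n");
            return 1;
        }

        // ...physical device, logical device, swapchain, render pass,
        // graphics pipeline, framebuffers, command buffers, sync objects...

        vkDestroyInstance(instance, nullptr);
        return 0;
    }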


If you read the article, that’s a perfect example of drawing a black triangle versus “A Black Triangle Moment”.

That tutorial is a general overview of all the possible boilerplate you would expect to see in a template project, and just happens to end in a triangle being drawn on the screen.

You could skip a lot of those steps. Is that 5-page section on constructing validation layers and fine-tuning the debugger really necessary if your end goal is a triangle?

You can draw a triangle in Metal in thousands of lines of code too. And it’s common to have several hundred lines of pipeline descriptors, buffer management, and debug tooling before you even start encoding into your shaders. But it’s not necessary for _just a triangle_.


That tutorial sets up a complete rendering system, and demonstrates that it works by drawing a triangle... the exact thing described in the black triangle story.

Drawing a triangle is not the end goal. That's nonsense.


Yeah we’re saying the same thing. It’s not a “Hello Triangle” tutorial.


It kind of sounds like we're saying opposite things.

You're talking about the minimum necessary for just drawing a triangle, as if that's what "hello triangle" means. That's nothing. That's pointless.

The construction of the hello triangle program marks the end of setup and the beginning of the actual work.

Skipping parts because they're boilerplate, or don't directly serve to draw the triangle, defeats the purpose. That's like typing the words "hello world" into notepad because it's faster than setting up an IDE.


The first OpenGL tutorial from 30 years ago was called “Hello Triangle”. It was uselessly simple, just like the iconic “Hello World”.

The point is that the tutorial is not in the same spirit as the classic Hellos. You can draw a triangle without all that scaffolding.


(New to comment thread) The article is about setting up all the core infrastructure to start bringing content into the engine, validated by the triangle.

Even though the Vulkan tutorial might seem like work, it’s not actually preparing you to build anything significant on top of it. It’s just an overview of the API, not much of a building block.


I think both sides have a point, it's just a matter of perspective. One person's "drawing a black triangle" can be another's "black triangle moment". It's turtles all the way down.

From the perspective of an experienced Vulkan programmer, it's only a basic tutorial on creating an empty stub project, which is useless in itself as the real work of creating a powerful 3D engine has yet to start. But from the perspective of a new graphics programmer without any prior Vulkan knowledge, creating a stub project capable of drawing a triangle is already an accomplishment in itself. In OpenGL, the same thing is possible in 50 lines of code, compared to a stub Vulkan project that explicitly defines every component using thousands of lines of code.

Upon finishing it, one would finally have an understanding of how every part comes together, which enables further development. It would then be possible to draw many different kinds of objects on top of this stub framework. In this sense, it fits the definition of:

> accomplishments that take a lot of effort to achieve, but upon completion you don’t have much to show for it, only that more work can now proceed.
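
For contrast, here's roughly what that ~50-line OpenGL version can look like. (A minimal sketch using the legacy fixed-function API and freeglut; the modern shader-based path is longer, but still nothing like Vulkan's explicit object graph.)

    // A minimal "hello triangle" in legacy fixed-function OpenGL with
    // freeglut. Build with something like: g++ tri.cpp -lGL -lglut
    #include <GL/glut.h>

    static void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(0.0f, 0.0f, 0.0f);   // a black triangle, naturally
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();
        glFlush();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutCreateWindow("black triangle");
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);  // white background
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }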


I appreciate this perspective, good points, thanks


It's fun to see the evolution in e.g. these examples of image loading for Dear Imgui:

https://github.com/ocornut/imgui/wiki/Image-Loading-and-Disp...

DirectX9 will even load the image for you. DirectX11: okay, we get a few more structures to fill out. DirectX12 is where it goes off the rails and we are filling out a bunch of UNKNOWN DONT_CARE JUST_DO_IT. Then of course Vulkan is the one that gets the big fat "this probably won't actually work for you" warning.

I understand what's happening, but you know, sometimes I just want to display a fucking image.


Thanks for sharing this, I was thoroughly tickled. The optimistic tone of the tutorial stands in stark contrast to the convoluted path to drawing an object.


Spent the last 4 days trying to get the raylib example game to run on my Android. Yesterday I changed course and got rawdrawandroid running instead. That was my black triangle (actually it was green heh), proof that my goal was actually possible. If rawdraw can build and run, so can raylib! I got raylib working shortly afterwards.

It's funny I always thought C gamedev would be hard, I just didn't think the hard part would be the "hello world"... I'm looking forward to actually building the game now!

(My experience is more like "the triangle that should take a day", but I still thought it was relevant as a kind of major milestone that puts you in a qualitatively new reality where now you have an actual working thing you can fiddle with.)


I appreciate the concept. I had my own black triangle recently when I finished writing my own implementation of an adaptive radix tree that is highly concurrent, keeps hot paths in an LRU, and loads everything else from disk.

Now when I circle back to my full text search extension for Postgres I can do so knowing I can replace my naive inverted index with something that gives like a 10% performance gain but, more importantly, a significantly reduced and CONSTANT memory profile.

On its own, however, it doesn't seem to do much, and unless you're in the space you won't appreciate why this is an interesting implementation.


I like such projects. They're the programming equivalent of making jewelry. Is your code public?


Thanks for your interest! It was definitely rewarding though I’m sure I can optimize a few things still.

The code is public: https://github.com/iantbutler01/dart


So, I had a look at it: very nice and tight code, relatively easy to follow even for someone who doesn't really know Rust (I think I picked up more Rust from your code than I did from any other source).

I especially like the very clean and simple interface that abstracts away all of the complexity underneath.

Thank you for putting this out there; Rust needs more projects like this, quiet and effective, showing rather than telling.


Thank you so much! :)


Neat, thank you. I will give it a read later tonight.


It's good to have a name for this. I feel like my entire career has been "black triangle" problems, and it's been a struggle to get recognized for that effort at times.


I've built a reputation for leading skunkworks projects in my org.

The trick I've discovered is that maximizing self-interest eg: maximizing your recognization of efforts (and for everyone else working on the project) is the best way to deliver long-term actual customer value.

So you put out some hype in the org first, build your black triangles internally, and then, once you deliver customer value, connect the hype with the results.


One thing that’s interesting today is how in the corporate world, even in big tech companies where one might naively think black triangles with explanations might be appreciated by more technically literate managers, working on these sorts of building blocks for big wins will often actually result in projects and teams getting axed. I’ve seen serious attempts at innovation get thrown out the window a number of times in favor of shiny demo projects that demo well but can never go much further. Black triangle projects require time investments that at least big companies only rarely bet on despite being in possibly the best position to do so.

I think a part of it is that humans, or at least those who make their way into management, are in large part very visual creatures. If you can’t show them progress in a traditionally visual way, you sort of have to contort things into some visual proxy for progress to keep the faith. Sometimes that can be achieved by selling abstract progress metrics about work remaining (most often entirely detached from reality), but that only works for so long.

More often you end up knowingly spending a large percentage of resources on vacuous side projects solely because they demo well to management in the short term — it’s a distraction but a way to keep the faith and the resources. I find it funny how much time and energy we’re willing to waste to play the game — paying the souped up visual taxes for decision makers, while doing the real work independently, often in the dark because we know management will react short sightedly to the black triangles.


     working on these sorts of building blocks for big 
     wins will often actually result in projects and teams 
     getting axed.
Had this experience at a previous position. My predecessor had implemented a solution that was extremely hairy, and took many hours to run in production.

My manager (a software engineer himself, still involved in day-to-day engineering on this product) said that it couldn't be meaningfully bettered. But in the meantime, we were taking a beating from our primary customer upon whom we depended for ~75% of our revenue. I viewed this existing solution as a potential company-killer.

So I spent some nights and weekends hacking together an alternative. Got the run time down from several hours to thirty seconds using some basic caching and a basic tree structure... not exactly advanced black magic. It also required about 50% less code.

I excitedly showed it to my manager. I walked him through the code. He became angry for the following "reasons."

1. Instead of making the code smaller, he felt it made the code larger. This is because he failed to comprehend that my MVP "hey this is possible" prototype didn't actually remove the old solution. It was just an MVP! I explained this to him but apparently it didn't take.

2. He couldn't understand the underlying concept. Again, it was... a tree. Something you would encounter in a 200-level computer science course, at the very latest.

3. My code lacked tests. Again... this was a "nights and weekends" MVP.

Probably the single fucking stupidest moment in the history of my career. I am not a person who typically has communication issues with managers or coworkers. I was dumbfounded that he was dumbfounded and to this day I am absolutely baffled by this whole incident. Our relationship had been deteriorating somewhat but not to the extent that would explain his brain-dead and hostile response.

Unsurprisingly, this led to my departure from the company.


> said that it couldn't be meaningfully bettered

There’s the problem!

I’ve had this experience many times — people will internalise their limitations and assume that their best is the best possible. Etc..

When a junior employee just casually proves them wrong, calling the emperor naked, that makes them feel inadequate and even ashamed.

I’ve been involved in many similar scenarios. Often there is a history of meetings, consultants, vendor techs, etc… trying to fix the problem and then a grudging acceptance and business workarounds. To suddenly reveal all of that as a lie is to undo established history. It’s like trying to close down the Vatican and tell all the priests to go home because you found a “neat proof” that there is no God. To say that you’ll experience some disbelief and resistance to your ideas is an understatement!

I had one of these moments where I got a nightly report batch job down from 3 hours to about 5 seconds. The customer turned red in the face, screamed at me, accused me of lying and stormed out of the meeting room.

Turns out that guy had to stay back each night to make sure the job ran successfully and so that he could sign the print out officially. He’d accepted the impact on his personal life, after many arguments about his work hours with his wife, etc…

To have all that sacrifice and suffering instantly made superfluous!? Ouch.


    I’ve had this experience many times — people will
    internalise their limitations and assume that their 
    best is the best possible. Etc..
Man, yeah. I've never found a way to really do this well.

The closest I've come to a winning formula here is to

1. Build a relationship where you have given praise and positive feedback to the individual's other efforts, privately and publicly. To be 100% clear, I'm talking legitimate praise here, not butt-kissing.

2. Find a way for the "victim" to share in your achievement somehow. Help him save face. "Bob and I were looking at ways to optimize the batch job etc etc, and we found a way... etc etc." People who know you and "Bob" will understand who really did the work.

That said, I would say my success at this sort of thing has been pretty low (although I am on a winning streak of one positive outcome in a row...)

    He’d accepted the impact on his personal life, after 
    many arguments about his work hours with his wife, etc…
Oh jeez. The... the pathos here is overwhelming.


Generally I've found that there is no easy way to call someone's baby ugly. Sometimes it is best to just say it outright, other times you have to skirt around the issue, or just let them come to the conclusion themselves when you demonstrate a fix to an "unrelated" issue that just so happens to be relevant to their problem.

> Oh jeez. The... the pathos here is overwhelming.

A landmine I've stepped on multiple times is that this type of scenario is sometimes not tragic, but actually a form of soft corruption: after-hours work gets overtime pay! "Fixing" these issues can cut into people's salaries very significantly, perhaps reducing their pay to less than half. They're obviously going to fight you every step of the way, without ever saying the real reason they're opposed to your helpful suggestions.


Oh geez, that unhappy guy was getting OT pay? Yeah, that explains a lot!!

Great points all around


I've worked in places where management understood the idea of laying a foundation, and building up the abstractions and information architecture, and understood that the result of the first 50% of the project could be a single text log showing all the data paths working, but those companies are rare.

Most places are like you said: they won't believe any work is happening until they are bamboozled by a visual demo of the software--and they think it's "close to done" when the visual demo matches what they think the software should eventually look like. That's why so many places ship unfinished demo-ware: Engineering shows off the proof of concept that only scales to 10 customers and doesn't actually write to the database, and then management demands they ship it because it looks done and the deadline is coming up.

I wish I could figure out, as a candidate, a good question to ask in order to suss out which kind of company is interviewing me.


This is the most important bit, to me:

> By the end of the day, we had complete models on the screen, manipulating them with the controllers. Within a week, we had an environment to move the model through.

The Black Triangle was a leading indicator of a payoff that began happening by the end of the day. Too many similar projects result in a constant string of promises that we are just about to turn the corner on velocity/capability if only we would let the engineers focus on this refactor for a bit longer.

In practice, "Black Triangles" projects are bets like any other. If they work out, they unlock faster velocity and the ability to deliver better features. Often they deliver nothing but a refactor to someone's idea of a better architecture, and are lucky to ever reach breakeven on engineering time.


> One of the engine programmers tried to explain, but she shook her head and went back to her office.

...Maybe they shouldn't be making their way into managing engineering then?

To be fair, the lady in question, Jen, did come around in the next paragraph. However, it still highlights the extremely annoying problem of doing non-visual work under visual(-only) people -- one then has to actively create visuals.

This quote from Feynman expresses a similar idea: https://fs.blog/richard-feynman-on-beauty/


> ...Maybe they shouldn't be making their way into managing engineering then?

...What part of being a financial controller and acting HR is "managing engineering"?


I learned a lesson early on in my career: never show a functional but ugly demo.

I once worked for months on a modern front-end for an incredibly old legacy system. This was so technically challenging that we weren’t sure if it could be done at all, so a functioning demo was a huge achievement.

What the customer saw was that it was ugly: black and white, plain text, no CSS, no styling of any kind.

They completely ignored the technical milestone and spent an hour asking if we could rearrange the buttons and change the colours. Bike shedding, in other words.

Since then I would much rather show something that’s very pretty but completely broken.


In my experience, maybe half of "foundation" or "infrastructure" work is actually bullshit. That's probably why it's not a carte blanche to just do whatever with nothing to show for it until "later".


The black triangles are also a lot more fulfilling than other problems. When you get the foundation of your project finished, it feels so good. And you know that you can start doing the fun stuff with relative ease now that the hard part is over.


A long, long time ago:

Me: Look at this dot!

Mom: You put a dot on the screen?

Me: Yes! Now hit the keys! You can move it!


I used to write OpenGL screensavers for xscreensaver (glplanet and pulsar) and I have to admit, sometimes it would take me more than a day just to get a triangle (or sphere, my preferred primitive) to show up on the screen. Anything from an errant transform to a mistaken feature bit (lighting, whatever) turned on/off could make a huge difference. Once I could draw a quad I had everything required for pulsar, and once I could draw a sphere I had everything I needed for glplanet (although, as I later learned, I got the projection math wrong the first time).


I had the pleasure of working with Jay doing the character illustrations on his indie space shooter, a couple of years before I started coding professionally. This was the article that always stayed with me, especially at times when progress was difficult (which became especially poignant 18 months ago with my ADHD diagnosis).

These days he works for a VR company, but his main creative activity has been writing sci-fi/fantasy/horror novels for some years now, and he has his own series.

PS. I was going to say this article was from way further back than 2014, but on reading the post it appears it was his own repost.


I remember that one—was always sad that Rampant didn't go on to make more games, but it seems like Jay went on to do some cool stuff in other areas.


I'm sure he did more than just one game, but the revenue wasn't enough and he had a family to feed. We connected on GarageGames back in the day, which saw some reasonable success. I think it's easier these days to make a living as an indie game developer, but it's still very much a hit-driven business.


My black triangle was making Suzanne (the Blender monkey) appear on the screen.


OpenGL? I did the monkey too! But I did a simple software rasterizer... (No OpenGL, draw bitmap directly.)

I only got as far as a point cloud (the wireframe without the wires, heh). I want to continue the project and add lighting and texturing eventually.

My "big insight" (shower thought) that gave me the confidence to do 3D is that you don't need linear algebra. You can do it all with trig! A 3D rotation is just three 2D rotations!

It won't be very performant, obviously, but I'll take "my toy renderer runs slowly" over "it doesn't exist because I am struggling with linear algebra for the 7th time".

It's a lot more fun doing things the easy way! Even perspective is one line of code.
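
A sketch of that approach (a hypothetical toy, not the commenter's code): each axis rotation is the same 2D rotation applied to a different pair of coordinates, and perspective really is one divide.

    // A 3D rotation as three 2D rotations (plain trig, no matrices),
    // plus a one-line perspective projection. Angles in radians.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // One 2D rotation: rotates the pair (a, b) by angle t.
    static void rot2d(float& a, float& b, float t) {
        float a2 = a * std::cos(t) - b * std::sin(t);
        float b2 = a * std::sin(t) + b * std::cos(t);
        a = a2;
        b = b2;
    }

    static Vec3 rotate(Vec3 p, float ax, float ay, float az) {
        rot2d(p.y, p.z, ax);  // around the x axis
        rot2d(p.x, p.z, ay);  // around the y axis
        rot2d(p.x, p.y, az);  // around the z axis
        return p;
    }

    int main() {
        Vec3 p = rotate({1.0f, 0.0f, 0.0f}, 0.0f, 0.5f, 0.0f);
        float zc = p.z + 3.0f;               // push in front of the camera
        float sx = p.x / zc, sy = p.y / zc;  // perspective: one divide each
        std::printf("screen: (%f, %f)\n", sx, sy);
        return 0;
    }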


>It won't be very performant, obviously, but I'll take "my toy renderer runs slowly" over "it doesn't exist because I am struggling with linear algebra for the 7th time".

Haha I could not agree more


It helps to know that under the hood almost every serious game is a more-or-less complete real-time operating system, with its own IO, scheduler, memory management, and various sub-processes relating to output generation and so on.

It was only after I wrote a couple of games that I realized that extracting this OS component would be worth the effort and after that making new games went substantially quicker.


I don't consider it an OS if it needs an OS to run. Something like a "Ship can carry a boat, boat can't carry a ship" rule.


Have you written a serious game engine? Parent is right, a bunch of them basically are a real-time OS sitting on a small amount of the underlying OS (if one exists). When you have to write your own malloc() and fopen(), and your own thread scheduling, that counts as Operating System. This is becoming less true today, as the hardware gets better and bigger and faster and resources are less constrained, but it wasn’t that long ago that many console games literally were the OS.

Your analogy and definition reflect one reasonable viewpoint: that the OS is the thing that bootstraps software on top of hardware and provides access to resources like memory, filesystem, network, and display. But it's also an incomplete definition. An OS provides a standardized environment for whatever processes run inside it. In that sense it doesn't matter whether or not it's contained in a different OS. Ubuntu doesn't suddenly become not-an-OS when you run it in a VM; it still provides the Linux OS to the processes that run inside the VM. Windows doesn't become not-an-OS if it boots on a hypervisor. Similarly, even if an OS requires another OS to run, when it provides a different environment or container, it's still serving the purpose of providing a specific operating system to the software it runs.

Some consoles used to not provide an OS at all, the game provided all of it. The developer was responsible for handling things that would (and should) otherwise be the purview of the OS or drivers, power loss, out of memory, cables yanked, etc., otherwise they would not be allowed to publish. Nintendo did this longer than the others, in my recollection of events.


> Parent is right, a bunch of them basically are a real-time OS sitting on a small amount of the underlying OS (if one exists). When you have to write your own malloc() and fopen(), and your own thread scheduling,

I don't doubt that game developers do this, but I've never really heard a satisfying reason why, besides vague hand-waving about "needing more performance". I'm not a game developer though, so maybe it's truly needed. Or maybe it was needed in the 80s and 90s but not anymore, and maybe the mentality is simply stuck in the minds of game developers.

I remember interviewing a former game developer at a [not game] company, and we started talking about the C++ Standard Library. The candidate insisted that "the STL is slow," that you shouldn't use it in any production software, and that if he worked on a project that used it, the first thing he'd want to do was rewrite it using his own containers and allocators and so on. I would ask him questions like "what is slow about it" and "how do you know where it's slow, as used in our company's application" and "have you profiled any code that used it to find out where it's slow" and so on, and it became clear (to me) that he was probably just operating out of some ancient tribal belief that "STL = slow" that was organically passed on to him from other game developers throughout the decades.


I haven't worked on games in about 10 years, but I wrote core tech for AAA games during the 2000's. Stuff like async I/O, reflection, memory allocation, low level networking, and especially containers were all written in-house. Quite a bit of it was utilitarian, but I agree, a lot was a case of NIH syndrome. Some studios had these insane container libraries that just archived whatever weird data structure someone had as a toy project in university. Math libraries were the same. Scripting languages were all half-baked toy projects by the lead engineer etc.

For consoles, it wasn't that the STL was slow, it was that not all of it was fast, and some parts really were just trash written by whatever the compiler vendor had put together. RTTI was also generally not allowed as it was "slow", and virtual functions weren't allowed at one place because the vtable took up "a ton of extra space". So some of it was cultural, some of it was advice that was ok-ish maybe 15 years before the project began, and a lot of it was just not understanding what was happening under the hood.


Custom allocators were needed, and often still are, in order to have enough memory in the first place (for example memory pools for specific sized small allocations are common), to avoid fragmentation that could lead to out-of-memory conditions or the need to defrag over time, to have control over how the free list regains memory and how long it takes. It’s important in a real-time or embedded application to prioritize predictability over some of the host OS’s goals, and to have stronger guarantees.

IIRC, one of the primary reasons EASTL was developed is that the STL at the time did not allow overriding the default built-in memory allocator, and the STL did a bunch of memory allocation.

This is changing and games use lots more STL and dynamic memory allocation than they used to, but a decade ago and earlier, STL use was generally discouraged for the same reason that any and all heap allocations were discouraged for gameplay and engine code: because doing heap allocations during real-time and in performance critical sections can impact performance significantly, and can be dangerous. This is still very true today: the first rule of high performance is to avoid dynamic memory allocations in the inner loop -- true in JavaScript, CUDA, C++, even Python. Dynamic memory allocations can sneak up on you too. It might seem small or innocuous until it piles up or pushes you over a threshold or hits some kind of lock condition.

One mentality I learned game programming that can be counter-intuitive at first is how and when to reserve the maximum amount of memory you need for a given feature and never change it. It can feel at first like you’re hogging way too much, or that something’s wrong with coding that way, but it’s much more predictable, can be much higher performance, and it can be safer for the overall system to ensure that you will never have a weird condition where several things all need more memory than expected at the same time and it suddenly crashes.
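
A toy sketch of that "reserve the maximum up front" pattern (all names hypothetical): one allocation at startup, then O(1) alloc/free from a free list, with a hard capacity instead of surprise growth. Real engine pools add alignment guarantees, debug tracking, and thread safety on top.

    // Fixed-size block pool: all memory is reserved once, up front.
    #include <cstddef>
    #include <cstdlib>

    class Pool {
        struct Node { Node* next; };
        char* mem_;
        Node* free_;   // intrusive free list threaded through the blocks
    public:
        Pool(size_t blockSize, size_t blockCount) {
            if (blockSize < sizeof(Node)) blockSize = sizeof(Node);
            mem_ = static_cast<char*>(std::malloc(blockSize * blockCount));
            free_ = nullptr;
            for (size_t i = 0; i < blockCount; ++i) {
                Node* n = reinterpret_cast<Node*>(mem_ + i * blockSize);
                n->next = free_;
                free_ = n;
            }
        }
        ~Pool() { std::free(mem_); }

        void* alloc() {                  // O(1), no heap traffic
            if (!free_) return nullptr;  // hard ceiling, never grows
            Node* n = free_;
            free_ = n->next;
            return n;
        }
        void release(void* p) {          // O(1), returns block to the list
            Node* n = static_cast<Node*>(p);
            n->next = free_;
            free_ = n;
        }
    };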


I'm not a game developer, but I was similarly led to believe that STL containers were especially slow compared to other implementations. This is supposed to be because the STL has more restrictions on the implementation owing to the interface it has to export. The STL hash table, for example, has to provide low-level access to the buckets, even though modern hash tables usually avoid having buckets in the first place. By lifting these restrictions the performance can be improved tremendously. Facebook, for example, is supposed to have its own in-house hash table implementation to achieve this.
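
That constraint is visible right in the standard interface; a small sketch of the bucket API that separate chaining has to support (an interface that open-addressing designs, like the hash tables Facebook is said to use, don't naturally fit):

    // std::unordered_map must expose its buckets, which effectively
    // locks standard implementations into separate chaining.
    #include <cstdio>
    #include <unordered_map>

    int main() {
        std::unordered_map<int, int> m = {{1, 10}, {2, 20}, {3, 30}};

        std::printf("bucket count: %zu\n", m.bucket_count());

        size_t b = m.bucket(2);  // which bucket holds key 2?
        std::printf("key 2 is in bucket %zu (%zu entries)\n",
                    b, m.bucket_size(b));

        // The standard even lets you iterate a single bucket:
        for (auto it = m.begin(b); it != m.end(b); ++it)
            std::printf("  %d -> %d\n", it->first, it->second);
        return 0;
    }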


That’s a great example to raise, and I think you’re right to point it out.

Any field is susceptible to cargo-culting, especially among people who are just trying to get their job done and don't have time to rediscover and verify everything their leaders claim to have verified.

But for the specific claim you’re challenging, it’s hard to really convey the realities without having some experience in those problems.

High level it really comes down to:

1. Relaxing some generic constraints to apply game specific ones

2. Fine grained control over execution, again by relaxing some constraints

3. Game programming can play fast and loose with a lot of things for the sake of speed; this is not generalizable


The problem with the STL is that there isn't just one of it: every compiler/C++ library/platform has its own, and some are better than others. Writing your own containers at least means you take one uncertain dependency out of the scene for something that would be all over the engine everywhere (so it isn't easy to replace later).

Though personally I'd make my own containers because I heavily dislike the API design the C++ library has, and I'm glad every single engine I've worked on had its own (well, except in a single case where, while the engine did use its own containers, it was basically 99.9% the same API design as the C++ library :-P).


I'm fine with that. I did eventually 'unplug' that game OS to the point that it would run standalone on bare metal by adding a couple of hardware drivers.

The point was: to make a game function you have to perform the bulk of the functions that an operating system performs and the similarities are greater than the differences.


Well, Windows / Mac / Linux need the BIOS (or whatever we call it today) to run.

If you can't call those "OS"s, then what really is an OS?

I guess it's turtles all the way down.

> almost every serious game is a complete more-or-less real time operating system

Games tend to have very different requirements and different demands from the OS; they tend to relegate the OS to being (mostly) a hardware abstraction layer. In the late 1990s and early 2000s, it was a lot harder to run games on Windows NT versus Windows 9x because DOS was a lot better at getting out of the way of games.


And I wrote that stuff in the 80's, when your typical 'OS' on a computer was much closer to what a BIOS is than an actual operating system. Usually the games were far more sophisticated in terms of what their internal services provided than the equivalent OS services. The most the BIOS usually did for you was to boot your game code, change graphics modes, handle the keyboard and read and write raw media. Everything else you had to do for yourself.

Today there are far more levels of abstraction, especially when you use the various 3D engines or other large frameworks. But essentially those have taken over the role of a lot of that OS-like code.



I refer to it as getting an ant tunnel through the mountain


>"It wasn’t just that we’d managed to get a [black] triangle onto the screen. That could be done in about a day. It was the journey the triangle had taken to get up on the screen."

Related concepts:

Egg of Columbus: https://en.wikipedia.org/wiki/Egg_of_Columbus

Hindsight Bias: https://en.wikipedia.org/wiki/Hindsight_bias


This reminds me of yak shaving

https://news.ycombinator.com/item?id=29879009


I get a white or black or any monochrome triangle whenever I try a new graphics stack. Every. Single. Time.


So this is a 10-year-old repost (2014) of an original article 10 years older than that (2004), describing an event that happened ten years before that (1994)?


I am looking forward to seeing this post in 2034!


Tbh, I personally think the article is timeless and beautiful.


Oh it wasn't a criticism, it's a lovely article.


[flagged]


[flagged]


Don’t feed the trolls.


Nobody owns it, obviously. It can be helpful however to be aware of the associations your terminology already has.

From a practical perspective, it interferes with getting your point across to overload a symbol that already has strong associations. From a social perspective, it's considerate not to use symbols that have strong negative associations for others (which is not a view shared by the "fuck your feelings" crowd, although I've noticed that they're not all that consistent in that view, and it tends to depend on whose feelings are getting fucked.)


On the other hand, if you go deep enough into history you'll find symbolism for any and everything being a hate symbol. The okay hand emoji, of all things, had a controversy, so nothing is safe.

Triangles are the most basic shape available so you'll find any and all kinds of symbols to represent good and bad things. I don't think it's very productive to worry about how it was used 70 years ago unless you're talking to a very old person.


It was used during the 80s.

Like I said, I'm not telling anyone what to do. Everyone can feel free to call anything whatever they want, to use or reuse symbols as they see fit. The signifier is not the signified.

Just wanted people to know. Because like I said, if your goal is to communicate, you might want to know if you're using something that has another, emotionally-loaded interpretation.


I'll keep it in mind, but this feels more like a case-by-case thing than something to be avoided at all costs (which was the tone of your original comment). We are 80 years out from WW2, so the only people who would really understand such a reference are the dying Silent Generation and neo-Nazi/anti-neo-Nazi circles (a circle of discussion I already avoid for the sake of my own sanity).





