Why is there a “V” in SIGSEGV Segmentation Fault? (cloudflare.com)
222 points by caution 18 days ago | 104 comments



Well, some aspects are still not clear. If this thing was originally called "segmentation violation", who switched to calling it "segmentation fault", and when? Why don't we get

    Segmentation violation (core dumped)
when this thing fires?

Actually "violation" sounds much clearer to me. It's telling me that the code I'm running does something that was not part of the contract. With "fault"... well, it's someone's fault... probably someone else's fault... who knows what happened... ¯\_(ツ)_/¯

I wouldn't be surprised if it turned out that "fault" simply sounded smoother to managerial ears.


The reason the term "fault" is used is that on the PDP range of computers (the first computers to run Unix), when an instruction (opcode) fails, it creates a "fault". In the case of accessing a segment that no longer exists, or beyond the length of the segment, etc., the instruction that tried to execute but failed is said to have "faulted", or suffered a "fault". So an instruction that faults due to a memory segment issue is called a "segment fault", and one that faults due to a memory page issue is called a "page fault". In the case of segments, the name stuck even though almost all modern CPUs have pages rather than segments.


This leads on to demand paging: when an instruction tries to access a page that isn't loaded in memory (say, for example, the page has been "paged out" to swap), the instruction faults and traps into the kernel's page-fault handler. The kernel then determines that yes, the program does own that page and it is indeed in swap, so it will "page in" that page, i.e. read it from the swap partition and put it in memory. Once the I/O has completed and the page is in memory, the kernel will reschedule the process and restart execution from the instruction that previously faulted. Now that the page is in memory, the instruction should execute normally, and the process continues unaware that some of its memory wasn't in RAM but now is.
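
You can actually watch this fault-and-resume machinery at work. A minimal sketch (Linux/glibc assumed; first touch of anonymous memory is a minor fault, while the swap case described above is the same mechanism plus disk I/O, i.e. a major fault):

    /* Observe demand paging via the minor-fault counter. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        long pagesz = sysconf(_SC_PAGESIZE);
        size_t len = 64 * (size_t)pagesz;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct rusage before, after;
        getrusage(RUSAGE_SELF, &before);
        for (size_t i = 0; i < len; i += pagesz)
            p[i] = 1;  /* first touch: faults, kernel maps a page, retries */
        getrusage(RUSAGE_SELF, &after);

        printf("minor faults while touching %zu pages: %ld\n",
               len / pagesz, after.ru_minflt - before.ru_minflt);
        return 0;
    }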

[EDIT] fixed a typo.


"Faulting" was a general term (has it fallen out of favor?) when an instruction didn't execute properly and wasn't specific to DEC machines. Hence page faults, as you note.

Neither PDP-7 (original Unix machine) nor PDP-11 had segmented memory. Segmentation was fundamental in the design of Multics, and I suspect the term, if not the signal itself of course, was carried over from there. Memory segmentation was pretty common back then, though far from universal.

By the way, there were a lot of machines under the "PDP" umbrella with some very different architectures. PDP-1, -7, and -9 had 18-bit words; PDP-5, -8 and -12 had 12-bit words; the PDP-6 and -10 (the latter rebranded as the DECSYSTEM-20) had 18-bit addresses and 36-bit words; and the PDP-11/LSI-11 had 16-bit words (this line was later pumped up into the 32-bit VAX). Unix only really ran on the 11s (and VAXes, which BTW gave us the MIPS metric), apart from the one PDP-7 development machine. We used a variety of OSes on all those machines, often homegrown.

In those days it was quite common for a company to write their own OS and programming language just for their computers, and for their customers to do the same!


Actually, the "segmentation" comes from segmented memory management, which was integrated into in AT&T Unix. BSD went with paged virtual memory, and there we get "fault" from "page fault", which is an often correctable situation that is invisible to the application, but is sometimes an access violation.

So "segmentation fault" is a weird combination of terms.

And of course "core" is from "magnetic core memory" that nobody uses any more.

The SIGSEGV constant having come from AT&T Unix propagated into other variants, for the sake of source code compatibility, even though those other variants didn't use segmented memory.

I suspect that what happened was that hackers not working with segmented memory at all somehow adopted the "segmentation" term from the AT&T code and documentation, but did not warm up to the "violation" part, sticking with the "fault" terminology of their paged world.

The "Segmentation fault" string you see displayed by the shell comes from the strsignal function that maps signals to descriptive strings. I think that originated in BSD Unix. "(core dumped)" is locally generated by the shell if that flag is true in the process status.

Bash internationalizes that with gettext. Here it is in Hungarian:

  po/hr.po:msgid " (core dumped)"
  po/hr.po-msgstr " (jezgra izbačena)"
Estonia gets a cool one:

  po/eo.po:msgid " (core dumped)"
  po/eo.po-msgstr "(nekropsio elŝutita)"
Trivia: very early Linux kernel versions used 80386 segments for processes.


"hr" is Croatian, Hungarian would be "hu" (and no doubt it sounds cool too).

And yeah, a "necropsy" (synonym for autopsy) definitely sounds cooler than a "dump"...

Otherwise: great summary, almost more interesting (and more compact) than the original blog post!


> "hr" is Croatian, Hungarian would be "hu" (and no doubt it sounds cool too).

And the translation is abysmal: "jezgra" = "core" (of nuclear reactor) or "nucleus" (of an atom); the closest translation for "izbačena" is "ejected". I believe that "core" comes originally from memory technology of the time (https://en.wikipedia.org/wiki/Magnetic-core_memory) so it should have been possible to find a more meaningful translation.

EDIT: As a native Croatian speaker, I have NEVER used any program in Croatian locale. It is simply unintelligible. Funnily, I find Norwegian (where I live now) translations more approachable.


Localized error messages (that are still full of jargon) are terrible. It makes looking them up online much harder, and often the translation does nothing more than obfuscate.


The goal of "izbačena" here is clearly to indicate that the memory has been stored elsewhere. FWIW, "dumping trash" would be readily translated to "избацити смеће" (izbaciti smeće) in Serbian, so I am surprised it's such a stretch in Croatian. Perhaps the Croatian translation has been based off Serbian one.

You are probably more accustomed to Norwegian translations because you never learned Norwegian IT-speak before the translations appeared, whereas I imagine you grew up on English interfaces while speaking Croatian (we term that Srblish, maybe you've got Croglish :)).


Croatian "izbaciti" is closest to this meaning: https://www.merriam-webster.com/dictionary/eject or the phrase "throw out". F.ex., "Izbacivac" [sorry, haven't bothered with setting up HR keys] is the guy throwing out too drunk people from a club.

> "избацити смеће" (izbaciti smeće) in Serbian, so I am surprised it's such a stretch in Croatian

But that's exactly it! You "izbacis smece" and you don't care what happens to it after. Likewise with "izbacivac", he throws out a person and doesn't care what the person does afterwards. You don't "izbacis" something FOR someone to use it after. That's why the translation is bothering me.

Whereas the kernel writes the core FOR THE USER to inspect it, back it up, delete it, whatever.

The closest word I can come up with for English "dump" https://www.merriam-webster.com/dictionary/dump is "ostaviti" ("leave around") or "(is)pustiti" (in the meaning of "drop", not "flushing the toilet" :D). A better literal translation would thus be "Jezgra pustena." (Like a space probe is "pustena" into space and there it goes.) I guess the authors originally used "dump" because it's kinda random (system-wide config) where it ends up.

The most meaningful translation to Croatian would be, IMHO, "Memorija sacuvana." [or "Memorija spremljena."] ("Memory preserved".)

> I imagine you grew up on English interfaces while speaking Croatian

Indeed. Some professors at the university did use Croatian terms, but everything about it was awfully alien to me. "Thread" would be "dretva". Which is ironic, as the professor explained that it's borrowed and mangled from German, whereas we have a perfectly valid, even nice, Croatian word "nit" that is literally "thread".


Those are all great points, but in English, "dumping" is also used when you want to get rid of something/someone (dump a boyfriend, dump into trash...).

If anything, the translation is too literal, and you are advocating for a better translation for the actual action. It's a common complaint, but it's a hard balance to strike: literal translation is (usually) easier to translate back (not the case here), but a translation that is more descriptive is easier to understand.

I think I am in the same boat as you, in that I usually prefer descriptive translations when a literal one uses a metaphor or concept that's alien to the local culture.

Still, in this particular case, I'd use a less commonly used term (I imagine spremiti/sačuvati is also used for "save") like "zapisana" or "zabeležena", just so there's a better chance to keep 1-1 mapping between English and Croato-serbo-bosnian-montenegrin language.

What English has mostly done is keep using old concepts (like "core" to represent "memory") or repurpose seldom-used words, which I always found intriguing. Such an approach would require some re-learning for those of us who grew up on English IT terminology, but every profession has specialised terminology like that too (I like to bring up the example of maths, where in Serbian it's "integral" and "izvod" for integral and derivative: always try coming up with a good native word, and if it _is_ good, it will stick :)).

Note that people translating free software are usually volunteers working without much local support, and without an established vocabulary for all these specialised terms, so they will frequently come up with awkward translations.


I think I posted while you edited your post so you added a couple of good examples for a better literal translation.

But this is exactly the point, it's hard work, it's not always done by people who understand the actual concepts or history, and they are doing it in their spare time. Imagine spending this much time and saying "I translated one message".

Thus, I'd encourage you to contribute your suggestions upstream :)


> Croato-serbo-bosnian-montenegrin language

"In confidence": I miss the days when there was only serbo-croatian and croato-serbian :D (I grew up in Yugoslavia and still remember cyrillic.) Politicizing of the languages is just f*up. It says enough that I understand "urban" (i.e. newspapers/TV) serbian better than heavy croatian dialects from Dalmatia, Istra or Zagorje :p

> integral and izvod

Wow, "izvod" is really nice, I like it :D We used just "derivacija". If I had to guess what some eager translator would translate "integral" to, it'd be something like "ocjeljivanje" :D


I checked Hungarian, it's simply "core készült" ("core has been made"). I guess they did not want to go too far with translating "core", as possible faithful translations would collide with expressions for ejaculation...


> Estonia gets a cool one:

Sad to say that's also not Estonian -- it's Esperanto. :-)


didn't go too deep on the code spelunking, but v7 Unix (and 2BSD) used 'swapping' rather than paging, where the whole process was moved in/out of RAM, 32V (32bit v7 for the vax by AT&T) kept this, 3BSD (32V++ in a way) added paged memory, and this was reworked in 4.3BSD (see 44doc newvm), Mach VM, etc.

can confirm SEGV is in v7 signal.h from 1979:

https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/include/si...

where it is happily #defined as 11, and continues as such to this day:

https://github.com/freebsd/freebsd/blob/master/sys/sys/signa...

(etc, in the other BSD-derived systems)

if anyone wants to dig further.


When I played with IA32 in assembly, I think it was possible to swap segments in and out as well (you could order actual printed IA32 manuals from Intel for free, which was great for a high-school student in Serbia in the late 1990s ;)). It was a long time ago, so maybe I am misremembering things.

Thus, I always considered "paging" a shorthand for "swapping of memory pages".


The term "swapping" originally comes from the multiprogramming technique of exchanging an entire program in memory, and its data, for another program and its data (without any virtual memory support).


eo is Esperanto, Estonian looks like a version of Finnish.

(so does Hungarian, as sibling has pointed out)


The violation is what the program did. A fault is the handler that runs when the program does something funny. Like with a page fault.


This is the right answer. I'd tweak it a little: a fault is an interrupt, in this case a hardware interrupt, and the message prints because there is no handler for it (except the default handler, which prints the message and halts the program).


The DEC VMS equivalent of this is 'Access Violation', which as such is a Fault, and manifests itself in logs as the dreadful ACCVIO, though one is always hopeful it's debuggable. Unlike an 'AST fault' (where AST stands for Async System Trap, which is not a Fault, but the fault happened while trying to deliver the AST...), which meant a lot of head scratching and likely a call to support. The worst by far is a 'Machine Check'; the next thing one hopes to see is the caret >>> prompt on reset.

It's a very nuanced and structured terminology, but does make sense after reading all those hefty and well-written manuals. People like to use those volumes as monitor risers these days, but those are often true standards of quality technical writing. I once was amazed by the clarity, so the VMS doc volume migrated to the shelf in exchange for some old conference proceedings volume of the same heft.


Well, it's less intrusive to change the text than to change the constant.


I've always thought of it like a geological fault, where the two sides are misaligned. But this only fits in the case of misaligned memory access, and not in the more general case of accessing illegal locations.

There's also the tennis fault. It's a noun corresponding to the adjective "faulty."


The fault can be corrected. The term “violation” seems to imply that it can’t be corrected. Your program only crashes if the segmentation fault isn’t corrected.


Trying to fetch from a memory address outside of your boundaries is a violation of a program's contract with the OS. It can't be "corrected" by the OS, and if you see the message, the program has not included a handler for it.

If the program wishes to do something different, it can register a handler, and you won't see the message. If you see the message, there was a violation. If you don't see the message... well, there was still a violation, but it got handled.
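
A minimal sketch of registering such a handler (POSIX sigaction; the handler sticks to async-signal-safe calls):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_segv(int sig) {
        /* async-signal-safe output only: write(2), not printf(3) */
        const char msg[] = "caught SIGSEGV, exiting quietly\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *p = NULL;
        *p = 42;  /* the violation happens, but our handler runs:
                     no "Segmentation fault (core dumped)" from the shell */
        return 0;
    }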


The program has no such contract with the OS. The only contract is that if you access unmapped memory, you get a segmentation fault, and the relevant signal handler is invoked. The default signal handler terminates the program.

Depending on the program, a segmentation fault may be routine. That’s why it doesn’t make sense to call it a violation.


In systems with unprotected memory, you would actually access the address; even though you accessed it, it is a violation, and the result is undefined. When there is protected memory, what catches the violation is outside of your process space. It is a violation; possibly I should not have said OS, it's a hardware violation.

I'm not the only one who says this, look at the name, -V

edit: actually, though hardware support is required for certain OS features, it's the OS that sets up the segmentation and the fault handlers so ... it is ultimately an OS contract.


As the article notes, the -V suffix is a historical curiosity. Since no contract is being violated here, the term “violation” does not make sense. This is nothing more than a fault condition, like trying to open a file that does not exist or trying to go to a webpage that does not exist.

The contract with the OS is that if you access unmapped memory, your program is sent SIGSEGV. Just like the contract with open() is that it returns -1 if the file is not found.


Never thought of that solution to segfaults before. Great trick for writing bug-free programs, going to go integrate that into all my code now.


You're joking, but this can be used in a semi-practical way to keep programs alive and mostly functioning, see http://people.csail.mit.edu/rinard/paper/pldi14.pdf for instance. The idea here is that you catch certain faulting operations and drop/fix them: segfaulting store? ignore! segfaulting read? manufacture a result value of 0, it's usually not too wrong. And by using LD_PRELOAD magic, this can even be retrofitted onto existing applications without changing or recompiling them.
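
A crude sketch of the flavor of trick involved (not the paper's actual mechanism, which rewrites the faulting operations at compile time): on SIGSEGV, map a fresh zero page at the faulting address and return, so the retried access "succeeds". Linux assumed, no error handling, genuinely dangerous outside a toy:

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig, siginfo_t *info, void *ctx) {
        long pagesz = sysconf(_SC_PAGESIZE);
        void *page = (void *)((uintptr_t)info->si_addr & ~(uintptr_t)(pagesz - 1));
        /* Plug the hole: reads now see zeros, stores land in throwaway memory.
           (Fails for address 0 unless vm.mmap_min_addr allows mapping it.) */
        mmap(page, pagesz, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        int *wild = (int *)0x700000000000;  /* almost certainly unmapped */
        *wild = 42;                /* segfaulting store? absorbed! */
        printf("%d\n", wild[1]);   /* segfaulting read? zeros! prints 0 */
        return 0;
    }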


When I used to work on Win9x, where badly-behaved programs would often corrupt memory and lead to the system eventually crashing, I made use of that trick a few times --- keeping a kernel debugger resident meant that sometimes I could skip over faulting code and keep the OS alive long enough to save/finish my work and reboot. But often, a lot of things would stop working too, which is not so surprising since if you continue to ignore errors you will eventually end up with an empty program.

Also reminds me of this: https://news.ycombinator.com/item?id=4157777


Did you ever use BlackICE? I worked on Windows drivers in the early 90's and all of our systems had BlackICE running which was like a hypervisor debugger with full hardware control: you could CTRL-ALT-TAB into it and halt the OS at any point and save state. The basic debug strategy was to break to BlackICE on fault, handle it with a macro, and save the OS from crashing. Super useful, also great for breaking copy protection on games. ;-)


Wasn't that SoftIce?


Wow, yes it was. I have no idea where BlackICE came from in my mind... Thanks!


> I have no idea where BlackICE came from in my mind...

Sounds like something from Neuromancer


"BlackICE Defender" was a popular Windows firewall.


Surely SoftICE.


SoftICE, yes, I used that one.


I think that this was the idea behind Norton CrashGuard. Intercept all faults and then try to skip past the faulting instruction. This worked ... sometimes. At least long enough for you to save your work and restart the program. But lots of times you'd get stuck in an endless loop or end up locking up the whole system anyway.


Norton CrashGuard was an implementation of that idea for Windows. Unsurprisingly, doing anything other than saving your work and restarting after it did its thing was a very bad idea.


Signals always seem (at least to me) to be an early implementation of exceptions


Sort of? They're really an implementation of interrupts, but sitting on the kernel/user boundary rather than the hardware/kernel boundary. It's a holdover from when a process was really thought of as closer to a virtualized computer than as a distinct concept in its own right. And it's not uncommon for the interrupts managing CPU faults to be called exceptions (https://wiki.osdev.org/Exceptions), so the nomenclature does converge if you squint hard enough.


"It's not uncommon for the interrupts managing CPU faults to be called exceptions, so their nomenclature does converge..."

An interrupt and a CPU exception are different things. UNIX treats them similarly because the PDP-11 did. An interrupt is something outside the CPU wanting to be serviced, like an I/O completion. An interrupt can be deferred during a critical section, which is what "preventing interrupts" does. Some machines direct interrupts to one of many CPUs, so whoever is free can handle I/O. Interrupts have priorities, queuing, and are handled like events on a queue.

A hardware exception is the CPU doing something that stops execution. Inaccessible memory - could be the need to page something in from disk, or a program error. The OS has to decide that. Floating point overflow. Divide by zero. An illegal instruction. The CPU can't continue. So exceptions cannot be deferred, even if in a critical section. The CPU that raised the exception must handle the exception; it can't be handled by another CPU.

UNIX/Linux signals are rarely used for I/O completions in user space, but that is supported. See "aio".[1] Apparently Oracle uses this.

[1] https://man7.org/linux/man-pages/man7/aio.7.html


CPU exceptions are very much a type of interrupt (and vice versa).

You can see NMIs for examples of interrupts outside the CPU that can't be deferred like the distinction you're making.

Additionally software interrupts are an example of interrupts that come from user space and can't be deferred from user space's perspective, but must be handled before their instruction stream continues.

You can also see processors like slave DSPs whose exceptions are routed to other processors to be handled just like any other interrupt on that other core. The N64's RSP and the Cell's SPEs are great examples of this.

You gave the example of AIO for peripheral interruption to user space, like an interrupt (and it's used by more than just Oracle), but the classic example is SIGALRM as a corollary to a timer interrupt.

This is not a Unix/PDP-11 thing, but pretty much every hardware arch and every OS out there. I say this as someone who's ported a non Unix derived RTOS to MIPS, PowerPC, ARMv7A, ARMv7M, ARMv8-A64, X86_64 linux user mode, Microblaze, and SH4, and has written drivers for Linux, FreeBSD, Windows CE, Windows NT, and that aforementioned RTOS.


> You can also see processors like slave DSPs whose exceptions are routed to other processors to be handled just like any other interrupt on that other core. The N64's RSP and the Cell's SPEs are great examples of this.

That's more of a support processor thing, where the special-purpose processor doesn't really do interrupts. GPU exceptions usually create interrupts in the controlling CPU, for example, rather than being handled within the GPU. (How the GPUs should talk to the CPUs is a whole subject in its own right.)

Timer interrupts are usually deferrable.

The Cell. Is it totally gone now? (If they'd had, say, 16MB/SPE instead of 256K, it might have been good for something.)


> GPU exceptions usually create interrupts in the controlling CPU, for example, rather than being handled within the GPU

I would say that's out of date. GPU exceptions typically don't exist for most shader code (unmapped memory loads just read as zero, division by zero is defined and doesn't trap, etc.). For the ones that do exist, they're typically handled on the GPU these days for latency reasons, but that's just a config register to route them externally or not.

> Timer interrupts are usually deferrable.

I didn't say they weren't. Just like SIGALRM can be masked.

> The Cell. Is it totally gone now? (If they'd had, say, 16MB/SPE instead of 256K, it might have been good for something.)

16MB would have never made sense. You're only supposed to keep the working set in memory, and 64x the amount of memory was never in the cards from a gate count perspective.


With sigaction handlers (as opposed to the C standard library's simplistic signal handlers) it's not just exceptions; it's more general, like the condition system in Common Lisp. From inside the handler you can analyze what's wrong, then manipulate the program state and restart the computation where you were, or at the next instruction, or wherever else you choose. In contrast, your typical exception mechanism (as in Python or C++ or Java or whatever) will unwind the stack and make it impossible to restart from the exact place where the problem occurred.


To me, exceptions seem like interrupts for Java, and callback-hell JavaScript webapps are like a CPU that runs on unregulated interrupts instead of electrical current.


Note that x86 has variable-length instructions, so incrementing RIP by 10 will not, in general, do what you want it to do. The author basically put in a nop slide[1] to ensure that that would work. Buuuuuut I’m pretty sure you could parse ELF debug symbols and effectively calculate the address of the next logical C instruction to execute :D

Terrifying but fun!

[1] https://en.wikipedia.org/wiki/NOP_slide
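
For the curious, the handler side of that trick looks roughly like this (Linux/x86-64/glibc assumed; SKIP and the slide size are assumptions that must match the generated code, so compile without optimizations):

    #define _GNU_SOURCE  /* for REG_RIP */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <ucontext.h>

    #define SKIP 16  /* must cover the faulting instruction's length */

    static void on_segv(int sig, siginfo_t *info, void *ctx) {
        ucontext_t *uc = ctx;
        uc->uc_mcontext.gregs[REG_RIP] += SKIP;  /* land inside the NOP slide */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *p = NULL;
        *p = 1;                               /* faults here... */
        __asm__ volatile(".fill 16,1,0x90");  /* ...16-byte NOP slide to land in */
        printf("survived\n");
        return 0;
    }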


Or decode the x86 stream to find the next instruction. :-)

Next C statement is pretty cute, though.


That reminds me of the time I saw a C# database save function looking at the stack trace to figure out if it had already been called, exiting if it saw itself on the stack.

I mean, it's not wrong, but it is crazy.


If I recall, I once did something like this in Python that modified variables in its caller. Not for production, just for shits and giggles :D



Interesting problem - what's the shortest C function that takes a pointer to bytes and returns the length of the x86-64 instruction there? (and how many hours would it take to code it)


Start here and minimize as you see fit: https://intelxed.github.io/ref-manual/group__DEC.html#ga4bef...
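
Something like this, as a sketch built on XED's decode API (include path and build flags vary; link against libxed):

    #include <xed-interface.h>

    /* Return the byte length of the x86-64 instruction at `bytes`,
       or -1 if it doesn't decode. */
    int insn_length(const unsigned char *bytes) {
        static int initialized;
        if (!initialized) {
            xed_tables_init();  /* one-time global decoder table setup */
            initialized = 1;
        }
        xed_decoded_inst_t xedd;
        xed_decoded_inst_zero(&xedd);
        xed_decoded_inst_set_mode(&xedd, XED_MACHINE_MODE_LONG_64,
                                  XED_ADDRESS_WIDTH_64b);
        if (xed_decode(&xedd, bytes, XED_MAX_INSTRUCTION_BYTES) != XED_ERROR_NONE)
            return -1;
        return xed_decoded_inst_get_length(&xedd);
    }

The hours mostly go into building and linking XED itself, not the function.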


The cheating way is probably to do this https://www.blackhat.com/docs/us-17/thursday/us-17-Domas-Bre... (page ~71) i.e. shift across an instruction boundary.


Put it on a page boundary. Make sure the next page isn't accessible. Analyze the resulting page fault.


On Error Resume Next


Is a larger slide and RIP increment needed then? Or can these never be made large enough?


The reason this worked in this example is because they knew which line would cause the SEGV. In the general case where you're trying to catch arbitrary segfaults, you'd need a slide in between every instruction if you wanted to do a fixed RIP increment.

What you really need to do is figure out which instruction was executing and increment RIP appropriately.


On Error Resume Next


I still remember getting a support call relayed to me and a colleague while we were driving en route to another client.

colleague: "Caller says she's getting an error 'No Resumé' ?!?" us: ... huh?..... it is a document management system, but still ... . time passes . . me: Oh! On Error No Resumé us: much hilarity. No Resumé indeed.


I still shudder at this.


Thanks for the flashbacks!


The original Bourne shell trapped SEGV (which it called MEMF) as part of its memory management strategy - https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd...


Almost all language runtimes trap SEGV for memory management and other services.


Most modern concurrent copying garbage collectors use memory protection and SIGSEGV handlers to avoid the need for locking.


If you really just want the program to continue operating on dereferencing NULL pointers (and assuming that a NULL pointer is a pointer to address 0 in the VM, which is a risky assumption to make) you can generally convince the OS to map address 0 to something valid, therefore making the access Just Work.

Beyond that, there are sometimes very valid reasons for allowing segfaults to occur in certain conditions and catching/patching them. For instance, in an emulator's dynamic recompiler you could optimize your generated code by assuming that most memory accesses target the emulated RAM region (generally a reasonable assumption). Then you map the RAM buffer in such a way that if the emulated program was actually attempting to access an address outside of RAM, a memory fault occurs, which you can then catch, and recompile the offending code block with a slower but more comprehensive address decode.


> If you really just want the program to continue operating on dereferencing NULL pointers you can generally convince the OS to map address 0 to something valid, therefore making the access Just Work.

Yeah yeah yeah but this ALSO fixes use-after-free bugs! Really an amazing little trick, I wonder why compilers don't just do it automatically.


See also how the JVM uses sigsegv signals for synchronizing safepoints for multiple threads: https://www.ateam-oracle.com/why-am-i-seeing-sigsegv-when-i-...


You can even do user-space on-demand paging with segfault handlers. Though there is a more modern solution for this: https://man7.org/linux/man-pages/man2/userfaultfd.2.html
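
A minimal sketch of the SIGSEGV-handler variant (Linux/glibc assumed; a real implementation would track which pages are resident and load real data):

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *region;
    static size_t region_len;

    static void on_fault(int sig, siginfo_t *info, void *ctx) {
        long pagesz = sysconf(_SC_PAGESIZE);
        char *page = (char *)((uintptr_t)info->si_addr & ~(uintptr_t)(pagesz - 1));
        if (page < region || page >= region + region_len)
            _exit(1);  /* a real segfault, not one of ours */
        mprotect(page, pagesz, PROT_READ | PROT_WRITE);  /* "page in" */
        memset(page, 'x', pagesz);  /* ...and fill with demand-loaded data */
        /* returning retries the faulting instruction, which now succeeds */
    }

    int main(void) {
        region_len = 4 * (size_t)sysconf(_SC_PAGESIZE);
        region = mmap(NULL, region_len, PROT_NONE,  /* reserved, untouchable */
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        printf("first touch reads: %c\n", region[100]);  /* faulted in on demand */
        return 0;
    }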


Some programs catch SIGSEGV and automatically print out the stack trace and start a debugger for you.


This is a standard technique for recovering from a memory violation in a non-critical function. The proper way to handle it is to save any transient data and restart the program.


And validate the transient data really carefully when you load it back in.


This is the old model made popular by Visual Basic.

ON ERROR RESUME NEXT


Don't forget to also catch other sigs.


Most of my signal handlers work just fine, but for some reason I cannot get the unit tests for SIGKILL and SIGSTOP green.


On a BBS forum in the 90s, I read some lyrics for a blues song where each verse ended with "segmentation violation -- core dumped blues". Here is what seems to be the definitive version of that song:

https://www.netfunny.com/rhf/jokes/92q3/coredb.html


The author makes a big fuss about the old UNIX documentation using SIGSEG instead of SIGSEGV... but then completely ignores the comment on the same line, which does use the word "violation".


> Long long time ago, computers used to have memory segmentation.

If you are using an Intel chip, they still do.


While technically modern 64-bit CPUs still support segmentation in 16- and 32-bit modes (not very well, but it works), in 64-bit mode, if you're not setting the segment registers to "everything", you're essentially operating outside supported margins. Some strange things happen if you do that.

I don't recall exactly, but I don't think segmentation was heavily used after 2000 or so; it doesn't really do a lot for you if you have page tables.


One thing the segment registers are still used for is thread-local storage (on Linux). You read data from the FS segment (different per thread) at the same address, if you've prefixed your variable with __thread.

(Having said that, I remember optimizing thread-local storage away with explicit pointers some time ago in my code, because it was calling some function to get the address constantly, so maybe there are some subtleties there.)
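
A small illustration of that (gcc/clang on Linux assumed; compile with -pthread). Each thread gets its own instance behind the same variable name, addressed relative to the FS base:

    #include <pthread.h>
    #include <stdio.h>

    static __thread int counter;  /* one instance per thread, FS-relative */

    static void *work(void *name) {
        for (int i = 0; i < 3; i++)
            counter++;
        printf("%s: counter=%d at %p\n", (char *)name, counter, (void *)&counter);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, work, "thread A");
        pthread_create(&b, NULL, work, "thread B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("main: counter=%d\n", counter);  /* still 0 in the main thread */
        return 0;
    }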


FWIW, there was a good LWN article recently on the work to expose FS to userspace control safely


I haven't tried this lately, but you get a compiler error if you include <windows.h> and use 'near' and 'far' as variable names (they are holdovers from when 'near' and 'far' were keywords for pointers in segmented memory models).

A lot of old OpenGL code uses 'hither' and 'yon' for the near and far clipping planes for this reason. :)


Eh, not really in long mode at least.


I always thought the V was actually 5 as System V UNIX. Maybe to denote a change that started in that version.


I've always read it like Dracula is telling me there was a seg fault: "A seg vault! Muah hah hah hah!"



It's interesting to see all signal names in that early version had six letters (SIGQIT instead of SIGQUIT, even) but SIGPIPE was the exception. Was that one added later?

(Also funny how the article says "this is from around 1978" when the date on the listing says May 24 1976)


I don't know the real answer, but I've always assumed it's because there's no way to get the right "I" vowel sound without that trailing "E".

Also, when creating abbreviations, it feels weird to create one that is only one letter shorter than the full version.


> Also, when creating abbreviations, it feels weird to create one that is only one letter shorter than the full version.

This is Unix, which gave us the "creat" system call (an abbreviation of "create"). https://man7.org/linux/man-pages/man2/creat.2.html


If it makes you feel any better, the creators of UNIX regret this.

> Ken Thompson was once asked what he would do differently if he were redesigning the UNIX system. His reply: "I'd spell creat with an e."; Kernighan, Brian W.; Pike, Rob (1984). The UNIX programming environment. Prentice-Hall. ISBN 0139376992. OCLC 10269821., p. 204.


The C linker of those years only supported function names up to 6 characters. That's maybe why the defines try to match their respective function names at 6 characters.


I started a project similar to the fictional "skip instructions that cause segmentation violations" for SIGILL (illegal instruction) which tried to implement SSE3 replacements on hosts without SSE3. It had two modes: replace the illegal instruction in memory, or handle it in the signal handler:

https://github.com/rkeene/sse3-emu


> Was there a "Segmentation Vault?"?

It's not that far fetched, there's a Referer header after all.


The original cited SIGSEG constant definition in the OP still has a "segmentation violation" right there in the comment. Which suggests that "violation" was the norm even then.


Prior code is available. Before V4, there were no 'signals' per se; errors were trapped individually with dedicated system calls.


This didn't really answer the question! However, I've been using Unix since the early 80s and never once wondered about this.


So much for the "do not change" comments. I love these archaeological digs into Unix history.


SIGSEG sounds inappropriate in some Turkic languages. Extra V kinda masks the issue.


Huh, this doesn't explain why they added the V.


Maybe V like in System V?


The shape of a "V" is a fault.



