IIUC the input to LLMs is tokenized not on word boundaries but on sub-word boundaries, so whatever the model has associated with "task" also applies to "tasking", "tasked", "taskmaster", etc. So a model making up compounds that don't exist would be fully possible and even desirable, especially since real humans do it with English all the time.
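You can see the splitting in practice with a quick sketch using the tiktoken library (this uses the GPT-2 BPE vocabulary as one example; other models use different vocabularies, and the exact pieces will vary). "taskwizard" here is a made-up compound just to show that unseen words still break into known pieces:

    import tiktoken

    # Example BPE vocabulary; other models split differently.
    enc = tiktoken.get_encoding("gpt2")

    for word in ["task", "tasking", "tasked", "taskmaster", "taskwizard"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]  # decode each token id back to its text piece
        print(word, "->", pieces)

Even the invented compound comes out as a sequence of familiar sub-word pieces, which is why the model can produce and "understand" words it never saw whole during training.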
Or maybe it's an ancient dictionary. I was kind of surprised at the sizes of dictionaries I could find while trying to test out a personal project.