LLM hallucinations are dangerous when you're learning; if you have some background, you can usually tell when an LLM goes off the rails, but it would be unfortunate to commit an incorrect fact to memory at such a vulnerable time. It will be difficult to "uncommit" it at that point.
I don't think the LLM value prop here is to build lots of cards; it's just to interact with the model conversationally. If the model is wrong in this one translation, I'm not exactly going to "commit it to memory", I'm just going to carry on. I don't know that LLM mistakes are particularly worse than the many and sundry other mistakes I'm already continuously making as a language learner anyhow.
If I could speak a foreign language as well as an 8B-parameter LLM, hallucinations and all, I'd be immensely ahead of where I am now. It's not as if learners' second languages aren't themselves often broken in somewhat similar ways.
I've used Anki before. Most of the decks you get are downloaded at random from the Anki website. I'm not sure an LLM hallucination is that much more likely than a typo or error in some random free deck that someone else compiled and I downloaded.