You can retrain your distributional knowledge and keep your lexical knowledge. Moving to a new domain shouldn't mean you have to forget everything about what words mean and hope you manage to learn it again.
The whole idea of Numberbatch is that a combination of distributional and lexical knowledge is much better than either one alone.
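To make that concrete: Numberbatch's actual pipeline is more elaborate, but the core trick is in the retrofitting family — nudge each distributional vector toward its neighbors in a lexical graph while keeping it anchored to where it started. Here's a minimal sketch with toy vectors and a single hypothetical "cup is related to mug" edge (all values made up for illustration):

```python
import numpy as np

# Toy distributional vectors (hypothetical 2-d values for illustration).
vectors = {
    "cup": np.array([1.0, 0.0]),
    "mug": np.array([0.0, 1.0]),
}

# Lexical knowledge: a graph edge saying "cup" and "mug" are related.
edges = {"cup": ["mug"], "mug": ["cup"]}

def retrofit(vectors, edges, iterations=10, alpha=1.0, beta=1.0):
    """Move each vector toward the average of its graph neighbors,
    while alpha keeps it tethered to its original (distributional)
    position. A fixed point balances both sources of knowledge."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbors in edges.items():
            if not neighbors:
                continue  # no lexical knowledge: leave the vector alone
            neighbor_sum = sum(new[n] for n in neighbors)
            new[word] = (alpha * vectors[word] + beta * neighbor_sum) / (
                alpha + beta * len(neighbors)
            )
    return new

retrofitted = retrofit(vectors, edges)
```

The two toy vectors start out orthogonal (cosine similarity 0); after retrofitting they end up pointing mostly the same way, without either one collapsing onto the other. That's the "better than either alone" part: the distributional positions survive, and the lexical edges pull related words together.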
BTW, ConceptNet is only partially expert-derived (much of it is crowd-sourced), it aims not to be rigid the way WordNet is, and it covers a whole lot of languages.
"Retraining" ConceptNet itself is a bit of a chore, but you can do it. That is, you can get the source [1], add or remove sources of data, and rebuild it. Meanwhile, if you wanted to retrain word2vec's Google News skip-gram vectors, you would have to get a machine learning job at Google.
[1] https://github.com/commonsense/conceptnet5