In a word2vec model trained on Chinese characters, the input character and the returned results do not seem to be very related, and I am not sure whether this is normal.
For example, the corpus is made up of word components: when I input "horse" (马), the top-10 results from model.most_similar do not include "reach" (达), which surprised me a little. Is this normal?
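For context, gensim's most_similar ranks other vocabulary items by cosine similarity to the query vector, so characters that form a compound together will only rank highly if their overall context distributions are similar. A minimal sketch of that computation (not gensim itself; the toy 4-dimensional vectors and the English stand-ins "horse", "reach", "cow" are made up for illustration):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical character vectors, purely for illustration.
vectors = {
    "horse": [0.9, 0.1, 0.0, 0.2],
    "reach": [0.1, 0.8, 0.3, 0.0],
    "cow":   [0.8, 0.2, 0.1, 0.3],
}

def most_similar(query, topn=10):
    # Score every other entry against the query and sort by similarity.
    q = vectors[query]
    scored = [(w, cosine(q, v)) for w, v in vectors.items() if w != query]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:topn]

print(most_similar("horse"))
```

In this toy setup "cow" ranks above "reach" for the query "horse", even though "horse" and "reach" form a compound: similarity here reflects shared contexts across the corpus, not co-occurrence inside a single word, so the behavior described above is not necessarily abnormal.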
Also, what dimensionality should the character embeddings have?
Any advice from more experienced folks would be appreciated, thank you.