Well, I have this anecdote. We switched from serde to our own serialization/deserialization scheme (it still uses serde, but only for the JSON part), which is heavily based on dynamic dispatch, and it actually ended up faster.
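To make "heavily based on dynamic dispatch" concrete, here is a minimal sketch of the general idea, not the actual scheme from that project or from dynser: values describe themselves through an object-safe trait (the `ToJson` trait and `User` type below are made up for illustration), and serde_json is only used for the final JSON encoding step.

```rust
// Hypothetical sketch of dynamic-dispatch-based serialization, not the dynser
// API: each value converts itself to a serde_json::Value via a trait object,
// and serde/serde_json only handles the final JSON text encoding.
use serde_json::Value;

// Object-safe trait: callers hold `&dyn ToJson`, so serialization goes through
// a vtable call per value instead of serde's monomorphized Serialize impls.
trait ToJson {
    fn to_json(&self) -> Value;
}

impl ToJson for u32 {
    fn to_json(&self) -> Value {
        Value::from(*self)
    }
}

impl ToJson for String {
    fn to_json(&self) -> Value {
        Value::String(self.clone())
    }
}

struct User {
    id: u32,
    name: String,
}

impl ToJson for User {
    fn to_json(&self) -> Value {
        // Each field is serialized through a trait object.
        let fields: Vec<(&str, &dyn ToJson)> = vec![
            ("id", &self.id as &dyn ToJson),
            ("name", &self.name as &dyn ToJson),
        ];
        Value::Object(
            fields
                .into_iter()
                .map(|(k, v)| (k.to_string(), v.to_json()))
                .collect(),
        )
    }
}

fn main() {
    let user = User { id: 1, name: "alice".to_string() };
    // serde is only involved here, turning the Value tree into JSON text.
    println!("{}", serde_json::to_string(&user.to_json()).unwrap());
}
```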
It wasn't an apples-to-apples comparison, but it was several times faster at the time (my memory is fuzzy, but something around 3x to 5x). Compile times also went down (well, at the time :) ). The speedup was mostly due to how some serde features work (flatten and tagged enums), though.
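For context, the two features in question look roughly like this (a generic illustration, not the actual types from that codebase); on the deserialization side in particular, both `#[serde(flatten)]` and internally tagged enums go through an internal buffering step, which is where the extra cost tends to come from.

```rust
use std::collections::HashMap;

use serde::{Deserialize, Serialize};
use serde_json::Value;

// `flatten` inlines the map's entries next to `id` in the JSON object.
#[derive(Serialize, Deserialize)]
struct Envelope {
    id: u64,
    #[serde(flatten)]
    extra: HashMap<String, Value>,
}

// Internally tagged enum: the variant is selected by the "type" field.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum Event {
    Created { id: u64 },
    Deleted { id: u64 },
}

fn main() {
    let env: Envelope = serde_json::from_str(r#"{"id":1,"a":true}"#).unwrap();
    println!("{}", serde_json::to_string(&env).unwrap());
    let event: Event = serde_json::from_str(r#"{"type":"Created","id":7}"#).unwrap();
    println!("{}", serde_json::to_string(&event).unwrap());
}
```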
I made a separate, cleaner experiment (https://github.com/idubrov/dynser), which doesn't show such a dramatic improvement (again, the original wasn't apples to apples; there were other factors I don't remember), but it does show some.