Dynamic typing has advantages and disadvantages. For example, static typing arguably violates the DRY ("don't repeat yourself") principle where code reuse is concerned, whereas compile-time type checking reduces the effort needed for testing (though only in very simple cases).
I think it's because you're declaring the type, which is obvious from context, when you could save characters and have the runtime system figure it out for you.
That isn't an issue of static typing; it's an issue of explicit typing. I program in a statically typed language. When I don't feel something warrants a type signature, I simply don't write one. Type signatures are compiler-enforced documentation.
In Haskell, for example, you only need type declarations for disambiguation when the information is actually missing from the program. That is, if the program were dynamic, you'd also need to write that down.
For example, the type of:
show . read
is ambiguous, because "read" parses a String into some "Readable" type, and "show" converts a "Showable" type into a String. Which type? It could be anything, so the compiler complains that the type is ambiguous.
You could say something like:
(show :: MyType -> String) . read
To resolve the ambiguity.
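To make that concrete, here's a minimal runnable sketch, with MyType as a hypothetical stand-in (deriving gives it Read and Show instances for free):

    data MyType = MyType Int
      deriving (Read, Show)

    -- Ambiguous without help: roundTrip = show . read
    -- Annotating one end of the pipeline fixes the intermediate type:
    roundTrip :: String -> String
    roundTrip = (show :: MyType -> String) . read

    main :: IO ()
    main = putStrLn (roundTrip "MyType 42")  -- prints: MyType 42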
In a dynamic language, a function like "read" is not possible at all, since it uses static types to determine which type to parse into. So in Python, this would look like:
lambda x: MyType.parse(x).show()
Same information, same DRY violation/non-violation.
Note that in some other cases, where the type can be determined, I can write:
[1, 2, read "3", 4]
Whereas in a dynamic language I'd need:
[1, 2, Int.parse("3"), 4]
So it is actually dynamic typing that violates DRY here.
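As a runnable sketch of that, assuming some surrounding code that needs an Int result (the names here are made up for illustration):

    -- total's Int annotation propagates through sum into the list,
    -- so read is inferred to be the Int parser; no annotation on read.
    total :: Int
    total = sum [1, 2, read "3", 4]

    main :: IO ()
    main = print total  -- prints: 10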
The notation [x, y, z] denotes a homogeneously-typed list.
So I used it as an example where type inference can figure out the type of an expression from its use, rather than from its intrinsic value.
Dynamic types only work based on the intrinsic value, so whenever a statically-typed language can figure out the correct types from context, a dynamically-typed language is going to have to require redundant type hints.
So [1, read "2", 3] is a list of ints, so the read call there is known by type inference to return an int, so the read parser chosen is the parser for ints. In Python, even if you had some value that is required to be an integer, and you wanted to parse a string to that value, you'd need to say: Int.read("1"), which is redundant.
The point is that you don't need a type signature, or any explicit type information at all. Because the compiler already knows it has to be a list of Ints, you can just call read and it knows it has to convert that string to an Int. In a unityped language you still need to supply the information about which type to convert to.
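A tiny sketch of inference from a use site rather than from an annotation (addParsed is a made-up name):

    -- n + read s forces read's result to be Int, since (+) needs
    -- both operands at the same type and n is already an Int.
    addParsed :: Int -> String -> Int
    addParsed n s = n + read s

    -- addParsed 1 "41" == 42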
There are different degrees of type inference. Go's `:=` operator and C++'s `auto` keyword use the simplest kind, where the right-hand side must evaluate to a concrete type. More powerful than that are systems that can infer the types of all variables in a function scope from how they are used throughout the function (such as Scala and Rust, though Scala's is waaaay more advanced than Rust's afaict), but these require functions themselves to always be explicitly typed. More powerful still are languages with whole-program type inference, such as ML, which can infer even function signatures.
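To illustrate the whole-program end of that spectrum, here's a sketch in Haskell (which, like ML, infers function signatures); the names are made up:

    -- No signatures anywhere; GHC infers them from the bodies:
    --   double     :: Num a => a -> a
    --   applyTwice :: (a -> a) -> a -> a
    double x = x + x
    applyTwice f x = f (f x)

    -- applyTwice double 3 == 12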
(Please note that my understanding of type inference approaches is rough; corrections to the above are welcome!)