I also believe that type inference, on its own, is of limited practical application at scale. On a local level, internal to a module or function, it makes a lot of sense, but at the module-interface level it risks underspecification. For example, when you modify a function body you may inadvertently add constraints to its inferred type (e.g. by using an operator or a function defined on a typeclass that wasn't previously in scope), and consequently break clients of the module.
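A minimal OCaml sketch of that failure mode (the function names here are hypothetical): an apparently innocent edit to a function body narrows the inferred type, so a client that compiled yesterday fails today.

```ocaml
(* Version 1: the compiler infers 'a -> 'a -> bool, since (=) is
   OCaml's polymorphic structural equality. Clients may call this
   on strings, lists, anything. *)
let equal_pair a b = a = b

(* Version 2: an "innocent" edit adds an arithmetic operation, which
   narrows the inferred type to int -> int -> bool. Any client that
   was calling it on strings now breaks at compile time. *)
let equal_pair' a b = a + 0 = b

let () =
  assert (equal_pair "x" "x");       (* still polymorphic *)
  assert (equal_pair' 3 3)           (* now ints only *)
```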
OCaml (and potentially Haskell, though I haven't used this feature in Haskell) ameliorates this problem by giving the programmer the facility to specify module signatures (or to generate them from inference and then edit them) that the implementation is then checked against. With this approach, I would argue that type inference actually makes your large-scale code more robust and easier to modify than code without it, thanks to the automatic verification of module signatures. If you inadvertently introduce extra type constraints into your module's interface, the type-checker will tell you and save you from releasing a module that is incompatible with previous versions of itself.
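In OCaml this takes the form of an interface file. A sketch, using a hypothetical `widget.mli` alongside the edited `widget.ml` from before: the signature pins the public type, so the narrowing edit is rejected at the module boundary rather than in downstream code.

```ocaml
(* widget.mli -- pins the public interface. This file can be written
   by hand or seeded from the compiler's inferred signature
   (e.g. via `ocamlc -i widget.ml`) and then edited. *)
val equal_pair : 'a -> 'a -> bool

(* widget.ml -- if an edit changes the body so the inferred type
   becomes int -> int -> bool, the compiler reports a signature
   mismatch HERE, before any client of Widget ever sees it:

     let equal_pair a b = a + 0 = b
     (* Error: ... is not included in 'a -> 'a -> bool *)
*)
let equal_pair a b = a = b
```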
Without type inference, you're either annotating types everywhere (looking at you, Java...) or waiting until runtime (or production!) to find out you've made an error (e.g. Perl, Python, Ruby, and on and on).
I really don't see how type inference, and the program analysis and verification it enables, is anything but good.