Using these numbers, it would take your computer roughly 12 years to check 32M passwords against that string. That's a long time, but it's feasible with additional computing power, and more specialized systems can bring it comfortably within reach.
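For a rough sense of the arithmetic: 12 years is about 3.8 × 10^8 seconds, so 32M guesses works out to roughly 12 seconds per guess. A back-of-the-envelope sketch using only those two figures from above:

```python
# Back-of-the-envelope check of the "12 years for 32M passwords" figure.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60      # ~3.15e7 seconds

guesses = 32_000_000                       # 32M candidate passwords
total_seconds = 12 * SECONDS_PER_YEAR      # ~3.8e8 seconds in 12 years

print(f"{total_seconds / guesses:.1f} seconds per guess")  # ~11.8 seconds
```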
In contrast, assuming a 40-character alphanumeric salt (20 characters in config, 20 in the database), an attacker would have to perform 704423425546998022968330264616370176 digests per row -- that's 62^20, one for every possible value of the 20-character config half -- to check 32M passwords.
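That figure is just the size of the keyspace for the unknown half of the salt. A quick sketch, assuming the usual 62-character alphanumeric set (a-z, A-Z, 0-9):

```python
# Number of possible 20-character alphanumeric config secrets:
# 62 symbols (a-z, A-Z, 0-9) in each of 20 positions.
ALPHABET_SIZE = 62
CONFIG_SALT_LENGTH = 20

digests_per_row = ALPHABET_SIZE ** CONFIG_SALT_LENGTH
print(digests_per_row)  # 704423425546998022968330264616370176 (~7.04e35)
```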
Unless you believe that is an insufficient barrier, implementing BCrypt merely degrades the user experience (12-second logins? come on) for no real security improvement.
There are lots of ideas that are beneficial in theory but not particularly useful in practice. For example, requiring passwords to be at least 120 characters long would (in theory) make them harder to compromise, but in practice users will just type "password" 15 times.
Increasing the digest time prevents an attacker with simultaneous access to the server and database from cracking very weak passwords, but at the cost of tripling or quadrupling how much time each request takes. There are some cases where this could be useful -- for example, running a dissident website in an authoritarian country -- but it's user-hostile to implement it anywhere else.
Calculating a BCrypt hash with the default cost factor takes about as long as reading an uncached file off a conventional filesystem. What a silly thing to try to optimize. Really? It's killing you to spend 100ms on password hashing? Ok, dial it down to 50ms. BCrypt is tunable.
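To make "tunable" concrete, here is a minimal sketch using the third-party Python bcrypt package (an assumption on my part; no particular library is named above). The cost factor is the knob: each increment of 1 roughly doubles the work per hash, so you pick whatever latency budget you're comfortable with.

```python
# Time a BCrypt hash at a few cost factors using the pyca "bcrypt" package
# (pip install bcrypt). Higher rounds = exponentially more work per hash.
import time
import bcrypt

password = b"correct horse battery staple"

for rounds in (10, 11, 12):  # gensalt() defaults to 12 in this library
    salt = bcrypt.gensalt(rounds=rounds)
    start = time.perf_counter()
    hashed = bcrypt.hashpw(password, salt)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"cost={rounds}: {elapsed_ms:.0f} ms")

# Verification reads the cost out of the stored hash, so old hashes keep
# working even after you change the cost used for new ones.
assert bcrypt.checkpw(password, hashed)
```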