It's all trial and error. You start with some model about how your target disease works. Perhaps, for the sake of argument, your model is that disease Q is caused by a deficit of protein N. Protein N is broken down by enzyme F, so obviously if you found a drug that suppressed enzyme F, you could cure disease Q. Now all you have to do is try every chemical you know how to make to see if it reacts with enzyme F.
Of course, you have to be a little more picky than that. Elemental fluorine would probably react with the enzyme, but it would react with other important parts of the patient's anatomy as well; there would probably be side effects. So you screen millions of compounds against your enzyme, and against thousands of other molecules commonly found in the human body that you _don't_ want it to interact with, looking for the one that interacts with as few of them as possible. These days this part is somewhat automated. Machines can squirt thousands of chemicals into thousands of test cells every second, and automatically check them for chemical reactions. There are apparently whole companies that do nothing but this, on a contract basis. They maintain a library of compounds to test against, you ship them a big bottle of your enzyme F in solution, and they run all the tests for you. That takes a big logistical problem off your plate, which is nice. Since this is all they do, they can really specialize and increase their efficiency.
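To make the screening logic concrete, here's a toy sketch of the selectivity filter described above. Everything in it is hypothetical: the compound names, the boolean "did it react" results, and the scoring. A real screen measures binding affinities and dose-response curves, not yes/no hits, but the shape of the problem is the same: keep compounds that hit the target and penalize ones that hit anything else.

```python
def selectivity_score(hits):
    """hits: dict mapping molecule name -> True if the compound reacted with it.

    Returns None if the compound misses the target entirely; otherwise
    returns the number of off-target reactions (lower is better).
    """
    if not hits["enzyme_F"]:
        return None  # useless: doesn't touch the target at all
    return sum(1 for name, hit in hits.items() if name != "enzyme_F" and hit)

# A made-up three-compound "library" screened against the target and
# two common off-target molecules.
library = {
    "compound_A": {"enzyme_F": True,  "hemoglobin": True,  "albumin": True},
    "compound_B": {"enzyme_F": True,  "hemoglobin": False, "albumin": False},
    "compound_C": {"enzyme_F": False, "hemoglobin": False, "albumin": False},
}

candidates = {name: selectivity_score(h) for name, h in library.items()}
best = min((n for n, s in candidates.items() if s is not None),
           key=lambda n: candidates[n])
print(best)  # compound_B: hits the target with the fewest off-target reactions
```

The contract screening companies mentioned above are essentially running this loop at industrial scale, with wet chemistry in place of the dictionary lookups.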
Now you've spent a couple of years on the project and identified a few dozen likely candidates. The next step is to optimize them to improve their effect. You're basically trying to guess which part of the molecule is most important (hopefully backing that guess up with some data), then changing the less-important-looking parts to see what happens. Think of all the different combinations of side groups you could add to the molecule, or remove from it, or swap out with other groups, etc., and try them all. Lots of synthesizing small batches of chemicals nobody else has ever synthesized before, determining their structures to make sure you synthesized what you set out to synthesize, lots of assays to see what kind of reactions they get up to, lots of failures.
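The "try all the combinations of side groups" step is just a combinatorial enumeration, which a few lines can sketch. The scaffold string and the R-group lists here are made up for illustration; real medicinal chemistry enumerates actual substituents on an actual core structure, and each variant then costs weeks of synthesis and assay work.

```python
from itertools import product

# Hypothetical scaffold with two substitution sites, r1 and r2.
scaffold = "core({r1})({r2})"
r1_options = ["H", "CH3", "OH"]    # made-up candidate groups for site 1
r2_options = ["F", "Cl", "NH2"]    # made-up candidate groups for site 2

# Every combination of one group per site: 3 * 3 = 9 candidate structures.
variants = [scaffold.format(r1=a, r2=b)
            for a, b in product(r1_options, r2_options)]
print(len(variants))  # 9
```

The point of the sketch is how fast this blows up: a handful of sites with a handful of groups each already gives more variants than anyone wants to synthesize by hand, which is why this stage takes years.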
After a few years of that, you might have something you can start testing in a real biological system. For this step you use cell cultures, rather than going immediately to the full complexity of an animal model. Your drug isn't much good if the liver immediately thinks it's a poison and dismantles it, or if it kills the cultured liver cells, etc.
If none of that goes wrong, then maybe you do tests in an animal model (provided you can find some animals that are susceptible to disease Q, or something close enough), and then later do human testing. Hopefully your disease model was correct; not all of them are. Look at all the Alzheimer's drugs that have failed, for instance. It seems that none of our hypotheses for how Alzheimer's works are correct.
Also, don't forget that at some point you also have to work out how to synthesize your drug efficiently, safely, inexpensively, and in large batches.
Labs are presumably a big part of the costs, but a lot of the cost of a lab is the people, not just the equipment.
I think changing the way the FDA works is a hopeless cause, because the real costs are at the beginning of the process. Fund basic research instead, so that we can find new types of chemicals to build, new ways of building them, new natural products, etc. Maybe someone will even crack the simulation problem (the problem being that accurate chemical simulations take months or years to run, while simulations fast enough to beat physical tests are inaccurate).