I highly recommend it because it's free, hands-on, and if you have a solid stats background it will go quickly and easily.
Anyway, the thing with Bayesian stats is that you need to look at problems through a different lens and learn new names for things you already know from the classic approach (maximum likelihood becomes maximum a posteriori, and so on).
If I had to summarize the whole uncertainty thing, I would say: both in classic and Bayesian statistics, a function of random variables is itself a random variable. Models are functions of random variables, so every point estimate you give has a full distribution behind it. In the classic approach this is accounted for with asymptotics and confidence intervals. And classic stats has priors all the same, it just uses defaults and doesn't talk about them.
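A quick sketch (my own toy example, not from anyone's course) of that point: an estimator is a function of random variables, so it is itself a random variable with a full sampling distribution behind it. Here the "model" is just the sample mean, and simulating many datasets makes its distribution visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data-generating process for illustration: Normal(mu=5, sigma=2).
mu, sigma, n = 5.0, 2.0, 30

# Draw many datasets and compute the same point estimate (the sample mean)
# on each one -- the estimates scatter around the truth.
estimates = np.array([rng.normal(mu, sigma, n).mean() for _ in range(10_000)])

print(estimates.mean())  # close to mu = 5
print(estimates.std())   # close to sigma / sqrt(n), roughly 0.37
# A classic 95% confidence interval is built from exactly this spread.
```

The asymptotics the classic approach leans on are statements about this spread as n grows.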
In Bayesian stats you talk about your priors, and if you have extra information about your inference problem you include it there. Or you abuse the priors to make the model yield results that will make your boss/client happy (I worked as an economics researcher at a big bank; the whole department was Bayesian). Or you choose priors that let your custom model actually compute. Or you use priors that do regularization (a Laplace prior gives you a classic lasso)... Different priors are different models, and you still need to do your model checking and selection.
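To make the regularization point concrete, here's a hedged sketch (the data and the penalty weight are made up): with a Gaussian likelihood and a Laplace prior on the coefficients, the negative log posterior is squared error plus an L1 penalty, so the MAP estimate solves the same optimization problem as the lasso:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Fake data: y depends on the first feature only; the other two are noise.
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=100)

def neg_log_posterior(beta, lam=5.0):
    # Gaussian likelihood -> squared-error term
    # Laplace prior       -> L1 penalty term (this is the lasso objective)
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

# Powell handles the non-smooth |beta| terms without needing gradients.
map_est = minimize(neg_log_posterior, x0=np.zeros(3), method="Powell").x
print(np.round(map_est, 2))  # irrelevant coefficients get shrunk toward 0
```

Swap the Laplace prior for a Gaussian one and the same computation gives you ridge regression instead, which is exactly the "different priors are different models" point.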
I'm not so sure what you mean about continuous priors. Any gamma prior is continuous, but I don't think that's what you mean.