In my (admittedly perhaps lacking) understanding, amortization doesn't really work that way. "Amortized worst case" (for example) means that you can still bound the worst case[1]; the bound just isn't necessarily very tight. Obviously, amortized complexity doesn't tell you "< X ms" right off the bat, since it deals in abstract "operations", but if you have known worst-case bounds for all the "operations", then an amortized bound for a given operation gives you something equivalent to "< X ms".
[1] I mean, it's a common proof technique to actually have a non-negative potential function (+ invariant) and postulate/derive an upper bound from that, so... what gives? What's your reasoning here?
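For concreteness, here's the textbook potential-function accounting for a doubling dynamic array (my own illustrative sketch, not something from either comment): with potential Phi = 2*size - capacity, every append has amortized cost exactly 3, even though an individual append can cost O(n) when it triggers a resize.

```python
def append_costs(n_ops):
    """Simulate n_ops appends to a doubling array; return the
    (actual, amortized) cost of each append under the potential
    method with Phi = 2*size - capacity."""
    size, cap = 0, 1
    phi = 2 * size - cap
    actual, amortized = [], []
    for _ in range(n_ops):
        cost = 1                 # write the new element
        if size == cap:          # full: copy everything into a 2x buffer
            cost += size
            cap *= 2
        size += 1
        new_phi = 2 * size - cap
        actual.append(cost)
        amortized.append(cost + new_phi - phi)  # actual + delta(Phi)
        phi = new_phi
    return actual, amortized
```

Running this for 1000 appends, every amortized cost comes out as the constant 3, while the actual per-operation cost spikes to 513 at the resize from capacity 512 -- which is exactly the tension the real-time objection is about.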
No. The entire premise of amortized analysis is to get a more optimistic "eventually O" number. "Eventually" is not good enough for real time. Yes, you can get a hard per-operation worst-case number, but that's a different analysis from amortized analysis. Unless all of your amortized operations happen between deadlines, it's useless--worse, dangerous--for safety. And amortized analysis is almost never used that way. You don't have language run-times that reinitialize between every deadline.
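That "different analysis" usually comes with a different construction too. A sketch of the standard de-amortization trick (my own example, not from the thread): instead of copying everything when the buffer fills, allocate the larger buffer immediately but migrate only two old elements per subsequent append, so no single operation ever does O(n) work.

```python
class IncrementalArray:
    """Append-only dynamic array with worst-case O(1) append:
    growth is paid for incrementally, two migrated elements per
    append, instead of one big copy."""

    def __init__(self):
        self.cap = 4
        self.buf = [None] * self.cap
        self.n = 0
        self.old = None       # previous buffer, still being migrated
        self.migrated = 0     # how many old slots are already copied

    def append(self, x):
        if self.n == self.cap:            # full: O(1) -- just allocate
            self.old = self.buf
            self.cap *= 2
            self.buf = [None] * self.cap
            self.migrated = 0
        self.buf[self.n] = x
        self.n += 1
        if self.old is not None:          # move at most two old slots
            for _ in range(2):
                if self.migrated < len(self.old):
                    self.buf[self.migrated] = self.old[self.migrated]
                    self.migrated += 1
                else:
                    self.old = None       # migration finished
                    break

    def __getitem__(self, i):
        # Not-yet-migrated indices still live in the old buffer.
        if self.old is not None and self.migrated <= i < len(self.old):
            return self.old[i]
        return self.buf[i]
```

Since migrating C elements at two per append finishes within C/2 appends, and the new buffer can't fill for another C appends, a migration always completes before the next one starts. The bound per operation is now a hard worst case -- but, as the comment says, that's a different analysis, not the amortized one.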
I somehow confused myself by thinking of it in terms of one of the proof techniques for amortized worst case, where you derive a fixed upper bound that holds for any "n". Of course that's a much stronger property than amortized analysis actually guarantees per operation.