In computer science, the foremost application of such notation is to the time-complexity function of an algorithm. But there are other applications within C.S., and many more outside it.
On the other hand, there is a difference between reading the notation as written and interpreting the concept. Certainly there is a place in computing for talking about "quadratic time complexity". In such a situation, we might write "O(n^2)". That would be literally read as "(big-)Oh of n squared", but in context it might be telling us something about time complexity, and there isn't anything wrong with reading it that way.
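For concreteness, here's a toy sketch (my own illustration, not anything from the question) of the kind of code we'd describe as "big-Oh of n squared": the nested loops make the work grow quadratically with the input size.

```python
def count_pairs(items):
    """Count unordered pairs (i, j) with i < j -- a classic quadratic pattern."""
    count = 0
    for i in range(len(items)):             # outer loop: n iterations
        for j in range(i + 1, len(items)):  # inner loop: up to n - 1 iterations
            count += 1
    return count

# For n items, the loops do n*(n-1)/2 units of work, so the running
# time is O(n^2) -- read aloud, "big-Oh of n squared".
print(count_pairs([1, 2, 3, 4]))  # 4*3/2 = 6
```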
Note: I always say the "big" in "big-O", because standard mathematical notation also includes "little-o", which is also about asymptotic growth rate. Little-o is far less common in C.S., though.
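For reference, one common formulation of the two definitions (textbooks vary slightly in the details):

```latex
f(n) = O(g(n)) \iff \exists\, C > 0,\ n_0 \text{ such that } |f(n)| \le C\,|g(n)| \text{ for all } n \ge n_0
```

```latex
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
```

Informally: big-O says f grows *no faster than* g (up to a constant), while little-o says f grows *strictly slower than* g.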
That has more to do with how the rest of the sentence is constructed, and to some extent with the choice of pronunciation of O(n²). E.g., choosing "order n-squared" makes "is of" the natural choice, while "big-oh n-squared" is less grammatically awkward with "is" (arguably, the former could be read as "of order n-squared", in which case "is of" is not actually different from "is"). To disambiguate completely, one should specify what the growth is measured with respect to: time, space, or something else. For example, "In the worst case, quicksort is big-oh of n-squared with respect to time." "Has" should be reserved for when O(n²) is used as an adjective, e.g. "Quicksort normally has big-oh of n log n growth in time."