Now, the statistical structure of the horizontally-homogeneous atmospheric surface layer ("hh_ASL") has been the subject of some eight decades of research. On the basis of the governing (conservation) equations, certain general inferences may be made, but the closure problem (see below) prevents detailed predictions. Our present understanding of the hh_ASL stems essentially from measurements, suitably organised and generalised by appeal to "dimensional analysis," or "similarity theory." The "Monin-Obukhov similarity theory" is generally accepted as a satisfactory description of the most important statistical properties of undisturbed turbulent flow very near the ground (though not too near - not down amongst the "roughness elements," e.g. inside the forest, if any). Of the Monin-Obukhov theory we can say, simplistically speaking, that it "collapses" or "re-scales" all hh_ASL flows to a single flow, which, once known (as it is), fixes all such flows by re-scaling (if you know the shape of a Volkswagen van, you can build one of any size; and if your models reproduce the real one visually, they are said to be "geometrically similar" to the original). As an example of the content of the MO theory, it states that the mean horizontal windspeed varies with height according to

    (z/σw) dU/dz = f(z/L)
where f() is an undetermined empirical function. In this formula σw is, in effect, a "velocity scale" (more usually chosen as u*, the "friction velocity"), and L is the Obukhov length scale, whose magnitude depends on σw and on the heat flux density QH from ground to air (i.e. on the temperature stratification). Similar equations, which the reader may note inter-relate dimensionless (i.e. suitably scaled) properties of the flow, may be written for the height variation of mean temperature or humidity. Experiments have determined these universal, empirical functions.
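As an illustration, the dimensionless shear relation can be integrated numerically to give U(z). The sketch below, in Python, uses u* as the velocity scale (the more usual choice, as noted above) and follows the common convention of factoring the von Kármán constant k ≈ 0.4 out of the empirical function, so that f(0) = 1. The Dyer-type stability functions and the parameter values are conventional textbook choices, for illustration only, not forms prescribed by this text.

```python
import math

KARMAN = 0.4  # von Karman constant (empirical, roughly 0.40)

def f(zeta):
    """Dimensionless shear function f(z/L).

    A Dyer-type empirical fit, used purely for illustration;
    the exact constants vary from study to study.
    """
    if zeta >= 0:                          # neutral or stable stratification
        return 1.0 + 5.0 * zeta
    return (1.0 - 16.0 * zeta) ** (-0.25)  # unstable stratification

def wind_speed(z, z0, u_star, L=math.inf):
    """Mean wind speed U(z), obtained by integrating
    dU/dz = (u_star / (KARMAN * z)) * f(z/L) upward from the
    roughness length z0 (where U = 0); trapezoid rule in ln z."""
    n = 2000
    s0, s1 = math.log(z0), math.log(z)
    ds = (s1 - s0) / n
    total = 0.0
    for i in range(n + 1):
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * f(math.exp(s0 + i * ds) / L)
    return (u_star / KARMAN) * total * ds
```

In the neutral limit (L → ∞, so f = 1 everywhere) this reduces to the familiar logarithmic profile U(z) = (u*/k) ln(z/z0); stable stratification (L > 0) slows the integrated profile's growth relative to shear near the ground, and unstable stratification (L < 0) does the opposite.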
But now suppose we need a systematic, satisfactorily complete understanding of a disturbed turbulent windflow, e.g. the flow over a hill or in a forest clearcut. By definition, the flow statistics are now to be regarded as varying not only with height but with one or more of the horizontal coordinates, U = U(x,y,z). There is an infinity of such flows, as there is an infinity of landscapes, so there is no possibility of collapsing "all disturbed flows to one" by some skilful choice of scales, though doubtless certain groups of disturbed flows may be unified. How, then, do we look for our "systematic," theoretically-based knowledge of a disturbed flow? One option is to make measurements, again exploiting dimensional analysis (to help in the choice of scales which effectively order the data). The other route is this:
We may solve the governing equations (subject to appropriate boundary and/or initial conditions) to determine the mean velocity vector, the standard deviation(s) of the velocity about the mean, etc. Usually it is necessary to solve the equations numerically, even though investigations of this sort are normally restricted to windflow problems possessing a degree of symmetry (e.g. a steady-state windflow at perpendicular incidence to an infinite ridge; this problem presents symmetry along the ridge). There is, however, a vast literature of analytical solutions to turbulent flow problems, mostly pre-dating the era of the computer.
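To make the idea of a numerical solution concrete, here is a minimal sketch of about the simplest such problem: the steady, horizontally-homogeneous momentum balance d/dz(K dU/dz) = 0, solved by relaxation on a log-spaced grid. The turbulent diffusivity K(z) = k u* z is simply prescribed here (how K ought to be chosen is precisely the closure question taken up below); the grid size, iteration count, and parameter values are arbitrary illustrative choices.

```python
import math

KARMAN, USTAR, Z0, ZTOP = 0.4, 0.4, 0.1, 20.0  # illustrative values
N = 64                                          # interior grid points

# Log-spaced grid from Z0 to ZTOP resolves the strong near-ground shear.
z = [Z0 * (ZTOP / Z0) ** (i / (N + 1)) for i in range(N + 2)]

def K(zz):
    # Prescribed eddy viscosity, K = k u* z (neutral surface layer);
    # in a real flow model K would come from the closure scheme.
    return KARMAN * USTAR * zz

# Boundary conditions: U = 0 at the roughness height Z0; the top value
# is pinned to the log-law result so the interior can be compared to it.
U = [0.0] * (N + 2)
U[-1] = (USTAR / KARMAN) * math.log(ZTOP / Z0)

# Gauss-Seidel relaxation of the conservative finite-difference form of
# d/dz( K dU/dz ) = 0 on the non-uniform grid.
for _ in range(5000):
    for i in range(1, N + 1):
        a = K(0.5 * (z[i] + z[i - 1])) / (z[i] - z[i - 1])  # lower face
        b = K(0.5 * (z[i] + z[i + 1])) / (z[i + 1] - z[i])  # upper face
        U[i] = (a * U[i - 1] + b * U[i + 1]) / (a + b)
```

For this choice of K the converged solution reproduces the logarithmic profile U(z) = (u*/k) ln(z/z0) at the interior grid points, which is a useful sanity check on the discretisation.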
Solving these averaged equations is a very old problem, and requires the introduction of "closure hypotheses," e.g. (in first-order closure, or "K-theory") the hypothesis that the turbulent fluxes are linearly proportional to (and thus fixed by) gradients in the mean (resolved) velocity. It then remains to fix the proportionality factor K, called the "eddy viscosity," which is a property of the state of motion rather than of the fluid. Over the past decades, more complete "second- and third-order closures" have been applied to many disturbed flows, including flow through vegetation. But these involve many assumptions and introduce many arbitrary constants, and rather little is gained over the judicious use of K-theory. Whatever the closure, it is (for now) necessary to "tune" a flow model to observations; the (now "calibrated") model then retains, one hopes, a measure of truth when "extrapolated" to situations beyond those observed.
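One small consequence of K-theory is worth verifying numerically. With the classical Prandtl mixing-length choice K = (k z)² |dU/dz| (a common first-order closure, again illustrative rather than prescribed by this text) and the neutral logarithmic wind profile, the modelled kinematic momentum flux K dU/dz comes out height-independent and equal to u*², consistent with the near-constant stress observed in the surface layer:

```python
KARMAN = 0.4  # von Karman constant

def mean_shear(z, u_star):
    # Shear of the neutral log-law profile U(z) = (u*/k) ln(z/z0);
    # note the shear is independent of the roughness length z0.
    return u_star / (KARMAN * z)

def eddy_viscosity(z, dUdz):
    # Prandtl mixing-length form: K = (k z)^2 |dU/dz|. K depends on
    # the state of motion (through dU/dz), not on the fluid itself.
    return (KARMAN * z) ** 2 * abs(dUdz)

def kinematic_momentum_flux(z, u_star):
    # K-theory: flux = K * dU/dz (kinematic, i.e. per unit density).
    dUdz = mean_shear(z, u_star)
    return eddy_viscosity(z, dUdz) * dUdz

for z in (2.0, 10.0, 50.0):
    # u*^2 = 0.16 at every height (up to floating-point rounding)
    print(z, kinematic_momentum_flux(z, u_star=0.4))
```

This internal consistency (a height-independent stress) is part of why K-theory, judiciously used, performs as well as it does in the surface layer; it is not, of course, a validation of the closure in disturbed flows.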