According to ASTM E2698-18 (page 7), the total image unsharpness can be calculated as the cubic root of the sum of cubes:

$$U_T = \frac{\sqrt[3]{U_g^3 + \left(2 \cdot SR_b^{detector}\right)^3}}{v}$$

where

- $v$ is the geometrical magnification,
- $SR_b^{detector}$ is the interpolated basic spatial resolution of the detector,
- $U_g$ is the geometrical unsharpness, $U_g = (v - 1) \cdot d$, where $d$ is the focal spot size.
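For concreteness, here is a minimal Python sketch of that calculation (the function name is mine, and treating the detector unsharpness as $2 \cdot SR_b^{detector}$ is my reading of the standard):

```python
def total_unsharpness_cube(v, d, srb):
    """ASTM-style total image unsharpness in the object plane (mm).

    v   -- geometrical magnification (dimensionless, v >= 1)
    d   -- focal spot size (mm)
    srb -- interpolated basic spatial resolution of the detector (mm)
    """
    ug = (v - 1.0) * d    # geometrical unsharpness
    ud = 2.0 * srb        # detector unsharpness, taken as 2 * SRb
    return (ug**3 + ud**3) ** (1.0 / 3.0) / v

# 100 um detector, 0.2 mm focal spot, magnification 2:
print(total_unsharpness_cube(2.0, 0.2, 0.1))  # ~0.126 mm
```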
But in "Industrial Radiography - Image forming techniques" (page 100) the square root from sum of squares is used instead of cubes (and division by multiplication factor is omitted):
Now, if I plot the curves according to the equations above with the "square" or "cube" method, they look like this for a 100 µm detector and a tube with a 0.2 mm focal spot:
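For reference, a short matplotlib sketch of what I am plotting. Note that I divide the "square" total by $v$ as well, so both curves are referred back to the object plane; that normalization is my assumption (otherwise the square-law curve has no interior minimum):

```python
import numpy as np
import matplotlib.pyplot as plt

v = np.linspace(1.0, 5.0, 500)   # geometrical magnification
d = 0.2                          # focal spot size, mm
ud = 2 * 0.1                     # detector unsharpness = 2 * SRb, mm

ug = (v - 1) * d                          # geometrical unsharpness
u_cube = np.cbrt(ug**3 + ud**3) / v       # "cube" method (ASTM-style)
u_square = np.sqrt(ug**2 + ud**2) / v     # "square" method, same normalization

plt.plot(v, u_cube, label="cube root of sum of cubes")
plt.plot(v, u_square, label="square root of sum of squares")
plt.xlabel("magnification v")
plt.ylabel("total unsharpness in object plane, mm")
plt.legend()
plt.show()
```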
My DGZfP learning book says that the "optimum" magnification factor can be computed as $(U_i/d) + 1$, where $U_i$ is the detector unsharpness (0.2 mm in our case) and $d$ is the focal spot size, which is 0.2 mm as well. That gives an optimum of $M = 2$, and everything is fine for both curves (although the estimated total unsharpness differs, obviously).
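As a sanity check on where the minima should fall, minimizing the object-plane form for a general exponent $n$ (my own derivation, not taken from either reference) gives:

$$U_T(v) = \frac{\sqrt[n]{\left((v-1)\,d\right)^n + U_i^n}}{v}, \qquad \frac{dU_T}{dv} = 0 \;\Rightarrow\; v_{opt} = 1 + \left(\frac{U_i}{d}\right)^{\frac{n}{n-1}}$$

So the "square" method ($n = 2$) gives $v_{opt} = 1 + (U_i/d)^2$ and the "cube" method ($n = 3$) gives $v_{opt} = 1 + (U_i/d)^{3/2}$; for $U_i = d$ all of these, including the book's $U_i/d + 1$, coincide at $v_{opt} = 2$.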
But the situation changes if I take an X-ray tube with a larger focal spot, for example 0.4 mm: now the two minima are slightly different:
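Plugging $U_i = 0.2$ mm and $d = 0.4$ mm into the expressions above: $v_{opt} = 1 + 0.5^2 = 1.25$ for squares versus $v_{opt} = 1 + 0.5^{3/2} \approx 1.35$ for cubes, which is consistent with the small shift between the two minima in the plot.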
The question is: "squares" or "cubes", which equation is "more correct" for digital radiography? How was it obtained, analytically or empirically? I could take some shots of a Duplex Wire IQI at different magnifications and check how theory and practice meet, but first I am curious about your opinion.