Suppose f(x) and g(x) are real-valued functions defined for all x > x_0 (where x_0 is a fixed positive real number). We write
f(x) = o(g(x))
if the limit of f(x)/g(x) as x approaches infinity is zero (that is, if f(x)/g(x) eventually becomes smaller than any given positive number). Examples: 10000x = o(x^2), log(x) = o(x), and x^n = o(e^x) for any fixed n. Notice that f(x) = o(g(x)) implies, and is stronger than, f(x) = O(g(x)).
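These limits can be checked numerically: for each example pair, the ratio f(x)/g(x) shrinks as x grows. The following is a minimal sketch (the helper `ratio` and the sample points are illustrative choices, not part of the definition):

```python
import math

def ratio(f, g, xs):
    """Evaluate f(x)/g(x) at increasing values of x to watch the limit."""
    return [f(x) / g(x) for x in xs]

xs = [10.0**k for k in range(1, 7)]  # x = 10, 100, ..., 10^6

# 10000x = o(x^2): the ratio is 10000/x, which tends to 0
r1 = ratio(lambda x: 10000 * x, lambda x: x**2, xs)

# log(x) = o(x)
r2 = ratio(math.log, lambda x: x, xs)

# x^3 = o(e^x), a concrete instance of x^n = o(e^x)
# (smaller x values here so that e^x stays in floating-point range)
r3 = ratio(lambda x: x**3, lambda x: math.exp(x), [10.0, 20.0, 40.0, 80.0])

for r in (r1, r2, r3):
    # each sequence of ratios is strictly decreasing toward 0
    assert all(a > b for a, b in zip(r, r[1:]))
    assert r[-1] < 0.05
```

No finite table of values proves a limit, of course; the sketch only illustrates the trend that the definition makes precise.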
We often use the little-oh notation this way:
f(x) = g(x) + o(h(x)).
This means that f(x) - g(x) = o(h(x)): the error made in using g(x) to approximate f(x) is negligible in comparison with h(x).
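As an illustrative instance of this usage (the example is ours, not from the text): sqrt(x^2 + 1) = x + o(1) as x approaches infinity, since the error sqrt(x^2 + 1) - x = 1/(sqrt(x^2 + 1) + x) tends to 0. A quick numerical check:

```python
import math

def f(x):
    """f(x) = sqrt(x^2 + 1), approximated below by g(x) = x."""
    return math.sqrt(x * x + 1.0)

def g(x):
    return x

# The error |f(x) - g(x)| should shrink toward 0 (here h(x) = 1).
errors = [abs(f(10.0**k) - g(10.0**k)) for k in range(1, 7)]
assert all(a > b for a, b in zip(errors, errors[1:]))  # errors decrease
```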
The little-oh notation was first used by E. Landau in 1909.