Exploring o-Calculus

I was inspired by this post on Knuth’s approach to using o-notation to simplify limits. It turns out that, independently, I had explored a similar variant in this writeup. For the first post on this blog, perhaps it is wise to revisit that notion, now cleaned up using o-notation.

We will first define o(1) to capture what we conventionally mean by a function with limit 0 as x \to 0.

Definition 1. Let f be a real-valued function defined on the interval (-r,r) \backslash \{0\} for some r > 0. We write f \in o(1) to mean:

For any \epsilon > 0, there exists \delta > 0 such that 0 < |t| < \delta implies |f(t)| < \epsilon.

Intuitively, this definition means that for any output error threshold \epsilon, there exists an input error threshold \delta such that inputs t \approx 0 (i.e. 0 < |t| < \delta) yield outputs f(t) \approx 0 (i.e. |f(t)| < \epsilon).

Indeed, this is conventionally equivalent to the notation \displaystyle \lim_{x \to 0} f(x) = 0. Furthermore, we will write f(x) = o(1) to mean f \in o(1), and

f(x) = g(x) + o(1)\quad \iff \quad f(x) - g(x) = o(1).
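
As a quick sanity check of these conventions, note that the identity function satisfies x = o(1): given \epsilon > 0, the choice \delta = \epsilon works. Consequently, for instance,

x + 5 = 5 + o(1),

since (x + 5) - 5 = x \in o(1).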

Using standard \epsilon\delta arguments, we can prove the following limit laws:

Theorem 1. The following properties hold: for any real k,

\begin{aligned} o(1) + o(1) &= o(1), \\ k \cdot o(1) &= o(1), \\ o(1) \cdot o(1) &= o(1).\end{aligned}

Proof. Fix f, g \in o(1) and \epsilon > 0. Then, for any k_1, k_2 > 0, there exist \delta_1, \delta_2 > 0 such that 0 < |t| < \min\{\delta_1,\delta_2\} =: \delta implies

|f(t)| < k_1 \cdot \epsilon \quad \text{and} \quad  |g(t)| < k_2 \cdot \epsilon.

For the first result,

|(f+g)(t)| \leq |f(t)| + |g(t)| < (k_1 + k_2) \cdot \epsilon,

so that setting k_1 = k_2 = 1/2 yields f+g \in o(1).

For the second result, if k = 0, then kf is identically zero and trivially in o(1). Otherwise,

|(kf)(t)| = |k||f(t)| < |k| \cdot k_1 \cdot \epsilon

so that setting k_1 = 1/|k| yields kf \in o(1).

For the third result,

|(fg)(t)| = |f(t)||g(t)| < k_1 \cdot k_2 \cdot \epsilon^2

so that setting k_1 = 1 and k_2 = 1/\epsilon yields fg \in o(1).
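
These laws already dispatch simple limits without any further \epsilon\delta work. For instance, using x \in o(1) from the earlier example,

3x + x^2 = 3 \cdot o(1) + o(1) \cdot o(1) = o(1) + o(1) = o(1).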

With a little bit more flexibility, we can also prove an analogous result for division, but more on that later. Many elementary limit-based results can be derived similarly, such as the squeeze theorem.

Theorem 2 (Squeeze Theorem). Let f,g,h be real-valued functions defined on (-r,r)\backslash \{0\} for some r > 0 such that f \leq g \leq h. If f,h \in o(1), then g \in o(1).

Proof Sketch. Perform epsilontics to derive

-k_1 \cdot \epsilon < f(x) \leq g(x) \leq h(x) < k_2 \cdot \epsilon

for suitably chosen \delta_1,\delta_2 > 0. Setting k_1 = k_2 = 1 yields the desired result.
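
As a classic application, take g(x) = x \sin(1/x). Since -|x| \leq x \sin(1/x) \leq |x| for x \neq 0, and \pm|x| \in o(1), the squeeze theorem gives

x \sin(1/x) = o(1).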

The power of analysing o(1) functions arises in defining and proving the more general limits used in practice.

Definition 2. Let f be a real-valued function defined on the interval (-r,r) \backslash \{0\} for some r > 0. We write \displaystyle \lim_{x \to 0} f(x) = L to mean f(x) = L + o(1).

We write f(x) = g(o(1)) to mean that there exist h \in o(1) and r > 0 such that f(x) = g(h(x)) for all x \in (-r,r) \backslash \{0\}.
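
For instance, taking g(u) = \frac{1}{1+u} and h(x) = x \in o(1), we may write

\displaystyle \frac{1}{1+x} = \frac{1}{1+o(1)},

which is precisely the kind of expression analysed in the next theorem.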

Theorem 3. \displaystyle \frac{1}{1+ o(1)} = 1 + o(1).

Proof. Let \displaystyle f(x) = \frac{1}{1+g(x)} for some g \in o(1). Fix \epsilon > 0. It suffices to show that \displaystyle f(x) - 1 = o(1). To that end, consider the bound

\displaystyle |f(x) - 1| = \left|\frac{1}{1+g(x)}-1\right| = \frac{1}{|1+g(x)|} \cdot |g(x)|.

We take advantage of g \in o(1) in two ways. For each k_i > 0, i = 1, 2, there exists \delta_i > 0 such that

0 < |x| <\delta_i \quad \Rightarrow \quad |g(x)| < k_i \cdot \epsilon.

For k_1, we have the bound -k_1 \cdot \epsilon < g(x) < k_1 \cdot \epsilon, so that |1+g(x)| \geq 1 - |g(x)| > 1 - k_1 \cdot \epsilon. Provided 1 - k_1 \cdot \epsilon > 0, this implies

\displaystyle \frac{1}{1+g(x)} < \frac{1}{1-k_1 \cdot \epsilon}.

Therefore, for 0 < |x| < \delta := \min\{\delta_1,\delta_2\}, the complete bound is given by

\displaystyle |f(x) - 1| = \frac{1}{|1+g(x)|} \cdot |g(x)| < \frac{1}{1-k_1 \cdot \epsilon} \cdot k_2  \cdot \epsilon.

To complete the proof, we then simply choose k_1 = 1/(2\epsilon) and k_2 = 1/2, so that the bound becomes \frac{1}{1-1/2} \cdot \frac{\epsilon}{2} = \epsilon. Thus, f(x) - 1 \in o(1), as required.
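
As a payoff, limits involving quotients now come nearly for free. For example, since -x \in o(1), Theorem 3 gives \frac{1}{1-x} = 1 + o(1), and hence

\frac{1+x}{1-x} = (1+o(1))(1+o(1)) = 1 + o(1), \quad \text{so} \quad \lim_{x \to 0} \frac{1+x}{1-x} = 1.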

The limit laws for such functions then generalise rather naturally. We illustrate using the multiplicativity of taking limits.

Theorem 4. Let f,g be real-valued functions defined on the interval (-r,r) \backslash \{0\} for some r > 0. Suppose \displaystyle \lim_{x \to 0} f(x) = L and \displaystyle \lim_{x \to 0} g(x) = M. Then \displaystyle \lim_{x \to 0} (f(x)g(x)) = LM.

Proof. Write f(x) = L+ o(1) and g(x) = M+o(1). Then

\begin{aligned} f(x)g(x) &= (L+o(1))(M+o(1)) \\ &= LM + L\cdot o(1) + M \cdot o(1) + o(1) \cdot o(1) \\ &= LM + o(1) + o(1) + o(1) \\ &= LM + o(1).\end{aligned}

Therefore, \displaystyle \lim_{x \to 0} (f(x)g(x)) = LM.
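
The same style of computation, combined with Theorem 3, sketches the division result promised earlier. Suppose additionally that M \neq 0 (which also guarantees that g is nonzero near 0). By the scalar law, M + o(1) = M(1 + o(1)), so

\begin{aligned} \frac{f(x)}{g(x)} &= \frac{L + o(1)}{M(1 + o(1))} \\ &= \frac{1}{M} \cdot (L + o(1))(1 + o(1)) \\ &= \frac{L}{M} + o(1),\end{aligned}

whence \displaystyle \lim_{x \to 0} \frac{f(x)}{g(x)} = \frac{L}{M}.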

We can even generalise to limits at any real c.

Definition 3. Let c be a real number and f be a real-valued function defined on (c-r,c+r) \backslash \{c\} for some r > 0. We write \displaystyle \lim_{x \to c} f(x) = L to mean \displaystyle \lim_{t \to 0} f(c+t) = L. Equivalently, f(c + o(1)) = L + o(1).
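
For example, to verify \displaystyle \lim_{x \to 2} x^2 = 4, substitute x = 2+t and expand:

(2+t)^2 = 4 + 4t + t^2 = 4 + 4 \cdot o(1) + o(1) \cdot o(1) = 4 + o(1).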

One can check that this does indeed satisfy the usual \epsilon\delta definition for limits. We can therefore define (local) continuity and differentiability. In the next post, we will use the local definition of differentiability to compute the derivative of x^n.

—Joel Kindiak, 18 Oct 24, 0745H
