The purpose of this post is to describe the strategy Neil Lutz and I used to prove lower bounds on the effective dimension of points on a line in the Euclidean plane.
In this post I will focus on the effective dimension of points. However, one of the primary motivations of this work is that lower bounds on effective dimension can be converted into lower bounds on the (classical) Hausdorff dimension of sets. For example, Neil and I used the lower bounds on the effective dimension of points on a line to give better bounds on certain classes of Furstenberg sets. In a future post I might describe how to convert effective theorems into classical theorems using the point-to-set principle of Jack Lutz and Neil Lutz.
For this post, we will restrict ourselves to proving the following theorem.
Theorem: Let $(a, b)$ be a slope-intercept pair of effective dimension $d$. Let $x$ be a point whose effective dimension, relative to $(a, b)$, is $s \geq \min\{d, 1\}$. Then

$$\dim(x, ax + b) \geq s + \min\{d, 1\}.$$
Lower bounds at fixed precisions
In this section, I will explain how to prove lower bounds on the complexity of a point on a line at a fixed precision. In the next section we’ll see how to use these ideas to give lower bounds on the effective dimension of a point on a line.
Let $(a, b)$ be a slope-intercept pair with $\dim(a, b) = d$.
We will make the further assumption that $d < 1$ (we will see how to remove this assumption in the next section). Let $x$ be a point such that $\dim^{a,b}(x) = s > d$, where $\dim^{a,b}$ denotes effective dimension relative to $(a, b)$.
Let $r$ be a sufficiently large natural number (precision) such that

$$K_r(a, b) \leq (d + \varepsilon)\, r,$$

where $\varepsilon > 0$ is a fixed small rational.
Note that infinitely many such $r$ exist by the definition of effective dimension (a lower limit of $K_r(a, b)/r$). Our goal is to prove that

$$K_r(x, ax + b) \geq (s + d)\, r - o(r). \qquad (1)$$
We in fact prove the stronger statement that

$$K_r(x, ax + b) \geq K_r(x \mid a, b) + K_r(a, b) - o(r). \qquad (2)$$
Remark: Using our assumption about the dimension of $x$ relative to $(a, b)$, it is clear that (2) implies (1). We should also note that, modulo the sublinear additive error, this lower bound is tight. Specifically,

$$K_r(x, ax + b) \leq K_r(x \mid a, b) + K_r(a, b) + O(\log r).$$
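To see where the matching upper bound comes from, note that $(x, ax + b)$ is computable from $x$ and $(a, b)$; a sketch of the standard subadditivity chain, in the notation of this post:

```latex
% (x, ax+b) is computable from (x, a, b); at finite precision the
% translation costs only O(log r) extra bits.
K_r(x, ax + b) \le K_r(x, a, b) + O(\log r)
% chain rule / subadditivity of complexity at precision r:
               \le K_r(a, b) + K_r(x \mid a, b) + O(\log r).
```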
Therefore, we will show the slightly unintuitive fact that the point (x, ax + b) contains all of the information of the line (a,b) and the point x.
To prove (2) it suffices to establish the following lemma.
Lemma 1: Let $(x, a, b)$ and precision $r$ satisfy the previous properties. Suppose $(u, v)$ is a line such that

$$|ux + v - (ax + b)| \leq 2^{-r} \qquad (3)$$

and

$$K_r(u, v) \leq (d + \varepsilon)\, r. \qquad (4)$$

Then $\|(u, v) - (a, b)\| \leq 2^{-r + O(\varepsilon r)}$.
Remark: Assuming Lemma 1, we can quickly deduce (2) as desired. Given (a program producing) $(x, ax + b)$ to precision $r$, search for the first line $(u, v)$ satisfying (3) and (4). Such a line must exist, since $(a, b)$ itself satisfies both conditions by our choice of $r$. Lemma 1 shows that any such line is very close to $(a, b)$. Therefore, with an additional sublinear number of bits, we can produce $(a, b)$ from $(u, v)$, and we have (2) as desired.
Thus, it suffices to prove Lemma 1. One of the standard techniques of Kolmogorov complexity is the incompressibility argument. The idea is to show that some property P holds for a random string w by proving that if it didn’t, we could compress w (a contradiction, since w is random). The proof of Lemma 1 follows this pattern. Essentially we are arguing that if there were many lines (u, v) intersecting (x, ax + b), then x wouldn’t be random relative to (a,b).
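As a toy illustration of the incompressibility phenomenon (using a practical compressor as a crude stand-in for Kolmogorov complexity, which is of course uncomputable — the analogy, and all values, are mine):

```python
import os
import zlib

# A counting argument shows that fewer than 2^k strings have complexity
# below k, so almost every string is incompressible. A real compressor
# exhibits the same phenomenon: random bytes do not shrink under zlib,
# while highly structured data compresses dramatically.
random_data = os.urandom(4096)      # overwhelmingly likely to be incompressible
structured = b"ab" * 2048           # very regular, hence very compressible

compressed_random = len(zlib.compress(random_data))
compressed_structured = len(zlib.compress(structured))

print(compressed_random, compressed_structured)
```

The incompressibility argument runs this logic in reverse: if a supposedly random string could be described briefly, that contradicts its randomness.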
To use this strategy, we rely on the simple geometric fact that two non-parallel lines intersect at exactly one point. Therefore, intuitively, to compute $x$, with oracle access to $(a, b)$, it suffices to give any line $(u, v)$ which intersects $(x, ax + b)$. As $x$ was chosen to have high complexity relative to $(a, b)$, this implies that any such line $(u, v)$ must have high complexity as well. Our assumption that $s > d$ implies that, for any such line,

$$K_r(u, v \mid a, b) \geq K_r(x \mid a, b) - O(\log r) \geq (s - \varepsilon)\, r > (d + \varepsilon)\, r$$

for $\varepsilon$ sufficiently small.
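The "two non-parallel lines meet in one point" intuition can be made concrete with a tiny numerical sketch (all values below are arbitrary illustrative choices): any line through $(x, ax + b)$ with a different slope pins down $x$ exactly.

```python
# Given any line (u, v) passing through (x, a*x + b) with u != a,
# we can solve u*x + v = a*x + b for x.
a, b = 0.4, 0.2        # slope-intercept pair (made-up values)
x = 0.7                # the point whose complexity we care about
y = a * x + b          # the observed point (x, ax + b)

u = -0.3               # any other slope (u != a)
v = y - u * x          # intercept chosen so (u, v) also passes through (x, y)

x_recovered = (v - b) / (a - u)   # intersection of the two lines
print(x_recovered)                # recovers x = 0.7
```

This is the sense in which "giving any intersecting line" suffices to compute $x$ from $(a, b)$.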
There is a major issue in formalizing the previous discussion. We aren’t working with infinite objects, but with Euclidean points up to finite precision. Thus we aren’t dealing with the intersection of two lines, but with the intersection of two $2^{-r}$ neighborhoods of lines (i.e., two tubes). Unlike lines, the intersection of two tubes has area. The size of this intersection depends on how orthogonal the two lines are. The key fact that allows us to continue with our idea is that the area scales inversely with the distance between the two lines. Formally,

$$A = \Theta\!\left(\frac{2^{-2r}}{|a - u|}\right), \qquad (5)$$

where $A$ is the area of intersection of the $2^{-r}$-tubes around the lines $(a, b)$ and $(u, v)$. Writing $t = -\log |a - u|$, equation (5) says that $A = \Theta(2^{-2r + t})$.
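A quick numerical sanity check of this scaling (a sketch with arbitrary parameters, taking the "tube" around a line to be the set of points within vertical distance $2^{-r}$ of it):

```python
# Riemann-sum check that the intersection area of two 2^-r tubes scales
# like 2^(-2r) / |a - u|. For each x, the vertical overlap of the tubes
# around y = a*x + b and y = u*x + v has length
# max(0, 2w - |(a-u)x + (b-v)|), where w = 2^-r is the tube half-width.
r = 6
w = 2.0 ** (-r)
a, b = 0.5, 0.1
u, v = -0.25, 0.1     # same intercept, so the overlap is centered at x = 0

delta = abs(a - u)
N = 200_000
lo, hi = -1.0, 1.0    # the overlap region lies well inside [-1, 1]
dx = (hi - lo) / N
area = sum(
    max(0.0, 2 * w - abs((a - u) * (lo + i * dx) + (b - v))) for i in range(N)
) * dx

predicted = 4 * w * w / delta   # exact area of the parallelogram-shaped overlap
print(area, predicted)
```

Halving the slope difference doubles the area, which is exactly why nearly parallel tubes localize $x$ so poorly.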
It is not difficult to translate (5) into the language of Kolmogorov complexity, which yields our next lemma.
Lemma 2: Let $(a, b)$ and $(u, v)$ be two lines and $x$ be a point such that

$$|ux + v - (ax + b)| \leq 2^{-r}.$$

Then

$$K_{r - t}(x \mid a, b) \leq K_r(u, v \mid a, b) + O(\log r),$$

where $t = -\log |a - u|$.
We are now in a position to prove Lemma 1, and thereby complete the proof of the lower bound (2).
Proof of Lemma 1: Let $(u, v)$ satisfy (3) and (4), and let $t = -\log |a - u|$. Therefore, by Lemma 2,

$$K_{r - t}(x \mid a, b) \leq K_r(u, v \mid a, b) + O(\log r).$$
To bound the RHS, we note that the oracle $(a, b)$ gives complete information about $(u, v)$ up to precision $t$. Therefore,

$$K_r(u, v \mid a, b) \leq K_r(u, v) - K_t(u, v) + O(\log r).$$

Since any two lines within $2^{-t}$ of each other have the same complexity at precision $t$ up to an $O(\log r)$ term, we also have $K_t(u, v) \geq K_t(a, b) - O(\log r)$.
Recall that we assumed that $\dim(a, b) = d$, so that $K_t(a, b) \geq (d - \varepsilon)\, t$ for all sufficiently large $t$. Using this and our assumption (4) about the complexity of $(u, v)$, we conclude that

$$K_{r - t}(x \mid a, b) \leq (d + \varepsilon)\, r - (d - \varepsilon)\, t + O(\log r).$$
Since $\dim^{a,b}(x) = s$, the LHS is at least $(s - \varepsilon)(r - t)$. Since $s > d$, this implies that

$$r - t \leq \frac{2\varepsilon}{s - d - 2\varepsilon}\, r + O(\log r) = O(\varepsilon r),$$
and the conclusion follows.
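In summary (modulo the precise error terms), the proof of Lemma 1 is a single chain of inequalities, which I record here in the notation of this section:

```latex
(s - \varepsilon)(r - t)
  \le K_{r-t}(x \mid a, b)                      % \dim^{a,b}(x) = s
  \le K_r(u, v \mid a, b) + O(\log r)           % Lemma 2
  \le K_r(u, v) - K_t(u, v) + O(\log r)         % (a,b) determines (u,v) to precision t
  \le (d + \varepsilon) r - (d - \varepsilon) t + O(\log r)   % (4) and \dim(a,b) = d
```

Rearranging gives $(s - d - 2\varepsilon)(r - t) \leq 2\varepsilon t + O(\log r)$, so $r - t = O(\varepsilon r)$ whenever $s > d$.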
Lower bound for effective dimension
In this section we use the work of the previous section to prove the corresponding lower bound for the effective dimension of (x, ax + b). In order to do this, we need to prove a lower bound for the complexity of our point at every precision. In the previous section, we made the assumption that the complexity of (a, b) was essentially minimized at precision r. Obviously, at an arbitrary precision this property won’t necessarily hold.
The final technique needed is to, at an arbitrary precision r, artificially lower the complexity of (a, b) at r so that the minimization property holds (and so we can apply the results of the previous section).
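As a loose toy analogue of this trick (the compressor's preset dictionary playing the role of conditioning; zlib's `zdict` parameter is real, but the analogy with conditional complexity is mine):

```python
import os
import zlib

# Toy analogue of "artificially lowering complexity by conditioning":
# a string that is incompressible on its own becomes highly compressible
# once the compressor is handed it as side information, just as the
# conditional complexity K(w | w') can be far smaller than K(w).
w = os.urandom(800)                     # behaves like a high-complexity string

plain = len(zlib.compress(w))           # no side information: about len(w)

cobj = zlib.compressobj(zdict=w)        # condition on w as a preset dictionary
conditioned = len(cobj.compress(w) + cobj.flush())

print(plain, conditioned)
```

In the proof, the oracle $A$ plays the role of the dictionary: it hands out exactly the bits of $(a, b)$ we want to make free, lowering the complexity at precision $r$ to the target value.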
Let $r$ be any sufficiently large precision. By the “continuity” of $K_t(a, b)$ as a function of $t$ (it grows by at most $O(1)$ bits per bit of precision), we can choose the largest $m \leq r$ such that $K_m(a, b) \leq d\, r$; by continuity, $K_m(a, b) = d\, r + O(1)$ (unless $m = r$, in which case there is nothing to do).
Let $A$ be (an oracle encoding) the bits of the binary expansions of $a$ and $b$ between precisions $m$ and $r$. That is, the universal machine, given the first $m$ bits of $(a, b)$ and oracle access to $A$, can compute $(a, b)$ to precision $r$. In particular, $K^A_r(a, b) \leq K_m(a, b) + O(\log r) = d\, r + O(\log r)$.
At this point there are two equivalent ways to tie this into the work of the previous section. I will follow the notation Neil and I used in the paper (instead of using oracles, one can condition everything on the string $A$).
Since there is (almost) no information in $A$ that isn’t in $(a, b)$,

$$K^A_t(x \mid a, b) \geq K_t(x \mid a, b) - O(\log r)$$

for all precisions $t$. Therefore, relative to this oracle $A$, we can use the results of the previous section to conclude that

$$K^A_r(x, ax + b) \geq K^A_r(x \mid a, b) + K^A_r(a, b) - o(r) \geq (s + d)\, r - o(r).$$
The LHS is at most $K_r(x, ax + b) + O(1)$, since additional oracle access can only decrease complexity.
We can therefore conclude that, for all sufficiently large $r$,

$$K_r(x, ax + b) \geq (s + d)\, r - o(r),$$

so $\dim(x, ax + b) \geq s + d$,
and the proof is complete.
Remark: When the dimension of $(a, b)$ is greater than 1, we can lower the complexity of $(a, b)$ at precision $r$ to $\eta\, r$ for a rational $\eta$ arbitrarily close to 1. This is the source of the $\min\{d, 1\}$ term in the theorem.
To finish the post, I wanted to quickly recap the strategy Neil and I introduced.
- At precision r, decrease the complexity of (a, b) to roughly $\min\{1, \dim(a, b)\}\cdot r$ using conditional complexity.
- The intersection lemma (Lemma 2 of this post) gives a lower bound on the complexity of any other line intersecting (x, ax + b).
- If x, relative to (a, b), has dimension at least min{1, dim(a, b)}, this lower bound implies that the only lines intersecting (x, ax + b) are very close to (a, b).
- Therefore, given (x, ax + b), we can compute (x, a, b) with a sublinear amount of extra information.
