## The dimension spectra of planar lines

I recently uploaded a paper solving the dimension spectrum conjecture. I wanted to write a post explaining some of the intuition behind the proof.

The theme of the conjecture is to better understand the randomness of points on a line. Let (a, b) be a slope-intercept pair. The dimension spectrum of a line (a, b) is the set of all effective dimensions of points on that line, i.e.,

$\text{sp}(L_{a,b}) = \{\text{dim}(x, ax + b) \, \vert \, x \in [0, 1]\}$.

In the early 2000s, Jack Lutz conjectured that the dimension spectrum of any line in the plane contains a unit interval.

Remark: This is the best one could hope for, in general. There are lines whose dimension spectrum is exactly an interval of length one. This is the case, for example, whenever (a, b) is ML-random (when considered as a point in $\mathbb{R}^2$).

In my paper, I showed that Lutz’s conjecture is true. I proved that, for any line (a, b),

$\text{sp}(L_{a,b}) \supseteq [d, 1 + d]$,

where $d = \min\{\text{dim}(a,b), 1\}$. Specifically, I proved that, for any line (a, b) and any real s in [0, 1], there is a point x such that

$\text{dim}^{a,b}(x) = s$

and

$\text{dim}(x, ax + b) = s + d,$

where $d = \min\{\text{dim}(a,b), 1\}$.

My proof makes essential use of the framework Neil Lutz and I developed (described in my previous post). I will assume familiarity with these ideas, and describe the new ideas needed for the dimension spectrum conjecture.

### Key difficulty

We have a general method for proving lower bounds on the dimension of a point on a line. Unfortunately, this method only works when the dimension of x is at least as large as the dimension of (a, b). To see where this breaks down for points of low dimension, recall our intersection bound

$K^{a,b}_{r-t}(x) \leq K^{a,b}_r(u,v) + o(r)$,

for any line (u, v) that passes through the point (x, ax + b), where

$t = -\log\|(a,b) - (u, v)\|$.

As previously described, if we consider only lines (u, v) of complexity at most dr, the intersection bound tells us that

$K^{a,b}_{r-t}(x) \leq d(r - t) + o(r)$,

where d is the dimension of (a, b). Let s be the dimension of x relative to (a, b). When s is greater than d, we saw that the intersection bound gives us a contradiction unless r – t is very small. However, when s is less than d, the intersection bound tells us essentially nothing:

$s(r - t) \leq d(r - t) + o(r)$,

which is trivially true.
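To make the contrast concrete, here is the same substitution carried out in the case s > d, where the bound does bite; the symbols are exactly those of the intersection bound above.

```latex
% Case s > d: substitute K^{a,b}_{r-t}(x) = s(r - t) - o(r) into the
% intersection bound K^{a,b}_{r-t}(x) <= d(r - t) + o(r):
\begin{align*}
  s(r - t)       &\leq d(r - t) + o(r)\\
  (s - d)(r - t) &\leq o(r)\\
  r - t          &\leq \tfrac{1}{s - d}\, o(r) = o(r).
\end{align*}
% So any candidate line (u, v) of complexity at most dr must satisfy
% t = -log||(a,b) - (u,v)|| >= r - o(r), i.e., it is extremely close
% to (a, b). When s < d the factor (s - d) is negative, and no such
% conclusion follows.
```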

### Constructing the point (for lines of low dimension)

In this section, I will describe how to construct a point of low complexity which overcomes the obstacle described in the previous section. Fix a line (a, b) of effective dimension d. For now, we will assume that d < 1. For simplicity, I will describe how to define x up to a fixed precision r. Let r be a precision such that

$K_r(a, b) = dr + o(r)$.

Note that infinitely many such precisions r exist. Our goal is to define the first r bits of x such that

$K^{a,b}_r(x) = sr$

and

$K_r(x, ax + b) = (s + d)r + o(r)$.

The key idea needed to overcome the obstacle of the previous section is to encode information about the line into our point. Specifically, we define x = yz, the concatenation of the strings y and z, where

• y is a string of length sr which is random relative to (a, b), and
• z is the string consisting of the first r – sr bits of the binary representation of the slope a.

It is not difficult to see that the complexity of x relative to (a, b) is sr. In fact, since the first sr bits of x are random relative to (a, b), for any t < sr,

$K^{a,b}_t(x) = t - O(1).$
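As a concrete sketch of the construction (hypothetical values throughout: genuine randomness relative to (a, b) is not computable, so `os.urandom` stands in for the random prefix y):

```python
import os
from fractions import Fraction

def first_bits(val, n):
    """First n binary digits of val in [0, 1), as a bit string."""
    return format(int(val * 2**n), f"0{n}b")

def construct_x(a, r, s):
    """Build x = yz at precision r: y is sr 'random' bits (os.urandom is
    only a stand-in for bits random relative to (a, b)); z is the first
    r - sr bits of the slope a."""
    sr = int(s * r)
    nbytes = (sr + 7) // 8
    y = format(int.from_bytes(os.urandom(nbytes), "big"), f"0{8 * nbytes}b")[:sr]
    z = first_bits(a, r)[: r - sr]
    return y + z

a = Fraction(2, 3)                  # hypothetical slope in [0, 1)
x_bits = construct_x(a, r=64, s=0.25)
print(len(x_bits))                  # 64: sr = 16 random bits + 48 bits of a
```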

To see why this is useful, consider any line (u, v) of complexity at most dr which passes through (x, ax + b). Our intersection bound implies that

$K^{a,b}_{r-t}(x) \leq K^{a,b}_r(u, v) \leq d(r - t) + o(r).$

Note that when (u, v) is close to (a, b), i.e., when

$t := -\log\|(a,b) - (u,v)\| > r - sr$,

we have r – t < sr. In this case, x is random at precision r – t, and so the intersection bound implies that

$r - t - O(1) \leq d(r - t) + o(r).$

Note that this is false unless r – t is o(r). In other words, when (u, v) is close to (a, b) we are essentially in the high complexity case we know how to deal with.
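Spelling out the arithmetic of this case (using d < 1):

```latex
% Randomness of x at precision r - t gives K^{a,b}_{r-t}(x) = r - t - O(1),
% so the intersection bound forces
\begin{align*}
  r - t - O(1)   &\leq d(r - t) + o(r)\\
  (1 - d)(r - t) &\leq o(r)\\
  r - t          &\leq \tfrac{1}{1 - d}\, o(r) = o(r),
\end{align*}
% the same conclusion as in the high-complexity case: any such close
% candidate line (u, v) must in fact be 2^{-r + o(r)}-close to (a, b).
```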

Thus, to complete the proof that

$K_r(x, ax + b) \geq (d + s)r - o(r)$

it suffices to show that we need only consider candidate lines within $2^{-r + sr}$ of (a, b). To see why this is true, notice that, given (x, ax + b), we know

• The first r bits of x, and thus the first r – sr bits of a.
• The first r bits of ax + b, which, combined with the first r – sr bits of x and the first r – sr bits of a, give the first r – sr bits of b.

Thus, we can compute the first r – sr bits of (a, b), and we can restrict our search to lines (u, v) within $2^{-r + sr}$ of (a, b). This shows that, given (x, ax + b) and a sublinear number of extra bits, we can compute (x, a, b), i.e.,

$K_r(x, ax + b) \geq K_r(x, a, b) - o(r).$

It is easy to see that the RHS is (d + s)r – o(r): by symmetry of information, $K_r(x, a, b) = K_r(a, b) + K^{a,b}_r(x) - o(r) = dr + sr - o(r)$. Moreover, since

$K_r(x, ax + b) \leq K_r(x, a, b),$

we have the equality we needed.
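Here is a toy numeric check of the recovery step, in exact rational arithmetic; the specific values of a, b, x, r, and sr are made up for illustration. Knowing a and x only to r – sr bits still pins down b to within a constant multiple of $2^{-(r - sr)}$:

```python
from fractions import Fraction

def bits(val, n):
    """Integer whose binary digits are the first n bits of val in [0, 1)."""
    return int(val * 2**n)

# Hypothetical concrete values (not from the paper).
a = Fraction(182, 256)   # slope
b = Fraction(93, 256)    # intercept
x = Fraction(211, 256)   # point in [0, 1]
r, sr = 8, 3             # precision r; the random prefix y has length sr

# Truncations available to the observer: a and x to r - sr bits each.
a_trunc = Fraction(bits(a, r - sr), 2**(r - sr))
x_trunc = Fraction(bits(x, r - sr), 2**(r - sr))

y_obs = a * x + b                    # observed second coordinate

# Approximate b from the truncated data.
b_approx = y_obs - a_trunc * x_trunc

# Error = (a - a_trunc)x + a_trunc(x - x_trunc) <= 2 * 2^-(r - sr),
# so the first r - sr bits of b are determined up to O(1) bits.
err = abs(b_approx - b)
print(err <= Fraction(2, 2**(r - sr)))   # True
```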

Remark: For the full proof, we need to construct a point x (an infinite binary sequence), not just a finite string. To do this, we take a sufficiently fast-growing sequence of precisions

$r_1, r_2,\ldots$

such that the complexity of (a, b) at each precision $r_i$ is

$K_{r_i}(a, b) = dr_i + o(r_i).$

We do the same procedure described above at each such precision. Of course, we need to prove the lower bound

$K_r(x, ax + b) \geq (d + s)r - o(r)$,

for every precision r, not just at the precisions of our chosen sequence. This takes a bit of work, but follows essentially from the techniques described in the previous post.

### Lines of high dimension

The previous section described the proof for lines of low dimension. While the key idea of encoding the information of the line into the point is still useful, there is a further obstacle in this case. Informally, the problem is that, when dim(a, b) > 1, we don’t get the upper bound

$K_r(x, ax + b) \leq (1 + s)r + o(r)$

for free. Thus, if we followed the same proof as the previous section, we would prove that

$K_r(x, ax + b) \geq (1 + s)r - o(r)$,

but not the equality needed to establish our theorem. I won’t go into the full details of the proof for this case, but the general idea is to use a non-constructive argument.

Let (a, b) be a line with dimension d > 1. Let r be a precision at which the complexity rate of (a, b) is minimized, i.e.,

$K_r(a, b) = dr + o(r)$.

Let y be a string of length sr which is random relative to (a, b). When we take x to be y concatenated with the string of r – sr zeros, we have the upper bound we want:

$K_r(x, ax + b) \leq K_r(x) + K_r(ax + b \, \vert \, x) \leq sr + r.$

When we take x to be the string y concatenated with the string of the first r – sr bits of a, we have the lower bound we want:

$K_r(x, ax + b) \geq (1 + s)r - o(r)$.

Since the Kolmogorov complexity function is “continuous”, we can use (an approximate, discrete version of) the intermediate value theorem to show that there is some x, containing a subset of the information of a, which satisfies the equality we need.
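The discrete intermediate value principle underlying that step can be isolated: an integer-valued function that moves by at most c per step cannot jump over any value between its endpoints. A minimal sketch, with made-up values standing in for the complexities $K_r(x, ax + b)$ as bits of a are swapped into z:

```python
def discrete_ivt(f, n, target, c):
    """If |f(k+1) - f(k)| <= c for all k, and target lies between f(0)
    and f(n), return some k with |f(k) - target| <= c."""
    assert min(f(0), f(n)) <= target <= max(f(0), f(n))
    for k in range(n + 1):
        if abs(f(k) - target) <= c:
            return k
    raise AssertionError("impossible if f really moves by <= c per step")

# Illustrative stand-ins for the complexities of candidate points
# x_0, ..., x_6 (not actual Kolmogorov complexities).
vals = [40, 42, 45, 44, 47, 50, 52]
k = discrete_ivt(lambda i: vals[i], len(vals) - 1, 46, c=3)
print(k, vals[k])   # 2 45
```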
