Projection theorems using effective dimension

In this post I will describe how to use effective dimension to investigate how the fractal dimension of a set changes when it is projected onto a linear subspace. The first theorem on this behavior is Marstrand's celebrated projection theorem.

Theorem (Marstrand's projection theorem): Let E \subseteq \mathbb{R}^2 be an analytic set. Then for almost every angle \theta,

\text{dim}_H(p_\theta E) = \min\{1, \text{dim}_H(E)\},

where p_\theta : \mathbb{R}^2 \to \mathbb{R} is the projection map

p_\theta(x, y) = x\cos \theta + y\sin\theta.

Note that, since the projection map p_\theta is Lipschitz,

\text{dim}_H(p_\theta E) \leq \text{dim}_H(E).

Also, since p_\theta maps into \mathbb{R}, we have

\text{dim}_H(p_\theta E) \leq 1.

Thus, Marstrand’s theorem shows that, for almost every angle \theta, the Hausdorff dimension of p_\theta E is the maximum possible.
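As a quick sanity check of the statement, here is a small numerical experiment. It uses box-counting as a crude, finite-precision proxy for Hausdorff dimension, and the set, depth, and angles are illustrative choices of mine, not part of any proof. We take E = C \times \{0\}, where C is the middle-thirds Cantor set of dimension \log 2 / \log 3 \approx 0.631, and estimate the dimension of p_\theta E for a few angles:

```python
import math

def cantor_points(depth):
    """Left endpoints of the middle-thirds Cantor set intervals at a given depth."""
    pts = [0.0]
    for _ in range(depth):
        pts = [p / 3 for p in pts] + [2 / 3 + p / 3 for p in pts]
    return pts

def box_dimension_estimate(values, eps):
    """Crude box-counting estimate: log(#occupied eps-boxes) / log(1/eps)."""
    boxes = {math.floor(v / eps) for v in values}
    return math.log(len(boxes)) / math.log(1 / eps)

depth = 12
E = [(x, 0.0) for x in cantor_points(depth)]  # E = C x {0}, dim_H(E) = log 2 / log 3

for theta in (0.3, 0.9, 1.2):
    # p_theta(x, y) = x cos(theta) + y sin(theta)
    proj = [x * math.cos(theta) + y * math.sin(theta) for (x, y) in E]
    est = box_dimension_estimate(proj, 3.0 ** -depth)
    print(f"theta = {theta}: dimension estimate ~ {est:.3f}")
```

For every \theta with \cos\theta \neq 0, p_\theta E is just a scaled copy of C, so the estimates hover near \log 2/\log 3 \approx 0.631 (up to finite-depth noise), matching \min\{1, \text{dim}_H(E)\}.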

Remark: We cannot, in general, drop the "almost every" qualifier. It is possible to construct a Borel set E of dimension 1, say, such that

\text{dim}_H(p_\theta E) < 1,

for uncountably many angles \theta. There is great interest in proving bounds on the set of "exceptional" directions not satisfying the conclusion of Marstrand's projection theorem. Kaufman gave the first bounds in this direction, but sharp bounds remain an important open problem.

Remark: Marstrand also showed that, when the dimension of E is strictly greater than 1, the projection of E has positive measure for almost every angle. We do not yet have a proof of this theorem using algorithmic techniques (and I think this is an important problem that needs a solution).

Remark: Marstrand's theorem can be generalized to sets in \mathbb{R}^n for arbitrary n, and to projections onto subspaces of arbitrary dimension m < n. In this post, I will restrict to subsets of \mathbb{R}^2 and projections onto lines.

In this post, I will describe the techniques Neil Lutz and I used to give a proof of Marstrand's theorem using effective dimension.

It turns out that these techniques can also be used to prove two new theorems on the dimension of projected sets. The first shows that Marstrand's theorem holds for any set whose Hausdorff and packing dimensions agree (not necessarily analytic). The second gives lower bounds on the packing dimension of projected sets. Tuomas Orponen recently gave a new proof of these two theorems using classical techniques. I will explore these new theorems, and related work, in a future post.

Effective dimension of projected points

The typical approach to proving (classical) theorems about the fractal dimension of sets using effective dimension is to prove the analogous bound for the effective dimension of points and then apply the point-to-set principle (discussed in a later section). We follow this pattern, and begin with effective dimension.

Remark: It turns out that it is not so simple to convert the results of this section into a result on the dimension of sets. Still, it makes sense to start with bounding the effective dimension of projected points. In the next section, we will discuss how to convert this into a bound on the Hausdorff dimension of projected sets.

In the effective setting, bounds on the dimension of projected points follow almost verbatim from the techniques Neil and I used for points on lines (as described in this previous post). I will briefly recall the main ideas.

We first state the pointwise analog of Marstrand's theorem. Let z \in \mathbb{R}^2 be a point. Let \theta \in [0, 2\pi) be an angle which is random relative to z. Finally, let

d = \min\{\text{dim}^\theta(z), 1\}.

Our goal is to show

\text{dim}^\theta(p_\theta z) = d \ \ \ \ (1).

Remark: Note that we are bounding the dimension of p_\theta z relative to \theta. It is also important to note that this bound is the best possible, since

\text{dim}^\theta(p_\theta z) \leq \text{dim}^\theta(z),

and

\text{dim}^\theta(p_\theta z) \leq 1.

The proof of (1) essentially follows from the techniques Neil and I developed for points on lines. For full details, see this previous post, but recall the main ideas: For every sufficiently large precision r we will do the following.

  1. Use conditional complexity to construct an oracle A which reduces the complexity of z, so that K^{A}_r(z) = \eta r + O(\log r), where \eta < d. Note that this construction does not involve \theta.
  2. Using the linearity of p_\theta, we show that the complexity of any point w such that p_\theta(z) = p_\theta(w) is lower bounded by the complexity of \theta (this corresponds to the “intersection bound” for lines). Thus, since \theta was chosen to be random, any other point w must have high complexity.
  3. Since \theta was chosen to be random relative to z, we use step 2 to show that z is essentially the only point of complexity less than \eta r which projects to p_\theta(z). Thus, a simple enumeration allows us to compute z given \theta and p_\theta(z).

The formal statement corresponding to step 2 is encapsulated by the following lemma.

Lemma 1: Let z \in \mathbb{R}^2 and \theta \in [0, 2\pi). Let w \in \mathbb{R}^2 such that p_\theta(z) = p_\theta(w). Let r \in \mathbb{N} and A be any oracle. Then

K^{A, z}_{r-t}(\theta) \leq K^{A,z}_r(w) + o(r),

where t = -\log\|w - z\|.

As with points on lines, the purpose of this lemma is to give lower bounds on the complexity of any point w which projects to p_\theta(z).

For the first step, recall that we can construct an oracle A such that

K^{A}_r(z) = \eta r + O(\log r).

An important property of this construction is that A does not contain any information not already contained in z, and therefore, if \theta is random relative to z,

K^{A, z}_{r-t}(\theta) = K^{z}_{r-t}(\theta) = r - t - O(\log r),

for all t < r. As in the case of lines, this combined with Lemma 1 shows that any w such that

  1. p_\theta(z) = p_\theta(w), and
  2. K^{A}_r(w) \leq \eta r

must be very close to z.
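To make "very close" quantitative, here is a sketch of the estimate, suppressing the exact error terms (which are handled carefully in the paper):

```latex
\begin{align*}
r - t - O(\log r) = K^{A, z}_{r-t}(\theta)
  &\leq K^{A, z}_r(w) + o(r) && \text{(Lemma 1)}\\
  &\leq K^{A}_r(w) + o(r) \\
  &\leq \eta r + o(r) && \text{(assumption on } w\text{)}.
\end{align*}
```

Rearranging gives t \geq (1 - \eta) r - o(r), and so \|w - z\| = 2^{-t} \leq 2^{-(1-\eta) r + o(r)}; that is, w must agree with z to roughly the first (1 - \eta) r bits of precision.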

Therefore we can proceed to step 3, allowing us to conclude that

K^{A, \theta}_r(z) \leq K^{A, \theta}_r(p_\theta z) + o(r).

It is important to note that this does not immediately imply what we want (equation (1)). We still need to show that

K^{A, \theta}_r(z) = K^A_r(z) - o(r) = \eta r - o(r).

Fortunately, this follows from a standard symmetry of information argument. Since \theta is random relative to z, \theta gives no information about z. Since A contains only information contained in z, there is no mutual information between oracles A and \theta, and the conclusion follows.
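A sketch of this symmetry of information argument, at the level of O(\log r) error terms:

```latex
\begin{align*}
K^{A, \theta}_r(z)
  &\geq K^{A}_r(z) - \big(K^{A}_r(\theta) - K^{A, z}_r(\theta)\big) - O(\log r)
    && \text{(symmetry of information rel. } A\text{)}\\
  &\geq K^{A}_r(z) - \big((r + O(\log r)) - (r - O(\log r))\big) - O(\log r)\\
  &= K^{A}_r(z) - O(\log r) = \eta r - o(r),
\end{align*}
```

where the second line uses K^{A}_r(\theta) \leq r + O(\log r) together with K^{A, z}_r(\theta) = K^{z}_r(\theta) = r - O(\log r), which holds since A contains only information already in z and \theta is random relative to z.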

The point-to-set principle

In this section, I describe how to convert the lower bound on the effective dimension of projected points to lower bounds on the fractal dimensions of projected sets. The bridge between the classical and effective worlds, proved by Jack Lutz and Neil Lutz, is called the point-to-set principle.

Theorem 2 (Point-to-set principle): Let E \subseteq \mathbb{R}^n. Then

\text{dim}_H(E) = \min\limits_{A\subseteq\mathbb{N}} \sup\limits_{x\in E} \text{dim}^A(x),

and

\text{dim}_P(E) = \min\limits_{A\subseteq\mathbb{N}} \sup\limits_{x\in E} \text{Dim}^A(x).

Oracles testifying to the above characterizations are called Hausdorff or packing oracles.

In many cases, proving a classical bound on the Hausdorff or packing dimension of a set via effective dimension reduces to proving the analogous bound for the effective dimension of points, and relativizing with respect to a Hausdorff (or packing) oracle. However, in the case of projections, it is not so simple.

The key issue is Step 1 from the previous section, where we lowered the complexity of our point. The problem with trying to naively apply the point-to-set principle is that a Hausdorff oracle B for p_\theta E could give a lot of information about \theta.

If we followed the argument of the previous section, relative to B, we would run into a problem in Step 2. Lemma 1 does relativize, and so we can conclude that K^{B, A, z}_{r-t}(\theta) \leq K^{B, A, z}_r(w) + o(r). However, because B can give a lot of information about \theta, this will not give a good enough lower bound on the complexity of w.

It is important to note that this is not a limitation of our tools, but a fundamental obstacle that we cannot avoid in general. It is known (due to a construction of Davies) that, assuming the continuum hypothesis, Marstrand's theorem does not hold for arbitrary sets. Therefore, since the point-to-set principle places no restriction on the set, we are doomed to fail without further assumptions.

Remark: This is a very interesting phenomenon, and in a later post I will describe it in more detail. It turns out that it is possible to extract sufficient conditions for our techniques to work. In particular, we can use algorithmic information theory to understand why assumptions on the set E are needed for Marstrand's projection theorem.

To overcome this obstacle, Neil and I turned to a restricted version of the point-to-set principle proved by John Hitchcock.

Theorem 3: Let E \subseteq \mathbb{R}^n be \Sigma^0_2 relative to an oracle B. Then

\text{dim}_H(E) = \sup\limits_{x\in E} \text{dim}^B(x).

Remark: The key feature of this restricted point-to-set principle is that we can eliminate the minimum over oracles (for \Sigma^0_2 sets). Informally, this removes the main obstacle preventing us from applying our pointwise projection bound.

We can combine this with a classical result of geometric measure theory.

Theorem 4: Let E \subseteq \mathbb{R}^n be analytic with Hausdorff dimension s. Then there is a \mathbf{\Sigma}^0_2 subset F of E such that the Hausdorff dimension of F is s.

Effective proof of Marstrand’s theorem

We can now prove Marstrand's theorem. Let E be an analytic subset of the plane with dimension s. By Theorem 4, let F be a \mathbf{\Sigma}^0_2 subset of E with Hausdorff dimension s. Let B be an oracle such that F is \Sigma^0_2 relative to B. We note that, for any angle \theta, the projection p_\theta F is \Sigma^0_2 relative to (B, \theta).

For each n, let z_n \in F be an element such that

\text{dim}^B(z_n) > s - 1/n.

Let \theta be random relative to the join of B, z_1, z_2,\ldots. By Hitchcock’s point-to-set principle, it suffices to show that, for every n,

\text{dim}^{B, \theta}(p_\theta z_n) \geq \min\{1, \text{dim}^{B,\theta}(z_n)\} > \min\{1, s\} - 1/n.

The point here is that we chose \theta so that it is random relative to (B, z_n) for every n. Therefore, we can simply relativize (with respect to B) the pointwise bound of the previous section, and the conclusion follows.
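Putting the pieces together, for every such \theta (and almost every angle is random relative to the join), we get the following chain of inequalities, sketched at the level of the statements above:

```latex
\begin{align*}
\text{dim}_H(p_\theta E)
  &\geq \text{dim}_H(p_\theta F)\\
  &= \sup_{x \in p_\theta F} \text{dim}^{B, \theta}(x)
      && \text{(Theorem 3, relativized)}\\
  &\geq \sup_{n} \, \text{dim}^{B, \theta}(p_\theta z_n)\\
  &\geq \sup_{n} \, \big(\min\{1, s\} - 1/n\big)
      && \text{(pointwise bound)}\\
  &= \min\{1, s\},
\end{align*}
```

and the reverse inequality holds for every \theta, since p_\theta is Lipschitz and maps into \mathbb{R}.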
