Monday, January 28, 2013

Carleson's Theorem

I’ve just started teaching an advanced undergraduate course on Fourier
analysis — my first lecturing duty in my new job at Edinburgh.

What I hadn’t realized until I started preparing was the
extraordinary history of false beliefs about the pointwise convergence
of Fourier series. This started with Fourier himself around 1800, and was
only fully resolved by Carleson in 1966.

The endlessly diverting index of Tom Körner’s book Fourier
Analysis
alludes to this:

[Image: pages 586 and 587 of Körner’s book]

Here’s the basic set-up. Let $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ be the
circle, and let $f\colon \mathbb{T} \to \mathbb{C}$ be an integrable
function. The Fourier coefficients of $f$ are

$$\hat{f}(k) = \int_{\mathbb{T}} f(x)\, e^{-2\pi i k x}\, dx$$

($k \in \mathbb{Z}$), and for $n \geq 0$, the $n$th Fourier partial sum
of $f$ is the function $S_n f\colon \mathbb{T} \to \mathbb{C}$
given by

$$(S_n f)(x) = \sum_{k=-n}^{n} \hat{f}(k)\, e^{2\pi i k x}.$$
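As a quick illustration of these definitions (my own sketch, not part of the post: it assumes NumPy, approximates the integral by a Riemann sum, and the function names are mine), one can compute partial sums numerically:

```python
import numpy as np

def fourier_coefficient(f, k, num_points=4096):
    """Approximate the k-th Fourier coefficient of f, i.e. the integral
    over T of f(x) exp(-2 pi i k x) dx, by a Riemann sum over
    num_points equally spaced samples of [0, 1)."""
    x = np.arange(num_points) / num_points
    return np.mean(f(x) * np.exp(-2j * np.pi * k * x))

def partial_sum(f, n, x):
    """Evaluate the n-th Fourier partial sum (S_n f)(x)."""
    return sum(fourier_coefficient(f, k) * np.exp(2j * np.pi * k * x)
               for k in range(-n, n + 1))

# Example: the sawtooth f(x) = x - 1/2, viewed as a function on T.
# It is smooth away from 0, so (S_n f)(1/4) should approach f(1/4) = -1/4.
f = lambda x: x - 0.5
for n in (1, 10, 100):
    print(n, partial_sum(f, n, 0.25).real)
```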

The question of pointwise convergence is:


For ‘nice’ functions $f$, does $(S_n f)(x)$ converge to $f(x)$ as $n \to \infty$ for all $x \in \mathbb{T}$?


And if the answer is no, does it at least work for most $x$? Or if not for
most $x$, at least for some $x$?

Fourier apparently thought that $(S_n f)(x) \to f(x)$ was always true,
for all functions $f$, although what a mathematician of Fourier’s time
would have understood a ‘function’ to be is not so clear.

Cauchy claimed a proof of pointwise convergence for continuous functions.
It was wrong. Dirichlet didn’t claim to have proved it, but he said he would. He didn’t.
However, he did show:


Theorem (Dirichlet, 1829)  Let $f\colon \mathbb{T} \to \mathbb{C}$ be a continuously differentiable function. Then $(S_n f)(x) \to f(x)$ as $n \to \infty$ for all $x \in \mathbb{T}$.


In other words, pointwise convergence holds for continuously differentiable
functions.

It was surely just a matter of time until someone
managed to extend the proof to all continuous functions. Riemann believed
this could be done, Weierstrass believed it, Dedekind believed it, Poisson believed it.
So, in Körner’s words, it ‘came as a considerable surprise’ when du
Bois–Reymond proved:


Theorem (du Bois–Reymond, 1876)  There is a continuous
function $f\colon \mathbb{T} \to \mathbb{C}$ such that for some $x \in \mathbb{T}$, the sequence $((S_n f)(x))$ fails to converge.


Even worse (though I actually don’t know whether this was proved at the
time):


Theorem  Let $E$ be a countable subset of $\mathbb{T}$. Then
there is a continuous
function $f\colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.


The pendulum began to swing. Maybe there’s some continuous $f$ such that
$((S_n f)(x))$ doesn’t converge for any $x \in \mathbb{T}$. This,
apparently, became the general belief, solidified by a discovery of
Kolmogorov:


Theorem (Kolmogorov, 1926)  There is a Lebesgue-integrable
function $f\colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in \mathbb{T}$, the sequence $((S_n f)(x))$ fails to converge.


It was surely just a matter of time until someone managed to
adapt the counterexample to give a continuous $f$ whose Fourier series converged nowhere.

At best, the situation was unclear, and this persisted until relatively
recently. I have on my shelf a 1957 undergraduate textbook
called Mathematical Analysis by Tom Apostol. In the part on Fourier
series, he states that it’s still unknown whether the Fourier series of a
continuous function has to converge at even one point. This isn’t
ancient history; Apostol’s book was even on my own undergraduate
recommended reading list (though I can’t say I ever read it).

The turning point was Carleson’s theorem of 1966. His result implies:


If $f\colon \mathbb{T} \to \mathbb{C}$ is continuous then $(S_n f)(x) \to f(x)$ for at least one $x \in \mathbb{T}$.


In fact, it implies something stronger:


If $f\colon \mathbb{T} \to \mathbb{C}$ is continuous then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.


In fact, it implies something stronger still:


If $f\colon \mathbb{T} \to \mathbb{C}$ is Riemann integrable then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.


The full statement is:


Theorem (Carleson, 1966)  If $f \in L^2(\mathbb{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.


This was soon strengthened even further by Hunt (in a way that apparently
Carleson had anticipated). ‘Recall’ that the spaces $L^p(\mathbb{T})$ get
bigger as $p$ gets smaller; that is, if $1 \leq q \leq p \leq \infty$ then
$L^q(\mathbb{T}) \supseteq L^p(\mathbb{T})$ (a one-line proof is sketched
after the theorem below). So, if we could change the
‘2’ in Carleson’s theorem to something smaller, we’d have strengthened it. We
can’t take it all the way down to $1$, because of Kolmogorov’s
counterexample. But Hunt showed that we can take it arbitrarily close to $1$:


Theorem (Hunt, 1968)  If $f \in \bigcup_{p > 1} L^p(\mathbb{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.

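Here’s the promised sketch of the inclusion; it’s a standard fact rather than anything specific to this post. For $1 \leq q \leq p < \infty$ and $f \in L^p(\mathbb{T})$, Hölder’s inequality with exponents $p/q$ and $(1 - q/p)^{-1}$ gives

$$\int_{\mathbb{T}} |f|^q \, dx \;\leq\; \left( \int_{\mathbb{T}} |f|^p \, dx \right)^{q/p} \left( \int_{\mathbb{T}} 1 \, dx \right)^{1 - q/p} = \|f\|_p^q,$$

so $\|f\|_q \leq \|f\|_p$ and $L^p(\mathbb{T}) \subseteq L^q(\mathbb{T})$. (The case $p = \infty$ is immediate, since $\mathbb{T}$ has measure $1$.)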

There’s an obvious sense in which Carleson’s and Hunt’s theorems can’t be
improved: we can’t change ‘almost all’ to ‘all’, simply because changing a
function on a set of measure zero doesn’t change its Fourier coefficients.
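To spell this out with a trivial example (mine, not from the post): take $f \equiv 0$ and let $g$ agree with $f$ except at a single point $x_0$, where $g(x_0) = 1$. Since $f = g$ almost everywhere, $\hat{g}(k) = \hat{f}(k) = 0$ for all $k$, so

$$(S_n g)(x_0) = 0 \;\to\; 0 = f(x_0) \neq 1 = g(x_0),$$

and the Fourier series of $g$ converges at $x_0$ to the ‘wrong’ value.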

But there’s another sense in which they’re optimal: given any set of measure zero,
there’s some $L^2$ function whose Fourier series fails to
converge there. Indeed, there’s a continuous such $f$:


Theorem (Kahane and Katznelson, 196?)  Let $E$ be a measure
zero subset of $\mathbb{T}$. Then there is a continuous function $f\colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in E$, the sequence $((S_n f)(x))$ fails to converge.


I’ll finish with a question for experts. Despite Carleson’s own proof having been
subsequently simplified, the Fourier analysis books I’ve seen say that all
proofs are far too hard for an undergraduate course. But what about the
corollary that if $f$ is continuous then $(S_n f)(x)$ must converge to
$f(x)$ for at least one $x$? Is there now a proof of this that might be
simple enough for a final-year undergraduate course?
