Guest post by Emily Riehl
Whether we grow up to become category theorists or applied mathematicians, one thing that I suspect unites us all is that we were once enchanted by prime numbers. It comes as no surprise then that a seminar given yesterday afternoon at Harvard by Yitang Zhang of the University of New Hampshire reporting on his new paper “Bounded gaps between primes” attracted a diverse audience. I don’t believe the paper is publicly available yet, but word on the street is that the referees at the Annals say it all checks out.
What follows is a summary of his presentation. Any errors should be ascribed to the ignorance of the transcriber (a category theorist, not an analytic number theorist) rather than to the author or his talk, which was lovely.
Prime gaps
Let us write $p_1, p_2, \ldots$ for the primes in increasing order. We know of course that this list is countably infinite. A prime gap is an integer $p_{n+1} - p_n$. The Prime Number Theorem tells us that the average prime gap $p_{n+1} - p_n$ is approximately $\log(p_n)$ as $n$ approaches infinity.
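(A small aside from the transcriber, not part of the talk: the following Python snippet gives a quick numerical illustration of that statement, comparing the average gap between consecutive primes below $10^6$ with the logarithm of the largest such prime.)

```python
# Quick numerical illustration: the average prime gap below 10^6
# versus the log of the largest prime below 10^6.
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [n for n, flag in enumerate(sieve) if flag]

primes = primes_up_to(10 ** 6)
gaps = [q - p for p, q in zip(primes, primes[1:])]
print("average gap:", sum(gaps) / len(gaps))   # roughly 12.7
print("log(p_max): ", math.log(primes[-1]))    # roughly 13.8
```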
The twin primes conjecture, on the other hand, asserts that
$$\liminf_{n \to \infty} (p_{n+1} - p_n) = 2,$$
i.e., that there are infinitely many pairs of twin primes for which the prime gap is just two. A generalization, attributed to Alphonse de Polignac, states that for any positive even integer, there are infinitely many prime gaps of that size. This conjecture has been neither proven nor disproven in any case. These conjectures are related to the Hardy-Littlewood conjecture about the distribution of prime constellations.
The strategy
The basic question is whether there exists some constant $C$ so that $p_{n+1} - p_n < C$ infinitely often. Now, for the first time, we know that the answer is yes…when $C = 7 \times 10^7$.
Here is the basic proof strategy, supposedly familiar in analytic number theory. A subset $H = \{h_1, \ldots, h_k\}$ of distinct natural numbers is admissible if for all primes $p$ the number of distinct residue classes modulo $p$ occupied by these numbers is less than $p$. (For instance, taking $p = 2$, we see that the gaps between the $h_j$ must all be even.) If this condition were not satisfied, then it would not be possible for each element in a collection $\{n + h_1, \ldots, n + h_k\}$ to be prime. Conversely, the Hardy-Littlewood conjecture contains the statement that for every admissible $H$, there are infinitely many $n$ so that every element of the set $\{n + h_1, \ldots, n + h_k\}$ is prime.
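(Another aside from the transcriber: here is a minimal Python sketch of that admissibility test. Only primes $p \le k$ need to be checked, since $k$ numbers can occupy at most $k$ residue classes.)

```python
# Admissibility check: H is admissible if, for every prime p, the elements
# of H miss at least one residue class mod p.  Only p <= len(H) can fail.
from sympy import primerange

def is_admissible(H):
    for p in primerange(2, len(H) + 1):
        if len({h % p for h in H}) == p:   # H covers every residue class mod p
            return False
    return True

print(is_admissible([0, 2]))      # True: the twin-prime pattern
print(is_admissible([0, 2, 4]))   # False: hits all residue classes mod 3
print(is_admissible([0, 2, 6]))   # True
```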
Let $\theta(n)$ denote the function that is $\log(n)$ when $n$ is prime and $0$ otherwise. Fixing a large integer $x$, let us write $n \sim x$ to mean $x \le n < 2x$. Suppose we have a positive real-valued function $f$ (to be specified later) and consider two sums:
$$S_1 = \sum_{n \sim x} f(n)$$
$$S_2 = \sum_{n \sim x} \Big( \sum_{j=1}^{k} \theta(n + h_j) \Big) f(n)$$
Then if $S_2 > (\log 3x)\, S_1$ for some function $f$, it follows that $\sum_{j=1}^{k} \theta(n + h_j) > \log 3x$ for some $n \sim x$ (for any $x$ sufficiently large). Since each individual term $\theta(n + h_j)$ is at most $\log(2x + h_k)$, which is less than $\log 3x$ for large $x$, at least two terms in this sum must be non-zero, i.e., there are two indices $i$ and $j$ so that $n + h_i$ and $n + h_j$ are both prime. In this way we can identify bounded prime gaps.
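(Transcriber's aside: a toy version of that detection step in Python. The admissible triple $\{0, 2, 6\}$ and the value of $x$ below are illustrative choices only.)

```python
# Toy illustration of the detection step: a weighted count exceeding log(3x)
# forces at least two of the n + h_j to be prime.
import math
from sympy import isprime

def theta(n):
    return math.log(n) if isprime(n) else 0.0

H = [0, 2, 6]      # an admissible triple
x = 10 ** 4
for n in range(x, 2 * x):
    if sum(theta(n + h) for h in H) > math.log(3 * x):
        # a single term is at most log(2x + 6) < log(3x), so two terms are nonzero
        print(n, [n + h for h in H if isprime(n + h)])   # e.g. 10007 [10007, 10009]
        break
```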
Some details
The trick is to find an appropriate function $f$. Previous work of Daniel Goldston, János Pintz, and Cem Yildirim suggests defining $f(n) = \lambda(n)^2$ where
$$\lambda(n) = \sum_{\substack{d \mid P(n) \\ d < D}} \mu(d) \left( \log \frac{D}{d} \right)^{k + \ell}, \qquad P(n) = \prod_{j=1}^{k} (n + h_j),$$
where $\ell > 0$ and $D$ is a power of $x$.
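(Transcriber's aside: to make the formula concrete, here is a completely naive Python evaluation of this weight. The tuple $H$, the parameters $\ell$ and $D$, and the value of $n$ are illustrative choices only; nothing here is optimized or taken from the actual argument.)

```python
# Naive evaluation of the Goldston-Pintz-Yildirim weight lambda(n), and of the
# resulting f(n) = lambda(n)^2 that gets plugged into S_1 and S_2.
import math
from sympy import divisors, factorint

def mobius(d):
    """Moebius function mu(d): zero unless d is square-free."""
    factors = factorint(d)
    if any(e > 1 for e in factors.values()):
        return 0
    return (-1) ** len(factors)

def gpy_lambda(n, H, ell, D):
    k = len(H)
    P = math.prod(n + h for h in H)
    return sum(mobius(d) * math.log(D / d) ** (k + ell)
               for d in divisors(P) if d < D)

n, H, ell, D = 11, [0, 2, 6], 1, 100.0
f_n = gpy_lambda(n, H, ell, D) ** 2
print(f_n)
```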
Now think of the sum $S_2 - (\log 3x)\, S_1$ as a main term plus an error term. Taking $D = x^{\vartheta}$ with $\vartheta < 1/4$, the main term is negative, which won't do. When $\vartheta = 1/4 + \omega$ the main term is okay, but the question remains how to bound the error term.
Zhang’s work
Zhang’s idea is related to work of Enrico Bombieri, John Friedlander, and Henryk Iwaniec. Let $\vartheta = 1/4 + \omega$ where $\omega = 1/1168$ (which is “small but bigger than $\epsilon$”). Then define $\lambda(n)$ using the same formula as before but with an additional condition on the index $d$, namely that $d$ divides the product of the primes less than $x^{\omega}$. In other words, we only sum over square-free $d$ with small prime factors.
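(Transcriber's aside: in code, the extra condition on $d$ might look like the following check, with the smoothness bound $x^{\omega}$ passed in as a parameter; this is just my gloss on the restriction, not Zhang's implementation.)

```python
# The extra condition: d must be square-free and all of its prime factors must
# lie below the bound, i.e. d divides the product of the primes below the bound.
from sympy import factorint

def divides_primorial_below(d, bound):
    return all(e == 1 and p < bound for p, e in factorint(d).items())

print(divides_primorial_below(30, 10))   # True:  30 = 2 * 3 * 5
print(divides_primorial_below(60, 10))   # False: 60 = 2^2 * 3 * 5 is not square-free
print(divides_primorial_below(22, 10))   # False: the factor 11 is too large
```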
The point is that when $d$ is not too small (say $d > x^{1/3}$) then $d$ has lots of factors. If $d = p_1 \cdots p_b$ and $R < d$, there is some $a$ so that $r = p_1 \cdots p_a < R$ and $p_1 \cdots p_{a+1} > R$. This gives a factorization $d = rq$ with $R/x^{\omega} < r < R$, which we can use to break the sum over $d$ into two sums (over $r$ and over $q$), which are then handled using techniques whose names I didn't recognize.
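(Transcriber's aside: a toy version of that factorization step. Since every prime factor of $d$ is below $x^{\omega}$, multiplying factors into $r$ greedily until the next one would push past $R$ yields $d = rq$ with $R/x^{\omega} < r < R$; the numbers below are purely illustrative.)

```python
# Greedy factorization d = r * q: multiply prime factors of d (assumed square-free,
# all below a smoothness bound) into r until the next factor would push r past R.
from sympy import factorint

def split(d, R):
    r = 1
    for p in sorted(factorint(d)):
        if r * p > R:
            break
        r *= p
    return r, d // r

d = 3 * 7 * 11 * 13 * 19        # 57057, square-free, all prime factors below 20
r, q = split(d, R=200)
print(r, q, r * q == d)          # 21 2717 True; note 200/20 < 21 < 200
```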
On the size of the bound
You might be wondering where the number 70 million comes from. This is related to the $k$ in the admissible set. (My notes say $k = 3.5 \times 10^6$ but maybe it should be $k = 3.5 \times 10^7$.) The point is that $k$ needs to be large enough so that the change brought about by the extra condition that $d$ is square-free with small prime factors is negligible. But Zhang believes that his techniques have not yet been optimized and that smaller bounds will soon be possible.