
Heuristic limitations of the circle method

One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations {f_3(x)} of an integer {x} as the sum of three primes {x = p_1+p_2+p_3}, as a Fourier-analytic integral over the unit circle {{\bf R}/{\bf Z}} involving exponential sums such as

\displaystyle  S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)
where the sum here ranges over all primes up to {x}, and {e(x) := e^{2\pi i x}}. For instance, the expression {f_3(x)} mentioned earlier can be written as

\displaystyle  f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)
The strategy is then to obtain sufficiently accurate bounds on exponential sums such as {S(x,\alpha)} in order to obtain non-trivial bounds on quantities such as {f_3(x)}. For instance, if one can show that {f_3(x)>0} for all odd integers {x} greater than some given threshold {x_0}, this implies that all odd integers greater than {x_0} are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
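As a concrete (if rather trivial) illustration of the orthogonality underlying (2), here is a quick Python sketch (an ad hoc illustration of my own, with made-up helper names, rather than anything from the literature) that verifies the identity for a small value of {x}: since the integrand in (2) is a trigonometric polynomial whose frequencies are bounded in size by {2x}, the integral can be evaluated exactly by averaging over more than {2x} equally spaced points.

import cmath

def primes_up_to(x):
    # simple sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def S(x, alpha, P):
    # the exponential sum (1)
    return sum(e(alpha * p) for p in P)

x = 31
P = primes_up_to(x)
Pset = set(P)

# f_3(x) by directly counting ordered representations x = p1 + p2 + p3
f3_direct = sum(1 for p1 in P for p2 in P if x - p1 - p2 in Pset)

# f_3(x) via the circle method integral (2), evaluated exactly by averaging
# the integrand over N = 4x equally spaced points (any N > 2x would do)
N = 4 * x
f3_circle = sum(S(x, j / N, P) ** 3 * e(-x * j / N) for j in range(N)) / N

print(f3_direct, round(f3_circle.real))  # the two counts agree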
Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff {p \leq x} with a smoother cutoff {\chi(p/x)} for a suitable choice of cutoff function {\chi}, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function {\Lambda(n)}. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from sets other than the primes, but we will restrict attention to additive combinations of primes for sake of discussion, as the primes are historically one of the most studied sets in additive number theory.

In many cases, it turns out that one can get fairly precise evaluations on sums such as {S(x,\alpha)} in the major arc case, when {\alpha} is close to a rational number {a/q} with small denominator {q}, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that
\displaystyle  S(x,0) \approx \frac{x}{\log x}
and the prime number theorem in residue classes modulo {q} suggests more generally that

\displaystyle  S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}
when {q} is small and {a} is coprime to {q}, basically thanks to the elementary calculation that the phase {e(an/q)} has an average value of {\mu(q)/\phi(q)} when {n} is uniformly distributed amongst the residue classes modulo {q} that are coprime to {q}. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
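As a rough numerical sanity check of this major arc prediction, here is a small Python sketch (again an ad hoc illustration of my own, using the exact prime count {\pi(x)} as a stand-in for {x/\log x}):

import cmath

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def prime_factors(q):
    fs, d = set(), 2
    while d * d <= q:
        while q % d == 0:
            fs.add(d)
            q //= d
        d += 1
    if q > 1:
        fs.add(q)
    return fs

def mobius(q):
    fs = prime_factors(q)
    if any(q % (p * p) == 0 for p in fs):
        return 0
    return (-1) ** len(fs)

def phi(q):
    result = q
    for p in prime_factors(q):
        result -= result // p
    return result

x = 10 ** 5
P = primes_up_to(x)

for q in [1, 3, 4, 5, 7, 8, 9, 12]:
    a = 1  # any a coprime to q behaves essentially the same way
    Sval = sum(e(a * p / q) for p in P)
    pred = mobius(q) / phi(q) * len(P)  # len(P) ~ x/log x by the prime number theorem
    print(q, round(Sval.real), round(pred))  # the two columns track each other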
In the minor arc case when {\alpha} is not close to a rational {a/q} with small denominator, one no longer expects to have such precise control on the value of {S(x,\alpha)}, due to the “pseudorandom” fluctuations of the quantity {e(\alpha p)}. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of {k} “pseudorandom” phases should fluctuate randomly and be of typical magnitude {\sim \sqrt{k}}, one expects upper bounds of the shape

\displaystyle  |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)
for “typical” minor arc {\alpha}. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that

\displaystyle  \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)
which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as {x^{4/5+o(1)}} are far more typical.
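One can get a crude numerical feel for (3) and (4) with the following Python sketch (an illustrative computation of my own; uniformly random {\alpha} are generically of “minor arc” type):

import cmath, math, random

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

x = 10 ** 5
P = primes_up_to(x)
random.seed(0)

# |S(x,alpha)| at 100 uniformly random alpha
mags = [abs(sum(e(alpha * p) for p in P)) for alpha in (random.random() for _ in range(100))]

mean_square = sum(m * m for m in mags) / len(mags)  # Monte Carlo estimate of the integral in (4)
print(round(mean_square), len(P), round(x / math.log(x)))  # all of comparable size
print(round(sorted(mags)[50]), round(math.sqrt(x / math.log(x))))  # typical |S| has square-root size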
Because one only expects to have upper bounds on {|S(x,\alpha)|}, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as {e(-x\alpha)} for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of {x}, so that averaging in {x} is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude {|S(x,\alpha)|^3} oscillates in sympathetic resonance with the phase {e(-x\alpha)}, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region {\Omega_{minor}}:
\displaystyle  |\int_{\Omega_{minor}} |S(x,\alpha)|^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.
Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as {f_3(x)}, at least when {x} is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer {x} is the sum of three primes; my own result that all odd numbers greater than {1} can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions of the type predicted by the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or the numerical verification to work unconditionally for the three-primes problem in medium-sized ranges of {x}, such as {x \sim 10^{50}}. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)

However, I (and many other analytic number theorists) are considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number {x} as the sum {x = p_1 + p_2} of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations {2 = p_1 - p_2} of {2} as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large {x}, one has to find a non-trivial lower bound for the quantity

\displaystyle  f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)
for sufficiently large {x}, as this quantity {f_2(x)} is also the number of ways to represent {x} as the sum {x=p_1+p_2} of two primes {p_1,p_2}. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity

\displaystyle  \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)
that goes to infinity as {x \rightarrow \infty}, as this quantity {\tilde f_2(x)} is also the number of ways to represent {2} as the difference {2 = p_1-p_2} of two primes less than or equal to {x}.
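As with (2), the identity behind (6) can be checked numerically for small {x}, since the integrand is again a trigonometric polynomial of bounded degree and the integral can be evaluated exactly by a sufficiently fine average; here is a short illustrative Python sketch of my own:

import cmath

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

x = 1000
P = primes_up_to(x)
Pset = set(P)

# twin prime pairs p, p+2 with both primes at most x, counted directly
twins_direct = sum(1 for p in P if p + 2 in Pset)

# the same count via (6): the integrand has frequencies of size at most x,
# so averaging over N > x equally spaced points evaluates the integral exactly
N = 2 * x
total = 0
for j in range(N):
    Sval = sum(e(j / N * p) for p in P)
    total += abs(Sval) ** 2 * e(-2 * j / N)

print(twins_direct, round((total / N).real))  # the two counts agree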
In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums {S(x,\alpha)}. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity {f_2(x)} or {\tilde f_2(x)}, expresses it in terms of {S(x,\alpha)} using (5) or (6), and then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to {x=p_1+p_2} or {2=p_1-p_2}, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on {f_2(x)} or {\tilde f_2(x)}.

Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on {f_2(x)} or {\tilde f_2(x)} (or similar quantities) purely from upper and lower bounds on {S(x,\alpha)} or similar quantities (and on various {L^p} type norms of such quantities, such as the {L^2} bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds on {S(x,\alpha)} are (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:

  • (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
  • (ii) Upper and lower bounds on the magnitude of {S(x,\alpha)} are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of {O(1/\log x)} or better); but
  • (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.
I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.

In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which gives very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach on these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.



— 1. Minor arc dominance —
Let us first explain why minor arc contributions to (5) or (6) are expected to dominate. For sake of discussion let us just work with the twin prime integral (6), as the situation with the even Goldbach integral (5) is similar.

First, let us get a crude heuristic prediction as to the size of the quantity {\tilde f_2(x)}, which is the number of pairs of twin primes {p, p+2} that are both less than {x}. Note that from the prime number theorem, an integer {n} chosen uniformly at random between {1} and {x} has a probability about {1/\log x} of being prime, and similarly {n+2} has a probability about {1/\log x} of being prime. Making the heuristic (but very nonrigorous) hypothesis that these two events are approximately independent, we thus expect a random number {n} between {1} and {x} to have a probability about {1/\log^2 x} of being the smaller member of a pair of twin primes, leading to the prediction

\displaystyle  \tilde f_2(x) \sim \frac{x}{\log^2 x}. \ \ \ \ \ (7)
As it turns out, this prediction is almost certainly inaccurate, due to “local” correlations between the primality of {n} and primality of {n+2} caused by residue classes with respect to small moduli (and in particular due to the parity of {n}, which clearly has an extremely large influence on whether {n} and {n+2} will be prime). However, the procedure for correcting for these local correlations is well known, and only ends up modifying the prediction (7) by a multiplicative constant {\Pi_2 = 1.3203\ldots} known as the twin prime constant; see for instance this previous blog post for more discussion. Our analysis here is focused on orders of growth in {x} rather than on multiplicative constants, and so we will ignore the effect of the twin prime constant in this discussion.
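Here is a quick numerical comparison (an illustrative sketch of my own, nothing more) of the crude prediction (7), the prediction corrected by the twin prime constant (with the {1/\log^2 x} weight smoothed into an integral), and the actual twin prime count:

import math

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

x = 10 ** 5
P = primes_up_to(x)
Pset = set(P)

twins = sum(1 for p in P if p + 2 in Pset)  # actual twin prime pairs below x

crude = x / math.log(x) ** 2  # the prediction (7)
Pi2 = 1.3203236               # the twin prime constant
# correcting (7) by Pi_2, with the weight 1/log^2 x replaced by a Riemann sum
# for the integral of dt/log^2 t, gives the usual Hardy-Littlewood prediction
refined = Pi2 * sum(1 / math.log(n) ** 2 for n in range(3, x + 1))

print(twins, round(crude), round(refined))
# the crude prediction already has the right order of growth, and the corrected
# prediction is much closer to the actual count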
Now let us heuristically predict the contribution of the major and minor arcs to (6). We begin with the primary major arc when {\alpha} is close to zero. As already noted, the prime number theorem gives
\displaystyle  S(x,0) = \sum_{p \leq x} 1 \sim \frac{x}{\log x}.
From the uncertainty principle, we then expect

\displaystyle  |S(x,\alpha)| = |\sum_{p \leq x} e(\alpha p)| \sim \frac{x}{\log x}
when {\alpha = O(1/x)}, because the phase {e(\alpha p)} does not undergo any significant oscillation in this regime as {p} ranges over the primes from {1} to {x}. On the other hand, as {|\alpha|} begins to exceed {1/x}, we expect the exponential sum to start decaying, because the prime number theorem suggests the heuristic

\displaystyle  \sum_{p \leq x} e(\alpha p) \approx \frac{1}{\log x} \sum_{n \leq x} e(\alpha n)
for {|\alpha|} slightly larger than {1/x}, and the latter sum does decay as {|\alpha|} begins to exceed {1/x}. (Admittedly, the decay is rather slow, of the order of {\frac{1}{x|\alpha|}}, but one can speed it up by using smooth cutoffs in the exponential sum, and in any event the decay is already sufficient for analysing {L^2} type expressions such as (6).) In view of this, we expect the contribution to (6) of the major arc when {\alpha} is close to {0} to be roughly

\displaystyle  (\frac{x}{\log x})^2 \times O(\frac{1}{x}) = \frac{x}{\log^2 x}
which, encouragingly, agrees with the heuristic prediction (7) (and indeed, if one unpacks all the Fourier analysis, one sees that these two predictions are ultimately coming from the same source). Furthermore, it is not difficult to make these sorts of heuristics rigorous using summation by parts and some version of the prime number theorem with an explicit error term; but this will not be our concern here.
Next, we look at the major arcs when {\alpha} is close to {a/q} for some fixed (and fairly small) {q}, and when {a} is coprime to {q}. We begin with the study of
\displaystyle  S(x,a/q) = \sum_{p \leq x} e(ap/q).
The prime number theorem in arithmetic progressions suggests that the {\sim x/\log x} primes up to {x} are equidistributed among the {\phi(q)} residue classes {b \mod q} coprime to {q}, which heuristically suggests that

\displaystyle  S(x,a/q) \approx \frac{x}{\log x} \frac{1}{\phi(q)} \sum_{b \mod q: (b,q)=1} e(ab/q).
A standard computation shows that

\displaystyle  \sum_{b \mod q: (b,q)=1} e(ab/q) = \mu(q) \ \ \ \ \ (8)
(this can be seen first by working in the case when {q} is a prime or a power of a prime, and then handling the general case via the Chinese remainder theorem). This leads to the heuristic
\displaystyle  S(x,a/q) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}
and similarly from the uncertainty principle one expects

\displaystyle  S(x,\alpha) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x} \ \ \ \ \ (9)
when {|\alpha-a/q| \ll 1/x}, with some decay as {|\alpha-a/q|} begins to exceed {1/x}. On the other hand, the set of all {\alpha} in these arcs has measure {O( \phi(q)/x )}. Thus, if one were to ignore the effects of the {e(-2\alpha)} term in (6), and just estimate things crudely by the triangle inequality, one would heuristically end up with a net bound of
\displaystyle  O( \frac{\phi(q)}{x} ) \times O( |\frac{\mu(q)}{\phi(q)} \frac{x}{\log x}| )^2 = O( \frac{\mu^2(q)}{\phi(q)} \frac{x}{\log^2 x} )
for any given {q}. This does decay in {q}, but too slowly to ensure absolute convergence; given that a positive proportion of integers are squarefree, and that {\phi(q)} is more or less comparable to {q} for typical {q}, we expect the logarithmic divergence

\displaystyle  \sum_{q \leq Q} \frac{\mu^2(q)}{\phi(q)} \approx \sum_{q \leq Q} \frac{1}{q} \approx \log Q
as {Q \rightarrow \infty} (and indeed one can make this heuristic rigorous by standard methods for estimating sums of multiplicative functions, e.g. via Dirichlet series methods). This should be compared with the analogous situation for (2), in which the major arc contribution of a given denominator {q}, when estimated in absolute value by the same method, would be of size

\displaystyle  O( \frac{\phi(q)}{x} ) \times O( |\frac{\mu(q)}{\phi(q)} \frac{x}{\log x}| )^3 = \frac{\mu^2(q)}{\phi^2(q)} \frac{x^2}{\log^3 x}
and {\sum_{q=1}^\infty \frac{\mu^2(q)}{\phi^2(q)}} is convergent.
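The dichotomy between these two sums is easy to see numerically; here is a small illustrative Python sketch of my own:

import math

def prime_factors(q):
    fs, d = set(), 2
    while d * d <= q:
        while q % d == 0:
            fs.add(d)
            q //= d
        d += 1
    if q > 1:
        fs.add(q)
    return fs

def mobius(q):
    fs = prime_factors(q)
    if any(q % (p * p) == 0 for p in fs):
        return 0
    return (-1) ** len(fs)

def phi(q):
    result = q
    for p in prime_factors(q):
        result -= result // p
    return result

for Q in [10 ** 2, 10 ** 3, 10 ** 4]:
    binary = sum(mobius(q) ** 2 / phi(q) for q in range(1, Q + 1))
    ternary = sum(mobius(q) ** 2 / phi(q) ** 2 for q in range(1, Q + 1))
    print(Q, round(binary, 2), round(math.log(Q), 2), round(ternary, 3))
# the second column keeps growing like log Q, while the last column stabilises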
Now, the apparent divergence of the major arc contributions is not actually a problem, because we have an asymptotic for {S(x,\alpha)} rather than merely an upper bound, and can therefore exploit the cancellation coming from the {e(-2\alpha)} term. Indeed, if we apply the approximation (9) without discarding the {e(-2\alpha)} phase, we see (heuristically, at least) that the contribution of the major arcs at denominator {q} is more like
\displaystyle  O( \frac{\phi(q)}{x} ) \times O( |\frac{\mu(q)}{\phi(q)} \frac{x}{\log x}| )^2 \times \frac{1}{\phi(q)} \sum_{(a,q)=1} e(-2a/q).
Using (8), we thus expect the total contribution at denominator {q} to actually be of order {O( \frac{\mu^2(q)}{\phi(q)^2} \frac{x}{\log^2 x})}, gaining an additional factor of {1/\phi(q)} over the preceding bound, leading to a summable contribution of the major arcs. (And if one pursues this calculation more carefully, one even arrives at the more refined prediction of {\Pi_2 \frac{x}{\log^2 x}} for the net major arc contribution.)
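Incidentally, the identity (8), which was used twice above, is easy to check numerically; here is a short illustrative sketch of my own (for {a} coprime to {q}):

import cmath, math

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def prime_factors(q):
    fs, d = set(), 2
    while d * d <= q:
        while q % d == 0:
            fs.add(d)
            q //= d
        d += 1
    if q > 1:
        fs.add(q)
    return fs

def mobius(q):
    fs = prime_factors(q)
    if any(q % (p * p) == 0 for p in fs):
        return 0
    return (-1) ** len(fs)

a = 1
for q in [2, 3, 4, 5, 6, 8, 9, 10, 12, 30]:
    c = sum(e(a * b / q) for b in range(1, q + 1) if math.gcd(b, q) == 1)
    print(q, round(c.real), mobius(q))  # the last two columns agree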
So far, so good. But difficulties begin to arise when one turns attention to the minor arcs, and particularly when {\alpha} is only close to rationals {a/q} with very large denominator {q}, such as {q \sim \sqrt{x}}, which is the typical scenario if one defines “close to” as “within {O(1/x)} of”. Here, we do not expect to have the asymptotic (9), because of the effect of random fluctuations in the primes on {S(x,\alpha)}, which as discussed previously is expected to be of size {O(\sqrt{\frac{x}{\log x}})} and can thus dominate the main term in (9) when {q} is close to {\sqrt{x}}. (In any case, even on the Generalised Riemann Hypothesis, we are well short of being able to maintain the asymptotic (9) for anywhere near that large a value of {q}, instead only reaching levels such as {q \sim x^{1/3}} or so before the error terms begin to dominate the main term.) As such, we can no longer easily exploit the cancellation of the {e(-2\alpha)} phase, and are more or less forced to estimate the minor arc contribution by taking absolute values:
\displaystyle  |\int_{\Omega_{minor}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^2\ d\alpha.
But now the problem is that, in view of heuristics such as (3) or (4), and keeping in mind that the set {\Omega_{minor}} of minor arcs has measure close to {1}, this absolute value integral is expected to be of the order of {\frac{x}{\log x}}, which exceeds the major arc contribution of {O(\frac{x}{\log^2 x})} by a logarithmic factor. (One can also use Montgomery’s uncertainty principle to show that the major arc contributions are only a small fraction of the minor arc ones, for any reasonable choice of division between major and minor arcs.) Thus we see that the minor arc contribution overwhelms the major arc one if we cannot control the oscillation of {|S(x,\alpha)|^2} on the minor arcs. (It is instructive to see why the situation is reversed with the ternary sum (2), in which case the major arc terms dominate.)
— 2. Loose bounds do not suffice —
A slightly different perspective on the difficulties in applying the circle method to binary problems can be seen by noting how “fragile” the truth of such problems is with respect to “edits” made to the set of primes. To describe what I mean by this, let us focus now on the even Goldbach problem of representing a large but fixed number {x} as the sum of two primes {p_1,p_2} (but the same discussion also applies, with minor modifications, to the twin prime problem). As discussed previously, we expect about {x / \log^2 x} representations of this form (actually one expects slightly more representations than this when {x} has a lot of prime factors, but never mind this for now). One can even establish this upper bound rigorously by sieve theory if one wishes, but this is not particularly relevant, as we will only be concerned with a heuristic discussion here.

The point is that the number of sums here is still significantly smaller than the total number of primes less than {x}, which is approximately {x/\log x} by the prime number theorem. To exploit this discrepancy, let us define the set of redacted primes to be those primes less than {x} which do not arise in one of the representations of {x} as the sum of two primes; in other words, a redacted prime is a prime {p} less than {x} such that {x-p} is not a prime. Thus, from a relative viewpoint, the redacted primes comprise almost all of the actual primes less than {x}; only a proportion of {O(1/\log x)} or so of the primes less than {x} are not redacted. As such, the redacted primes look very similar to the actual primes less than {x}, but with one key difference: there is no way to express {x} as the sum of two redacted primes. So the even Goldbach property has been destroyed by passing from the primes to the redacted primes.

On the other hand, given that the redacted primes have density about {1-O(\frac{1}{\log x})} in the set of all primes {p} less than {x}, and are expected to be distributed more or less uniformly within that set (other than some local irregularities coming from small residue classes which turn out not to be terribly relevant), we thus expect heuristically that the redacted exponential sum
\displaystyle  S'(x,\alpha) := \sum'_{p \leq x} e(\alpha p)
where the sum is restricted to redacted primes, is typically close to the unredacted exponential sum {S(x,\alpha)} in the sense that

\displaystyle  S'(x,\alpha) = (1 - O(\frac{1}{\log x})) S(x,\alpha).
(This is not quite a good heuristic when {S(x,\alpha)} happens to vanish, but writing a more accurate heuristic here would be messier and not add much illumination to the discussion.)
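To make the redaction construction concrete, here is a small illustrative Python sketch of my own (the specific choice {x = 10^4} is arbitrary) that builds the redacted primes for a fixed even {x}, confirms that {x} is no longer expressible as the sum of two redacted primes, and compares the redacted and unredacted exponential sums at a couple of major arc points:

import cmath

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

x = 10 ** 4  # a fixed large even number
P = primes_up_to(x)
Pset = set(P)

# redacted primes: primes p <= x such that x - p is not prime
R = [p for p in P if x - p not in Pset]
Rset = set(R)
assert all(x - p not in Rset for p in R)  # x is not the sum of two redacted primes

print(len(R), len(P))  # the redacted primes form the bulk of the primes up to x

for alpha in [0.0, 1 / 3]:  # two major arc points
    S_full = sum(e(alpha * p) for p in P)
    S_red = sum(e(alpha * p) for p in R)
    print(round(abs(S_full)), round(abs(S_red)))
# at these points the two sums differ only by the relatively small contribution
# of the non-redacted primes, even though the redacted set has no Goldbach
# representation of x at all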
This closeness between {S'(x,\alpha)} and {S(x,\alpha)} has the following consequence. Suppose one is planning to prove the even Goldbach conjecture by using some approximations on {S(x,\alpha)}, say of the form
\displaystyle  S(x,\alpha) = f(x,\alpha) + O( g(x,\alpha) )
for some explicit quantities {f, g}. (One can (and should) also consider integral bounds rather than pointwise bounds, but let us discuss pointwise bounds here for sake of illustration.) If these bounds are too “loose” in the sense that {g(x,\alpha) \gg f(x,\alpha) / \log x}, then the preceding heuristics suggest that the redacted exponential sums are going to obey a similar estimate:

\displaystyle  S'(x,\alpha) = f(x,\alpha) + O( g(x,\alpha) ).
However, as mentioned previously, the redacted version of (5) vanishes. Thus, one cannot hope to prove the even Goldbach conjecture purely by loose bounds, as these bounds are incapable of distinguishing between the genuine primes and the redacted primes. In particular, upper bounds of the type one expects to have available for the minor arc exponential sums are far too loose to be useful in this regard (as in such cases {f} is simply zero).
Admittedly, for major arc values of {\alpha}, a sufficiently strong version of the prime number theorem in arithmetic progressions will give tight bounds (in the sense that the error term improves upon the main term by at least a logarithmic factor). However, as pointed out in the preceding section, the major arcs do not give the dominant contribution to (5). Furthermore, if one normalises the redacted exponential sum by dividing out by the relative density of the redacted primes in the actual primes, one expects the major arc value of {S'(x,\alpha)} to become much closer to that of {S(x,\alpha)}, so that the two expressions become indistinguishable even to quite tight bounds.

It is also instructive to see how this situation is different in the ternary case, when one is trying to find triples {p_1,p_2,p_3} of primes that sum to a given odd number {x}. There, it is no longer possible to easily redact the primes to remove all such triples, because one expects the number of such triples to be on the order of {x^2/\log^3 x}, compared with the total number {x/\log x} of primes. Indeed, it is known that Vinogradov’s theorem continues to hold even if one deletes a moderately large positive fraction of the primes (this is a result of Li and Pan, adapting an earlier argument of Green).

— 3. Tight bounds are morally equivalent to binary problems —
In the preceding two sections, we argued that the minor arc estimates are the most important, and that the bounds here must be quite tight in order to have any hope of solving the conjecture this way. On the other hand, if the bounds are tight enough, then indeed one has enough control to start proving some binary conjectures. As far as I am aware, the first results in the literature formalising this observation are due to Srinivasan (in the context of the even Goldbach problem). For the twin prime problem, a more precise formalisation was worked out recently by Maier and Sankaranarayanan. Roughly speaking, these latter authors show that if one can obtain a tight {L^2} asymptotic for {S(x,\alpha)} on minor arcs, roughly of the form

\displaystyle  \int_{I \cap \Omega_{minor}} |S(x,\alpha)|^2\ d\alpha = (1 + O(\log^{-A} x)) C |I| \frac{x}{\log x} \ \ \ \ \ (10)
for some absolute constants {C,A} and all intervals {I} of size {|I| \gg \log^{-A} x}, then the twin prime conjecture holds, basically because one can then control the minor arc contribution to (6) by an integration by parts. (Actually the constant {C} is necessarily equal to {1} in this conjecture, basically because of (4) and the fact that the major arcs only occupy a small fraction of the total {L^2} norm.) Furthermore, the standard random fluctuation heuristics suggest that the conjecture (10) is likely to be true (and Maier and Sankaranarayanan verify the conjecture when the primes are replaced with a certain set of almost primes). In fact, the bound (10) is strong enough to similarly count the number of pairs {(p,p+h)} for any fixed {h}, with a very good error term (smaller than the main term by several powers of {\log x}); in other words, this conjecture would imply the binary case of the Hardy-Littlewood prime tuples conjecture with the expected asymptotics.
On the other hand, it is not difficult to reverse this procedure and deduce the conjecture (10) from the prime tuples conjecture. This is because {|S(x,\alpha)|^2} is the Fourier transform of the prime tuples counting function {r(h) := |\{ p,p+h \leq x: p,p+h \hbox{ prime}\}|}:
\displaystyle  |S(x,\alpha)|^2 = \sum_h r(h) e(-h\alpha).
As such, strong bounds on {r(h)} would be expected to yield strong bounds on {|S(x,\alpha)|^2}, and in particular the expected asymptotics in the prime tuples conjecture lead to the expected bound (10). So we see that tight bounds on minor arc exponential sums are basically just a Fourier reformulation of the underlying binary problems being considered, and any argument that established such bounds could likely be converted into a bound on the binary problem directly, without direct use of the circle method. As such, I believe that one has to look outside of the circle method in order to make progress on these binary problems.
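This Fourier relation is again easy to verify numerically for a small {x}; here is a brief illustrative Python sketch of my own:

import cmath, random
from collections import Counter

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            for m in range(n * n, x + 1, n):
                sieve[m] = False
    return [n for n in range(2, x + 1) if sieve[n]]

def e(t):
    return cmath.exp(2j * cmath.pi * t)

x = 200
P = primes_up_to(x)

# r(h): the number of pairs of primes p, p+h <= x (here h ranges over all of Z,
# including h <= 0)
r = Counter(p1 - p2 for p1 in P for p2 in P)

random.seed(0)
for alpha in [random.random() for _ in range(3)]:
    lhs = abs(sum(e(alpha * p) for p in P)) ** 2
    rhs = sum(cnt * e(-h * alpha) for h, cnt in r.items())
    print(round(lhs, 6), round(rhs.real, 6))  # the two sides agree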
— 4. Other methods —
Needless to say, I do not actually have a viable strategy in hand for solving these binary problems, but I can comment briefly on some of the other existing methods in additive number theory, which also unfortunately have well-known limitations in this regard.

The first obvious candidate for solving these problems is sieve theory, which was in fact invented specifically with problems such as the twin prime conjecture in mind. However, there is a major obstruction to using sieve theory to obtain non-trivial lower bounds (as opposed to upper bounds, which are much easier) on prime patterns, namely the parity problem. I discuss this problem in this previous blog post, and do not have much more to add here to what I already wrote in that post.

Another potential approach, following the methods of additive combinatorics (and particularly the sub-branch of additive combinatorics focused on arithmetic progressions), is to try to develop some sort of inverse theorem, in analogy with the Gowers-type inverse theorems that relate the lack of arithmetic progressions with some sort of correlation with a structured function (such as a Fourier character). Unfortunately, when it comes to binary patterns such as twins or pairs of numbers summing to a fixed sum {x}, one cannot have any simple class of structured functions that capture the absence of these patterns. We already saw a glimpse of this with the redacted primes in the even Goldbach problem, which are likely to have almost identical correlations with structured functions (such as characters) as the unredacted primes. One can also construct quite pseudorandom-looking sets that lack a binary pattern, for instance by randomly selecting one member of each pair {a<b} that sums to {x}, and standard probabilistic computations then show that such sets will typically have low correlation with any set of structured functions of low entropy, which is the only type of functions for which we expect randomness heuristics (such as the Mobius randomness conjecture, discussed for instance in this lecture of Sarnak) to hold.

In a similar vein, one can try to hope for some sort of transference principle argument, taking advantage of the fact that the primes lie inside the almost primes in order to model the primes by a much denser set of natural numbers. The lack of a good inverse theorem is going to be a major obstacle to this strategy; but actually, even if that problem was somehow evaded, the parity problem serves as a separate obstruction. Indeed, the parity problem suggests that the maximum density of the primes inside the almost primes that one can realistically hope to achieve (while still getting good control on the almost primes) is {1/2}. As such, the densest model one could hope for in the natural numbers would also have density {1/2}. But this is just the critical density for avoiding patterns such as twins or Goldbach-type pairs; for instance, the natural numbers which equal {0} or {1} mod {4} have density {1/2} but no twins. So even just a small loss of density in the model could potentially kill off all the twins, and in the absence of an inverse theorem there would be no computable statistic to prevent this from happening. (On the other hand, this obstruction does not prevent one from finding pairs of primes which differ by at most {4}, say; this is consistent with the Goldston-Yildirim-Pintz results on small prime gaps, which do not exactly use a dense model, but work relative to the almost primes, treating the primes “as if” they were dense in some sense.)

One can view the “enemy” in these binary problems as a “conspiracy” amongst the primes to behave in a certain pathological way – avoiding twins, for instance, or refusing to sum together to some exceptional integer {x}. Conspiracies are typically very difficult to eliminate rigorously; about the only thing which is strong enough to rule out a conspiracy is a conflicting conspiracy (see my previous blog post on this topic). A good example of this is Heath-Brown’s result that the existence of a Siegel zero (which is a conspiracy amongst primes that, among other things, would completely ruin the Generalised Riemann Hypothesis) implies the truth of the twin prime conjecture (basically by upending all the previous heuristics, and causing the major arc sums to dominate the minor arcs instead of vice versa). But since we do not expect any conspiracies to occur amongst the primes, one cannot directly use these sorts of “dueling conspiracies” methods to unilaterally rule out any given conspiracy.

However, there is perhaps the outside chance that a binary conspiracy (such as a conspiracy between primes to stop having twins) could somehow be used to conspire against itself (in the spirit of “self-defeating object” arguments, discussed previously several times on this blog). This is actually a fairly common technique in analytic number theory (though not usually described in such terms); for instance, Linnik’s dispersion method can be viewed as a way to eliminate a potential conspiracy by transforming it to compete against itself; bilinear sum methods, such as those used to control the minor arc sum {S(x,\alpha)}, can also be viewed in this way. However, these methods generally require some sort of underlying group action (such as multiplicative dilation) in order to transform the initial conspiracy into a competing conspiracy, and no obvious such action is present for these binary problems.

