The purpose of this post is to isolate a combinatorial optimisation problem regarding subset sums; any improvement upon the current known bounds for this problem would lead to numerical improvements for the quantities pursued in the Polymath8 project.
First, some (rough) motivational background, omitting all the number-theoretic details and focusing on the combinatorics. (But readers who just want to see the combinatorial problem can skip the motivation and jump ahead to Lemma 5.) As part of the Polymath8 project we are trying to establish a certain estimate called for as wide a range of as possible. Currently the best result we have is:
Theorem 1 holds whenever .
Enlarging this region would lead to a better value of certain parameters , which in turn control the best bound on asymptotic gaps between consecutive primes. See this previous post for more discussion of this. At present, the best value of is obtained by taking sufficiently close to , so improving Theorem 1 in the neighbourhood of this value is particularly desirable.
I won’t state exactly what is here (you can find a formulation in previous posts). But it involves a certain number-theoretic function, the von Mangoldt function . To prove the theorem, the first step is to use a certain identity (the Heath-Brown identity) to decompose into a lot of pieces, which take the form
for some bounded (in Zhang’s paper never exceeds ) and various weights supported at various scales that multiply up to approximately :
We can write , thus ignoring negligible errors, are non-negative real numbers that add up to :
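Here is a tiny numerical illustration of this bookkeeping; the scales below are made up, and writing each scale as a power of x is an assumption made only for the sake of the example. The point is simply that scales multiplying to roughly x correspond to exponents summing to roughly 1.

```python
import math

# Made-up scales, purely for illustration.
x = 1e12
scales = [1e5, 1e4, 1e3]                          # the product of the scales is x
t = [math.log(N) / math.log(x) for N in scales]   # exponent of each scale, base x
print(t)       # approximately [0.4167, 0.3333, 0.25]
print(sum(t))  # approximately 1, since the scales multiply to x
```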
A key technical feature of the Heath-Brown identity is that the weights associated to sufficiently large values of (e.g. ) are “smooth” in a certain sense, but we will not detail this here (this is a subject for a subsequent post; actually, I may update the current post in the near future to add more specifics).
The operation is Dirichlet convolution, which is commutative and associative. We can thus regroup the convolution (1) in a number of ways. For instance, given any partition into disjoint sets , we can rewrite (1) as
where is the convolution of those with , and similarly for .
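The regrouping step can be illustrated concretely. The following minimal sketch (with placeholder weights that have nothing to do with the actual Heath-Brown decomposition) implements Dirichlet convolution for truncated arithmetic functions and checks numerically that the operation is commutative and associative, which is what allows the factors of a long convolution to be grouped arbitrarily:

```python
def dirichlet(f, g, N):
    """Dirichlet convolution: (f*g)(n) = sum over divisors d of n of f(d) g(n/d), for n <= N."""
    h = [0.0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):   # m runs over the multiples of d
            h[m] += f[d] * g[m // d]
    return h

N = 30
# Placeholder weights (index 0 is unused; arithmetic functions start at n = 1).
a = [0.0] + [1.0] * N
b = [0.0] + [1.0 / n for n in range(1, N + 1)]
c = [0.0] + [(-1.0) ** n for n in range(1, N + 1)]

abc1 = dirichlet(dirichlet(a, b, N), c, N)   # (a * b) * c
abc2 = dirichlet(a, dirichlet(b, c, N), N)   # a * (b * c)
ab, ba = dirichlet(a, b, N), dirichlet(b, a, N)
print(max(abs(u - v) for u, v in zip(abc1, abc2)))  # ~0: associativity
print(max(abs(u - v) for u, v in zip(ab, ba)))      # ~0: commutativity
```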
Zhang’s argument splits into two major pieces, in which certain classes of terms (1) are shown to give acceptable contributions. Cheating a little bit, the following three results are established:
Theorem 2 (Type 0 estimate) The term (1) gives an acceptable contribution to whenever
for some .
Theorem 3 (Type I/II estimate) The term (1) gives an acceptable contribution to whenever one can find a partition such that
where is a quantity such that
Theorem 4 (Type III estimate) The term (1) gives an acceptable contribution to whenever one can find distinct with
and
The above assertions are oversimplifications; there are some additional minor smallness hypotheses on that are needed, but at the current (small) values of under consideration they are not relevant and so will be omitted.
The deduction of Theorem 1 from Theorems 2, 3, 4 is then accomplished via the following, purely combinatorial, lemma:
Lemma 5 (Subset sum lemma) Let be such that
Let be non-negative reals such that
Then at least one of the following statements holds:
- (Type 0) There is such that .
- (Type I/II) There is a partition such that
where is a quantity such that
- (Type III) One can find distinct with
and
The purely combinatorial question is whether the hypothesis (2) can be relaxed here to a weaker condition. This would allow us to improve the ranges for Theorem 1 (and hence for the values of and alluded to earlier) without needing further improvement on Theorems 2, 3, 4 (although such improvement is also going to be a focus of Polymath8 investigations in the future).
Let us review how this lemma is currently proven. The key sublemma is the following:
Lemma 6 Let , and let be non-negative numbers summing to . Then one of the following three statements holds:
- (Type 0) There is a with .
- (Type I/II) There is a partition such that
- (Type III) There exist distinct with and .
Proof: Suppose that Type I/II never occurs; then every partial sum is either “small” in the sense that it is less than or equal to , or “large” in the sense that it is greater than or equal to , since otherwise we would be in the Type I/II case either with as is and the complement of , or vice versa.
Call a summand “useless” if it cannot be used to turn a small partial sum into a large partial sum, thus there are no such that is small and is large. We then split where are the useless elements and are the useful elements.
By induction we see that if and is small, then is also small. Thus every sum of useful elements is either less than or larger than . Since a useful element must be able to convert a small sum to a large sum, we conclude that every useful element has size greater than . We may assume that we are not in Type 0; then every useful element is at least and at most . In particular, there have to be at least three useful elements, otherwise cannot be as large as 1. As , we have , and we conclude that the sum of any two useful elements is large. Taking to be three useful elements in increasing order, we land in the Type III case.
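To make the case analysis concrete, here is a small brute-force sketch that follows the shape of the proof above for a given tuple. The actual numerical thresholds are suppressed in this excerpt, so they appear below as explicit parameters (`small`, `large`, `type0`), and the values in the example call are made up; when the thresholds satisfy the (suppressed) hypotheses of the lemma, the proof guarantees that the final step really does produce a Type III triple.

```python
from itertools import combinations

def lemma6_case(t, small, large, type0):
    """Classify a tuple t (non-negative reals summing to 1) along the lines of the proof above.

    `small` and `large` are the partial-sum thresholds and `type0` the single-element
    threshold; all three are placeholders for the quantities suppressed in this excerpt.
    """
    idx = range(len(t))

    def total(S):
        return sum(t[i] for i in S)

    # Type I/II: some partial sum is neither small nor large (its complement then
    # works too, assuming the thresholds are symmetric about 1/2).
    for r in range(1, len(t)):
        for S in combinations(idx, r):
            if small < total(S) < large:
                return ("Type I/II", set(S))
    # Type 0: some single summand is very large.
    for i in idx:
        if t[i] >= type0:
            return ("Type 0", i)
    # Otherwise isolate the "useful" summands, as in the proof: those that can turn
    # some small partial sum (possibly the empty sum) into a large one.
    def useful(i):
        others = [j for j in idx if j != i]
        return any(total(S) <= small and total(S) + t[i] >= large
                   for r in range(len(others) + 1)
                   for S in combinations(others, r))

    useful_idx = sorted((i for i in idx if useful(i)), key=lambda i: t[i])
    # Under the lemma's hypotheses there are at least three useful summands and any two
    # of them have a large sum, so the three smallest of them form the Type III triple.
    return ("Type III", tuple(useful_idx[:3]))

# Example with made-up numbers and thresholds:
print(lemma6_case([0.34, 0.33, 0.33], small=0.35, large=0.65, type0=0.65))
```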
Now we see how Lemma 6 implies Lemma 5. Let be as in Lemma 5. We take almost as large as we can for the Type I/II case; thus we set
for some sufficiently small . We observe from (2) that we certainly have
and
with plenty of room to spare. We then apply Lemma 6. The Type 0 case of that lemma then implies the Type 0 case of Lemma 5, while the Type I/II case of Lemma 6 also implies the Type I/II case of Lemma 5. Finally, suppose that we are in the Type III case of Lemma 6. Since
we thus have
and so we will be done if
Inserting (3) and taking small enough, it suffices to verify that
but after some computation this is equivalent to (2).
It seems that there is some slack in this computation; some of the conclusions of the Type III case of Lemma 6, in particular, ended up being “wasted”, and it is possible that one did not fully exploit all the partial sums that could be used to create a Type I/II situation. So there may be a way to make improvements through purely combinatorial arguments.
A technical remark: for the application to Theorem 1, it is possible to enforce a bound on the number of summands in Lemma 5. More precisely, we may assume that is an even number of size at most for any natural number we please, at the cost of adding the additional constraint to the Type III conclusion. Since is already at least , which is at least , one can safely take , so can be taken to be an even number of size at most , which in principle makes the problem of optimising Lemma 5 a fixed linear programming problem. (Zhang takes , but this appears to be overkill. On the other hand, does not appear to be a parameter that overly influences the final numerical bounds.)
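To illustrate the "fixed finite problem" point of view, here is a crude sketch that, for a fixed small number of summands, enumerates tuples on a discretised simplex and reports any that escape all three conclusions. This is a brute-force search rather than the linear program alluded to above, and the thresholds (`small`, `large`, `type0`) are again placeholders for the suppressed quantities, so the particular numbers used carry no arithmetic significance.

```python
from itertools import combinations

def escapes_all_cases(t, small, large, type0):
    """True if the tuple t avoids all three conclusions, with placeholder thresholds."""
    idx = range(len(t))
    if any(t[i] >= type0 for i in idx):
        return False                                          # Type 0 applies
    for r in range(1, len(t)):
        for S in combinations(idx, r):
            if small < sum(t[i] for i in S) < large:
                return False                                  # Type I/II applies
    for i, j, k in combinations(idx, 3):
        if min(t[i] + t[j], t[i] + t[k], t[j] + t[k]) >= large:
            return False                                      # Type III applies
    return True

def grid_tuples(n, steps):
    """All n-tuples of non-negative multiples of 1/steps that sum to 1."""
    def rec(prefix, remaining):
        if len(prefix) == n - 1:
            yield prefix + [remaining]
        else:
            for k in range(remaining + 1):
                yield from rec(prefix + [k], remaining - k)
    for combo in rec([], steps):
        yield [k / steps for k in combo]

# Example run with made-up thresholds; any tuple printed would be a candidate
# counterexample (for these particular placeholder values nothing is printed).
for t in grid_tuples(4, 20):
    if escapes_all_cases(t, small=0.35, large=0.65, type0=0.65):
        print(t)
```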