# Cambridge Combinatorics Seminar

1. Top Mathematician
tjtn
[...]

Bounded gaps, impressive as it is technically, will be lost like tears in the rain unless it leads to methods that give more specific statements like twin primes. "One of the even numbers less than 300 appears infinitely often" is not nearly as compelling. A lot of the publicity around bounded gaps is the personal drama around the proof.

What a crock of shit. It's not that impressive technically, but it's a very compelling result.

It was impressive before Maynard showed up

2. Top Mathematician
mqol
[...]

Bounded gaps, impressive as it is technically, will be lost like tears in the rain unless it leads to methods that give more specific statements like twin primes. "One of the even numbers less than 300 appears infinitely often" is not nearly as compelling. A lot of the publicity around bounded gaps is the personal drama around the proof.

What a crock of shit. It's not that impressive technically, but it's a very compelling result.

I would argue that it is impressive technically as well... The main idea of the GPY sieve (and the multidimensional simplification due to Maynard and Tao) is not that hard to explain to someone mathematically fluent, but as with all things sieve theoretic, the devil is in the details.

Also, I would argue that if Gowers is right about "genuine improvement in understanding or discovery of new phenomena" underlying an improvement to $4^k$, then bounded gaps is the wrong comparison. You should be comparing to the original GPY result.

3. Top Mathematician
evsd
[...]

Bounded gaps, impressive as it is technically, will be lost like tears in the rain unless it leads to methods that give more specific statements like twin primes. "One of the even numbers less than 300 appears infinitely often" is not nearly as compelling. A lot of the publicity around bounded gaps is the personal drama around the proof.

What a crock of shit. It's not that impressive technically, but it's a very compelling result.

The actual statement proved is wrapped in enough layers of indirection to make it only so-so compelling. No way to specify a repeating gap within the finite set of possibilities, no bounds on any interesting related quantity. A lot of heroics to get a vague statement.

4. Top Mathematician
mqol
[...]

What a crock of shit. It's not that impressive technically, but it's a very compelling result.

The actual statement proved is wrapped in enough layers of indirection to make it only so-so compelling. No way to specify a repeating gap within the finite set of possibilities, no bounds on any interesting related quantity. A lot of heroics to get a vague statement.

Twin primes says that $\liminf_{n\to\infty}\,(p_{n+1}-p_n)=2$.

Zhang says (for example) $\liminf_{n\to\infty}\,(p_{n+1}-p_n)\le 7\times 10^{7}$.

How is this vague?
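
To make the object concrete: the quantity in both statements is just the gap between consecutive primes, which anyone can tabulate. A throwaway sketch (the names are mine, for illustration only):

```python
# Throwaway sketch: tabulate the prime gaps p_{n+1} - p_n that both
# statements above concern. primes_below is a plain Eratosthenes sieve.
def primes_below(limit):
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_below(100)
gaps = [q - p for p, q in zip(ps, ps[1:])]
# The gap 2 (twin primes) appears 8 times below 100; the twin prime
# conjecture says it keeps appearing forever, and Zhang's theorem says
# SOME gap below 7 x 10^7 keeps appearing forever.
print(gaps.count(2))  # -> 8
```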

5. Top Mathematician
gyzc
[...]

The actual statement proved is wrapped in enough layers of indirection to make it only so-so compelling. No way to specify a repeating gap within the finite set of possibilities, no bounds on any interesting related quantity. A lot of heroics to get a vague statement.

Twin primes says that $\liminf_{n\to\infty}\,(p_{n+1}-p_n)=2$.

Zhang says (for example) $\liminf_{n\to\infty}\,(p_{n+1}-p_n)\le 7\times 10^{7}$.

How is this vague?

More importantly, how is it interesting?

6. Top Mathematician
evsd

Zhang says (for example) $\liminf_{n\to\infty}\,(p_{n+1}-p_n)\le 7\times 10^{7}$.

How is this vague?

Did you read the post with that word? For starters, "No way to specify a repeating gap within the finite set of possibilities". If you grok the details of the proof, there are highly technical statements ($\theta < 1/2$) that are compelling to experts who live and breathe the stuff. Just not anything in the same plane as twin primes and related heuristics.

7. Top Mathematician
oukq

^ I have hesitated to weigh in on this thread, as I am neither a combinatorist nor an anti-combinatorist, but knee-jerk reactions like the one above remind me of what one writer once called the "Argument From Personal Incredulity". Gowers himself mentions the problem of finding the "correct" value of $k^{-1}\log_2 R(k,k)$ in his pre-2000 essay www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf, where he makes the following comments:

I consider this to be one of the major problems in combinatorics and have devoted many months of my life unsuccessfully trying to solve it. And yet I feel almost embarrassed to write this, conscious as I am that many mathematicians would regard the question as more of a puzzle than a serious mathematical problem.

The reason I think so highly of this problem is that it has become clear to me (as it has to many others who have tried it) that it is very unlikely to be solved by a clever ad hoc argument, tailored to this problem only. (Actually, I am referring mainly to the part of the problem concerning the upper bound for $R(k)$.) To be a little vague about it, the proof that $R(k) \le 4^k$ has a local character, in the sense that one throws away most of the graph and concentrates on small neighbourhoods of a few vertices. A better bound seems to demand a more global argument, involving the whole graph, and there is simply no adequate model for such an argument in graph theory. Therefore, a solution to this problem is almost bound to introduce a major new technique.

It may turn out that the new paper shows the final sentence is incorrect (I leave it to actual combinatorists to comment), but there seems to be a misconception among some on this thread: "oh, they couldn't go below $4^k$ because people couldn't be bothered to optimize some parameters, lol". It seems to be more the case that there was a qualitative/conceptual gap in the collective understanding of graph colourings, so that the techniques people had available easily yielded $4^k$ but could not go below it. Certainly people had worked on this between 1950 and 2000 without being able to do better than polynomial improvements to the exponential upper bound; IIRC there were some contributions by Thomason.

The problem has status among (some?) combinatorists, because it was felt that progress would require a genuine improvement in understanding or discovery of new phenomena, and in that limited sense people are hailing it as a breakthrough akin to the first proof of bounded gaps in primes, rather than the (more) incremental improvements that followed.

A more in-depth discussion by Gowers can be found in his paper 'Rough structure and classification' (some time earlier than his 2 cultures essay). He listed a number of his favourite open problems, including what seemed to be his top 3 questions in graph Ramsey theory (and probably the top problems he mentioned in his tweet): diagonal Ramsey, multicolour Ramsey $R(3,3,\dots,3)$, and Erdős–Hajnal. All 3 are still very difficult.

8. Top Mathematician
samb

I hope actual mathematicians are not using this forum. So much ignorance and stupidity on full display.

For transparency, I think this R(k,k) breakthrough is huge, and one of the biggest breakthroughs in combinatorics in a long time. However, I don't think it can be compared to FLT (and I don't think many serious people are trying to compare it to FLT).

First of all, it is just a true, empirical fact that several top mathematicians have thought seriously about this problem (getting exponential improvement to diagonal Ramsey upper bound) for at least some decent amount of time.

The problem is interesting for a start because the proof of the lower bound was one of the first uses of what is now known as the probabilistic method, which is omnipresent in modern combinatorics and theoretical computer science. The upper bound is a simple induction, which gives a bound far from the lower bound. And no one, despite much effort, has been able to change either of the exponents.
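
(Aside, for reference: both classical bounds are textbook material and fit in a few lines; none of this is from the new paper.)

```latex
% Upper bound (Erdos--Szekeres, 1935): the recursion
%   R(s,t) \le R(s-1,t) + R(s,t-1)
% unrolls to a binomial coefficient:
\[
  R(k,k) \;\le\; \binom{2k-2}{k-1} \;\le\; 4^{k-1}.
\]
% Lower bound (Erdos, 1947): colour the edges of K_n uniformly at random.
% A fixed k-vertex set is monochromatic with probability 2^{1-\binom{k}{2}},
% so if
\[
  \binom{n}{k}\, 2^{\,1-\binom{k}{2}} \;<\; 1,
\]
% then some colouring of K_n has no monochromatic K_k, i.e. R(k,k) > n.
% This holds for n = \lfloor 2^{k/2} \rfloor when k \ge 3, giving
% R(k,k) > 2^{k/2}.  Closing the gap between sqrt(2) and 4 in the base
% of the exponent is exactly the problem under discussion.
```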

The sensitivity conjecture was not proven with "elementary algebra", at least not in the sense of "high school algebra". Huang found a 'proof from the book' that uses spectral graph theory in a very nice way to prove a nice fact about the Boolean hypercube graph, which then implies the sensitivity conjecture as it was originally formulated.

There seems to be this mass delusion about what combinatorics research is like, despite several (recent) examples repeatedly disproving the delusion. The union-closed conjecture and Kelley-Meka were not proven by "Hungarian combinatorics", where you take an energetic IMO kid and have him make several puzzle-like deductions. Gilmer used information theory, and Kelley-Meka's paper could be considered borderline harmonic analysis. The proofs of Szemeredi's theorem all involve nice ideas, some of which led to higher order Fourier analysis. Literally none of this stuff is Hungarian-style combinatorics (which I don't personally have an issue with).

9. Top Mathematician
evsd

The problem is interesting for a start because the proof of the lower bound was one of the first uses of what is now known as the probabilistic method, which is omnipresent in modern combinatorics and theoretical computer science. The upper bound is a simple induction, which gives a bound far from the lower bound. And no one, despite much effort, has been able to change either of the exponents.

This reads like an admission that the problem is artificial, and not important in its own right (in the sense that not much else in mathematics depends on the size of Ramsey numbers), but is kept around to test improvements in/on the probabilistic method. If there is some intrinsic interest to all this Ramsey stuff I have the feeling people could explain it without talking over and over about handshakes.

10. Top Mathematician
nwtf

The problem is interesting for a start because the proof of the lower bound was one of the first uses of what is now known as the probabilistic method, which is omnipresent in modern combinatorics and theoretical computer science. The upper bound is a simple induction, which gives a bound far from the lower bound. And no one, despite much effort, has been able to change either of the exponents.

This reads like an admission that the problem is artificial, and not important in its own right (in the sense that not much else in mathematics depends on the size of Ramsey numbers), but is kept around to test improvements in/on the probabilistic method. If there is some intrinsic interest to all this Ramsey stuff I have the feeling people could explain it without talking over and over about handshakes.

In fact, (Hungarian-style) combinatorialists work the way they do because they firmly believe that the methods/techniques used to approach the big problems could be useful in many settings, even if they fail (very badly) to make substantial progress on those big problems. As the authors of that diagonal Ramsey paper themselves emphasised, the quasirandomness method made a huge impact in the area despite being nearly useless for the upper bound question.

11. Top Mathematician
sfhc

I'm dismayed that people contest the importance of the Ramsey number bound (I'm not even a combinatorialist, just someone with a normal mathematical education). I think the combo community overplayed its hand by hyping genuinely non-important results (sensitivity conjecture, looking at you), so now it looks like the boy who cried wolf once genuine breakthroughs come in, e.g. with the Ramsey numbers of 3APs. I guess they just didn't expect that they would ever make progress on important problems, so they started hyping a lot of crap.

12. Top Mathematician
evsd

I'm dismayed that people contest the importance of the Ramsey number bound (I'm not even a combinatorialist, just someone with a normal mathematical education).

I'm familiar with all the reasons it's hyped. But there is a basic problem with the entirety of Ramsey theory, though $R(k,k)$ and $R(3,k)$ are admittedly closer to being natural problems than all the others in that field.

13. Top Mathematician
vobl

I'm dismayed that people contest the importance of the Ramsey number bound (I'm not even a combinatorialist, just someone with a normal mathematical education). I think the combo community overplayed its hand by hyping genuinely non-important results (sensitivity conjecture, looking at you), so now it looks like the boy who cried wolf once genuine breakthroughs come in, e.g. with the Ramsey numbers of 3APs. I guess they just didn't expect that they would ever make progress on important problems, so they started hyping a lot of crap.

To be fair, the most impressive result in all of Ramsey theory is still Kim's $R(3,k)$ theorem, which earned him a Fulkerson Prize: the first and only asymptotically exact result on the behaviour of Ramsey numbers. Needless to say, his semi-random method inspired many developments in extremal and probabilistic combinatorics.

14. Top Mathematician
uydi

For transparency, I think this R(k,k) breakthrough is huge, and one of the biggest breakthroughs in combinatorics in a long time.

Maybe, but only for sociological reasons, not for its intrinsic interest. To give just one example, Keevash's work is simply far more fundamental, beautiful and mathematically interesting.

15. Top Mathematician
gnir

Both the results for the Ramsey numbers and the results for combinatorial designs by Keevash are breakthroughs. I think the debate over which is better comes down to a matter of taste.

16. Top Mathematician
evsd

Both the results for the Ramsey numbers and the results for combinatorial designs by Keevash are breakthroughs. I think the debate over which is better comes down to a matter of taste.

There is a distinction here. Keevash was also overhyped, in the sense that the smallest parameters of a new design proven to exist by his method must be huge. But combinatorial designs have a raison d'être (e.g., the realization of some simple groups) that doesn't come down to sociology, stories about parties and handshakes, "because it's there", or serving as a proxy for some other things that actually are interesting.

17. Top Mathematician
fswd

Both the results for the Ramsey numbers and the results for combinatorial designs by Keevash are breakthroughs. I think the debate over which is better comes down to a matter of taste.

There is a distinction here. Keevash was also overhyped, in the sense that the smallest parameters of a new design proven to exist by his method must be huge. But combinatorial designs have a raison d'être (e.g., the realization of some simple groups) that doesn't come down to sociology, stories about parties and handshakes, "because it's there", or serving as a proxy for some other things that actually are interesting.

As someone said above, the central problems of Ramsey theory (or Hungarian-style combinatorics as a whole) are not just about handshakes and parties and such, but about the methods employed to tackle them, which are believed to be widely applicable. It's just that those problems are "very simple to state but very difficult to solve"; people in the field think they are beautiful/attractive and would like to understand not just why their answers are yes/no, but also how their mathematical power increased once they finally got a grip on them.

It goes without saying that those simple-to-state open problems are often long-standing because they're more likely to be asked first and have resisted myriad attempts at proof or disproof. There are, of course, innocent-looking problems which turned out to be stupid questions indeed and didn't stand the test of time. But this is why the big, old (and rare) open questions got their status.
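
For what it's worth, the smallest "party" statement, $R(3,3)=6$, is the one case anyone can brute-force directly. A quick sketch (my own throwaway code, not from any paper):

```python
from itertools import combinations, product

def has_mono_triangle(n, colour):
    """colour maps each edge (i, j), i < j, of K_n to one of two colours."""
    return any(
        colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

# Every 2-colouring of the 15 edges of K_6 has a monochromatic triangle
# (all 2^15 = 32768 cases checked), so R(3,3) <= 6 ...
edges6 = list(combinations(range(6), 2))
k6_forced = all(
    has_mono_triangle(6, dict(zip(edges6, bits)))
    for bits in product((0, 1), repeat=len(edges6))
)

# ... while colouring K_5 "pentagon edge vs pentagon chord" avoids one
# (both colour classes are 5-cycles, hence triangle-free), so R(3,3) = 6.
pentagon = {(i, j): (j - i) % 5 in (1, 4) for i, j in combinations(range(5), 2)}
k5_escape = not has_mono_triangle(5, pentagon)

print(k6_forced and k5_escape)  # -> True
```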

18. Top Mathematician
jkul

Mathematical progress is slow, and we should celebrate achievements when they occur. The achievements we find in real life may not impress outsiders as much as other outstanding open problems. But this is the nature of mathematical research, and we should all take our opportunities to celebrate where we can.

So much of the negativity reeks of jealousy; someone else is getting the positive attention that you so crave and feel they do not deserve. (“Mom, it’s not fair!!” the poster screams while typing a furious screed on how this latest achievement is not true progress.)

Btw, I say this very much as an outsider to combinatorics and with no personal attachment to the present problem or anyone involved in it. This is a larger cultural problem on this site that I bump into so often.

19. Top Mathematician
evsd

As someone said above, the central problems of Ramsey theory (or Hungarian-style combinatorics as a whole) are not just about handshakes and parties and such, but about the methods employed to tackle them, which are believed to be widely applicable. It's just that those problems are "very simple to state but very difficult to solve"; people in the field think they are beautiful/attractive and would like to understand not just why their answers are yes/no, but also how their mathematical power increased once they finally got a grip on them.

It goes without saying that those simple-to-state open problems are often long-standing because they're more likely to be asked first and have resisted myriad attempts at proof or disproof. There are, of course, innocent-looking problems which turned out to be stupid questions indeed and didn't stand the test of time. But this is why the big, old (and rare) open questions got their status.

That's a lot of words to repeat the exact list of reasons already given as unconvincing (minus the parties and handshakes). I get it, and I think everyone reading this also understands, that serving as a test case for the probabilistic method is a not-uninteresting technical justification for studying something. But there are many other test cases, which raises the question of what is special about $R(k,k)$ in the universe of test questions for that technique. None of this speaks to the intrinsic mathematical interest of Ramsey-theoretic questions, or to the ratio of that interest to the amount of hype.

20. Top Mathematician
pocq

Both the results for the Ramsey numbers and the results for combinatorial designs by Keevash are breakthroughs. I think the debate over which is better comes down to a matter of taste.

Really, no. On one side, improving 4 to 3.9999 via an ad hoc argument when the experts think the answer is $\sqrt{2}$; on the other, showing the existence of designs for all parameters, where previously none had been known to exist even computationally for small parameters, with an interesting and conceptual method.

Also, BTW, though Gowers is harping on about how this is the biggest breakthrough of the last half century etc., he himself has done much more important things: for instance his work on Szemerédi's theorem (which is more important than the Ramsey thing by so much that he might have simply forgotten about it) and his hypergraph regularity work.
