A pair of mathematicians, Alexei Lisitsa and Boris Konev of the University of Liverpool, U.K., have raised an interesting question: if a computer produces a proof of a math problem that is too big for humans to study, can it be judged as true anyway? In a paper they've uploaded to the preprint server arXiv, the two describe how they set a computer program to proving a small part of what's known as the Erdős discrepancy problem. The proof produced a data file 13 gigabytes in size, far too large for any human to check, leading to questions as to whether it can be taken as a real proof.
Anyone who has taken a high-level math course can attest that math proofs can sometimes grow long, very long. Some mathematicians have dedicated years to creating them, filling whole volumes in the process. Quite naturally, then, mathematicians have increasingly turned to computers to perform some of the more mundane parts of proof creation. It wasn't long, however, before some began to realize that at some point the proofs a computer spits out would be too long, too complicated, or both for a human reader to fully comprehend. With this new effort, it appears that day may have come.
The Erdős discrepancy problem concerns infinite sequences made up of just the two numbers 1 and -1. Named after Paul Erdős, it asks what happens when you pick a step size, walk along the positions that are multiples of that step, and add up the entries you encounter; the largest absolute value such a running sum reaches is called the discrepancy. Konev and Lisitsa encoded the problem (with a discrepancy bound of 2) for what they describe as state-of-the-art SAT solvers: software that determines whether a logical formula can be satisfied. The proof the computer came up with shows, the two researchers claim, "that no sequence of length 1161 and discrepancy 2 exists."
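For a finite sequence, the discrepancy in question can be computed by brute force: for every step size d, add up the entries at positions d, 2d, 3d, and so on, and track the largest absolute running sum seen. A minimal Python sketch of that definition (an illustration only, not the authors' SAT encoding):

```python
def discrepancy(seq):
    """Largest absolute partial sum of a +1/-1 sequence taken along
    any progression d, 2d, 3d, ... (positions counted from 1)."""
    n = len(seq)
    worst = 0
    for d in range(1, n + 1):            # step size of the progression
        total = 0
        for pos in range(d, n + 1, d):   # positions d, 2d, 3d, ...
            total += seq[pos - 1]
            worst = max(worst, abs(total))
    return worst

print(discrepancy([1, -1]))              # prints 1
print(discrepancy([1, -1, 1, -1]))       # prints 2
```

Note that even the simple alternating sequence hits discrepancy 2 almost immediately: along the even positions (step size 2) every entry is -1, so the running sum drifts steadily downward. This hints at why keeping every progression's sum small at once is so hard.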
Unfortunately, the file produced was too large for a human to read; for comparison's sake, it was a couple of gigabytes larger than the whole of Wikipedia. This leads to an interesting conundrum for mathematicians going forward.
Do we begin accepting proofs that computers create as actual proofs if they are too long, or perhaps too difficult, for our minds to comprehend? If so, we might just be at a crossroads. Do we trust computers to handle things for us that are beyond our abilities, or do we constrain our reach by refusing to allow the creation of things that we cannot ever possibly understand?