Mathematical artist Hamid Naderi Yeganeh, a student of mathematics at the University of Qom in Iran, won the Gold Medal at the 38th Iranian Mathematical Society's Competition in May 2014. He creates figures with thousands of mathematical line segments. “The endpoints of each segment are related to the trigonometric functions. We can create many beautiful symmetric figures by this method. Also, there are some interesting asymmetric figures, such as fish,” Hamid explains. Hundreds of stray threads fray at the edges of a mesmerizing geometric tapestry.
Of course, the fields of art and math have long run parallel -- think of the Golden Ratio and M.C. Escher's ever-winding staircases. Yeganeh himself cites Escher's "Reptiles" and "Circle Limit III" as inspirations for his work. Escher's beloved illusions, rooted as they were in mathematical concepts such as tessellation and the more head-scratching idea of hyperbolic geometry, are a testament to how math-oriented art can be.
Though the marriage of the two seemingly separate fields of study is a natural one, Yeganeh's captions for his works are best left to be deciphered by those who studied math beyond the high school level. An illustrious gray-and-purple work featuring soft lines that loop in the shape of a lotus flower bears the considerably less spiritual title, "4,000 Line Segments."
A team at the University of Central Missouri, headed by Curtis Cooper, has announced, via a press release from the Mersenne organization, that it has found the largest known prime number: 2^74,207,281 - 1, a number with over 22 million digits. The new record beats the old one by approximately 5 million digits.
Cooper and his team are part of the Great Internet Mersenne Prime Search (GIMPS) collaboration, which, as its name suggests, is an effort by many volunteers to find ever-larger prime numbers—or, more specifically, a particular class of primes called Mersenne primes, each of which is one less than a power of two. Not surprisingly, Cooper and his team also held the old record; they have now broken it four times. Cooper told the press that he was notified by an email, sent by the software running on a PC, that the prime number had been found. The find came after a month of number crunching on a single Intel-based PC. Interestingly, the PC tried to notify Cooper and his team about the find back in September of last year, but a glitch prevented the message from being sent. It was only during a maintenance cycle that the message reporting the newly found prime went out. The official discovery date is January 7th.
The search for new and bigger prime numbers is conducted using software developed by the GIMPS team, called Prime95—it grinds away, day after day, until a new prime number is found. And while the numbers it finds are of mathematical interest, they no longer serve much, if any, practical use. The software has been put to other purposes, though: it has found flaws in Intel CPUs, for example.
The new prime number has been named M74207281—in the press release, the team says that it was "calculated by multiplying together 74,207,281 twos then subtracting one." It has already been tested and confirmed by three independent teams running software on different machines. The find makes Cooper eligible for a $3,000 award. The GIMPS group also announced its goal of winning a $150,000 award by finding a prime number with 100 million digits.
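Both parts of the announcement can be sanity-checked with a short script. The sketch below runs the Lucas–Lehmer test, the same primality test the GIMPS software applies to Mersenne numbers, on small exponents, and computes the record's digit count from logarithms rather than by building the 22-million-digit number.

```python
import math

def lucas_lehmer(p):
    """Lucas-Lehmer test: 2**p - 1 (with p prime) is prime iff
    s_(p-2) == 0, where s_0 = 4 and s_k = s_(k-1)**2 - 2 (mod 2**p - 1)."""
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def mersenne_digits(p):
    """Digit count of 2**p - 1, via floor(p * log10(2)) + 1."""
    return math.floor(p * math.log10(2)) + 1

print(lucas_lehmer(7))            # True: 2**7 - 1 = 127 is prime
print(lucas_lehmer(11))           # False: 2**11 - 1 = 2047 = 23 * 89
print(mersenne_digits(74207281))  # 22338618: over 22 million digits
```

Running the real test on the exponent 74,207,281 takes serious hardware, which is why the actual search is distributed across volunteers' machines.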
The Royal Swedish Academy of Sciences has announced the recipients of the 2016 Crafoord Prizes in Mathematics and Astronomy. The Crafoord Prize in Mathematics has been awarded to Yakov Eliashberg of Stanford University “for the development of contact and symplectic topology and groundbreaking discoveries of rigidity and flexibility phenomena.”
The 2016 Crafoord Prize in Astronomy has been awarded to Roy Kerr of the University of Canterbury, Christchurch, New Zealand and to Roger Blandford of Stanford University “for fundamental work on rotating black holes and their astrophysical consequences.” The prize money is 6 million Swedish kronor per prize, and the Crafoord Prize in Astronomy is shared equally between the Laureates.
The Royal Swedish Academy of Sciences, founded in 1739, is an independent organization whose overall objective is to promote the sciences and to strengthen their influence in society. The Academy states that it “takes special responsibility for the natural sciences and mathematics, but endeavors to promote the exchange of ideas between various disciplines.”
The Academy awarded the Crafoord Prize for the first time in 1982 after receiving “a considerable donation” from the Lund industrialist Holger Crafoord and his wife Anna-Greta in 1980. This donation forms the basis of the Anna-Greta and Holger Crafoord Fund, whose aims are “to promote pure research in mathematics and astronomy, biosciences (in the first place ecology), geosciences and polyarthritis (rheumatoid arthritis).” These disciplines are chosen so as to complement those for which the Nobel Prizes are awarded.
The prize sum of SEK 6 million makes the Crafoord one of the world's largest scientific prizes. The international prize is awarded for one field per year, in a fixed order, to researchers who have made decisive contributions within their fields.
In 1655 the English mathematician John Wallis published a book in which he derived a formula for pi as the product of an infinite series of ratios. Now researchers from the University of Rochester, in a surprise discovery, have found the same formula in quantum mechanical calculations of the energy levels of a hydrogen atom.
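Wallis's product itself is easy to evaluate numerically (a quick illustration, separate from the Rochester calculation): each factor pairs an even numerator against the odd numbers on either side of it.

```python
import math

def wallis_pi(terms):
    """Approximate pi via Wallis's 1655 product:
    pi / 2 = (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ..."""
    product = 1.0
    for n in range(1, terms + 1):
        product *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
    return 2 * product

print(wallis_pi(10))       # already near 3, but convergence is slow
print(wallis_pi(100000))   # agrees with pi to about five decimal places
```

The product approaches pi from below, and the error after N factors shrinks only like 1/N, so it is a beautiful formula rather than a practical way to compute pi.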
"We weren't looking for the Wallis formula for pi. It just fell into our laps," said Carl Hagen, a particle physicist at the University of Rochester. Having noticed an intriguing trend in the solutions to a problem set he had developed for students in a class on quantum mechanics, Hagen recruited mathematician Tamar Friedmann and they realized this trend was in fact a manifestation of the Wallis formula for pi.
"It was a complete surprise - I jumped up and down when we got the Wallis formula out of equations for the hydrogen atom," said Friedmann. "The special thing is that it brings out a beautiful connection between physics and math. I find it fascinating that a purely mathematical formula from the 17th century characterizes a physical system that was discovered 300 years later."
The researchers report their findings in the Journal of Mathematical Physics.
In quantum mechanics, a technique called the variational approach can be used to approximate the energy states of quantum systems, like molecules, that can't be solved exactly. Hagen was teaching the technique to his students when he decided to apply it to a real-world object: the hydrogen atom. The hydrogen atom is actually one of the rare quantum mechanical systems whose energy levels can be solved exactly, but by applying the variational approach and then comparing the result to the exact solution, students could calculate the error in the approximation.
When Hagen started solving the problem himself, he immediately noticed a trend. The error of the variational approach was about 15 percent for the ground state of hydrogen, 10 percent for the first excited state, and kept getting smaller as the excited states grew larger. This was unusual, since the variational approach normally only gives good approximations for the lowest energy levels.
Hagen recruited Friedmann to take a look at what would happen with increasing energy. They found that the limit of the variational solution approaches the model of hydrogen developed by physicist Niels Bohr in the early 20th century, which depicts the orbits of the electron as perfectly circular. This would be expected from Bohr's correspondence principle, which states that for large radius orbits, the behavior of quantum systems can be described by classical physics.
"At the lower energy orbits, the path of the electron is fuzzy and spread out," Hagen explained. "At more excited states, the orbits become more sharply defined and the uncertainty in the radius decreases."
A Japanese mathematician claims to have solved one of the most important problems in his field.
Sometime on the morning of 30 August 2012, Shinichi Mochizuki quietly posted four papers on his website. The papers were huge — more than 500 pages in all — packed densely with symbols, and the culmination of more than a decade of solitary work. They also had the potential to be an academic bombshell. In them, Mochizuki claimed to have solved the abc conjecture, a 27-year-old problem in number theory that no other mathematician had even come close to solving. If his proof was correct, it would be one of the most astounding achievements of mathematics this century and would completely revolutionize the study of equations with whole numbers.
Mochizuki, however, did not make a fuss about his proof. The respected mathematician, who works at Kyoto University's Research Institute for Mathematical Sciences (RIMS) in Japan, did not even announce his work to peers around the world. He simply posted the papers, and waited for the world to find out.
Probably the first person to notice the papers was Akio Tamagawa, a colleague of Mochizuki's at RIMS. He, like other researchers, knew that Mochizuki had been working on the conjecture for years and had been finalizing his work. That same day, Tamagawa e-mailed the news to one of his collaborators, number theorist Ivan Fesenko of the University of Nottingham, UK. Fesenko immediately downloaded the papers and started to read. But he soon became “bewildered”, he says. “It was impossible to understand them.”
Fesenko e-mailed some top experts in Mochizuki's field of arithmetic geometry, and word of the proof quickly spread. Within days, intense chatter began on mathematical blogs and online forums (see Nature http://doi.org/725; 2012). But for many researchers, early elation about the proof quickly turned to scepticism. Everyone — even those whose area of expertise was closest to Mochizuki's — was just as flummoxed by the papers as Fesenko had been. To complete the proof, Mochizuki had invented a new branch of his discipline, one that is astonishingly abstract even by the standards of pure maths. “Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” number theorist Jordan Ellenberg, of the University of Wisconsin–Madison, wrote on his blog a few days after the paper appeared.
Three years on, Mochizuki's proof remains in mathematical limbo — neither debunked nor accepted by the wider community. Mochizuki has estimated that it would take an expert in arithmetic geometry some 500 hours to understand his work, and a maths graduate student about ten years. So far, only four mathematicians say that they have been able to read the entire proof.
GNAWING ON HIS left index finger with his chipped old British teeth, temporal veins bulging and brow pensively squinched beneath the day-before-yesterday’s hair, the mathematician John Horton Conway unapologetically whiles away his hours tinkering and thinkering—which is to say he’s ruminating, although he will insist he’s doing nothing, being lazy, playing games.
Based at Princeton University, though he found fame at Cambridge (as a student and professor from 1957 to 1987), Conway, 77, claims never to have worked a day in his life. Instead, he purports to have frittered away reams and reams of time playing. Yet he is Princeton’s John von Neumann Professor in Applied and Computational Mathematics (now emeritus). He’s a fellow of the Royal Society. And he is roundly praised as a genius. “The word ‘genius’ gets misused an awful lot,” said Persi Diaconis, a mathematician at Stanford University. “John Conway is a genius. And the thing about John is he’ll think about anything.… He has a real sense of whimsy. You can’t put him in a mathematical box.”
Can the flap of a butterfly's wings in Brazil set off a tornado in Texas? This intriguing hypothetical scenario, commonly called "the butterfly effect," has come to embody the popular conception of a chaotic system, in which a small difference in initial conditions will cascade toward a vastly different outcome in the future.
Understanding and modeling chaos can help address a variety of scientific and engineering questions, and so researchers have worked to develop better mathematical definitions of chaos. These definitions, in turn, will aid the construction of models that more accurately represent real-world chaotic systems.
Now, researchers from the University of Maryland have described a new definition of chaos that applies more broadly than previous definitions. This new definition is compact, can be easily approximated by numerical methods and works for a wide variety of chaotic systems. The discovery could one day help advance computer modeling across a wide variety of disciplines, from medicine to meteorology and beyond. The researchers present their new definition in the July 28, 2015 issue of the journal Chaos.
"Our definition of chaos identifies chaotic behavior even when it lurks in the dark corners of a model," said Brian Hunt, a professor of mathematics with a joint appointment in the Institute for Physical Science and Technology (IPST) at UMD. Hunt co-authored the paper with Edward Ott, a Distinguished University Professor of Physics and Electrical and Computer Engineering with a joint appointment in the Institute for Research in Electronics and Applied Physics (IREAP) at UMD.
The study of chaos is relatively young. MIT meteorologist Edward Lorenz, whose work gave rise to the term "the butterfly effect," first noticed chaotic characteristics in weather models in the mid-20th century. In 1963, he published a set of differential equations to describe atmospheric airflow and noted that tiny variations in initial conditions could drastically alter the solution to the equations over time, making it difficult to predict the weather in the long term.
Mathematically, extreme sensitivity to initial conditions can be represented by a quantity called a Lyapunov exponent. This number is positive if two infinitesimally close starting points diverge exponentially as time progresses. Yet, Lyapunov exponents have limitations as a definition of chaos: they only test for chaos in particular solutions of a model, not in the model itself, and they can be positive even when the underlying model is considered too straightforward to be deemed chaotic.
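The Lyapunov-exponent test can be demonstrated on the logistic map x → r·x·(1 − x), a standard toy chaotic system (chosen here for illustration; it is not one of the Maryland group's examples). Averaging log|f′(x)| along a trajectory estimates the exponent, and at r = 4 the exact value is known to be ln 2 ≈ 0.693.

```python
import math

def logistic_lyapunov(r, x0=0.3, burn_in=1000, samples=200000):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1 - 2x)|
    along a trajectory. Positive => sensitive dependence."""
    x = x0
    for _ in range(burn_in):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(samples):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / samples

print(logistic_lyapunov(4.0))  # ~0.693 = ln 2: chaotic
print(logistic_lyapunov(2.5))  # negative: orbit settles to a fixed point
```

This also illustrates the limitation the article mentions: the number is a property of one trajectory of one parameter setting, not of the model as a whole.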
At the turn of the 21st century, physicists Thomas Fink and Yong Mao at the University of Cambridge in England analyzed what knots could be tied with a normal-sized necktie. They found 85 knots, which included the four traditional ones as well as nine more "aesthetic knots" they deemed nice-looking enough to wear. However, Fink and Mao's knots all forced the front of the tie knot — the "façade" — to be a flat stretch of fabric. In 2003, in "The Matrix Reloaded," the second of the films making up the Matrix trilogy, the villain known as "The Merovingian" introduced fancy new knots where the façade was textured with many surfaces and edges instead, tied with the narrow end of a tie.
"In early winter 2013, my wife showed me a video tutorial demonstrating the Trinity Knot," Vejdemo-Johansson recalled. "I then quickly found the Eldredge, and I was utterly hooked. I stopped wearing bowties, my main fashion affectation before this point, and started wearing neckties, always with one of these new style knots, with a structured and interesting façade."
After discovering these novel knots, Vejdemo-Johansson found that Fink and Mao's research intentionally excluded them. Vejdemo-Johansson and his colleagues then began investigating how many more tie knots might exist. For the sake of comfort, they focused on ties up to 13 moves, and they did not count bowtie knots.
Fink and Mao developed a system for describing any tie knot as a sequence of symbols, with each symbol representing a certain move, such as whether the part of the necktie you use to tie the knot goes to your right or left, or wraps over or under the knot. They also specified a number of rules, such as how tie knots must be completed by folding one end of the tie under the rest of it.
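The actual grammar cannot be reconstructed from this article alone, but its flavor can be shown with a toy version (the alphabet and the single constraint below are simplifying assumptions, not Fink and Mao's real rules): each move wraps the tie end through one of three regions, Left, Right or Centre, and consecutive moves must visit different regions.

```python
from itertools import product

MOVES = "LRC"  # toy alphabet: Left, Right, Centre regions

def count_sequences(length):
    """Count move sequences in which no two consecutive moves
    visit the same region (a simplified stand-in for the real
    tie-knot constraints)."""
    return sum(
        1
        for seq in product(MOVES, repeat=length)
        if all(a != b for a, b in zip(seq, seq[1:]))
    )

# With 3 regions and a no-repeat rule there are 3 * 2**(n-1) sequences:
for n in range(1, 6):
    print(n, count_sequences(n))  # 3, 6, 12, 24, 48
```

Even this toy count grows exponentially with the number of moves, which is why loosening the rules, as Vejdemo-Johansson's group did, multiplies the knot count so dramatically.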
Vejdemo-Johansson and his colleagues discovered a way to radically simplify this language of symbols and the number of rules for tying ties. This greatly expanded the potential number of ways that neckties can be tied. "We show that if you extend the rules to allow more recently created tie knots, there are many, many, many more knots than Fink and Mao counted," Vejdemo-Johansson said.
All in all, the researchers discovered 266,682 tie knots that seem tie-able with a normal necktie. Of these, 24,882 are singly-tucked knots like most tie knots that people know how to tie — when you pull the active end of the tie knot under itself to lock it in place, you lift a single piece of cloth to push the tie underneath. The scientists have a website where one can see random knots from this list with instructions on how to tie them.
Blooms are 3D-printed sculptures that are designed to animate when spun under a strobe light. The rotation speed is synchronized to the strobe so that one flash occurs every time the bloom turns 137.5°—the golden angle. The placement of the appendages is determined by the same method nature uses in pinecones and sunflowers. Each petal on the bloom is placed at a unique distance from the top-center of the form. If you follow what appears to be a single petal as it works its way out and down the bloom, what you are actually seeing is all the petals on the bloom in the order of their respective distances from the top-center. The number of spirals on any of these blooms is always a Fibonacci number.
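The placement rule is easy to simulate. The sketch below uses the standard Vogel phyllotaxis model (the square-root radius is an assumption of that model, not something stated in the article): petal i sits at angle i × 137.5° and at a radius growing with √i, so every petal lands at a unique distance from the center.

```python
import math

GOLDEN_ANGLE = math.radians(137.5)  # the golden angle cited for the blooms

def bloom_petals(n):
    """Place n petals the way pinecones and sunflowers do:
    petal i at angle i * 137.5 degrees, radius sqrt(i),
    giving each petal a unique distance from the center."""
    petals = []
    for i in range(n):
        theta = i * GOLDEN_ANGLE
        r = math.sqrt(i)
        petals.append((r * math.cos(theta), r * math.sin(theta)))
    return petals

points = bloom_petals(256)
print(len(points))  # 256 petals, no two at the same radius
```

Plotting these points reveals the familiar interlocking spirals whose counts are consecutive Fibonacci numbers, which is exactly why the strobe trick works at the golden angle.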
Urban sociologists have long known that a set of remarkable laws governs large-scale interactions between individuals, such as the probability that one person will befriend another and the size of the cities they live in. The latter is an example of Zipf's law. If cities are listed according to size, then the rank of a city is inversely proportional to the number of people who live in it. For example, if the biggest city in the US has a population of 8 million people, the second-biggest city will have a population of 8 million divided by 2, the third-biggest will have a population of 8 million divided by 3, and so on.
This simple relationship is known as a scaling law and turns out to fit the observed distribution of city sizes extremely well. Another interesting example is the probability that one person will be friends with another. This turns out to be inversely proportional to the number of people who live closer to the first person than the second.
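The rank-size rule can be written out in a few lines (a toy illustration using the article's 8-million example, not the Harvard model itself): the defining property is that rank times population is constant down the list.

```python
def zipf_city_sizes(largest, n_cities):
    """Rank-size rule: the k-th largest city has
    population (largest city) / k."""
    return [largest / rank for rank in range(1, n_cities + 1)]

sizes = zipf_city_sizes(8_000_000, 5)
print(sizes)
# [8000000.0, 4000000.0, 2666666.66..., 2000000.0, 1600000.0]

# The signature of Zipf's law: rank * population is constant.
print({round(rank * pop) for rank, pop in enumerate(sizes, start=1)})
# {8000000}
```

Real city data only approximately follows this line, which is what makes the law's persistence across countries and centuries so striking.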
What’s curious about these laws is that although they are widely accepted, nobody knows why they are true. There is no deeper theoretical model from which these laws emerge. Instead, they come simply from the measured properties of cities and friendships.
Today, all that changes thanks to the work of Henry Lin and Abraham Loeb at the Harvard-Smithsonian Centre for Astrophysics in Cambridge. These guys have discovered a single unifying principle that explains the origin of these laws.
And here’s the thing: their approach is mathematically equivalent to the way that cosmologists describe the growth of galaxies in space. In other words, cities form out of variations in population density in exactly the same way that galaxies formed from variations in matter density in the early universe.
These guys begin by creating a mathematical model of the way human population density varies across a flat Euclidean plane. They say they can ignore the effects of the Earth’s curvature in their model because any variations in population density will be small compared to the radius of the Earth. That is exactly how cosmologists think about the way galaxies evolved. They first consider the matter density of the early universe. Next, they look at the mathematical structure of any variations in this density. And finally they use this mathematics to examine how this density can change over time as more matter is added or taken away from specific regions. Because of the many decades of work on cosmology, these mathematical tools are already well understood and easily applied to the similar problem of the population density on Earth. All that is needed is some data to calibrate the mathematical model.
In May 2013, the mathematician Yitang Zhang launched what has proven to be a banner year and a half for the study of prime numbers, those numbers that aren’t divisible by any smaller number except 1. Zhang, of the University of New Hampshire, showed for the first time that even though primes get increasingly rare as you go further out along the number line, you will never stop finding pairs of primes that are a bounded distance apart — within 70 million, he proved. Dozens of mathematicians then put their heads together to improve on Zhang’s 70 million bound, bringing it down to 246 — within striking range of the celebrated twin primes conjecture, which posits that there are infinitely many pairs of primes that differ by only 2.
Now, mathematicians have made the first substantial progress in 76 years on the reverse question: How far apart can consecutive primes be? The average spacing between primes approaches infinity as you travel up the number line, but in any finite list of numbers, the biggest prime gap could be much larger than the average. No one has been able to establish how large these gaps can be. This past August, two different groups of mathematicians released papers proving a long-standing conjecture by the mathematician Paul Erdős about how large prime gaps can get. The two teams have joined forces to strengthen their result on the spacing of primes still further, and expect to release a new paper later this month.
Many mathematicians believe that the true size of large prime gaps is probably considerably larger — more on the order of (log X)², an idea first put forth by the Swedish mathematician Harald Cramér in 1936. Gaps of size (log X)² are what would occur if the prime numbers behaved like a collection of random numbers, which in many respects they appear to do. But no one can come close to proving Cramér’s conjecture, Terence Tao said. “We just don’t understand prime numbers very well.” Erdős made a more modest conjecture: it should be possible, he said, to replace the 1/3 in Rankin’s formula (Rankin’s 1938 lower bound on how large prime gaps must get) by as large a number as you like, provided you go out far enough along the number line. That would mean that prime gaps can get much larger than in Rankin’s formula, though still smaller than in Cramér’s.
The two new proofs of Erdős’ conjecture are both based on a simple way to construct large prime gaps. A large prime gap is the same thing as a long list of non-prime, or “composite,” numbers between two prime numbers. Here’s one easy way to construct a list of, say, 100 composite numbers in a row: Start with the numbers 2, 3, 4, … , 101, and add to each of these the number 101 factorial (the product of the first 101 numbers, written 101!). The list then becomes 101! + 2, 101! + 3, 101! + 4, … , 101! + 101. Since 101! is divisible by all the numbers from 2 to 101, each of the numbers in the new list is composite: 101! + 2 is divisible by 2, 101! + 3 is divisible by 3, and so on. “All the proofs about large prime gaps use only slight variations on this high school construction,” said James Maynard of Oxford, who wrote the second of the two papers.
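The "high school construction" Maynard mentions can be verified directly: since 101! is divisible by every number from 2 to 101, adding those numbers to it produces 100 consecutive composites.

```python
import math

n = 100
base = math.factorial(n + 1)  # 101!, divisible by every k in 2..101

# 101! + 2, 101! + 3, ..., 101! + 101: one hundred composite
# numbers in a row, since 101! + k is divisible by k.
run = [base + k for k in range(2, n + 2)]

print(len(run))                                           # 100
print(all((base + k) % k == 0 for k in range(2, n + 2)))  # True
```

The catch, and the reason the new proofs are hard, is that 101! is astronomically far out on the number line; the research question is how early such long gaps must appear.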
Professor Michael Barnsley, a mathematician, has developed a new way to uncover simple patterns that might underlie apparently complex systems, such as clouds, cracks in materials or the movement of the stock market. The method, named fractal Fourier analysis, is based on a new branch of mathematics called fractal geometry. This method could help scientists to better understand the complicated signals that the body gives out, such as nerve impulses or brain waves. “It opens up a whole new way of analyzing signals,” said Barnsley, who presented his work at the New Directions in Fractal Geometry conference at ANU.
“Fractal Geometry is a new branch of mathematics that describes the world as it is, rather than acting as though it’s made of straight lines and spheres. There are very few straight lines and circles in nature. The shapes you find in nature are rough.” The new analysis method is closely related to conventional Fourier analysis, which is integral to modern image handling and audio signal processing.
“Fractal Fourier analysis provides a method to break complicated signals up into a set of well understood building blocks, in a similar way to how conventional Fourier analysis breaks signals up into a set of smooth sine waves,” Professor Barnsley said.
Professor Barnsley’s work draws on the work of Karl Weierstrass from the late 19th Century, who discovered a family of mathematical functions that were continuous, but could not be differentiated.
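Weierstrass's functions can be sketched numerically. The classic form (with parameters chosen here purely for illustration) is W(x) = Σ aⁿ cos(bⁿπx) with 0 < a < 1; by a theorem of Hardy, the sum is continuous everywhere but differentiable nowhere whenever ab ≥ 1.

```python
import math

def weierstrass(x, a=0.5, b=7, terms=50):
    """Partial sum of Weierstrass's function
    W(x) = sum a**n * cos(b**n * pi * x):
    continuous everywhere, differentiable nowhere
    (for ab >= 1, per Hardy's 1916 refinement)."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# At x = 0 every cosine equals 1, so the sum is geometric:
print(weierstrass(0))  # ~2.0 = 1 / (1 - 0.5)
```

Zooming in on a plot of this function shows the same roughness at every scale, which is precisely the fractal character Barnsley's analysis is built to handle.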
“There are terrific advances to be made by breaking loose from the thrall of continuity and differentiability,” Professor Barnsley said.
“The body is full of repeating branch structures – the breathing system, the blood supply system, the arrangement of skin cells, even cancer is a fractal.”
Last digits of nearby primes have ‘anti-sameness’ bias.
Two mathematicians have found a strange pattern in prime numbers — showing that the numbers are not distributed as randomly as theorists often assume. “Every single person we’ve told this ends up writing their own computer program to check it for themselves,” says Kannan Soundararajan, a mathematician at Stanford University in California, who reported the discovery with his colleague Robert Lemke Oliver in a paper submitted to the arXiv preprint server on 11 March. “It is really a surprise,” he says.
Prime numbers near to each other tend to avoid repeating their last digits, the mathematicians say: that is, a prime that ends in 1 is less likely to be followed by another ending in 1 than one might expect from a random sequence. “As soon as I saw the numbers, I could see it was true,” says mathematician James Maynard of the University of Oxford, UK. “It’s a really nice result.”
Although prime numbers are used in a number of applications, such as cryptography, this ‘anti-sameness’ bias has no practical use or even any wider implication for number theory, as far as Soundararajan and Lemke Oliver know. But, for mathematicians, it’s both strange and fascinating.
If the sequence were truly random, then a prime with 1 as its last digit should be followed by another prime ending in 1 one-quarter of the time. That’s because after the number 5, there are only four possibilities — 1, 3, 7 and 9 — for prime last digits. And these are, on average, equally represented among all primes, according to a theorem proved around the end of the nineteenth century, one of the results that underpin much of our understanding of the distribution of prime numbers. Another is the prime number theorem, which quantifies how much rarer the primes become as numbers get larger.
Instead, Lemke Oliver and Soundararajan saw that in the first billion primes, a 1 is followed by another 1 about 18% of the time, by a 3 or a 7 each about 30% of the time, and by a 9 about 22% of the time. They found similar results when they started with primes that ended in 3, 7 or 9: variation, but with repeated last digits the least common. The bias persists but slowly decreases as numbers get larger.
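As Soundararajan says, everyone writes their own program to check this; here is one such check on a much smaller sample than the paper's billion primes (the exact percentages differ at this scale, but the anti-sameness bias is already unmistakable).

```python
from collections import Counter

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i in range(2, limit + 1) if sieve[i]]

# Last-digit transitions between consecutive primes above 5.
primes = [p for p in primes_up_to(2_000_000) if p > 5]
pairs = Counter((a % 10, b % 10) for a, b in zip(primes, primes[1:]))

total_after_1 = sum(pairs[(1, d)] for d in (1, 3, 7, 9))
for d in (1, 3, 7, 9):
    print(f"1 -> {d}: {100 * pairs[(1, d)] / total_after_1:.1f}%")
# 1 -> 1 comes out well under the naive 25%
```

The same deficit shows up after primes ending in 3, 7 or 9: the repeated digit is always the least common successor.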
A pulsating star in the constellation Lyra generates a unique fractal pattern that hints at unknown stellar processes.
What struck John Learned about the blinking of KIC 5520878, a bluish-white star 16,000 light-years away, was how artificial it seemed. A “variable” star, KIC 5520878 brightens and dims in a six-hour cycle, seesawing between cool-and-clear and hot-and-opaque. Overlaying this rhythm is a second, subtler variation of unknown origin; this frequency interplays with the first to make some of the star’s pulses brighter than others. In the fluctuations, Learned had identified interesting and, he thought, possibly intelligent sequences, such as prime numbers (which have been floated as a conceivable basis of extraterrestrial communication). He then found hints that the star’s pulses were chaotic.
But when Learned mentioned his investigations to a colleague, William Ditto, last summer, Ditto was struck by the ratio of the two frequencies driving the star’s pulsations. “I said, ‘Wait a minute, that’s the golden mean.’” This irrational number, which begins 1.618, is found in certain spirals, golden rectangles and now the relative speeds of two mysterious stellar processes. It meant that the blinking of KIC 5520878 wasn’t an extraterrestrial signal, Ditto realized, but something else that had never before been found in nature: a mathematical curiosity caught halfway between order and chaos called a “strange nonchaotic attractor.”
Dynamical systems — such as pendulums, the weather and variable stars — tend to fall into circumscribed patterns of behavior that are a subset of all the ways they could possibly behave. A pendulum wants to swing from side to side, for example, and the weather stays within a general realm of possibility (it will never be zero degrees in summer). Plotting these patterns creates a shape called an “attractor.”
Mathematicians in the 1970s used attractors to model the behavior of chaotic systems like the weather, and they found that the future path of such a system through its attractor is extremely dependent on its exact starting point. This sensitivity to initial conditions, known as the butterfly effect, makes the behavior of chaotic systems unpredictable; you can’t tell the forecast very far in advance if the flap of a butterfly’s wings today can make the difference, two weeks from now, between sunshine and a hurricane. The infinitely detailed paths that most chaotic systems take through their attractors are called “fractals.” Zoom in on a fractal, and new variations keep appearing, just as new outcrops appear whenever you zoom in on the craggy coastline of Great Britain. Attractors with this fractal structure are called “strange attractors.”
Then in 1984, mathematicians led by Celso Grebogi, Edward Ott and James Yorke of the University of Maryland in College Park discovered an unexpected new category of objects — strange attractors shaped not by chaos but by irrationality. These shapes formed from the paths of a system driven at two frequencies with no common multiple — that is, frequencies whose ratio was an irrational number, like pi or the golden mean. Unlike other strange attractors, these special “nonchaotic” ones did not exhibit a butterfly effect; a small change to a system’s initial state had a proportionally small effect on its resulting fractal journey through its attractor, making its evolution relatively stable and predictable.
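The kind of system the Maryland group studied can be sketched with their often-cited toy model, sometimes called the GOPY map after Grebogi, Ott, Pelikan and Yorke (the parameter value below is a standard illustrative choice): a map driven quasiperiodically at the golden mean, so the two frequencies never share a common multiple.

```python
import math

PHI = (math.sqrt(5) - 1) / 2  # golden mean: the irrational drive ratio

def gopy_orbit(steps, sigma=1.5, x0=0.5, theta0=0.0):
    """Iterate the quasiperiodically forced GOPY map
        x     -> 2*sigma*tanh(x)*cos(2*pi*theta)
        theta -> theta + PHI  (mod 1),
    which for sigma > 1 settles onto a strange nonchaotic attractor."""
    x, theta = x0, theta0
    orbit = []
    for _ in range(steps):
        x = 2 * sigma * math.tanh(x) * math.cos(2 * math.pi * theta)
        theta = (theta + PHI) % 1
        orbit.append(x)
    return orbit

orbit = gopy_orbit(10000)
print(max(abs(x) for x in orbit) <= 2 * 1.5)  # True: bounded by 2*sigma
```

Plotting x against theta traces out a fractal curve, yet two nearby starting points stay close together: fractal geometry without the butterfly effect.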
“It was quite surprising to find these fractal structures in something that was totally nonchaotic,” said Grebogi, a Brazilian chaos theorist who is now a professor at the University of Aberdeen in Scotland.
Though no example could be positively identified, scientists speculated that strange nonchaotic attractors might be everywhere around and within us. It seemed possible that the climate, with its variable yet stable patterns, could be such a system. The human brain might be another.
The first laboratory demonstration of strange nonchaotic dynamics occurred in 1990, spearheaded by Ott and none other than William Ditto. Working at the Naval Surface Warfare Center in Silver Spring, Maryland, Ditto, Ott and several collaborators induced a magnetic field inside a metallic strip of tinsel called a “magnetoelastic ribbon” and varied the field’s strength at two different frequencies related by the golden ratio. The ribbon stiffened and relaxed in a strange nonchaotic pattern, bringing to life the mathematical discovery from six years earlier. “We were the first people to see this thing; we were pleased with that,” Ditto said. “Then I forgot about it for 20 years.”
The study of variable stars entered boom times in 2009 with the launch of the Kepler telescope, which looked for small aberrations in starlight as a sign of distant planets. The telescope gathered a trove of unprecedented data on the pulsations of variable stars throughout the galaxy. Other, ground-based surveys have added further riches.
The data revealed subtle variations in many of the stars’ pulsations that hinted at stellar processes beyond those described by Eddington. The pulses of starlight could be separated into two main frequencies: a faster one like the beat of a snare drum and a slower one like a gong, with the two rhythms played out of sync. And in more than 100 of these variable stars — including those, like KIC 5520878, belonging to a subclass called “RRc” — the ratio of the faster frequency to the slower one inexplicably fell between 1.58 and 1.64.
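That window is easy to probe numerically: it happens to bracket the golden ratio, (1 + √5)/2 ≈ 1.618, whose continued fraction is all 1s and which is, in a precise sense, the number worst approximated by fractions. The check below is standard number theory, not a claim from the stellar study itself.

```python
import math

# The golden ratio sits inside the 1.58-1.64 window reported for the
# RRc stars' frequency ratios.
golden = (1 + math.sqrt(5)) / 2
print(golden)                      # about 1.618
print(1.58 <= golden <= 1.64)      # True

# Continued-fraction expansion: the golden ratio's terms are all 1s,
# which is why it is the "most irrational" number, the one hardest
# to approximate by simple fractions.
def continued_fraction(x, n_terms=8):
    terms = []
    for _ in range(n_terms):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:
            break
        x = 1 / frac
    return terms

print(continued_fraction(golden))  # [1, 1, 1, 1, 1, 1, 1, 1]
```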
The world is full of complex structures, such as bridges, roads, wind turbines and power stations, that have to be carefully monitored to ensure their integrity.
The back of a tiger could have been a blank canvas. Instead, nature painted the big cat with parallel stripes, evenly spaced and perpendicular to the spine. Scientists don't know exactly how stripes develop, but since the 1950s, mathematicians have been modeling possible scenarios. In Cell Systems on December 23, Harvard researchers assemble a range of these models into a single equation to identify what variables control stripe formation in living things.
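The classic mathematical scenario behind such models, dating to Turing's 1952 work, is a reaction-diffusion instability: a slowly diffusing activator paired with a fast-diffusing inhibitor. The sketch below checks the textbook instability condition for an illustrative set of numbers; it is not the Harvard team's assembled equation.

```python
import math

# Linearized Turing-type system: the uniform state is stable without
# diffusion, but once the inhibitor diffuses much faster than the
# activator, a band of spatial wavelengths starts to grow, and the
# fastest-growing one sets the stripe spacing. The Jacobian entries
# below are illustrative values, not fitted to any real animal.
A, B, C, D = 1.0, -1.0, 3.0, -2.0   # linearized reaction terms
DU, DV = 1.0, 20.0                  # inhibitor diffuses much faster

def growth_rate(k):
    """Largest eigenvalue real part for the spatial mode with wavenumber k."""
    q = k * k
    trace = (A - DU * q) + (D - DV * q)
    det = (A - DU * q) * (D - DV * q) - B * C
    disc = trace * trace - 4 * det
    if disc >= 0:
        return (trace + math.sqrt(disc)) / 2
    return trace / 2               # complex pair: real part is trace/2

# Scan wavenumbers: stable at k = 0, unstable in a middle band.
ks = [i * 0.01 for i in range(1, 301)]
k_star = max(ks, key=growth_rate)
print(growth_rate(0.0) < 0)        # uniform state stable: True
print(growth_rate(k_star) > 0)     # a patterned mode grows: True
print(2 * math.pi / k_star)        # preferred stripe wavelength
```

The wavelength 2π/k_star is the model's prediction for the spacing between stripes.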
For days, rumors about the biggest advance in years in so-called complexity theory have been lighting up the Internet. That’s only fitting, as the breakthrough involves comparing networks just like researchers’ webs of online connections. László Babai, a mathematician and computer scientist at the University of Chicago in Illinois, has developed a mathematical recipe or "algorithm" that supposedly can take two networks—no matter how big and tangled—and tell whether they are, in fact, the same, in far fewer steps than the previous best algorithm. Computer scientists are abuzz, as the task had been something of a poster child for hard-to-solve problems.
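The task itself is easy to state: two networks are "the same" (isomorphic) if some relabeling of the vertices turns one into the other. The brute-force check below tries every relabeling, which takes n! steps for n vertices; Babai's algorithm is far subtler, and this sketch only illustrates the problem, not his method.

```python
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Graphs on vertices 0..n-1, given as sets of frozenset edges.
    Tries all n! vertex relabelings: fine for tiny graphs, hopeless
    for big tangled networks, which is why a faster algorithm matters."""
    if len(edges_a) != len(edges_b):
        return False
    for perm in permutations(range(n)):
        mapped = {frozenset(perm[u] for u in e) for e in edges_a}
        if mapped == edges_b:
            return True
    return False

# A 4-cycle and the same cycle with vertices shuffled: isomorphic.
c4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
shuffled = {frozenset(e) for e in [(0, 2), (2, 1), (1, 3), (3, 0)]}
print(are_isomorphic(c4, shuffled, 4))   # True

# A path on 4 vertices is not a relabeled cycle.
p4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
print(are_isomorphic(c4, p4, 4))         # False
```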
It took more than 80 years, but a problem posed by a mathematician who delighted in concocting tricky ones has finally been solved. UCLA mathematician Terence Tao has produced a solution to the Erdős discrepancy problem, named after the enigmatic Hungarian numbers wizard Paul Erdős. Tao’s proof, posted online September 18 at arXiv.org, shows that the difference (or discrepancy) between the quantities of two elements within certain sequences can grow without bound, even if someone does the best possible job of minimizing the discrepancy.
“Based on Tao’s stature, I would trust it straightaway,” even though the proof hasn’t yet been peer-reviewed, says Alexei Lisitsa, a computer scientist at the University of Liverpool in England. While the problem probably doesn’t have real-world applications, Tao says, “the act of solving a problem like this often gives a trick for solving more complicated things.”
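The discrepancy in question is concrete enough to compute for short sequences: given a sequence of +1s and −1s, sum the entries along every arithmetic progression d, 2d, 3d, … and take the largest absolute value seen. The small exhaustive search below, a toy cousin of the computer searches Lisitsa is known for, confirms a finite shadow of Tao's theorem: no sequence of length 12 keeps the discrepancy down to 1.

```python
from itertools import product

def discrepancy(x):
    """Discrepancy of a +/-1 sequence; x[0] plays the role of x_1.
    For each common difference d, track the running sums
    x_d + x_2d + ... + x_kd and record the largest |sum|."""
    n = len(x)
    worst = 0
    for d in range(1, n + 1):
        s = 0
        for multiple in range(d, n + 1, d):
            s += x[multiple - 1]
            worst = max(worst, abs(s))
    return worst

# Exhaustive check over all 2^12 sign sequences of length 12:
# every one of them already has discrepancy at least 2.
best = min(discrepancy(list(seq)) for seq in product([1, -1], repeat=12))
print(best)  # 2
```

Tao's result says that as sequences get longer, this minimum keeps climbing without bound.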
The Navier-Stokes equations of fluid flow are used to model ocean currents, weather patterns and other phenomena; the question of whether their solutions always stay smooth has been dubbed one of the seven most important problems in modern mathematics.
Now, in a paper posted online on February 3, Terence Tao of the University of California, Los Angeles, a winner of the Fields Medal, mathematics’ highest honor, offers a possible way to break the impasse. He has shown that in an alternative abstract universe closely related to the one described by the Navier-Stokes equations, it is possible for a body of fluid to form a sort of computer, which can build a self-replicating fluid robot that keeps transferring its energy to smaller and smaller copies of itself until the fluid “blows up.” As strange as it sounds, it may be possible, Tao proposes, to construct the same kind of self-replicator in the case of the true Navier-Stokes equations. If so, this fluid computer would settle a question that the Clay Mathematics Institute in 2000 dubbed one of the seven most important problems in modern mathematics, and for which it offered a million-dollar prize. Is a fluid governed by the Navier-Stokes equations guaranteed to flow smoothly for all time, the problem asks, or could it eventually hit a “blowup” in which something physically impossible happens, such as a non-zero amount of energy concentrated into a single point in space?
Tao’s proposal is “a tall order,” said Charles Fefferman of Princeton University. “But it’s a very interesting way of thinking about the long-term future of the problem.” The real ocean doesn’t spontaneously blow up, of course, and perhaps for that reason, most mathematicians have concentrated their energy on trying to prove that the solutions to the Navier-Stokes equations remain smooth and well-behaved forever, a property called global regularity. Purported proofs of global regularity surface every few months, but so far each one has had a fatal flaw. The most recent attempt to garner serious attention, by Mukhtarbay Otelbaev of the Eurasian National University in Astana, Kazakhstan, is still under review, but mathematicians have already uncovered significant problems with the proof, which Otelbaev is trying to solve.
“Everyone in the research community would agree that the tools we have at the moment are not sufficient to prove global regularity,” said Susan Friedlander, of the University of Southern California in Los Angeles. Tao originally set out with a fairly modest goal: simply to make rigorous the intuition that the existing tools are not good enough. Many would-be proofs of global regularity have tried to exploit a principle of conservation of energy, and Tao set out to show that this principle is not sufficient to establish global regularity. He constructed a counterexample, a sort of toy fluid-flow universe whose governing equations have many commonalities with the Navier-Stokes equations, including conservation of energy, but whose solutions can blow up.
A decade earlier, Nets Katz, now of the California Institute of Technology in Pasadena, and Natasa Pavlovic, now of the University of Texas, Austin, had established blowup for a toy version of a simpler fluid flow model by showing how to transfer a given amount of energy into smaller and smaller size scales until, after a finite amount of time, all the energy would be packed into a single point and the fluid would blow up. But Katz and Pavlovic’s process distributed the energy across many different size scales at the same time, as if the Cat in the Hat had lifted his hat to reveal not Little Cat A, but weak versions of many of the smaller cats. When Katz and Pavlovic tried to extend their process to a toy version of the Navier-Stokes equations, the fluid’s viscosity snuffed out this thinned-out energy and no blowup occurred.
The deepest and first ever HD deep-zoom animation into the Burning Ship fractal. This is one of the creepiest and yet stunningly gorgeous fractals ever. A slight change in the formula that generates the famous Mandelbrot set gives this fractal incredible Gothic tower shapes. What the Mandelbrot set does with curlicues and spirals, this fractal does with lines, boxes, and angles. This gargantuan 12,000 frame, 6 minute 40 second, high-definition video magnifies the starting image by a factor of 1.3e100, a hugely deep zoom. It may not be 3D, but nothing like this has ever been seen before!
The fidelity to the structure of a fractal is the most important consideration here. This is not art that happens to have a fractal in it; this is art dedicated to showing as accurately as possible how the fractals truly look. These images are high-precision because they zoom in far beyond the standard precision of most computers, to magnifications of 10^30, 10^50, or even 10^120! That's so big that if a subatomic particle were magnified that much, it would be larger than the universe! Doing this requires special software to handle that extra math, and that is a technical challenge every bit as demanding as the artistic challenge of finding great images.
These animations are also "high-precision" in a different sense -- the images and videos are very high-quality, carefully crafted and beautifully colorized fractal images rendered with precise fidelity to the original underlying fractal structures. Fractals have exquisite detail at all levels of magnification, infinitely great detail, in fact, so there is always something more to see, no matter how much has been explored before. The structures in the Mandelbrot set and other fractals at these extreme levels of magnification can be truly spectacular, unlike anything ever seen before.
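The "slight change in the formula" mentioned above is easy to show concretely: where the Mandelbrot iteration is z → z² + c, the Burning Ship takes the absolute values of the real and imaginary parts before squaring. A minimal escape-time sketch in ordinary floating point follows; the deep zooms described above need arbitrary-precision arithmetic well beyond this.

```python
def burning_ship_escape(c, max_iter=100, bailout=4.0):
    """Iterations until z escapes under the Burning Ship rule
    z -> (|Re z| + i|Im z|)^2 + c, or max_iter if it never does.
    The abs() on both parts is the only change from the Mandelbrot
    iteration z -> z^2 + c."""
    zr, zi = 0.0, 0.0
    for n in range(max_iter):
        zr, zi = abs(zr), abs(zi)
        zr, zi = zr * zr - zi * zi + c.real, 2 * zr * zi + c.imag
        if zr * zr + zi * zi > bailout:
            return n
    return max_iter

print(burning_ship_escape(0 + 0j))    # origin never escapes
print(burning_ship_escape(2 + 2j))    # escapes immediately
```

Coloring each pixel of the complex plane by its escape count is what produces the Gothic-tower imagery; the video's 1.3e100 magnification is why standard doubles (about 16 digits) are nowhere near enough.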
Chaos & Complexity are related; both are forms of “Coarse Damping”. While chaos is a form of coarse damping in “Time”, complexity on the other hand is a form of coarse damping in “Structure”!
Complexity arises from the ubiquitous “Collaborative Interplay of Entropy and Symmetry-Breaking” in all naturally damped-driven systems – Complexity is a form of coarse damping to uniformity! A form of coarse symmetry! “Complexity is Coarse Entropy!”
Progressive Complexity arises from the ubiquitous “Competitive Interplay of Entropy and Coarse Entropy” in all naturally damped-driven systems – Evolution is a form of coarse damping to complexity! “Evolution is the Progressive Upheaval and Rejuvenation of Coarse Entropy!”
There are two fundamental forces at work in nature and all evolutionary systems; one is the entropic force of spontaneous decay and disorder (otherwise known as “The Second Law of Thermodynamics”); the other is a universal, and somewhat mysterious, capacity for self-organization and spontaneous emergence!
As a grape slowly dries and shrivels, its surface creases, ultimately taking on the wrinkled form of a raisin. Similar patterns can be found on the surfaces of other dried materials, as well as in human fingerprints. While these patterns have long been observed in nature, and more recently in experiments, scientists have not been able to come up with a way to predict how such patterns arise in curved systems, such as microlenses.
Now a team of MIT mathematicians and engineers has developed a mathematical theory, confirmed through experiments, that predicts how wrinkles on curved surfaces take shape. From their calculations, they determined that one main parameter—curvature—rules the type of pattern that forms: The more curved a surface is, the more its surface patterns resemble a crystal-like lattice. The researchers say the theory, reported this week in the journal Nature Materials, may help to generally explain how fingerprints and wrinkles form.
"If you look at skin, there's a harder layer of tissue, and underneath is a softer layer, and you see these wrinkling patterns that make fingerprints," says Jörn Dunkel, an assistant professor of mathematics at MIT. "Could you, in principle, predict these patterns? It's a complicated system, but there seems to be something generic going on, because you see very similar patterns over a huge range of scales."
Often, when someone needs a transplant, there is a relative willing to donate an organ but unable to do so because they aren't a close enough tissue match, which would lead to the organ's rejection by its new host's immune system. Separately, there are some rare individuals who are simply willing to donate a kidney to an unknown recipient. So the medical community has started doing "donation chains," where a group of donor-recipient pairs is matched so that everyone who receives a kidney has a paired donor who gives one to someone else.
That, as it turns out, has created its own problem: given a large pool of donors and recipients, how do you pull an optimized set of donor chains out? The optimization belongs to a class of mathematical problems called NP-hard, making it extremely difficult to solve as the length of the chain grows. But now, some researchers have developed algorithms that can solve the typical challenges faced by hospitals with the processing power of a desktop computer.
The typical donor chain generated by hospitals starts with an unconnected donor, someone who's just willing to give up a kidney to anyone who needs it. From there, it goes through donor-recipient pairs, before ending at someone who doesn't have a paired donor. The longer the chain, the better the chances are of optimizing matches among donors and recipients so there's a minimal chance of immune rejection.
The authors of the new paper try two different computational approaches for optimizing the chains. The first involves what are termed "integer programming" techniques, in which quantities like the suitability of a match and the amount of time spent waiting for an organ are encoded as integer-valued variables. Using this approach, they simply start with an unmatched donor and recurse through potential chains, checking how optimal each one is.
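The recursion just described can be sketched in miniature. Everything below (the pool, the compatibility table, the scoring) is made-up illustrative data; a real matching system would express these as integer-programming constraints over thousands of pairs.

```python
def best_chain(start_donor, pairs, compatible, score, chain=None):
    """Starting from an altruistic donor, extend the chain through
    compatible donor-recipient pairs and return (total score, chain).
    pairs maps each recipient to their paired donor; compatible(d, r)
    says whether donor d can give to recipient r."""
    if chain is None:
        chain = []
    best = (sum(score(r) for r in chain), list(chain))
    for recipient, next_donor in pairs.items():
        if recipient in chain or not compatible(start_donor, recipient):
            continue
        chain.append(recipient)
        candidate = best_chain(next_donor, pairs, compatible, score, chain)
        chain.pop()
        best = max(best, candidate, key=lambda t: t[0])
    return best

# Hypothetical pool: altruistic donor "A"; recipients r1..r3, each
# with a paired donor d1..d3; compatibility as an explicit table.
pairs = {"r1": "d1", "r2": "d2", "r3": "d3"}
links = {("A", "r1"), ("d1", "r2"), ("d2", "r3"), ("A", "r3")}
compatible = lambda d, r: (d, r) in links
total, chain = best_chain("A", pairs, compatible, score=lambda r: 1)
print(chain)  # ['r1', 'r2', 'r3'], the longest feasible chain
```

The exhaustive recursion is exponential in the pool size, which is precisely the NP-hardness the article describes; the paper's contribution is making realistic hospital-sized instances tractable anyway.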
Separately, they note that the kidney donor problem is just a specialized case of what's called the prize-collecting traveling salesman problem. In this version, a salesman has to visit as many cities on a list as possible, but can skip some at the cost of a penalty. Using a modification of an algorithm designed for the traveling salesman, they implement one that is specialized for kidney donations.
MIT researchers have discovered a new mathematical relationship — between material thickness, temperature, and electrical resistance — that appears to hold in all superconductors. They describe their findings in the latest issue of Physical Review B.
“We were able to use this knowledge to make larger-area devices, which were not really possible to do previously, and the yield of the devices increased significantly,” says Yachin Ivry, a postdoc in MIT’s Research Laboratory of Electronics, and the first author on the paper. Ivry works in the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, a professor of electrical engineering and one of Ivry’s co-authors on the paper. Among other things, the group studies thin films of superconductors.
Superconductors are materials that, at temperatures near absolute zero, exhibit no electrical resistance; this means that it takes very little energy to induce an electrical current in them. A single photon will do the trick, which is why they’re useful as quantum photodetectors. And a computer chip built from superconducting circuits would, in principle, consume about one-hundredth as much energy as a conventional chip.
“Thin films are interesting scientifically because they allow you to get closer to what we call the superconducting-to-insulating transition,” Ivry says. “Superconductivity is a phenomenon that relies on the collective behavior of the electrons. So if you go to smaller and smaller dimensions, you get to the onset of the collective behavior.”