For more than 50 years, the mathematician Neil Sloane has curated the authoritative collection of interesting and important integer sequences.

Neil Sloane is considered by some to be one of the most influential mathematicians of our time. That’s not because of any particular theorem the 75-year-old Welsh native has proved, though over the course of a more than 40-year research career at Bell Labs (later AT&T Labs) he won numerous awards for papers in the fields of combinatorics, coding theory, optics and statistics. Rather, it’s because of the creation for which he’s most famous: the Online Encyclopedia of Integer Sequences (OEIS), often simply called “Sloane” by its users.

This giant repository, which celebrated its 50th anniversary last year, contains more than a quarter of a million different sequences of numbers that arise in different mathematical contexts, such as the prime numbers (2, 3, 5, 7, 11 … ) or the Fibonacci sequence (0, 1, 1, 2, 3, 5, 8, 13 … ). What’s the greatest number of cake slices that can be made with n cuts? Look up sequence A000125 in the OEIS. How many chess positions can be created in n moves? That’s sequence A048987. The number of ways to arrange n circles in a plane, with only two crossing at any given point, is A250001. That sequence just joined the collection a few months ago. So far, only its first four terms are known; if you can figure out the fifth, Sloane will want to hear from you.

A mathematician whose research generates a sequence of numbers can turn to the OEIS to discover other contexts in which the sequence arises and any papers that discuss it. The repository has spawned countless mathematical discoveries and has been cited more than 4,000 times.

“Many mathematical articles explicitly mention how they were inspired by OEIS, but for each one that does, there are at least ten who do not mention it, not necessarily out of malice, but because they take it for granted,” wrote Doron Zeilberger, a mathematician at Rutgers University.

A team of more than 80 mathematicians from 12 countries has begun charting the terrain of rich, new mathematical worlds, and sharing their discoveries on the Web. The mathematical universe is filled with both familiar and exotic items, many of which are being made available for the first time.

The "L-functions and Modular Forms Database," abbreviated LMFDB, is an intricate catalog of mathematical objects and the connections between them. Making those relationships visible has been made possible largely by the coordinated efforts of a group of researchers developing new algorithms and performing calculations on an extensive network of computers. The project provides a new tool for several branches of mathematics, physics, and computer science.

A "periodic table" of mathematical objects

Project member John Voight, from Dartmouth College, observed that "our project is akin to the first periodic table of the elements. We have found enough of the building blocks that we can see the overall structure and begin to glimpse the underlying relationships." Similar to the elements in the periodic table, the fundamental objects in mathematics fall into categories. Those categories have names like L-function, elliptic curve, and modular form. The L-functions play a special role, acting like 'DNA' which characterizes the other objects. More than 20 million objects have been catalogued, each with its L-function that serves as a link between related items. Just as the value of genome sequencing is greatly increased when many members of a population have been sequenced, the comprehensive material in the LMFDB will be an indispensable tool for new discoveries.

The LMFDB provides a sophisticated web interface that allows both experts and amateurs to easily navigate its contents. Each object has a "home page" and links to related objects, or "friends." Holly Swisher, a project member from Oregon State University, commented that the friends links are one of the most valuable aspects of the project: "The LMFDB is really the only place where these interconnections are given in such clear, explicit, and navigable terms. Before our project it was difficult to find more than a handful of examples, and now we have millions."

The plot above shows the first 511 terms of the Fibonacci sequence represented in binary, revealing an interesting pattern of hollow and filled triangles (Pegg 2003). A fractal-like series of white triangles appears on the bottom edge, due in part to the fact that the binary representation of F_n ends in zeros for many n (F_n is even exactly when n is divisible by 3). Many other similar properties exist.
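The triangular pattern is easy to reproduce in plain text. The sketch below (Python, an illustrative rendering rather than Pegg's original plot) prints right-aligned binary Fibonacci numbers, with '#' for a 1 bit and '.' for a 0 bit:

```python
def fib_binaries(count):
    # Binary strings of the first `count` Fibonacci numbers F_1, F_2, ...
    rows, a, b = [], 0, 1
    for _ in range(count):
        a, b = b, a + b
        rows.append(format(a, "b"))
    return rows

# Right-align the bits so the triangles on the low-order (rightmost)
# bits become visible.
rows = fib_binaries(32)
width = len(rows[-1])
for r in rows:
    print(r.rjust(width).replace("1", "#").replace("0", "."))
```

Even at 32 rows the white triangles along the right edge are already apparent.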

The Fibonacci numbers give the number of pairs of rabbits n months after a single pair begins breeding (with newly born bunnies assumed to begin breeding when they are two months old), as first described by Leonardo of Pisa (also known as Fibonacci) in his book Liber Abaci. Kepler also described the Fibonacci numbers (Kepler 1966; Wells 1986, pp. 61-62 and 65). Before Fibonacci wrote his work, the Fibonacci numbers had already been discussed by Indian scholars such as Gopāla (before 1135) and Hemachandra (c. 1150), who had long been interested in rhythmic patterns that are formed from one-beat and two-beat notes or syllables. The number of such rhythms having n beats altogether is F_(n+1), and hence these scholars both mentioned the numbers 1, 2, 3, 5, 8, 13, 21, ... explicitly (Knuth 1997, p. 80).

The numbers of Fibonacci numbers less than 10, 10^2, 10^3, ... are 6, 11, 16, 20, 25, 30, 35, 39, 44, ... (OEIS A072353). For n = 1, 2, ..., the numbers of decimal digits in F_(10^n) are 2, 21, 209, 2090, 20899, 208988, 2089877, 20898764, ... (OEIS A068070). As can be seen, the initial strings of digits settle down to produce the number 208987640249978733769..., which corresponds to the decimal digits of log_10 φ (OEIS A097348), where φ is the golden ratio. This follows from the fact that, for a power function c^n, the number of decimal digits of c^(10^n) is given by floor(10^n log_10 c) + 1.
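The digit-count claim can be verified directly with Python's arbitrary-precision integers. In this sketch (not part of the original entry), the predicted count comes from Binet's approximation F_n ≈ φ^n/√5:

```python
from math import log10, sqrt

def fib(n):
    # Iterative Fibonacci with exact big-integer arithmetic.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + sqrt(5)) / 2

for k in range(1, 5):          # F_10, F_100, F_1000, F_10000
    n = 10 ** k
    digits = len(str(fib(n)))
    # Binet: F_n ~ phi^n / sqrt(5), so the digit count is
    # floor(n*log10(phi) - log10(sqrt(5))) + 1.
    predicted = int(n * log10(phi) - log10(sqrt(5))) + 1
    print(n, digits, predicted)  # 2, 21, 209, 2090 in both columns
```

The exact and predicted counts agree, reproducing the 2, 21, 209, 2090, ... of A068070.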

3-D picture-language has far-reaching potential, including in physics. A trio of Harvard researchers has developed a new 3-D pictorial language for mathematics with potential as a tool across a wide spectrum, from pure math to physics.

The trio presented a 3D topological picture-language for quantum information, called quon. Their approach combines charged excitations carried by strings, with topological properties that arise from embedding the strings in the interior of a 3D manifold with boundary. A quon is a composite that acts as a particle. Specifically, a quon is a hemisphere containing a neutral pair of open strings with opposite charge. The mathematicians interpreted multiquons and their transformations in a natural way. They obtained a type of relation, a string–genus “joint relation,” involving both a string and the 3D manifold. They used the joint relation to obtain a topological interpretation of the C∗-Hopf algebra relations, which are currently widely used in tensor networks. The team obtained a 3D representation of the controlled NOT (CNOT) gate that is considerably simpler than earlier work, and a 3D topological protocol for teleportation.

In the past, topological quantum information was formulated by Kitaev (1) and Freedman et al. (2).

The mathematician Mark Kac divided all geniuses into two types: “ordinary” geniuses, who make you feel that you could have done what they did if you were, say, a hundred times smarter, and “magical geniuses,” the working of whose minds is, for all intents and purposes, incomprehensible. There is no doubt that Srinivasa Ramanujan was a magical genius, one of the greatest of all time. Just looking at any of his almost 4,000 original results can inspire a feeling of bewilderment and awe even in professional mathematicians: What kind of mind can dream up exotic gems like these?

Ramanujan indeed had preternatural insights into infinity: he was a consummate bridge builder between the finite and the infinite, finding ways to represent numbers in the form of infinite series, infinite sums and products, infinite integrals, and infinite continued fractions, an area in which, in the words of Hardy, his mastery was “beyond that of any mathematician in the world.” While most of Ramanujan’s results are far beyond the scope of this column, it turns out that we can get a flavor for some simple infinite forms using nothing more than middle-school algebra. Let’s embark on a journey to the infinite.


In 1890 Giuseppe Peano found a way to draw a curve that fills the entire plane: a single line completely covering a two-dimensional region. Its discovery shook the traditional structure of mathematics. Peano’s curve was the first space-filling curve but not the last: another was discovered by Hilbert and takes his name. Hilbert’s curve can be created iteratively, and modern computers can generate it extremely easily. It is also very easy to play with the curve by altering the order in which points are sorted, producing many other Hilbert-like curves.
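The iterative construction can be sketched with the standard index-to-coordinate conversion for the Hilbert curve (a Python sketch; `hilbert_point` is an illustrative name):

```python
def hilbert_point(order, d):
    """Map index d along the Hilbert curve of the given order
    to (x, y) coordinates on a 2^order by 2^order grid."""
    x = y = 0
    s, t = 1, d
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walking the indices in order traces the curve cell by cell.
pts = [hilbert_point(4, d) for d in range(16 * 16)]
print(pts[:4])  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Consecutive indices always land on adjacent grid cells, which is exactly the locality property that makes the curve useful for sorting points in the plane.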

Mathematics involves an intriguing interplay between finite and infinite collections and between discrete and continuous structures. Discussing summation methods, Alexander Kharazishvili looks at the fascinating paradoxes of convergent and divergent sequences.

Classical mathematical analysis involves an intriguing interplay between finite and infinite collections and between discrete and continuous structures. What makes the interplay intriguing is the emergence, from the most elementary considerations, of results that are paradoxical or, at least, utterly counterintuitive. What, for example, is the sum of 1+2+3+4+...?

The sum? Surely this sequence of numbers is simply getting bigger and bigger; and beyond this, it is not converging to anything. Not so: in the regularized sense used for divergent series, the sum is –1/12.

Niels Abel called divergent series the Devil’s invention.1 Having made outstanding contributions to the subject as the Devil’s plaything, he knew what he was talking about.
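To make the regularization concrete, here is a numerical sketch (Python, an illustration rather than a derivation from the essay). The Abel sum of the alternating series 1 - 2 + 3 - 4 + ... is 1/4, and the identity eta(s) = (1 - 2^(1-s)) zeta(s) relating the Dirichlet eta function to the zeta function then yields zeta(-1) = -1/12:

```python
# Abel summation: evaluate sum of (-1)^(n+1) * n * x^n for x just below 1.
# The closed form is x / (1 + x)^2, which tends to 1/4 as x -> 1.
def abel_sum(x, terms=20000):
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms))

eta_m1 = abel_sum(0.999)       # ~ 1/4, the Abel sum of 1 - 2 + 3 - 4 + ...
# At s = -1 the eta/zeta identity gives
#   zeta(-1) = eta(-1) / (1 - 2^2) = (1/4) / (-3) = -1/12.
zeta_m1 = eta_m1 / (1 - 2 ** 2)
print(round(zeta_m1, 4))       # -0.0833, i.e. -1/12
```

The divergent series never converges in the ordinary sense; the -1/12 is the value assigned by this consistent extension of summation.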

In March Ukrainian mathematician Maryna Viazovska, a postdoctoral researcher at the Berlin Mathematical School and at Humboldt University of Berlin, solved the sphere-packing problem in eight dimensions. The next week she and several co-authors extended her techniques to 24 dimensions. The solution of the problem in the seemingly arbitrary dimensions of eight and 24 highlights the fundamental weirdness of sphere packing, which has now been solved only in dimensions one, two, three, eight and 24. The breakthrough has given researchers hope that building on her techniques may be a viable way to answer questions about sphere packing in higher dimensions. “This is the beginning of understanding sphere packings rather than the end,” says Henry Cohn, a mathematician at Microsoft Research and one of Viazovska’s collaborators for the 24-dimensional case.

Although it is virtually impossible to visualize eight-dimensional space, mathematicians are comfortable working with spaces of eight, 24 or thousands of dimensions by analogy to lower-dimensional spaces. In three dimensions points are labeled using three coordinates—length, width and height, or x,y,z—so in eight dimensions points are labeled using eight coordinates.

In three dimensions a sphere is the set of points in three-dimensional space that are all equidistant from one center point. In eight dimensions it is the set of points in eight-dimensional space that are all equidistant from one center point. In any dimension the sphere-packing problem is the question of how equal-size spheres can be arranged with as little empty space between them as possible.
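How sphere volume and packing density behave across dimensions can be explored numerically. A small Python sketch (an aside, not from the article) uses the standard n-ball volume formula pi^(n/2)/Gamma(n/2 + 1) and the known optimal densities in the solved dimensions:

```python
from math import pi, gamma, sqrt, factorial

def ball_volume(n, r=1.0):
    """Volume of an n-dimensional ball of radius r."""
    return pi ** (n / 2) / gamma(n / 2 + 1) * r ** n

# Density of the naive Z^n packing: one unit-diameter ball per unit cube.
for n in (1, 2, 3, 8, 24):
    print(n, ball_volume(n, 0.5))

# Known optimal densities in the solved dimensions:
# dim 1: 1, dim 2: pi/sqrt(12), dim 3: pi/sqrt(18) (Kepler),
# dim 8: pi^4/384 (E8 lattice), dim 24: pi^12/12! (Leech lattice).
print(pi / sqrt(12), pi / sqrt(18), pi ** 4 / 384, pi ** 12 / factorial(12))
```

The numbers drop off sharply: the optimal density is about 0.74 in three dimensions, 0.25 in eight, and only about 0.002 in twenty-four, one numerical face of the "fundamental weirdness" the article describes.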

Whereas it would seem logical for mathematicians to solve successively higher dimensions in turn—after solving three, researchers could build on their work to solve four and then five—it is no accident that Viazovska leapfrogged over dimensions four through seven and solved sphere packing in eight dimensions or that 24 was the one to follow that. “Part of what I love about the sphere-packing problem is that every dimension has its own idiosyncrasies,” says Cohn, who has worked on sphere packing for many years. “Some dimensions just behave much better than others.”

Evolution of mathematics traced using unusually comprehensive genealogy database.

Most of the world’s mathematicians fall into just 24 scientific 'families', one of which dates back to the fifteenth century. The insight comes from an analysis of the Mathematics Genealogy Project (MGP), which aims to connect all mathematicians, living and dead, into family trees on the basis of teacher–pupil lineages, in particular who an individual's doctoral adviser was.

The analysis also uses the MGP — the most complete such project — to trace trends in the history of science, including the emergence of the United States as a scientific power in the 1920s and when different mathematical subfields rose to dominance1.

“You can see how mathematics has evolved in time,” says Floriana Gargiulo, who studies network dynamics at the University of Namur, Belgium, and who led the analysis.

The MGP is hosted by North Dakota State University in Fargo and co-sponsored by the American Mathematical Society. Since the early 1990s, its organizers have mined information from university departments and from individuals who make submissions regarding themselves or people they know about.

As of 25 August, the MGP contained 201,618 entries. As well as doctoral advisers (PhD advisers in recent times) and pupils of academic mathematicians, the organizers record details such as the university that awarded the doctorate.

Previously, researchers had used the MGP to reconstruct their own PhD-family trees, or to see how many ‘descendants’ a researcher has. Gargiulo's team wanted to make a comprehensive analysis of the entire database and divide it into distinct families, rather than just looking at how many descendants any one person has. After downloading the database, Gargiulo and her colleagues wrote machine-learning algorithms that cross-checked and complemented the MGP data with information from Wikipedia and from scientists' profiles in the Scopus bibliographic database.

This revealed 84 distinct family trees with two-thirds of the world’s mathematicians concentrated in just 24 of them. The high degree of clustering arises in part because the algorithms assigned each mathematician just one academic parent: when an individual had more than one adviser, they were assigned the one with the bigger network. But the phenomenon chimes with anecdotal reports from those who research their own mathematical ancestry, says MGP director Mitchel Keller, a mathematician at Washington and Lee University in Lexington, Virginia. “Most of them run into Euler, or Gauss or some other big name,” he says. Although the MGP is still somewhat US centric, the goal is for it to become as international as possible, Keller says.

Gargiulo’s team also looked at the dominance of mathematical subfields relative to each other. The researchers found that dominance shifted from mathematical physics to pure maths during the first half of the twentieth century, and later to statistics and other applied disciplines, such as computer science.

Idiosyncrasies in the field of mathematics could explain why it has the most comprehensive genealogy database of any discipline. “Mathematicians are a bit of a world apart,” says Roberta Sinatra, a network and data scientist at Central European University in Budapest who led a 2015 study that mapped the evolution of the subdisciplines of physics by mining data from papers on the Web of Science2.

Mathematicians tend to publish less than other researchers, and they establish their academic reputation not so much on how much they publish or on their number of citations, but on who they have collaborated with, including their mentors, she says. “I think it’s not a coincidence that they have this genealogy project."

At least one discipline is trying to catch up. Historian of astronomy Joseph Tenn of Sonoma State University in California plans by 2017 to launch the AstroGen project to record the PhD advisers and students of astronomers. “I started it," he says, "because so many of my colleagues in astronomy admired and enjoyed perusing the Mathematics Genealogy Project."

The mathematician-physicist Miranda Cheng is working on harnessing a mysterious connection between string theory, algebra and number theory.

A moonshine relates representations of a finite symmetry group to a function with special symmetries, ways that you can transform the function without affecting its output. Underlying this relationship, at least in the case of monstrous moonshine, is a string theory. String theory has two geometries. One is the “worldsheet” geometry. If you have a string — essentially a circle — moving in time, then you get a cylinder. That’s what we call the worldsheet geometry; it’s the geometry of the string itself. If you roll the cylinder and connect the two ends, you get a torus. The torus gives you the symmetry of the j-function. The other geometry in string theory is space-time itself, and its symmetry gives you the monster group.

To have a moonshine tells you that this theory has to have an algebraic structure since you have to be able to do algebra with its elements. If you look at a theory and you ask what kind of particles you have at a certain energy level, this question is infinite, because you can go to higher and higher energies, and then this question goes on and on. In monstrous moonshine, this is manifested in the fact that if you look at the j-function, there are infinitely many terms that basically capture the energy of the particles. But we know there’s an algebraic structure underlying it — there’s a mechanism for how the lower energy states can be related to higher energy states. So this infinite question has a structure; it’s not just random.

As you can imagine, having an algebraic structure helps you understand what the structure is that captures a theory: how, if you look at the lower energy states, they will tell you something about the higher energy states. And then it also gives you more tools to do computations. If I want to understand something at a high energy level, such as inside black holes, then I have more information about it: I can compute what I want to compute for high-energy states using the low-energy data I already have in hand. That’s the hope.

Umbral moonshine tells you that there should be a structure like this that we don’t understand yet. Understanding it more generally will force us to understand this algebraic structure. And that will lead to a much deeper understanding of the theory. Again, that’s the hope.

Gas bubbles in a glass of champagne, thin films rupturing into tiny liquid droplets, blood flowing through a pumping heart and crashing ocean waves—although seemingly unrelated, these phenomena have something in common: they can all be mathematically modeled as interface dynamics coupled to the Navier-Stokes equations, a set of equations that predict how fluids flow.

Today, these equations are used everywhere from special effects in movies to industrial research and the frontiers of engineering. However, many computational methods for solving these complex equations cannot accurately resolve the often-intricate fluid dynamics taking place next to moving boundaries and surfaces, or how these tiny structures influence the motion of the surfaces and the surrounding environment.

This is where a new mathematical framework developed by Robert Saye, Lawrence Berkeley National Laboratory's (Berkeley Lab's) 2014 Luis Alvarez Fellow in Computing Sciences, comes in. By reformulating the incompressible Navier-Stokes equations to make them more amenable to numerical computation, the new algorithms are able to capture the small-scale features near evolving interfaces with unprecedented detail, as well as the impact that these tiny structures have on dynamics far away from the interface. A paper describing his work was published in the June 10, 2016 issue of Science Advances.

"These algorithms can accurately resolve the intricate structures near the surfaces attached to the fluid motion. As a result, you can learn all sorts of interesting things about how the motion of the interface affects the global dynamics, which ultimately allows you to design better materials or optimize geometry for better efficiency," says Saye, who is also a member of the Mathematics Group at Berkeley Lab.

"For example, in a glass of champagne, the motion of the little gas bubbles depends crucially on boundary layers surrounding the bubbles. These boundary layers need to be accurately resolved, otherwise you won't see the slight zig-zag pattern that real bubbles take as they float to the top of the glass," he adds. "This particular phenomena is important in bubble aeration, a process used widely in industry to oxygenate liquids and transport materials in liquid chambers."

Thomas Baruchel’s website shows images derived from complex analysis. John D. Cook used the ImageQuilts software by Edward Tufte and Adam Schwartz to create a large variety of scientific and artistic images.

In the mathematical field of dynamical systems, an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system.[1] System values that get close enough to the attractor values remain close even if slightly disturbed.

An attractor is called strange if it has a fractal structure.[1] This is often the case when the dynamics on it are chaotic, but strange nonchaotic attractors also exist. If a strange attractor is chaotic, exhibiting sensitive dependence on initial conditions, then any two arbitrarily close alternative initial points on the attractor, after any of various numbers of iterations, will lead to points that are arbitrarily far apart (subject to the confines of the attractor), and after any of various other numbers of iterations will lead to points that are arbitrarily close together. Thus a dynamic system with a chaotic attractor is locally unstable yet globally stable: once some sequences have entered the attractor, nearby points diverge from one another but never depart from the attractor.[5]

The term strange attractor was coined by David Ruelle and Floris Takens to describe the attractor resulting from a series of bifurcations of a system describing fluid flow.[6] Strange attractors are often differentiable in a few directions, but some are like a Cantor dust, and therefore not differentiable. Strange attractors may also be found in presence of noise, where they may be shown to support invariant random probability measures of Sinai–Ruelle–Bowen type.[7]
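The "locally unstable yet globally stable" behavior is easy to observe numerically. Below is a toy Python sketch using the classic Lorenz system (the step size and iteration count are illustrative choices, not from the text): two trajectories starting a billionth apart separate widely, yet both remain on the bounded attractor.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the Lorenz equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)      # perturb by one part in a billion
for _ in range(20000):           # ~100 time units
    a, b = lorenz_step(a), lorenz_step(b)

sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(sep)                       # grows far beyond the initial 1e-9 gap
print(max(abs(c) for c in a + b))  # yet both states stay bounded
```

Sensitive dependence amplifies the tiny perturbation exponentially, while the attractor's global stability keeps both orbits confined to the same bounded region.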

To get a 3-D shape from an ordinary polynomial takes a little doing. The first step is to run the polynomial dynamically — that is, to iterate it by feeding each output back into the polynomial as the next input. One of two things will happen: either the values will grow infinitely in size, or they’ll settle into a stable, bounded pattern. To keep track of which starting values lead to which of those two outcomes, mathematicians construct the Julia set of a polynomial. The Julia set is the boundary between starting values that go off to infinity and values that remain bounded below a given value. This boundary line — which differs for every polynomial — can be plotted on the complex plane, where it assumes all manner of highly intricate, swirling, symmetric fractal designs.
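The iterate-and-watch procedure can be sketched in a few lines. This escape-time test (a Python illustration, not DeMarco and Lindsey's code) decides membership in the filled Julia set of the quadratic polynomial z → z² + c:

```python
def in_filled_julia(z0, c, max_iter=200, escape_radius=2.0):
    """Escape-time test for membership in the filled Julia set of z -> z^2 + c.
    For quadratic polynomials with |c| <= 2, an orbit that ever leaves
    the disk of radius 2 is guaranteed to escape to infinity."""
    z = z0
    for _ in range(max_iter):
        if abs(z) > escape_radius:
            return False
        z = z * z + c
    return True

c = -1  # the "basilica" Julia set
print(in_filled_julia(0, c), in_filled_julia(2, c))  # True False
```

Sampling a grid of starting values and shading those that return True produces exactly the filled Julia set described below.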

If you shade the region bounded by the Julia set, you get the filled Julia set. If you use scissors and cut out the filled Julia set, you get the first piece of the surface of the eventual 3-D shape. To get the second, DeMarco and Lindsey wrote an algorithm. That algorithm analyzes features of the original polynomial, like its degree (the highest number that appears as an exponent) and its coefficients, and outputs another fractal shape that DeMarco and Lindsey call the “planar cap.”

“The Julia set is the base, like the southern hemisphere, and the cap is like the top half,” DeMarco said. “If you glue them together you get a shape that’s polyhedral.” The algorithm was Thurston’s idea. When he suggested it to Lindsey in 2010, she wrote a rough version of the program. She and DeMarco improved on the algorithm in their work together and “proved it does what we think it does,” Lindsey said. That is, for every filled Julia set, the algorithm generates the correct complementary piece.

The filled Julia set and the planar cap are the raw material for constructing a 3-D shape, but by themselves they don’t give a sense of what the completed shape will look like. This creates a challenge. When presented with the six faces of a cube laid flat, one could intuitively know how to fold them to make the correct 3-D shape. But, with a less familiar two-dimensional surface, you’d be hard-pressed to anticipate the shape of the resulting 3-D object.

“There’s no general mathematical theory that tells you what the shape will be if you start with different types of polygons,” Lindsey said. Mathematicians have precise ways of defining what makes a shape a shape. One is to know its curvature. Any 3-D object without holes has a total curvature of exactly 4π; it’s a fixed value in the same way any circular object has exactly 360 degrees of angle. The shape — or geometry — of a 3-D object is completely determined by the way that fixed amount of curvature is distributed, combined with information about distances between points. In a sphere, the curvature is distributed evenly over the entire surface; in a cube, it’s concentrated in equal amounts at the eight evenly spaced vertices.
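The fixed-total-curvature fact can be checked via Descartes' theorem on angle defects: summing 2π minus the face angles meeting at each vertex of a convex polyhedron always gives 4π. A small Python check (the vertex angle lists are the obvious ones for a cube and a regular tetrahedron):

```python
from math import pi

def total_angle_defect(vertex_angle_lists):
    """Sum of angle defects (2*pi minus the face angles meeting at each
    vertex) over all vertices of a convex polyhedron.  By Descartes'
    theorem this equals 4*pi, matching the total curvature."""
    return sum(2 * pi - sum(angles) for angles in vertex_angle_lists)

cube = [[pi / 2] * 3] * 8    # 8 vertices, three right angles at each
tetra = [[pi / 3] * 3] * 4   # 4 vertices, three 60-degree angles at each
print(total_angle_defect(cube) / pi, total_angle_defect(tetra) / pi)  # both ~4.0
```

The cube concentrates its 4π of curvature in eight defects of π/2, the tetrahedron in four defects of π, illustrating how the same fixed total can be distributed differently.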

The mathematician Simon Plouffe experiments with Maple (a computer algebra system) and PSLQ (an integer relation algorithm); here are his findings and remarks.

University of Utah mathematicians propose a theoretical framework to understand how waves and other disturbances move through materials in conditions that vary in both space and time. The theory, called “field patterns,” was published today in Proceedings of the Royal Society A.

Field patterns are characteristic patterns of how disturbances react to changing conditions. Because field patterns exhibit characteristics of both propagating waves and localized particles, field pattern theory may answer some of the questions posed by quantum mechanics, in which objects can be treated as both particles and waves. First author Graeme Milton further posits that field patterns could describe the natures of the fundamental components of matter in the universe. “When you open the doors to a new area,” Milton says, “you don’t know where it will go.”

For an example of field patterns, think of a chessboard. The black squares represent one material and the white squares represent another material with different properties. The horizontal dimension (side to side) represents space, and the vertical dimension (forward and back) represents time. Instead of white and black squares, the chessboard is made of two materials of different refractive properties that bend light differently. As a disturbance, such as a pulse of laser light, moves forward in time, it spreads out over space, encountering boundaries between materials in space and then in time as the materials switch properties/colors with each successive row.

Field patterns can describe the propagation of the pulse along characteristic lines with a fixed slope in each square, which is governed by the refractive properties of each square. The characteristic lines branch at the checkerboard square boundaries.

What fractal dimension is, and how it is the core concept defining what fractals themselves are. It is possible to have fractals with an integer dimension. The example to have in mind is some *very* rough curve that just so happens to achieve roughness level exactly 2. A slightly rough curve might have dimension around 1.1; a quite rough one could be 1.5; but a very rough curve could get up to 2.0 (or more). A classic example of this is the boundary of the Mandelbrot set.

The proper definition of a fractal, at least as Mandelbrot wrote it, is a shape whose "Hausdorff dimension" is greater than its "topological dimension". Hausdorff dimension is similar to the box-counting one I showed in this video, in some sense counting using balls instead of boxes, and it coincides with box-counting dimension in many cases. But it's more general, at the cost of being a bit harder to describe. Topological dimension is something that's always an integer, wherein (loosely speaking) curve-ish things are 1-dimensional, surface-ish things are 2-dimensional, etc. For example, a Koch curve has topological dimension 1 and Hausdorff dimension 1.262. The surface of an ocean would have topological dimension 2 but might have fractal dimension around 2.1. And if a curve with topological dimension 1 has a Hausdorff dimension that *happens* to be exactly 2, it would be considered a fractal, even though its fractal dimension is an integer. Pick a random fractal from a hat, though, and it will almost certainly have a non-integer dimension.
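Box-counting dimension can be estimated directly. The sketch below (Python; the iteration level and the two scales are illustrative choices) builds the Koch curve and counts occupied grid boxes at two scales; the slope between them recovers roughly log 4 / log 3 ≈ 1.262:

```python
from math import log

def koch_points(level):
    """Endpoints of the level-th iterate of the Koch curve from 0 to 1."""
    pts = [0 + 0j, 1 + 0j]
    bump = complex(0.5, 3 ** 0.5 / 2)    # rotation by 60 degrees
    for _ in range(level):
        new = []
        for a, b in zip(pts, pts[1:]):
            d = (b - a) / 3
            # Replace each segment with the four-segment Koch motif.
            new += [a, a + d, a + d + d * bump, a + 2 * d]
        new.append(pts[-1])
        pts = new
    return pts

def box_count(pts, eps):
    # Number of eps-by-eps grid boxes touched by the point sample.
    return len({(int(p.real / eps), int(p.imag / eps)) for p in pts})

pts = koch_points(7)                 # 4^7 segments: fine enough for eps >= 3^-5
n2 = box_count(pts, 3.0 ** -2)
n5 = box_count(pts, 3.0 ** -5)
dim = log(n5 / n2) / log(3 ** 3)     # slope of log N against log(1/eps)
print(dim)                           # roughly log(4)/log(3) ~ 1.26
```

Using scales that are powers of 1/3 matches the curve's own self-similarity, which is why the constant factors cancel out of the slope.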

Srinivasa Ramanujan was born in Erode, Tamil Nadu, India, on 22nd December 1887. In his all too brief life of less than 32 years he made monumental contributions to Mathematics. While some of his contributions made it into journals - the proverbial tip of the iceberg - much more remains as entries in the several notebooks he kept. The published papers were brought out in 2000 as Ramanujan Papers by Prism Publishers, Bangalore. The unpublished material in the notebooks is also of great interest to Mathematicians, and it is available in book form, thanks to the efforts of the Tata Institute of Fundamental Research, Mumbai, and Narosa Publishers, Delhi.

To commemorate his 126th birthday on 22nd December 2013, the published papers of Srinivasa Ramanujan as well as the unpublished manuscripts are being made available to the world at large via the Internet. While the published papers are available in HTML (rendered using MathJax) and PDF, the manuscripts are available in DjVu format, which can be easily viewed on PCs via a DjVu plugin.

History has seldom seen a person who was so passionate, unorthodox, as well as gifted in a field, as was Srinivasa Ramanujan, the self-taught Indian genius, who made several startling discoveries in the realm of Mathematics. Despite abject poverty and lack of formal training and encouragement, Ramanujan’s love for numbers never waned. And thanks to a chance encounter and ensuing collaboration with G. H. Hardy of Cambridge, one of the most eminent mathematicians of the world, his hidden genius came to light.

Ramanujan went on to make thousands of discoveries with the apparent ease of a mystic in a trance experiencing and recording a series of religious epiphanies. The methods he followed are still shrouded in a veil of mystery, since he usually skipped the formal rigour (and hence sometimes made mistakes) and relied more on leaps of intuition to arrive at sudden, surprising results.

The several ‘Notebooks’ left behind by Ramanujan are strewn with cryptic formulae and equations, and are still being mined by mathematicians all over the world for beautiful gems and nuggets.

Periods and amplitudes were presented together for the first time in 1994 by Kreimer and David Broadhurst, a physicist at the Open University in England, with a paper following in 1995. The work led mathematicians to speculate that all amplitudes were periods of mixed Tate motives — a special kind of motive named after John Tate, emeritus professor at Harvard University, in which all the periods are multiple values of one of the most influential constructions in number theory, the Riemann zeta function. In the situation with an electron-positron pair going in and a muon-antimuon pair coming out, the main part of the amplitude comes out as six times the Riemann zeta function evaluated at three.

If all amplitudes were multiple zeta values, it would give physicists a well-defined class of numbers to work with. But in 2012 Brown and his collaborator Oliver Schnetz proved that’s not the case. While all the amplitudes physicists come across today may be periods of mixed Tate motives, “there are monsters lurking out there that throw a spanner into the works,” Brown said. Those monsters are “certainly periods, but they’re not the nice and simple periods people had hoped for.”

What physicists and mathematicians do know is that there seems to be a connection between the number of loops in a Feynman diagram and a notion in mathematics called “weight.”

Weight is a number related to the dimension of the space being integrated over: A period integral over a one-dimensional space can have a weight of 0, 1 or 2; a period integral over a two-dimensional space can have weight up to 4, and so on. Weight can also be used to sort periods into different types: All periods of weight 0 are conjectured to be algebraic numbers, which can be the solutions to polynomial equations (this has not been proved); the period of a pendulum always has a weight of 1; pi is a period of weight 2; and the weights of values of the Riemann zeta function are always twice the input (so the zeta function evaluated at 3 has a weight of 6).
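The bookkeeping in the paragraph above is simple enough to spell out. A minimal sketch, using a crude partial sum for the zeta function (the helper `zeta` is my own, not a standard library call): the weight of the zeta function evaluated at n is 2n, so the electron-positron amplitude 6·ζ(3) mentioned earlier has weight 6.

```python
def zeta(s, terms=200_000):
    # crude partial sum of the Riemann zeta function; fine for s >= 2,
    # where the tail beyond N terms is roughly 1 / ((s-1) * N^(s-1))
    return sum(k ** -s for k in range(1, terms + 1))

n = 3
amplitude = 6 * zeta(n)      # the amplitude quoted in the text
weight = 2 * n               # weight of zeta(n) is twice the input
print(round(amplitude, 4))   # ≈ 7.2123 (6 × 1.2020569...)
print(weight)                # 6
```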

This classification of periods by weights carries over to Feynman diagrams, where the number of loops in a diagram is somehow related to the weight of its amplitude. Diagrams with no loops have amplitudes of weight 0; the amplitudes of diagrams with one loop are all periods of mixed Tate motives and have, at most, a weight of 4. For graphs with additional loops, mathematicians suspect the relationship continues, even if they can’t see it yet.

“We go to higher loops and we see periods of a more general type,” Kreimer said. “There mathematicians get really interested because they don’t understand much about motives that are not mixed Tate motives.”

Mathematicians and physicists are currently going back and forth trying to establish the scope of the problem and craft solutions. Mathematicians suggest functions (and their integrals) to physicists that can be used to describe Feynman diagrams. Physicists produce configurations of particle collisions that outstrip the functions mathematicians have to offer. “It’s quite amazing to see how fast they’ve assimilated quite technical mathematical ideas,” Brown said. “We’ve run out of classical numbers and functions to give to physicists.”

So you're moving into your new apartment, and you're trying to bring your sofa. The problem is, the hallway turns and you have to fit your sofa around a corner. If it's a small sofa, that might not be a problem, but a really big sofa is sure to get stuck. If you're a mathematician, you ask yourself: What's the largest sofa you could possibly fit around the corner? It doesn't have to be a rectangular sofa either, it can be any shape.

This is the essence of the moving sofa problem. Here are the specifics: the whole problem is in two dimensions, the corner is a 90-degree angle, and the width of the corridor is 1. What is the largest two-dimensional area that can fit around the corner?

The largest area that can fit around a corner is called—I kid you not—the sofa constant. Nobody knows for sure how big it is, but we have some pretty big sofas that do work, so we know it has to be at least as big as them. We also have some sofas that don't work, so it has to be smaller than those. Altogether, we know the sofa constant has to be between 2.2195 and 2.8284.
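The classic warm-up sofa is a half-disc of radius 1, which has area π/2 ≈ 1.5708 and gets around the corner by pivoting about the inner corner. The sketch below is a Monte Carlo check of that claim under an assumed coordinate layout (inner corner at the origin, outer corner at (1, 1)); it is an illustration, not a proof, and its area is well below the best known lower bound of 2.2195.

```python
import math
import numpy as np

def in_hallway(x, y):
    # L-shaped hallway of width 1: a horizontal corridor (x <= 1, 0 <= y <= 1)
    # meeting a vertical corridor (0 <= x <= 1, y <= 1) at the corner.
    return ((x <= 1) & (y >= 0) & (y <= 1)) | ((x >= 0) & (x <= 1) & (y <= 1))

# Sample points from a half-disc of radius 1 (area pi/2), flat side down,
# pivoting about the inner corner at the origin.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20_000, 2))
pts = pts[(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1) & (pts[:, 1] >= 0)]

fits = True
for theta in np.linspace(0, math.pi / 2, 91):
    c, s = math.cos(-theta), math.sin(-theta)   # rotate clockwise by theta
    x = c * pts[:, 0] - s * pts[:, 1]
    y = s * pts[:, 0] + c * pts[:, 1]
    fits &= bool(in_hallway(x, y).all())

print(fits)                       # True: the half-disc fits at every angle
print(round(math.pi / 2, 4))      # its area, 1.5708, a crude lower bound
```

Better sofas (Hammersley's, then Gerver's) carve pieces out of a longer shape so it can both slide and rotate, which is how the lower bound climbs to 2.2195.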

Rice University scientists have invented a technology that could potentially identify hundreds of bacterial pathogens simply, quickly and at low cost using a single set of random DNA probes. Rice’s “universal microbial diagnostic,” or UMD, uses pieces of randomly assembled DNA and mathematical techniques that were originally pioneered for signal processors inside digital phones and cameras.

In a paper online this week in Science Advances, Rice’s research team used lab tests to verify that UMD could identify 11 known strains of bacteria using the same five random DNA probes. Because the probes are not specific to a particular disease, the technology provides a genomic-based bacterial identification system that does not require a one-to-one ratio of DNA probes to pathogenic species.
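The core idea—that a handful of random, non-specific probes can still pin down which of many pathogens is present—can be sketched in a toy model. Everything below is hypothetical illustration, not the actual UMD chemistry or data: random "genomes" stand in for pathogens, random probe vectors stand in for the DNA probes, and each pathogen gets a five-number fingerprint; a sample is identified by the nearest fingerprint.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pathogens, genome_len, n_probes = 11, 300, 5

# Hypothetical genomes as random binary strings; a probe's "binding
# response" to a genome is modeled here as a simple dot product.
genomes = rng.integers(0, 2, size=(n_pathogens, genome_len))
probes = rng.normal(size=(n_probes, genome_len))

signatures = probes @ genomes.T      # 5-number fingerprint per pathogen

sample = genomes[7]                  # unknown sample is really pathogen 7
measured = probes @ sample
identified = int(np.argmin(np.linalg.norm(signatures.T - measured, axis=1)))
print(identified)                    # 7
```

The point of the toy model is that 5 probes suffice for 11 pathogens because distinct genomes almost surely produce distinct fingerprints—the same reason compressed sensing lets signal processors recover a sparse signal from few random measurements.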

A century ago, Srinivasa Ramanujan and G. H. Hardy started a famous correspondence about mathematics so amazing that Hardy described it as “scarcely possible to believe.” On May 1, 1913, Ramanujan was given a permanent position at the University of Madras. Five years and a day later, he became a Fellow of the Royal Society, then the most prestigious scientific group in the world. In 1919 Ramanujan was deathly ill while on the long voyage home to India, from February 27 to March 13 on the steamship Nagoya. All he had was a pen and pad of paper (no Mathematica at that time), and he wanted to write down his equations before he died. He claimed to have solutions for a particular function, but only had time to write down a few before moving on to other areas of mathematics. He wrote one incomplete equation along with 14 others, only 3 of them solved.

Within months, he passed away, probably from hepatic amoebiasis. His final notebook was sent by the University of Madras to G. H. Hardy, who in turn gave it to mathematician G. N. Watson. When Watson died in 1965, the college chancellor found the notebook in his office while looking through papers scheduled to be incinerated. George Andrews rediscovered the notebook in 1976, and it was finally published in 1987. Bruce Berndt and Andrews wrote about Ramanujan’s Lost Notebook in a series of books (Part 1, Part 2, and Part 3). Berndt said, “The discovery of this ‘Lost Notebook’ caused roughly as much stir in the mathematical world as the discovery of Beethoven’s tenth symphony would cause in the musical world.”

In his book analyzing Ramanujan’s results, Berndt investigates whether solutions exist for many of Ramanujan’s equations. We now know that, in some cases, a solution exists that is as elegant as the other values Ramanujan himself found.

Researchers have uncovered deep connections among different types of random objects, illuminating hidden geometric structures.

Standard geometric objects can be described by simple rules — every straight line, for example, is just y = ax + b — and they stand in neat relation to each other: Connect two points to make a line, connect four line segments to make a square, connect six squares to make a cube.

These are not the kinds of objects that concern Scott Sheffield. Sheffield, a professor of mathematics at the Massachusetts Institute of Technology, studies shapes that are constructed by random processes. No two of them are ever exactly alike.

Consider the most familiar random shape, the random walk, which shows up everywhere from the movement of financial asset prices to the path of particles in quantum physics. These walks are described as random because no knowledge of the path up to a given point can allow you to predict where it will go next.
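A one-dimensional version of such a walk is easy to sketch: at every step the walker moves up or down with equal probability, so the path so far tells you nothing about the next step.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def random_walk(steps):
    # Simple symmetric random walk on the integers.
    position, path = 0, [0]
    for _ in range(steps):
        position += random.choice((-1, 1))
        path.append(position)
    return path

path = random_walk(1000)
print(len(path))   # 1001: the starting point plus one entry per step
```

The shapes Sheffield studies are two-dimensional relatives of this object—random curves and random surfaces rather than random sequences—but the same unpredictability is what makes no two of them alike.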
