Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald
Scoop.it!

The Simple Geometry That Predicts Molecular Mosaics

By treating molecules as geometric tessellations, scientists devised a new way to forecast how 2D materials might self-assemble.

 

We live on and among the by-products of fragmentation, which can be found everywhere, from nanoparticles to rock falls to glaciers to continents. Understanding and taming this fragmentation is central to assessing natural hazards and extracting resources, and even for landing probes safely on other planetary bodies. In this study, scientists draw inspiration from an unlikely and ancient source: Plato, who proposed that the element Earth is made of cubes because they can be tightly packed together. The scientists were able to demonstrate that this idea is essentially correct: appropriately averaged properties of most natural 3D fragments reproduce the topological cube. They used mechanical and geometric models to explain the ubiquity of Plato’s cube in fragmentation and to uniquely map distinct fragment patterns to their formative stress conditions.
 
While Plato envisioned Earth’s building blocks as cubes, a shape rarely found in nature, the solar system is littered with distorted polyhedra—shards of rock and ice produced by ubiquitous fragmentation. The scientists applied the theory of convex mosaics to show that the average geometry of natural two-dimensional (2D) fragments, from mud cracks to Earth’s tectonic plates, has two attractors: “Platonic” quadrangles and “Voronoi” hexagons. In three dimensions (3D), the Platonic attractor is dominant: Remarkably, the average shape of natural rock fragments is cuboid. When viewed through the lens of convex mosaics, natural fragments are indeed geometric shadows of Plato’s forms. Simulations show that generic binary breakup drives all mosaics toward the Platonic attractor, explaining the ubiquity of cuboid averages. Deviations from binary fracture produce more exotic patterns that are genetically linked to the formative stress field. This study computes the universal pattern generator establishing a link for 2D and 3D fragmentation.
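The pull of the "Platonic" attractor can be illustrated with a toy binary-breakup simulation (a minimal sketch, not the authors' mechanical model; the cutting rule and parameters here are invented for illustration): repeatedly cut convex 2D fragments with random lines and track the average number of corners.

```python
import math
import random

def clip(poly, a, b, c):
    """Keep the part of convex polygon `poly` satisfying a*x + b*y + c >= 0
    (Sutherland-Hodgman clipping against one half-plane)."""
    out = []
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        d1, d2 = a * x1 + b * y1 + c, a * x2 + b * y2 + c
        if d1 >= 0:
            out.append((x1, y1))
        if (d1 > 0) != (d2 > 0):          # edge crosses the cutting line
            t = d1 / (d1 - d2)
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def split(poly):
    """Binary breakup: cut a fragment with a random line through its centroid."""
    cx = sum(p[0] for p in poly) / len(poly)
    cy = sum(p[1] for p in poly) / len(poly)
    theta = random.uniform(0, math.pi)
    a, b = math.sin(theta), -math.cos(theta)
    c = -(a * cx + b * cy)
    return clip(poly, a, b, c), clip(poly, -a, -b, -c)

random.seed(0)
fragments = [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]]
for _ in range(10):                       # ten rounds of binary breakup
    fragments = [half for f in fragments for half in split(f)]
mean_vertices = sum(len(f) for f in fragments) / len(fragments)
print(round(mean_vertices, 2))            # 4.0 -- the "Platonic" quadrangle
```

Each cut splits an n-gon into two pieces with n + 4 corners in total, so the average corner count is driven toward (and here held at) four, the quadrangle attractor described above.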

Mathematics of Imaging: Seeing by Solving


Podcast summary is here

 

Scientists and engineers have created incredible tools that allow us to see in new ways—electron microscopes, magnetic resonance imaging (MRI), computed tomography (CT) scanners, and large telescopic arrays. What do all these tools and machines have in common? Mathematics and statistics—including the Fourier transform (developed in the early 19th century), the modern theory of compressed sensing (developed early in the 21st century), statistical theory, and data analysis—are needed to turn these data into images that we can see and use.

 

The impact of discrete and continuous transforms, such as the Fourier transform, has been tremendous. One reason for this is that in some applications (such as image processing), using the transformed versions of functions makes the math work a lot easier, better, and faster. Apart from Fourier transforms, many other transforms can be generated that give similar advantages.
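One concrete sense in which transforms "make the math easier": under the Fourier transform, convolution becomes simple pointwise multiplication. A short NumPy sketch (array sizes are arbitrary):

```python
import numpy as np

# Convolution theorem: multiplying FFTs is equivalent to circular
# convolution in the original domain -- the trick behind fast filtering.
rng = np.random.default_rng(1)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

direct = np.array([sum(f[m] * g[(n - m) % 256] for m in range(256))
                   for n in range(256)])      # O(N^2) circular convolution
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real   # O(N log N)

print(np.allclose(direct, via_fft))           # True
```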

Let’s take a closer look at how these mathematical tools give us new views of our universe, from small to large, from molecular systems to brains and bones, and on to black holes and gravitational waves.

 

Medical Imaging

X-rays are high-energy photon beams that pass through most substances but not through high-density solids such as bone. They were discovered accidentally at the end of the 19th century by Wilhelm Conrad Röntgen, who was experimenting with cathode rays. The medical applications were recognized immediately, and X-ray analysis is a building block of some modern medical imaging systems.
 
In CT, the information from X-ray scans at multiple angles is mathematically integrated to create an image. A beam is passed that gives density measurements of two-dimensional (2D) slices of a three-dimensional (3D) object, like a brain. To get a full picture of the brain slice, the beam is rotated in the same plane as the slice, and at each angle, a density measurement is obtained. The collection of all these individual density measurements can be represented by a contour, as shown in the graphic. The problem is how to reconstruct the slices, and then the actual 3D brain image, from these measurements. This reconstruction problem is an example of an inverse problem: from indirectly measured data, finding the object that produced the data.
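The flavor of such an inverse problem can be sketched in a few lines (a toy stand-in, not actual CT: here the known forward operator is a circular blur rather than X-ray projections, and all sizes are illustrative):

```python
import numpy as np

# Inverse problem in miniature: we observe data produced by a known forward
# operator and must recover the object that produced it.
n = 128
x = np.zeros(n); x[30:40] = 1.0; x[70] = 2.0     # unknown "object"
kernel = np.zeros(n); kernel[:5] = 1 / 5          # known forward operator
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)).real   # measured data

# Naive inversion: divide in the Fourier domain. With noisy data this blows
# up wherever the kernel's spectrum is small, which is why practical methods
# (filtered back-projection, regularized maximum likelihood) add filtering
# or statistical regularization.
x_hat = np.fft.ifft(np.fft.fft(y) / np.fft.fft(kernel)).real
print(np.allclose(x, x_hat))   # True for noiseless data
```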

 

Gravitational Waves, Black Holes, and Radio Astronomy

In April 2019, Katie Bouman and the Event Horizon team gave the world its first glimpse of a black hole, an image painstakingly reconstructed from hundreds of terabytes of data collected from several interconnected special telescopes around the world. The joining of these specialized telescopes, linked through precise timing of atomic clocks, is a system known as Very Long Baseline Interferometry. This linkage creates the “Earth-size telescope” required to achieve the desired resolution for an object as far away as a black hole. Each pair of telescopes captures a single measurement at a single point in time from the light source. This measurement, or spatial frequency, is the Fourier transform of the incoming radiation relative to the projected baseline (line of sight) between the two telescopes. To augment the sparsity of the data that can be collected in this way, scientists use the rotation of Earth. Measuring how these projected baselines change in time produces short elliptical paths of data, as the graphic shows. The data obtained is still sparse (from very few telescopes) and noisy (from the atmosphere and other sources of corruption). We now arrive at our mathematical problem: how to reconstruct the image that generated this sparse and exceedingly noisy data. This large-scale inverse problem was solved using a classical technique involving the Fourier transform for image reconstruction and regularized maximum likelihood, a modern method using tools from probability and statistics.


National Academies: Illustrating the Impact of Mathematics on Other Science Disciplines


Today’s mathematical research, both pure and applied, is paving the way for major scientific, engineering, and technological breakthroughs. Cutting-edge work in the mathematical sciences is responsible for advances in artificial intelligence, manufacturing, precision medicine, cybersecurity, and more. Find out how the mathematical sciences are helping to improve our everyday lives by checking out the stories and infographics below.

 

This series of illustrations shows how advances in the mathematical sciences anticipate and enable later technologies that profoundly impact our daily lives, including life-saving advances in medical imaging and treatment, predictive traffic-avoiding routing, communications advances enabling GPS and high-speed cellular communications, safer online commerce with cryptographic security protocols, development of novel materials based on advanced simulations, improved forecasting of extreme weather events, and much more.

The leaps forward in technology have often built upon theoretical work whose impact would not have been predicted at the time of their creation. The same is true today: researchers and practitioners in the mathematical sciences continue to innovate, and we can only begin to imagine the future inventions their work will enable. Mathematical and statistical advances are playing a key role in emerging areas such as cyber warfare, quantum computing, artificial intelligence and machine learning for automation, genetic sequencing and related advances in vaccine creation to fight novel and existing viruses, and supply chain management.

The increasing pace of technological and social development will require many more advances in the mathematical sciences because they are a foundation for advances across science, medicine, business, finance, and even entertainment. New discoveries in mathematics happening today will reverberate for decades and centuries to come.


Solving the differential equation for brain dynamics gives rise to flexible machine learning models


Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks like driving and flying. The flexibility of these “liquid” neural nets meant better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

 

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, similar to many physical phenomena, becomes harder to solve with size, meaning computing lots of small steps to arrive at a solution.

 

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster and more scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they’re compact and adaptable even after training — while many traditional models are fixed.
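The speed advantage of a closed-form solution over step-by-step numerical integration can be seen in miniature (an illustrative toy ODE, not the paper's CfC equations):

```python
import math

# An ODE solver reaches x(T) by many small steps, while a closed-form
# solution evaluates it in one shot. Toy example: dx/dt = -x, x(0) = 1.
T, steps = 5.0, 100000
dt = T / steps
x = 1.0
for _ in range(steps):          # numerical route: many small steps
    x += dt * (-x)              # explicit Euler update
closed_form = math.exp(-T)      # closed-form route: one evaluation
print(abs(x - closed_form) < 1e-3)   # True -- same answer, far less work
```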

 

The models, dubbed a “closed-form continuous-time” (CfC) neural network, outperformed state-of-the-art counterparts on a slew of tasks, with considerably higher speedups and performance in recognizing human activities from motion sensors, modeling physical dynamics of a simulated walker robot, and event-based sequential image processing. On a medical prediction task, for example, the new models were 220 times faster on a sampling of 8,000 patients. 

 

The new paper on the work is published in Nature Machine Intelligence.


DeepMind AI invents faster algorithms to solve tough maths puzzles


Researchers at DeepMind in London have shown that artificial intelligence (AI) can find shortcuts in a fundamental type of mathematical calculation, by turning the problem into a game and then leveraging the machine-learning techniques that another of the company’s AIs used to beat human players in games such as Go and chess.

 

The AI discovered algorithms that break decades-old records for computational efficiency, and the team’s findings, published on 5 October 2022 in Nature, could open up new paths to faster computing in some fields. “It is very impressive,” says Martina Seidl, a computer scientist at Johannes Kepler University in Linz, Austria. “This work demonstrates the potential of using machine learning for solving hard mathematical problems.”

 

Algorithms chasing algorithms

Advances in machine learning have allowed researchers to develop AIs that generate language, predict the shapes of proteins, or detect hackers. Increasingly, scientists are turning the technology back on itself, using machine learning to improve its own underlying algorithms.

 

The AI that DeepMind developed — called AlphaTensor — was designed to perform a type of calculation called matrix multiplication. This involves multiplying numbers arranged in grids — or matrices — that might represent sets of pixels in images, air conditions in a weather model or the internal workings of an artificial neural network. To multiply two matrices together, the mathematician must multiply individual numbers and add them in specific ways to produce a new matrix. In 1969, mathematician Volker Strassen found a way to multiply a pair of 2 × 2 matrices using only seven multiplications, rather than eight, prompting other researchers to search for more such tricks.
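Strassen's seven-multiplication trick is short enough to write out in full; the sketch below checks it against the ordinary product (the function name is ours):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.allclose(strassen_2x2(A, B), A @ B))   # True
```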

 

DeepMind’s approach uses a form of machine learning called reinforcement learning, in which an AI ‘agent’ (often a neural network) learns to interact with its environment to achieve a multistep goal, such as winning a board game. If it does well, the agent is reinforced — its internal parameters are updated to make future success more likely.

 

AlphaTensor also incorporates a game-playing method called tree search, in which the AI explores the outcomes of branching possibilities while planning its next action. In choosing which paths to prioritize during tree search, it asks a neural network to predict the most promising actions at each step. While the agent is still learning, it uses the outcomes of its games as feedback to hone the neural network, which further improves the tree search, providing more successes to learn from.
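The reinforcement part can be sketched with a deliberately tiny example (a two-armed bandit, vastly simpler than AlphaTensor's setup; all numbers are invented): the agent's internal value estimates are nudged toward whichever actions succeed.

```python
import random

random.seed(3)
values = [0.0, 0.0]            # the agent's internal estimates
payoff = [0.2, 0.8]            # hidden success probability of each action
for _ in range(2000):
    explore = random.random() < 0.1
    a = random.randrange(2) if explore else (0 if values[0] >= values[1] else 1)
    reward = 1.0 if random.random() < payoff[a] else 0.0
    values[a] += 0.05 * (reward - values[a])   # reinforce toward success
print(values[1] > values[0])   # True: the agent learns to prefer action 1
```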

 

Each game is a one-player puzzle that starts with a 3D tensor — a grid of numbers — filled in correctly. AlphaTensor aims to get all the numbers to zero in the fewest steps, selecting from a collection of allowable moves. Each move represents a calculation that, when inverted, combines entries from the first two matrices to create an entry in the output matrix. The game is difficult, because at each step the agent might need to select from trillions of moves. “Formulating the space of algorithmic discovery is very intricate,” co-author Hussein Fawzi, a computer scientist at DeepMind, said at a press briefing, but “even harder is, how can we navigate in this space”.
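The tensor game can be made concrete for the 2 × 2 case (a sketch of the setup, not AlphaTensor itself): each move subtracts a rank-1 tensor, and the trivial eight-multiplication algorithm corresponds to eight moves that zero the board, while Strassen's algorithm does it in seven.

```python
import numpy as np

# The matrix multiplication tensor for 2x2 matrices:
# T[i*2+j, j*2+k, i*2+k] = 1 for all i, j, k in {0, 1}.
T = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[i * 2 + j, j * 2 + k, i * 2 + k] = 1

# Each move subtracts a rank-1 tensor u (x) v (x) w. The trivial algorithm
# is 8 such moves, one per scalar multiplication.
moves = 0
for i in range(2):
    for j in range(2):
        for k in range(2):
            u = np.zeros(4); u[i * 2 + j] = 1
            v = np.zeros(4); v[j * 2 + k] = 1
            w = np.zeros(4); w[i * 2 + k] = 1
            T -= np.einsum('a,b,c->abc', u, v, w)
            moves += 1
print(moves, np.count_nonzero(T))   # 8 0 -- eight moves reach the zero tensor
```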


Quantum theory based on real numbers instead of complex numbers can be experimentally falsified


Although complex numbers are essential in mathematics, they are not needed to describe physical experiments, as those are expressed in terms of probabilities, hence real numbers. Physics, however, aims to explain, rather than describe, experiments through theories. Although most theories of physics are based on real numbers, quantum theory was the first to be formulated in terms of operators acting on complex Hilbert spaces. This has puzzled countless physicists, including the fathers of the theory, for whom a real version of quantum theory, in terms of real operators, seemed much more natural. In fact, previous studies have shown that such a ‘real quantum theory’ can reproduce the outcomes of any multipartite experiment, as long as the parts share arbitrary real quantum states.
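The single-system equivalence mentioned above is easy to verify numerically (a sketch of the standard real-embedding trick, with arbitrary random states): Born-rule probabilities of a complex state are reproduced by real vectors of twice the dimension, which is why a network experiment, not a single-system one, is needed to tell the two theories apart.

```python
import numpy as np

rng = np.random.default_rng(2)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex state
phi = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # measured vector
psi /= np.linalg.norm(psi)
phi /= np.linalg.norm(phi)

def realify(v):
    """Embed a complex vector a + bi as the real vector [a; b]."""
    return np.concatenate([v.real, v.imag])

# Born rule in the complex theory ...
p_complex = abs(np.vdot(phi, psi)) ** 2
# ... matched by two real inner products in the doubled real space.
p_real = (realify(phi) @ realify(psi)) ** 2 \
       + (realify(1j * phi) @ realify(psi)) ** 2
print(np.allclose(p_complex, p_real))   # True
```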

 

A team of theoretical physicists has now investigated whether complex numbers are actually needed in the quantum formalism. They showed that they are, by proving that real and complex Hilbert-space formulations of quantum theory make different predictions in network scenarios comprising independent states and measurements. This allows them to devise a Bell-like experiment, the successful realization of which would disprove real quantum theory, in the same way as standard Bell experiments disproved local physics.


Without any prior knowledge AI program uncovered novel relevant variables from performing experiments


A new AI program observed physical phenomena and uncovered relevant variables—a necessary precursor to any physics theory.

 

All physical laws are described as mathematical relationships between state variables. These variables give a complete and non-redundant description of the relevant system. However, despite the prevalence of computing power and artificial intelligence, the process of identifying the hidden state variables themselves has resisted automation. Most data-driven methods for modeling physical phenomena still rely on the assumption that the relevant state variables are already known. A longstanding question is whether it is possible to identify state variables from only high-dimensional observational data.

 

Scientists have now created a principle for determining how many state variables an observed system is likely to have, and what these variables might be. They demonstrated the effectiveness of this approach using video recordings of a variety of physical dynamical systems, ranging from elastic double pendulums to fire flames. Without any prior knowledge of the underlying physics, the algorithm discovers the intrinsic dimension of the observed dynamics and identifies candidate sets of state variables.
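A heavily simplified stand-in for intrinsic-dimension discovery (PCA on synthetic data, not the paper's neural-network method; all sizes and thresholds are invented): a 1D pendulum-like cycle embedded in 50 observed dimensions still reveals only two significant linear directions, since a circle needs two coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)    # hidden latent state
embed = rng.standard_normal((2, 50))                 # random observation map
obs = circle @ embed + 0.01 * rng.standard_normal((500, 50))  # "video" data

# Singular values of the centered data reveal how many directions matter.
var = np.linalg.svd(obs - obs.mean(0), compute_uv=False) ** 2
explained = var / var.sum()
print(int((explained > 0.01).sum()))   # 2 significant directions
```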

 

Github repository is here


Minerva: Solving Math and Physics Problems with Language Models


Language models have demonstrated remarkable performance on a variety of natural language tasks — indeed, a general lesson from many works, including BERT, GPT-3, Gopher, and PaLM, has been that neural networks trained on diverse data at large scale in an unsupervised way can perform well on a variety of tasks.

Quantitative reasoning is one area in which language models still fall far short of human-level performance.

 

Solving mathematical and scientific questions requires a combination of skills, including correctly parsing a question with natural language and mathematical notation, recalling relevant formulas and constants, and generating step-by-step solutions involving numerical calculations and symbolic manipulation. Due to these challenges, it is often believed that solving quantitative reasoning problems using machine learning will require significant advancements in model architecture and training techniques, granting models access to external tools such as Python interpreters, or possibly a more profound paradigm shift.

 

In “Solving Quantitative Reasoning Problems With Language Models” (to be released soon on the arXiv), we present Minerva, a language model capable of solving mathematical and scientific questions using step-by-step reasoning. We show that by focusing on collecting training data that is relevant for quantitative reasoning problems, training models at scale, and employing best-in-class inference techniques, we achieve significant performance gains on a variety of difficult quantitative reasoning tasks.

 

Minerva solves such problems by generating solutions that include numerical calculations and symbolic manipulation without relying on external tools such as a calculator. The model parses and answers mathematical questions using a mix of natural language and mathematical notation. Minerva combines several techniques, including few-shot prompting, chain-of-thought or scratchpad prompting, and majority voting, to achieve state-of-the-art performance on STEM reasoning tasks. You can explore Minerva’s output with our interactive sample explorer!

 

A Model Built for Multi-step Quantitative Reasoning

To promote quantitative reasoning, Minerva builds on the Pathways Language Model (PaLM), with further training on a 118GB dataset of scientific papers from the arXiv preprint server and web pages that contain mathematical expressions using LaTeX, MathJax, or other mathematical typesetting formats. Standard text cleaning procedures often remove symbols and formatting that are essential to the semantic meaning of mathematical expressions. By maintaining this information in the training data, the model learns to converse using standard mathematical notation.
 
Minerva also incorporates recent prompting and evaluation techniques to better solve mathematical questions. These include chain of thought or scratchpad prompting — where Minerva is prompted with several step-by-step solutions to existing questions before being presented with a new question — and majority voting. Like most language models, Minerva assigns probabilities to different possible outputs. When answering a question, rather than taking the single solution Minerva scores as most likely, multiple solutions are generated by sampling stochastically from all possible outputs. These solutions are different (e.g., the steps are not identical), but often arrive at the same final answer. Minerva uses majority voting on these sampled solutions, taking the most common result as the conclusive final answer.
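Majority voting itself is simple; a minimal sketch with invented final answers (not real Minerva output):

```python
from collections import Counter

# Sample several step-by-step solutions, keep only each one's final answer,
# and return the most common answer as the conclusive result.
sampled_final_answers = ["42", "42", "7", "42", "7", "42"]
answer, votes = Counter(sampled_final_answers).most_common(1)[0]
print(answer, votes)   # 42 4
```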

Mathematical Model Predicts Efficacy of Immunotherapies for Patients With Cancer

For 124 patients, four cancer types and two immunotherapy agents, the new model reliably predicted the immune responses and final tumor burden across all cancers and drug combinations examined.

 

Immunotherapy has shown great promise in the fight against cancer. By activating the body’s own immune system to identify and attack its cancer cells, these next generation therapies offer the potential for highly targeted and efficacious cancer treatment with fewer negative side effects than traditional cancer therapies. Immunotherapies also may work more effectively on certain types of cancer that are known to respond poorly to other treatment methods.

 

Despite the substantial advances, immunotherapy still presents notable challenges. While these therapies have been highly effective against certain types of cancer, more than 50 percent of cancer patients fail to respond to immunotherapies. Among patients who do respond, the response often develops more slowly than with more traditional treatment regimens, making it challenging to determine when to alter the clinical approach. Several promising biomarkers for immunotherapy response have been identified to help alleviate this issue, but they are often specific to a certain family of drug and disease combinations and may require extensive and invasive diagnostic testing.

 

A team of researchers led by Houston Methodist's Zhihui Wang, PhD, associate research professor of mathematics in medicine, and Vittorio Cristini, PhD, professor of mathematics in medicine, has developed, analyzed and validated a mathematical model that can quantify the sensitivity of a cancer cell type to a specific immunotherapeutic drug. The new analytic tool employs inputs that are already being measured in cancer patients to help optimize treatment approaches based on the individual’s specific disease and immune health, thus enhancing the chances for selecting successful treatments from a wide variety of cancer-immunotherapy drug combinations. The model establishes a framework for engineering personalized treatment strategies. The specifics of the mathematical model were published in Nature Biomedical Engineering in collaboration with researchers at MD Anderson Cancer Center.


Mathematics may have caught up with Google’s quantum-computer supremacy claims


In 2019, word filtered out that a quantum computer built by Google had performed calculations that the company claimed would be effectively impossible to replicate on supercomputing hardware. That turned out not to be entirely correct, since Google had neglected to consider the storage available to supercomputers; if that were included, the quantum computer's lead shrank to just a matter of days.

 

Adding just a handful of additional qubits, however, would re-establish the quantum computer's vast lead. Recently, however, a draft manuscript was placed on the arXiv that points out a critical fact: Google's claims relied on comparisons to a very specific approach to performing the calculation on standard computing hardware. There are other ways to perform the calculation, and the paper suggests one of those would allow a supercomputer to actually pull ahead of its quantum competitor.


How Math Might Reveal Quantum Gravity and Complete the Ultimate Physics Theory


Even in an incomplete state, quantum field theory is the most successful physical theory ever discovered. Nathan Seiberg, one of its leading architects, talks about the gaps in QFT and how mathematicians could fill them.


Can’t solve a riddle? The answer might lie in knowing what doesn’t work


Ever get stuck trying to solve a puzzle? You look for a pattern, or a rule, and you just can't spot it. So you back up and start over. That's your brain recognizing that your current strategy isn't working, and that you need a new way to solve the problem, according to new research from the University of Washington. With the help of about 200 puzzle-takers, a computer model and functional MRI (fMRI) images, researchers have learned more about the processes of reasoning and decision-making, pinpointing the brain pathway that springs into action when problem-solving goes south.

 

"There are two fundamental ways your brain can steer you through life -- toward things that are good, or away from things that aren't working out," said Chantel Prat, associate professor of psychology and co-author of the new study, published Feb. 23 2021, in the journal Cognitive Science. "Because these processes are happening beneath the hood, you're not necessarily aware of how much driving one or the other is doing." Using a decision-making task developed by Michael Frank at Brown University, the researchers measured exactly how much "steering" in each person's brain involved learning to move toward rewarding things as opposed to away from less-rewarding things. Prat and her co-authors were focused on understanding what makes someone good at problem-solving.

 

The research team first developed a computer model that specified the series of steps they believed were required for solving the Raven's Advanced Performance Matrices (Raven's) -- a standard lab test made of puzzles like the one above. To succeed, the puzzle-taker must identify patterns and predict the next image in the sequence. The model essentially describes the four steps people take to solve a puzzle:

  • Identify a key feature in a pattern;
  • Figure out where that feature appears in the sequence;
  • Come up with a rule for manipulating the feature;
  • Check whether the rule holds true for the entire pattern.

 

At each step, the model evaluated whether it was making progress. When the model was given real problems to solve, it performed best when it was able to steer away from the features and strategies that weren't helping it make progress. According to the authors, this ability to know when your "train of thought is on the wrong track" was central to finding the correct answer.
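The four-step loop, with the "avoid" signal made explicit, can be sketched as a rule search that discards candidates as soon as they stop fitting (the sequence and candidate rules are invented; this is not the authors' Raven's model):

```python
# Steer away from rules that stop making progress, keep the one that holds
# for the entire pattern.
sequence = [2, 4, 8, 16]

candidate_rules = {
    "add 2":   lambda x: x + 2,
    "times 2": lambda x: x * 2,
    "square":  lambda x: x * x,
}

surviving = dict(candidate_rules)
for a, b in zip(sequence, sequence[1:]):
    for name, rule in list(surviving.items()):
        if rule(a) != b:          # the rule stopped making progress...
            del surviving[name]   # ...so steer away from it
print(list(surviving), surviving["times 2"](sequence[-1]))  # ['times 2'] 32
```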

 

The next step was to see whether this was true in people. To do so, the team had three groups of participants solve puzzles in three different experiments. In the first, they solved the original set of Raven's problems using a paper-and-pencil test, along with Frank's test which separately measured their ability to "choose" the best options and to "avoid" the worse options. Their results suggested that only the ability to "avoid" the worst options related to problem-solving success. There was no relation between one's ability to recognize the best choice in the decision-making test, and to solve the puzzles effectively.

 

The second experiment replaced the paper-and-pencil version of the puzzles with a shorter, computerized version of the task that could also be implemented in an MRI brain-scanning environment. These results confirmed that those who were best at avoiding the worse options in the decision-making task were also the best problem solvers.

 

The final group of participants completed the computerized puzzles while having their brain activity recorded using fMRI. Based on the model, the researchers gauged which parts of the brain would drive problem-solving success. They zeroed in on the basal ganglia -- what Prat calls the "executive assistant" to the prefrontal cortex, or "CEO" of the brain. The basal ganglia assist the prefrontal cortex in deciding which action to take using parallel paths: one that turns the volume "up" on information it believes is relevant, and another that turns the volume "down" on signals it believes to be irrelevant. The "choose" and "avoid" behaviors associated with Frank's decision-making test relate to the functioning of these two pathways. Results from this experiment suggest that the process of "turning down the volume" in the basal ganglia predicted how successful participants were at solving the puzzles.

 

"Our brains have parallel learning systems for avoiding the least good thing and getting the best thing. A lot of research has focused on how we learn to find good things, but this pandemic is an excellent example of why we have both systems. Sometimes, when there are no good options, you have to pick the least bad one! What we found here was that this is even more critical to complex problem-solving than recognizing what's working."


Structured Light: Optical Framed Knots as Information Carriers


Modern beam shaping techniques have enabled the generation of optical fields displaying a wealth of structural features, which include three-dimensional topologies such as Möbius, ribbon strips and knots. However, unlike simpler types of structured light, the topological properties of these optical fields have hitherto remained more of a fundamental curiosity as opposed to a feature that can be applied in modern technologies. Due to their robustness against external perturbations, topological invariants in physical systems are increasingly being considered as a means to encode information. Hence, structured light with topological properties could potentially be used for such purposes.

 

Now, a team of scientists has introduced the experimental realization of structures known as framed knots within optical polarization fields. They further develop a protocol in which the topological properties of framed knots are used in conjunction with prime factorization to encode information. Beam shaping methods can generate optical fields with nontrivial topologies, which are invariant against perturbations and thus interesting for information encoding. Here, the authors introduce the realization of framed optical knots and use them, together with prime factorization, to encode information.
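The knot-theoretic side of the protocol is beyond a short snippet, but the number-theoretic ingredient is easy to illustrate. A minimal trial-division factorization sketch (purely illustrative; the paper's actual encoding maps such factors onto topological properties of the framed knots):

```python
def prime_factors(n):
    """Trial division: return the prime factors of n, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```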

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

IBM's Eagle quantum computer just beat a supercomputer at complex math

IBM's Eagle quantum computer just beat a supercomputer at complex math | Amazing Science | Scoop.it
 
Researchers are working out ways to ignore the noise and leverage the power of qubits to advance quantum computing.

 

IBM's Eagle quantum computer has outperformed a conventional supercomputer when solving complex mathematical calculations. This is also the first demonstration of a quantum computer providing accurate results at a scale of 100+ qubits, a company press release said.

 

Qubits, short for quantum bits, are the quantum analogs of classical bits. Both are the smallest units of information. However, unlike a bit, which can exist in only one of two states, 0 or 1, a qubit can also occupy a superposition, existing in any proportion of the two states at once. Scientists have been working on using superposition to compute large amounts of information in a fraction of the time it would take on a supercomputer. However, since a superposition can be disturbed by even the slightest interference from the outside environment, quantum computers are error-prone.
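A minimal numerical sketch of what superposition means for a single qubit: the state is a normalized two-component amplitude vector, and measurement probabilities are squared magnitudes. (A toy illustration only; it does not capture entanglement or noise.)

```python
import math

# A qubit state a|0> + b|1> as two amplitudes with |a|^2 + |b|^2 = 1.
a = 1 / math.sqrt(2)   # equal superposition
b = 1 / math.sqrt(2)

p0 = abs(a) ** 2       # probability of reading out 0
p1 = abs(b) ** 2       # probability of reading out 1
print(p0, p1)          # 0.5 and 0.5

# Why scale matters: describing n qubits classically takes 2**n amplitudes.
print(2 ** 127)        # the state-space size of a 127-qubit machine like Eagle
```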

 

Scientists have been looking to secure computing environments and to work with as few qubits as possible to reduce interference. In the recent past, though, researchers have instead started working with the 'noise,' since increasing the number of qubits brings exponential advantages. Recently, Interesting Engineering reported how Chinese researchers used their photonic quantum computer Jiuzhang to solve a mathematical problem in less than a second; the same problem would have taken at least five years on the fastest supercomputer.

 

Now a team of researchers led by Abhinav Kandala at IBM used a similar approach and decided to test the abilities of their "noisy" quantum computer against a conventional supercomputer at the Lawrence Berkeley National Laboratory in California. The Eagle quantum computer used by the team had 127 qubits, and both computers were asked to calculate the most likely behavior of a collection of particles, such as atoms with a spin arranged in a grid and interacting with each other.

Tanja Elbaz's curator insight, November 13, 2023 3:25 PM
 

Scooped by Dr. Stefan Gruenwald
Scoop.it!

A peek into the mathematics of black holes

A peek into the mathematics of black holes | Amazing Science | Scoop.it

Black holes exist in our universe. That’s widely accepted today. Physicists have detected the X-rays emitted when black holes feed, analyzed the gravitational waves from black hole collisions and even imaged two of these behemoths. But mathematician Elena Giorgi of Columbia University studies black holes in a different way.

 

“Black holes are mathematical solutions to the Einstein equation,” Giorgi says — the “master equation” that is the basis of the general theory of relativity. She and other mathematicians seek to prove theorems about these solutions and otherwise probe the math of general relativity. Their goal: unlock unsuspected truths about black holes or verify existing suspicions.

 

Within general relativity, “one can understand clean mathematical statements and study those statements, and they can give an unambiguous answer within that theory,” says Christoph Kehle, a mathematician at ETH Zurich’s Institute for Theoretical Studies.

 

Mathematicians can solve equations that have bearing on questions about the nature of black holes’ formation, evolution and stability. Last year, in a paper posted online at arXiv.org, Giorgi and colleagues settled a long-standing mathematical question about black hole stability. A stable black hole, mathematically speaking, is one that if poked, nudged or otherwise disturbed will eventually settle back into being a black hole. Like a rubber band that has been stretched and then released, the black hole doesn’t rip apart, explode or cease to exist, but returns to something like its former self.

 

Black holes seem to be physically stable — otherwise they couldn’t endure in the universe — but proving it mathematically is a different beast. And a necessary feat, Giorgi says. If black holes are stable, as researchers presume, then the math describing them had better reflect that stability. If not, something is wrong with the underlying theory. “Most of my work,” Giorgi says, “is about proving things that we already expected to be true.”

 

Mathematics has a history of big contributions in the realm of black holes. In 1916, Karl Schwarzschild published a solution to Einstein’s equations for general relativity near a single spherical mass. The math showed a limit to how small a mass could be squeezed, an early sign of black holes. More recently, British mathematician Roger Penrose won the 2020 Nobel Prize in physics for his calculations showing that black holes were real-world predictions of general relativity. In a landmark paper published in 1965, Penrose described how matter could collapse to form a black hole with a singularity at its center.
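Schwarzschild's "limit to how small a mass could be squeezed" is the Schwarzschild radius, r_s = 2GM/c². A quick back-of-the-envelope check for the Sun (constants rounded):

```python
# Schwarzschild radius r_s = 2GM/c^2: compress a mass inside this radius
# and nothing, not even light, escapes. For the Sun it is about 3 km.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2
print(r_s)  # ~2950 m
```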

Tanja Elbaz's curator insight, November 14, 2023 10:13 AM
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Explained: The deconstructed Standard Model Equation

Explained: The deconstructed Standard Model Equation | Amazing Science | Scoop.it

The Standard Model of particle physics is often visualized as a table, similar to the periodic table of elements, and used to describe particle properties, such as mass, charge and spin. The table is also organized to represent how these teeny, tiny bits of matter interact with the fundamental forces of nature.

 

But it didn’t begin as a table. The grand theory of almost everything actually represents a collection of several mathematical models that proved to be timeless interpretations of the laws of physics.

Here is a brief tour of the topics covered in this gargantuan equation.

The whole thing

This version of the Standard Model is written in the Lagrangian form. The Lagrangian is a compact way of writing an equation that determines how a changing system evolves: the equations of motion follow from it through the principle of least action. Technically, the Standard Model can be written in several different formulations, but, despite appearances, the Lagrangian is one of the easiest and most compact ways of presenting the theory.
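As a textbook sketch of what "Lagrangian form" means, here is the simplest mechanical case: the Lagrangian is kinetic minus potential energy, and the equation of motion follows from the Euler-Lagrange equation. The Standard Model Lagrangian plays the same role, with quantum fields in place of a particle coordinate.

```latex
L = T - V = \tfrac{1}{2} m \dot{x}^2 - V(x),
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\Longrightarrow\;
m\ddot{x} = -\frac{dV}{dx}.
```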

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Symmetry: from Galois to the Monster in 196883 dimensions

Symmetry: from Galois to the Monster in 196883 dimensions | Amazing Science | Scoop.it
Symmetry

The mathematical study of symmetry is called group theory. This is because the symmetry operations on an object, or the symmetries that preserve a particular pattern, form a group in the mathematical sense. One symmetry operation followed by another gives a third one in the same group, and this group embodies, in an abstract way, the symmetry of the object or pattern concerned. The application of groups to serious mathematical problems first arose in the work of Évariste Galois, a young French mathematician who died after being fatally wounded in a duel at the age of twenty.
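Closure — one symmetry operation followed by another gives a third in the same group — is easy to check by machine. A small illustrative script (the corner labels and names are my own) represents symmetries of a square as permutations of its corners and generates the full group of 8:

```python
from itertools import product

def compose(f, g):
    """Apply symmetry g first, then f; both are permutations of corners 0..3."""
    return tuple(f[g[i]] for i in range(4))

identity = (0, 1, 2, 3)
rot90 = (1, 2, 3, 0)   # rotate the square by 90 degrees
flip = (1, 0, 3, 2)    # reflect left-right

# Repeatedly compose until no new symmetries appear (closure).
group = {identity, rot90, flip}
grew = True
while grew:
    grew = False
    for f, g in product(list(group), repeat=2):
        h = compose(f, g)
        if h not in group:
            group.add(h)
            grew = True

print(len(group))  # 8 symmetries: the dihedral group of the square
```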

 

Group Theory

Mathematicians study groups in various ways, one of which is to deconstruct them into simpler groups. Those that cannot be deconstructed further — the very ‘atoms’ of the subject — are called ‘simple’ groups, though they can be very complicated. In the book, these finite simple groups are called ‘atoms of symmetry’, and the first ones were discovered by Galois in about 1830.

 

Finite Simple Groups

Most finite simple groups fit into a table, rather like the periodic table of chemical elements. Those in the table are called groups of Lie type, the term “Lie” (pronounced Lee) being in honor of the Norwegian mathematician Sophus Lie. His work in the late nineteenth century led to continuous groups — called Lie groups — and these in turn led to finite groups of ‘Lie type’. The table of all such groups was complete by the early 1960s, but there were exceptions that did not fit in. They are called sporadic groups.

 

Sporadic Groups

In the mid-to-late nineteenth century, the French mathematician Émile Mathieu created five very exceptional groups of permutations, the largest of which is called M24. Mathieu's groups did not fit into the later periodic table, and remained the only exceptions for a hundred years, until the Croatian mathematician Zvonimir Janko found a new one that he published in 1966. This inspired the search for other sporadic groups, and their discovery is an intriguing story involving a variety of methods: some geometric, some involving patterns exhibiting interesting permutations, and some by analyzing possible cross-sections (called 'involution centralizers' in group theory). These latter cases were very technical, and the construction of the sporadic group was a tricky business, usually involving computer techniques. The Monster — the largest sporadic group — was predicted by the cross-section method, but its size and complicated structure rendered computer methods impractical, and it had to be constructed by hand. There are two main threads that led to the Monster. One was the Leech lattice and the Conway groups; the other was the Baby Monster discovered by Bernd Fischer.

 

The Leech Lattice

This is a 24-dimensional lattice created in the 1960s by John Leech in Scotland. He used a design discovered in the mid-1930s by the German mathematician Ernst Witt, who had created it in order to construct the largest Mathieu group M24. Leech used it to obtain a remarkable way of packing 24-dimensional spheres, a fact with useful applications to technology. The symmetry of this lattice was investigated in detail by the English mathematician John Conway. It yielded several sporadic groups, including three new ones, now known as the Conway groups.

 

The Monster

The other thread that led to the Monster emerged from the work of the German mathematician Bernd Fischer, who created three large and remarkable sporadic groups that are related to, but far larger than, the three largest Mathieu groups. Fischer then found a huge fourth one, later named the Baby Monster, and the Monster was predicted as an even larger sporadic group having the Baby as a cross-section. Fischer, in collaboration with Donald Livingstone and Michael Thorne in England, calculated the character table of the Monster (a square array of numbers giving immense information about the group in question). They assumed the Monster could operate in 196,883 dimensions — at minimum — a number calculated by Simon Norton at Cambridge. Norton worked out that the Monster, if it existed, would have to preserve an algebra structure in 196,884 dimensions, and the American mathematician Robert Griess constructed it on that basis. His work used the Leech lattice, and his algebra structure yielded the Monster as its group of symmetries.

 

A nice video about the Monster group is from the mathematician who participated in describing it: Dr. Richard E. Borcherds.

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Deepmind: Discovering novel algorithms for mathematics and physics with AlphaTensor

Deepmind: Discovering novel algorithms for mathematics and physics with AlphaTensor | Amazing Science | Scoop.it

AlphaTensor, an AI system for discovering novel, efficient, and exact algorithms for matrix multiplication - a building block of modern computations.

 

Original Article

Algorithms have helped mathematicians perform fundamental operations for thousands of years. The ancient Egyptians created an algorithm to multiply two numbers without requiring a multiplication table, and Greek mathematician Euclid described an algorithm to compute the greatest common divisor, which is still in use today. During the Islamic Golden Age, Persian mathematician Muhammad Ibn Musa al-Khwarizmi designed new algorithms to solve linear and quadratic equations. In fact, al-Khwarizmi’s name, translated into Latin as Algoritmi, led to the term algorithm. But, despite the familiarity with algorithms today – used throughout society from classroom algebra to cutting edge scientific research – the process of discovering new algorithms is incredibly difficult, and an example of the amazing reasoning abilities of the human mind. 
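Euclid's greatest-common-divisor algorithm, mentioned above, is short enough to quote in full. This modern form repeatedly replaces the pair (a, b) by (b, a mod b):

```python
def gcd(a, b):
    """Euclid's algorithm: the GCD is unchanged by replacing (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # 18
```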

 

In Deepmind's recent paper, published in Nature, they introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.

 

This paper is a stepping stone in DeepMind’s mission to advance science and unlock the most fundamental problems using AI. AlphaTensor builds upon AlphaZero, an agent that has shown superhuman performance on board games, like chess, Go and shogi, and this work shows the journey of AlphaZero from playing games to tackling unsolved mathematical problems for the first time.

 

Deepmind's AlphaTensor focuses on the fundamental task of matrix multiplication and uses deep reinforcement learning (DRL) to search for provably correct and efficient matrix multiplication algorithms. This algorithm discovery process is particularly amenable to automation because a rich space of matrix multiplication algorithms can be formalized as low-rank decompositions of a specific three-dimensional (3D) tensor, called the matrix multiplication tensor. This space of algorithms contains the standard matrix multiplication algorithm and recursive algorithms such as Strassen's, as well as the (unknown) asymptotically optimal algorithm. Although an important body of work aims at characterizing the complexity of the asymptotically optimal algorithm, this does not yield practical algorithms. AlphaTensor focuses here on practical matrix multiplication algorithms, which correspond to explicit low-rank decompositions of the matrix multiplication tensor. In contrast to two-dimensional matrices, for which efficient polynomial-time algorithms computing the rank have existed for over two centuries, finding low-rank decompositions of 3D tensors (and beyond) is NP-hard and is also hard in practice. In fact, the search space is so large that even the optimal algorithm for multiplying two 3 × 3 matrices is still unknown.
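As a concrete instance of the "low-rank decomposition" view, here is Strassen's classic 1969 scheme, which multiplies two 2 × 2 matrices with 7 scalar multiplications instead of 8. Each m_i is one rank-1 term of the decomposition; AlphaTensor searches for recipes of exactly this kind for larger sizes.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's 7 multiplications (instead of 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], matching the standard product
```

Applied recursively to blocks, this 7-multiplication trick is what makes Strassen's algorithm asymptotically faster than the schoolbook method.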

 

Nevertheless, in a longstanding research effort, matrix multiplication algorithms have been discovered by attacking this tensor decomposition problem using human search, continuous optimization and combinatorial search. These approaches often rely on human-designed heuristics, which are probably suboptimal. DeepMind instead uses DRL to learn to recognize and generalize over patterns in tensors, and uses the learned agent to predict efficient decompositions. They formulate the matrix multiplication algorithm discovery procedure (that is, the tensor decomposition problem) as a single-player game, called TensorGame. At each step of TensorGame, the player selects how to combine different entries of the matrices to multiply. A score is assigned based on the number of selected operations required to reach the correct multiplication result.

 

This is a challenging game with an enormous action space (more than 10^12 actions for most interesting cases) that is much larger than that of traditional board games such as chess and Go (hundreds of actions). To solve TensorGame and find efficient matrix multiplication algorithms, the team developed a DRL agent, AlphaTensor. AlphaTensor is built on AlphaZero, where a neural network is trained to guide a planning procedure searching for efficient matrix multiplication algorithms. The framework uses a single agent to decompose matrix multiplication tensors of various sizes, yielding transfer of learned decomposition techniques across various tensors. To address the challenging nature of the game, AlphaTensor uses a specialized neural network architecture, exploits symmetries of the problem and makes use of synthetic training games.

 

AlphaTensor scales to a substantially larger algorithm space than what is within reach for either human or combinatorial search. In fact, AlphaTensor discovers from scratch many provably correct matrix multiplication algorithms that improve over existing algorithms in terms of number of scalar multiplications. The developers also adapt the algorithm discovery procedure to finite fields, and improve over Strassen’s two-level algorithm for multiplying 4 × 4 matrices for the first time since its inception in 1969.

 

AlphaTensor also discovers a diverse set of algorithms—up to thousands for each size—showing that the space of matrix multiplication algorithms is richer than previously thought. They also exploit the diversity of discovered factorizations to improve state-of-the-art results for large matrix multiplication sizes. Through different use-cases, Deepmind highlight AlphaTensor’s flexibility and wide applicability: AlphaTensor discovers efficient algorithms for structured matrix multiplication improving over known results, and finds efficient matrix multiplication algorithms tailored to specific hardware, by optimizing for actual runtime. These algorithms multiply large matrices faster than human-designed algorithms on the same hardware.

No comment yet.
Rescooped by Dr. Stefan Gruenwald from SynBioFromLeukipposInstitute
Scoop.it!

The geometry of life: when mathematics meets synthetic biology

The geometry of life: when mathematics meets synthetic biology | Amazing Science | Scoop.it

Tiling patterns can be found throughout the natural world - from honeycomb to fish scales. But now researchers have come up with a new way to create patterns in petri dishes using bacteria. By engineering bacterial cells to express uniquely adhesive proteins on their surface, the team could create linear patterns - formed as colonies of cells stuck together as they grew. What's more, by varying the exact proteins expressed and modeling where to place the bacterial cells, they were able to control the resulting geometry - creating a range of complex patterns.


Via Gerd Moe-Behrens
No comment yet.
Rescooped by Dr. Stefan Gruenwald from Virus World
Scoop.it!

Global Impact of the First Year of COVID-19 Vaccination: a Mathematical Modelling Study

Global Impact of the First Year of COVID-19 Vaccination: a Mathematical Modelling Study | Amazing Science | Scoop.it
The first COVID-19 vaccine outside a clinical trial setting was administered on Dec 8, 2020. To ensure global vaccine equity, vaccine targets were set by the COVID-19 Vaccines Global Access (COVAX) Facility and WHO. However, due to vaccine shortfalls, these targets were not achieved by the end of 2021. We aimed to quantify the global impact of the first year of COVID-19 vaccination programs.
 
A mathematical model of COVID-19 transmission and vaccination was separately fit to reported COVID-19 mortality and all-cause excess mortality in 185 countries and territories. The impact of the COVID-19 vaccination initiative was determined by estimating the additional lives lost if no vaccines had been distributed. We also estimated the additional deaths that would have been averted had the vaccination coverage targets of 20% set by COVAX and 40% set by WHO been achieved by the end of 2021.

Findings

Based on official reported COVID-19 deaths, we estimated that vaccinations prevented 14·4 million (95% credible interval [Crl] 13·7–15·9) deaths from COVID-19 in 185 countries and territories between Dec 8, 2020, and Dec 8, 2021. This estimate rose to 19·8 million (95% Crl 19·1–20·4) deaths from COVID-19 averted when we used excess deaths as an estimate of the true extent of the pandemic, representing a global reduction of 63% in total deaths (19·8 million of 31·4 million) during the first year of COVID-19 vaccination. In COVAX Advance Market Commitment countries, we estimated that 41% of excess mortality (7·4 million [95% Crl 6·8–7·7] of 17·9 million deaths) was averted. In low-income countries, we estimated that an additional 45% (95% CrI 42–49) of deaths could have been averted had the 20% vaccination coverage target set by COVAX been met by each country, and that an additional 111% (105–118) of deaths could have been averted had the 40% target set by WHO been met by each country by the end of 2021.
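The headline 63% figure follows directly from the reported numbers; a quick arithmetic check:

```python
# 19.8 million deaths averted out of 31.4 million that would otherwise have
# occurred (19.8 million averted plus the excess deaths actually observed).
averted = 19.8e6
would_have_occurred = 31.4e6
reduction = 100 * averted / would_have_occurred
print(round(reduction))  # 63
```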

Interpretation

COVID-19 vaccination has substantially altered the course of the pandemic, saving tens of millions of lives globally. However, inadequate access to vaccines in low-income countries has limited the impact in these settings, reinforcing the need for global vaccine equity and coverage.
 
Published in The Lancet Infectious Diseases (June 23, 2022).

Via Juan Lama
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

"Math neurons" identified in the human brain

"Math neurons" identified in the human brain | Amazing Science | Scoop.it

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions. They do not care whether the calculation instruction is written down as a word or a symbol. The results have now been published in the journal Current Biology.

 

Most elementary school children probably already know that three apples plus two apples add up to five apples. However, what happens in the brain during such calculations is still largely unknown. The current study by the Universities of Bonn and Tübingen now sheds light on this issue.

 

The researchers benefited from a special feature of the Department of Epileptology at the University Hospital Bonn. It specializes in surgical procedures on the brains of people with epilepsy. In some patients, seizures always originate from the same area of the brain. In order to precisely localize this defective area, the doctors implant several electrodes into the patients. The probes can be used to precisely determine the origin of the seizures. In addition, the activity of individual neurons can be measured via the wiring.

 

Some neurons fire only when summing up

Five women and four men participated in the current study. They had electrodes implanted in the so-called temporal lobe of the brain to record the activity of nerve cells. Meanwhile, the participants had to perform simple arithmetic tasks. "We found that different neurons fired during additions than during subtractions," explains Prof. Florian Mormann from the Department of Epileptology at the University Hospital Bonn.

 

It was not the case that some neurons responded only to a "+" sign and others only to a "-" sign: "Even when we replaced the mathematical symbols with words, the effect remained the same," explains Esther Kutter, who is doing her doctorate in Prof. Mormann's research group. "For example, when subjects were asked to calculate '5 and 3', their addition neurons sprang back into action; whereas for '7 less 4,' their subtraction neurons did."

 

This shows that the cells discovered actually encode a mathematical instruction for action. The brain activity thus showed with great accuracy what kind of tasks the test subjects were currently calculating: The researchers fed the cells' activity patterns into a self-learning computer program. At the same time, they told the software whether the subjects were currently calculating a sum or a difference. When the algorithm was confronted with new activity data after this training phase, it was able to accurately identify during which computational operation it had been recorded.
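The decoding step can be sketched with synthetic data. Everything below is hypothetical — made-up "firing rates" for three neurons and a bare-bones nearest-centroid classifier standing in for the self-learning program used in the study:

```python
import random

random.seed(1)

# Assumed mean firing rates of 3 synthetic neurons on each trial type.
ADD_MEAN = [8.0, 2.0, 5.0]
SUB_MEAN = [2.0, 8.0, 5.0]

def trial(means):
    """One noisy trial: Gaussian jitter around the mean rates."""
    return [random.gauss(m, 1.0) for m in means]

train = [(trial(ADD_MEAN), "add") for _ in range(50)] + \
        [(trial(SUB_MEAN), "sub") for _ in range(50)]

def centroid(label):
    rows = [x for x, y in train if y == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

cents = {"add": centroid("add"), "sub": centroid("sub")}

def classify(x):
    """Assign the trial to the nearest class centroid (squared distance)."""
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cents[lab])))

test_trials = [(trial(ADD_MEAN), "add") for _ in range(20)] + \
              [(trial(SUB_MEAN), "sub") for _ in range(20)]
accuracy = sum(classify(x) == y for x, y in test_trials) / len(test_trials)
print(accuracy)  # should be close to 1.0 for these well-separated patterns
```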

 

Prof. Andreas Nieder from the University of Tübingen supervised the study together with Prof. Mormann. "We know from experiments with monkeys that neurons specific to certain computational rules also exist in their brains," he says. "In humans, however, there is hardly any data in this regard." During their analysis, the two working groups came across an interesting phenomenon: One of the brain regions studied was the so-called parahippocampal cortex. There, too, the researchers found nerve cells that fired specifically during addition or subtraction. However, when summing up, different addition neurons became alternately active during one and the same arithmetic task. Figuratively speaking, it is as if the plus key on the calculator were constantly changing its location. It was the same with subtraction. Researchers also refer to this as "dynamic coding."

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Advancing mathematics by guiding human intuition with AI

Advancing mathematics by guiding human intuition with AI | Amazing Science | Scoop.it

The practice of mathematics involves discovering patterns and using these to formulate and prove conjectures, resulting in theorems. Since the 1960s, mathematicians have used computers to assist in the discovery of patterns and formulation of conjectures, most famously in the Birch and Swinnerton-Dyer conjecture, a Millennium Prize Problem.

 

Now, mathematical researchers provide examples of new fundamental results in pure mathematics that have been discovered with the assistance of machine learning—demonstrating a method by which machine learning can aid mathematicians in discovering new conjectures and theorems. They propose a process of using machine learning to discover potential patterns and relations between mathematical objects, understanding them with attribution techniques and using these observations to guide intuition and propose conjectures. In a recent paper, they outline this machine-learning-guided framework and demonstrate its successful application to current research questions in distinct areas of pure mathematics, in each case showing how it led to meaningful mathematical contributions on important open problems: a new connection between the algebraic and geometric structure of knots, and a candidate algorithm predicted by the combinatorial invariance conjecture for symmetric groups.

 

This important work may serve as a model for collaboration between the fields of mathematics and artificial intelligence (AI) that can achieve surprising results by leveraging the respective strengths of mathematicians and machine learning.

No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

A random walk in 10 dimensions (with book blog and python code)

A random walk in 10 dimensions (with book blog and python code) | Amazing Science | Scoop.it

Physics in high dimensions is becoming the norm in modern dynamics.  It is not only that string theory operates in nine spatial dimensions (plus one for time), but virtually every complex dynamical system is described and analyzed within state spaces of high dimensionality.  Population dynamics, for instance, may describe hundreds or thousands of different species, each of whose time-varying populations defines a separate axis in a high-dimensional space.  Coupled mechanical systems likewise may have hundreds or thousands (or more) of degrees of freedom that are described in high-dimensional phase space.
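A random walk makes these high-dimensional state spaces concrete. A minimal sketch (not the book's code): walk in d = 10 dimensions by stepping ±1 along one randomly chosen axis per step; the mean squared distance from the origin grows linearly with the number of steps, regardless of dimension.

```python
import random

random.seed(2)

def squared_distance_after(d, steps):
    """Random walk in d dimensions: each step moves +-1 along one random axis."""
    x = [0] * d
    for _ in range(steps):
        x[random.randrange(d)] += random.choice((-1, 1))
    return sum(v * v for v in x)

n_steps = 1000
msd = sum(squared_distance_after(10, n_steps) for _ in range(200)) / 200
print(msd)  # close to n_steps, since each step adds unit variance
```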

 

In high-dimensional landscapes, mountain ridges are much more common than mountain peaks. This has profound consequences for the evolution of life, the dynamics of complex systems, and the power of machine learning. For these reasons, as physics students today are being increasingly exposed to the challenges and problems of high-dimensional dynamics, it is important to build tools they can use to give them an intuitive feeling for the highly unintuitive behavior of systems in high-D.
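One such tool is the random walk of the title. A minimal sketch: give each of many walkers a standard-normal kick along every axis at each step, so the mean squared displacement grows linearly as E[|x_t|²] = t·d. The walker count and step count below are illustrative choices.

```python
import numpy as np

# Random walk in d = 10 dimensions: each step is a standard-normal kick along
# every axis, so E[|x_t|^2] = t * d.  Averaging over many walkers makes the
# linear growth of the mean squared displacement easy to see.
rng = np.random.default_rng(1)
d, n_walkers, n_steps = 10, 2000, 100

steps = rng.normal(size=(n_steps, n_walkers, d))
paths = np.cumsum(steps, axis=0)                  # positions after each step
msd = np.mean(np.sum(paths**2, axis=2), axis=1)   # mean squared displacement

print(msd[-1] / (n_steps * d))   # -> close to 1.0
```

The same three lines of simulation work unchanged for d = 3 or d = 1000, which is exactly why state-space methods scale to high dimensions even when visualization does not.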

 

In the rapidly developing field of machine learning, which routinely minimizes landscapes (loss functions or objective functions) in high dimensions, high dimensionality is usually referred to negatively as "The Curse of Dimensionality".

 

Dimensionality might be viewed as a curse for several reasons. First, it is almost impossible to visualize data in dimensions higher than d = 4 (the fourth dimension can sometimes be visualized using color or a time series). Second, too many degrees of freedom create too many variables to fit or model, leading to the classic problem of overfitting; put simply, there is an absurdly large amount of room in high dimensions. Third, our intuition about relationships among areas and volumes is heavily biased by our low-dimensional 3D experience, leaving us with serious misconceptions about geometric objects in high-dimensional spaces. Physical processes occurring in 3D can be over-generalized into preconceived notions that simply do not hold in higher dimensions.
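One concrete misconception-buster, as a Monte Carlo sketch: the fraction of a cube's volume occupied by its inscribed ball collapses as the dimension grows. In 2D a disc fills about 78% of its bounding square (π/4); by d = 10 the ball is a rounding error. The sample size is an arbitrary choice.

```python
import numpy as np

# Fraction of the cube [-1, 1]^d occupied by the inscribed unit ball,
# estimated by uniform sampling.  The "room" in high dimensions piles up
# in the corners, far from the center.
rng = np.random.default_rng(2)

def ball_fraction(d, n=200_000):
    pts = rng.uniform(-1.0, 1.0, size=(n, d))
    return np.mean(np.sum(pts**2, axis=1) <= 1.0)

print(ball_fraction(2))    # -> ~0.785  (pi/4)
print(ball_fraction(10))   # -> ~0.0025 (almost nothing)
```

This is also why uniform sampling becomes useless for exploring high-dimensional regions: nearly all sampled points land where the object of interest is not.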

Scooped by Dr. Stefan Gruenwald
Scoop.it!

A new theorem from the field of quantum machine learning hints at fundamental learnability limit

A new theorem from the field of quantum machine learning hints at fundamental learnability limit | Amazing Science | Scoop.it

A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.

“Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes,” said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published today in Physical Review Letters.

“Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don’t condemn quantum machine learning, but rather highlight the importance of understanding its limits,” Holmes said.

In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob's ability to learn the particulars of a given black hole's physics mean that reconstructing the information in the book is going to be very difficult or even impossible.

“Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can’t learn scrambling processes,” said Andrew Sornborger, a computer scientist at Los Alamos and coauthor of the paper. Sornborger is director of the Quantum Science Center at Los Alamos and leader of the Center’s algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.

Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large-scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
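The mechanism behind barren plateaus can be glimpsed in a much simpler setting. What follows is not a simulation of the paper's quantum circuits, only a numpy toy showing the underlying concentration-of-measure effect: overlaps of Haar-random states with any fixed state concentrate at 1/2ⁿ, with fluctuations shrinking exponentially in the number of qubits — the same flattening that starves a gradient-based learner.

```python
import numpy as np

# Concentration of measure in miniature: for Haar-random n-qubit states the
# overlap with a fixed reference state concentrates at 1/2^n, and its
# fluctuations vanish exponentially with n.  A cost landscape built from such
# overlaps is exponentially flat -- a barren plateau.
rng = np.random.default_rng(3)

def overlap_stats(n_qubits, n_samples=4000):
    dim = 2 ** n_qubits
    # Haar-random states: normalized complex-Gaussian vectors.
    psi = rng.normal(size=(n_samples, dim)) + 1j * rng.normal(size=(n_samples, dim))
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)
    overlaps = np.abs(psi[:, 0]) ** 2        # |<0...0|psi>|^2
    return overlaps.mean(), overlaps.var()

for n in (2, 4, 8):
    mean, var = overlap_stats(n)
    print(n, mean, var)   # mean ~ 1/2^n, variance shrinking fast
```

At n = 8 the typical deviation from the mean is already tiny; extrapolated to black-hole-sized systems, the signal an optimizer needs is exponentially buried.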

Aigloss's curator insight, May 31, 2021 5:05 PM

This link explores the theoretical limits of quantum computers in predicting extreme phenomena in the universe.

Rescooped by Dr. Stefan Gruenwald from Self-organizing, Systems and Complexity
Scoop.it!

Complexity Explained

Complexity Explained | Amazing Science | Scoop.it

"It is suggested that a system of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis."

– Alan Turing

 

Interactions between components of a complex system may produce a global pattern or behavior. This is often described as self-organization, as there is no central or external controller. Rather, the “control” of a self-organizing system is distributed across components and integrated through their interactions. Self-organization may produce physical/functional structures like crystalline patterns of materials and morphologies of living organisms, or dynamic/informational behaviors like shoaling behaviors of fish and electrical pulses propagating in animal muscles. As the system becomes more organized by this process, new interaction patterns may emerge over time, potentially leading to the production of greater complexity. In some cases, complex systems may self-organize into a “critical” state that could only exist in a subtle balance between randomness and regularity.

 

Patterns that arise in such self-organized critical states often show various peculiar properties, such as self-similarity and heavy-tailed distributions of pattern properties.

 

Examples: single egg cell dividing and eventually self-organizing into complex shape of an organism; cities growing as they attract more people and money; a large population of starlings showing complex flocking patterns.

 

Concepts: self-organization, collective behavior, swarms, patterns, space and time, order from disorder, criticality, self-similarity, bursts, self-organized criticality, power laws, heavy-tailed distributions, morphogenesis, decentralized/distributed control, guided self-organization.

 

A Forest Fire Model: This model is an example of self-organized criticality. The interplay of local tree growth and spontaneous, random forest fires caused by lightning yields complex spatio-temporal patterns in which the size of individual fires follows a power law and is scale-free. Designed by D. Brockmann, adapted from Complexity Explorables.
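The dynamics of such a forest-fire model can be sketched in a few dozen lines. This follows the standard Drossel–Schwabl rules (empty cells grow trees with probability p; lightning strikes a tree with probability f and instantly burns its whole connected cluster), not the specific Explorable's implementation; the grid size and parameter values are illustrative assumptions.

```python
import numpy as np
from collections import deque

# Minimal Drossel-Schwabl forest-fire model: empty cells grow trees with
# probability p; each tree is struck by lightning with probability f, and a
# strike instantly burns the whole connected cluster.  Recording cluster
# sizes over many steps yields a heavy-tailed fire-size distribution.
rng = np.random.default_rng(4)
L, p, f, steps = 64, 0.01, 1e-4, 3000
grid = np.zeros((L, L), dtype=bool)      # True = tree
fire_sizes = []

def burn(i, j):
    """Flood-fill the tree cluster containing (i, j); return its size."""
    queue, size = deque([(i, j)]), 0
    grid[i, j] = False
    while queue:
        x, y = queue.popleft()
        size += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L   # periodic boundaries
            if grid[nx, ny]:
                grid[nx, ny] = False
                queue.append((nx, ny))
    return size

for _ in range(steps):
    grid |= (~grid) & (rng.random((L, L)) < p)          # tree growth
    strikes = np.argwhere(grid & (rng.random((L, L)) < f))
    for i, j in strikes:
        if grid[i, j]:                                   # may already have burned
            fire_sizes.append(burn(i, j))

print(len(fire_sizes), max(fire_sizes))   # many fires, a few very large ones
```

No parameter is tuned to a critical point by hand: the separation of timescales between slow growth and rare lightning drives the system toward criticality on its own, which is the hallmark of self-organized criticality.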


Via june holley