In the interest of propagating language and symbols that assist in helping non-mathematicians to mediate between two and three dimensions, Geoffrey Dutton created "QTM Comix," a concise, interactive introduction to the topological innovations around hierarchical coordinates, leading up to his development of Quaternary Triangular Mesh.
This is Part 23 of a comprehensive series on New Humanism in architecture by Robert Lamb Hart. The thesis is that light energy is a material in design, both as content and context. "Light invites action," and it "changes the geometry of space."
After tracing innovations that have facilitated greater mastery of environment through managing and manipulating light, there is a fascinating analysis of the uses of various colors and black & white. This palette provides "a range of 'languages' of color that have been most effective... Like verbal languages they appear to evolve over time, becoming enormously complex and powerful."
The article concludes:
"In a built environment, colors are, of course, only one factor in a scene, but again experience has proven colorists in our culture able – on their own – to stimulate, or set the stage for such imagined sensations as: warm or cool; soft, smooth or harsh; excitement or serenity; harmony or confusion; formal or casual; refined or rustic; dignity or flamboyance; solidarity or conflict; natural or crisp and machine-like; old and prestigious or new and prestigious; and varying political and religious values. In other words, they have identified workable translations from colors to body states, and the expressive languages of color – from the cool white-black-gray of the Bauhaus to the intense rainbow of a Caribbean village – are as inevitable and forceful in a built environment as they are in love and in battle."
Ferrofluids — those mesmerizing drops of magnetic liquid — can perform a number of neat tricks under certain conditions. But what would happen if you placed a blob of the stuff on a hydrophobic surface?
Tom Leckrone's insight:
A beautifully designed experiment revealing the effects of both static and dynamic electromagnetic fields.
As a student of Buckminster Fuller's Tensegrity, I was not surprised to see these droplets resolve into six equilateral triangles in hexagonal formation.
Excerpt from paper's Abstract: "The droplets self-assemble under a static external magnetic field into simple patterns that can be switched to complicated dynamic dissipative structures by applying a time-varying magnetic field. The transition between the static and dynamic patterns involves kinetic trapping and shows complexity that can be directly visualized."
I believe that this approach has the potential to accelerate our understanding of self-tuning systems and self-optimizing networks.
A team of researchers from Cologne, Munich and Dresden has managed to create artificial magnetic monopoles. To do this, the scientists merged tiny magnetic whirls, so-called skyrmions.
Tom Leckrone's insight:
Paul Dirac showed that if magnetic monopoles exist, his "quantization requirement" follows (that is, all charges must be integer multiples of the elementary charge). The applications for sheets of these skyrmions (magnetic whirls) seem limitless. The article focuses on computer hardware applications, but I believe that layers of monopoles could eventually form a digital membrane between humans and machines.
I offer these two excerpts as evidence that new mathematical applications offer individuals and institutional actors novel and transformative approaches to the most daunting problems of our rapidly digitizing world. "Many of these topics pose new and challenging mathematical problems, far beyond the scope of tools used in traditional subjects such as general equilibrium theory and efficient market hypothesis." The ability to lift observations, relationships, and decisionmaking above/beyond the linear is a game changer, and it is now within reach. New mathematical approaches to "agent-based models, radical uncertainty... dynamical systems, nonlinearity, network science, complexity theory, game theory, and other mathematical techniques... extend far beyond the domain of traditional tools used in economics."
And by the way, employees and customers may interact with these tools in the field, in streaming digital!
Creating a Secure 'Sandbox' on Employee Devices. Companies are tackling questions of security on worker mobile phones and tablets by creating special compartments in the devices.
Tom Leckrone's insight:
BYOD is here to stay. Digital handheld devices are everywhere. Fortunately for employers, they have tremendous capacity for enhancing both productivity and work/life balance.
Creating a "secure sandbox" to isolate work activity within the device is the first step in creating healthy enterprise digital workspaces. Designing this workspace in a manner that promotes state-of-the-art practices, including intuitive relationship building, deserves closer analysis. While the first instinct may be to keep personal devices at arm's length from corporate affairs, such an approach denies the potential for richer employer-employee engagement. Video cameras, collection of biometric information, myriad communication channels streaming 24/7, social networks, and styles of data mining certainly create new forms of exposure. But, properly managed within an integrated digital enterprise platform, the opportunities for enhanced productivity and engagement substantially outweigh the associated risks.
A properly designed digital sandbox will provide a distinctive look and feel that will convey clear expectations. It will come installed with the full range of tools and applications and allow users to "nominate" other useful tools. Each user will have a customized dashboard that shows calendars, agendas, metrics, contacts, etc. Contact permissions will be set according to concentric relationships, with employees having detailed access to closest peers and customized access to other stakeholders according to unit and greater community objectives.
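The concentric-permission idea above can be sketched in code. This is purely illustrative: the tier names, dashboard fields, and classes below are hypothetical, not drawn from any existing platform.

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    # Map from contact name to relationship tier (innermost tier = "peer").
    contacts: dict = field(default_factory=dict)

    def visible_fields(self, contact: str) -> set:
        """Return which dashboard fields a contact may see, based on
        how close their concentric relationship tier is."""
        tier = self.contacts.get(contact, "community")  # unknowns default outward
        detail = {
            "peer": {"calendar", "agenda", "metrics", "contacts"},
            "unit": {"calendar", "agenda"},
            "stakeholder": {"calendar"},
            "community": set(),
        }
        return detail[tier]

alice = Employee("Alice", contacts={"Bob": "peer", "Carol": "unit"})
print(alice.visible_fields("Bob"))   # full detail for a closest peer
print(alice.visible_fields("Dave"))  # unknown contacts see nothing
```

The point of the sketch is that access detail falls off with relational distance, rather than being a flat allow/deny list.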
These digital sandboxes are easily arranged to form a Digital Organizational Chart, which supports mapping of corporate processes, hierarchies, objectives, mission, vision, etc. The facet (or "screen") of the digital workspace involving process improvement, for example, will quickly focus attention on the sentinel event/metric/practice in question. It will elicit a response from the employee and collate/communicate this response along appropriate channels. The capabilities for data sorting and converting data into information (and information into business intelligence) are phenomenal.
A properly installed virtual workspace is a game changer at every stage of the employment relationship, as it:
- allows for powerful communication of corporate objectives;
- provides for enhanced interactivity;
- eases intelligent aggregation of data and information;
- optimizes communications to provide actionable information on a continuous, "just in time" basis;
- supports more equitable and timely incentivization for top performers;
- establishes open and shared feedback loops; and
- efficiently harmonizes task assignment and objective refinement.
In short, the linear and matrixed approaches of the last century have become outmoded by the digital handheld. By installing, promoting, and curating a secure digital sandbox, organizations can situate their employees to collaborate in creating an interactive, "well rounded" organizational structure that can "take on" the Digital Age.
Like small children, scientists are always asking the question 'why?'. One question they've yet to answer is why nature picked quantum physics, in all its weird glory, as a sensible way to behave.
We know that things that follow quantum rules, such as atoms, electrons or the photons that make up light, are full of surprises. They can exist in more than one place at once, for instance, or exist in a shared state where the properties of two particles show what Einstein called "spooky action at a distance", no matter what their physical separation. Because such things have been confirmed in experiments, researchers are confident the theory is right. But it would still be easier to swallow if it could be shown that quantum physics itself sprang from intuitive underlying principles.
One way to approach this problem is to imagine all the theories one could possibly come up with to describe nature, and then work out what principles help to single out quantum physics. A good start is to assume that information follows Einstein's special relativity and cannot travel faster than light. However, this alone isn't enough to define quantum physics as the only way nature might behave. Corsin and Stephanie think they have come across a new useful principle. "We have found a principle that is very good at ruling out other theories," says Corsin.
In short, the principle to be assumed is that if a measurement yields no information, then the system being measured has not been disturbed. Quantum physicists accept that gaining information from quantum systems causes disturbance. Corsin and Stephanie suggest that in a sensible world the reverse should be true, too. If you learn nothing from measuring a system, then you can't have disturbed it.
As is often the case in research, Corsin and Stephanie reached this point having set out to solve an entirely different problem. Corsin was trying to find a general way to describe the effects of measurements on states, a problem that he found impossible to solve. In an attempt to make progress, he wrote down features that a 'sensible' answer should have. This property of information gain versus disturbance was on the list. He then noticed that if he imposed the property as a principle, some theories would fail.
Corsin and Stephanie are keen to point out it's still not the whole answer to the big 'why' question: theories other than quantum physics, including classical physics, are compatible with the principle. But as researchers compile lists of principles that each rule out some theories to reach a set that singles out quantum physics, the principle of information gain versus disturbance seems like a good one to include.
Just after the Big Bang, the Universe's dimensions may have been completely different to the four-dimensional space-time we know and love today.
Shortly after the Big Bang, the Universe possessed only one dimension of space and one dimension of time. It was basically a straight line. As the Universe began to cool, and expanded, this one dimension of space became “wrapped up” in such a way to create two dimensions of space and one of time — a plane, like a sheet of flat paper.
The transition from one to two dimensions of space was calculated by the researchers to occur when the Universe “cooled” to an energy level of 100 TeV (tera-electron volts, a measurement of energy commonly used in particle physics). A period of time after that, the Universe continued to expand and cool until it reached an energy of 1 TeV. At this point, the Universe got promoted to a higher dimension; three dimensions of space and one dimension of time, i.e., the Universe we live in today.
Mureika and Stojkovic think the Universe will eventually be promoted again, to a five-dimensional state, at some point in the future.
Is it possible to appreciate the geometric/polytopal properties of the amplituhedron without delving into the physics that gave rise to it? All the descriptions I've so far encountered assume famil...
Tom Leckrone's insight:
This is an elemental breakdown of the amplituhedron concept. It certainly seems to carry through with Riemann's manifold concept, which assisted Einstein all those decades ago! Going back even further, no one had been able to discern why Archimedes was working so intently on combinatorics. With the amplituhedron, his objective begins to take shape.
This is a small animation I did as an exercise to experiment and explore all the graphical possibilities of representing the idea of the SPHERE, always thinking…
Tom Leckrone's insight:
A beautiful and thought-provoking animation exploring properties of the sphere and evoking unity principles posited by Planck and Buckminster Fuller. For me, this two-and-a-half-minute video was extremely meditative.
In China, a "nail house" is a home whose resident refuses to leave in order to make way for new construction. Builders have to elaborately construct around it, often leaving behind an eyesore so awesome that it's almost a sculpture.
Tom Leckrone's insight:
This caught my attention because it harked back to a classic Bugs Bunny episode. (Included in comments.) I then appreciated the personal political poetry of one person stoically preserving a vision in the face of overwhelming societal pressure. Finally, I was awestruck to realize that these overtly individual (self-centered) acts occurred in China. Here is more on the topic: http://rendezvous.blogs.nytimes.com/2012/12/01/another-nail-house-in-china-gets-hammered/?_r=0
"In 1587, a promising young scholar by the name of Galileo Galilei held Two lectures to the Florentine Academy on the Shape, Location and Size of Dante’s Inferno."
In his two lectures, Galileo countered prior assumptions concerning the geometric architecture of Dante's vision. In doing so, he unearthed Archimedes' approach to geometry that had been forgotten in the intervening 1800 years. Here is Galileo's explication on the volume of the spherical Earth:
"But wanting to know its size in respect to the whole volume of earth and water, we should not just follow the opinion of some who have written about the Inferno, who believe it to occupy the sixth part of the volume, because making the computation according to the methods proved by Archimedes in his book On the Sphere and the Cylinder, we will find that the space of the Inferno occupies a little less than 1/14 part of the whole volume; I say this if that space should extend all the way to the surface of the earth, which it doesn't: on the contrary, the mouth remains covered by a great vault of earth, whose summit is Jerusalem and whose thickness is the eighth part of the radius."
As the author Jean-Marc Lévy-Leblond explores, it seems clear that this work influenced Galileo's later successes. It is fascinating to consider the manner in which the "Renascence" of knowledge became popularized by dynamic figures playing out new concepts in the public sphere. Imagine: an innovative scientist working to flesh out the details and dimensions of Hell as envisaged 300 years prior by The Secular Poet. Culture was certainly churning in Florence!
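Galileo's "a little less than 1/14" is easy to check. The sketch below assumes the Inferno is a cone with its apex at the Earth's center and a half-angle of 30 degrees (a common reading of Manetti's model that Galileo defended; the angle is not stated in the excerpt). Such a cone cuts a spherical sector out of the globe, and a sector's share of the sphere's volume is (1 - cos(theta)) / 2.

```python
from math import cos, pi

# Assumed half-angle of the infernal cone: 30 degrees.
theta = pi / 6

# Spherical sector volume = (2/3) * pi * R^3 * (1 - cos(theta));
# sphere volume = (4/3) * pi * R^3; the R^3 terms cancel in the ratio.
fraction = (1 - cos(theta)) / 2

print(f"Inferno fraction of Earth: {fraction:.4f}")  # about 0.0670
print(f"1/14 for comparison      : {1/14:.4f}")      # about 0.0714
```

The result, roughly 1/15, is indeed "a little less than 1/14," consistent with Galileo's figure before the vault under Jerusalem is subtracted.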
We have come a long way since Einstein said, "You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat."
Controlling a motor without sensors sounds hard, but it can be done. One approach is to run a sympathetic software model of the system.
Tom Leckrone's insight:
A beautifully clear description by Don Morgan of the process of building a sensorless motor, including a description of the sympathetic algorithm as an "observer." There are implications here for processing the noise of Big Data to identify and make sense of patterns and systems.
"algorithmic technique known as field-oriented control...allows us to view the currents that control the motor torque from a static frame associated with the rotor, as opposed to the stator..."
What a breathtakingly elegant approach to the solution!
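The rotor-frame view the excerpt describes is conventionally built from the Clarke and Park transforms. The sketch below is not Don Morgan's code, just a minimal illustration of the coordinate change: balanced sinusoidal phase currents, which oscillate in the stator frame, become constant values when viewed from the rotor.

```python
from math import cos, sin, sqrt, pi

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three-phase stator
    currents -> two orthogonal components in the static stator frame."""
    i_alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2 / 3) * (sqrt(3) / 2) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: rotate the stator-frame components by the rotor
    angle theta, viewing the currents from the rotor's frame."""
    i_d = i_alpha * cos(theta) + i_beta * sin(theta)
    i_q = -i_alpha * sin(theta) + i_beta * cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents at some rotor angle...
theta = 1.1
ia = cos(theta)
ib = cos(theta - 2 * pi / 3)
ic = cos(theta + 2 * pi / 3)

# ...become DC-like quantities in the rotor frame (approximately 1 and 0),
# which is what makes the torque-producing current easy to regulate.
i_d, i_q = park(*clarke(ia, ib, ic), theta)
```

In a sensorless design, theta itself comes from the software "observer" rather than a physical encoder.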
"The Kalman filter provides a means for inferring the missing information from indirect (and noisy) measurements. It can be shown that no other linear function of the inputs and outputs can give a smaller mean square estimation error--especially if all noises (random variables) are Gaussian noise processes. The optimal estimate from noisy data can be obtained by the method of least squares."
Method of Least Squares:
"In order to control a dynamic system, you need to know what it is doing. When we know what it is doing and we can describe it, we can model it. This usually involves describing the physical plant or system using one or more differential equations. These differential equations form the transfer function of the system.
In many cases, it is possible to derive a closed form for our description. In this case, however, we wish to use this model to estimate, predict, or smooth. So it is to our advantage to create a model that we can use iteratively."
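The iterative model the excerpt describes can be sketched as a minimal scalar Kalman filter. The numbers below (a constant true state, Gaussian measurement noise, made-up variances) are illustrative assumptions, not taken from the article.

```python
import random

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Minimal scalar Kalman filter: iteratively fuse noisy measurements
    of a (nearly) constant state into a least-squares optimal estimate."""
    x, p = 0.0, 1.0  # initial state estimate and its error variance
    estimates = []
    for z in measurements:
        p += process_var            # predict: uncertainty grows slightly
        k = p / (p + meas_var)      # Kalman gain: trust in this measurement
        x = x + k * (z - x)         # update: blend prediction and measurement
        p = (1 - k) * p             # shrink the error variance
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 1.25
noisy = [true_value + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(noisy)
print(round(est[-1], 2))  # converges near the true value of 1.25
```

Note how each step only needs the previous estimate and the new measurement, which is exactly the iterative usability the passage emphasizes.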
Such a compelling and promising response to the impending digital deluge! Enlisting algorithms to refract/blend/weight data streams opens up new dimensions: we are no longer linear but are instantly omnidirectional. Self-tuning sympathetic algorithms allow for the parsing/balancing of multiple vectors simultaneously. The implications for pattern identification are formidable, but my particular interest is organizational design. Customized and artfully interlinked dashboards, and the sorted/curated business intelligence they foment, will drive dynamic reorganization via digitization.
When it comes to a more holistic view of your corporate organism, BELIEVE THE HYP! (The hypotenuse, that is)! @SemprePhi
The bitcoin network speed estimate on bitcoinwatch.com passed 1 exaFLOPS (1,000 petaFLOPS) this week - over 8 times the combined speed of the top 500 supercomputers.
The bitcoin network hashrate estimate on bitcoinwatch.com passed 1 exaFLOPS (1,000 petaFLOPS) this week – over 8 times the combined speed of the top 500 supercomputers. Experts will be quick to point out that this estimate is flawed, since no FLOPS are actually used in bitcoin mining. FLOPS stands for FLoating-point Operations Per Second, and is frequently used as a standard to measure computer speed. Bitcoin mining uses an integer calculation and almost no floating-point operations, so converting bitcoin network speed to this standard is somewhat clumsy.
The FLOPS estimate is based on the opportunity cost of computers using their hardware for mining instead of other applications. Miners are using their graphics cards to perform hashes instead of other FLOPS-based distributed computing. Therefore, a conversion rate of 1 hash = 12.7K FLOP is used to estimate what this hardware could be doing.
The estimate was created in 2011, before the production of ASIC hardware that now dominates the network. ASICS are custom designed chips that can only perform bitcoin mining calculations. The exaFLOPS estimate breaks down with ASICs, because they are not capable of floating-point operations, and therefore there is no opportunity cost associated with their use.
Interestingly, the estimate may still be useful for estimating how well other supercomputers and distributed networking projects would be able to mine bitcoins. Their speed is measured in FLOPS, but they also have the capability of performing the integer operations used in hashing. What would happen if the top 10 supercomputers all switched to bitcoin mining? How much would that affect the network? Let's reverse the equation, and say that they would receive 1 hash for every 12.7k FLOP.
The fastest computer, Sequoia, would measure at about 1.6% of the bitcoin network. The top 10 supercomputers' combined speed of 48 petaFLOPS is roughly equivalent to 5% of the bitcoin network. In fact, the top 500 supercomputers have a combined speed of only 12% of the bitcoin network.
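The reversed conversion can be sketched numerically. The 12.7k-FLOP-per-hash rate is the article's 2011 estimate; Sequoia's speed below is an assumed Top500-era figure, not taken from the article.

```python
# The article's 2011 conversion: 1 hash is valued at ~12.7k floating-point ops.
FLOP_PER_HASH = 12_700

# Approximate figures, for illustration only.
network_flops = 1e18     # ~1 exaFLOPS equivalent for the bitcoin network
sequoia_flops = 16.3e15  # Sequoia, roughly 16 petaFLOPS (assumed figure)
top10_flops = 48e15      # top 10 supercomputers combined

def hashes_per_second(flops):
    """Reverse the equation: the hashrate a FLOPS-rated machine could add."""
    return flops / FLOP_PER_HASH

def share_of_network(flops):
    """That machine's share of the ~1 exaFLOPS-equivalent network."""
    return flops / network_flops

print(f"Sequoia: {share_of_network(sequoia_flops):.1%}")  # about 1.6%
print(f"Top 10 : {share_of_network(top10_flops):.1%}")    # about 4.8%, i.e. roughly 5%
```

The percentages line up with the article's claims, which is all the rough conversion is meant to support.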
To actually use these computers for mining, it would take more than just installing standard mining software. But let's be honest: these computers have better things to work on, like curing cancer, solving global warming, and monitoring banking transactions.
We are surrounded by tiny, intelligent devices that capture data about how we live and what we do. Soon we'll be able to choreograph them to respond to our needs, solve our problems, and even save our lives.
Imagine a factory where every machine, every room, feeds back information to solve problems on the production line. Imagine a hotel room (like the ones at the Aria in Las Vegas) where the lights, the stereo, and the window shade are not just controlled from a central station but adjust to your preferences before you even walk in. Think of a gym where the machines know your workout as soon as you arrive, or a medical device that can point toward the closest defibrillator when you have a heart attack. Consider a hybrid car—like the new Ford Fusion—that can maximize energy efficiency by drawing down the battery as it nears a charging station.
There are few more appropriate guides to this impending future than Hawkinson, whose DC-based startup, SmartThings, has built what’s arguably the most advanced hub to tie connected objects together. At his house, more than 200 objects, from the garage door to the coffeemaker to his daughter’s trampoline, are all connected to his SmartThings system. His office can automatically text his wife when he leaves and tell his home A/C system to start powering up.
In this future, the intelligence once locked in our devices now flows into the universe of physical objects. Technologists have struggled to name this emerging phenomenon. Some have called it the Internet of Things or the Internet of Everything or the Industrial Internet—despite the fact that most of these devices aren’t actually on the Internet directly but instead communicate through simple wireless protocols. Other observers, paying homage to the stripped-down tech embedded in so many smart devices, are calling it the Sensor Revolution.
But here’s a better way to think about what we’re building: It’s the Programmable World. After all, what’s remarkable about this future isn’t the sensors, nor is it that all our sensors and objects and devices are linked together. It’s the fact that once we get enough of these objects onto our networks, they’re no longer one-off novelties or data sources but instead become a coherent system, a vast ensemble that can be choreographed, a body that can dance. Really, it’s the opposite of an “Internet,” a term that even today—in the era of the cloud and the app and the walled garden—connotes a peer-to-peer system in which each node is equally empowered. By contrast, these connected objects will act more like a swarm of drones, a distributed legion of bots, far-flung and sometimes even hidden from view but nevertheless coordinated as if they were a single giant machine.
For the Programmable World to reach its full potential, we need to pass through three stages. The first is simply the act of getting more devices onto the network—more sensors, more processors in everyday objects, more wireless hookups to extract data from the processors that already exist. The second is to make those devices rely on one another, coordinating their actions to carry out simple tasks without any human intervention. The third and final stage, once connected things become ubiquitous, is to understand them as a system to be programmed, a bona fide platform that can run software in much the same manner that a computer or smartphone can.
Once we get there, that system will transform the world of everyday objects into a designable environment, a playground for coders and engineers. It will change the whole way we think about the division between the virtual and the physical. This might sound like a scary encroachment of technology, but the Programmable World could actually let us put more of our gadgets away, automating activities we normally do by hand and putting intelligence from the cloud into everything we touch.
Excerpt: "The third and final stage, once connected things become ubiquitous, is to understand them as a system to be programmed, a bona fide platform that can run software in much the same manner that a computer or smartphone can."