omnia mea mecum fero
I carry all that is mine with me
Curated by pa3geo
Scooped by pa3geo

Ten technologies of 2015 that will define 2016

MIT's internationally renowned magazine Technology Review presents the ten technologies that emerged in 2015 and are expected to mature further…
Rescooped by pa3geo from Amazing Science

Columbia Engineers Make World’s Smallest FM Radio Transmitter


Placing a sheet of atomically-thin graphene into a feedback circuit causes spontaneous self-oscillation that can be tuned to create frequency modulated (FM) signals.

 

Graphene, a single atomic layer of carbon, is the strongest material known to man, and also has electrical properties superior to the silicon used to make the chips found in modern electronics. The combination of these properties makes graphene an ideal material for nanoelectromechanical systems (NEMS), which are scaled-down versions of the microelectromechanical systems (MEMS) used widely for sensing of vibration and acceleration. For example, Hone explains, MEMS sensors figure out how your smartphone or tablet is tilted to rotate the screen.
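As a rough illustration of Hone's example (ours, not the paper's), here is how an operating system might turn a raw reading from a 3-axis MEMS accelerometer into a screen-rotation decision. The function names and the 45-degree threshold are hypothetical:

```python
import math

def tilt_angles(ax: float, ay: float, az: float) -> tuple:
    """Estimate roll and pitch (in degrees) from a static 3-axis
    accelerometer reading, with (ax, ay, az) in units of g."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def screen_orientation(ax: float, ay: float, az: float) -> str:
    """Crude portrait/landscape decision of the kind a phone OS makes."""
    _, pitch = tilt_angles(ax, ay, az)
    return "landscape" if abs(pitch) > 45 else "portrait"

print(screen_orientation(0.0, 0.0, 1.0))   # flat on a table -> portrait
print(screen_orientation(0.98, 0.0, 0.2))  # tipped on its side -> landscape
```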

 

In this new study, the team took advantage of graphene’s mechanical ‘stretchability’ to tune the output frequency of their custom oscillator, creating a nanomechanical version of an electronic component known as a voltage controlled oscillator (VCO). With a VCO, explains Hone, it is easy to generate a frequency-modulated (FM) signal, exactly what is used for FM radio broadcasting. The team built a graphene NEMS whose frequency was about 100 megahertz, which lies right in the middle of the FM radio band (87.7 to 108 MHz). They used low-frequency musical signals (both pure tones and songs from an iPhone) to modulate the 100 MHz carrier signal from the graphene, and then retrieved the musical signals again using an ordinary FM radio receiver. “This device is by far the smallest system that can create such FM signals,” says Hone.
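To make the signal chain concrete, here is a minimal numerical sketch of frequency modulation itself: a low-frequency audio tone shifts the instantaneous frequency of a 100 MHz carrier, which is exactly the operation the graphene VCO performs mechanically. The sample rate and deviation constants below are illustrative assumptions, not values from the study:

```python
import numpy as np

FC = 100e6   # carrier frequency, Hz (middle of the FM band, as above)
FS = 400e6   # sample rate, Hz; must exceed 2*FC for this illustration
KF = 75e3    # peak frequency deviation, Hz (a broadcast-FM-like value)

def fm_modulate(audio: np.ndarray) -> np.ndarray:
    """FM: s(t) = cos(2*pi*FC*t + 2*pi*KF * integral of m(t) dt).
    `audio` is the normalised modulating signal m(t), sampled at FS."""
    t = np.arange(len(audio)) / FS
    phase = 2 * np.pi * KF * np.cumsum(audio) / FS  # running integral
    return np.cos(2 * np.pi * FC * t + phase)

# 0.2 ms of a 440 Hz "musical" tone as the modulating signal.
t = np.arange(int(0.0002 * FS)) / FS
signal = fm_modulate(np.sin(2 * np.pi * 440 * t))
print(signal.shape)  # (80000,)
```

An ordinary FM receiver recovers the audio by tracking that instantaneous frequency, which is how the team played their iPhone songs back through a standard radio.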

 

While graphene NEMS will not be used to replace conventional radio transmitters, they have many applications in wireless signal processing. Explains Shepard, “Due to the continuous shrinking of electrical circuits known as ‘Moore’s Law’, today’s cell phones have more computing power than systems that used to occupy entire rooms. However, some types of devices, particularly those involved in creating and processing radio-frequency signals, are much harder to miniaturize. These ‘off-chip’ components take up a lot of space and electrical power. In addition, most of these components cannot be easily tuned in frequency, requiring multiple copies to cover the range of frequencies used for wireless communication.” 

 

Graphene NEMS can address both problems:  they are very compact and easily integrated with other types of electronics, and their frequency can be tuned over a wide range because of graphene’s tremendous mechanical strength.

 

“There is a long way to go toward actual applications in this area,” notes Hone, “but this work is an important first step. We are excited to have demonstrated successfully how this wonder material can be used to achieve a practical technological advancement—something particularly rewarding to us as engineers.”


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

Magnetic switches could use 10,000 times less power than current silicon transistors

New research from UC Berkeley provides a proof of concept for a magnetic switch that could make computers thousands of times more energy-efficient, and provide some amazing new abilities, too.

 

Computer engineering, but in particular mobile computer engineering, is all about playing a zero-sum game with yourself. Power and efficiency are constantly undercutting one another, creating confounding incentives for designers looking to set records for both talk time and processing speed. At this point it seems obvious that both speed and battery life are limited by the old process of laying down increasingly dense little fields of silicon transistors; whether it’s a quantum computer or a graphene chip, getting more computing power for less electrical power will require a fundamental shift in how we build computers.

 

A new study from UC Berkeley hopes to provide the basis for just such an advance, laying out an attempt at a silicon replacement that the researchers say uses up to 10,000 times less power than prior solutions. They have designed a system that uses magnetic switches in place of transistors, negating the need for a constant electric current. The idea of a magnetic transistor has been discussed since the early 1990s, but its downfall has always been the need to create a strong magnetic field to orient the magnets for easy switching; all or most of the power saved by the magnets is spent creating the field needed to actually use them.

 

This new study, published in Nature, uses a wire made of tantalum, a somewhat rare element used to make capacitors in everything from Blu-Ray players to mobile phones. Tantalum is a good, light-weight conductor, but it has one particularly odd property that’s made it uniquely useful for magnetic applications: when a current flows through the tantalum wire, all clockwise-spinning electrons migrate to one side of the wire, all counter-clockwise-spinning to the other. The physical movement of these electrons creates a polarization in the system — the same sort of polarization prior researchers have had to create with an expensive magnetic field.

 

If this approach proves successful and practical, we could begin to capitalize on some of the shared benefits of all magnetic computing strategies, the most glaring of which is that magnetic switches do not require a constant current to maintain their state. Much like the pigment in an e-ink display, a magnetic transistor will maintain its assigned state until actively flipped. This means that a theoretical magnetic processor could use far less energy than semiconducting silicon ones, accruing energy savings whenever it is not actively doing work. And since tantalum is a fairly well-known material, its incorporation into the manufacturing process shouldn't prove too difficult.


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

The Futurist magazine’s top 10 forecasts for 2014 and beyond — and Why They Might Not Come True



Every year, the editors of the Futurist magazine identify the most provocative forecasts and statements about the future that we've published recently and put them into an annual report called "Outlook." It's a sprawling exploration of what the future looks like at a particular moment in time. To accompany the report, we draft a list of our top 10 favorite predictions from the magazine's previous 12 months. What are the criteria for admission into the top 10? The forecast should be interesting, relatively high impact, and rising in likelihood. In other words, it's a bit subjective.

 

There are surely better methods for evaluating statements about the future, but not for our purposes. You see, we aren’t actually interested in attempting to tell our readers what will happen so much as provoking a better discussion about what can happen—and what futures can be avoided, if we discover we’re heading in an unsavory direction.

 

The future isn’t a destination. But the problem with too many conversations about the future, especially those involving futurists, is that predictions tend to take on unmitigated certainty, sounding like GPS directions. When you reach the Singularity, turn left—that sort of thing. In reality, it’s more like wandering around a city, deciding spur of the moment what road to take.


Via Szabolcs Kósa, Margarida Sá Costa, Dr. Stefan Gruenwald
Say Keng Lee's curator insight, October 7, 2013 5:06 AM

Fascinating forecasts!

Rescooped by pa3geo from Amazing Science

Mapping the impossible: Matterhorn mapped by a fleet of drones in just under 6 hours

The Matterhorn, which juts out a full kilometre above the surrounding Swiss Alps, dominates the local skyline and has challenged countless mountaineers since it was first scaled in 1865.

 

Now this iconic peak has been mapped in unprecedented detail by a fleet of autonomous, fixed-wing drones, flung into the sky from the summit by their makers. What's more, the entire process took just 6 hours.

 

The mapping, which was unveiled at the Drones and Aerial Robotics Conference in New York City last weekend, was carried out by unmanned aerial vehicle (UAV) company senseFly and aerial photography company Pix4D.

 

Three eBee drones were launched from the top of the mountain, skimming their way down 100 metres from the face, capturing points just 20 centimetres apart. When they reached the bottom, a second team intercepted the drones and relaunched them for further mapping.

 

Speaking to Mapbox, the mapping company that built the 3D point cloud of the mountain once the drones had landed, senseFly's Adam Klaptocz said: "Such a combination of high altitudes, steep rocky terrain and sheer size of dataset has simply not been done before with drones; we wanted to show that it was possible."

 

A video crew follows senseFly's (http://www.sensefly.com/) team of engineers as they mark a milestone in drone surveying, using eBee mini-drones to map the epic Matterhorn and construct a 3D model of "the most beautiful mountain".

The mission involved the coordination of several teams with multiple eBee drones taking over 2200 images in 11 flights, all within a few hours of a sunny alpine morning. The results are stunning: a high-definition 3D point-cloud made of 300 million points covering an area of over 2800 hectares with an average resolution of 20 cm. A special thanks to our partners Pix4D (http://www.pix4d.com) for the creation of the 3D model, Drone Adventures (http://www.droneadventures.org) for mission coordination and MapBox (http://www.mapbox.com) for online visualisation.
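A quick back-of-the-envelope check of those figures (our arithmetic, not senseFly's): 300 million points over 2,800 hectares works out to roughly 10 points per square metre of map footprint, a mean spacing of about 30 cm, which is broadly consistent with the quoted 20 cm average resolution once the steep faces add surface area beyond the flat footprint.

```python
# Rough consistency check on the figures quoted above.
points = 300e6          # points in the cloud
area_m2 = 2800 * 1e4    # 2,800 hectares in square metres

density = points / area_m2        # points per square metre of footprint
spacing = (1 / density) ** 0.5    # mean point spacing in metres

print(f"{density:.1f} pts/m^2, ~{spacing * 100:.0f} cm spacing")
# -> 10.7 pts/m^2, ~31 cm spacing
```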

senseFly is a Parrot company (http://parrot.com/)


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from visual data

6 Illuminating Big Data Infographics


Is Big Data still a big mystery to you? 

In recent years, the volume of information coming into companies has exploded, and many IT organizations now find themselves dealing with extremely large data sets.

IT leaders are rethinking many aspects of how they manage and deliver information, from investments in infrastructure and analytics tools to new policies for organizing and accessing data so they can deliver more of it, faster. They are concerned that if they don't have the right tools and architectures to deal with all that information, then big data can be a big problem. Check out these infographics on Big Data to see the impact...


Via Lauren Moss
Aurélia-Claire Jaeger's curator insight, January 21, 2013 2:28 AM

And what's more, it's beautiful!

Rescooped by pa3geo from visual data

How Google Builds Its Maps—and What It Means for the Future of Everything

An exclusive look inside Ground Truth, the secretive program to build the world's most accurate maps.

 

Behind every Google Map, there is a much more complex map that's the key to queries but hidden from view. The deep map contains the logic of places: their no-left-turns and freeway on-ramps, speed limits and traffic conditions. This is the data that Google uses to navigate you from point A to point B.
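The article doesn't describe Google's algorithms, but the classic way to navigate such a logic-laden graph is a shortest-path search. A minimal sketch, with hypothetical intersections and travel times:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a directed road graph. Rules like 'no left turn'
    can be encoded by keying nodes as (intersection, incoming_road)
    pairs and simply omitting the forbidden transitions."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# Hypothetical intersections; edge weights are travel times in seconds.
roads = {
    "A": [("B", 30), ("C", 90)],
    "B": [("C", 20)],        # the A->B->C dogleg beats A->C directly
}
print(shortest_path(roads, "A", "C"))  # ['A', 'B', 'C']
```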

Last week, Google showed me the internal map and demonstrated how it was built, the first time the company has let anyone see how the project it calls GT, or "Ground Truth," actually works.

Google opened up at a key moment in its evolution. The company began as an online search company, but then the mobile world exploded. Now, where you're searching from has become almost as important as what you're searching for. Google responded by creating an operating system, brand, and ecosystem that has become the only significant rival to Apple's iOS.

And for good reason. If Google's mission is to organize all the world's information, the most important challenge -- far larger than indexing the web -- is to take the world's physical information and make it accessible and useful...

 

Read the entire article for a fascinating look at how Google utilizes mapping systems, geo data, mobile technology, and visual representation to manage massive amounts of data from varying sources, including one of the most important to the success of Google Maps: human intelligence.


Via Lauren Moss
Rescooped by pa3geo from Science News

The Invisible Bicycle Helmet

“If people say it’s impossible we have to prove them wrong.” Design students Anna and Terese took on a giant challenge as an exam project.

Via Sakis Koukouvis
Rescooped by pa3geo from Science News

Technology: Can You Disconnect from the ‘Matrix’?


Are you a master of your technology, using it as a tool to enhance the quality of your life? Or are you so addicted to your technology that it actually hurts the quality of your life? There is a growing body of evidence indicating that overuse of technology has the same neurochemical effect—a shot of dopamine, our bodies’ way of rewarding us—as addictions to alcohol, drugs, sex, and gambling.


Via Sakis Koukouvis
Rescooped by pa3geo from Science News

[VIDEO] The Future of the Book.

Meet Nelson, Coupland, and Alice — the faces of tomorrow’s book. Watch global design and innovation consultancy IDEO’s vision for the future of the book. What new experiences might be created by linking diverse discussions, what additional value could be created by connecting readers to one another, and what innovative ways might we find to tell our favorite stories and build community around books?


Via Sakis Koukouvis
Rescooped by pa3geo from Science News

The Evolution of Technology


Via Sakis Koukouvis
Rescooped by pa3geo from Science News

MIT Students Create The Future With An iPad And A Glove


The video the group has released shows some pretty fancy stuff: drawing objects in 3D in real time, and then manipulating them in collaboration with others. There’s even a slick Minority Report-style interface, with researchers moving red and blue rectangles around in the virtual space they’ve created on the iPad.


Via Tom George, ABroaderView, Sakis Koukouvis
jonathan.r.swift's comment May 25, 2012 7:19 AM
If you look here: http://leapmotion.com/ then this suddenly seems embarrassingly out of date!
Sakis Koukouvis's comment, May 25, 2012 7:37 AM
Yes, jonathan.r.swift, I know.
It has been in Science News since 23 May: http://www.scoop.it/t/science-news/p/1829881828/video-leap-motion-creates-finger-happy-gesture-control
Rescooped by pa3geo from Science News

BBC - Future


 

More on FUTURE: http://www.scoop.it/t/science-news?tag=future

 


Via Sakis Koukouvis
Rescooped by pa3geo from Amazing Science

What will NASA be doing with its new quantum computer?

Earlier this year, NASA, in partnership with Google, acquired the world's largest quantum computer. But just what does the space agency plan to do with a device with such revolutionary potential?

 

NASA is currently looking at three very basic applications, explains Rupak Biswas, deputy director of the Exploration Technology Directorate at NASA's Ames Research Center, including one that would serve as a day-planner for busy astronauts who are up in orbit.

 

"If you're trying to schedule or plan a whole bunch of tasks on the International Space Station, you can do certain tasks only if certain preconditions are met," he explains. "And after you perform the task you end up in another state where you may or may not be able to perform another task. So that's considered a hard optimization problem that a quantum system could potentially solve."

 

They're also looking to schedule jobs on supercomputers. In fact, NASA Ames is responsible for running the agency's primary supercomputing facility. At any instant they've got hundreds of individual jobs running on a supercomputer, while many others are waiting for their turn. A very difficult scenario would involve a job waiting to run — one that requires, say, 500 nodes — on a supercomputer with 1,000 nodes available.

 

"Which 500 of these 1,000 nodes should we pick to run the job?," he asks. "It's a very difficult scheduling problem."

 

Another important application is the Kepler search for exoplanets. NASA astronomers use their various telescopes to look at light curves to understand whether any noticeable dimming represents a potential exoplanet as it moves across its host star. This is a massive search problem — one that D-Wave could conceivably help with.
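The article gives no code, but the core of that search is easy to caricature: scan a star's brightness series for statistically significant dips. A toy version of such a detector, run on simulated data:

```python
import numpy as np

def find_transit_dips(flux: np.ndarray, n_sigma: float = 3.0) -> np.ndarray:
    """Flag samples in a normalised light curve that dip significantly
    below the star's baseline brightness -- candidate transit points.
    Real pipelines fold on trial periods; this is the crudest version."""
    baseline = np.median(flux)
    noise = 1.4826 * np.median(np.abs(flux - baseline))  # robust sigma
    return np.where(flux < baseline - n_sigma * noise)[0]

rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0, 1e-4, 2000)   # a quiet star
flux[800:820] -= 0.001                   # a 0.1% transit-like dip
print(find_transit_dips(flux))           # mostly indices in 800..819
```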

 

"These are the types of applications that we're trying to run," says Biswas. "We're doing it on our D-Wave system, which is the largest in the world, but it's still not large enough to solve the really hard real world problems. But by tackling the smaller problems, we can extrapolate to how a larger problem could be solved on a larger system." "But each of these images may be at a certain wavelength, and you may not get all the information from the image," he explains. "One of the challenges there is what's called data fusion, where you try to get multiple images and somehow fuse them in some smart way so that you can garner information from a fused image that you couldn't get from a single image.

 

And at NASA's Ames Research Center in Silicon Valley, Biswas's team runs the supercomputers that power a significant portion of NASA's endeavors, both public and commercial.

 

"We see quantum computing as a natural extension of our supercomputing efforts," he told me. "In fact, our current belief is that the D-WAVE system and other quantum computers that might come out in the next few years are all going to behave as attached processors to classical silicon computers."

 

Which is actually quite amazing. So in the future, when a user wants to solve a large problem, they would interact with their usual computer, while certain aspects would be handed over to the quantum computer. After performing the calculation, like an optimization problem, it would send the solution back to the traditional silicon-based machine. It'll be like putting your desktop PC on steroids.

 

"Just so we're clear, the D-Wave system is just one of many ways to leverage the effects of quantum physics," he told me. "But in order to use any quantum system, the first thing you need to have is a problem mapped in QUBO form." A QUBO form, which stands for a Quadratic Unconstrained Binary Optimization form, is a mathematical representation of any optimization problem that needs to be solved. At this time — and as far as we know — every single quantum computer requires that the input be in QUBO form.

 

"And that's a serious problem," says Biswas, "because there's no known recipe to devise a problem and then map it into QUBO form. But once we get a QUBO form — which is a graph representation of the problem — we can embed this onto the architecture of the D-Wave machine."

 

The D-Wave processor runs 512 qubits arranged in 64 unit cells, each made up of 8 qubits. Within a unit cell the qubits form a bipartite graph: four on the left and four on the right, with each qubit on the left connected to each qubit on the right, and vice versa. But it is not a fully connected graph.
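That layout (D-Wave's "Chimera" graph) is easy to generate programmatically. A sketch of the 8 x 8 arrangement of K4,4 cells just described, using one common indexing convention:

```python
def chimera_edges(m: int = 8):
    """Edge list of an m x m grid of K4,4 unit cells -- the 512-qubit
    layout described above when m = 8. Qubit id = (row*m + col)*8 + k,
    with k in 0..3 the 'left' half of a cell and k in 4..7 the 'right'."""
    def qid(r, c, k):
        return (r * m + c) * 8 + k
    edges = []
    for r in range(m):
        for c in range(m):
            # couplings inside a cell are bipartite: K4,4, not K8
            for a in range(4):
                for b in range(4, 8):
                    edges.append((qid(r, c, a), qid(r, c, b)))
            if r + 1 < m:                      # 'left' qubits couple
                for k in range(4):             # to the next row
                    edges.append((qid(r, c, k), qid(r + 1, c, k)))
            if c + 1 < m:                      # 'right' qubits couple
                for k in range(4, 8):          # to the next column
                    edges.append((qid(r, c, k), qid(r, c + 1, k)))
    return edges

print(len(chimera_edges()))  # 1472 couplers for 512 qubits
```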

 

"So what happens therefore, is after you take your problem in QUBO form and you try to embed it into the D-WAVE machine it's not a universal quantum computer. It's not like you have computer keyboard and you can just tell the machine what to do." Essentially, the machine becomes dedicated to the task outlined by the QUBO form — a limitation that could impact scalability.

 

 


Via Dr. Stefan Gruenwald
Scott Gipson's curator insight, December 2, 2013 1:04 AM

NASA partnered with Google earlier this year to acquire the world’s largest quantum computer. Quantum computers are different from digital computers based on transistors. While digital computers require data to be encoded into binary digits (bits), quantum computation uses quantum properties to represent data and perform operations on those data. This article discusses the revolutionary potential of the device.

Quantum systems have the ability to irrevocably change the way we go about computation. Unlike traditional silicon-based computers, these systems tap into the eerie effects of quantum mechanics (namely superposition, entanglement, and parallelism), enabling them to mull over all possible solutions to a problem in a single instant. According to physicist David Deutsch, a quantum system can work on a million computations at once while a standard desktop PC works on just one. These computers will help us find the most convenient solution to a complex problem. As such, they're poised to revolutionize the way we go about data analysis and optimization, in realms such as air traffic control, courier routing, weather prediction, database querying, and hacking tough encryption schemes.

"Quantum computing has generated a lot of interest recently, particularly the ways in which the D-Wave quantum computer can be used to solve interesting problems. We've had the machine operational since September, and we felt the time is right to give the public a little bit of background on what we've been doing,” said Dr. Rupak Biswas, deputy director of the Exploration Technology Directorate at NASA's Ames Research Center in Silicon Valley.

Biswas's team is currently looking at three very basic applications, including one that would serve as a day-planner for busy astronauts who are up in orbit. "If you're trying to schedule or plan a whole bunch of tasks on the International Space Station, you can do certain tasks only if certain preconditions are met," he explains. "And after you perform the task you end up in another state where you may or may not be able to perform another task. So that's considered a hard optimization problem that a quantum system could potentially solve."

NASA is also heavily involved in developing the next generation of air traffic control systems. These involve not only commercial flights, but also cargo and unmanned flights. Currently, much of this is done in a consolidated fashion by air traffic control. But at later stages, when more distributed control is required and highly complex variables like weather need to be taken into account, quantum computing could certainly help.

This article ties into Chapter 9: Business-to-Business Relations in our Case Studies textbook. “Tactics in business-to-business relations and partner relationship management help companies build productive relationships with other companies” (Guth & Marsh, pg. 194). Considering what I’ve read in this article, the relationship between the two companies seems quite productive so far.

Rescooped by pa3geo from Amazing Science

Harvard scientists invent the synaptic transistor that learns while it computes


It doesn't take a Watson to realize that even the world's best supercomputers are staggeringly inefficient and energy-intensive machines.

 

Our brains have upwards of 86 billion neurons, connected by synapses that not only form myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

 

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

 

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer.

 

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS.

 

“Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

 

The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.

 

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”
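The device does this in analog hardware; purely as a software caricature of the behaviour Shi describes — a connection whose weight grows with each paired firing, and grows more when the spikes come faster — a sketch might look like this (all constants invented):

```python
class Synapse:
    """Toy rate-dependent plasticity: every paired firing strengthens
    the connection, and faster spiking strengthens it more. The numbers
    are illustrative only, not measurements from the device."""
    def __init__(self, weight: float = 0.1):
        self.weight = weight
        self.last_spike = None

    def on_paired_spike(self, t: float, rate_gain: float = 0.05):
        if self.last_spike is not None:
            interval = t - self.last_spike
            # shorter interval (higher spike rate) -> bigger potentiation
            self.weight += rate_gain / (1.0 + interval)
        self.last_spike = t
        self.weight = min(self.weight, 1.0)   # saturate, like a device

s = Synapse()
for t in [0.0, 0.9, 1.8, 2.0, 2.1]:          # the spiking speeds up
    s.on_paired_spike(t)
    print(f"t={t:.1f}  w={s.weight:.3f}")    # weight climbs faster at the end
```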


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Amazing Science

MIT: The Million-Year Data Storage Disk Unveiled


Magnetic hard discs can store data for little more than a decade. But nanotechnologists have now designed and built a disk that can store data for a million years or more.

 

Back in 1956, IBM introduced the world’s first commercial computer capable of storing data on a magnetic disk drive. The IBM 305 RAMAC used fifty 24-inch discs to store up to 5 MB, an impressive feat in those days. Today, however, it’s not difficult to find hard drives that can store 1 TB of data on a single 3.5-inch disk. But despite this huge increase in storage density and a similarly impressive improvement in power efficiency, one thing hasn’t changed. The lifetime over which data can be stored on magnetic discs is still about a decade.

 

That raises an interesting problem. How are we to preserve information about our civilisation on a timescale that outlasts it? In other words, what technology can reliably store information for 1 million years or more?

 

Today, we get an answer thanks to the work of Jeroen de Vries at the University of Twente in the Netherlands and a few pals. These guys have designed and built a disk capable of storing data over this timescale. And they’ve performed accelerated ageing tests which show it should be able to store data for 1 million years and possibly longer.

 

These guys start with some theory about ageing. Clearly, it's impractical to conduct an ageing experiment in real time, particularly when the periods involved are measured in millions of years. But there is a way to accelerate the process.

 

This is based on the idea that data must be stored in an energy minimum that is separated from other minima by an energy barrier. So to corrupt data by converting a 0 to a 1, for example, requires enough energy to overcome this barrier.

 

The probability that the system will jump in this way is governed by the Arrhenius law. This relates the probability of jumping the barrier to factors such as the temperature, the Boltzmann constant and how often a jump can be attempted, which is related to the level of atomic vibrations.

 

Some straightforward calculations reveal that to last a million years, the required energy barrier is 63 kBT, or 70 kBT to last a billion years. “These values are well within the range of today’s technology,” say de Vries and co.
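Those numbers are easy to reproduce from the Arrhenius relation t = t0 * exp(E / kBT), assuming an atomic attempt time t0 of about 1e-14 seconds (our assumption; the paper's exact value may differ). The same relation yields the accelerated-ageing temperature, in the same ballpark as the 1-hour, 445 Kelvin test quoted below:

```python
import math

ATTEMPT_TIME = 1e-14    # assumed atomic-vibration attempt time, seconds
YEAR = 3.156e7          # seconds in a year

def barrier_in_kbt(lifetime_s: float) -> float:
    """Invert the Arrhenius law t = t0 * exp(E / (kB*T)) to get the
    barrier E, in units of kB*T, needed for a given retention time."""
    return math.log(lifetime_s / ATTEMPT_TIME)

def test_temperature(barrier_kbt: float, t_test_s: float,
                     t_store_k: float = 300.0) -> float:
    """Temperature at which the same barrier decays in t_test_s,
    i.e. the accelerated-ageing shortcut."""
    e_over_kb = barrier_kbt * t_store_k           # E/kB, in kelvin
    return e_over_kb / math.log(t_test_s / ATTEMPT_TIME)

print(barrier_in_kbt(1e6 * YEAR))   # ~63 kB*T for a million years
print(barrier_in_kbt(1e9 * YEAR))   # ~70 kB*T for a billion years
print(test_temperature(barrier_in_kbt(1e6 * YEAR), 3600))  # ~470 K
```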

 

The disk is simple in conception. The data is stored in the pattern of lines etched into a thin metal disc and then covered with a protective layer.

The metal in question is tungsten, which they chose because of its high melting temperature (3,422 degrees C) and low thermal expansion coefficient.  The protective layer is silicon nitride (Si3N4) chosen because of its high resistance to fracture and its low thermal expansion coefficient.

 

The results are impressive. According to Arrhenius law, a disk capable of surviving a million years would have to survive 1 hour at 445 Kelvin, a test that the new disks passed with ease. Indeed, they survived temperatures up to 848 Kelvin, albeit with significant amounts of information loss.


Via Dr. Stefan Gruenwald
Rescooped by pa3geo from Eclectic Technology

The Next Fifty Years In Technology: Here's What's Coming!

What will the next fifty years bring in the world of social media, mobile, robotics and more? Our fifty-year timeline shows you just what could be in store.

Via Beth Dichter
Beth Dichter's curator insight, May 18, 2013 11:08 PM

Technology changes at such a rapid rate that it is tough to look ahead and envision where it will be in five years let alone fifty years...but that is what this infographic does. This timeline focuses on digital and mobile as well as big data, and references are provided for the information. 
Check it out and see if you agree or disagree. 

Gary Faust's curator insight, May 22, 2013 10:01 PM

Very interesting. Product development is possible, but the human brain has capabilities that a computer will not surpass in that time. I doubt we understand the human brain sufficiently at this time to speculate about a computer's comparative ability. Good ideas for science fiction writers, though.

Rescooped by pa3geo from JWT WOW

Samsung - Liquid Pixels

Samsung and the Galaxy Note II introduce Liquid Pixels, a short film documenting a piece of interactive water art controlled solely using the Galaxy Note II and its S Pen technology. The concept was created by Daniel Kupfer; the installation took 10 days to build and used over 3,000 connections, all fitted individually.


Via JWT_WOW
Rescooped by pa3geo from Science News

The Technology We Don’t See


The goal of technology is to make itself disappear bit by bit.


Via Sakis Koukouvis
Rescooped by pa3geo from Science News

[VIDEO] A day in the life of HR

PeopleStreme presents a day in the life of Human Resources, its concept for the future of Human Capital Management. It uses touch-screen and tablet computer technology, plus holographic imaging that is still to come.

Via Sakis Koukouvis
Rescooped by pa3geo from Science News

[VIDEO] Creativity in the Cloud: From the Big Bang to Twitter


What does it mean to be connected in the 21st century? Hope, interdependence, and possibly the creation of a new consciousness, says Tiffany Shlain. Shlain is the founder of the Webby awards and creator of a new documentary, Connected: An Autoblogography about Love, Death & Technology, which premiered this year at Sundance.


Via Sakis Koukouvis
Rescooped by pa3geo from Science News

Yoram Reshef | The Light Catcher

For the last 20 years, we have been serving the technology of the future through our photographic art...

Via Alessio Erioli, Andrea Graziano, Sakis Koukouvis
Rescooped by pa3geo from Science News

iPads Could Scan Palms for Passwords

Tablets may soon authenticate users by reading hand movement.

Via Sakis Koukouvis
Rescooped by pa3geo from Science News

New Device Lets the Blind 'Read' and 'See'

The blind are passing eye tests and seeing the world around them thanks to a new device that converts images to sound. These sounds guide the blind to interpret objects, people and even expressions.
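The article doesn't detail the device's encoding, but a standard image-to-sound scheme of this family sweeps across the image left to right, mapping row position to pitch and pixel brightness to loudness. A minimal sketch (our construction, not the device's actual algorithm):

```python
import numpy as np

def image_to_sound(img: np.ndarray, duration: float = 1.0,
                   fs: int = 22050) -> np.ndarray:
    """Minimal image-to-sound sweep: columns are scanned left to right
    over `duration` seconds, each row maps to a fixed pitch (higher row
    = higher pitch), and pixel brightness sets that pitch's loudness."""
    rows, cols = img.shape
    freqs = np.linspace(400, 4000, rows)[::-1]   # top row = high pitch
    samples_per_col = int(duration * fs / cols)
    t = np.arange(samples_per_col) / fs
    out = []
    for c in range(cols):
        col = img[:, c, None]                     # (rows, 1) brightness
        tones = np.sin(2 * np.pi * freqs[:, None] * t)
        out.append((col * tones).mean(axis=0))    # mix the weighted tones
    return np.concatenate(out)

img = np.zeros((16, 16))
img[4, :] = 1.0                 # a horizontal line -> one steady pitch
audio = image_to_sound(img)
print(audio.shape)              # (22048,) at these defaults
```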

Via Sakis Koukouvis