The evolution of GIS on the Web has, until recently, been limited by traditional Web mapping technologies. The increase in GIS requirements in the Web and mobile world has necessitated the development of a new breed of map engine. This article, by Dino Ravnić, the co-founder and CEO of GIS Cloud, Ltd., provides an overview of existing mapping technologies and the reasons why he decided to build a unique vector mapping engine in an effort to give the Web a full-featured GIS.
Web GIS - the traditional approach
Rendering maps as tiles in raster image format (PNG or JPEG images) is the way geospatial data are commonly delivered on the Web today. This is done by producing tile images on a server and delivering them to a map client. Such technology is used by many Web map providers and it works well for creating nice-looking basemaps like on OpenStreetMap, Google Maps, Bing Maps, etc.
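To make the tiling scheme concrete, here is a short Python sketch of the de facto standard Web Mercator ("slippy map") addressing used by OpenStreetMap and similar services, which maps a coordinate to the tile that contains it at a given zoom level (tile size and URL layout vary by provider):

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """Convert WGS84 lat/lon to the x/y index of the Web Mercator
    tile that contains it at the given zoom level."""
    n = 2 ** zoom                      # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# A tile server pre-renders one PNG per (zoom, x, y) triple; the client
# then requests something like /tiles/{zoom}/{x}/{y}.png for the visible area.
```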
This is all good for Web mapping, but what about GIS? Normally, during the course of creating a GIS project, you need to modify data, reorder layers, tweak symbology and labels, edit and create features, do geoprocessing, analyses, etc. Results of these operations need to be evident instantly and the traditional method of server tile rendering just doesn’t cut it. Creating the full map tile cache can often take hours, even days, to complete. And if you’ve somehow forgotten to turn on labels for that waterways layer before the caching starts, well, take the rest of the week off.
Also, you may need to interact with the data, which is not possible when your data are presented as flat raster images. Some improvements have been made by using map clients built on technologies like Flash and Silverlight, but the underlying map tile technology has remained the same, and a third-party plug-in is required to render your map.
Rendering vectors as vectors
As we all know, geospatial vector data consist of three basic types: points, lines and polygons. Additionally, each feature holds a number of data attributes. The downside of traditional mapping technologies is that all this useful vector and attribute information is lost in the process of converting and rendering those points, lines and polygons into a raster tile image.
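Here is a minimal sketch of a vector feature in the style of GeoJSON, a common interchange format: a geometry of one of the three basic types paired with free-form attributes (the feature contents below are invented purely for illustration):

```python
# A minimal GeoJSON-style feature: geometry plus attributes.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # one exterior ring, closed (first point repeated last)
        "coordinates": [[[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 0.0]]],
    },
    "properties": {"name": "Sample parcel", "land_use": "park"},
}

# Once rasterized into a tile image, both the geometry and the
# "properties" dict are gone -- the client only sees pixels.
```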
Rendering vectors as vectors can open up a whole new set of possibilities for GIS on the Web and on mobile devices. Just imagine interacting with your data by hovering, clicking or tapping on them. Imagine rendering a huge number of features with dynamically changing symbology on-the-fly at high speeds and low latency. Imagine having full GIS editing in a browser with topology preserving, snapping and all those capabilities you are used to having in your desktop GIS.
What if we had a solution which could actually render vector data in their natural vector form?
To be honest, vectors can be overlaid on top of raster tiles in many map clients, but current implementations fail in real-world situations when you need to deal with thousands, if not millions, of features. In order to achieve such capabilities, existing mapping engines would require a big change, particularly on the back-end, but also on the front-end.
In the last couple of years we’ve witnessed incredible innovation and progress in Web browsers. With their huge presence across desktop, mobile and tablet devices, Web browsers have become a crucial component and the platform for many modern applications. A whole new set of capabilities available in modern browsers has been gathered under the term HTML5.
Among many capabilities the HTML5 standard provides, there is one crucial for improving GIS, and that is HTML5 Canvas. Canvas is basically a bitmap (image) which is dynamically generated in a browser. Its vector rendering performance is what makes it so useful for GIS applications.
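As a rough illustration of what happens when a shape is filled on a canvas (a deliberate simplification; real Canvas implementations add anti-aliasing and hardware acceleration), this toy Python sketch scan-converts a polygon onto a small pixel grid using the even-odd rule, one of the two fill rules the Canvas API supports:

```python
def point_in_polygon(px, py, ring):
    """Even-odd rule: count edge crossings of a ray cast to the right."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def rasterize(ring, width, height):
    """Fill a bitmap: 1 where the pixel centre falls inside the polygon."""
    return [[1 if point_in_polygon(x + 0.5, y + 0.5, ring) else 0
             for x in range(width)]
            for y in range(height)]

# Rasterize a 6x4 rectangle onto an 8x6 pixel grid.
bitmap = rasterize([(0, 0), (6, 0), (6, 4), (0, 4)], 8, 6)
```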
Vectors as vectors, plus all the complexity of symbology and map rendering, are now moved to the client side (i.e. browsers) so servers need only deliver raw vector and attribute data. This means the map engines can be more effective and responsive. As mentioned before, this approach requires a totally different strategy on the server where the map tiles are actually being produced.
HTML5 Canvas tiled vector map engine
Croatia-based GIS Cloud created and implemented the world's first HTML5 vector mapping engine based on its original tiled vector map engine, which generated vector map tiles in Flash format. The performance it offered was unprecedented. The original system has been adapted for HTML5 Canvas and is now the primary method to deliver all maps at GIS Cloud.
The heart of the engine is a very fast server component created from scratch that quickly and efficiently reads geometry and attribute data and delivers them to the client as an optimized vector map tile. Once on the map client, the vector map technology uses Leaflet — http://leaflet.cloudmade.com — an excellent modern mapping client library made by CloudMade — to visualize the data.
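GIS Cloud's wire format isn't described here, but the general idea behind an "optimized vector tile" can be sketched: scale coordinates to a small integer grid within the tile and delta-encode successive vertices, so each vertex costs only a pair of small integers. A hypothetical sketch (the function name and extent value are illustrative, not GIS Cloud's actual encoding):

```python
def encode_tile_line(coords, tile_x0, tile_y0, tile_size, extent=4096):
    """Quantize world coordinates to an integer grid inside one tile
    and delta-encode them, so each vertex costs only a small pair of
    ints -- the same idea used by compact vector-tile formats."""
    encoded = []
    prev = (0, 0)
    for wx, wy in coords:
        # position within the tile, scaled to [0, extent)
        qx = int((wx - tile_x0) / tile_size * extent)
        qy = int((wy - tile_y0) / tile_size * extent)
        encoded.append((qx - prev[0], qy - prev[1]))  # delta from last vertex
        prev = (qx, qy)
    return encoded

# Two vertices of a line inside the unit tile anchored at (10, 10):
deltas = encode_tile_line([(10.0, 10.0), (10.5, 10.25)], 10.0, 10.0, 1.0)
```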
You can check out a few HTML5 maps on the links below to see the engine in action, but also create and see your own data with this easy-to-use GIS in the cloud system. For GIS Cloud, moving to HTML5 vector mapping has been crucial as it opened up a whole new set of GIS features which are yet to be implemented.
Benefits of the HTML5 vector approach include:

- A very slick mapping user experience in browsers on desktops, mobiles and tablets
- Vectors as vectors
- The ability to render millions of features on-the-fly
- Fast rendering; no need for map precaching
- Significantly less tile bandwidth required
- Less storage needed
- Maps that are fully interactive: clickable, hoverable and styled dynamically
- Symbology that is applied entirely on the client, meaning it's very easy to make map styling changes without needing to reload a layer
- Works out of the box in Web browsers across all platforms that have adopted the HTML5 standard (i.e. desktop, iOS, Android, etc.)
- Excellent grounds for bringing a true desktop GIS experience into the Web browser
A second experiment clocking the speed of subatomic particles using a GPS timing receiver has reconfirmed the revolutionary September results — the neutrinos moved faster than the speed of light, according to an announcement from the European Organization for Nuclear Research (CERN).
“One key test was to repeat the measurement with very short beam pulses from CERN,” the agency noted in a press release on its website. “This allowed the extraction time of the protons, that ultimately lead to the neutrino beam, to be measured more precisely… The new measurements do not change the initial conclusion.”
Septentrio's precise-timing GPS receiver PolaRx2eTR features prominently in the OPERA experiment. Following the OPERA collaboration's presentation at CERN on September 23, inviting scrutiny of their neutrino time-of-flight measurement from the broader particle physics community, the collaboration has rechecked many aspects of its analysis and taken into account valuable suggestions from a wide range of sources, CERN stated on its website.
The beam sent from CERN consisted of pulses three nanoseconds long separated by up to 524 nanoseconds. Some 20 clean neutrino events were measured at the Gran Sasso Laboratory, and precisely associated with the pulse leaving CERN. This test confirms the accuracy of OPERA's timing measurement, ruling out one potential source of systematic error. "Nevertheless, the observed anomaly in the neutrinos' time of flight from CERN to Gran Sasso still needs further scrutiny and independent measurement before it can be refuted or confirmed," CERN stated.
The OPERA experiment observes a neutrino beam from CERN 730 kilometers away at Italy’s INFN Gran Sasso Laboratory. The OPERA result is based on the observation of more than 15,000 neutrino events measured at Gran Sasso, and appears to indicate that the neutrinos travel at a velocity 20 parts per million above the speed of light, nature’s cosmic speed limit. "Given the potential far-reaching consequences of such a result, independent measurements are needed before the effect can either be refuted or firmly established. This is why the OPERA collaboration has decided to open the result to broader scrutiny," CERN stated.
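The quoted figures are easy to sanity-check: over a 730-kilometer baseline, a speed 20 parts per million above c translates to an arrival only a few tens of nanoseconds early, which is why the timing chain must be known to nanosecond precision. A back-of-the-envelope calculation (distance rounded to 730 km):

```python
C = 299_792_458.0        # speed of light, m/s
distance = 730_000.0     # CERN to Gran Sasso baseline, m (approximate)

light_time = distance / C            # time of flight at exactly c
excess = 20e-6                       # 20 parts per million above c
# To first order, travelling 20 ppm faster than light shortens the
# flight time by the same fraction:
early_ns = light_time * excess * 1e9

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"early arrival: ~{early_ns:.0f} ns")
```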
GOES image of four Atlantic storms on Sept. 8, 2011. Image source: NASA/NOAA GOES Project

During the first two weeks of September, at the peak of the Atlantic hurricane season, NASA satellites were keeping tabs on a number of tropical systems.
Trackable Week presents a new story Monday through Friday this week about creative ways to experience Trackables. Geocaching.com Trackables allow people to tag and track an item from location to location.
What a fun show tonight. This episode of Geocaching World (RVNN.tv) was in that fun and easy style of a Meet and Greet. Here we get away with a lot more. We communicate with the chat room a whole lot more and they interact right back with us.
ICESat has provided surface elevation measurements of the ice sheets since its launch in January 2003, resulting in a unique dataset for monitoring changes in the cryosphere. The scientists present a novel method for determining the mass balance of the Greenland ice sheet, derived from ICESat altimetry data, and find annual mass loss estimates for the Greenland ice sheet in the range of 191 ± 23 Gt yr⁻¹ to 240 ± 28 Gt yr⁻¹ for the period October 2003 to March 2008. These results are in good agreement with several other studies of the Greenland ice sheet mass balance based on different remote-sensing techniques.
On Feb. 3, 2012, the U.S. State Department hosted its eighth conference in the Tech@State series. The two-day symposium focused on how social media and other internet-enabled data streams are used to create real-time awareness in different contexts, with sessions looking at analyzing large amounts of data, enhancing the understanding of consumer behavior, and live-mapping crisis situations.
One particularly interesting panel focused on the future of social media and real-time awareness, not only for individuals, but for a society that is still learning to deal with developments in social media, communication technologies and crowdsourced information. The speakers discussed how this pervasive technology could aid in real-time awareness, but also raised legitimate concerns about impacts on security and privacy.
Panelist Lou Martinage, with business intelligence firm MicroStrategy, offered the perspective of the private sector, focusing on how social media can be mined for real-time information on consumer preferences and behavior. Martinage described how MicroStrategy works with its clients to dig into the vast amounts of data available in the digital communication sphere, aggregate and analyze the information, and derive insights that can help the companies better address their customers’ needs.
For the past few years, MicroStrategy has been homing in on the potential of Facebook and the valuable consumer data that can be accessed through the platform. For Martinage, the primary use of Facebook is in the field of sentiment monitoring. In other words, Facebook is a spectacular tool for obtaining information on how consumers feel about a company’s product, allowing businesses to personalize marketing and commerce through promotions and offers.
Martinage says the culmination of this effort can be seen in a MicroStrategy application called Wisdom, which allows companies to analyze massive amounts of data to identify trends, gauge consumer confidence, and answer questions about market perception of a product or brand.
Others looked at the potential “dark side” of social media. Rand Waltzman, with the Defense Advanced Research Projects Agency, pointed out that “as more life takes place in the network public sphere, the more good and bad things will happen,” highlighting privacy and national security concerns in particular. One of his main fears is the ease with which individuals can disseminate false or harmful information through the very same technologies that aid in real-time awareness and disaster relief.
For example, Waltzman cited Operation Valhalla in Iraq, where American troops succeeded in rescuing a hostage and confiscating weapons from a terrorist group, killing 16 members of the group in the process. However, before the troops had returned to their base, other members of the terrorist organization swooped in and rearranged the bodies onto prayer mats, took a picture using a mobile phone, and uploaded it to the social media space — making it look like Americans had killed unarmed civilians in the middle of prayer.
Waltzman’s concern lies in the fact that false information can reach a large number of people before the damage can be addressed and the claims refuted. The speed at which the internet moves far outpaces our capacity to monitor the information available.
Unfortunately, Waltzman added, social media and the internet form an environment filled with contradiction: one can attempt to protect one’s own privacy but has very limited ability to control what others may say. In the end, Waltzman said, the positive aspects of social media can easily be perverted, and we need to tackle the policies that are preventing us from better addressing these issues.
While Waltzman’s comments deserve serious consideration, the panel did end on a hopeful note, with the words of Patrick Meier, of the Ushahidi Project. Ushahidi is a platform that allows for the crowdsourcing of information through Twitter, SMS text messages, and other internet sources onto a live map that allows for real-time crisis-mapping. The success of Ushahidi in crisis situations, such as the Haiti earthquake in 2010 and the Arab Spring uprisings, has brought the attention of the social media and emergency management communities to its potential. The project is a prime example of crowdsourcing working to increase situational awareness in crisis situations.
Meier brought the audience’s attention to the needs of volunteers in crisis situations. Ushahidi currently does much manual crowdsourcing in real time, but developments in machine learning and natural language processing are allowing algorithms to do more work separating valuable information from useless information in the stream of text messages and tweets. Meier’s outlook is very positive, and he has given great thought to issues arising from credibility of information.
Overall, the mood at the conference was one of endless possibilities. Nonetheless, Waltzman raised valid concerns regarding privacy and security in the growing, interconnected network that is the internet. And while Meier and Martinage point out the immense potential for real-time awareness, we must not forget that there are very legitimate concerns that need to be addressed before these technologies can truly permeate every level of our society.
A NASA-led science team has created an accurate, high-resolution map of the height of Earth's forests.
The map will help scientists better understand the role forests play in climate change and how their heights influence wildlife habitats within them, while also helping them quantify the carbon stored in Earth's vegetation.
Scientists from NASA's Jet Propulsion Laboratory, Pasadena, Calif.; the University of Maryland, College Park; and Woods Hole Research Center, Falmouth, Mass., created the map using 2.5 million carefully screened, globally distributed laser pulse measurements from space. The light detection and ranging (lidar) data were collected in 2005 by the Geoscience Laser Altimeter System instrument on NASA's Ice, Cloud and land Elevation Satellite (ICESat).
"Knowing the height of Earth's forests is critical to estimating their biomass, or the amount of carbon they contain," said lead researcher Marc Simard of JPL. "Our map can be used to improve global efforts to monitor carbon. In addition, forest height is an integral characteristic of Earth's habitats, yet is poorly measured globally, so our results will also benefit studies of the varieties of life that are found in particular parts of the forest or habitats."
The map, available at http://lidarradar.jpl.nasa.gov, depicts the highest points in the forest canopy. Its spatial resolution is 0.6 miles (1 kilometer). The map was validated against data from a network of nearly 70 ground sites around the world.
The researchers found that, in general, forest heights decrease at higher elevations and are highest at low latitudes, decreasing in height the farther they are from the tropics. A major exception was found at around 40 degrees south latitude in temperate forests in Australia and New Zealand, where stands of eucalyptus, one of the world's tallest flowering plants, tower much higher than 130 feet (40 meters).
The researchers augmented the ICESat data with other types of data to compensate for the sparse lidar data, the effects of topography and cloud cover. These included estimates of the percentage of global tree cover from NASA's Moderate Resolution Imaging Spectroradiometer on NASA's Terra satellite, elevation data from NASA's Shuttle Radar Topography Mission, and temperature and precipitation maps from NASA's Tropical Rainfall Measuring Mission and the WorldClim database. WorldClim is a set of freely available, high-resolution global climate data that can be used for mapping and spatial modeling.
In general, estimates in the new map show forest heights were taller than in a previous ICESat-based map, particularly in the tropics and in boreal forests, and were shorter in mountainous regions. The accuracy of the new map varies across major ecological community types in the forests, and also depends on how much the forests have been disturbed by human activities and by variability in the forests' natural height.
A lot of free GPS apps are out there, but the question is: which ones are really good? Many people use paid apps, which are quite popular too, but today we have a list of free GPS apps that are really awesome to use.
Irish Weather Online has a nice article commemorating the 50th anniversary of Hurricane Debbie's landing on Irish shores (though the author concludes that Debbie wasn't actually a hurricane by the time it hit Ireland).
Live webcast from the US Institute of Peace: This Blogs & Bullets meeting will bring together the companies that sift through and sell social media data with the activists who create it and the policy-makers who use it.