visual data
learning, conceptualizing + communicating data with infographics, visualizations, etc...
Curated by Lauren Moss
Scooped by Lauren Moss

Snail Mail vs Email [infographic]: old school direct lives on...


Snail mail – the original form of direct marketing – lives on, according to a recent consumer survey showing that the average Australian receives around seven letters per week, with government departments the most prolific users of the medium.


Read rates were high for most sender categories, much higher than standard email open rates, with special interest clubs and government leading the pack with open rates of 79% and 78% respectively. Read rates were less favourable for correspondence from real estate agents and local restaurants, while supermarket communication proved more likely to be read than department store mail.

Across all categories, respondents of the nationally representative survey preferred to receive correspondence via snail mail rather than email, although many expressed no preference either way. Snail mail is preferred for lengthier or important information, while email is preferred for brief information. As might be expected, older generations are more receptive to mail than email...
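As a quick sketch of how read-rate figures like these could be turned into a simple chart, here is a minimal Python/matplotlib example plotting only the two rates quoted above; the styling and layout are assumptions, not taken from the infographic itself.

```python
import matplotlib.pyplot as plt

# Only the two read rates quoted in the article are plotted here;
# the survey's other sender categories are not reproduced in the text.
categories = ["Special interest clubs", "Government"]
read_rates = [79, 78]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(categories, read_rates)
ax.set_xlabel("Read rate (%)")
ax.set_xlim(0, 100)
ax.set_title("Snail mail read rates by sender category")

# Annotate each bar with its percentage.
for y, rate in enumerate(read_rates):
    ax.text(rate + 1, y, f"{rate}%", va="center")

plt.tight_layout()
plt.show()
```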

Scooped by Lauren Moss

Visualizing Connections In Data & Analyzing Information


For many data visualization projects, information comes from a source that has already done some aggregation. This is both a blessing and a curse: aggregation simplifies the analysis and visualization process, but it can also greatly reduce the analysis and visualization options, because aggregation often destroys connections in the data. For this reason, it's critical to have a thorough understanding of the original, record-level information behind the aggregated figures. Several visualization techniques open up once we have the original data, such as Euler diagrams and parallel sets.
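To make the point concrete, here is a small pandas sketch using invented record-level data: the per-channel totals are enough for a bar chart, but only the raw records retain the co-occurrence structure that techniques like parallel sets and Euler diagrams rely on.

```python
import pandas as pd

# Hypothetical record-level survey data: one row per respondent.
records = pd.DataFrame({
    "channel":  ["mail", "mail", "email", "email", "mail", "email"],
    "read":     [True, False, True, False, True, True],
    "age_band": ["55+", "18-34", "18-34", "18-34", "55+", "35-54"],
})

# Aggregated view: one read rate per channel. Simple to chart,
# but the links between channel, readership and age are gone.
print(records.groupby("channel")["read"].mean())

# Record-level view: a crosstab preserves those connections --
# exactly the structure parallel sets and Euler diagrams display.
print(pd.crosstab([records["channel"], records["age_band"]],
                  records["read"]))
```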


The extra information that can be obtained from such visualizations is important for gaining a full understanding of the data, and it can lead to a much more interesting story, better visualizations, and more accurate connections and links within those visualizations.

So, when gathering data about something, remember to dig deeper into it: there are many important connections within the data that can provide knowledge beyond a simple average or total.


To learn more about the value of these connections, sourcing accurate data, and how it is transformed into useful graphics, read the complete article and check out the case study used to convey the main points outlined above...

kurakura's comment, November 15, 2012 5:17 AM
the last graph on that page is really useful for understanding the data?
Rescooped by Lauren Moss from CrowdSourcing InfoGraphics

[INFOGRAPHIC] BIG DATA: What Your IT Team Wants You To Know


The purpose of Big Data is to supply companies with actionable information across a wide variety of areas. But this is proving to be far more difficult than it looks, with over half of Big Data projects left uncompleted.


One of the most often reported reasons for project failure is a lack of expertise in data analysis. Reports show that data processing, management and analysis are difficult at every phase of a project, with IT teams citing each of those reasons more than 40% of the time.

However, the blame for failed Big Data projects may not lie solely with faulty project management. In a recent survey, a staggering 80% of respondents attributed Big Data's biggest challenges to a lack of appropriate talent. The field's relative infancy is making it hard to find the staff needed to see projects through, resulting in underutilized data and missed project goals.

IT teams are quickly recognizing a chasm between executives and the frontline staffers whose job it is to apply findings from Big Data. In the end, it may not be the anticipated cure-all for 21st-century business management: it is only as good as the system that runs it.


Via Peter Azzopardi, Berend de Jonge
Olivier Vandelaer's curator insight, January 30, 2013 2:45 AM

Looking at the infographic, it clearly reminds me of the start of "Enterprise Data Warehouse": failures from "Inaccurate scope", "Technical Roadblocks" & "Siloed data and no collaboration". It looks so familiar.

Tony Agresta's curator insight, January 30, 2013 10:15 AM

Very interesting infographic. Why do they fail? For all of the reasons above and then some... Over 80% of the data being collected today is unstructured and not readily stored in relational database technology burdened by complex extract, transform and load. There's also pre-existing data, sometimes referred to as "dark data", including documents that need to be made discoverable for a host of reasons - compliance and regulatory issues among them. Log activity and e-mail traffic, used to detect cyber threats and mitigate risk through analysis of file transfers, is yet another set of data that requires immediate attention.

Social and mobile are clearly channels that need to be addressed as organizations continue to mine data from the open web in support of CRM, product alerts, real-time advertising options and more.

To accomplish all of this, organizations need a platform with enterprise-hardened technology that can ingest all of these forms of data in real time, without having to write complex schemas. Getting back to the point - why do most projects fail? If companies attempt to do this with technology that is not reliable, not durable and does not leverage the skills of their existing development organization, the project will fail.

We have seen this time and time again. MarkLogic to the rescue. With over 350 customers and 500 big data applications, our Enterprise NoSQL approach mitigates the risk. Why? Our technology stack includes connectors to Hadoop, integration with leading analytics tools via SQL, Java and REST APIs, JSON support, real-time data ingestion, the ability to handle any form of data, alerting, in-database analytics functions, high availability, replication, security and a lot more.

 

When you match this technology with a world-class services organization with proven implementation skills, we can guarantee your next Big Data project will work.  We have done it hundreds of times with the largest companies in the world and very, very big data.

 

www.marklogic.com
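To illustrate the kind of schema-free, real-time JSON ingestion described above, here is a minimal Python sketch against a MarkLogic-style document REST endpoint; the host, port, credentials and document URIs are placeholders, not a working configuration.

```python
import json
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: point these at a real document database. The URL shape
# loosely follows MarkLogic's /v1/documents REST pattern.
BASE_URL = "http://localhost:8000/v1/documents"
AUTH = HTTPDigestAuth("user", "password")

def ingest(uri: str, doc: dict) -> None:
    """PUT one JSON document as-is -- no schema or ETL step required."""
    response = requests.put(
        BASE_URL,
        params={"uri": uri},
        data=json.dumps(doc),
        headers={"Content-Type": "application/json"},
        auth=AUTH,
        timeout=10,
    )
    response.raise_for_status()

# Documents of entirely different shapes land in the same store unchanged.
ingest("/logs/event-1.json", {"type": "file_transfer", "bytes": 51200})
ingest("/crm/lead-7.json", {"name": "Acme Co", "channel": "social"})
```

The design point is that each document is stored in the shape it arrives in; indexing and discovery happen afterwards, rather than through an up-front relational schema.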



Adrian Carr's curator insight, January 30, 2013 10:27 AM

This is a great infographic - it shows that whilst everyone is doing it (it being "Big Data" - whatever that is...), talent is rare, technology is hard to find and the projects never end. A far cry from the speed with which companies such as the BBC deployed MarkLogic to serve all data for its sport websites throughout the Olympics. Now that was big data, delivered by a talented team in a short space of time.