In this video, Colleen Heinemann, a Ph.D. student at the University of Illinois, describes how visualization makes scientific data more accessible and useful by transforming it into virtual objects you can see, touch and manipulate in 3-D space. "My research interest is in the cross section between High Performance Computing and scientific visualization. I am interested in not only presenting scientific data in an interesting way, but also in how High Performance Computing can be used to optimize the visualization process."
Nuno Edgar Fernandes's insight:
Transforming Scientific Data into Immersive Visualizations
Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated with detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated with maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents.
Luís F. Seoane and Ricard V. Solé, "Information theory, predictability and the emergence of complex life," published 21 February 2018. DOI: 10.1098/rsos.172221
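To make the agents-as-guessing-machines idea above more concrete, here is a deliberately simplified sketch, not the authors' actual model: a two-symbol Markov environment, an agent that predicts the next symbol from a short memory of past symbols, and an assumed maintenance cost that grows with memory size. All parameter values are illustrative.

```python
# Toy illustration (not the paper's model): an agent "guesses" the next symbol
# of a Markovian environment; its net fitness is prediction accuracy minus an
# assumed cost for maintaining a larger memory.
import random
from collections import defaultdict

def run_agent(memory_len, steps=5000, cost_per_state=0.02, seed=0):
    rng = random.Random(seed)
    # Assumed 2-symbol Markov environment: P(next = 0 | current symbol).
    transition = {0: [0.9, 0.1], 1: [0.2, 0.8]}
    env = 0
    history = tuple()
    counts = defaultdict(lambda: [1, 1])  # symbol counts conditioned on the agent's memory
    hits = 0
    for _ in range(steps):
        guess = 0 if counts[history][0] >= counts[history][1] else 1
        env = 0 if rng.random() < transition[env][0] else 1
        if guess == env:
            hits += 1
        counts[history][env] += 1
        history = (history + (env,))[-memory_len:] if memory_len else tuple()
    maintenance = cost_per_state * (2 ** memory_len)  # assumed complexity cost
    return hits / steps - maintenance

for m in range(4):
    print(f"memory={m}  net fitness={run_agent(m):.3f}")
```

In this toy setting a memory of one symbol already captures the environment's structure, so longer memories only add cost, which mirrors the trade-off between predictive capacity and the expense of maintaining complex structures discussed in the abstract.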
The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. Just because algorithms are based on code doesn't mean experiments are easily replicated. Far from it. Unpublished codes and a sensitivity to training conditions have made it difficult for AI researchers to reproduce many key results. That is leading to a new conscientiousness about research methods and publication protocols. Last week, at a meeting of the Association for the Advancement of Artificial Intelligence in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.
Matthew Hutson, "Artificial intelligence faces reproducibility crisis," Science, 16 February 2018: Vol. 359, Issue 6377, pp. 725-726. DOI: 10.1126/science.359.6377.725
Deployment is a big chunk of using any technology, and tools to make deployment easier have always been an area of innovation in computing. For instance, the difficulties and uncertainties of installing software and keeping it up-to-date were one factor driving companies to offer software as a service over the Web. Likewise, big data projects present their own set of issues: how do you prepare and ingest the data? How do you view the choices made by algorithms that are complex and dynamic? Can you use hardware acceleration (such as GPUs) to speed analytics, which may need to operate on streaming, real-time data? Those are just a few deployment questions associated with deep learning.
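On the hardware-acceleration question, one small, concrete first step is simply checking what accelerators the runtime actually sees before committing a pipeline to them. The snippet below is only a sketch using TensorFlow, chosen here because it appears later in these notes, not because the article prescribes it.

```python
# Quick check of available accelerators before deploying GPU-backed analytics.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"{len(gpus)} GPU(s) available:", [g.name for g in gpus])
else:
    print("No GPU detected; analytics will fall back to CPU.")
```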
For the last installment of this week for The Information Age, I decided to post one last talk and paper featured at CCS 2016 – the 23rd ACM Conference on Computer and Communications Security (Hofburg Palace, Vienna, Austria / October 24-28, 2016). The collection of talks and papers from this conference is a must-see for the Computer Science community and all related fields.
For the last day of this week I would like to share a post from Trent McConaghy on BigChainDB's Medium page. It was posted on 3 January. It is a long read, but a complete and worthwhile one for anyone interested in both Blockchain and Artificial Intelligence (AI) technologies.
Last year's Association for Computing Machinery (ACM) conference in Vienna, Austria (CCS 2016 – the 23rd ACM Conference on Computer and Communications Security, Hofburg Palace, October 24-28, 2016) hosted a list of good presentations and talks. Following yesterday's review of a talk and paper on ELASTICO, a scalable blockchain proposal, I will continue with this interesting list of talk and paper reviews.
For the start of this week I would like to come back to a talk about Google's distributed cloud database product Spanner, delivered by Wilson Hsieh at a USENIX (The Advanced Computing Systems Association) conference in 2012.
This blog has been posting quite heavily on the topics of Machine Learning and Quantum Computing. This is obviously related to the increased interest in those fields from both the academic and business communities, and for good reason, as these fields keep showing signs of promising breakthroughs. They could potentially unleash major new research developments, as well as new business and commercial applications.
When approaching problems with sequential data, such as natural language tasks, recurrent neural networks (RNNs) are typically the top choice. While the temporal nature of RNNs is a natural fit for these problems with text data, convolutional neural networks (CNNs), which are tremendously successful when applied to vision tasks, have also demonstrated efficacy in this space.
In our LSTM tutorial, we took an in-depth look at how long short-term memory (LSTM) networks work and used TensorFlow to build a multi-layered LSTM network to model stock market sentiment from social media content. In this post, we will briefly discuss how CNNs are applied to text data while providing some sample TensorFlow code to build a CNN that can perform binary classification tasks similar to our stock market sentiment model.
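As a rough idea of what such a network looks like, here is a minimal TensorFlow/Keras sketch of a 1-D CNN for binary text classification. It is not the post's own code; the vocabulary size, sequence length and layer sizes are illustrative assumptions.

```python
# Minimal sketch: 1-D CNN for binary text classification in TensorFlow/Keras.
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size
SEQ_LEN = 100        # assumed (padded) sequence length in tokens

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),                             # token ids -> dense vectors
    tf.keras.layers.Conv1D(filters=64, kernel_size=5, activation="relu"),   # n-gram-like filters over the sequence
    tf.keras.layers.GlobalMaxPooling1D(),                                    # keep each filter's strongest response
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),                          # binary output, e.g. bullish vs. bearish
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)  # x_train: padded token-id sequences
```

The convolution plus global max pooling acts much like an n-gram detector: each filter learns a short pattern of words, and pooling keeps only its strongest activation anywhere in the text, which is why CNNs can compete with RNNs on many classification tasks.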
Evolutionary Algorithms (EAs) are becoming increasingly relevant in today's world as AI-backed solutions see wider use in industries like digital marketing, finance, and healthcare. …
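For readers new to the idea, the sketch below shows the bare skeleton of an evolutionary algorithm: a population of candidates, selection of the fitter ones, and mutation to produce the next generation. It is a generic toy example with an arbitrary objective, not tied to any of the applications mentioned above.

```python
# Bare-bones evolutionary algorithm: selection + Gaussian mutation on real-valued vectors.
import random

def fitness(x):                        # toy objective: vectors closer to zero score higher
    return -sum(v * v for v in x)

def evolve(dim=5, pop_size=30, generations=100, mut_sigma=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank candidates by fitness
        parents = pop[: pop_size // 2]           # truncation selection: keep the better half
        children = [[g + rng.gauss(0, mut_sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                 # next generation
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", round(fitness(best), 4))
```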
"From Matter to Life: Information and Causality" Edited by Sara Imari Walker, Paul C. W. Davies and George F. R. Ellis Cambridge University Press, 2017
Recent advances suggest that the concept of information might hold the key to unravelling the mystery of life's nature and origin. Fresh insights from a broad and authoritative range of articulate and respected experts focus on the transition from matter to life, and hence reconcile the deep conceptual schism between the way we describe physical and biological systems. A unique cross-disciplinary perspective, drawing on expertise from philosophy, biology, chemistry, physics, and cognitive and social sciences, provides a new way to look at the deepest questions of our existence. This book addresses the role of information in life, and how it can make a difference to what we know about the world. Students, researchers, and all those interested in what life is and how it began will gain insights into the nature of life and its origins that touch on nearly every domain of science.
Arthur Gervais (ETH Zurich) and colleagues were also present at the last CCS 2016 – the 23rd ACM Conference on Computer and Communications Security (Hofburg Palace, Vienna, Austria / October 24-28, 2016). They presented another interesting talk with a supporting paper, which The Information Age liked and would like to share and review further here.
Recently I have been researching and have come across some nice papers on the topic of cryptocurrencies and blockchain protocols. These topics appear to be gaining academic traction within the Computer Science and computer security communities. As for the broader applicability and functionality of the major protocol developments, namely the Bitcoin and Ethereum cryptocurrencies, it seems that the challenges to overcome will only grow in the coming months and years. That is not to diminish their remarkable innovative potential and the promise they bring to establish alternative new forms of digital value creation, sharing and enhanced transactional capacity.