A world-first UNSW collaboration that uses previously top-secret technology to zoom through the human body down to the level of a single cell could be a game-changer for medicine, an international research conference in the United States has been told.
The imaging technology, built by the German optical and industrial measurement manufacturer Zeiss, was originally developed to scan silicon wafers for defects.
UNSW Professor Melissa Knothe Tate, the Paul Trainor Chair of Biomedical Engineering, is leading the project, which is using semiconductor technology to explore osteoporosis and osteoarthritis.
Using Google algorithms, Professor Knothe Tate -- an engineer and expert in cell biology and regenerative medicine -- is able to zoom in and out from the scale of the whole joint down to the cellular level "just as you would with Google Maps," reducing to "a matter of weeks analyses that once took 25 years to complete."
Her team is also using cutting-edge microtome and MRI technology to examine how movement and weight bearing affect the movement of molecules within joints, exploring the relationship between blood, bone, lymphatics and muscle. "For the first time we have the ability to go from the whole body down to how the cells are getting their nutrition and how this is all connected," said Professor Knothe Tate. "This could open the door to as yet unknown new therapies and preventions."
Professor Knothe Tate is the first to use the system in humans. She has forged a pioneering partnership with the US-based Cleveland Clinic, Brown and Stanford Universities, as well as Zeiss and Google, to help crunch terabytes of data gathered from human hip studies. Similar research is underway at Harvard University and in Heidelberg, Germany, to map neural pathways and connections in the brains of mice.
The above story is based on materials provided by University of New South Wales.
On the heels of his Allen Institute for Brain Science, the Microsoft co-founder launched the Allen Institute for Artificial Intelligence in an effort to advance the field while reaching back to its past.
People can summarize a complex scene in a few words without thinking twice. It’s much more difficult for computers. But we’ve just gotten a bit closer -- we’ve developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.
Recent research has greatly improved object detection, classification, and labeling. But accurately describing a complex scene requires a deeper representation of what’s going on in the scene, capturing how the various objects relate to one another and translating it all into natural-sounding language.
Many efforts to construct computer-generated natural descriptions of images propose combining current state-of-the-art techniques in both computer vision and natural language processing to form a complete image description approach. But what if we instead merged recent computer vision and language models into a single jointly trained system, taking an image and directly producing a human-readable sequence of words to describe it?
This idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German.
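The encoder-decoder idea above can be sketched in a few lines. This is a toy illustration only, with randomly initialised weights and made-up token IDs standing in for French and German words; a real translation system learns these parameters from parallel sentence pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED, HIDDEN, SRC_VOCAB, TGT_VOCAB = 8, 16, 20, 20

# Randomly initialised parameters -- purely illustrative, not trained.
src_embed = rng.normal(size=(SRC_VOCAB, EMBED))
tgt_embed = rng.normal(size=(TGT_VOCAB, EMBED))
W_enc = rng.normal(size=(HIDDEN, HIDDEN + EMBED)) * 0.1
W_dec = rng.normal(size=(HIDDEN, HIDDEN + EMBED)) * 0.1
W_out = rng.normal(size=(TGT_VOCAB, HIDDEN)) * 0.1

def rnn_step(h, x, W):
    """One vanilla RNN step: new hidden state from old state and input."""
    return np.tanh(W @ np.concatenate([h, x]))

def encode(src_tokens):
    """Fold a source sentence into a single fixed-size vector."""
    h = np.zeros(HIDDEN)
    for tok in src_tokens:
        h = rnn_step(h, src_embed[tok], W_enc)
    return h

def decode(h, max_len=5):
    """Greedily emit target tokens, seeded by the encoder's vector."""
    out, tok = [], 0  # token 0 plays the role of a start symbol
    for _ in range(max_len):
        h = rnn_step(h, tgt_embed[tok], W_dec)
        tok = int(np.argmax(W_out @ h))  # greedy pick over the vocabulary
        out.append(tok)
    return out

vector = encode([3, 7, 1])  # source sentence -> vector representation
target = decode(vector)     # vector representation -> target sequence
print(vector.shape, target)
```

The point of the sketch is the interface: whatever produces the fixed-size vector can be swapped out without touching the decoder, which is exactly the move described next.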
Now, what if we replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images? Normally, the CNN’s last layer is used in a final Softmax among known classes of objects, assigning a probability that each object might be in the image. But if we remove that final layer, we can instead feed the CNN’s rich encoding of the image into a RNN designed to produce phrases. We can then train the whole system directly on images and their captions, so it maximizes the likelihood that descriptions it produces best match the training descriptions for each image.
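The swap described above can be sketched the same way. Here a tiny hand-rolled convolution plus global average pooling stands in for the deep pretrained CNN with its Softmax layer removed; all weights are random and the emitted token IDs are meaningless, so this shows only the data flow from image encoding to word sequence.

```python
import numpy as np

rng = np.random.default_rng(1)

HIDDEN, VOCAB, EMBED, CHANNELS = 16, 20, 8, 4

# Randomly initialised stand-ins for learned parameters.
kernels = rng.normal(size=(CHANNELS, 3, 3)) * 0.1
W_proj = rng.normal(size=(HIDDEN, CHANNELS)) * 0.1
tgt_embed = rng.normal(size=(VOCAB, EMBED))
W_dec = rng.normal(size=(HIDDEN, HIDDEN + EMBED)) * 0.1
W_out = rng.normal(size=(VOCAB, HIDDEN)) * 0.1

def cnn_encode(image):
    """Toy stand-in for a pretrained CNN with its final Softmax removed:
    valid 3x3 convolutions, ReLU, global average pooling, projection."""
    H, W = image.shape
    feats = np.empty(CHANNELS)
    for c in range(CHANNELS):
        fmap = np.array([[np.sum(image[i:i + 3, j:j + 3] * kernels[c])
                          for j in range(W - 2)] for i in range(H - 2)])
        feats[c] = np.maximum(fmap, 0).mean()  # ReLU + global avg pool
    return np.tanh(W_proj @ feats)  # the image encoding fed to the RNN

def caption(image, max_len=5):
    """Greedily decode a token sequence from the CNN's image encoding."""
    h, tok, words = cnn_encode(image), 0, []
    for _ in range(max_len):
        h = np.tanh(W_dec @ np.concatenate([h, tgt_embed[tok]]))
        tok = int(np.argmax(W_out @ h))
        words.append(tok)
    return words

image = rng.normal(size=(8, 8))  # stand-in for pixel input
print(caption(image))
```

In the real system the CNN and RNN are trained jointly end to end, adjusting all of these weights to maximise the likelihood of the training captions.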
Gaming apps can provide big data for researchers - Washington, Dec 25: Mobile-based games that are actually tests of cognition or other brain functions can offer researchers an exciting ...
Our society is increasingly relying on digitalized, aggregated opinions of individuals to make decisions (e.g., product recommendation based on collective ratings). One key requirement of harnessing this "wisdom of the crowd" is the independence of individuals' opinions; yet, in real settings, collective opinions are rarely simple aggregations of independent minds. Recent experimental studies document that disclosing prior collective ratings distorts individuals' decision making as well as their perceptions of quality and value, highlighting a fundamental discrepancy between our perceived values from collective ratings and products' intrinsic values.
"Mathematicians, statisticians, astronomers, and other academia-based people make good data scientists," Kirkpatrick says. "But there's still a disconnect because people in academia don't usually have the business connections or domain knowledge to transition, nor do they know how to market themselves."