Deep_In_Depth: Deep Learning, ML & DS
Scooped by Eric Feuilleaubois

Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, reflections on glasses, and occlusions. Past studies on gaze detection in cars have been based chiefly on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, in a car environment, the error in accurately detecting the pupil center and corneal reflection center increases due to varying ambient light, reflections on the surface of glasses, and motion and optical blur in the captured eye image.
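As a rough illustration of the PCCR idea (the function names, calibration zones, and nearest-vector rule below are invented for illustration; the paper's actual method is far more involved): gaze is inferred from the vector between the detected pupil center and the corneal glint of the NIR illuminator, compared against per-driver calibrated reference vectors.

```python
# Minimal sketch of the pupil center corneal reflection (PCCR) idea:
# gaze is estimated from the vector between the detected pupil center
# and the corneal glint of a NIR illuminator. All names and the
# nearest-vector rule are illustrative, not the paper's calibration.

def pccr_vector(pupil_center, glint_center):
    """Pupil-minus-glint vector in image coordinates."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])

def classify_gaze_zone(vec, zones):
    """Pick the calibrated zone whose reference vector is closest."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(zones, key=lambda name: dist2(vec, zones[name]))

# Toy per-driver calibration: reference PCCR vectors for three zones.
zones = {"road": (0.0, 0.0), "mirror": (6.0, -2.0), "dashboard": (1.0, 5.0)}
vec = pccr_vector(pupil_center=(102.0, 54.0), glint_center=(96.5, 55.5))
print(classify_gaze_zone(vec, zones))  # vec = (5.5, -1.5) → mirror
```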

Deep_In_Depth --> Curated news feed on Deep Learning, ML & DS

Due to a change in Scoop.it's policy, Deep_In_Depth is NO LONGER UPDATED on Scoop.it :(

Please follow on Twitter: @Deep_In_Depth

Might start using paper.li instead.





@Deep_In_Depth is a curated news feed on:
#DataScience #DeepLearning #MachineLearning #Dataviz #analytics #AI #GPUs #CNNs #Statistics #NeuralNetworks
Joe Boutte's curator insight, September 11, 2017 9:43 PM

Great curation by Dr Eric Feuilleaubois. Give it a read.


Automatic Speaker Recognition using Transfer Learning

Even with today’s frequent technological breakthroughs in speech-interactive devices (think Siri and Alexa), few companies have tried their hand at enabling multi-user profiles. Google Home has been the most ambitious in this area, allowing up to six user profiles. The recent boom of this technology is what made the potential for this project very exciting to our team. We also wanted to engage in a project that is still a hot topic in deep-learning research, create interesting tools, learn more about neural network architectures, and make original contributions where possible.

We sought to create a system able to quickly add user profiles and accurately identify their voices with very little training data, a few sentences at most! This learning from one or only a few samples is known as One Shot Learning. This article will outline the phases of our project in detail.
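A minimal sketch of the enroll-and-identify loop such a system needs (the embeddings below are made-up vectors; in the article they would come from a pretrained speaker-embedding network):

```python
# One-shot speaker identification sketch: enroll each user from a
# single embedding and identify a query by cosine similarity to the
# enrolled profiles. Embedding vectors here are invented toy values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

profiles = {}  # speaker name -> enrolled embedding

def enroll(name, embedding):
    profiles[name] = embedding

def identify(embedding):
    # Return the enrolled speaker with the most similar embedding.
    return max(profiles, key=lambda name: cosine(embedding, profiles[name]))

enroll("alice", [0.9, 0.1, 0.0])
enroll("bob",   [0.0, 0.8, 0.6])
print(identify([0.85, 0.2, 0.05]))  # → alice
```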

Get Started with Eager Execution

This tutorial describes how to use machine learning to categorize Iris flowers by species. It uses TensorFlow's eager execution to 1. build a model, 2. train the model on example data, and 3. use the model to make predictions on unknown data. Machine Learning experience isn't required to follow this guide, but you'll need to read some Python code.

TensorFlow programming
There are many TensorFlow APIs available, but we recommend starting with these high-level TensorFlow concepts:

Enable an eager execution development environment,
Import data with the Datasets API,
Build models and layers with TensorFlow's Keras API.
This tutorial shows these APIs and is structured like many other TensorFlow programs:

Import and parse the data sets.
Select the type of model.
Train the model.
Evaluate the model's effectiveness.
Use the trained model to make predictions.
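The five phases above can be sketched end-to-end in plain Python with a toy perceptron (the tutorial itself uses TensorFlow's eager execution, the Keras API, and the Iris data; everything here is an illustrative stand-in for the program shape):

```python
# 1. Import and parse the data sets (here: hard-coded toy examples,
#    (feature pair, class label)).
train = [((5.1, 3.5), 0), ((7.0, 3.2), 1), ((4.9, 3.0), 0), ((6.4, 3.2), 1)]

# 2. Select the type of model: a perceptron, sign(w . x + b).
w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# 3. Train the model with the perceptron update rule.
for _ in range(20):
    for x, y in train:
        err = y - predict(x)
        w = [w[0] + 0.1 * err * x[0], w[1] + 0.1 * err * x[1]]
        b += 0.1 * err

# 4. Evaluate the model's effectiveness (on the training set here).
accuracy = sum(predict(x) == y for x, y in train) / len(train)

# 5. Use the trained model to make predictions on unknown data.
print(accuracy, predict((6.8, 3.1)))  # → 1.0 1
```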

This mind-reading AI can see what you're thinking - and draw a picture of it

Scientists around the world are racing to be the first to develop artificially intelligent algorithms that can see inside our minds.

The idea is not new: in the science fiction of the 1950s and 60s, crazed doctors were frequently seen putting weird contraptions on people’s heads to decipher their thoughts. British TV serial Quatermass and the Pit – in which such a machine is used to translate the thoughts of alien invaders – is a prime example.

Now reality is catching up with fantasy. In the past year, AI experts in China, the US and Japan have published research showing that computers can replicate what people are thinking about by using functional magnetic resonance imaging (or fMRI) machines – which measure brain activity – linked to deep neural networks, which replicate human brain functions.

To protect artificial intelligence from attacks, show it fake data


AI systems can sometimes be tricked into seeing something that’s not actually there, as when Google’s software “saw” a 3-D-printed turtle as a rifle. A way to stop these potential attacks is crucial before the technology can be widely deployed in safety-critical systems like the computer vision software behind self-driving cars.

At MIT Technology Review’s annual EmTech Digital conference in San Francisco this week, Google Brain researcher Ian Goodfellow explained how researchers can protect their systems.

Don’t Let Artificial Intelligence Supercharge Bad Processes

Scenarios describing the potential for artificial intelligence (AI) seem to gravitate toward hyperbole. In wonderful scenarios, AI enables nirvanas of instant optimal processes and prescient humanoids. In doomsday scenarios, algorithms go rogue and humans are, at best, superfluous and, at worst, subservient to the new silicon masters.

However, both of these scenarios require a sophistication that, at least right now, seems far away. Our recent research indicates that most organizations are still in the early stages of AI implementation and nowhere near either of these outcomes.

Convolutional Neural Network — II

Continuing our learning from the last post, we will be covering the following topics in this post:

Convolution over volume
Multiple filters at one time
One layer of convolution network
Understanding the dimensional change
I have tried to explain most topics through illustrations. If something isn’t easy to understand, please ping me.
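The first topic, convolution over volume, can be sketched in NumPy: each filter spans the full input depth, and each filter produces one 2-D output channel (a minimal unpadded, stride-1 version, not the post's code):

```python
# Convolution over volume: a K x K filter spans all C_in input
# channels; C_out filters yield a (H-K+1, W-K+1, C_out) output.
import numpy as np

def conv_over_volume(x, filters):
    """x: (H, W, C_in); filters: (K, K, C_in, C_out)."""
    H, W, C_in = x.shape
    K, _, _, C_out = filters.shape
    out = np.zeros((H - K + 1, W - K + 1, C_out))
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            patch = x[i:i+K, j:j+K, :]  # K x K x C_in patch
            for f in range(C_out):
                # Elementwise multiply over the full volume, then sum.
                out[i, j, f] = np.sum(patch * filters[:, :, :, f])
    return out

x = np.random.rand(6, 6, 3)           # e.g. a 6x6 RGB input
filters = np.random.rand(3, 3, 3, 2)  # two 3x3 filters over all 3 channels
print(conv_over_volume(x, filters).shape)  # → (4, 4, 2)
```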

How AI can learn to generate pictures of cats

In 2014, the research paper Generative Adversarial Nets (GAN) by Goodfellow et al. was a breakthrough in the field of generative models.

Leading researcher Yann Lecun himself called adversarial nets “the coolest idea in machine learning in the last twenty years.”

Today, thanks to this architecture, we’re going to build an AI that generates realistic pictures of cats. How awesome is that?!

5 Machine Learning Projects You Should Not Overlook

It's about that time again... 5 more machine learning or machine learning-related projects you may not yet have heard of, but may want to consider checking out!

Building a Toy Detector with Tensorflow Object Detection API

This project is the second phase of my popular project - Is Google Tensorflow Object Detection API the easiest way to implement image recognition? Here I extend the API to train on a new object that is not part of the COCO dataset.

An Outsider's Tour of Reinforcement Learning

Table of Contents.
Make It Happen. Reinforcement Learning as prescriptive analytics.
Total Control. Reinforcement Learning as Optimal Control.
The Linearization Principle. If a machine learning algorithm does crazy things when restricted to linear models, it’s going to do crazy things on complex nonlinear models too.
The Linear Quadratic Regulator. A quick intro to LQR and why it is a great baseline for benchmarking Reinforcement Learning.
A Game of Chance to You to Him Is One of Real Skill. Laying out the rules of the RL Game and comparing to Iterative Learning Control.
The Policy of Truth. Policy Gradient is a Gradient Free Optimization Method.
A Model, You Know What I Mean? Nominal control and the power of models.
Updates on Policy Gradients. Can we fix policy gradient with algorithmic enhancements?
Clues for Which I Search and Choose. Simple methods solve apparently complex RL benchmarks.
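The LQR baseline mentioned in the table of contents can be sketched with NumPy: a finite-horizon, discrete-time LQR computed by the backward Riccati recursion, applied to a toy double-integrator system (matrices, horizon, and costs below are illustrative, not taken from the series):

```python
# Finite-horizon discrete-time LQR via the backward Riccati recursion,
# for x' = A x + B u with stage cost x'Qx + u'Ru. Toy matrices only.
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Returns feedback gains K_t for t = 0..horizon-1 (u_t = -K_t x_t)."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        # K = (R + B'PB)^{-1} B'PA, then the Riccati update on P.
        Kt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ Kt)
        gains.append(Kt)
    return gains[::-1]  # reorder so gains[0] is the time-0 gain

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # double integrator
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = lqr_gains(A, B, Q, R, horizon=50)
x = np.array([[5.0], [0.0]])
for Kt in K:  # roll the closed loop forward
    x = A @ x - B @ (Kt @ x)
print(float(np.linalg.norm(x)) < 1e-3)  # → True: state driven near the origin
```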

Text Data Preprocessing: A Walkthrough in Python

This post will serve as a practical walkthrough of a text data preprocessing task using some common Python tools.
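As a taste of what such a walkthrough covers, here is a minimal stdlib-only preprocessing sketch (the stopword list and regexes are simplified placeholders, not the post's actual pipeline, which uses common tools such as NLTK):

```python
# Typical text preprocessing steps: strip markup, normalize case,
# remove punctuation, tokenize, and filter stopwords.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "and", "of", "to"}  # tiny toy list

def preprocess(text):
    text = re.sub(r"<[^>]+>", " ", text)   # strip HTML tags
    text = text.lower()                    # normalize case
    text = re.sub(r"[^a-z\s]", " ", text)  # drop punctuation and digits
    tokens = text.split()                  # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("<p>The 2 dogs were running to the park!</p>"))
# → ['dogs', 'were', 'running', 'park']
```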

Smaller, Faster Deep Learning Models

Deep learning is taking off: researchers have built deep learning systems that achieve human-level performance, or even outperform human experts, in certain tasks.

One problem in production systems, however, is the size of deep learning models: popular ConvNets (VGG, ResNet, Inception, etc.) easily take up several hundred MB. While those models can be hosted on the server side for queries, we have much more to offer if we can compress a model to a size small enough that users can download it to their devices for offline usage.

One example of a compressed model is SqueezeNet, whose authors claim accuracy similar to AlexNet with 50x fewer parameters and a model size under 0.5 MB. This is desirable for model deployment on mobile devices for offline usage.

In the following sections I’ll show some benchmarks and experiments with SqueezeNet.
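A back-of-envelope sketch of where SqueezeNet's savings come from: its fire module replaces a plain 3x3 convolution with a 1x1 "squeeze" layer followed by mixed 1x1/3x3 "expand" layers. The channel counts below are illustrative, not the paper's exact configuration:

```python
# Parameter-count arithmetic: plain 3x3 conv layer vs a SqueezeNet-style
# fire module with the same number of output channels (biases ignored).

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def fire_params(c_in, squeeze, expand_1x1, expand_3x3):
    return (conv_params(1, c_in, squeeze)           # 1x1 squeeze layer
            + conv_params(1, squeeze, expand_1x1)   # 1x1 expand branch
            + conv_params(3, squeeze, expand_3x3))  # 3x3 expand branch

plain = conv_params(3, 128, 128)     # 3x3 conv, 128 -> 128 channels
fire = fire_params(128, 16, 64, 64)  # fire module, also 128 output channels
print(plain, fire, round(plain / fire, 1))  # → 147456 12288 12.0
```

The squeeze layer shrinks the channel count before the expensive 3x3 filters see it, which is where most of the saving comes from; the rest of SqueezeNet's <0.5 MB figure comes from deep compression techniques applied on top.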


Highlights from the TensorFlow Developer Summit, 2018


Today, we’re holding the second TensorFlow Developer Summit at the Computer History Museum in Mountain View, CA! The event brings together over 500 TensorFlow users in-person and thousands tuning into the livestream at TensorFlow events around the world. The day is filled with new product announcements along with technical talks from the TensorFlow team and guest speakers.

Machine learning is solving challenging problems that impact everyone around the world. Problems that we thought were impossible or too complex are now solvable with this technology. Using TensorFlow, we’ve already seen great advancements in many different fields. For example:

Astrophysicists are using TensorFlow to analyze large amounts of data from the Kepler mission to discover new planets.
Medical researchers are using ML techniques with TensorFlow to assess a person’s cardiovascular risk of a heart attack and stroke.
Air Traffic Controllers are using TensorFlow to predict flight routes through crowded airspace for safe and efficient landings.
Engineers are using TensorFlow to analyze auditory data in the rainforest to detect logging trucks and other illegal activities.
Scientists in Africa are using TensorFlow to detect diseases in cassava plants to improve yields for farmers.


arXiv - Modeling Customer Engagement from Partial Observations

It is of high interest for a company to identify customers expected to bring the largest profit in the upcoming period. Knowing as much as possible about each customer is crucial for such predictions. However, their demographic data, preferences, and other information that might be useful for building loyalty programs is often missing. Additionally, modeling relations among different customers as a network can be beneficial for predictions at an individual level, as similar customers tend to have similar purchasing patterns. We address this problem by proposing a robust framework for structured regression on deficient data in evolving networks with a supervised representation learning based on neural features embedding. The new method is compared to several unstructured and structured alternatives for predicting customer behavior (e.g. purchasing frequency and customer ticket) on user networks generated from customer databases of two companies from different industries. The obtained results show 4% to 130% improvement in accuracy over alternatives when all customer information is known. Additionally, the robustness of our method is demonstrated when up to 80% of demographic information was missing where it was up to several folds more accurate as compared to alternatives that are either ignoring cases with missing values or learn their feature representation in an unsupervised manner.

5 Data Scientists on Making the Leap from Academia to Industry

Set Operations can be executed by both SQL and Python, and the decision of which language to use when depends on your goals.

Getting Started with PyTorch Part 1: Understanding how Automatic Differentiation works

When I started to code neural networks, I ended up using what everyone else around me was using. TensorFlow.

But recently, PyTorch has emerged as a major contender in the race to be the king of deep learning frameworks. What makes it really alluring is its dynamic computation graph paradigm. Don’t worry if the last line doesn’t make sense to you now. By the end of this post, it will. But take my word that it makes debugging neural networks way easier.
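A toy illustration of the dynamic-graph idea (this `Var` class is for intuition only; PyTorch's autograd works on tensors and is far more general): each operation records its local derivatives as it executes, so the backward graph is built on the fly rather than declared ahead of time.

```python
# Tiny reverse-mode autodiff: every op stores (parent, local derivative)
# pairs as it runs, and backward() replays them with the chain rule.

class Var:
    def __init__(self, value, parents=()):
        self.value, self.grad, self.parents = value, 0.0, parents

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value,
                   parents=[(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   parents=[(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # the graph is recorded while this line executes
z.backward()
print(z.value, x.grad, y.grad)  # → 8.0 4.0 2.0
```

Because the graph is rebuilt on every forward pass, ordinary Python control flow (loops, conditionals, breakpoints) works inside the model, which is exactly what makes debugging easier.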

AutoML Vision in action: from ramen to branded goods


Take a look at the three ramen bowls below. Can you believe that a machine learning (ML) model can identify the exact shop each bowl is made at, out of 41 ramen shops, with 95% accuracy? Data scientist Kenji Doi built an AI-enabled ramen expert classifier that can discern the minute details that make one shop’s bowl of ramen different from the next one’s.

Ramen bowls made at three different Ramen Jiro shops
Ramen Jiro is one of the most popular chain restaurant franchises for ramen fans in Japan, because of its generous portions of toppings, noodles, and soup served at low prices. They have 41 branches around Tokyo, and they serve the same basic menu at each shop.

As you can see in the photo, it's almost impossible for a human (especially if you're new to Ramen Jiro) to tell which shop each bowl was made at. They just look the same. You wouldn’t think you could identify which of the 41 shops made a particular bowl of soup just by looking at one of these photos.


Meet the ‘Lady Gaga of Mathematics’ helming France’s AI task force

On a crisp Saturday morning in Orsay, a southwestern suburb of Paris with some 16,500 inhabitants, the rue de Paris was bustling. But while many residents were doing their usual weekend shopping at the fishmonger or the butcher shop, further up the street, in a small former chateau that is now the town’s cultural center, about 80 people had set aside their late-morning hours to hear the “voeux” of their legislative representative to the National Assembly, Cédric Villani.

Convolutional Neural Network — I

Before we jump into the full convolutional neural network, let's first understand the basic underlying concept and then build on that.

This post will explain the following:

Convolution Concept
Convolutional Neural Network
Strided Convolutions

BigDL Spark deep learning library VM now available on Microsoft Azure Marketplace

BigDL deep learning library is a Spark-based framework for creating and deploying deep learning models at scale. While it has previously been deployed on Azure HDInsight and the Data Science VM, it is now also available on Azure Marketplace as a fixed VM image, which represents a further step in reducing deployment complexity.

Since BigDL is an integral part of Spark, a user does not need to explicitly manage distributed computations. While providing high-level control “knobs”, such as the number of compute nodes, cores, and batch size, a BigDL application leverages the stable Spark infrastructure for node communication and resource management during its execution. BigDL applications can be written in either Python or Scala and achieve high performance through both algorithmic optimization and tight integration with Intel’s Math Kernel Library (MKL).

Machine learning mega-benchmark: GPU providers (part 2)

We recently published a large-scale machine learning benchmark using word2vec, comparing several popular hardware providers and ML frameworks on pragmatic aspects such as cost, ease of use, stability, scalability, and performance. Since that benchmark only looked at CPUs, we also ran an analogous ML benchmark focused on GPUs.

3 Essential Google Colaboratory Tips & Tricks

Google Colaboratory is a promising machine learning research platform. Here are 3 tips to simplify its usage and facilitate using a GPU, installing libraries, and uploading data files.

Off the Beaten path – Using Deep Forests to Outperform CNNs and RNNs

How about a deep learning technique based on decision trees that outperforms CNNs and RNNs, runs on your ordinary desktop, and trains with relatively small datasets? This could be a major disruptor for AI.

GAN: A Beginner’s Guide to Generative Adversarial Networks

To understand GANs, you should know how generative algorithms work, and for that, contrasting them with discriminative algorithms is instructive. Discriminative algorithms try to classify input data; that is, given the features of a data instance, they predict a label or category to which that data belongs.

For example, given all the words in an email, a discriminative algorithm could predict whether the message is spam or not_spam. spam is one of the labels, and the bag of words gathered from the email is the feature set that constitutes the input data. When this problem is expressed mathematically, the label is called y and the features are called x. The formulation p(y|x) is used to mean “the probability of y given x”, which in this case would translate to “the probability that an email is spam given the words it contains.”

So discriminative algorithms map features to labels. They are concerned solely with that correlation. One way to think about generative algorithms is that they do the opposite. Instead of predicting a label given certain features, they attempt to predict features given a certain label.

The question a generative algorithm tries to answer is: Assuming this email is spam, how likely are these features? While discriminative models care about the relation between y and x, generative models care about “how you get x.” They allow you to capture p(x|y), the probability of x given y, or the probability of features given a class. (That said, generative algorithms can also be used as classifiers. It just so happens that they can do more than categorize input data.)

Another way to think about it is to distinguish discriminative from generative like this:

Discriminative models learn the boundary between classes
Generative models model the distribution of individual classes
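The p(y|x) versus p(x|y) distinction above can be made concrete with a toy word-count example (the counts are invented for illustration): a discriminative view estimates the probability of a label given a word directly from co-occurrence counts, while a generative view estimates the probability of a word within each class.

```python
# Toy "corpus": counts[label][word] = emails of that label containing
# the word; n_emails[label] = total emails with that label.
from fractions import Fraction as F

counts = {"spam":     {"winner": 8, "meeting": 1},
          "not_spam": {"winner": 4, "meeting": 18}}
n_emails = {"spam": 10, "not_spam": 20}

def p_label_given_word(word, label):
    # Discriminative direction: p(y | x), straight from co-occurrence.
    total = counts["spam"][word] + counts["not_spam"][word]
    return F(counts[label][word], total)

def p_word_given_label(word, label):
    # Generative direction: p(x | y), modeled per class.
    return F(counts[label][word], n_emails[label])

print(p_label_given_word("winner", "spam"))   # → 2/3
print(p_word_given_label("winner", "spam"))   # → 4/5
print(p_word_given_label("meeting", "spam"))  # → 1/10
```

As the article notes, the generative quantities can still be turned into a classifier (via Bayes' rule with class priors), which is why generative models can also categorize input data.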

The Way Computers Use Memory Is Being Optimised Using Neural Networks. Here’s How

We know for a fact that Moore’s Law no longer holds. Our computation requirements far exceed what is available right now. There has been an explosion in the complexity and requirements of modern computing, especially with the advent of Machine Learning. This has forced many to look beyond traditional processors towards GPUs and the like. But there is a group of researchers who believe that neural networks themselves can be the solution to making current computing more efficient.

Today’s computers assume that the memory usage needed to perform a certain task happens exactly the way it did in the past. So they create a rules table that chronicles the usage of memory and mirrors it for all future tasks. Think of it as a rules-based mechanism for allocating memory, which is highly efficient when memory availability is high. But modern computational workloads are orders of magnitude larger than traditional workloads. As the working set becomes larger than the predictive table, prediction accuracy drops sharply. This trend poses a challenge, and scaling the predictive table is very costly and cumbersome.
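A toy stand-in for the "rules table" described above (the table design and access traces are invented for illustration) shows the failure mode: a fixed-size table that records which address follows which predicts well while the working set fits, and collapses once it does not.

```python
# Fixed-capacity next-address prediction table with LRU eviction.
# Accuracy is measured only on addresses the table has seen before.
from collections import OrderedDict

def table_accuracy(trace, capacity):
    table = OrderedDict()  # addr -> address that followed it last time
    hits = total = 0
    for prev, nxt in zip(trace, trace[1:]):
        if prev in table:
            total += 1
            hits += table[prev] == nxt  # did the replayed rule hold?
        table[prev] = nxt
        table.move_to_end(prev)
        if len(table) > capacity:
            table.popitem(last=False)  # evict least recently used entry
    return hits / max(total, 1)

small = list(range(50)) * 20    # working set of 50 addresses, repeated
large = list(range(500)) * 20   # working set of 500 addresses, repeated
print(table_accuracy(small, capacity=100))  # → 1.0: set fits in the table
print(table_accuracy(large, capacity=100))  # → 0.0: entries evicted before reuse
```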