direction

AppropriatingNewTechnologies

AppropriatingNewTechnologies - A half-semester class at ITP.
Great scooppy's insight:

Good summary


Microexpression - Wikipedia, the free encyclopedia

A microexpression is a brief, involuntary facial expression that appears on a person's face according to the emotions being experienced. Microexpressions usually occur in high-stakes situations, where people have something to lose or gain. They occur when a person is consciously trying to conceal all signs of how he or she is feeling, or when a person does not consciously know how he or she is feeling.[1][2] Unlike regular facial expressions, microexpression reactions are difficult to hide. Microexpressions express the seven universal emotions: disgust, anger, fear, sadness, happiness, surprise, and contempt. In the 1990s, however, Paul Ekman expanded his list of basic emotions to include a range of positive and negative emotions, not all of which are encoded in facial muscles: amusement, contempt, embarrassment, excitement, guilt, pride, relief, satisfaction, pleasure, and shame.[3][4] Microexpressions are very brief, lasting only 1/25 to 1/15 of a second.[5]
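Those durations matter for anyone trying to capture microexpressions on video. A rough, illustrative Python sketch, using only the 1/25 s to 1/15 s figures quoted above, shows how few frames an ordinary camera gets:

# Back-of-the-envelope: how many video frames does a microexpression span?
# Durations come from the excerpt above; frame rates are common camera settings.
MICRO_MIN_S = 1 / 25  # shortest reported duration, in seconds
MICRO_MAX_S = 1 / 15  # longest reported duration, in seconds

for fps in (24, 30, 60, 120):
    print(f"{fps:>3} fps: {MICRO_MIN_S * fps:.1f} to {MICRO_MAX_S * fps:.1f} frames")

At 24 or 30 fps a microexpression may occupy only one or two frames, which is one reason frame-by-frame analysis keeps coming up in this literature.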

Microexpressions were first discovered by Haggard and Isaacs. In their 1966 study, they described how they discovered these "micromomentary" expressions while "scanning motion picture films of psychotherapy hours, searching for indications of non-verbal communication between therapist and patient".[6] Ekman and Friesen's breakthrough research on the facial expression of emotion, reprinted as Unmasking the Face, uses scores of photographs showing the emotions of surprise, fear, disgust, contempt, anger, happiness, and sadness; the authors explain how to identify these basic emotions correctly and how to tell when people try to mask, simulate, or neutralize them.

In the 1960s, William S. Condon pioneered the study of interactions at the fraction-of-a-second level. In his famous research project, he scrutinized a four-and-a-half-second film segment frame by frame, where each frame represented 1/25th second. After studying this film segment for a year and a half, he discerned interactional micromovements, such as the wife moving her shoulder exactly as the husband's hands came up, which combined yielded microrhythms.[7]

Great scooppy's insight:

Micro ... everything


FACS Investigator's Guide - Chapter 1

FACS Investigator's Guide - Chapter 1: Background, Development, and Overview
Great scooppy's insight:

FACS


CiteSeerX — Citation Query EMFACS-7: Emotional Facial Action Coding System, Unpublished manuscript

CiteSeerX - Scientific documents that cite the following paper: EMFACS-7: Emotional Facial Action Coding System, Unpublished manuscript
Great scooppy's insight:

Emotional FACS citations


'A Dangerous Figure': this is the face of Britain's unemployed youth - The Verge

Across the UK, more than one million people aged 16 to 24 are unemployed.

Great scooppy's insight:

Good statistics


Facial Action Coding System - Wikipedia, the free encyclopedia

Facial Action Coding System (FACS) is a system for taxonomizing human facial movements by their appearance on the face, based on a system originally developed by the Swedish anatomist Carl-Herman Hjortsjö.[1] It was later adopted by Paul Ekman and Wallace V. Friesen, and published in 1978.[2] Ekman, Friesen, and Joseph C. Hager published a significant update to FACS in 2002.[3] FACS encodes the movements of individual facial muscles from slight, momentary changes in facial appearance.[4] It is a common standard for systematically categorizing the physical expression of emotions, and it has proven useful to psychologists and to animators. Because manual coding is subjective and time-consuming, FACS has also been implemented as an automated system that detects faces in video, extracts the geometric features of the faces, and then produces temporal profiles of each facial movement.[4]
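As a concrete illustration of that detect-extract-profile pipeline, here is a minimal Python sketch assuming OpenCV is available; the "geometric features" are reduced to bounding-box coordinates, and the step that classifies them into facial movements is deliberately left out, since real systems train dedicated models for it:

import cv2

def face_temporal_profile(video_path):
    """Detect a face in each frame and record its bounding-box geometry over time."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    profile = []  # one geometry record per frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            # Keep the largest detection; its geometry is this frame's feature vector.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            profile.append({"x": int(x), "y": int(y), "w": int(w), "h": int(h)})
        else:
            profile.append(None)  # no face found in this frame
    cap.release()
    return profile  # a temporal profile: one entry per video frame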

Using FACS,[5] human coders can manually code nearly any anatomically possible facial expression, deconstructing it into the specific Action Units (AUs) and the temporal segments that produced the expression. As AUs are independent of any interpretation, they can be used for any higher-order decision-making process, including recognition of basic emotions or pre-programmed commands for an ambient intelligent environment. The FACS manual is over 500 pages long and provides the AUs, as well as Ekman's interpretation of their meaning.

FACS defines AUs, which are contractions or relaxations of one or more muscles. It also defines a number of Action Descriptors, which differ from AUs in that the authors of FACS have not specified their muscular basis and have not distinguished specific behaviors as precisely as they have for the AUs.
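To make "higher-order decision making" concrete, here is a small Python sketch built on a commonly cited EMFACS-style mapping from AU combinations to basic emotions; the exact prototypes vary across sources, so treat the table as illustrative rather than authoritative:

# Illustrative EMFACS-style emotion prototypes, keyed by AU number.
# Sources disagree on details; these sets are for demonstration only.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},                  # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},               # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},            # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},            # brow lowerer + lid and lip tighteners
    "disgust":   {9, 15, 16},              # nose wrinkler + lip depressors
}

def match_emotions(observed_aus):
    """Return every emotion whose full AU prototype appears in the observation."""
    observed = set(observed_aus)
    return [name for name, proto in EMOTION_PROTOTYPES.items() if proto <= observed]

print(match_emotions([6, 12, 25]))  # -> ['happiness']

Because AUs themselves carry no interpretation, the same coded input could just as easily drive a command table for the ambient-intelligence use case mentioned above.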

Great scooppy's insight:

wiki help

Trends in Human-Machine Interface Interaction Design (人機介面互動設計趨勢)

Chen Chien-hsiung (陳建雄), an associate professor in the Department of Industrial and Commercial Design at National Taiwan University of Science and Technology, takes "trends in human-machine interface interaction design" as his topic, analyzing from an academic perspective how user interface (UI) design interacts with users and where it is heading... Professor Chen notes that current UIs fall into two broad categories, concrete and abstract media, both of which help users operate 3C electronics (mobile phones, computers, televisions, automotive electronics, and so on). Concrete UIs achieve interaction through tangible controls such as buttons, knobs, and other input devices, whose size and angle must be designed ergonomically so that users can operate them comfortably...
Great scooppy's insight:

Application and scenario summary


A Critique of Chernoff Faces

Chernoff Faces are discussed in every information visualization course, and are referenced in many papers that talk about glyphs. Yet the only serious use of faces in visualization is for calibration, not for data display.
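For readers who have not met them, a Chernoff face maps each data dimension to one facial feature. The following matplotlib sketch is a minimal illustration; the feature assignments are arbitrary, which is exactly the arbitrariness the critique takes aim at:

import matplotlib.pyplot as plt
from matplotlib.patches import Arc, Ellipse

def chernoff_face(ax, face_w, face_h, eye_size, mouth_curve):
    """Draw one glyph; the four inputs are assumed normalized to [0, 1]."""
    ax.add_patch(Ellipse((0.5, 0.5), 0.3 + 0.4 * face_w, 0.4 + 0.4 * face_h,
                         fill=False))                     # head outline
    for x in (0.4, 0.6):                                  # two eyes
        ax.add_patch(Ellipse((x, 0.6), 0.04 + 0.06 * eye_size,
                             0.04 + 0.06 * eye_size))
    ax.add_patch(Arc((0.5, 0.4), 0.2, 0.1 + 0.2 * mouth_curve,
                     theta1=180, theta2=360))             # mouth sweep encodes variable 4
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_aspect("equal"); ax.axis("off")

# Three hypothetical four-dimensional records, one face per record.
fig, axes = plt.subplots(1, 3, figsize=(6, 2))
for ax, record in zip(axes, [(0.2, 0.8, 0.3, 0.9),
                             (0.5, 0.5, 0.5, 0.5),
                             (0.9, 0.2, 0.8, 0.1)]):
    chernoff_face(ax, *record)
plt.show()

Even this toy version shows the problem the article raises: which variable gets the mouth and which gets the eyes is an arbitrary choice that strongly shapes what the reader perceives.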
Great scooppy's insight:
Chernoff faces

The Machine Perception Toolbox: Introduction

Great scooppy's insight:

Demo code

Human Emotions Explained In 60 Short Interviews - NPR (blog)

In some sense we're all experts in emotion. We experience emotion every day, all the time.
Great scooppy's insight:

Good prototype


Kolmogorov

A collection of open source software and documents on machine perception and machine learning. Includes a state of the art face detector (MPISearch), video ...
Great scooppy's insight:

Demo code
