Titlebook: Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction; COST Action 2102 Int Anna Esposito,Nikolaos G. Bourbakis,Ioanni

Views: 51895 | Replies: 63
Posted on 2025-3-21 18:00:51
Title: Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction
Subtitle: COST Action 2102 Int…
Editors: Anna Esposito, Nikolaos G. Bourbakis, Ioannis Hatzil…
Series: Lecture Notes in Computer Science
Description: This book is dedicated to the dreamers, their dreams, and their perseverance in research work. This volume brings together the selected and peer-reviewed contributions of the participants at the COST 2102 International Conference on Verbal and Nonverbal Features of Human–Human and Human–Machine Interaction, held in Patras, Greece, October 29–31, 2007, hosted by the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007). The conference was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org) in the domain of Information and Communication Technologies (ICT) for disseminating the advances of the research activity developed within COST Action 2102: "Cross-Modal Analysis of Verbal and Nonverbal Communication" (www.cost2102.eu). COST Action 2102 is a network of about 60 European and 6 overseas laboratories whose aim is to develop "an advanced acoustical, perceptual and psychological analysis of verbal and non-verbal communication signals originating in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of identifying the human emotional states. Partic…
Publication date: Conference proceedings 2008
Keywords: biometric; data mining; emotion recognition; facial expressions; facial patterns; gestures; hci; multimodal
Edition: 1
DOI: https://doi.org/10.1007/978-3-540-70872-8
ISBN (softcover): 978-3-540-70871-1
ISBN (eBook): 978-3-540-70872-8
Series ISSN: 0302-9743
Series E-ISSN: 1611-3349
Copyright: Springer-Verlag Berlin Heidelberg 2008
The publication information is still being updated.

Bibliometric indicators for this title (impact factor and its subject ranking, web visibility and its subject ranking, citation count and its subject ranking, annual citations and their subject ranking, reader feedback and its subject ranking): no data available yet.
Single-choice poll, 0 participants:
Perfect with Aesthetics: 0 votes (0%)
Better Implies Difficulty: 0 votes (0%)
Good and Satisfactory: 0 votes (0%)
Adverse Performance: 0 votes (0%)
Disdainful Garbage: 0 votes (0%)
Posted on 2025-3-21 21:08:44
Ekfrasis: A Formal Language for Representing and Generating Sequences of Facial Patterns for Studyin… (abstract excerpt) …sis) as a software methodology that synthesizes (or generates) various facial expressions automatically by appropriately combining facial features. The main objective is to use this methodology to generate various combinations of facial expressions and to study whether these combinations efficiently represent emotional behavioral patterns.
Posted on 2025-3-22 05:34:05
Study on Speaker-Independent Emotion Recognition from Speech on Real-World Data (abstract excerpt) …classifiers at the utterance level is applied, in an attempt to improve the performance of the emotion recognizer. Experimental results demonstrate significant differences in recognizing emotions in acted versus real-world speech.
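The utterance-level fusion mentioned in this excerpt can be illustrated with a rough Python sketch (not the paper's actual system): several off-the-shelf scikit-learn classifiers are trained on segment-level features, and their class probabilities are averaged over all segments of an utterance before a single emotion label is chosen. The feature dimensionality, emotion label set, and classifier choices are assumptions made for the example.

```python
# Illustrative sketch only, not the paper's system: several classifiers are
# trained on segment-level features, and their class probabilities are averaged
# over all segments of an utterance to make one utterance-level decision.
# The feature dimensionality, emotion labels, and classifiers are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

EMOTIONS = ["neutral", "anger", "happiness", "sadness"]     # assumed label set

rng = np.random.default_rng(1)
X_train = rng.normal(size=(400, 26))        # e.g. 26 prosodic/spectral features per segment
y_train = rng.integers(0, len(EMOTIONS), size=400)

classifiers = [
    SVC(probability=True).fit(X_train, y_train),
    RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train),
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
]

def predict_utterance(segments: np.ndarray) -> str:
    """Average all classifiers' probabilities over an utterance's segments."""
    probs = np.mean([clf.predict_proba(segments) for clf in classifiers], axis=(0, 1))
    return EMOTIONS[int(np.argmax(probs))]

utterance = rng.normal(size=(12, 26))       # 12 segments belonging to one utterance
print(predict_utterance(utterance))
```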
Posted on 2025-3-22 18:11:19
Towards Slovak Broadcast News Automatic Recording and Transcribing Service (abstract excerpt) …also all automatically extracted metadata (verbal and nonverbal), and also to select incorrectly automatically identified data. The architecture of the present system is linear, meaning every module starts only after the previous one has finished processing the data.
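As a toy illustration of the linear architecture described in this excerpt, the sketch below chains placeholder modules (recording, segmentation, transcription, metadata extraction) so that each one starts only after the previous one has returned; the module names and data layout are assumptions, not the actual Slovak service.

```python
# Sketch of the "linear" architecture mentioned above: each module starts only
# after the previous one has finished processing the data. Module names and the
# data dictionary are assumptions for illustration, not the actual Slovak system.
from typing import Callable, Dict, List

def record(data: Dict) -> Dict:
    data["audio"] = "<recorded broadcast audio>"                      # placeholder
    return data

def segment(data: Dict) -> Dict:
    data["segments"] = ["segment_1", "segment_2"]                     # placeholder segmentation
    return data

def transcribe(data: Dict) -> Dict:
    data["transcript"] = ["text of segment_1", "text of segment_2"]   # placeholder ASR output
    return data

def extract_metadata(data: Dict) -> Dict:
    data["metadata"] = {"speaker_turns": 2, "nonverbal_events": []}   # placeholder metadata
    return data

PIPELINE: List[Callable[[Dict], Dict]] = [record, segment, transcribe, extract_metadata]

def run(data: Dict) -> Dict:
    # Strictly sequential execution: module i+1 runs only after module i returns.
    for module in PIPELINE:
        data = module(data)
    return data

print(run({}))
```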
Posted on 2025-3-23 01:36:33
Combining Features for Recognizing Emotional Facial Expressions in Static Images (abstract excerpt) …set was obtained by combining PCA and LDA features (93% correct recognition rate), whereas, combining PCA, LDA, and Gabor filter features, the net gave 94% correct classification on facial expressions of subjects not included in the training set.
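The PCA/LDA feature combination mentioned here can be sketched with scikit-learn: the example below concatenates PCA and LDA projections into one feature vector per image and trains a simple classifier on them. The toy data, the dimensionalities, and the linear SVM (standing in for the paper's network) are all assumptions for illustration.

```python
# Minimal sketch, assuming scikit-learn and toy data: combine PCA and LDA
# projections of face-image vectors into one feature set before classification,
# in the spirit of the feature combination described above. The classifier
# (a linear SVM here) and all dimensionalities are placeholders, not the
# paper's network or data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 1024))            # 240 face images, 32x32 pixels, flattened
y = rng.integers(0, 6, size=240)            # 6 basic-emotion labels (toy data)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pca = PCA(n_components=40).fit(X_tr)                                 # unsupervised subspace
lda = LinearDiscriminantAnalysis(n_components=5).fit(X_tr, y_tr)     # supervised subspace

# Concatenate both projections into a single combined feature vector per image.
F_tr = np.hstack([pca.transform(X_tr), lda.transform(X_tr)])
F_te = np.hstack([pca.transform(X_te), lda.transform(X_te)])

clf = SVC(kernel="linear").fit(F_tr, y_tr)
print("held-out accuracy (toy data):", clf.score(F_te, y_te))
```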
Posted on 2025-3-23 09:26:55
Expressive Speech Synthesis Using Emotion-Specific Speech Inventories (abstract excerpt) …for 99% of the logatoms and for all natural sentences. Recognition rates significantly above chance level were obtained for each emotion. The recognition rate for some synthetic sentences exceeded that of the natural ones.