Metastasis
Posted on 2025-3-25 11:22:01
https://doi.org/10.1007/978-3-319-11397-5
Keywords: NLP; artificial intelligence; machine learning; natural language processing; probability and statistics
MAZE
Posted on 2025-3-25 13:51:56
ISBN 978-3-319-11396-8. © Springer International Publishing Switzerland 2014.
Perplex
Posted on 2025-3-25 22:52:41
Laurent Besacier, Adrian-Horia Dediu, Carlos Martín-Vide (Eds.). Includes supplementary material.
jumble
Posted on 2025-3-26 04:05:43
Conference proceedings 2014, held in Grenoble, France, in October 2014. The 18 full papers, presented together with three invited talks, were carefully reviewed and selected from 53 submissions. The papers are organized in topical sections on machine translation, speech and speaker recognition, machine learning methods, text extraction and categorization, and mining text.
Genistein
Posted on 2025-3-26 07:22:49
Robust Speaker Recognition Using MAP Estimation of Additive Noise in i-vectors Space
… noise density function using a MAP approach. Based on NIST data, we show that it is possible to improve the baseline system performance by up to 60 %. A noise-adding tool is used to help simulate a real-world noisy environment at different signal-to-noise-ratio levels.
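The abstract describes MAP estimation of additive noise directly in i-vector space. As a minimal illustrative sketch (not the paper's actual model), assume a noisy i-vector x = w + n, with independent diagonal-covariance Gaussian priors on the clean vector w and the noise n; under those assumptions the MAP noise estimate has a closed form, and subtracting it compensates the observation:

```python
import numpy as np

def map_noise_estimate(x, mu_w, var_w, mu_n, var_n):
    """MAP estimate of additive noise n in x = w + n.

    Assumes w ~ N(mu_w, diag(var_w)) and n ~ N(mu_n, diag(var_n));
    this is a simplified stand-in for the paper's noise density model.
    Maximizing log p(n) + log p(x - n) over n gives:
        n_hat = (mu_n/var_n + (x - mu_w)/var_w) / (1/var_n + 1/var_w)
    """
    precision = 1.0 / var_n + 1.0 / var_w
    return (mu_n / var_n + (x - mu_w) / var_w) / precision

# Denoise by subtracting the MAP noise estimate from the observation.
x = np.array([1.0, -2.0, 0.5])          # observed (noisy) i-vector
mu_w = np.zeros(3); var_w = np.ones(3)  # clean-speech prior (assumed)
mu_n = np.zeros(3); var_n = np.ones(3)  # noise prior (assumed)
n_hat = map_noise_estimate(x, mu_w, var_w, mu_n, var_n)
w_hat = x - n_hat                        # compensated i-vector
```

With equal zero-mean priors, as here, the posterior splits the observation evenly, so `n_hat = x / 2`; shrinking `var_n` pulls `n_hat` toward `mu_n`.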
Cytology
Posted on 2025-3-26 10:28:07
Structured GMM Based on Unsupervised Clustering for Recognizing Adult and Child Speech
… structured GMM, where the components of the Gaussian densities are structured with respect to the speaker classes. In a first approach, the mixture weights of the structured GMM are made dependent on the speaker class. In a second approach, the mixture weights are replaced by explicit dependencies betw…
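The first approach above (shared Gaussian components, but mixture weights that depend on the speaker class) can be sketched as follows; the component parameters, class labels, and weight values here are invented for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_logpdf(x, means, variances):
    """Per-component diagonal-Gaussian log densities for one frame x."""
    return -0.5 * np.sum(
        np.log(2 * np.pi * variances) + (x - means) ** 2 / variances, axis=1
    )

def structured_gmm_loglik(x, means, variances, class_weights, speaker_class):
    """Log-likelihood of frame x: components are shared across classes,
    only the mixture weights depend on the speaker class."""
    log_comp = np.log(class_weights[speaker_class]) + gaussian_logpdf(
        x, means, variances
    )
    m = log_comp.max()
    return m + np.log(np.exp(log_comp - m).sum())  # stable log-sum-exp

means = np.array([[0.0, 0.0], [3.0, 3.0]])  # 2 shared components, 2-dim
variances = np.ones((2, 2))
class_weights = {                            # assumed class-dependent weights
    "adult": np.array([0.8, 0.2]),
    "child": np.array([0.2, 0.8]),
}
frame = np.array([3.0, 3.0])                 # frame near the second component
ll_adult = structured_gmm_loglik(frame, means, variances, class_weights, "adult")
ll_child = structured_gmm_loglik(frame, means, variances, class_weights, "child")
```

Because the frame lies on the second component, the class whose weights favor that component ("child" here) scores higher, which is exactly what lets the structured weights discriminate adult from child speech.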
Vest
Posted on 2025-3-26 16:25:39
Automatic Phonetic Transcription in Two Steps: Forced Alignment and Burst Detection
… the forced-alignment-based approach reaches accuracies in the range of what has been reported for inter-transcriber agreement on conversational speech. Furthermore, our burst detector outperforms previous tools, with accuracies between 98 % and 74 % for the different conditions in read speech, and between 82 % …
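For intuition about the burst-detection step, here is a toy energy-ratio detector (a hedged stand-in, not the paper's tool): a plosive burst shows up as a sudden jump in short-time frame energy, so flagging the first frame whose energy exceeds its predecessor's by a large factor gives a crude detector:

```python
import numpy as np

def detect_burst(signal, frame_len=64, ratio=8.0, floor=1e-6):
    """Return the index of the first frame whose energy jumps by at least
    `ratio` over the previous frame, or None if no such jump occurs.
    A toy illustration of burst detection, not the paper's method."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1) + floor  # floor avoids divide-by-zero
    for i in range(1, n_frames):
        if energy[i] / energy[i - 1] >= ratio:
            return i
    return None

# Synthetic example: near-silence followed by a sudden burst.
rng = np.random.default_rng(0)
sig = np.concatenate([
    0.001 * rng.standard_normal(256),  # near-silence (frames 0-3)
    0.5 * rng.standard_normal(256),    # burst onset  (frames 4-7)
])
burst_frame = detect_burst(sig)        # first high-energy frame: index 4
```

A real detector would work on band-limited energy within the aligned closure interval and tune `frame_len`/`ratio` per condition; the ratio test above only conveys the onset-jump idea.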