Important Dates

  • Paper Submission Due: August 6, 2021 (extended from July 23, 2021)
  • Notification of Acceptance: September 3, 2021 (extended from August 27, 2021)
  • Camera-ready Due: September 10, 2021
  • Early Registration ends: September 15, 2021
  • Late Registration ends: October 5, 2021
  • On-Site Registration: October 15-16, 2021

All deadlines are 11:59 pm UTC-12 (Anywhere on Earth).

Welcome to ROCLING 2021!

ROCLING 2021 is the 33rd annual Conference on Computational Linguistics and Speech Processing in Taiwan, sponsored by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP). The conference will be held in Engineering Building 5 of National Central University (NCU) in Taoyuan, Taiwan, on October 15-16, 2021.

ROCLING 2021 will provide an international forum for researchers and industry practitioners to share their new ideas, original research results and practical development experiences from all language and speech research areas, including computational linguistics, information understanding, and signal processing. ROCLING 2021 will feature oral papers, posters, tutorials, special sessions and shared tasks.

The Conference on Computational Linguistics and Speech Processing (ROCLING) was initiated in 1988 by the Association for Computational Linguistics and Chinese Language Processing (ACLCLP), with the major goal of providing a platform for researchers and professionals from around the world to share their experiences related to natural language processing and speech processing. A list of past ROCLING conferences follows.

Call for Papers

ROCLING 2021 invites paper submissions reporting original research results and system development experiences as well as real-world applications. Each submission will be reviewed based on originality, significance, technical soundness, and relevance to the conference. Accepted papers will be presented orally or as poster presentations. Both oral and poster presentations will be published in the ROCLING 2021 conference proceedings and included in the ACL Anthology. A number of papers will be selected and invited for extension into journal versions and publication in a special issue of the International Journal of Computational Linguistics and Chinese Language Processing (IJCLCLP).

Papers can be written and presented in either Chinese or English. Papers must be in PDF format and submitted online through the paper submission system. Submitted papers may consist of 4-8 pages of content, plus unlimited pages of references. Upon acceptance, final versions will be allowed additional content (up to 9 pages) so that reviewers' comments can be taken into account. ROCLING 2021 mainly targets two scientific tracks: natural language processing (NLP) and speech processing (Speech). Relevant topics for the conference include, but are not limited to, the following areas (in alphabetical order):

Natural Language Processing
  • Cognitive/Psychological Linguistics
  • Discourse and Pragmatics
  • Dialogue System
  • Information Extraction
  • Information Retrieval
  • Language Generation
  • Machine Translation
  • NLP Applications
  • Phonology, Morphology and Word Segmentation
  • Question Answering
  • Resources and Evaluation
  • Semantics: Lexical, Sentence-Level, Textual Inference
  • Sentiment Analysis
  • Summarization
  • Syntax: Tagging, Chunking and Parsing
  • Others
Speech Processing
  • Speech Perception, Production and Acquisition
  • Phonetics, Phonology and Prosody
  • Analysis of Paralinguistics in Speech and Language
  • Speaker and Language Identification
  • Analysis of Speech and Audio Signals
  • Speech Coding and Enhancement
  • Speech Synthesis and Spoken Language Generation
  • Speech Recognition
  • Spoken Dialog Systems and Analysis of Conversation
  • Spoken Language Processing:
    Retrieval, Translation, Summarization, Resources and Evaluation
  • Others

Paper submissions must use the official ROCLING 2021 style templates (LaTeX and Word). Submission is electronic, using the EasyChair conference management system. The submission site is available at https://easychair.org/conferences/?conf=rocling2021.

As the reviewing will be double-blind, papers must not include authors' names and affiliations. Furthermore, self-references that reveal the author's identity must be avoided. Papers that do not conform to these requirements will be rejected without review. Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should be anonymized as well.

Page Limitation - Camera-Ready Paper (applicable after acceptance)

According to the paper template, the page limit for accepted papers is 9 pages of content (plus unlimited references) in PDF format. The first page of the camera-ready version should include the paper title, author names, affiliations, and email addresses, all properly centered at the top, followed by a concise abstract of the paper.

Copyright Form (applicable after acceptance): download here

Each accepted paper must also be accompanied by a signed copyright form, submitted in PDF format via the online registration system.

Programs

Download Slide Template

Friday, October 15, 2021

  • 09:00–09:10  Opening Ceremony (NCU E6-B520)
  • 09:10–10:10  NLP Keynote by Prof. Vincent Ng (NCU E6-B520)
  • 10:10–10:30  Coffee Break
  • 10:30–12:30  Session 1: Speech and Language Processing-1 (NCU E6-B518); AI Tutorial I (NCU E6-B520)
  • 12:30–13:00  Lunch
  • 13:00–13:30  ACLCLP Assembly (NCU E6-B520)
  • 13:30–15:00  Session 2: Information Retrieval and Text Mining (NCU E6-B518); AI Tutorial II-1 (NCU E6-B520)
  • 15:00–15:30  Coffee Break
  • 15:30–17:30  Session 3: Best Paper Candidates (NCU E6-B518); AI Tutorial II-2, 15:30–17:00 (NCU E6-B520)

Saturday, October 16, 2021

  • 09:00–10:00  Speech Keynote by Dr. Jinyu Li (NCU E6-B520)
  • 10:00–10:20  Coffee Break
  • 10:20–12:20  Session 4: Sentiment Analysis and Social Media (NCU E6-B518); Special Session: Brain and Language (NCU E6-B520)
  • 12:20–12:50  Lunch
  • 12:50–13:30  Shared Task: Dimensional Sentiment Analysis for Educational Texts (NCU E6-B518); AI Tutorial III (NCU E6-B520)
  • 13:30–15:00  Session 5: Applications (NCU E6-B518); AI Tutorial IV-1 (NCU E6-B520)
  • 15:00–15:30  Coffee Break
  • 15:30–17:00  Session 6: Speech and Language Processing-2 (NCU E6-B518); AI Tutorial IV-2 (NCU E6-B520)
  • 17:00–17:10  Closing Ceremony (NCU E6-B520)

Session 1
Speech and Language Processing – 1

Time: Friday, October 15, 2021, 10:30–12:30

10:30-10:50
Universal Recurrent Neural Network Grammar

Chinmay Choudhary and Colm O'Riordan

10:50-11:10
Learning to Find Translation of Grammar Patterns in Parallel Corpus

Kai-Wen Tuan, Yi-Jyun Chen, Yi-Chien Lin, Chun-Ho Kwok, Hai-Lun Tu and Jason S. Chang

11:10-11:30
Data centric approach to Chinese Medical Speech Recognition

Sheng-Luen Chung, Yi-Shiuan Li and Hsien-Wei Ting

11:30-11:50
Chinese Medical Speech Recognition with Punctuated Hypothesis

Sheng-Luen Chung, Jin-Huan Fan and Hsien-Wei Ting

11:50-12:10
A Preliminary Study on Environmental Sound Classification Leveraging Large-Scale Pretrained Model and Semi-Supervised Learning

You-Sheng Tsao, Tien-Hong Lo, Jiun-Ting Li, Shi-Yan Weng and Berlin Chen

12:10-12:30
Incorporating Speaker Embedding and Post-Filter Network for Improving Speaker Similarity of Personalized Speech Synthesis System

Sheng-Yao Wang and Yi-Chin Huang

Session 2
Information Retrieval and Text Mining

Time: Friday, October 15, 2021, 13:30–15:00

13:30-13:40
A BERT-based Siamese-structured Retrieval Model

Hung-Yun Chiang and Kuan-Yu Chen

13:40-13:50
AI Clerk Platform: Information Extraction DIY Platform

Ru-Yng Chang, Wen-Lun Chen and Cheng-Ju Kao

13:50-14:00
A Survey of Approaches to Automatic Question Generation: from 2019 to Early 2021

Chao-Yi Lu and Sin-En Lu

14:00-14:10
Hidden Advertorial Detection on Social Media in Chinese

Meng-Ching Ho, Ching-Yun Chuang, Yi-Chun Hsu and Yu-Yun Chang

14:10-14:20
Improved Text Classification of Long-term Care Materials

Yi Fan Chiang, Chi-Ling Lee, Heng-Chia Liao, Yi-Ting Tsai and Yu-Yun Chang

14:20-14:30
Keyword-centered Collocating Topic Analysis

Yu-Lin Chang, Yongfu Liao, Po-Ya Angela Wang, Mao-Chang Ku and Shu-Kai Hsieh

14:30-14:40
Extracting Academic Senses: Towards An Academic Writer's Dictionary

Hsin-Yun Chung, Li-Kuang Chen and Jason S Chang

14:40-14:50
Exploring the Integration of E2E ASR and Pronunciation Modeling for English Mispronunciation Detection

Hsin-Wei Wang, Bi-Cheng Yan, Yung-Chang Hsu and Berlin Chen

Session 3
Best Paper Candidates

Time: Friday, October 15, 2021, 15:30–17:30

15:30-15:50
Mining Commonsense and Domain Knowledge from Math Word Problems

Shih-Hung Tsai, Chao-Chun Liang, Hsin-Min Wang and Keh-Yih Su

15:50-16:10
MMTL: The Meta Multi-Task Learning for Aspect Category Sentiment Analysis

Guan-Yuan Chen and Ya-Fen Yeh

16:10-16:30
Using Valence and Arousal-infused Bi-LSTM for Sentiment Analysis in Social Media Product Reviews

Yu-Ya Cheng, Wen-Chao Yeh, Yan-Ming Chen and Yung-Chun Chang

16:30-16:50
Unsupervised Multi-document Summarization for News Corpus with Key Synonyms and Contextual Embeddings

Yen-Hao Huang, Ratana Pornvattanavichai, Fernando Henrique Calderon Alvarado and Yi-Shin Chen

16:50-17:10
Integrated Semantic and Phonetic Post-correction for Chinese Speech Recognition

Yi-Chang Chen, Chun-Yen Cheng, Chien-An Chen, Ming-Chieh Sung and Yi-Ren Yeh

17:10-17:30
Employing Low-Pass Filtered Temporal Speech Features for the Training of Ideal Ratio Mask in Speech Enhancement

Yan-Tong Chen, Zi-Qiang Lin and Jeih-Weih Hung

Session 4
Sentiment Analysis and Social Media

Time: Saturday, October 16, 2021, 10:20–12:20

10:20-10:40
A Flexible and Extensible Framework for Multiple Answer Modes Question Answering

Cheng-Chung Fan, Chia-Chih Kuo, Shang-Bao Luo, Pei-Jun Liao, Kuang-Yu Chang, Chiao-Wei Hsu, Meng-Tse Wu, Shih-Hong Tsai, Tzu-Man Wu, Aleksandra Smolka, Chao-Chun Liang, Hsin-Min Wang, Kuan-Yu Chen, Yu Tsao and Keh-Yih Su

10:40-11:00
Aspect-Based Sentiment Analysis and Singer Name Entity Recognition using Parameter Generation Network Based Transfer Learning

Hsiao-Wen Tseng, Chia-Hui Chang and Hsiu-Min Chuang

11:00-11:20
Aggregating User-Centric and Post-Centric Sentiments from Social Media for Topical Stance Prediction

Jenq-Haur Wang and Kaun-Ting Chen

11:20-11:40
What confuses BERT? Linguistic Evaluation of Sentiment Analysis on Telecom Customer Opinion

Cing-Fang Shih, Yu-Hsiang Tseng, Ching-Wen Yang, Pin-Er Chen, Hsin-Yu Chou, Lian-Hui Tan, Tzu-Ju Lin, Chun-Wei Wang and Shu-Kai Hsieh

11:40-12:00
A Corpus for Dimensional Sentiment Classification on YouTube Streaming Service

Ching-Wen Hsu, Chun-Lin Chou, Hsuan Liu and Jheng-Long Wu

12:00-12:20
A Study on Using Transfer Learning to Improve BERT Model for Emotional Classification of Chinese Lyrics

Jia-Yi Liao, Ya-Hsuan Lin, Kuan-Cheng Lin and Jia-Wei Chang

Shared Task
Dimensional Sentiment Analysis for Educational Texts

Time: Saturday, October 16, 2021, 12:50–13:30

12:50 - 13:00
ROCLING-2021 Shared Task: Dimensional Sentiment Analysis for Educational Texts

Liang-Chih Yu, Jin Wang, Bo Peng and Chu-Ren Huang

13:00 - 13:05
ntust-nlp-1 at ROCLING-2021 Shared Task: Educational Texts Dimensional Sentiment Analysis using Pretrained Language Models

Yi-Wei Wang, Wei-Zhe Chang, Bo-Han Fang, Yi-Chia Chen, Wei-Kai Huang and Kuan-Yu Chen

13:05 - 13:10
ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-Based Semantic Analyzer With Word-Level Information

Ke-Han Lu and Kuan-Yu Chen

13:10 - 13:15
NCU-NLP at ROCLING-2021 Shared Task: Using MacBERT Transformers for Dimensional Sentiment Analysis

Man-Chen Hung, Chao-Yi Chen, Pin-Jung Chen and Lung-Hao Lee

13:15 - 13:20
SCUDS at ROCLING-2021 Shared Task: Using Pretrained Model for Dimensional Sentiment Analysis Based on Sample Expansion Method

Hsiao-Shih Chen, Pin-Chiung Chen, Shao-Cheng Huang, Yu-Cheng Chiu and Jheng-Long Wu

13:20 - 13:25
SoochowDS at ROCLING-2021 Shared Task: Text Sentiment Analysis Using BERT and LSTM

Ruei-Cyuan Su, Sing-Seong Chong, Tzu-En Su and Ming-Hsiang Su

13:25 - 13:30
CYUT at ROCLING-2021 Shared Task: Based on BERT and MacBERT

Xie-Sheng Hong and Shih-Hung Wu

Session 5
Applications

Time: Saturday, October 16, 2021, 13:30–15:00

13:30-13:40
Nested Named Entity Recognition for Chinese Electronic Health Records with QA-based Sequence Labeling

Yu-Lun Chiang, Chih-Hao Lin, Cheng-Lung Sung and Keh-Yih Su

13:40-13:50
A Study on Contextualized Language Modeling for Machine Reading Comprehension

Chin-Ying Wu, Yung-Chang Hsu and Berlin Chen

13:50-14:00
Discussion on the Relationship Between Elders’ Daily Conversations and Cognitive Executive Function: Using Word Vectors and Regression Models

Ming-Hsiang Su, Yu-An Ko and Man-Ying Wang

14:00-14:10
Home Appliance Review Research Via Adversarial Reptile

Tai-Jung Kan, Chia-Hui Chang and Hsiu-Min Chuang

14:10-14:20
Numerical Relation Detection in Financial Tweets using Dependency-aware Deep Neural Network

Yu-Chi Liang, Min-Chen Chen, Wen-Chao Yeh and Yung-Chun Chang

14:20-14:30
Incorporating Domain Knowledge into Language Transformers for Multi-Label Classification of Chinese Medical Questions

Po-Han Chen, Yu-Xiang Zeng and Lung-Hao Lee

14:30-14:40
Confiscation Detection of Criminal Judgment Using Text Classification Approach

Hsuan-Tzu Shih, Yu-Cheng Chiu, Hsiao-Shih Chen and Jheng-Long Wu

14:40-14:50
Discussion on Domain Generalization in the Cross-Device Speaker Verification System

Wei-Ting Lin, Yu-Jia Zhang, Chia-Ping Chen, Chung-Li Lu and Bo-Cheng Chan

14:50-15:00
Data Augmentation Technology for Dysarthria Assistive Systems

Wei-Chung Chu, Ying-Hsiu Hung, Wei-Zhong Zheng and Ying-Hui Lai

Session 6
Speech and Language Processing – 2

Time: Saturday, October 16, 2021, 15:30–17:00

15:30-15:40
Predicting Elders' Cognitive Flexibility From Their Language Use

Man-Ying Wang, Yu-an Ko, Chin-Lan Huang, Jyun-Hong Chen and Te-Tien Ting

15:40-15:50
Improve Chit-Chat and QA Sentence Classification in User Messages of Dialogue System using Dialogue Act Embedding

Chi Hsiang Chao, Xi Jie Hou and Yu Ching Chiu

15:50-16:00
Automatic Extraction of English Grammar Pattern Correction Rules

Kuan-Yu Shen, Yi-Chien Lin and Jason S. Chang

16:00-16:10
Multi-Label Classification of Chinese Humor Texts Using Hypergraph Attention Networks

Hao-Chuan Kao, Man-Chen Hung, Lung-Hao Lee and Yuen-Hsien Tseng

16:10-16:20
Generative Adversarial Networks based on Mixed-Attentions for Citation Intent Classification in Scientific Publications

Yuh-Shyang Wang, Chao-Yi Chen and Lung-Hao Lee

16:20-16:30
Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair

Yi-Jyun Chen, Hsin-Yun Chung and Jason S. Chang

16:30-16:40
Speech Emotion Recognition Based on CNN+LSTM Model

Wei Mou, Pei-Hsuan Shen, Chu-Yun Chu, Yu-Cheng Chiu, Tsung-Hsien Yang and Ming-Hsiang Su

16:40-16:50
Exploiting Low-Resource Code-Switching Data to Mandarin-English Speech Recognition Systems

Hou-An Lin and Chia-Ping Chen

16:50-17:00
RCRNN-based Sound Event Detection System with Specific Speech Resolution

Sung-Jen Huang, Yih-Wen Wang, Chia-Ping Chen, Chung-Li Lu and Bo-Cheng Chan

Registration

Early Registration (Before Sep. 15, 2021)
  • ACLCLP Member: Regular NT$2,500; Student NT$500
  • ACLCLP Non-Member: Regular NT$3,500; Student NT$1,000

Late Registration (Sep. 16 - Oct. 5, 2021)
  • ACLCLP Member: Regular NT$3,000; Student NT$800
  • ACLCLP Non-Member: Regular NT$4,000; Student NT$1,300

On-Site Registration (Oct. 15-16, 2021)
  • ACLCLP Member: Regular NT$3,500; Student NT$1,000
  • ACLCLP Non-Member: Regular NT$4,500; Student NT$1,500

Sponsors: Free
ACLCLP membership fee: Regular member NT$1,000; Student member NT$500

Click here to Register

Registration Fees

Notes:

  1. Each paper presented at the conference requires at least one "Regular" registration fee.
  2. Registration fees are non-refundable once paid. Receipts will be mailed together with the conference materials on October 8.
  3. "ACLCLP Member" refers to a valid member of the Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
  4. Former members who have not yet paid this year's membership fee, or whose membership has lapsed, should select "... (Member + Membership Fee)" as their registration category; there is no need to re-apply for membership.
  5. Non-members who also wish to join ACLCLP should first apply for membership in the Members area of the ACLCLP website, and then select "... (Member + Membership Fee)" as their registration category. (Go to the Members area)
  6. Those registering as "Student New Member" or "Student Non-Member" should upload proof of student status when registering.
  7. Sponsors are kindly asked to complete online registration before October 5.
  8. To correct personal information after registration, please contact the conference by e-mail before October 5.

Registration Details:

  1. One Regular registration covers at most one paper; a Student registration cannot cover any paper.
  2. Registration fees are non-refundable.
  3. International registrants must pay by credit card (Visa or MasterCard). All registration payments are charged in New Taiwan Dollars.
  4. Full-time students should upload an image (or PDF) of their student ID card.

Important Dates for Registration and Payment:

  • Early Registration (on or before September 15, Wed): payment must be received by September 20 (Mon).
  • Late Registration (September 16, Thu, to October 5, Tue): payment must be received by October 5 (Tue).
  • On-Site Registration: online registration closes on October 6 (Wed); those who still wish to register should register on October 15 (Fri).

Methods of Payment:

  • Postal giro: account name: 中華民國計算語言學學會 (ACLCLP); account number: 19166251. (Multiple registrants from the same institution may combine their fees into a single transfer; please note your "Registration ID" numbers in the memo field of the giro form.)
  • Credit card by fax: Fax: 02-27881638, E-mail: aclclp@aclclp.org.tw
  • Online credit card payment.

For registration inquiries, please contact:

Contact: Miss Huang (黃琪), Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
E-mail: aclclp@aclclp.org.tw
Phone: 02-27883799 Ext. 1502
Fax: 02-27881638

NLP Keynote by Prof. Vincent Ng

Vincent Ng (Ph.D., Cornell)

Event Coreference Resolution: Successes and Future Challenges

Speaker: Prof. Vincent Ng
Professor, The University of Texas at Dallas
Time: Friday, October 15, 2021, 09:10 - 10:10

Session Chair: Liang-Chih Yu

Biography

Vincent Ng is a Professor in the Computer Science Department at the University of Texas at Dallas. He is also the director of the Machine Learning and Language Processing Laboratory in the Human Language Technology Research Institute at UT Dallas. He obtained his B.S. from Carnegie Mellon University and his Ph.D. from Cornell University. His research is in the area of Natural Language Processing, focusing on the development of computational methods for addressing key tasks in information extraction and discourse processing.

Abstract

Recent years have seen a gradual shift of focus from entity-based tasks to event-based tasks in information extraction research. This talk will focus on event coreference resolution, the event-based counterpart of the notoriously difficult entity coreference resolution task. Specifically, I will examine the major milestones made in event coreference research since its inception more than two decades ago, including the recent successes of neural event coreference models and their limitations, and discuss possible ways to bring these models to the next level of performance.

Speech Keynote by Dr. Jinyu Li

Jinyu Li

Advancing end-to-end automatic speech recognition

Speaker: Dr. Jinyu Li
Partner Applied Scientist and Technical Lead, Microsoft Corporation, Redmond, USA
Time: Saturday, October 16, 2021, 09:00 - 10:00

Session Chair: Yu Tsao

Biography

Jinyu Li received the Ph.D. degree from Georgia Institute of Technology, Atlanta, in 2008. From 2000 to 2003, he was a Researcher at the Intel China Research Center and a Research Manager at iFlytek, China. He is currently a Partner Applied Scientist and Technical Lead at Microsoft Corporation, Redmond, USA, where he leads a team that designs and improves speech modeling algorithms and technologies to ensure industry state-of-the-art speech recognition accuracy for Microsoft. His major research interests cover several topics in speech recognition, including end-to-end modeling, deep learning, and noise robustness. He is the leading author of the book "Robust Automatic Speech Recognition -- A Bridge to Practical Applications" (Academic Press, October 2015). He has been a member of the IEEE Speech and Language Processing Technical Committee since 2017, and served as an associate editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing from 2015 to 2020.

Abstract

Recently, the speech community has seen a significant shift from deep neural network based hybrid modeling to end-to-end (E2E) modeling for automatic speech recognition (ASR). While E2E models achieve state-of-the-art ASR accuracy on most benchmarks, hybrid models still dominate commercial ASR systems at present. Many practical factors affect production deployment decisions, and traditional hybrid models, optimized for production over decades, tend to handle these factors well. Without excellent solutions to all of these factors, it is hard for E2E models to be widely commercialized. In this talk, I will overview recent advances in E2E models, focusing on technologies that address these challenges from an industry perspective. Specifically, I will describe methods for 1) building high-accuracy, low-latency E2E models; 2) building a single E2E model to serve all multilingual users; 3) customizing and adapting E2E models to new domains; and 4) extending E2E models to multi-talker ASR. Finally, I will conclude the talk with some challenges we should address in the future.

AI Tutorial I

Speech enhancement (from signal processing to machine learning solutions)
and its applications for assistive hearing technology

Time: Friday, October 15, 2021, 10:30-12:30

Speakers: Yu Tsao (曹昱), Syu Siang Wang (王緒翔)

Abstract

The proportional increase in the elderly population and the inappropriate use of portable audio devices have led to a rapid increase in the incidence of hearing loss. Untreated hearing loss can cause feelings of loneliness and isolation in the elderly and may lead to learning difficulties in students. Over the past few years, our group has investigated the application of machine learning and signal processing algorithms in FM assistive hearing systems, hearing aids, and cochlear implants (CIs) to improve speech communication for hearing-impaired patients and, in turn, their quality of life. The tremendous progress of hearing-assistive technologies has enabled many users of these devices to enjoy a high level of speech perception in quiet conditions. However, speech intelligibility in noisy conditions remains a challenge.

Meanwhile, real-world environments always contain stationary and/or time-varying noise that is picked up together with the speech signal by recording devices. These degraded signals inevitably hurt the performance of human-human and human-machine interfaces and have attracted significant attention over the past years. To address this issue, an important front-end process, namely speech enhancement, is used to recover voice quality and intelligibility from noise-corrupted speech. Speech enhancement techniques that extract the clean components from noisy input have also been combined with various applications, including assistive hearing devices. In this tutorial, we will first introduce conventional speech enhancement methods, including their ideas, concepts, and performance, and then move on to deep-learning-based denoising approaches. Applications to assistive hearing technology are covered in the remainder of the course.
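
As a concrete illustration of what one such conventional method can look like, the sketch below implements simple magnitude spectral subtraction with NumPy. It is only an illustrative baseline under the assumption that the first few frames of the recording contain noise only; it is not the tutorial's material or the speakers' implementation.

import numpy as np

def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=10, floor=0.05):
    # Analysis: split the signal into overlapping, Hann-windowed frames
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Estimate the noise magnitude spectrum from the first few (noise-only) frames
    noise_mag = mag[:noise_frames].mean(axis=0)
    # Subtract the noise estimate, keeping a small spectral floor to limit artifacts
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    clean_frames = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)
    # Approximate overlap-add resynthesis
    out = np.zeros(len(noisy))
    for i in range(n_frames):
        out[i * hop:i * hop + frame_len] += clean_frames[i]
    return out

# Toy usage (illustrative): a 1 kHz tone in white noise at 16 kHz, with leading
# silence so that the noise-only assumption for the first frames holds.
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 1000 * t)
clean[: sr // 4] = 0.0
noisy = clean + 0.3 * np.random.randn(sr)
enhanced = spectral_subtraction(noisy)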

AI Tutorial II

Deep Learning Applications in Educational Technology
(Academic Vocabulary and Phrases, Bilingual Alignment, Grammar Induction,
Grammatical Error Correction, Reverse Dictionaries, Definition Classification, and Guide Words)

Time: Friday, October 15, 2021, 13:30-15:00, 15:30-17:00

Speakers: 張俊盛、楊謦瑜、吳鑑城、白明弘、杜海倫、陳志杰、段凱文

Abstract

  • Academic vocabulary and phrases: how to extend Paquot's (2000) Academic Keyword List to generate definitions, translations, example sentences, and phrases.
  • Bilingual word and grammar alignment: how to improve the word, phrase, and grammar-rule alignments of previous-generation statistical phrase-based machine translation.
  • Linguistic search engine: how to use Google Web 1T and the UDN news corpus to search for English and Chinese grammar rules in the Pattern Grammar framework (Hunston 2000).
  • Grammatical error correction: how to go beyond the limitations of Grammarly and handle collocation errors.
  • Reverse dictionary: generating word embeddings of dictionary (English and Chinese) sense definitions, and related English-writing applications.
  • Dictionary definition classification: generating semantic classes for dictionary (English and Chinese) sense definitions, which can link the Cambridge English-Chinese Dictionary with Roget's Thesaurus and Wikipedia's Wikidata.
  • Dictionary guide words: using definition classification to generate two sets of reasonable, highly consistent guide words for ambiguous entries, with the Cambridge English-Chinese online dictionary as an example.

AI Tutorial III

Deep Learning Development Pipeline on CHT AI Platform
– Using Speaker Verification as an Example

Time: Saturday, October 16, 2021, 12:50-13:30

Speaker: 黃梓翔

Abstract

  • Introduce the functions of the AI PaaS machine learning platform.
  • Using AI PaaS, walk participants through the model life cycle for speaker verification, including the development, training, and deployment of a voiceprint feature extraction model.
  • Launch the demo interface and show participants application scenarios for speaker verification.

AI Tutorial IV

Building Personalized Text-to-Speech Systems
with the Speech Labeling and Modeling ToolKit (SLMTK)

Time: Saturday, October 16, 2021, 13:30-15:00, 15:30-17:00

Speaker: 江振宇

Abstract

SLMTK stands for the Speech Labeling and Modeling Toolkit, developed by the Speech and Multimedia Signal Processing Laboratory of the Department of Communication Engineering, National Taipei University. SLMTK quickly and automatically labels speech and its accompanying text so that prosody generation and speech synthesis models can be built. The prosodic labeling standard and the speech labeling format are defined within the toolkit for convenient analysis and modeling, and the toolkit can also build baseline prosody generation and speech synthesis models. In addition, the linguistic, phonetic, and prosodic labels produced by SLMTK can serve as meaningful auxiliary annotations for speech and language researchers.

SLMTK has already been tested on several speech corpora, and commercial products have used it to build TTS (text-to-speech) applications. SLMTK also supports Subproject 2, "Project Echo: Building Text-to-Speech Systems for ALS Patients", of the MOST project "Developing an Integrated Intelligent Communication System for ALS Patients: Value-Added Outcomes and Deployment" (MOST-109-3011-F-011-001-), in cooperation with the 中華民國運動神經元疾病病友協會 (Taiwan's motor neuron disease patient association) and 聲帆股份有限公司. Customized text-to-speech systems have so far been built for 20 ALS patients, so that text entered on an assistive device can be spoken in each patient's own voice.

SLMTK will be used in the association's Voicebank campaign to process large numbers of donated voice recordings and thereby help improve or build customized TTS systems for patients. SLMTK currently supports Mandarin as well as mixed Mandarin-English speech; extension modules for Taiwanese and Hakka are planned. This tutorial is divided into two parts:

Part I: Overview of Text-to-Speech Systems
  1. TTS as a mapping function
  2. A Brief Review of Speech Generation Process
  3. Information Extracted from Input Text
  4. Segmental/Speech Analysis
  5. Syllable Structure of Mandarin
  6. Speech Production
  7. Prosody - Supra-segmental Analysis
  8. Prosody and Syntax
  9. TTS Pipelines
Part II: Building a Customized Text-to-Speech System with SLMTK
  1. SLMTK in Brief
  2. Text Analysis
  3. Speech Segmentation
  4. Prosody-Linguistic Feature Integration
  5. Prosody Labeler
  6. Training of TTS Models

Special Session

大腦與語言
Special Session: Brain and Language

徐峻賢 Chun-hsien Hsu
國立中央大學認知神經科學研究所
Institute of Cognitive Neuroscience
National Central University, Taiwan
neurolang@g.ncu.edu.tw

李佳穎 Chia-ying Lee
中央研究院語言學研究所
Institute of Linguistics
Academia Sinica, Taiwan
chiaying@gate.sinica.edu.tw

李佳霖 Chia-lin Lee
國立台灣大學語言學研究所
Graduate Institute of Linguistics
National Taiwan University, Taiwan
chialinlee@ntu.edu.tw

摘要

作為一門跨領域的科學研究項目,神經語言學擅長結合各種領域知識,主要包含腦科學、語言學理論、計算科學、認知心理學,以探討大腦如何處理人類語言。本次座談會希望將相關的研究成果回饋給科學研究社群,以鼓勵更多資訊科學、語言學領域的學者和研究生參與神經語言學的研究。本次座談會提到的語言參數來自各種不同的資源,包含中華民國計算語言學會出版的平衡語料庫、口語資料庫,以及研究者依照研究目的而建置的語料。這些資訊有助於實驗研究,以探索語言發展,以及探索一般正常母語使用者的大腦解讀語言結構的方式。徐峻賢會介紹基本的認知神經科學研究方法,以及使用腦磁圖技術研究構詞理論、口語理解的研究成果,並且分享以深度學習模型探討大腦活動特徵的研究方法。李佳穎將介紹結合行為測量、事件關聯腦電位進行詞彙知識、語言發展的研究成果,以實證研究澄清一般人對於中文的字型、字音結構常有的迷思,並且從語料庫進行語意多樣性的分析說明當前心理詞彙理論的轉變。李佳霖將分享人們處理意義的認知功能和其大腦機制,包含從語言使用的情境提取適當的語意訊息 (比如 “鋼琴” 要指涉音色、形狀、還是操作方式),以及老化造成的改變對於處理意義的認知功能之影響。

Abstract

Neurolinguistics is an interdisciplinary field that incorporates elements of neuroscience, linguistics, computational science, and cognitive psychology, aiming to explore how the brain processes human language. By presenting our research results to the scientific community, we hope to encourage future studies from researchers and graduate students in computational science and linguistics. The language parameters mentioned in this symposium were gathered from distinct sources, including the Sinica Corpus and COSPRO & Toolkit published by the Association for Computational Linguistics and Chinese Language Processing, as well as customized databases built by researchers for their own research purposes. These data are beneficial to studies exploring language development and language comprehension in native speakers. Dr. Chun-hsien Hsu will introduce basic research methods in cognitive neuroscience and research results from using magnetoencephalography (MEG) to study morphosyntactic theories and speech comprehension; he will also share approaches that employ deep learning models to study features of brain responses to language. Dr. Chia-ying Lee will talk about combining behavioral testing and event-related potentials (ERP) in studies of vocabulary knowledge and language development; she intends to clarify common misconceptions about Chinese orthographic and phonetic structures, and to explain current shifts in theories of the mental lexicon through corpus-based analyses of semantic diversity. Dr. Chia-lin Lee will share the cognitive functions and brain mechanisms involved in processing meaning, including retrieving the semantic information appropriate to the context of use (e.g., whether "piano" refers to its sound, its shape, or how it is played), and the effects of aging on these semantic processes.

ROCLING 2021 Shared Task:
Dimensional Sentiment Analysis for Educational Texts

Organizers

I. Background

Sentiment analysis has emerged as a leading technique to automatically identify affective information within texts. In sentiment analysis, affective states are generally represented using either categorical or dimensional approaches (Calvo and Kim, 2013). The categorical approach represents affective states as several discrete classes (e.g., positive, negative, neutral), while the dimensional approach represents affective states as continuous numerical values on multiple dimensions, such as the valence-arousal (VA) space (Russell, 1980), as shown in Fig. 1. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Based on this two-dimensional representation, any affective state can be represented as a point in the VA coordinate plane by determining the degrees of valence and arousal of given words (Wei et al., 2011; Malandrakis et al., 2013; Wang et al., 2016; Du and Zhang, 2016; Wu et al., 2017; Yu et al., 2020) or texts (Kim et al., 2010; Paltoglou et al., 2013; Goel et al., 2017; Zhu et al., 2019; Wang et al., 2019; 2020).

In 2016, we hosted the first dimensional sentiment analysis task for Chinese words (Yu et al., 2016b) at the 20th International Conference on Asian Language Processing (IALP 2016). In 2017, we extended this task to include both word- and phrase-level dimensional sentiment analysis (Yu et al., 2017). This year, we explore sentence-level dimensional sentiment analysis on educational texts (students' self-evaluated comments).

II. Task Description

Structured data such as attendance, homework completion, and in-class participation have been extensively studied to predict students' learning performance. Unstructured data, such as self-evaluation comments written by students, is also a useful data resource because it contains rich emotional information that can help illuminate the emotional states of students (Yu et al., 2018). Dimensional sentiment analysis is an effective technique for recognizing valence-arousal ratings from texts, indicating the degree from most negative to most positive for valence, and from most calm to most excited for arousal.

In this task, participants are asked to provide a real-valued score from 1 to 9 for both the valence and arousal dimensions of each self-evaluation comment. The input format is "sentence_id, sentence", and the output format is "sentence_id, valence_rating, arousal_rating". The input/output formats of two example sentences are shown below.

Example 1:

     Input: 1, 今天教了許多以前沒有學過的東西,所以上起課來很新鮮

     Output: 1, 6.8, 5.2

Example 2:

     Input: 2, 覺得課程進度有點快,內容難以消化

     Output: 2, 3.0, 4.0
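
For concreteness, the following minimal Python sketch reads a file in the input format above and writes predictions in the required output format. The file names and the constant neutral-baseline predictor are illustrative assumptions only, not part of the official task materials.

def predict_valence_arousal(sentence):
    """Placeholder predictor: a real system would use a trained model."""
    return 5.0, 5.0  # neutral valence and arousal on the 1-9 scale

with open("test_input.txt", encoding="utf-8") as fin, \
        open("submission.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        line = line.strip()
        if not line:
            continue
        # Input format: "sentence_id, sentence"; split on the first ASCII comma
        # only, so punctuation inside the sentence is preserved.
        sentence_id, sentence = line.split(",", 1)
        valence, arousal = predict_valence_arousal(sentence.strip())
        # Output format: "sentence_id, valence_rating, arousal_rating"
        fout.write(f"{sentence_id.strip()}, {valence:.1f}, {arousal:.1f}\n")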

III. Data

Training set

  • CVAW 4.0: 5,512 single words annotated with valence-arousal ratings (Yu et al., 2016a).
  • CVAP 2.0: 2,998 multi-word phrases annotated with valence-arousal ratings (Yu et al., 2017).
  • CVAT 2.0: 2,969 sentences annotated with valence-arousal ratings (Yu et al., 2016a).

Test set & answer: 1,600 sentences of educational texts.


The policy of this shared task is an open test. Participating systems are allowed to use other publicly available data for this shared task, but the use of other data should be specified in the final technical report.

IV. Evaluation

The performance is evaluated by examining the difference between machine-predicted ratings and human-annotated ratings (valence and arousal are treated independently). The evaluation metrics include:

Mean absolute error:

    MAE = (1/n) * Σ |A_i - P_i|, summed over i = 1, ..., n

Pearson correlation coefficient:

    r = (1/(n-1)) * Σ ((A_i - Ā)/σ_A) * ((P_i - P̄)/σ_P), summed over i = 1, ..., n

where A_i denotes the human-annotated rating, P_i denotes the machine-predicted rating, n is the number of test samples, Ā and P̄ respectively denote the arithmetic means of A and P, and σ_A and σ_P are their standard deviations.
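
As a sanity check on these definitions, here is a small plain-Python sketch that computes both metrics for one dimension. It is illustrative only and is not the official scoring script linked below.

import math

def mean_absolute_error(human, predicted):
    """MAE = (1/n) * sum of |A_i - P_i|."""
    n = len(human)
    return sum(abs(a - p) for a, p in zip(human, predicted)) / n

def pearson_correlation(human, predicted):
    """r = (1/(n-1)) * sum of ((A_i - mean_A)/sd_A) * ((P_i - mean_P)/sd_P),
    with sd the sample standard deviation."""
    n = len(human)
    mean_a, mean_p = sum(human) / n, sum(predicted) / n
    sd_a = math.sqrt(sum((a - mean_a) ** 2 for a in human) / (n - 1))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in predicted) / (n - 1))
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(human, predicted))
    return cov / ((n - 1) * sd_a * sd_p)

# Example with hypothetical valence ratings (arousal would be scored the same way)
human_valence = [6.8, 3.0, 5.5, 7.2]
predicted_valence = [6.5, 3.4, 5.0, 7.0]
print(mean_absolute_error(human_valence, predicted_valence))  # about 0.35
print(pearson_correlation(human_valence, predicted_valence))  # about 0.99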

Scoring script: Click here

V. Important Dates

Registration: Click here

  • Release of training data: May 1, 2021
  • Release of test data: August 13, 2021
  • Testing results submission due: August 15, 2021
  • Release of evaluation results: August 18, 2021
  • System description paper due: August 31, 2021
  • Notification of Acceptance: September 5, 2021
  • Camera-ready deadline: September 10, 2021

References

  • Rafael A. Calvo, and Sunghwan Mac Kim. 2013. Emotions in text: dimensional and categorical models. Computational Intelligence, 29(3):527-543.
  • Munmun De Choudhury, Scott Counts, and Michael Gamon. 2012. Not all moods are created equal! Exploring human emotional states in social media. In Proc. of ICWSM-12, pages 66-73.
  • Steven Du and Xi Zhang. 2016. Aicyber’s system for IALP 2016 shared task: Character-enhanced word vectors and Boosted Neural Networks, in Proc. of IALP-16, pages 161–163.
  • Pranav Goel, Devang Kulshreshtha, Prayas Jain and Kaushal Kumar Shukla. 2017. Prayas at EmoInt 2017: An Ensemble of Deep Neural Architectures for Emotion Intensity Prediction in Tweets, in Proc. of WASSA-17, pages 58–65.
  • Sunghwan Mac Kim, Alessandro Valitutti, and Rafael A. Calvo. 2010. Evaluation of unsupervised emotion models to textual affect recognition. In Proc. of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 62-70.
  • N. Malandrakis, A. Potamianos, E. Iosif, and S. Narayanan. 2013. Distributional semantic models for affective text analysis. IEEE Transactions on Audio, Speech, and Language Processing, 21(11): 2379-2392.
  • Myriam Munezero, Tuomo Kakkonen, and Calkin S. Montero. 2011. Towards automatic detection of antisocial behavior from texts. In Proc. of the Workshop on Sentiment Analysis where AI meets Psychology (SAAIP) at IJCNLP-11, pages 20-27.
  • Georgios Paltoglou, Mathias Theunis, Arvid Kappas, and Mike Thelwall. 2013. Predicting emotional responses to long informal text. IEEE Trans. Affective Computing, 4(1):106-115.
  • Jie Ren and Jeffrey V. Nickerson. 2014. Online review systems: How emotional language drives sales. In Proc. of AMCIS-14.
  • James A. Russell. 1980. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161.
  • Wen-Li Wei, Chung-Hsien Wu, and Jen-Chun Lin. 2011. A regression approach to affective rating of Chinese words from ANEW. In Proc. of ACII-11, pages 121-131.
  • Liang-Chih Yu, Cheng-Wei Lee, Huan-Yi Pan, Chih-Yueh Chou, Po-Yao Chao, Zhi-Hong Chen, Shu-Fen Tseng, Chien-Lung Chan and K. Robert Lai. 2018. Improving early prediction of academic failure using sentiment analysis on self-evaluated comments, Journal of Computer Assisted Learning, 34(4):358-365.
  • Liang-Chih Yu, Lung-Hao Lee, Shuai Hao, Jin Wang, Yunchao He, Jun Hu, K. Robert Lai, and Xuejie Zhang. 2016a. Building Chinese affective resources in valence-arousal dimensions. In Proc. of NAACL/HLT-16, pages 540-545.
  • Liang-Chih Yu, Lung-Hao Lee, Jin Wang and Kam-Fai Wong. 2017. IJCNLP-2017 Task 2: Dimensional sentiment analysis for Chinese phrases, in Proc. of IJCNLP-17, pages 9-16.
  • Liang-Chih Yu, Lung-Hao Lee and Kam-Fai Wong. 2016b. Overview of the IALP 2016 shared task on dimensional sentiment analysis for Chinese words, in Proc. of IALP-16, pages 156-160.
  • Liang-Chih Yu, Jin Wang, K. Robert Lai and Xuejie Zhang. 2020. Pipelined neural networks for phrase-level sentiment intensity prediction, IEEE Transactions on Affective Computing, 11(3), 447-458.
  • Jin Wang, Liang-Chih Yu, K. Robert Lai and Xuejie Zhang. 2016. Community-based weighted graph model for valence-arousal prediction of affective words, IEEE/ACM Trans. Audio, Speech and Language Processing, 24(11):1957-1968.
  • Jin Wang, Liang-Chih Yu, K. Robert Lai and Xuejie Zhang. 2020. Tree-structured regional CNN- LSTM model for dimensional sentiment analysis, IEEE/ACM Transactions on Audio Speech and Language Processing, 28, 581–591.
  • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Sixing Wu and Zhigang Yuan. 2017. THU NGN at IJCNLP-2017 Task 2: Dimensional sentiment analysis for Chinese phrases with deep LSTM, in Proc. of IJCNLP-17, pages 42-52.
  • Suyang Zhu, Shoushan Li and Guodong Zhou. 2019. Adversarial attention modeling for multi- dimensional emotion regression, in Proc. of ACL-19, pages 471–480.

Organization

Honorary Chair

Jing-Yang Jou

National Central University

Conference Chairs

Lung-Hao Lee

National Central University

Chia-Hui Chang

National Central University

Kuan-Yu Chen

National Taiwan University of Science and Technology

Program Chairs

Yung-Chun Chang

Taipei Medical University

Yi-Chin Huang

National Pingtung University

Tutorial Chair

Hung-Yi Lee

National Taiwan University

Publication Chair

Jheng-Long Wu

Soochow University

Special Session Chair

Chun-Hsien Hsu

National Central University

Shared Task Chair

Liang-Chih Yu

Yuan Ze University

Organized by

National Central University

National Taiwan University of Science and Technology

The Association for Computational Linguistics and Chinese Language Processing

Co-Organized by

Supported by

Sponsored by

Program Committee

Name (sorted by last names), Organization

Jia-Wei Chang(張家瑋),

National Taichung University of Science and Technology

Ru-Yng Chang (張如瑩),

AI Clerk International Co., Ltd.

Chung-Chi Chen (陳重吉),

National Taiwan University

Yun-Nung Chen (陳縕儂),

National Taiwan University

Yu-Tai Chien (簡宇泰),

National Taipei University of Business

Hong-Jie Dai (戴鴻傑),

National Kaohsiung University of Science and Technology

Min-Yuh Day (戴敏育),

National Taipei University

Yu-Lun Hsieh (謝育倫),

CloudMile

Wen-Lian Hsu (許聞廉),

Asia University

Hen-Hsen Huang (黃瀚萱),

Academia Sinica

Jeih-weih Hung (洪志偉),

National Chi Nan University

Chih-Hao Ku (顧值豪),

Cleveland State University

Ying-Hui Lai (賴穎暉),

National Yang Ming Chiao Tung University

Cheng-Te Li (李政德),

National Cheng Kung University

Chun-Yen Lin (林君彥),

Taipei Medical University

Jen-Chun Lin (林仁俊),

Academia Sinica

Szu-Yin Lin (林斯寅),

National Ilan University

Shih-Hung Liu (劉士弘),

Digiwin

Chao-Lin Liu (劉昭麟),

National Chengchi University

Jenn-Long Liu (劉振隆),

I-Shou University

Yi-Fen Liu (劉怡芬),

Feng Chia University

Wen-Hsiang Lu (盧文祥),

National Cheng Kung University

Shang-Pin Ma (馬尚彬),

National Taiwan Ocean University

Emily Chia-Yu Su (蘇家玉),

Taipei Medical University

Ming-Hsiang Su (蘇明祥),

Soochow University

Richard Tzong-Han Tsai (蔡宗翰),

National Central University

Chun-Wei Tung (童俊維),

National Health Research Institutes

Hsin-Min Wang (王新民),

Academia Sinica

Jenq-Haur Wang (王正豪),

National Taipei University of Technology

Yu-Cheng Wang (王昱晟),

Lunghwa University of Science and Technology

Jheng-Long Wu (吳政隆),

Soochow University

Shih-Hung Wu (吳世弘),

Chaoyang University of Technology

Jui-Feng Yeh (葉瑞峰),

National Chiayi University

Liang-Chih Yu (禹良治),

Yuan Ze University