EMPATHETIC HUMAN-ROBOT INTERACTIONS. We are building a never-before-seen humanoid robot that uses facial recognition, speech recognition, natural language understanding, emotion recognition, context and intent recognition, and dialogue management to provide a holistic, empathetic understanding of the human user during interactive conversations. The humanoid robot will also express its own emotions through speech, facial expressions and gestures. The robot will be capable of learning on the fly during its interactions with users and the environment. We plan to install this robot in public areas of HKUST, where students and visitors can interact with it in different languages. We plan to showcase this robot to primary and high school students to attract interest in STEM fields. This project will add interest to campus life and also stimulate intellectual debates on artificial intelligence.
Intelligent systems have evolved to the stage where virtual assistants are ubiquitous in smartphones and consumer-grade robots are affordable. Better human-machine interaction depends on machines being able to empathize with human emotions and discover human intent. We are building empathetic machines that can recognize meaning and intent during conversations with humans, from their speech, language, facial expressions and context. In addition to understanding the content of speech, the machine needs to recognize if and when the human user is being sarcastic or joking, or if the human user is distressed and needs comforting. This "empathy module" is indispensable for the machines and robots of the future that will become caretakers and companions to humans.
TASK-ORIENTED DIALOGUE SYSTEMS.
Task-oriented dialogue systems are the class of interactive systems that carry out a conversation with human users while completing user requests to, say, book a ticket, look for a restaurant, check the weather, or make a transaction. These systems are the central module of personal assistants such as Siri and smart speakers such as Alexa or Google Home, and they also lie at the core of customer care centres, both for online shopping sites such as JD or Tmall in China and for enterprise voice call centres. Automated interaction with customers has become indispensable for modern commercial endeavours across all sectors, allowing reductions in customer care costs and promising efficient, streamlined customer service.
In the three decades since the first task-oriented dialogue system was commercialized, the basic architecture has remained substantially unchanged, consisting of three pipelined components: 1) an intent/domain classifier and dialogue state tracker that determine what task to perform; 2) a dialogue manager that controls the system response and the conversation between the user and the machine; and 3) a natural language generation module that converts the output of the dialogue manager into natural language sentences. These modules are separately programmed with rules or trained on data. With the emergence of powerful "deep learning" architectures, end-to-end task-oriented systems have been proposed to optimize overall system performance and simplify training. Human-machine or human-human dialogue traces can be used to train the system end-to-end without loss of performance relative to a pipelined scheme.
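The three-stage pipeline above can be sketched in a few lines. This is a deliberately minimal illustration, not our system: the intents, slots, and response templates are hypothetical, and each stage would in practice be a trained statistical or neural model rather than keyword rules.

```python
# Toy sketch of the classic three-stage task-oriented dialogue pipeline.
# All intents, slots, and templates here are hypothetical examples.

def classify_intent(utterance: str) -> str:
    """Stage 1: intent/domain classification (here, simple keyword rules)."""
    text = utterance.lower()
    if "weather" in text:
        return "check_weather"
    if "restaurant" in text or "book" in text:
        return "book_restaurant"
    return "chit_chat"

def dialogue_manager(intent: str) -> dict:
    """Stage 2: choose the system action for the recognized intent."""
    actions = {
        "check_weather": {"act": "inform", "slot": "forecast", "value": "sunny"},
        "book_restaurant": {"act": "request", "slot": "cuisine", "value": None},
        "chit_chat": {"act": "social", "slot": None, "value": None},
    }
    return actions[intent]

def generate(action: dict) -> str:
    """Stage 3: natural language generation from the system action."""
    if action["act"] == "inform":
        return f"The {action['slot']} is {action['value']}."
    if action["act"] == "request":
        return f"Sure, what {action['slot']} would you like?"
    return "I'm happy to chat!"

def respond(utterance: str) -> str:
    return generate(dialogue_manager(classify_intent(utterance)))
```

An end-to-end system replaces these three hand-separated stages with a single model trained directly on dialogue traces.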
There remain, however, many challenges and desired features for end-to-end task-oriented systems: for instance, handling multiple domains with a single model (e.g. restaurant and hotel reservations) and having a single chat interface that can handle both casual talk (a.k.a. chit-chat) and task-oriented requests.
AI FOR SOCIAL GOOD. Artificial intelligence systems can have a negative social impact if they are used to create fake news, or if they are biased. Machine learning produces models that often contain the same biases as the human samples they are trained on. We work on research topics that actively combat these negative forces. To date we have been working on online abusive language detection, including sexist and racist language. Abusive language detection models tend to be biased toward the identity words of certain groups of people because of imbalanced training datasets. For example, "You are a good woman" was classified as "sexist" by a model trained on an existing dataset. Such model bias is an obstacle to making models robust enough for practical use. In this work, we measure gender bias in models trained on different abusive language datasets, while analyzing the effect of different pre-trained word embeddings and model architectures. We also experiment with three bias mitigation methods: (1) debiased word embeddings, (2) gender-swap data augmentation, and (3) fine-tuning with a larger corpus. These methods can effectively reduce gender bias by 90-98% and can be extended to correct model bias in other scenarios. We work on computational ways to measure inherent sexist or racist bias in chatbots.
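The gender-swap augmentation idea above can be illustrated with a short sketch: each training sentence is duplicated with gendered identity words exchanged, so the classifier sees both variants with the same label and cannot use gender words as a shortcut. The word-pair list below is a small hypothetical lexicon, not the full one used in the work.

```python
# Illustrative sketch of gender-swap data augmentation. The swap lexicon
# below is a tiny hypothetical sample for demonstration purposes.

SWAP_PAIRS = {
    "woman": "man", "man": "woman",
    "she": "he", "he": "she",
    "her": "his", "his": "her",
    "girl": "boy", "boy": "girl",
}

def gender_swap(sentence: str) -> str:
    """Swap gendered identity words token by token."""
    tokens = sentence.lower().split()
    return " ".join(SWAP_PAIRS.get(tok, tok) for tok in tokens)

def augment(dataset: list) -> list:
    """Return the original (text, label) pairs plus their gender-swapped
    copies, keeping the same label for each swapped copy."""
    return dataset + [(gender_swap(text), label) for text, label in dataset]
```

After augmentation, "You are a good woman" and "you are a good man" both appear with the non-sexist label, counteracting the dataset imbalance.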
We also work on fact-checking statements made online. Fact-checking of textual sources requires effectively extracting relevant information from large knowledge bases. In this work, we extend an existing pipeline approach to better tackle this problem. We propose a neural ranker using a decomposable attention model that dynamically selects sentences, achieving a promising 38.80% improvement in evidence retrieval F1, with a 65x speedup compared to a TF-IDF method. Moreover, we incorporate lexical tagging methods into our pipeline framework to simplify the tasks and render the model more generalizable. As a result, our framework achieves promising performance, with speedup, on a large-scale fact extraction and verification dataset. There are many remaining challenges in terms of quantitative measures of machine bias and computational models to alleviate such bias. How we can use AI actively for good is an overriding theme of our research group.
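For reference, the TF-IDF baseline that our neural ranker is compared against can be sketched as follows: score each candidate evidence sentence against the claim and return the top-k. This is a generic TF-IDF implementation for illustration only; the learned decomposable-attention ranker replaces the scoring function with a neural model.

```python
import math
from collections import Counter

# Generic TF-IDF evidence retrieval baseline (illustrative sketch).
# Each candidate sentence is scored against the claim terms.

def tfidf_scores(claim: str, sentences: list) -> list:
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # document frequency of each term across the candidate sentences
    df = Counter(term for doc in docs for term in set(doc))
    claim_terms = claim.lower().split()
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = sum(
            tf[t] * math.log((1 + n) / (1 + df[t]))
            for t in claim_terms if t in tf
        )
        scores.append(score)
    return scores

def retrieve(claim: str, sentences: list, k: int = 1) -> list:
    """Return the k sentences with the highest TF-IDF score for the claim."""
    ranked = sorted(zip(tfidf_scores(claim, sentences), sentences), reverse=True)
    return [s for _, s in ranked[:k]]
```

The neural ranker's advantage is that it can match a claim to evidence even when they share few surface terms, which is exactly where this lexical-overlap baseline fails.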
MULTI-DOMAIN, MULTI-TASK, AND MULTILINGUAL LEARNING. The objective of multi-label learning is to simultaneously predict many labels for each input instance. In recent years, many embedding-based approaches have been proposed to solve this problem by considering label dependencies and decreasing learning and prediction costs. However, compressing the data loses part of the information contained in the label space. The idea in this work is to divide the whole label space into small independent groups, which allows independent learning and prediction for each small group in the original space rather than in a compressed space. We use subspace clustering approaches to extract these small partitions such that the labels in each group do not carry any information that would improve the results for labels in the other groups. In experiments on datasets with varying numbers of features and labels, the approach improves prediction quality at lower computational cost compared to the state of the art.
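The partitioning step can be illustrated with a toy criterion: if we treat "labels that never co-occur" as carrying no information about each other, independent groups fall out as the connected components of the label co-occurrence graph, and one classifier can then be trained per group. The subspace clustering used in the actual work is more sophisticated; this sketch only shows the grouping idea.

```python
from collections import defaultdict

# Toy sketch of label-space partitioning for multi-label learning: labels
# that never co-occur in the training data end up in separate groups,
# computed here as connected components of the co-occurrence graph.

def label_groups(label_sets: list) -> list:
    # build the co-occurrence adjacency over all labels
    adj = defaultdict(set)
    labels = set()
    for ls in label_sets:
        labels |= ls
        for a in ls:
            adj[a] |= ls - {a}
    # each connected component is an independent label group
    groups, seen = [], set()
    for lab in sorted(labels):
        if lab in seen:
            continue
        stack, comp = [lab], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        groups.append(comp)
    return groups
```

Each returned group can then be learned and predicted independently in the original label space, avoiding the information loss of a compressed representation.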
EMOTION AND SENTIMENT RECOGNITION, AFFECT AND PERSONALITY. We research multimodal deep learning of affect, personality, emotions and sentiment from voice, language and facial expressions.
We propose a multilingual model to recognize Big Five personality traits from text data in four different languages: English, Spanish, Dutch and Italian. Our analysis shows that words with similar semantic meaning in different languages do not necessarily correspond to the same personality traits. We propose a personality alignment method, GlobalTrait, which maps each trait from the source language to the target language (English), such that words that correlate positively with each trait lie close together in the multilingual vector space. Using these aligned embeddings for training, we can transfer personality-related training features from high-resource languages such as English to other, low-resource languages and obtain better multilingual results compared to using simple monolingual and unaligned multilingual embeddings.

We also propose a tri-modal architecture to predict Big Five personality trait scores from video clips, with different channels for audio, text, and video data. For each channel, stacked convolutional neural networks are employed. The channels are fused both at the decision level and by concatenating their respective fully connected layers. We show that a multimodal fusion approach outperforms each single-modality channel, with an improvement of 9.4% over the best individual modality (video). Full backpropagation is also shown to be better than a linear combination of modalities, meaning that complex interactions between modalities can be leveraged to build better models. Furthermore, we can see the prediction relevance of each modality for each trait. The described model can be used to increase the emotional intelligence of virtual agents.
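The two fusion strategies mentioned above can be contrasted with a minimal sketch. The per-channel "networks" here are stand-in linear maps rather than the stacked CNNs of the actual model; the point is only the structural difference between averaging per-modality decisions and predicting jointly from concatenated features.

```python
# Minimal sketch contrasting decision-level (late) fusion with
# feature-level (early) fusion. Channel models are toy linear maps.

def channel_score(features: list, weights: list) -> float:
    """One modality channel reduced to a trait score (toy linear model)."""
    return sum(f * w for f, w in zip(features, weights))

def decision_level_fusion(scores: list) -> float:
    """Late fusion: average the per-modality predictions."""
    return sum(scores) / len(scores)

def feature_level_fusion(audio: list, text: list, video: list,
                         weights: list) -> float:
    """Early fusion: concatenate channel features, then predict jointly.
    With a trained joint model, this is where cross-modal interactions
    can be learned; a late-fusion average cannot capture them."""
    fused = audio + text + video  # concatenation of the channel features
    return channel_score(fused, weights)
```

In the actual architecture both routes are used: the fully connected layers of the three CNN channels are concatenated, and full backpropagation through the joint layers is what lets the model exploit interactions between modalities.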
AI ART. We are collaborating with artists, designers, and art and design students at the Central Academy of Fine Arts in Beijing on using AI tools for creative arts, ranging from machine-generated art and paintings to exploring the artistic meaning of AI technology. We are interested in using AI for the future of design and in using AI as a new medium of artistic creativity. Meanwhile, we are interested in exploring machine creativity and machine learning of aesthetic taste.
SPEECH AND LANGUAGE DATA ANALYTICS.
Businesses often need to analyze their customer calls, customer feedback, and user comments to find out what they can do to improve sales, improve services, and identify new business opportunities. These data are too voluminous to be analyzed manually and need to be transcribed and analyzed automatically. The objective of our research is to design algorithms that perform sentiment analysis and emotion recognition on speech and text data. We use statistical modeling and machine learning methods to train our algorithms to determine whether a user's sentiment toward a product or service is positive or negative. We also use audio and lexical features from speech and its transcription to recognize whether a caller is satisfied or dissatisfied with a service. We use the same methods to find out what kinds of sales calls lead to a successful transaction.
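At its simplest, the positive/negative decision described above can be sketched with a polarity lexicon. This toy example stands in for the trained statistical models we actually use, and the word lists are hypothetical.

```python
# Toy lexicon-based sentiment polarity decision (illustrative only;
# the word lists are small hypothetical samples).

POSITIVE = {"great", "good", "excellent", "satisfied", "love"}
NEGATIVE = {"bad", "poor", "terrible", "dissatisfied", "hate"}

def sentiment(text: str) -> str:
    """Count polarity words and return the dominant polarity."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A trained model replaces the fixed lexicon with learned weights over lexical and, for call audio, acoustic features.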
MULTILINGUAL AND LOW-RESOURCE LANGUAGE ACOUSTIC AND LANGUAGE MODELING.
With the rapid penetration of the global market by smartphones and tablet computers, the need for a universal voice-enabled user interface has become increasingly urgent. The most recent example is the launch of the Siri personal assistant on the iPhone 4S, where most user-to-machine interaction can be achieved by spoken commands. Despite the maturity of speech recognition performance in major languages, it remains a major challenge to build recognition systems for languages with little written data for training. Large numbers of speakers of non-standard languages still do not have access to practical speech recognition systems. Most Indian languages, spoken by millions of people, do not have adequate online data for training speech recognition systems. In Hong Kong, many meetings and official speeches are delivered in Cantonese, a major non-standard Chinese language. We propose to investigate methods to meet the acoustic and language modeling challenges of large-vocabulary, non-standard-language speech recognition with little online written text, with near-term application to non-standard Chinese language recognition. Our proposed methods can also be applied to specific applications for which there is little training data initially.
MIXED CODE SPEECH RECOGNITION.
Mixed language is an increasingly common occurrence in today's globalized entertainment, academic and business worlds. We propose new approaches to acoustic and language modeling of mixed language speech collected in a naturalistic setting, with the objective of building a speaker-independent mixed language speech recognition system. Such a system must be able to recognize embedded foreign language speech without sacrificing performance on the native (matrix) language speech. Mixed language is a heterogeneous genre produced by speakers who either use borrowed foreign words, switch from one language to another in the middle of a sentence (intra-sentential code mixing), or switch at sentence boundaries (inter-sentential code switching). Previous work on automatic recognition of mixed language speech has been restricted to speaker-dependent systems, to speech with less than 2% borrowed words, or to short phrases with constrained grammar. In reality, we have found English in around 20% of fluent speech recorded at meetings conducted in Chinese, for example. There has been no research on speaker-independent systems of this kind so far.
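The intra- vs inter-sentential distinction above can be made concrete with a toy sketch: tag each token with a language and check whether the switch happens inside a sentence or between sentences. The lexicons below are tiny hypothetical placeholders (the "Chinese" tokens are romanized stand-ins), not real language-identification models.

```python
# Toy token-level language tagging to classify the type of code-switching.
# Both lexicons are small hypothetical placeholders for illustration.

EN = {"please", "schedule", "the", "meeting"}
ZH = {"wo", "men", "hen", "mang"}  # romanized stand-ins for Chinese tokens

def tag_tokens(tokens: list) -> list:
    return ["en" if t in EN else "zh" if t in ZH else "unk" for t in tokens]

def classify_switching(sentences: list) -> str:
    """Classify a list of tokenized sentences as monolingual,
    intra-sentential code mixing, or inter-sentential code switching."""
    per_sentence = []
    for sent in sentences:
        tags = {t for t in tag_tokens(sent) if t != "unk"}
        per_sentence.append(tags)
    # a language switch inside a single sentence is intra-sentential
    if any(len(tags) > 1 for tags in per_sentence):
        return "intra-sentential code mixing"
    langs = set().union(*per_sentence)
    if len(langs) > 1:
        return "inter-sentential code switching"
    return "monolingual"
```

A recognizer for naturalistic mixed speech must handle the intra-sentential case, where the matrix language and the embedded language alternate within a single utterance.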
Code-switching is largely found in countries with immigrants whose native language is not English. There are large numbers of Hispanics in the United States who speak both Spanish and English. In Southeast Asian countries such as Singapore, Malaysia, and Indonesia, there are many ethnic communities, descended from diasporas, that speak several languages such as Chinese, Indonesian, Malay, Arabic, and Indian languages. Traditionally, approaches to the low-resource issue in code-switching have focused on applying linguistic theories to the statistical model. However, these constraints cannot be postulated as universal rules for all code-switching scenarios, especially for languages that are syntactically divergent. The goal of our research is to propose language-agnostic approaches that transfer information from rich monolingual data to address the lack of data in code-switching across different downstream natural language processing tasks. In our recent work, we explore end-to-end neural models for code-switching speech recognition and transfer learning approaches that acquire information from large monolingual data. We also propose a new representation learning method, multilingual meta-embeddings, to acquire useful information from pre-trained embeddings.