Abstract
In the era of big data, vast volumes of data are being collected by sensors and humans. However, these data usually come from different sources in heterogeneous formats and types (i.e., multimodal data). Each data modality captures different characteristics of the observed processes or entities, which can be complementary and equally important for complex decision making. Human intelligence allows us to synthesize information from multiple modalities, yet most recent advances in data science, machine learning, and artificial intelligence utilize only a single data modality. In this dissertation, novel data analytics and deep learning techniques are developed to process, integrate, and synthesize multimodal data in order to build high-performance, reliable, and robust automated systems that can be deployed in various domains. Specifically, three major challenges in multimodal data analysis are addressed: multimodal data fusion, imbalanced multimodal data, and the integration of deep neural networks with data analytics techniques. The proposed techniques are evaluated on various applications and multimodal datasets, including mobility data.