Text, the major data form for encoding information, and space and time, the two most important contexts of our physical world, are increasingly converging. Existing text mining and spatiotemporal data mining techniques operate separately and fall short in dealing with such complex data. How do we unveil the subtle correlations between different modalities? How do we discover interesting patterns in the multidimensional space? Can we integrate different modalities to make more accurate predictions? In this project, we strive to connect multiple modalities (text, space, time) in a principled way, thus improving predictive analysis and decision making in context-rich scenarios. In this pursuit, we study several key problems, such as cross-modal prediction and multimodal sequential prediction.
- Regions, Periods, Activities: Uncovering Urban Dynamics via Cross-Modal Representation Learning, WWW 2017
- ReAct: Online Multimodal Embedding for Recency-Aware Spatiotemporal Activity Modeling, SIGIR 2017
- Splitter: Mining Fine-Grained Sequential Patterns in Semantic Trajectories, VLDB 2014
- GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media, KDD 2016
- DeepMove: Predicting Human Mobility with Attentional Recurrent Networks, WWW 2018
- Bringing Semantics to Spatiotemporal Data Mining: Challenges, Methods, Applications, ICDE 2017
You may also be interested in playing with the Urbanity system we have developed. It connects multiple modalities to model human activities in the physical world using massive social media data.