University of Pittsburgh

Combining Attention-Based Approach and Textual Contexts for Event Sequence Prediction

PhD student
Date: Friday, November 16, 2018, 1:00pm - 1:30pm

With the rapid expansion of the Web, large volumes of event-related data have become available in numerous fields such as e-commerce, social activity, and electronic health records. These events are often correlated, so patterns in past events may help predict future events. Understanding these correlations is therefore crucial for an important task called "event sequence prediction": given an observed sequence of past events, the goal is to predict what kinds of events will occur, and when, in the future. One classical mathematical tool for modeling such sequences is the point process, but it relies on strong assumptions about the generative process that may not reflect reality. Recent studies have explored Recurrent Neural Networks (RNNs) to learn a more general representation of the underlying dynamics, but these models struggle with long sequences and scalability. In addition, existing methods ignore the textual context that often accompanies events (such as a tweet, a blog post, or a Facebook update), which can help categorize them. In this paper, we propose to jointly model event sequences (i.e., event types and timestamps) and their textual context using an attention-based neural network, the Transformer. The key idea of our combined approach is to automatically learn the underlying dependencies among events from the event sequence history. Experiments on large-scale synthetic and real-world datasets demonstrate that our model achieves better performance than both classical and RNN-based models.
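The abstract does not detail the architecture, but the following minimal PyTorch sketch illustrates one way such a joint model could be wired: event types, inter-event times, and per-event text are embedded, summed, and passed through a causally masked Transformer encoder that predicts the next event's type and time. All module choices, names, and dimensions here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class JointEventTransformer(nn.Module):
        """Sketch: jointly encode event types, inter-event times, and
        per-event text with a Transformer for next-event prediction."""

        def __init__(self, num_event_types, text_vocab_size,
                     d_model=64, nhead=4, num_layers=2):
            super().__init__()
            self.type_emb = nn.Embedding(num_event_types, d_model)     # event type
            self.time_proj = nn.Linear(1, d_model)                     # inter-event time
            self.text_emb = nn.EmbeddingBag(text_vocab_size, d_model)  # bag of words per event
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.type_head = nn.Linear(d_model, num_event_types)  # next-type logits
            self.time_head = nn.Linear(d_model, 1)                # next inter-event time

        def forward(self, types, dts, text_tokens, text_offsets):
            # types: (B, S) event-type ids; dts: (B, S) inter-event times
            # text_tokens / text_offsets: flattened word ids, one bag per event
            B, S = types.shape
            text = self.text_emb(text_tokens, text_offsets).view(B, S, -1)
            x = self.type_emb(types) + self.time_proj(dts.unsqueeze(-1)) + text
            # Causal mask: each event attends only to earlier events.
            mask = torch.triu(torch.full((S, S), float("-inf")), diagonal=1)
            h = self.encoder(x, mask=mask)
            return self.type_head(h), self.time_head(h).squeeze(-1)

A model along these lines would be trained by shifting the targets one step, with a cross-entropy loss on the predicted event type and a regression (or point-process likelihood) loss on the predicted time; the losses and architecture used in the paper itself may differ.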
