Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, biological sciences, and environmental sciences. Given the availability of massive amounts of data with complex underlying structures/distributions, together with high-performance computing platforms, there is a great demand for developing new theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, and causal analysis) in various types of applications. The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions, key technical issues, and present solutions that tackle related challenges in practical applications. The workshop will cover both theoretical and practical aspects of time series data analysis and aims to trigger research innovations in theories, algorithms, and applications. This year, we will have a particular focus on foundation models as well as large language models (LLMs), and would like to discuss their potential impact and how they can be applied to a variety of time series applications. We will invite researchers and AI practitioners from the related areas of machine learning, data science, statistics, econometrics, and many others to contribute to this workshop.
This workshop encourages submissions of innovative solutions for a broad range of time series analysis problems. Topics of interest include but are not limited to the following:
Submissions should be 5-7 pages long, excluding references, and follow the IJCAI-25 template. Reviewing is single-blind: authors' identities will be revealed to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).
Accepted papers will be presented as posters during the workshop and listed on the website (non-archival/without proceedings). In addition, a small number of accepted papers will be selected for presentation as contributed talks. We also welcome submissions of unpublished papers, including those submitted to or accepted at other venues, provided the other venue allows it.
Submission link: https://cmt3.research.microsoft.com/AI4TSconf2025
Any questions may be directed to the workshop e-mail address: ai4ts.ijcai@gmail.com
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Workshop Paper Submission Due Date: ~~May 31st, 2025 (AoE)~~ June 6th, 2025 (AoE)
Notification of Paper Acceptance: ~~June 10th, 2025 (AoE)~~ June 14th, 2025 (AoE)
IJCAI-25 Workshops: August 17th, 2025
At least one author of each accepted paper *must* travel to the IJCAI venue in person, and submitting the same paper to multiple IJCAI workshops is forbidden.
Time (EDT, UTC-4) | Speaker | Title
---|---|---
9:00 am - 9:10 am | Dr. Dongjin Song | Opening Remarks
9:10 am - 9:55 am | Prof. Elynn Chen | Keynote Talk 1: Transfer Reinforcement Learning: Value-Based Methods for Non-Stationary MDPs
9:55 am - 10:40 am | Oral Presentations | Paper 1: Dynamic Modes as Time Representation for Spatiotemporal Forecasting; Paper 2: ViFusionTST: Deep Fusion of Time-Series Image Representations from Load Signals for Early Bed-Exit Prediction; Paper 3: Long-Term Multivariate Time Series Generation with the Capture of Intervariate Dependencies and Variatewise Characteristics
10:40 am - 11:00 am | Coffee Break |
11:00 am - 11:45 am | Prof. Longbing Cao | Keynote Talk 2: Irregular Multivariate Time Series Modeling: In Time and Frequency Domains
11:50 am - 2:00 pm | Lunch Break |
2:00 pm - 2:45 pm | Dr. Nicolas Chapados | Keynote Talk 3: Context is Key: A Benchmark for Forecasting with Essential Textual Information
2:45 pm - 3:30 pm | Prof. Jingchao Ni | Keynote Talk 4: Cross-Modal Knowledge Transfer in Time Series via Multimodal Views
3:30 pm - 4:00 pm | Coffee Break + Poster Setup |
4:00 pm - 5:30 pm | Poster Session |
Distinguished Chair Professor in AI & ARC Future Fellow (Level 3)
Macquarie University
Real-life multivariate time series (MTS) are often irregular, presenting irregularities including non-IID, stylistic, asymmetric, inconsistent, and dynamic characteristics. High-dimensional and multi-spectral multivariates are even more challenging to model. This talk reviews such challenges and briefly introduces some of our recent progress in deep MTS modelling of such irregularities and non-IIDnesses (interactions, couplings, and heterogeneities) in the time, frequency, and time + frequency domains. The approaches synergise deep neural learning with statistical and variational learning, copula methods, shallow-to-deep non-IID learning, basis functions, and more.
In dynamic decision-making scenarios across business, healthcare, and education, leveraging data from diverse populations can significantly enhance reinforcement learning (RL) performance for specific target populations, especially when target samples are limited. We develop comprehensive frameworks for transfer learning in RL, addressing both stationary Markov decision processes (MDPs) with iterative Q*-learning and non-stationary finite-horizon MDPs with backward inductive Q*-learning. For stationary MDPs, we propose an iterative Q*-learning algorithm with knowledge transfer, establishing theoretical justifications through faster convergence rates under similarity assumptions. For non-stationary finite-horizon MDPs, we introduce two key innovations: (1) a novel "re-weighted targeting procedure" that enables cross-stage transfer along multiple temporal steps, and (2) transfer deep Q*-learning that leverages neural networks as function approximators. We demonstrate that while naive sample pooling strategies may succeed in regression settings, they fail in MDPs, necessitating our more sophisticated approach. We establish theoretical guarantees for both settings, revealing the relationship between statistical performance and MDP task discrepancy. Our analysis illuminates how source and target sample sizes impact transfer effectiveness. The framework accommodates both transferable and non-transferable transition density ratios while assuming reward function transferability. Our analytical techniques have broader implications, extending to supervised transfer learning with neural networks and domain shift scenarios. Empirical evidence from both synthetic and real datasets validates our theoretical results, demonstrating significant improvements over single-task learning rates and highlighting the practical value of strategically constructed transferable RL samples in both stationary and non-stationary contexts.
Forecasting is a critical task in decision making across numerous domains. While historical numerical data provide a start, they fail to convey the complete context for reliable and accurate predictions. Human forecasters frequently rely on additional information, such as background knowledge and constraints, which can be efficiently communicated through natural language. However, in spite of recent progress with LLM-based forecasters, their ability to effectively integrate this textual information remains an open question. To address this, we introduce “Context is Key” (CiK), a time-series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities; crucially, every task in CiK requires understanding textual context to be solved successfully. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters, and propose a simple-yet-effective LLM prompting method that outperforms all other tested methods on our benchmark. Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings. This benchmark aims at advancing multimodal forecasting, promoting models that are both accurate and accessible to decision-makers with varied technical expertise. The benchmark can be visualized at https://servicenow.github.io/context-is-key-forecasting/.
Time series analysis has progressed from traditional autoregressive models to deep learning, Transformers, and large foundation models. These advances have expanded model design possibilities and, intriguingly, enabled time series problem-solving across multiple modalities, greatly enhancing downstream applications. In this talk, I will present an overview of recent developments in large foundation models for time series, highlighting the typical framework for knowledge transfer from non-time-series modalities to time series. I will then delve into the emerging area of cross-modal knowledge transfer via multimodal views (MMVs) of time series, discussing (1) the generation of MMVs (e.g., linguistic and visual views) of time series; (2) methods for modeling time series via MMVs; and (3) the integration of MMVs with multimodal models. This talk aims to review state-of-the-art AI techniques for time series, uncover unique challenges, and share our recent findings in this promising area.
* Dynamic Modes as Time Representation for Spatiotemporal Forecasting
Menglin Kong, Zhihao Zheng, Xudong Wang, Lijun Sun
* Long-Term Multivariate Time Series Generation with the Capture of Intervariate Dependencies and Variatewise Characteristics
Kasumi Ohno, Kohei Makino, Makoto Miwa, Yutaka Sasaki
* THEMIS: Unlocking Pretrained Knowledge with Foundation Model Embeddings for Anomaly Detection in Time Series
Mahesh Yadav, Kaushik Sarveswaran, Nagaraj Sundaramahalingam, Aravindakumar Venugopalan
* LLM-Based Framework for Next-Generation Cyber Threat Detection
Adamu Hussaini, Almustapha Wakili, Usman Musa, Wei Yu
* Empirically Estimated Uncertainty-Guided Iterative Decoding Strategy of Diffusion Models for Time Series Forecasting
Koki Ueno, Kohei Makino, Yutaka Sasaki
* Shape Analysis of Heart Murmurs: Adding Explainability to Murmur Detection
Chandan Pandey, Pinaki Dey, Anirban Dutta, Naman Mohan Paul, Purvesh Sanjeev Doud, Riddhi Jain, Aniruddha Sinha
* Early Prediction of Multiple Sclerosis Disability Progression via Multimodal Foundation Model Benchmarks
Maxime Usdin, Lito Kriara, Licinio Craveiro
* Uncovering Emotion Correlates to Transitions in EEG Energy Landscapes
Anubhav Anubhav, Kantaro Fujiwara
* Learning What Matters: Causal Time Series Modeling for Arctic Sea Ice Prediction
Emam Hossain, Md Osman Gani
* TEAFormers: TEnsor-Augmented Transformers for Multi-Dimensional Time Series Forecasting
Linghang Kong, Elynn Chen, Yuzhou Chen, Yuefeng Han