IJCAI'25 Workshop

AI4TS: AI for Time Series Analysis: Theory, Algorithms, and Applications

Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, biological sciences, and environmental sciences. Given the availability of massive amounts of data with complex underlying structures/distributions, together with high-performance computing platforms, there is a great demand for developing new theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, and causal analysis) across various types of applications. The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions, key technical issues, and solutions to challenges in practical applications. The workshop will cover both theoretical and practical aspects of time series analysis and aims to trigger research innovations in theories, algorithms, and applications. This year, we will have a particular focus on foundation models as well as large language models (LLMs), and would like to discuss their potential impact and how they can be applied to a variety of time series applications. We invite researchers and AI practitioners from machine learning, data science, statistics, econometrics, and related areas to contribute to this workshop.

This workshop encourages submissions of innovative solutions for a broad range of time series analysis problems. Topics of interest include but are not limited to the following:

  • Time series forecasting and prediction
  • Spatio-temporal forecasting and prediction
  • Time series anomaly detection and diagnosis
  • Time series change point detection
  • Time series classification and clustering
  • Time series similarity search
  • Time series indexing
  • Time series compression
  • Time series pattern discovery
  • Interpretation and explanation in time series
  • Causal inference in time series
  • Bias and fairness in time series
  • Foundation models for time series
  • Large language models (LLMs) for time series
  • Federated learning and security in time series
  • Benchmarks, experimental evaluation, and comparison for time series analysis tasks
  • Time series applications in various areas: E-commerce, Cloud computing, Transportation, Fintech, Healthcare, Internet of things, Wireless networks, Predictive maintenance, Energy, and Climate, etc.

Call for Papers

Contact us: ai4ts.ijcai@gmail.com

Submissions should be 5-7 pages long, excluding references, and follow the IJCAI-25 template. Reviewing is single-blind: the authors' identities will be visible to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).

Accepted papers will be presented as posters during the workshop and listed on the website (non-archival/without proceedings). In addition, a small number of accepted papers will be selected for presentation as contributed talks. We also welcome submissions of unpublished papers, including those submitted to or accepted at other venues, provided that the other venue allows it.

Submission link: https://cmt3.research.microsoft.com/AI4TSconf2025

Any questions may be directed to the workshop e-mail address: ai4ts.ijcai@gmail.com

The Microsoft CMT service was used for managing the peer-reviewing process for this workshop. This service was provided for free by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.

Key Dates

 

Workshop Paper Submission Due Date: June 6th, 2025 (AoE) (extended from May 31st, 2025)

Notification of Paper Acceptance: June 14th, 2025 (AoE) (extended from June 10th, 2025)

IJCAI-25 Workshops: August 17th, 2025

At least one author of each accepted paper *must* travel to the IJCAI venue in person, and submitting the same paper to multiple IJCAI workshops is forbidden.

Detailed Workshop Schedule:

August 17th


Location: Room 520F @ Montreal Convention Centre (Palais des congrès de Montréal)


Time (EDT, UTC-4) | Speaker | Title
9:00 am - 9:10 am | Dr. Dongjin Song | Opening Remarks
9:10 am - 9:55 am | Prof. Elynn Chen | Keynote Talk 1: Transfer Reinforcement Learning: Value-Based Methods for Non-Stationary MDPs
9:55 am - 10:40 am | | Oral Presentations:
  Paper 1: Dynamic Modes as Time Representation for Spatiotemporal Forecasting
  Paper 2: ViFusionTST: Deep Fusion of Time-Series Image Representations from Load Signals for Early Bed-Exit Prediction
  Paper 3: Long-Term Multivariate Time Series Generation with the Capture of Intervariate Dependencies and Variatewise Characteristics
10:40 am - 11:00 am | | Coffee Break
11:00 am - 11:45 am | Prof. Longbing Cao | Keynote Talk 2: Irregular Multivariate Time Series Modeling: In Time and Frequency Domains
11:50 am - 2:00 pm | | Lunch Break
2:00 pm - 2:45 pm | Dr. Nicolas Chapados | Keynote Talk 3: Context is Key: A Benchmark for Forecasting with Essential Textual Information
2:45 pm - 3:30 pm | Prof. Jingchao Ni | Keynote Talk 4: Cross-Modal Knowledge Transfer in Time Series via Multimodal Views
3:30 pm - 4:00 pm | | Coffee Break + Poster Setup
4:00 pm - 5:30 pm | | Poster Session

Speakers

Prof. Longbing Cao

Distinguished Chair Professor in AI & ARC Future Fellow (Level 3)
Macquarie University

Title: Irregular Multivariate Time Series Modeling: In Time and Frequency Domains

Real-life multivariate time series (MTS) are often irregular, presenting irregularities including non-IID, stylistic, asymmetric, inconsistent, and dynamic characteristics. High-dimensional and multi-spectral multivariates are even more challenging to model. This talk reviews such challenges and briefly introduces some of our recent progress in deep MTS modelling of such irregularities and non-IIDnesses (interactions, couplings, and heterogeneities) in the time, frequency, and joint time-frequency domains. The approaches synergise deep neural learning with statistical and variational learning, copula methods, shallow-to-deep non-IID learning, basis functions, etc.

Prof. Elynn Chen

Assistant Professor
NYU Stern School of Business

Title: Transfer Reinforcement Learning: Value-Based Methods for Non-Stationary MDPs

In dynamic decision-making scenarios across business, healthcare, and education, leveraging data from diverse populations can significantly enhance reinforcement learning (RL) performance for specific target populations, especially when target samples are limited. We develop comprehensive frameworks for transfer learning in RL, addressing both stationary Markov decision processes (MDPs) with iterative Q*-learning and non-stationary finite-horizon MDPs with backward inductive Q*-learning. For stationary MDPs, we propose an iterative Q*-learning algorithm with knowledge transfer, establishing theoretical justifications through faster convergence rates under similarity assumptions. For non-stationary finite-horizon MDPs, we introduce two key innovations: (1) a novel "re-weighted targeting procedure" that enables cross-stage transfer along multiple temporal steps, and (2) transfer deep Q*-learning that leverages neural networks as function approximators. We demonstrate that while naive sample pooling strategies may succeed in regression settings, they fail in MDPs, necessitating our more sophisticated approach. We establish theoretical guarantees for both settings, revealing the relationship between statistical performance and MDP task discrepancy. Our analysis illuminates how source and target sample sizes impact transfer effectiveness. The framework accommodates both transferable and non-transferable transition density ratios while assuming reward function transferability. Our analytical techniques have broader implications, extending to supervised transfer learning with neural networks and domain shift scenarios. Empirical evidence from both synthetic and real datasets validates our theoretical results, demonstrating significant improvements over single-task learning rates and highlighting the practical value of strategically constructed transferable RL samples in both stationary and non-stationary contexts.

Dr. Nicolas Chapados

Vice-President of Research
ServiceNow Inc.

Title: Context is Key: A Benchmark for Forecasting with Essential Textual Information

Forecasting is a critical task in decision making across numerous domains. While historical numerical data provide a start, they fail to convey the complete context for reliable and accurate predictions. Human forecasters frequently rely on additional information, such as background knowledge and constraints, which can be efficiently communicated through natural language. However, in spite of recent progress with LLM-based forecasters, their ability to effectively integrate this textual information remains an open question. To address this, we introduce “Context is Key” (CiK), a time-series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities; crucially, every task in CiK requires understanding textual context to be solved successfully. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters, and propose a simple-yet-effective LLM prompting method that outperforms all other tested methods on our benchmark. Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings. This benchmark aims at advancing multimodal forecasting, promoting models that are both accurate and accessible to decision-makers with varied technical expertise. The benchmark can be visualized at https://servicenow.github.io/context-is-key-forecasting/.

Prof. Jingchao Ni

Assistant Professor
University of Houston

Title: Cross-Modal Knowledge Transfer in Time Series via Multimodal Views

Time series analysis has progressed from traditional autoregressive models to deep learning, Transformers, and large foundation models. These advances have expanded model design possibilities and, intriguingly, enabled time series problem-solving across multiple modalities, greatly enhancing downstream applications. In this talk, I will present an overview of recent developments in large foundation models for time series, highlighting the typical framework for knowledge transfer from non-time-series modalities to time series. I will then delve into the emerging area of cross-modal knowledge transfer via multimodal views (MMVs) of time series, discussing (1) the generation of MMVs (e.g., linguistic and visual views) of time series; (2) methods for modeling time series via MMVs; and (3) the integration of MMVs with multimodal models. This talk aims to review state-of-the-art AI techniques for time series, uncover unique challenges, and share our recent findings in this promising area.

Organization Committee

The following are arranged in alphabetical order.

 

Dongjin Song

Assistant Professor, University of Connecticut

 

Qingsong Wen

Head of AI Research & Chief Scientist
Squirrel AI

 

Sanjay Purushotham

Assistant Professor
University of Maryland Baltimore County

 

Elynn Chen

Assistant Professor
New York University

 

Haifeng Chen

Head
Data Science and System Security Department at NEC Laboratories America

 

Yao Xie

Professor
Georgia Institute of Technology

 

Yuxuan Liang

Assistant Professor
Hong Kong University of Science and Technology (Guangzhou)

 

Shirui Pan

Professor
Griffith University

 

Wei Cheng

Senior Researcher
NEC Labs America

 

Yingjie Zhou

Associate Professor
Sichuan University

 

Li Zhang

Assistant Professor
University of Texas Rio Grande Valley