Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, and the biological and environmental sciences. Given the availability of massive amounts of data with complex underlying structures and distributions, together with high-performance computing platforms, there is great demand for new
theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, and causal analysis) in various types of applications.
The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions and key technical
issues, and to present solutions that tackle related challenges in practical applications. The workshop will cover both the
theoretical and practical aspects of time series analysis and aims to trigger research innovation in theories, algorithms, and applications. We invite researchers and AI
practitioners from machine learning, data science, statistics, econometrics, and related areas to contribute to this workshop.
This workshop encourages submissions of innovative solutions for a broad range of time series analysis problems. Topics of interest include but are not limited to the following:
Submissions should be 4-7 pages long, excluding references, and follow the AAAI 2025 template. Reviewing is double-blind, so author identities must not be revealed to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).
Accepted papers will be presented as posters during the workshop and listed on the website (non-archival, without proceedings). A small number of accepted papers will be selected for contributed talks (15-minute oral presentations). We also welcome submissions of unpublished papers, including papers submitted to or accepted at other venues, provided the other venue allows it.
Submission link: https://cmt3.research.microsoft.com/AIforTS2026
Note: Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Workshop Paper Submission Due Date: November 2, 2025 (23:59 AoE; extended from October 22, 2025)
Notification of Paper Acceptance: November 15, 2025 (23:59 AoE; extended from November 5, 2025)
AAAI AI4TS Workshop Date: Jan 26, 2026
* Each oral presentation consists of 8 minutes presentation and 2 minutes Q&A.
Location: Singapore EXPO, Room: Garnet 217
AI discourse is dominated by large foundation models, yet many of the most operationally critical deployments remain fundamentally time-series problems: multivariate sensor streams, equipment telemetry, longitudinal logs, and financial/health monitoring data. These settings are defined by rare, fragmented events (the failures we care about happen infrequently), stringent latency requirements (detection must be early enough to prevent damage), and privacy and deployment constraints that limit centralized training and cloud-only inference. This keynote advocates Prudent AI for Time Series: right-sized, lightweight, and transparent methods that can be deployed broadly and maintained by lean teams—without sacrificing reliability. I will present a set of practical methodologies and systems spanning the time-series lifecycle: (i) correlation-graph-based early anomaly detection (CAD/EADS), which models multivariate time series as evolving sensor correlation graphs and triggers alerts when correlation structures shift—often earlier than value-threshold alarms; (ii) controllable time-series generation (CTS) to augment scarce anomaly classes and enable simulation when labeled rare events are limited; and (iii) benchmarks and decision-support tooling (TSGBench/CTBench, TSGAssist) that make model selection and evaluation reproducible and accessible. I will conclude with a deployment perspective: plug-and-play “AI boxes” for on-edge time-series monitoring and an “AI-box factory” vision for modular orchestration, governance, and lifecycle oversight—bringing trustworthy time-series AI from research into real operations.
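The correlation-graph idea in this keynote can be illustrated with a minimal sketch (this is a generic illustration of the concept, not the actual CAD/EADS implementation; window sizes and the Frobenius-distance score are assumptions): compute the sensor correlation matrix over sliding windows and flag large shifts between consecutive windows.

```python
import numpy as np

def correlation_shift_scores(X, window=50, stride=25):
    """Score correlation-structure drift in a multivariate series.

    X: array of shape (T, n_sensors). For each sliding window we compute
    the sensor correlation matrix and score the Frobenius distance to the
    previous window's matrix; large jumps suggest a structural shift,
    even when individual sensor values stay within normal ranges.
    """
    scores, prev = [], None
    for start in range(0, X.shape[0] - window + 1, stride):
        C = np.corrcoef(X[start:start + window].T)
        if prev is not None:
            scores.append(np.linalg.norm(C - prev))  # Frobenius norm
        prev = C
    return np.array(scores)

# Toy example: two sensors track a common signal, then decouple at t=200.
rng = np.random.default_rng(0)
base = rng.standard_normal(400)
a = base + 0.1 * rng.standard_normal(400)
b = np.concatenate([base[:200], rng.standard_normal(200)])
X = np.stack([a, b], axis=1)
s = correlation_shift_scores(X, window=50, stride=25)
# The largest shift score should land near the decoupling point,
# before either sensor's values look individually anomalous.
```

Note the design point this illustrates: a value-threshold alarm never fires here (both series stay near zero mean, unit variance), while the correlation-structure score spikes at the regime change.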
Time series and event data lie at the core of industrial asset lifecycle management, underpinning condition monitoring, anomaly detection, failure prediction, and maintenance decision making. While traditional time series models address forecasting or detection in isolation, real-world operations require integrated reasoning across heterogeneous temporal signals, events, and actions. Recent advances in large language models enable a new class of AI agents that can reason over time series, contextual knowledge, and operational constraints to drive end-to-end workflows. This talk introduces AssetOpsBench, a unified benchmark and evaluation framework for agentic AI in time series and event-driven industrial applications. AssetOpsBench combines real-world multivariate time series data, events from alerts and work orders, and standardized failure taxonomies with scenario-driven tasks such as anomaly diagnosis, predictive maintenance, and intervention planning. It provides a systematic evaluation of LLM performance dimensions to assess agent capabilities in time series perception, temporal reasoning, decision making, and action execution. AssetOpsBench offers a reproducible foundation for advancing trustworthy, scalable AI agents in conditional asset maintenance and broader time series–centric applications in other domains.
Time series models often function as black boxes, limiting scientists' ability to understand underlying temporal mechanisms. My research addresses this gap by building trustworthy AI systems with explainability at their core. This talk will cover methods and evaluation for explainable time series learning, including TimeX++, which learns time series explanations through information bottleneck principles, and F-Fidelity, a robust framework for faithfulness evaluation. I will also discuss future directions such as explanation-assisted learning and human-AI interaction.
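As a generic illustration of perturbation-based faithfulness evaluation (a common baseline idea; this is not the F-Fidelity method itself, and the masking scheme and toy model are assumptions), one can compare the model-output drop from masking the most-salient time steps against masking randomly chosen steps:

```python
import numpy as np

def occlusion_faithfulness(model, x, saliency, k=10, baseline=0.0):
    """Mask the k most-salient time steps and measure the output drop.

    A faithful saliency map should cause a larger drop than masking k
    random steps; the returned score is the difference of the two drops
    (positive = explanation beats the random baseline).
    """
    top = np.argsort(saliency)[::-1][:k]          # k highest-saliency steps
    rand = np.random.default_rng(0).choice(len(x), size=k, replace=False)

    def masked_output(idx):
        z = x.copy()
        z[idx] = baseline                          # occlude chosen steps
        return model(z)

    full = model(x)
    drop_top = full - masked_output(top)
    drop_rand = full - masked_output(rand)
    return drop_top - drop_rand

# Toy model whose output depends only on steps 40-49.
model = lambda z: z[40:50].sum()
x = np.ones(100)
saliency = np.zeros(100)
saliency[40:50] = 1.0                              # highlights the true steps
score = occlusion_faithfulness(model, x, saliency, k=10)
```

A saliency map that highlights the truly influential steps yields a positive score, while an uninformative map scores near zero; robust evaluators refine this basic recipe to resist degenerate explanations.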
* Wavelet-based Disentangled Adaptive Normalization for Non-stationary Time Series Forecasting
Junpeng Lin, Tian Lan, Bo Zhang, Ke Lin, Dandan Miao, Huiru He, Jiantao Ye, Yan-fu Li, Chen Zhang
* HINTS: Extraction of Human Insights from Time-Series Without External Sources
Sheo Yon Jhin, Noseong Park
* JoCE: Joint Counterfactual Explanation for Interpretable Time Series Anomaly Detection
Woojin Jeong, Geonwoo Shin, Jaewook Lee
* Dualformer: Time-Frequency Dual Domain Learning for Long-term Time Series Forecasting
Jingjing Bai, Yoshinobu Kawahara
* Uncovering Zero-Shot Generalization Gaps in Time-Series Foundation Models Using Real-World Videos
Lujun Li, Lama Sleem, Yiqun Wang, Yangjie Xu, Niccolo Gentile, Radu State
* Probabilistic Time Series Foundation Model with Uncertainty Decomposition: From Theory to Financial Practice
Arundeep Chinta, Lucas Vinh Tran, Jay Katukuri
* FTDL: Enhancing Time Series Forecasting Through Frequency-Domain and Temporal-Domain Learning
Yaling Tao, Kentaro Takagi, Takashi Watanabe, Kouta Nakata
* PatchDecomp: Interpretable Patch-Based Time Series Forecasting
Hiroki Tomioka, Genta Yoshimura
* Day-Ahead Electricity Price Forecasting for Volatile Markets Using Foundation Models with Regularization Strategy
Kritchanat Ponyuenyong, Pengyu Tu, Jia Wei Tan, Wei Soon Cheong, Jamie Ng Suat Ling, Lianlian Jiang
* ELITS - Efficient Lightweight Imputation for Time Series
Pranav Sastry, Kalyan Reddy, Sumanta Mukherjee, Vijay Ekambaram, Pankaj Dayama, Prathosh AP
* Assessing Electricity Demand Forecasting with Exogenous Data in Time Series Foundation Models
Wei Soon Cheong, Lian Lian Jiang, Jamie Ng Suat Ling
* Spectral Predictability as a Fast Reliability Indicator for Time Series Forecasting Model Selection
Oliver Wang, Pengrui Quan, Kang Yang, Mani Srivastava
* State of Health Estimation of Batteries Using a Time-Informed Dynamic Sequence-Inverted Transformer
Janak M. Patel, Milad Ramezankhani, Anirudh Deodhar, Dagnachew Birru
* Learning and Explaining Semantic Concepts in Deep Time Series Models
Matilde Silva, Andre Carreiro, Duarte Folgado
* Do Large Language Models (LLMs) Understand Chronology?
Pattaraphon Kenny Wongchamcharoen, Paul Glasserman
* Video-Text Temporal Localization via Multi-Scale Convolution and Dynamic Routing
Gengtian Shi, Jinze Yu, Chenhao Wu, Shaofei Wang, Eiji Fukuzawa, Junjie Tang, Hiroshi Onoda, Jiang Liu
* Switch-Hurdle: A MoE Encoder with AR Hurdle Decoder for Intermittent Demand Forecasting
Fabian Musat, Simona Cabuz
* Partial Pooling for Improved Forecast Accuracy in Complex Industrial
Law Chew Sang, Kenneth Chin, Lee Kin Kuan, Mohammad Shahbaz Hussain, Nur Syafrina Shahib, Adam Zaini, Abhishek Singh, Meenakshi Mishra, Peng Xu
* Vision and Intention Boost Large Language Model in Long-Term Action Anticipation
Congqi Cao, Lanshu Hu, Yating Yu, Yanning Zhang
* The Forecast Critic: Leveraging Large Language Models for Poor Forecast Identification
Luke Bhan, Hanyu Zhang, Andrew Gordon Wilson, Michael W. Mahoney, Chuck Arvin
* Learning from Charging Time Series: Transferable EV Battery Capacity Estimation
Andrei Zakharov, Ilya Makarov
* Be Wary of Your Time Series Preprocessing
Sofiane Ennadir, Tianze Wang, Sahar Asadi, Oleg Smirnov, Lele Cao
* STI-VAE: Disentangled Dynamic Latent Representations for Financial Risk Prediction
Mingyuan Shao
* Coherent Multi-Agent Trajectory Forecasting in Team Sports with CausalTraj
Wei Zhen Teoh
* Shapelets-Enriched Selective Forecasting using Time Series Foundation Models
Shivani Tomar, Seshu Tirupathi, Elizabeth Daly, Ivana Dusparic
* Diagnosing Data Irregularities in Financial Risk Forecasting: A Wavelet-Based Augmentation Study
Charles B Bramble, Xianghua Xie, Gary Tam, Kevin Mclafferty
* Time-series for Causal Inference Method in Environmental Research
Yanran Li, Lingke Jiang
* Adversarial Spatio-Temporal Attention Networks for Epileptic Seizure Forecasting
Zan Li, Kyongmin Yeo, Wesley Gifford, Lara Marcuse, Madeline Fields, Bulent Yener
* Learning Intermittent Time Series with the Partial Autocorrelation Function Integral Transform (PACFIT)
Justin M. Baker, Tyler Headley, Narayanan Kannan, Anand Somayajula, Adrien Weihs, P. Jeffrey Brantingham, Andrea L. Bertozzi
* Small Vocabularies, Big Gains: Pretraining and Tokenization in Time Series Models
Alexis Roger, Gwen Legate, Kashif Rasul, Yuriy Nevmyvaka, Irina Rish
* Sig-Patchformer: A Path Signature Based Transformer for Efficient Time Series Forecasting
Rohan Akkineni, Chandrasekhar Uddagiri
* Signed Dual Attention: Capturing Signed Dependencies in Time Series Forecasting
Balthazar Courvoisier, Tristan Cazenave
* Semantics-Aware Scene Encoder for Interpretable Active Learning in E2E Autonomous Driving
Masaaki Inoue, Shintaro Fukushima
* Data Adjustment Based on Model Characteristics for Few-Shot Time Series Forecasting
Yuna Saka, Tomoaki Yamazaki, Kouzou Ohara
* Time-Series at the Edge: Tiny Separable CNNs for Wearable Gait Detection and Optimal Sensor Placement
Andrea Procopio, Marco Esposito, Sara Raggiunto, Andrey Gizdov, Alberto Belli, Paola Pierleoni
* Accelerating Time Series Foundation Models with Speculative Decoding
Pranav Subbaraman, Fang Sun, Yue Yao, Huacong Tang, Xiao Luo
* A Unified Fall Detection Benchmark and Model for Early, Event-Centric Evaluation
Jerry Liu, Jaelyn Liang, Nidhi Seethapathi