All Posts
-
Multimodal Few-Shot Learning with Frozen Language Models | Paper Writing 1/Related_Work | 2024. 11. 6. 10:21
https://arxiv.org/pdf/2106.13884 (Jun 2021, NeurIPS 2021)
Abstract: When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned imag..
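The preview cuts off mid-sentence, but the core mechanism is easy to sketch: a trainable vision encoder maps an image to a short sequence of embeddings (a "visual prefix") that is prepended to the text embeddings of a language model whose weights are kept frozen, so only the encoder learns from captioning data. A minimal PyTorch sketch under those assumptions; the module names, sizes, and the tiny stand-in LM below are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

# Trainable vision-to-prefix module (hypothetical names/sizes): pooled image
# features become n_prefix embeddings sized for the language model.
class VisualPrefix(nn.Module):
    def __init__(self, img_feat_dim=2048, lm_dim=768, n_prefix=2):
        super().__init__()
        self.n_prefix = n_prefix
        self.proj = nn.Linear(img_feat_dim, n_prefix * lm_dim)

    def forward(self, img_feats):                     # (B, img_feat_dim)
        out = self.proj(img_feats)                    # (B, n_prefix * lm_dim)
        return out.view(img_feats.size(0), self.n_prefix, -1)

# Toy stand-in for the frozen LM; in the paper this is a large pretrained
# autoregressive transformer.
lm_dim = 768
lm_embed = nn.Embedding(50257, lm_dim)
lm_body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True),
    num_layers=2,
)
for p in list(lm_embed.parameters()) + list(lm_body.parameters()):
    p.requires_grad = False                           # LM weights stay frozen

prefix = VisualPrefix()                               # only this module trains
img_feats = torch.randn(4, 2048)                      # stand-in image features
token_ids = torch.randint(0, 50257, (4, 16))          # stand-in caption tokens
inputs = torch.cat([prefix(img_feats), lm_embed(token_ids)], dim=1)
hidden = lm_body(inputs)       # gradients flow through the frozen LM into prefix
print(hidden.shape)            # torch.Size([4, 18, 768])
```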
-
[TSMixer] An All-MLP Architecture for Time Series Forecasting | Paper Writing 1/Related_Work | 2024. 11. 4. 00:43
https://arxiv.org/pdf/2303.06053 (Mar 2023)
https://github.com/google-research/google-research/tree/master/tsmixer
Abstract: Real-world time-series datasets are often multivariate with complex dynamics. To capture this complexity, high-capacity architectures like recurrent- or attention-based sequential deep learning models have become popular. However, recent work demonstrates that simple univariate..
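The preview stops before the architecture, but the "all-MLP" design the title names reduces to a simple repeated block: one MLP mixes information across time steps and another mixes across features (variates), each with a residual connection, in the spirit of MLP-Mixer adapted to forecasting. A minimal sketch of such a mixing block; hidden sizes are assumptions, and the paper's normalization and dropout are omitted:

```python
import torch
import torch.nn as nn

# One TSMixer-style block: time mixing, then feature mixing, both residual.
class MixerBlock(nn.Module):
    def __init__(self, seq_len, n_features, hidden=64):
        super().__init__()
        self.time_mlp = nn.Sequential(   # mixes along the time axis
            nn.Linear(seq_len, hidden), nn.ReLU(), nn.Linear(hidden, seq_len)
        )
        self.feat_mlp = nn.Sequential(   # mixes along the feature axis
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_features)
        )

    def forward(self, x):                # x: (batch, seq_len, n_features)
        # Transpose so the Linear acts on the seq_len dimension, then back.
        x = x + self.time_mlp(x.transpose(1, 2)).transpose(1, 2)
        # Linear already acts on the trailing feature dimension.
        return x + self.feat_mlp(x)

block = MixerBlock(seq_len=96, n_features=7)
y = block(torch.randn(32, 96, 7))
print(y.shape)                           # torch.Size([32, 96, 7])
```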
-
[GPT4MTS] Prompt-Based Large Language Model for Multimodal Time-Series Forecasting | Paper Writing 1/Related_Work | 2024. 11. 3. 17:01
https://doi.org/10.1609/aaai.v38i21.30383 (Mar 2024)
Abstract: Time series forecasting is an essential area of machine learning with a wide range of real-world applications. Most of the previous forecasting models aim to capture dynamic characteristics from uni-modal numerical historical data. Although extra knowledge can boost the time series forecasting performance, it is hard to collect such i..
-
[Time-MoE] Billion-Scale Time Series Foundation Models with Mixture of Experts | Paper Writing 1/Related_Work | 2024. 11. 1. 10:52
https://arxiv.org/pdf/2409.16040 (Sep 2024)
https://github.com/Time-MoE/Time-MoE
Abstract: Deep learning for time series forecasting has seen significant advancements over the past decades. However, despite the success of large-scale pre-training in language and vision domains, pre-trained time series models remain limited in scale and operate at a high cost, hindering the development of larger capab..
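The preview truncates, but the scaling mechanism in the title is the standard sparse mixture-of-experts layer: a gating network routes each token to the top-k of E expert MLPs, so parameter count grows with E while per-token compute stays roughly constant. A generic top-k routing sketch, not Time-MoE's exact configuration:

```python
import torch
import torch.nn as nn

# Sparse MoE feed-forward layer: each token activates only k of n_experts MLPs.
class SparseMoE(nn.Module):
    def __init__(self, d_model=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)     # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                             # x: (n_tokens, d_model)
        scores = self.gate(x)                         # (n_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)    # pick top-k experts/token
        weights = weights.softmax(dim=-1)             # renormalize over chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):     # run each expert once on
            for slot in range(self.k):                # the tokens routed to it
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(10, 256)).shape)                # torch.Size([10, 256])
```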
-
[Chronos] Learning the Language of Time Series | Paper Writing 1/Related_Work | 2024. 10. 30. 23:38
https://arxiv.org/pdf/2403.07815
https://github.com/amazon-science/chronos-forecasting
Abstract: We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models. Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains existing transformer-based language model architectures on these tokenized time series via the cro..
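The one concrete detail the preview keeps (scaling plus quantization into a fixed vocabulary) is enough to sketch: normalize each series by its mean absolute value, then bucket the scaled values into uniform bins whose indices serve as token ids. A numpy sketch; the bin range and vocabulary size below are illustrative assumptions, not the paper's defaults:

```python
import numpy as np

# Chronos-style tokenization sketch: mean scaling, then uniform quantization.
def tokenize(series, n_bins=1024, low=-15.0, high=15.0):
    scale = np.abs(series).mean() or 1.0         # mean scaling; guard zero
    scaled = series / scale
    edges = np.linspace(low, high, n_bins - 1)   # uniform bin edges
    tokens = np.digitize(scaled, edges)          # token id = bin index
    return tokens, scale

def detokenize(tokens, scale, n_bins=1024, low=-15.0, high=15.0):
    centers = np.linspace(low, high, n_bins)     # representative value per bin
    return centers[tokens] * scale

series = np.sin(np.linspace(0, 8 * np.pi, 200)) * 5 + 10
tokens, scale = tokenize(series)
recon = detokenize(tokens, scale)
print(tokens[:8], float(np.abs(series - recon).max()))  # small rounding error
```

Mapping values onto a discrete vocabulary like this is what lets the framework reuse existing transformer LM architectures and a categorical training objective on time series unchanged.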
-
[MOMENT] A Family of Open Time-series Foundation Models | Paper Writing 1/Related_Work | 2024. 10. 30. 17:16
https://arxiv.org/pdf/2402.03885 (Feb 2024, ICML 2024)
https://github.com/moment-timeseries-foundation-model/moment
Abstract: We introduce MOMENT, a family of open-source foundation models for general-purpose time series analysis. Pre-training large models on time series data is challenging due to (1) the absence of a large and cohesive public time series repository, and (2) diverse time series charac..