All Posts
-
[LLMTime] Large Language Models Are Zero-Shot Time Series Forecasters
Paper Writing 1/Related_Work 2024. 10. 7. 22:01
https://arxiv.org/pdf/2310.07820
https://github.com/ngruver/llmtime
(Oct 2023)
Abstract: By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the ..
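Since the abstract hinges on how a series is serialized into digit tokens, here is a minimal sketch of an LLMTime-style encoding. The fixed precision, per-character spacing, and comma separator are assumptions for illustration (the helper names are hypothetical); the actual repo additionally rescales values and adapts the format to each model's tokenizer.

```python
# Minimal sketch of digit-level serialization in the spirit of LLMTime.
# Assumptions: fixed decimal precision, space-separated characters so each
# digit becomes its own token, and " , " between values.

def encode_series(values, precision=2):
    """Render a numeric series as a digit string, e.g. [0.52, 1.3] -> '0 . 5 2 , 1 . 3 0'."""
    return " , ".join(" ".join(f"{v:.{precision}f}") for v in values)

def decode_series(text):
    """Invert encode_series: strip spaces within each comma-separated chunk, parse floats."""
    return [float(chunk.replace(" ", "")) for chunk in text.split(",")]

series = [0.52, 1.30, 2.71]
prompt = encode_series(series)
print(prompt)                        # '0 . 5 2 , 1 . 3 0 , 2 . 7 1'
assert decode_series(prompt) == series
```

With this framing, forecasting is just next-token generation: the prompt is the encoded history, and the decoded continuation is the forecast.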
-
[PatchTST] A Time Series is Worth 64 Words: Long-Term Forecasting with Transformers
Paper Writing 1/Related_Work 2024. 10. 2. 11:06
https://arxiv.org/pdf/2211.14730
(Nov 2022)
Abstract: We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which serve as input tokens to the Transformer; (ii) channel-independence, where each channel contains a s..
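To make the two components concrete, a minimal patching sketch follows. The patch length of 16 and stride of 8 are assumed here for illustration, and channel-independence is modeled by folding channels into the batch dimension; in the real model each patch would then be linearly projected to the Transformer's embedding dimension.

```python
# Minimal sketch of PatchTST-style patching with channel-independence.
import torch

def make_patches(x, patch_len=16, stride=8):
    """x: (batch, channels, seq_len) -> (batch * channels, num_patches, patch_len)."""
    b, c, t = x.shape
    x = x.reshape(b * c, t)          # channel-independence: each channel is its own sequence
    return x.unfold(dimension=1, size=patch_len, step=stride)

x = torch.randn(32, 7, 512)          # e.g. a 7-variate series of length 512
tokens = make_patches(x)
print(tokens.shape)                  # torch.Size([224, 63, 16]): 63 patch tokens per channel
```

Patching shortens the token sequence roughly by the stride, which is where the efficiency gain over per-timestep tokens comes from.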
-
thought 2
Paper Writing 1/Experiments 2024. 10. 1. 14:43
* Desiderata Looking back to when I was working at NIMS, all the research NIMS conducted was aimed at supporting decision-making in the field. Accurate weather forecasting technology is directly linked to safety and survival across industry and the country, and its impact is huge. For example, when a wind warning is issued, ships are unable to operate, and many inquiries and com..
-
[TimeGPT-1]
Paper Writing 1/Related_Work 2024. 9. 30. 17:51
https://arxiv.org/pdf/2310.03589
https://github.com/Nixtla/nixtla
(Oct 2023)
Abstract: In this paper, we introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training. We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-s..
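Since TimeGPT is served as a hosted model, zero-shot forecasting on an unseen dataset reduces to a single API call. The sketch below assumes the nixtla Python client and its NixtlaClient.forecast interface with the 'ds'/'y' column convention; treat the exact signature as an assumption and check the linked repo.

```python
# Hedged usage sketch of zero-shot forecasting with the hosted TimeGPT model,
# assuming the nixtla client (pip install nixtla) and a valid API key.
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR_API_KEY")   # placeholder key

# Any univariate series the model never saw during training.
df = pd.DataFrame({
    "ds": pd.date_range("2024-01-01", periods=48, freq="h"),
    "y": range(48),
})

fcst = client.forecast(df=df, h=12, time_col="ds", target_col="y")  # 12 steps ahead, zero-shot
print(fcst.head())
```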
-
[GPT4TS] One Fits All: Power General Time Series Analysis by Pretrained LM
Paper Writing 1/Related_Work 2024. 9. 30. 14:20
https://arxiv.org/pdf/2302.11939
(NeurIPS 2023 Spotlight) (Feb 2023)
Abstract: Although we have witnessed great success of pre-trained models in natural language processing (NLP) and computer vision (CV), limited progress has been made for general time series analysis. Unlike NLP and CV, where a unified model can be used to perform different tasks, specially designed approaches still dominate in each ..
-
[Time-LLM] Time Series Forecasting by Reprogramming Large Language Models
Paper Writing 1/Related_Work 2024. 9. 30. 10:11
https://arxiv.org/pdf/2310.01728
ICLR 2024 (Oct 2023)
Abstract: Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language processing (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for time series forecasting are often specialized, necessitating distinct designs for diff..
-
Why Does Contrastive Learning Work?
Research/Multimodal 2024. 9. 29. 23:13
* Why does contrastive learning produce good representations? * What is a good representation in the first place? * What conditions must hold for contrastive learning to succeed? To answer these questions, I picked up two papers that give theoretical proofs. I skimmed the intermediate derivations since they were hard to follow, but I think I now roughly understand how contrastive learning works (how the loss function arranges feature representations in the feature space). For contrastive learning to succeed, a large batch size, the augmentation method, hard negati..
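For reference, a minimal InfoNCE sketch makes the geometry discussed above concrete: in-batch negatives are why large batch sizes matter, and the temperature controls how strongly hard negatives are weighted. The SimCLR-style two-view setup and the temperature of 0.1 are assumptions for illustration, not the exact setups analyzed in the two papers.

```python
# Minimal InfoNCE sketch: pulls matched pairs together and pushes all other
# in-batch pairs apart (alignment and uniformity on the unit hypersphere).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(info_nce(z1, z2))
```

Every off-diagonal entry of the logits matrix is a negative, so a batch of 256 gives each sample 255 negatives for free, which is one concrete reading of the large-batch-size condition.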