All posts
-
Modeling | Paper Writing 1/Experiments | 2024. 10. 20. 13:36
I built something weird.... and it's running right now.... after it threw tons of errors and I fixed and fixed and fixed it again.. it finally.. runs. T_T Is matching tensor dimensions brutal for everyone, or just me....? A parade of transpose, permute, rearrange, squeeze, and unsqueeze.... ;;;; It seems more reliable to put an explicit debugging mode into the code and, whenever the debugging branch is taken, print out the tensor shape at every layer. lol I'm going to run to the convenience store for snacks. Surely it won't throw an error while I'm gone......?? Oh gods... please let this wrap up beautifully.
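The per-layer shape printing described above can be sketched with PyTorch forward hooks; this is a minimal illustration, and the `DEBUG` flag and `attach_shape_hooks` helper are hypothetical names, not from any particular codebase.

```python
# Sketch: a debug mode that prints each leaf layer's output shape via
# forward hooks, so dimension mismatches surface at the offending layer.
import torch
import torch.nn as nn

DEBUG = True  # toggle the debugging branch

def attach_shape_hooks(model: nn.Module):
    """Register a forward hook on every leaf module that prints
    the module's name and the shape of its output tensor."""
    handles = []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inputs, output, name=name):
                if torch.is_tensor(output):
                    print(f"{name:>4s} -> {tuple(output.shape)}")
            handles.append(module.register_forward_hook(hook))
    return handles  # call h.remove() on each handle to detach later

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

if DEBUG:
    attach_shape_hooks(model)

x = torch.randn(4, 16)   # (batch, features)
y = model(x)             # hooks print each layer's output shape
```

Keeping the returned handles means the hooks can be removed once shapes look right, so the production path stays print-free.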
-
A decoder-only foundation model for time-series forecasting | Paper Writing 1/Related_Work | 2024. 10. 19. 15:01
https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/ (February 2, 2024, Google Research) TimesFM is a forecasting model, pre-trained on a large time-series corpus of 100 billion real-world time-points, that displays impressive zero-shot performance on a variety of public benchmarks from different domains and granularities. Time-series forecasting is ubiquitous in..
-
[Lag-Llama] Towards Foundation Models for Probabilistic Time Series Forecasting | Paper Writing 1/Related_Work | 2024. 10. 18. 22:02
https://arxiv.org/pdf/2310.08278 https://github.com/time-series-foundation-models/lag-llama (Oct 2023) Abstract: Over the past years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite the success of foundation models in modalities such as natural language processing and computer vision..
-
[ForecastPFN] Synthetically-Trained Zero-Shot Forecasting | Paper Writing 1/Related_Work | 2024. 10. 18. 01:29
https://arxiv.org/pdf/2311.01933 https://github.com/abacusai/forecastpfn (Nov 2023) Abstract: The vast majority of time-series forecasting approaches require a substantial training dataset. However, many real-life forecasting applications have very few initial observations, sometimes just 40 or fewer. Thus, the applicability of most forecasting methods is restricted in data-sparse commercial appli..
-
[MOIRAI] Unified Training of Universal Time Series Forecasting Transformers | Paper Writing 1/Related_Work | 2024. 10. 17. 15:42
https://arxiv.org/pdf/2402.02592 https://github.com/SalesforceAIResearch/uni2ts (Feb 2024) Abstract: Deep learning for time series forecasting has traditionally operated within a one-model-per-dataset framework, limiting its potential to leverage the game-changing impact of large pre-trained models. The concept of universal forecasting, emerging from pre-training on a vast collection of time series ..
-
Refactoring | Paper Writing 1/Experiments | 2024. 10. 15. 00:20
Untangling complex logic and making everything explicit is a satisfying thing. (If only life could work that way too..) The moment where someone else's logic meets my own. The cleanup of the baseline models has made substantial progress. - Roughly 80%, maybe..? - Whether the models and components I simplified as much as possible, based on the logic as I understood it, so that I can reuse them, actually reproduce the experimental results from the papers -> so far they are showing similar performance levels. - There are parts I left without fully understanding, and those are fairly critical. I need to go over them again and fix them. - After fixing, I need to rerun everything. - I still need to clean up the models that aren't organized yet and add them. The key question is whether, after properly implementing and porting the idea I came up with, the results will turn out succe..
-
[TEMPO] Prompt-based Generative Pre-trained Transformer for Time Series Forecasting | Paper Writing 1/Related_Work | 2024. 10. 13. 01:51
https://arxiv.org/pdf/2310.04948 https://github.com/DC-research/TEMPO (Oct 2023) Abstract: The past decade has witnessed significant advances in time series modeling with deep learning. While achieving state-of-the-art results, the best-performing architectures vary highly across applications and domains. Meanwhile, for natural language processing, the Generative Pre-trained Transformer (GPT) has dem..
-
Are Language Models Actually Useful for Time Series Forecasting? | Paper Writing 1/Related_Work | 2024. 10. 10. 23:56
https://arxiv.org/pdf/2406.16964 https://github.com/BennyTMT/TS_models (Jun 2024) Abstract: Large language models (LLMs) are being applied to time series tasks, particularly time series forecasting. However, are language models actually useful for time series? After a series of ablation studies on three recent and popular LLM-based time series forecasting methods, we find that removing the LLM compon..