Research
-
Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention)
Research/NLP_reference · 2024. 2. 13. 15:15
Source: https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/ Sequence-to-sequence models are deep learning models that have achieved a lot of success in tasks like machine translation, text summarization, and image captioning. Google Translate started using such a model in production in late 2016. These models are explained in the two pioneeri…
-
The Illustrated Transformer
Research/NLP_reference · 2024. 2. 13. 10:05
Source: https://jalammar.github.io/illustrated-transformer/ Attention is a concept that helped improve the performance of neural machine translation applications. The Transformer is a model that uses attention to boost the speed with which these models can be trained. The Transformer outperforms the Google Neural Machine Translation model in specific tasks. The biggest benefit, however, comes from…