# nlp-journey

**Repository Path**: deeplearningrepos/nlp-journey

## Basic Information

- **Project Name**: nlp-journey
- **Description**: Documents, papers and codes related to Natural Language Processing, including Topic Model, Word Embedding, Named Entity Recognition, Text Classification, Text Generation, Text Similarity, Machine Translation, etc. All codes are implemented in TensorFlow 2.0.
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-03-30
- **Last Updated**: 2021-08-31

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# nlp journey

[![Star](https://img.shields.io/github/stars/msgi/nlp-journey?color=success)](https://github.com/msgi/nlp-journey/)
[![Fork](https://img.shields.io/github/forks/msgi/nlp-journey)](https://github.com/msgi/nlp-journey/fork)
[![GitHub Issues](https://img.shields.io/github/issues/msgi/nlp-journey?color=success)](https://github.com/msgi/nlp-journey/issues)
[![License](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/msgi/nlp-journey)

***All implemented in TensorFlow 2.0. [Codes](smartnlp/)***

## 1. Basics

* [Tutorials](tutorials/)
* [Frequent questions](docs/fq.md)

## 2. Books ([`baiduyun`](https://pan.baidu.com/s/14z5SnM28guarUZfZihdTPw), code: txqx)

1. Handbook of Graphical Models. [`online`](https://stat.ethz.ch/~maathuis/papers/Handbook.pdf)
2. Deep Learning. [`online`](https://www.deeplearningbook.org/)
3. Neural Networks and Deep Learning. [`online`](http://neuralnetworksanddeeplearning.com/)
4. Speech and Language Processing. [`online`](http://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)

## 3. Papers

### 01) Transformer papers

1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [`paper`](https://arxiv.org/abs/1810.04805)
2. GPT-2: Language Models are Unsupervised Multitask Learners. [`paper`](https://blog.openai.com/better-language-models/)
3. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. [`paper`](https://arxiv.org/abs/1901.02860)
4. XLNet: Generalized Autoregressive Pretraining for Language Understanding. [`paper`](https://arxiv.org/abs/1906.08237)
5. RoBERTa: A Robustly Optimized BERT Pretraining Approach. [`paper`](https://arxiv.org/abs/1907.11692)
6. DistilBERT: a distilled version of BERT: smaller, faster, cheaper and lighter. [`paper`](https://arxiv.org/abs/1910.01108)
7. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. [`paper`](https://arxiv.org/abs/1909.11942)
8. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [`paper`](https://arxiv.org/abs/1910.10683)
9. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. [`paper`](https://openreview.net/pdf?id=r1xMH1BtvB)
10. GPT-3: Language Models are Few-Shot Learners. [`paper`](https://arxiv.org/pdf/2005.14165.pdf)
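All of the papers above build on the scaled dot-product attention of "Attention Is All You Need" (listed under NMT below). A minimal TensorFlow 2.0 sketch for reference; the function name and the 1-means-blocked mask convention are illustrative assumptions, not code from this repository:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (..., seq_len, depth); mask: 1.0 where attention is blocked."""
    scores = tf.matmul(q, k, transpose_b=True)   # (..., seq_len_q, seq_len_k)
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(dk)           # scale by sqrt(d_k)
    if mask is not None:
        scores += mask * -1e9                    # blocked positions -> ~0 after softmax
    weights = tf.nn.softmax(scores, axis=-1)     # attention distribution over keys
    return tf.matmul(weights, v), weights        # weighted sum of values
```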
### 02) Models

1. LSTM (Long Short-Term Memory). [`paper`](http://www.bioinf.jku.at/publications/older/2604.pdf)
2. Sequence to Sequence Learning with Neural Networks. [`paper`](https://arxiv.org/pdf/1409.3215.pdf)
3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. [`paper`](https://arxiv.org/pdf/1406.1078.pdf)
4. Residual Network (Deep Residual Learning for Image Recognition). [`paper`](https://arxiv.org/pdf/1512.03385.pdf)
5. Dropout (Improving Neural Networks by Preventing Co-adaptation of Feature Detectors). [`paper`](https://arxiv.org/pdf/1207.0580.pdf)
6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [`paper`](https://arxiv.org/pdf/1502.03167.pdf)

### 03) Summaries

1. An Overview of Gradient Descent Optimization Algorithms. [`paper`](https://arxiv.org/pdf/1609.04747.pdf)
2. Analysis Methods in Neural Language Processing: A Survey. [`paper`](https://arxiv.org/pdf/1812.08951.pdf)
3. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [`paper`](https://arxiv.org/pdf/1910.10683.pdf)
4. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. [`paper`](https://arxiv.org/pdf/2001.06937.pdf)
5. A Gentle Introduction to Deep Learning for Graphs. [`paper`](https://arxiv.org/pdf/1912.12693.pdf)
6. A Survey on Deep Learning for Named Entity Recognition. [`paper`](https://arxiv.org/pdf/1812.09449.pdf)
7. More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction. [`paper`](https://arxiv.org/pdf/2004.03186.pdf)
8. Deep Learning Based Text Classification: A Comprehensive Review. [`paper`](https://arxiv.org/pdf/2004.03705.pdf)
9. Pre-trained Models for Natural Language Processing: A Survey. [`paper`](https://arxiv.org/pdf/2003.08271.pdf)
10. A Survey on Contextual Embeddings. [`paper`](https://arxiv.org/pdf/2003.07278.pdf)
11. A Survey on Knowledge Graphs: Representation, Acquisition and Applications. [`paper`](https://arxiv.org/pdf/2002.00388.pdf)
12. Knowledge Graphs. [`paper`](https://arxiv.org/pdf/2003.02320v2.pdf)

### 04) Pre-training

1. A Neural Probabilistic Language Model. [`paper`](https://www.researchgate.net/publication/221618573_A_Neural_Probabilistic_Language_Model)
2. word2vec Parameter Learning Explained. [`paper`](https://arxiv.org/pdf/1411.2738.pdf)
3. Language Models are Unsupervised Multitask Learners. [`paper`](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
4. An Empirical Study of Smoothing Techniques for Language Modeling. [`paper`](https://dash.harvard.edu/bitstream/handle/1/25104739/tr-10-98.pdf?sequence=1)
5. Efficient Estimation of Word Representations in Vector Space. [`paper`](https://arxiv.org/pdf/1301.3781.pdf)
6. Distributed Representations of Sentences and Documents. [`paper`](https://arxiv.org/pdf/1405.4053.pdf)
7. Enriching Word Vectors with Subword Information (FastText). [`paper`](https://arxiv.org/pdf/1607.04606.pdf)
8. GloVe: Global Vectors for Word Representation. [`online`](https://nlp.stanford.edu/projects/glove/)
9. ELMo (Deep Contextualized Word Representations). [`paper`](https://arxiv.org/pdf/1802.05365.pdf)
10. Pre-Training with Whole Word Masking for Chinese BERT. [`paper`](https://arxiv.org/pdf/1906.08101.pdf)

### 05) Classification

1. Bag of Tricks for Efficient Text Classification (FastText). [`paper`](https://arxiv.org/pdf/1607.01759.pdf)
2. Convolutional Neural Networks for Sentence Classification. [`paper`](https://arxiv.org/pdf/1408.5882.pdf)
3. Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. [`paper`](http://www.aclweb.org/anthology/P16-2034)
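Paper 2 above (Kim's TextCNN) fits in a few lines of TensorFlow 2.0. A minimal sketch, with hyperparameters mirroring the paper's defaults (filter sizes 3/4/5, 100 feature maps, dropout 0.5); `build_text_cnn` and the shape arguments are illustrative assumptions, not code from this repository:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_text_cnn(vocab_size=20000, max_len=100, embed_dim=128,
                   num_classes=2, kernel_sizes=(3, 4, 5), filters=100):
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)   # token ids -> vectors
    pooled = []
    for k in kernel_sizes:
        c = layers.Conv1D(filters, k, activation="relu")(x)  # k-gram features
        pooled.append(layers.GlobalMaxPooling1D()(c))        # max-over-time pooling
    x = layers.Dropout(0.5)(layers.Concatenate()(pooled))
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```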
### 06) Text generation

1. A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. [`paper`](https://arxiv.org/pdf/1805.06553.pdf)
2. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. [`paper`](https://arxiv.org/pdf/1609.05473.pdf)

### 07) Text Similarity

1. Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. [`paper`](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.723.6492&rep=rep1&type=pdf)
2. Learning Text Similarity with Siamese Recurrent Networks. [`paper`](https://www.aclweb.org/anthology/W16-1617)
3. A Deep Architecture for Matching Short Texts. [`paper`](http://papers.nips.cc/paper/5019-a-deep-architecture-for-matching-short-texts.pdf)

### 08) QA

1. A Question-Focused Multi-Factor Attention Network for Question Answering. [`paper`](https://arxiv.org/pdf/1801.08290.pdf)
2. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. [`paper`](https://arxiv.org/pdf/1812.08989.pdf)
3. A Knowledge-Grounded Neural Conversation Model. [`paper`](https://arxiv.org/pdf/1702.01932.pdf)
4. Neural Generative Question Answering. [`paper`](https://arxiv.org/pdf/1512.01337v1.pdf)
5. Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. [`paper`](https://arxiv.org/abs/1612.01627)
6. Modeling Multi-turn Conversation with Deep Utterance Aggregation. [`paper`](https://arxiv.org/pdf/1806.09102.pdf)
7. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. [`paper`](https://www.aclweb.org/anthology/P18-1103)
8. Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. [`paper`](https://arxiv.org/pdf/1907.02848.pdf)

### 09) NMT

1. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. [`paper`](https://arxiv.org/pdf/1406.1078v3.pdf)
2. Neural Machine Translation by Jointly Learning to Align and Translate. [`paper`](https://arxiv.org/pdf/1409.0473.pdf)
3. Transformer (Attention Is All You Need). [`paper`](https://arxiv.org/pdf/1706.03762.pdf)
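Paper 2 above introduced the additive (Bahdanau) attention that made source-target alignment learnable. A minimal TensorFlow 2.0 sketch in the style of the official TF NMT tutorial; the class name and tensor shapes are illustrative assumptions, not code from this repository:

```python
import tensorflow as tf
from tensorflow.keras import layers

class BahdanauAttention(layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W1 = layers.Dense(units)  # projects the decoder state
        self.W2 = layers.Dense(units)  # projects the encoder outputs
        self.V = layers.Dense(1)       # scores each source position

    def call(self, query, values):
        # query: (batch, hidden) decoder state; values: (batch, src_len, hidden)
        query = tf.expand_dims(query, 1)                               # (batch, 1, hidden)
        score = self.V(tf.nn.tanh(self.W1(query) + self.W2(values)))  # (batch, src_len, 1)
        weights = tf.nn.softmax(score, axis=1)                         # alignment over source
        context = tf.reduce_sum(weights * values, axis=1)              # (batch, hidden)
        return context, weights
```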
### 10) Summary

1. Get To The Point: Summarization with Pointer-Generator Networks. [`paper`](https://arxiv.org/pdf/1704.04368.pdf)
2. Deep Recurrent Generative Decoder for Abstractive Text Summarization. [`paper`](https://aclweb.org/anthology/D17-1222)

### 11) Relation extraction

1. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. [`paper`](https://www.aclweb.org/anthology/D15-1203)
2. Neural Relation Extraction with Multi-lingual Attention. [`paper`](https://www.aclweb.org/anthology/P17-1004)
3. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. [`paper`](https://aclweb.org/anthology/D18-1514)
4. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. [`paper`](https://www.aclweb.org/anthology/P16-1105)

## 4. Articles

- How to Learn Natural Language Processing (comprehensive edition). [`url`](https://mp.weixin.qq.com/s/lJYp4hUZVsp-Uj-5NqoaYQ)
- Transformers from Scratch. [`url`](http://peterbloem.nl/blog/transformers)
- The Illustrated Transformer. [`url`](https://jalammar.github.io/illustrated-transformer/)
- Attention and Memory in Deep Learning and NLP. [`url`](http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/)
- Modern Deep Learning Techniques Applied to Natural Language Processing. [`url`](https://nlpoverview.com/)
- Unbelievable! LSTM and GRU Have Never Been Explained So Clearly (animations + video). [`url`](https://blog.csdn.net/dqcfkyqdxym3f8rb0/article/details/82922386)
- From Language Model to Seq2Seq: The Transformer Is All About the Mask (see the mask sketch at the end of this page). [`url`](https://spaces.ac.cn/archives/6933)
- Applying word2vec to Recommenders and Advertising. [`url`](http://mccormickml.com/2018/06/15/applying-word2vec-to-recommenders-and-advertising/)
- 2019 NLP Compendium: A Complete Overview of Papers, Blogs, Tutorials, and Engineering Progress. [`url`](https://zhuanlan.zhihu.com/p/108442724)

## 5. Github

* CLUE. [`github`](https://github.com/CLUEbenchmark/CLUE)
* transformers. [`github`](https://github.com/huggingface/transformers)
* HanLP. [`github`](https://github.com/hankcs/HanLP)

## 6. Blog

* [52nlp](http://www.52nlp.cn/)
* [Scientific Spaces / Information Age](https://kexue.fm/category/Big-Data)
* [Liu Jianping (Pinard)](https://www.cnblogs.com/pinard/)
* [Introduction to Deep Learning from Scratch](https://www.zybuluo.com/hanbingtao/note/433855)
* [Jay Alammar](https://jalammar.github.io/)
* [Andrej Karpathy blog](http://karpathy.github.io/)
* [Edwin Chen](http://blog.echen.me/)
* [Distill](https://distill.pub/)
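A closing note on the mask-centric Seq2Seq article in section 4: a causal (look-ahead) mask is what turns bidirectional self-attention into a left-to-right language model. A minimal sketch, using the same 1-means-blocked convention as the attention sketch in section 3 (both conventions are illustrative assumptions):

```python
import tensorflow as tf

def causal_mask(size):
    # Strict upper triangle is 1: position i may not attend to positions j > i.
    return 1.0 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
```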