# nlp-journey

> Your Journey to NLP Starts Here!

NLP-related papers and code, covering topic models, word embeddings, named entity recognition (NER), text classification, text generation, text similarity, and more, implemented with Keras and TensorFlow. Primary language: Python; default branch: `master`.

### Basics

* [Data Structures and Algorithms](docs/alg.md)
* [Fundamentals](docs/basic.md)
* [FAQ](docs/fq.md)
* [Practice Notes](docs/notes.md)

### Classic Books ([`Baidu Cloud`](https://pan.baidu.com/s/1sE_20nHCfej6f9yRaisz7Q), access code: b5qq)

#### Introductory Algorithms

* The Joy of Algorithms (算法的乐趣). [`book site`](http://www.ituring.com.cn/book/1605)

#### Deep Learning

* Deep Learning. Essential reading for deep learning. [`book site`](https://www.deeplearningbook.org/)
* Neural Networks and Deep Learning. Essential introductory reading. [`book site`](http://neuralnetworksanddeeplearning.com/)
* Neural Networks and Deep Learning (《神经网络与深度学习》), Prof. Xipeng Qiu, Fudan University. [`book site`](https://nndl.github.io/)

#### Natural Language Processing

* Speech and Language Processing (3rd ed. draft), Stanford. Essential NLP reading. [`book site`](http://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)
* CS224d: Deep Learning for Natural Language Processing. [`course site`](http://cs224d.stanford.edu/)

### Must-Read Papers

#### Models and Optimization

* LSTM (Long Short-Term Memory). [`link`](http://www.bioinf.jku.at/publications/older/2604.pdf)
* Dropout (Improving Neural Networks by Preventing Co-adaptation of Feature Detectors). [`link`](https://arxiv.org/pdf/1207.0580.pdf)
* Residual Networks (Deep Residual Learning for Image Recognition). [`link`](https://arxiv.org/pdf/1512.03385.pdf)
* Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [`link`](https://arxiv.org/pdf/1502.03167.pdf)

#### Surveys

* Analysis Methods in Neural Language Processing: A Survey. [`link`](https://arxiv.org/pdf/1812.08951.pdf)
* Neural Text Generation: Past, Present and Beyond. [`link`](https://arxiv.org/pdf/1803.07133.pdf)

#### Language Models

* A Neural Probabilistic Language Model. [`link`](https://www.researchgate.net/publication/221618573_A_Neural_Probabilistic_Language_Model)
* Language Models are Unsupervised Multitask Learners. [`link`](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)

#### Text Augmentation

* EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. [`link`](https://arxiv.org/pdf/1901.11196.pdf)

#### Text Pre-training

* Efficient Estimation of Word Representations in Vector Space (a minimal gensim training sketch follows this list). [`link`](https://arxiv.org/pdf/1301.3781.pdf)
* Distributed Representations of Sentences and Documents. [`link`](https://arxiv.org/pdf/1405.4053.pdf)
* Enriching Word Vectors with Subword Information. [`link`](https://arxiv.org/pdf/1607.04606.pdf). [`explainer`](https://www.sohu.com/a/114464910_465975)
* GloVe: Global Vectors for Word Representation. [`official site`](https://nlp.stanford.edu/projects/glove/)
* ELMo (Deep Contextualized Word Representations). [`link`](https://arxiv.org/pdf/1802.05365.pdf)
* BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [`link`](https://arxiv.org/pdf/1810.04805.pdf)
* Pre-Training with Whole Word Masking for Chinese BERT. [`link`](https://arxiv.org/pdf/1906.08101.pdf)
* XLNet: Generalized Autoregressive Pretraining for Language Understanding. [`link`](https://arxiv.org/pdf/1906.08237.pdf)
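As a small companion to the word-embedding papers above (and to the `gensim(word2vec)` item under "Implemented Algorithms" below), here is a minimal skip-gram training sketch. It is illustrative only: the toy corpus and hyperparameter values are placeholders rather than this repository's settings, and gensim ≥ 4 is assumed (older versions use `size` instead of `vector_size`).

```python
# Minimal skip-gram training sketch with gensim (illustrative only;
# the corpus and hyperparameters are placeholders, not this repo's settings).
from gensim.models import Word2Vec

# A toy tokenized corpus; real usage streams pre-tokenized sentences.
sentences = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "distributional", "semantics"],
    ["skip", "gram", "predicts", "context", "words"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimension
    window=5,         # context window size
    min_count=1,      # keep rare words in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=50,
)

# Query the learned vectors.
print(model.wv["language"].shape)              # (100,)
print(model.wv.most_similar("language", topn=3))
```

Setting `sg=0` instead trains the CBOW variant, mirroring the skipgram+cbow pair listed under the fastText implementation below.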
#### Text Classification

* A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. [`link`](https://arxiv.org/pdf/1510.03820.pdf)
* Convolutional Neural Networks for Sentence Classification (a minimal Keras sketch of this architecture follows the papers list below). [`link`](https://arxiv.org/pdf/1408.5882.pdf)
* Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. [`link`](http://www.aclweb.org/anthology/P16-2034)

#### Text Generation

* A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. [`link`](https://arxiv.org/pdf/1805.06553.pdf)
* SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. [`link`](https://arxiv.org/pdf/1609.05473.pdf)
* Generative Adversarial Text to Image Synthesis. [`link`](https://arxiv.org/pdf/1605.05396.pdf)

#### Text Similarity

* Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. [`link`](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.723.6492&rep=rep1&type=pdf)
* Learning Text Similarity with Siamese Recurrent Networks. [`link`](https://www.aclweb.org/anthology/W16-1617)

#### Short-Text Matching

* A Deep Architecture for Matching Short Texts. [`link`](http://papers.nips.cc/paper/5019-a-deep-architecture-for-matching-short-texts.pdf)

#### Question Answering

* A Question-Focused Multi-Factor Attention Network for Question Answering. [`link`](https://arxiv.org/pdf/1801.08290.pdf)
* The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. [`link`](https://arxiv.org/pdf/1812.08989.pdf)
* A Knowledge-Grounded Neural Conversation Model. [`link`](https://arxiv.org/pdf/1702.01932.pdf)
* Neural Generative Question Answering. [`link`](https://arxiv.org/pdf/1512.01337v1.pdf)
* Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. [`link`](https://arxiv.org/abs/1612.01627)
* Modeling Multi-turn Conversation with Deep Utterance Aggregation. [`link`](https://arxiv.org/pdf/1806.09102.pdf)
* Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. [`link`](https://www.aclweb.org/anthology/P18-1103)

#### Machine Translation

* Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. [`link`](https://arxiv.org/pdf/1406.1078v3.pdf)
* Transformer (Attention Is All You Need). [`link`](https://arxiv.org/pdf/1706.03762.pdf)
* Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. [`link`](https://arxiv.org/pdf/1901.02860.pdf)

#### Summarization

* Get To The Point: Summarization with Pointer-Generator Networks. [`link`](https://arxiv.org/pdf/1704.04368.pdf)

#### Event Extraction

* Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks. [`link`](https://pdfs.semanticscholar.org/ca70/480f908ec60438e91a914c1075b9954e7834.pdf)

#### Recommender Systems

* Behavior Sequence Transformer for E-commerce Recommendation in Alibaba. [`link`](https://arxiv.org/pdf/1905.06874.pdf)
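To ground the convolutional sentence-classification papers in the Text Classification list above, here is a minimal Keras sketch of that architecture: parallel convolutions with several filter widths over embedded tokens, max-over-time pooling, concatenation, and a softmax classifier. All sizes (vocabulary, sequence length, filter settings, class count) are placeholders, not the configuration of this repository's `textcnn` implementation.

```python
# Minimal TextCNN sketch in Keras: parallel convolutions with several
# filter widths, max-over-time pooling, concatenation, then softmax.
# All sizes below are illustrative placeholders.
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # placeholder vocabulary size
MAX_LEN = 100        # placeholder sequence length
EMBED_DIM = 128      # placeholder embedding dimension
NUM_CLASSES = 2      # placeholder number of labels

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# One branch per filter width; each applies a convolution followed by
# max-over-time pooling, as in the sentence-classification papers above.
pooled = []
for width in (3, 4, 5):
    conv = layers.Conv1D(filters=100, kernel_size=width, activation="relu")(x)
    pooled.append(layers.GlobalMaxPooling1D()(conv))

merged = layers.Concatenate()(pooled)
merged = layers.Dropout(0.5)(merged)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The filter widths (3, 4, 5) with 100 filters each follow the common baseline explored in the sensitivity-analysis paper; in practice they are tuned per dataset.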
### Must-Read Blog Posts

* The Illustrated Transformer. [`post`](https://jalammar.github.io/illustrated-transformer/)
* Attention and Memory in Deep Learning and NLP. [`link`](http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/)
* Kullback-Leibler Divergence Explained. [`link`](https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained)
* Building Autoencoders in Keras. [`link`](https://blog.keras.io/building-autoencoders-in-keras.html)
* Modern Deep Learning Techniques Applied to Natural Language Processing. [`link`](https://nlpoverview.com/)
* Node2vec embeddings for graph data. [`link`](https://towardsdatascience.com/node2vec-embeddings-for-graph-data-32a866340fef)
* BERT explained. [`link`](https://www.cnblogs.com/rucwxb/p/10277217.html) [`link`](https://zhuanlan.zhihu.com/p/49271699)
* XLNet: how it works and how it differs from BERT. [`link`](https://zhuanlan.zhihu.com/p/70257427)
* LSTM and GRU explained with unprecedented clarity (animations + video). [`link`](https://mp.weixin.qq.com/s?__biz=MzI4MDYzNzg4Mw==&mid=2247488287&idx=2&sn=aa7b045337940886d5a7767f95ab0128&chksm=ebb42bcbdcc3a2ddcfb73fb77bead9655d6608b1a951a8b429fb2c38d56ca92289e97e6decd1&mpshare=1&scene=24&srcid=0930GzGGm3m7uZfJyblgWV3k&key=5b1b221b044835abb8ce952ed69e6acdfe5f30700caa3c560c8fe663354916c6753858e4dbbf1b4d1c2eded3876c67c0983d3d51324c321458405b0cacec9103640c28a7a5c068729172703bf23c0348&ascene=14&uin=Mjk3NzQ2NDczMQ%3D%3D&devicetype=Windows+10&version=62060833&lang=zh_CN&pass_ticket=uQSzwn38HjOIK%2BZwFf5AXCp%2Fk0QiE7budc%2Bl5t1yBFtOXA%2BPvSaFwqUWEwEmyZEd)

### Implemented Algorithms

* [Word embeddings](nlp/embedding/) (see the gensim training sketch earlier in this README)
  - [x] fastText (skip-gram + CBOW)
  - [x] gensim (word2vec)
* [Data augmentation](nlp/augmentation/)
  - [x] EDA (a minimal sketch appears at the end of this README)
* [Classification](nlp/classification/)
  - [x] SVM
  - [x] fastText
  - [x] TextCNN
  - [x] BiLSTM + attention
  - [x] RCNN
  - [x] HAN
  - [x] BERT
* [NER](nlp/ner/)
  - [x] BiLSTM + CRF
* [Text similarity](nlp/similarity/)
  - [x] siamese network

### Related GitHub Projects

* keras-gpt-2. [`link`](https://github.com/CyberZHG/keras-gpt-2)
* textClassifier. [`link`](https://github.com/jiangxinyang227/textClassifier)
* attention-is-all-you-need-keras. [`link`](https://github.com/Lsdefine/attention-is-all-you-need-keras)
* BERT_with_keras. [`link`](https://github.com/miroozyx/BERT_with_keras)
* keras-bert. [`link`](https://github.com/CyberZHG/keras-bert)
* ELMo-keras. [`link`](https://github.com/iliaschalkidis/ELMo-keras)
* SeqGAN. [`link`](https://github.com/tyo-yo/SeqGAN)

### Related Blogs

* [52nlp](http://www.52nlp.cn/)
* [Scientific Spaces (科学空间)](https://kexue.fm/category/Big-Data)
* [刘建平Pinard](https://www.cnblogs.com/pinard/)
* [莫坠青云志](https://tobiaslee.top/)
* [Keras source-code analysis (彗双智能)](http://wangbn.blogspot.com/)
* [机器之心 (Synced)](https://www.jiqizhixin.com/)
* [colah](https://colah.github.io/)
* [ZHPMATRIX](https://zhpmatrix.github.io/)
* [wildml](http://www.wildml.com/)
* [徐阿衡](http://www.shuang0420.com/)
* [Deep Learning from Scratch (零基础入门深度学习)](https://www.zybuluo.com/hanbingtao/note/433855)

### Related Conferences

* Association for Computational Linguistics. [ACL](https://www.aclweb.org/portal/)
* Empirical Methods in Natural Language Processing. [EMNLP]()
* International Conference on Computational Linguistics. [COLING](https://www.sheffield.ac.uk/dcs/research/groups/nlp/iccl/index#tab00)
* Neural Information Processing Systems. [NIPS](https://nips.cc/)
* AAAI Conference on Artificial Intelligence. [AAAI](https://www.aaai.org/)
* International Joint Conference on Artificial Intelligence. [IJCAI](https://www.ijcai.org/)
* International Conference on Machine Learning. [ICML](https://icml.cc/)
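Finally, as referenced from the `eda` item under "Implemented Algorithms" above, a minimal sketch of two of the four EDA operations from Wei & Zou (2019): random swap and random deletion. Synonym replacement and random insertion are omitted here because they require a synonym resource (e.g. WordNet or a Chinese synonym dictionary); the rates below are placeholders and this is not the repository's actual `eda` implementation.

```python
# Minimal sketch of two EDA operations: random swap and random deletion.
# Rates and the example sentence are placeholders.
import random

def random_swap(words, n=1):
    """Swap the words at two random positions, repeated n times.

    Assumes the sentence has at least two tokens."""
    words = words[:]  # work on a copy
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """Drop each word with probability p, always keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

if __name__ == "__main__":
    tokens = "text augmentation boosts small training sets".split()
    print(random_swap(tokens, n=2))
    print(random_deletion(tokens, p=0.2))
```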