diff --git a/tutorials/inference/source_en/nlp_tprr.md b/tutorials/inference/source_en/nlp_tprr.md
index 2924a64964f688adead8a0f21b9eee8c026023db..851beeef243f1805240f6626c0cda425db784453 100644
--- a/tutorials/inference/source_en/nlp_tprr.md
+++ b/tutorials/inference/source_en/nlp_tprr.md
@@ -1,5 +1,261 @@
-# Multi-hop Knowledge Reasoning Question-answering Model TPRR
+# Multi-hop Knowledge Inference Question-answering Model TPRR
-No English version right now, welcome to contribute.
+`Linux` `Ascend` `Model Development` `Expert`
-
+
+
+- [Multi-hop Knowledge Inference Question-answering Model TPRR](#multi-hop-knowledge-inference-question-answering-model-tprr)
+ - [Overview](#overview)
+ - [Preparation](#preparation)
+ - [Software Setup Dependencies](#software-setup-dependencies)
+ - [Data Preparation](#data-preparation)
+ - [Loading Data](#loading-data)
+ - [Defining the Network](#defining-the-network)
+ - [Configuring Parameters of the Model](#configuring-parameters-of-the-model)
+ - [Defining the Model](#defining-the-model)
+ - [Inference Network](#inference-network)
+ - [Executing the Script](#executing-the-script)
+ - [Reference](#reference)
+
+
+
+
+## Overview
+
+To address multi-hop knowledge inference, HUAWEI proposed the Thinking Path Re-Ranker (TPRR), a general model for open-domain multi-hop question answering. In typical question answering, a model only needs to find sentences relevant to the question in the source document; in multi-hop question answering, however, the answer is reached through multiple ‘hops’: given a question, the model must reason across multiple relevant documents to derive the answer. The TPRR model consists of three modules: Retriever, Reranker, and Reader. The Retriever searches millions of wiki documents for a list of candidate documents that may contain the answer, the Reranker selects the optimal documents from those candidates, and the Reader parses the answer from multiple sentences of the optimal documents, completing the multi-hop knowledge inference. TPRR uses conditional probability to model the complete reasoning path, and a ‘thinking’ strategy for selecting negative samples is introduced during training. TPRR ranked first in the fullwiki setting of HotpotQA, an authoritative international multi-hop question answering evaluation, achieving the top joint accuracy, clue accuracy, and other metrics. Compared with typical multi-hop question answering models, TPRR needs only plain text and requires no extra entity-extraction techniques. In addition, the mixed precision provided by MindSpore accelerates the model, and its performance is greatly improved on Ascend hardware.
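+
+The path-scoring idea can be written compactly as follows (a sketch based on the description above; the notation is ours, not taken from the model code):
+
+$$P(d_1, d_2, \dots, d_k \mid q) = \prod_{t=1}^{k} P(d_t \mid q, d_1, \dots, d_{t-1})$$
+
+where $q$ is the question and $d_t$ is the document selected at hop $t$; each conditional factor is scored by the corresponding hop model (the one-hop and two-hop retriever networks below).
+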
+> You can download the complete sample code from the following link: .
+
+```shell
+.
+└─tprr
+ ├─README.md
+ ├─scripts
+  | ├─run_eval_ascend.sh # Launch retriever evaluation on Ascend
+  | └─run_eval_ascend_reranker_reader.sh # Launch re-ranker and reader evaluation on Ascend
+ |
+ ├─src
+  | ├─build_reranker_data.py # Build data for the re-ranker from retriever results
+  | ├─config.py # Evaluation configurations for retriever
+  | ├─hotpot_evaluate_v1.py # HotpotQA evaluation script
+ | ├─onehop.py # Onehop model of retriever
+ | ├─onehop_bert.py # Onehop bert model of retriever
+ | ├─process_data.py # Data preprocessing for retriever
+ | ├─reader.py # Reader model
+ | ├─reader_albert_xxlarge.py # Albert-xxlarge module of reader model
+ | ├─reader_downstream.py # Downstream module of reader model
+ | ├─reader_eval.py # Reader evaluation script
+ | ├─rerank_albert_xxlarge.py # Albert-xxlarge module of re-ranker model
+ | ├─rerank_and_reader_data_generator.py # Data generator for re-ranker and reader
+ | ├─rerank_and_reader_utils.py # Utils for re-ranker and reader
+ | ├─rerank_downstream.py # Downstream module of re-ranker model
+ | ├─reranker.py # Re-ranker model
+ | ├─reranker_eval.py # Re-ranker evaluation script
+ | ├─twohop.py # Twohop model of retriever
+ | ├─twohop_bert.py # Twohop bert model of retriever
+ | └─utils.py # Utils for retriever
+ |
+ ├─retriever_eval.py # Evaluation net for retriever
+ └─reranker_and_reader_eval.py # Evaluation net for re-ranker and reader
+```
+
+The overall execution process is as follows:
+
+1. Prepare the HotpotQA development dataset, then load and process the data.
+2. Configure the TPRR model parameters.
+3. Initialize the TPRR model.
+4. Load the dataset and the CheckPoint files, perform inference, and view and save the results.
+
+## Preparation
+
+### Software Setup Dependencies
+
+1. Install MindSpore.
+
+    Before practicing, ensure that MindSpore has been installed correctly. If not, install it by referring to the [MindSpore Installation](https://www.mindspore.cn/install/en) page.
+
+2. Install the transformers package:
+
+ ```shell
+ pip install transformers
+ ```
+
+### Data Preparation
+
+This tutorial uses the pre-processed [en-Wikipedia](https://github.com/AkariAsai/learning_to_retrieve_reasoning_paths/tree/master/retriever) data and the [HotpotQA development dataset](https://hotpotqa.github.io/). Please download the [pre-processed data](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/tprr/data.zip) in advance.
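+
+If you prefer to script the download, a minimal sketch is shown below; it assumes the archive is unpacked into the directory where the later steps expect the data files (the URL is the one given above):
+
+```python
+# Minimal sketch: fetch and unpack the pre-processed data.
+import urllib.request
+import zipfile
+
+URL = ('https://obs.dualstack.cn-north-4.myhuaweicloud.com/'
+       'mindspore-website/notebook/tprr/data.zip')
+urllib.request.urlretrieve(URL, 'data.zip')
+with zipfile.ZipFile('data.zip') as zf:
+    zf.extractall('.')  # unpack next to the scripts that will read it
+```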
+
+## Loading Data
+
+Download the data to the `scripts` directory. The Retriever module loads the pre-processed wiki and HotpotQA data files and, given the multi-hop question, searches the source documents for relevant documents. The data loading code is in the `src/process_data.py` script.
+
+```python
+# imports at the top of process_data.py
+import json
+import pickle as pkl
+
+def load_data(self):
+    """load data"""
+ print('********************** loading data ********************** ')
+ # wiki data
+ f_wiki = open(self.wiki_path, 'rb')
+ # hotpotqa dev data
+ f_train = open(self.dev_path, 'rb')
+ # doc data
+ f_doc = open(self.dev_data_path, 'rb')
+ data_db = pkl.load(f_wiki, encoding="gbk")
+ dev_data = json.load(f_train)
+ q_doc_text = pkl.load(f_doc, encoding='gbk')
+ return data_db, dev_data, q_doc_text
+```
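+
+As a standalone illustration, the same three files can be opened directly using the default paths from the retriever configuration shown later (a sketch, not code from the repository):
+
+```python
+import json
+import pickle as pkl
+
+# Default paths taken from ThinkRetrieverConfig below.
+with open('../db_docs_bidirection_new.pkl', 'rb') as f_wiki:
+    data_db = pkl.load(f_wiki, encoding='gbk')     # wiki documents
+with open('../hotpot_dev_fullwiki_v1_for_retriever.json', 'rb') as f_train:
+    dev_data = json.load(f_train)                  # HotpotQA dev questions
+with open('../dev_tf_idf_data_raw.pkl', 'rb') as f_doc:
+    q_doc_text = pkl.load(f_doc, encoding='gbk')   # document text for each question
+```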
+
+The retrieval result of the Retriever module is stored in the `scripts` directory. Based on this result, the Reranker module uses a customized `DataGenerator` class to load the pre-processed wiki and HotpotQA data files and produce a re-ranked document list, which is also stored in the `scripts` directory. Based on the re-ranked result, the Reader module uses the same customized `DataGenerator` class to load the data files and extract the answers and evidence. The source code of the customized `DataGenerator` class is in the `src/rerank_and_reader_data_generator.py` script.
+
+```python
+class DataGenerator:
+ """data generator for reranker and reader"""
+ def __init__(self, feature_file_path, example_file_path, batch_size, seq_len,
+ para_limit=None, sent_limit=None, task_type=None):
+ """init function"""
+ self.example_ptr = 0
+ self.bsz = batch_size
+ self.seq_length = seq_len
+        self.para_limit = para_limit    # maximum number of paragraphs per example
+        self.sent_limit = sent_limit    # maximum number of sentences per example
+        self.task_type = task_type      # which task the generator serves (re-ranker or reader)
+ self.feature_file_path = feature_file_path
+ self.example_file_path = example_file_path
+ self.features = self.load_features()
+ self.examples = self.load_examples()
+ self.feature_dict = self.get_feature_dict()
+ self.example_dict = self.get_example_dict()
+ self.features = self.padding_feature(self.features, self.bsz)
+```
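+
+A minimal usage sketch follows; the file names and parameter values here are placeholders chosen for illustration (the real feature/example files are produced by `build_reranker_data.py` from the retriever output), not fixed names from the repository:
+
+```python
+generator = DataGenerator(feature_file_path='../reranker_feature_file.pkl.gz',
+                          example_file_path='../reranker_example_file.pkl.gz',
+                          batch_size=32, seq_len=512,
+                          para_limit=2, sent_limit=40, task_type='reranker')
+```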
+
+## Defining the Network
+
+### Configuring Parameters of the Model
+
+Users can configure model parameters such as `topk` and `onehop_num`. `topk` is the number of top-ranked first-hop documents kept after the Retriever module sorts its candidates: a larger `topk` keeps more candidate documents, which increases recall but introduces more noise and lowers precision. `onehop_num` is the number of first-hop candidates passed on as inputs to the second hop: likewise, a larger `onehop_num` produces more second-hop candidate documents, increasing recall at the cost of more noise and lower precision.
+
+```python
+import argparse
+
+def ThinkRetrieverConfig():
+    """retriever config"""
+    parser = argparse.ArgumentParser()
+ parser.add_argument("--q_len", type=int, default=64, help="max query len")
+ parser.add_argument("--d_len", type=int, default=192, help="max doc len")
+ parser.add_argument("--s_len", type=int, default=448, help="max seq len")
+ parser.add_argument("--in_len", type=int, default=768, help="in len")
+ parser.add_argument("--out_len", type=int, default=1, help="out len")
+ parser.add_argument("--num_docs", type=int, default=500, help="docs num")
+ parser.add_argument("--topk", type=int, default=8, help="top num")
+ parser.add_argument("--onehop_num", type=int, default=8, help="onehop num")
+ parser.add_argument("--batch_size", type=int, default=1, help="batch size")
+ parser.add_argument("--device_num", type=int, default=8, help="device num")
+ parser.add_argument("--vocab_path", type=str, default='../vocab.txt', help="vocab path")
+ parser.add_argument("--wiki_path", type=str, default='../db_docs_bidirection_new.pkl', help="wiki path")
+ parser.add_argument("--dev_path", type=str, default='../hotpot_dev_fullwiki_v1_for_retriever.json',
+ help="dev path")
+ parser.add_argument("--dev_data_path", type=str, default='../dev_tf_idf_data_raw.pkl', help="dev data path")
+ parser.add_argument("--onehop_bert_path", type=str, default='../onehop.ckpt', help="onehop bert ckpt path")
+ parser.add_argument("--onehop_mlp_path", type=str, default='../onehop_mlp.ckpt', help="onehop mlp ckpt path")
+ parser.add_argument("--twohop_bert_path", type=str, default='../twohop.ckpt', help="twohop bert ckpt path")
+ parser.add_argument("--twohop_mlp_path", type=str, default='../twohop_mlp.ckpt', help="twohop mlp ckpt path")
+ parser.add_argument("--q_path", type=str, default='../queries', help="queries data path")
+ return parser.parse_args()
+```
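+
+For instance, the configuration can be parsed and the two trade-off knobs discussed above inspected as follows (a minimal sketch; the defaults come from the argument definitions above, and flags such as `--topk 16` passed on the command line would override them):
+
+```python
+config = ThinkRetrieverConfig()
+print('topk =', config.topk)               # 8 by default
+print('onehop_num =', config.onehop_num)   # 8 by default
+```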
+
+### Defining the Model
+
+Define the Retriever module and load the model parameters.
+
+```python
+# load_checkpoint and load_param_into_net are MindSpore serialization APIs; the
+# model classes come from the src modules listed in the directory tree above.
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+def evaluation():
+    """retriever evaluation: build the one-hop and two-hop networks and load their weights"""
+    # `config` holds the ThinkRetrieverConfig() result from the previous section
+    model_onehop_bert = ModelOneHop()
+ param_dict = load_checkpoint(config.onehop_bert_path)
+ load_param_into_net(model_onehop_bert, param_dict)
+ model_twohop_bert = ModelTwoHop()
+ param_dict2 = load_checkpoint(config.twohop_bert_path)
+ load_param_into_net(model_twohop_bert, param_dict2)
+ onehop = OneHopBert(config, model_onehop_bert)
+ twohop = TwoHopBert(config, model_twohop_bert)
+```
+
+Define the Reranker module and load the model parameters.
+
+```python
+ reranker = Reranker(batch_size=batch_size,
+ encoder_ck_file=encoder_ck_file,
+ downstream_ck_file=downstream_ck_file)
+```
+
+Define the Reader module and load the model parameters.
+
+```python
+ reader = Reader(batch_size=batch_size,
+ encoder_ck_file=encoder_ck_file,
+ downstream_ck_file=downstream_ck_file)
+```
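+
+In the two snippets above, `batch_size`, `encoder_ck_file`, and `downstream_ck_file` are assumed to be set by the surrounding evaluation script (see `reranker_and_reader_eval.py`): the encoder checkpoint holds the ALBERT-xxlarge weights and the downstream checkpoint holds the task-specific head of each module.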
+
+## Inference Network
+
+### Executing the Script
+
+Call the shell scripts under the `scripts` directory to start inference, using the following commands:
+
+```shell
+sh run_eval_ascend.sh
+sh run_eval_ascend_reranker_reader.sh
+```
+
+After inference, the results are saved to log files in the `scripts/eval/` directory, where you can view the evaluation results.
+
+In the evaluation result of the Retriever module, `val` indicates the number of questions answered correctly, `count` indicates the total number of questions, and `PEM` indicates the exact-match precision of the top-8 documents after the candidate documents of relevant questions are sorted.
+
+```text
+# match query num
+val:6959
+# query num
+count:7404
+# one hop match query num
+true count: 7112
+# top8 paragraph exact match
+PEM: 0.9398973527822798
+# top8 paragraph exact match in recall
+true top8 PEM: 0.9784870641169854
+# evaluation time
+evaluation time (h): 1.819070938428243
+```
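+
+Note that these numbers are consistent with one another: `PEM` = `val` / `count` = 6959 / 7404 ≈ 0.9399, while `true top8 PEM` = `val` / `true count` = 6959 / 7112 ≈ 0.9785, i.e. the precision computed only over the questions whose first hop was recalled correctly.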
+
+In the evaluation result of the Reranker and Reader modules, `total_top1_pem` indicates the exact-match precision of the top-1 path after re-ranking, `joint_em` indicates the joint exact-match rate of the predicted answers and evidence, and `joint_f1` indicates the corresponding joint F1 score.
+
+```text
+# top8 paragraph exact match
+total top1 pem: 0.8803511141120864
+...
+
+# answer exact match
+em: 0.67440918298447
+# answer f1
+f1: 0.8025625656569652
+# answer precision
+prec: 0.8292800393689271
+# answer recall
+recall: 0.8136908451841731
+# supporting facts exact match
+sp_em: 0.6009453072248481
+# supporting facts f1
+sp_f1: 0.844555664157302
+# supporting facts precision
+sp_prec: 0.8640844345841021
+# supporting facts recall
+sp_recall: 0.8446123918845106
+# joint exact match
+joint_em: 0.4537474679270763
+# joint f1
+joint_f1: 0.715119580346802
+# joint precision
+joint_prec: 0.7540052057184267
+# joint recall
+joint_recall: 0.7250240424067661
+```
+
+## Reference
+
+1. Yang Z, Qi P, Zhang S, et al. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018.
+2. Asai A, Hashimoto K, Hajishirzi H, et al. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering[J]. 2019.