From 4ddcd118a9e1278863946a19d306422d42eddd17 Mon Sep 17 00:00:00 2001
From: chenhaozhe
Date: Mon, 29 Jun 2020 20:13:51 +0800
Subject: [PATCH] update docs/source_en/benchmark.md.

---
 docs/source_en/benchmark.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source_en/benchmark.md b/docs/source_en/benchmark.md
index 6c541a6755..8c75c4503b 100644
--- a/docs/source_en/benchmark.md
+++ b/docs/source_en/benchmark.md
@@ -22,8 +22,8 @@ For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee
 
 | Network | Network Type | Dataset | MindSpore Version | Resource                 | Precision | Batch Size | Throughput | Speedup |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 <br> CPU:24 Cores | Mixed | 96 | 210 sentences/sec | - |
-| | | | | Ascend: 8 * Ascend 910 <br> CPU:192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
+| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 <br> CPU:24 Cores | Mixed | 96 | 269 sentences/sec | - |
+| | | | | Ascend: 8 * Ascend 910 <br> CPU:192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
 
 1. The preceding performance is obtained based on ModelArts, the HUAWEI CLOUD AI development platform. The network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21128 tokens.
 2. For details about other open source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
-- 
Gitee
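
A note on the unchanged Speedup column: the patch is not defining it, but the value 0.96 is consistent with reading it as per-device scaling efficiency, i.e. eight-card throughput divided by eight times the single-card throughput. A minimal sketch of that arithmetic, assuming this definition (the variable names are illustrative, not from the document):

```python
# Hypothetical check of the Speedup column, assuming it means
# multi-device throughput / (single-device throughput * device count).
single_card = 269   # sentences/sec on 1 x Ascend 910 (updated value)
eight_card = 2069   # sentences/sec on 8 x Ascend 910 (updated value)
num_devices = 8

scaling_efficiency = eight_card / (single_card * num_devices)
print(f"{scaling_efficiency:.2f}")  # -> 0.96, matching the table
```

The same check against the pre-patch numbers (1613 / (210 * 8) ≈ 0.96) gives the same result, which is consistent with the patch updating only the Throughput cells and leaving Speedup untouched.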