diff --git a/omnidata/omnidata-hive-connector/README.md b/omnidata/omnidata-hive-connector/README.md
index c6a9246d0984e400ffb3200111b6504d52887e8d..956a99ba40f9a7153f701105f49845db6057fdd2 100644
--- a/omnidata/omnidata-hive-connector/README.md
+++ b/omnidata/omnidata-hive-connector/README.md
@@ -2,28 +2,25 @@

-Introduction
-============
+## Introduction

-The omnidata hive connector library running on Kunpeng processors is a Hive SQL plugin that pushes computing-side operators to storage nodes for computing. It is developed based on original APIs of Apache [Hive 3.1.0](https://github.com/apache/hive/tree/rel/release-3.1.0). This library applies to the big data storage separation scenario or large-scale fusion scenario where a large number of compute nodes read data from remote nodes. In this scenario, a large amount of raw data is transferred from storage nodes to compute nodes over the network for processing, resulting in a low proportion of valid data and a huge waste of network bandwidth. You can find the latest documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.
+The omnidata hive connector library, which runs on Kunpeng processors, is a Hive SQL plugin that pushes computing-side operators down to storage nodes for execution. It is developed based on the original APIs of Apache [Hive 3.1.0](https://github.com/apache/hive/tree/rel/release-3.1.0).
+This library applies to big data storage-compute separation scenarios and large-scale converged scenarios in which a large number of compute nodes read data from remote nodes. In such scenarios, massive amounts of raw data are transferred from storage nodes to compute nodes over the network for processing, resulting in a low proportion of valid data and a huge waste of network bandwidth.
+You can find the latest documentation, including a programming guide, on the project web page. This README contains only basic setup instructions.
+## Building and Packaging

-Building And Packageing
-====================
+(1) Run the build command in the omnidata-hive-connector project directory:

-(1) Build the project under the "omnidata-hive-connector" directory:
+`mvn clean package`

-    mvn clean package
+(2) Obtain the generated jar file from the omnidata-hive-connector/connector/target directory.

-(2) Obtain the jar under the "omnidata-hive-connector/connector/target" directory.
+## Contribution Guidelines

-Contribution Guidelines
-========
+Track bugs and feature requests via GitHub [issues](https://github.com/kunpengcompute/omnidata-hive-connector/issues).

-Track the bugs and feature requests via GitHub [issues](https://github.com/kunpengcompute/omnidata-hive-connector/issues).
+## More Information

-More Information
-========
-
-For further assistance, send an email to kunpengcompute@huawei.com.
+For further assistance, send an email to kunpengcompute@huawei.com.
diff --git a/omnidata/omnidata-openlookeng-connector/README.md b/omnidata/omnidata-openlookeng-connector/README.md
index 7f283f8c7aa8e4d4f8c4978003a6f7c05b7becab..a031ad55aa8263207653811ce8dc3249983f6467 100644
--- a/omnidata/omnidata-openlookeng-connector/README.md
+++ b/omnidata/omnidata-openlookeng-connector/README.md
@@ -1,31 +1,34 @@
 # OmniData Connector

-## Overview
+## Overview

-OmniData Connector is a data source connector developed for openLooKeng.
+OmniData Connector is a data source connector developed for openLooKeng.

-The OmniData connector allows querying data sources where OmniData Server is deployed. It pushes down some operators such as filter to the OmniData service close to the storage to improve the performance of storage-computing-separated system.
+The OmniData Connector allows querying data sources where an OmniData Server is deployed. It pushes operators such as filters down to the OmniData service close to the storage layer, improving the performance of storage-compute separated systems.
+## Building OmniData Connector

-## Building OmniData Connector
+OmniData Connector is developed under the openLooKeng architecture. You first need to build openLooKeng as a non-root user.

-1. OmniData Connector is developed under the architecture of openLooKeng. You need to build openLooKeng first as a non-root user.
-2. Simply run the following command from the project root directory:
-`mvn clean install -Dos.detected.arch="aarch64"`
-Then you will find omnidata-openlookeng-connector-*.zip in the omnidata-openlookeng-connector/connector/target/ directory.
-OmniData Connector has a comprehensive set of unit tests that can take several minutes to run. You can disable the tests during building:
-`mvn clean install -DskipTests -Dos.detected.arch="aarch64"`
+Run the following command from the project root directory:

-## Deploying OmniData Connector

+`mvn clean install -Dos.detected.arch="aarch64"`

-1. Unzip omnidata-openlookeng-connector-*.zip to the plugin directory of openLooKeng.
-2. Obtain the latest OmniData software package, and replace the boostkit-omnidata-server-\*.jar in the omnidata-openlookeng-connector-\* directory.
-3. Set "connector.name=omnidata-openlookeng" in the openLooKeng catalog properties file.
+After the build completes, you will find omnidata-openlookeng-connector-*.zip in the omnidata-openlookeng-connector/connector/target/ directory.
+OmniData Connector has a comprehensive set of unit tests that can take several minutes to run. You can disable the tests during the build:

-## Contribution Guidelines

+`mvn clean install -DskipTests -Dos.detected.arch="aarch64"`

-Track the bugs and feature requests via GitHub issues.

-## More Information

+## Deploying OmniData Connector

-For further assistance, send an email to kunpengcompute@huawei.com.

+1. Unzip omnidata-openlookeng-connector-*.zip to the plugin directory of openLooKeng.
+2. Obtain the latest OmniData software package, and use its boostkit-omnidata-server-\*.jar to replace the file of the same name in the unzipped omnidata-openlookeng-connector-\* directory.
+3. Set connector.name=omnidata-openlookeng in the openLooKeng catalog properties file (a sample catalog file is sketched after this patch).
+## Contribution Guidelines
+
+Track bugs and feature requests via GitHub issues.
+
+## More Information
+
+For further assistance, send an email to kunpengcompute@huawei.com.
diff --git a/omnidata/omnidata-spark-connector/README.md b/omnidata/omnidata-spark-connector/README.md
index de2c8b8c72f7ae75d7346a6306e4dbe70c901bee..250cd9e73a3b10a3ecbc912056398a0b85aa9390 100644
--- a/omnidata/omnidata-spark-connector/README.md
+++ b/omnidata/omnidata-spark-connector/README.md
@@ -2,28 +2,25 @@

-Introduction
-============
+## Introduction

-The omnidata spark connector library running on Kunpeng processors is a Spark SQL plugin that pushes computing-side operators to storage nodes for computing. It is developed based on original APIs of Apache [Spark 3.1.1](https://github.com/apache/spark/tree/v3.1.1). This library applies to the big data storage separation scenario or large-scale fusion scenario where a large number of compute nodes read data from remote nodes. In this scenario, a large amount of raw data is transferred from storage nodes to compute nodes over the network for processing, resulting in a low proportion of valid data and a huge waste of network bandwidth. You can find the latest documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.
+The omnidata spark connector library, which runs on Kunpeng processors, is a Spark SQL plugin that pushes computing-side operators down to storage nodes for execution. It is developed based on the original APIs of Apache [Spark 3.1.1](https://github.com/apache/spark/tree/v3.1.1).
+This library applies to big data storage-compute separation scenarios and large-scale converged scenarios in which a large number of compute nodes read data from remote nodes. In such scenarios, massive amounts of raw data are transferred from storage nodes to compute nodes over the network for processing, resulting in a low proportion of valid data and a huge waste of network bandwidth.

-Building And Packageing
-====================
+You can find the latest documentation, including a programming guide, on the project web page. This README contains only basic setup instructions.

-(1) Build the project under the "omnidata-spark-connector" directory:
+## Building and Packaging
+
+(1) Run the build command in the omnidata-spark-connector project directory:

     mvn clean package

-(2) Obtain the jar under the "omnidata-spark-connector/connector/target" directory.
+(2) Obtain the generated jar file from the omnidata-spark-connector/connector/target directory.

-Contribution Guidelines
-========
+## Contribution Guidelines

-Track the bugs and feature requests via GitHub [issues](https://github.com/kunpengcompute/omnidata-spark-connector/issues).
+Track bugs and feature requests via GitHub [issues](https://github.com/kunpengcompute/omnidata-spark-connector/issues).

-More Information
-========
+## More Information

-For further assistance, send an email to kunpengcompute@huawei.com.
+For further assistance, send an email to kunpengcompute@huawei.com.
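
For reference, the following is a minimal sketch of the catalog properties file mentioned in step 3 of the openLooKeng deployment section above. Only connector.name=omnidata-openlookeng comes from the README; the file path etc/catalog/omnidata.properties and the metastore property are assumptions based on the usual openLooKeng catalog layout, so the exact properties your deployment needs may differ.

```properties
# etc/catalog/omnidata.properties -- hypothetical file name; openLooKeng loads
# one catalog per .properties file placed under its etc/catalog directory.

# Select the OmniData connector plugin unzipped into the plugin directory
# (this value is taken from deployment step 3 of the README).
connector.name=omnidata-openlookeng

# Assumption: the connector queries Hive-managed data, so a Hive metastore URI
# is likely required, as for the stock hive connector; use your own host/port.
hive.metastore.uri=thrift://metastore-host:9083
```

With this file in place, restarting the openLooKeng server should expose the data source as a catalog named after the file (omnidata in this sketch), assuming the plugin from the deployment steps is installed.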