diff --git a/.gitignore b/.gitignore index a6920db3b63eb9ebb4fa8a890edaf00baed6d52e..27df998c15fffdcc693b850abdd79c3167e1f9d7 100644 --- a/.gitignore +++ b/.gitignore @@ -6,3 +6,7 @@ /*/mvnw /*/mvnw.cmd /*/logs +/result +/result_bak +/statistics +/*.iml diff --git a/License b/License new file mode 100644 index 0000000000000000000000000000000000000000..0d8d38016a0d79e89eaa546465a086856a51b64c --- /dev/null +++ b/License @@ -0,0 +1,125 @@ +木兰宽松许可证, 第2版 +木兰宽松许可证, 第2版 + +2020年1月 http://license.coscl.org.cn/MulanPSL2 + +您对“软件”的复制、使用、修改及分发受木兰宽松许可证,第2版(“本许可证”)的如下条款的约束: + +0. 定义 + +“软件” 是指由“贡献”构成的许可在“本许可证”下的程序和相关文档的集合。 + +“贡献” 是指由任一“贡献者”许可在“本许可证”下的受版权法保护的作品。 + +“贡献者” 是指将受版权法保护的作品许可在“本许可证”下的自然人或“法人实体”。 + +“法人实体” 是指提交贡献的机构及其“关联实体”。 + +“关联实体” 是指,对“本许可证”下的行为方而言,控制、受控制或与其共同受控制的机构,此处的控制是指有受控方或共同受控方至少50%直接或间接的投票权、资金或其他有价证券。 + +1. 授予版权许可 + +每个“贡献者”根据“本许可证”授予您永久性的、全球性的、免费的、非独占的、不可撤销的版权许可,您可以复制、使用、修改、分发其“贡献”,不论修改与否。 + +2. 授予专利许可 + +每个“贡献者”根据“本许可证”授予您永久性的、全球性的、免费的、非独占的、不可撤销的(根据本条规定撤销除外)专利许可,供您制造、委托制造、使用、许诺销售、销售、进口其“贡献”或以其他方式转移其“贡献”。前述专利许可仅限于“贡献者”现在或将来拥有或控制的其“贡献”本身或其“贡献”与许可“贡献”时的“软件”结合而将必然会侵犯的专利权利要求,不包括对“贡献”的修改或包含“贡献”的其他结合。如果您或您的“关联实体”直接或间接地,就“软件”或其中的“贡献”对任何人发起专利侵权诉讼(包括反诉或交叉诉讼)或其他专利维权行动,指控其侵犯专利权,则“本许可证”授予您对“软件”的专利许可自您提起诉讼或发起维权行动之日终止。 + +3. 无商标许可 + +“本许可证”不提供对“贡献者”的商品名称、商标、服务标志或产品名称的商标许可,但您为满足第4条规定的声明义务而必须使用除外。 + +4. 分发限制 + +您可以在任何媒介中将“软件”以源程序形式或可执行形式重新分发,不论修改与否,但您必须向接收者提供“本许可证”的副本,并保留“软件”中的版权、商标、专利及免责声明。 + +5. 免责声明与责任限制 + +“软件”及其中的“贡献”在提供时不带任何明示或默示的担保。在任何情况下,“贡献者”或版权所有者不对任何人因使用“软件”或其中的“贡献”而引发的任何直接或间接损失承担责任,不论因何种原因导致或者基于何种法律理论,即使其曾被建议有此种损失的可能性。 + +6. 语言 + +“本许可证”以中英文双语表述,中英文版本具有同等法律效力。如果中英文版本存在任何冲突不一致,以中文版为准。 + +条款结束 + +如何将木兰宽松许可证,第2版,应用到您的软件 + +如果您希望将木兰宽松许可证,第2版,应用到您的新软件,为了方便接收者查阅,建议您完成如下三步: + +1, 请您补充如下声明中的空白,包括软件名、软件的首次发表年份以及您作为版权人的名字; + +2, 请您在软件包的一级目录下创建以“LICENSE”为名的文件,将整个许可证文本放入该文件中; + +3, 请将如下声明文本放入每个源文件的头部注释中。 + +Copyright (c) [Year] [name of copyright holder] +[Software Name] is licensed under Mulan PSL v2. 
+You can use this software according to the terms and conditions of the Mulan PSL v2. +You may obtain a copy of Mulan PSL v2 at: + http://license.coscl.org.cn/MulanPSL2 +THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, +EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, +MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. +See the Mulan PSL v2 for more details. +Mulan Permissive Software License,Version 2 +Mulan Permissive Software License,Version 2 (Mulan PSL v2) + +January 2020 http://license.coscl.org.cn/MulanPSL2 + +Your reproduction, use, modification and distribution of the Software shall be subject to Mulan PSL v2 (this License) with the following terms and conditions: + +0. Definition + +Software means the program and related documents which are licensed under this License and comprise all Contribution(s). + +Contribution means the copyrightable work licensed by a particular Contributor under this License. + +Contributor means the Individual or Legal Entity who licenses its copyrightable work under this License. + +Legal Entity means the entity making a Contribution and all its Affiliates. + +Affiliates means entities that control, are controlled by, or are under common control with the acting entity under this License, ‘control’ means direct or indirect ownership of at least fifty percent (50%) of the voting power, capital or other securities of controlled or commonly controlled entity. + +1. Grant of Copyright License + +Subject to the terms and conditions of this License, each Contributor hereby grants to you a perpetual, worldwide, royalty-free, non-exclusive, irrevocable copyright license to reproduce, use, modify, or distribute its Contribution, with modification or not. + +2. 
Grant of Patent License + +Subject to the terms and conditions of this License, each Contributor hereby grants to you a perpetual, worldwide, royalty-free, non-exclusive, irrevocable (except for revocation under this Section) patent license to make, have made, use, offer for sale, sell, import or otherwise transfer its Contribution, where such patent license is only limited to the patent claims owned or controlled by such Contributor now or in future which will be necessarily infringed by its Contribution alone, or by combination of the Contribution with the Software to which the Contribution was contributed. The patent license shall not apply to any modification of the Contribution, and any other combination which includes the Contribution. If you or your Affiliates directly or indirectly institute patent litigation (including a cross claim or counterclaim in a litigation) or other patent enforcement activities against any individual or entity by alleging that the Software or any Contribution in it infringes patents, then any patent license granted to you under this License for the Software shall terminate as of the date such litigation or activity is filed or taken. + +3. No Trademark License + +No trademark license is granted to use the trade names, trademarks, service marks, or product names of Contributor, except as required to fulfill notice requirements in section 4. + +4. Distribution Restriction + +You may distribute the Software in any medium with or without modification, whether in source or executable forms, provided that you provide recipients with a copy of this License and retain copyright, patent, trademark and disclaimer statements in the Software. + +5. Disclaimer of Warranty and Limitation of Liability + +THE SOFTWARE AND CONTRIBUTION IN IT ARE PROVIDED WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED. 
IN NO EVENT SHALL ANY CONTRIBUTOR OR COPYRIGHT HOLDER BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE SOFTWARE OR THE CONTRIBUTION IN IT, NO MATTER HOW IT’S CAUSED OR BASED ON WHICH LEGAL THEORY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +6. Language + +THIS LICENSE IS WRITTEN IN BOTH CHINESE AND ENGLISH, AND THE CHINESE VERSION AND ENGLISH VERSION SHALL HAVE THE SAME LEGAL EFFECT. IN THE CASE OF DIVERGENCE BETWEEN THE CHINESE AND ENGLISH VERSIONS, THE CHINESE VERSION SHALL PREVAIL. + +END OF THE TERMS AND CONDITIONS + +How to Apply the Mulan Permissive Software License,Version 2 (Mulan PSL v2) to Your Software + +To apply the Mulan PSL v2 to your work, for easy identification by recipients, you are suggested to complete following three steps: + +Fill in the blanks in following statement, including insert your software name, the year of the first publication of your software, and your name identified as the copyright owner; +Create a file named "LICENSE" which contains the whole context of this License in the first directory of your software package; +Attach the statement to the appropriate annotated syntax at the beginning of each source file. +Copyright (c) [Year] [name of copyright holder] +[Software Name] is licensed under Mulan PSL v2. +You can use this software according to the terms and conditions of the Mulan PSL v2. +You may obtain a copy of Mulan PSL v2 at: + http://license.coscl.org.cn/MulanPSL2 +THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, +EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, +MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. +See the Mulan PSL v2 for more details. 
diff --git a/datachecker-check/pom.xml b/datachecker-check/pom.xml index 52476805a6fad96a840a7571a46511f789af0f1d..05a46eaa50c1db5dc498a026e40f883cb241523a 100644 --- a/datachecker-check/pom.xml +++ b/datachecker-check/pom.xml @@ -1,4 +1,19 @@ + + 4.0.0 @@ -46,7 +61,6 @@ mysql mysql-connector-java - provided com.alibaba @@ -85,6 +99,14 @@ com.google.guava guava + + org.apache.kafka + kafka-streams + + + org.springframework.kafka + spring-kafka + org.springframework.boot spring-boot-starter-test @@ -103,10 +125,6 @@ org.projectlombok lombok - - mysql - mysql-connector-java - diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/DatacheckerCheckApplication.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/DatacheckerCheckApplication.java index 4ec9eb7c4ac9da244676c8cbe0f185ad818abbf2..16b88395604b540ee86d522e6f2e43f7c8b5cdcf 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/DatacheckerCheckApplication.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/DatacheckerCheckApplication.java @@ -1,18 +1,52 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check; +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.check.service.EndpointManagerService; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.openfeign.EnableFeignClients; +import org.springframework.context.ConfigurableApplicationContext; import org.springframework.scheduling.annotation.EnableAsync; - +/** + * DatacheckerCheckApplication + * + * @author wang chao + * @date 2022/5/8 19:27 + * @since 11 + **/ +@Slf4j @EnableAsync @EnableFeignClients(basePackages = {"org.opengauss.datachecker.check.client"}) @SpringBootApplication public class DatacheckerCheckApplication { + private static ConfigurableApplicationContext context; + public static void main(String[] args) { - SpringApplication.run(DatacheckerCheckApplication.class, args); - } + context = SpringApplication.run(DatacheckerCheckApplication.class, args); + final EndpointManagerService managerService = context.getBean(EndpointManagerService.class); + managerService.start(); + if (!managerService.isEndpointHealth()) { + log.error("The verification service failed to start due to the abnormal state of the endpoint service"); + managerService.shutdown(); + context.close(); + } + } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/Statistical.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/Statistical.java new file mode 100644 index 0000000000000000000000000000000000000000..d3ff42fed4cff2a79f4447bf63d3c5ddce1c8b5a --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/Statistical.java @@ -0,0 +1,35 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.check.annotation; + +import java.lang.annotation.Documented; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +/** + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ +@Documented +@Target({ElementType.ANNOTATION_TYPE, ElementType.METHOD}) +@Retention(RetentionPolicy.RUNTIME) +public @interface Statistical { + + String name() default ""; +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalAspect.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalAspect.java new file mode 100644 index 0000000000000000000000000000000000000000..267974a9cd2696df10496df96440208f1a965fd5 --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalAspect.java @@ -0,0 +1,101 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.check.annotation.aspect; + +import lombok.extern.slf4j.Slf4j; +import org.aspectj.lang.ProceedingJoinPoint; +import org.aspectj.lang.Signature; +import org.aspectj.lang.annotation.Around; +import org.aspectj.lang.annotation.Aspect; +import org.aspectj.lang.annotation.Pointcut; +import org.aspectj.lang.reflect.MethodSignature; +import org.opengauss.datachecker.check.annotation.Statistical; +import org.opengauss.datachecker.common.util.FileUtils; +import org.opengauss.datachecker.common.util.JsonObjectUtil; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.stereotype.Component; + +import java.lang.reflect.Method; +import java.time.LocalDateTime; +import java.time.temporal.ChronoUnit; +import java.util.Objects; + +/** + * StatisticalAspect + * + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ +@Component +@Aspect +@Slf4j +public class StatisticalAspect { + @Value("${data.check.statistical-enable}") + private boolean shouldStatistical; + + @Value("${data.check.data-path}") + private String path; + + /** + * statistical + */ + @Pointcut("@annotation(org.opengauss.datachecker.check.annotation.Statistical)") + public void statistical() { + log.info("statistical annotation"); + } + + /** + * doAround ProceedingJoinPoint + * + * @param pjp round aspect + * @return method result + * @throws Throwable exception + */ + @Around("statistical()") + public Object doAround(ProceedingJoinPoint pjp) throws Throwable { + LocalDateTime start = LocalDateTime.now(); + Object ret = pjp.proceed(); + if (shouldStatistical) { + logStatistical(pjp, start); + } + return ret; + } + + private void logStatistical(ProceedingJoinPoint pjp, LocalDateTime start) { + final Signature signature = pjp.getSignature(); + if (!(signature instanceof MethodSignature)) { + return; + } + MethodSignature methodSignature = (MethodSignature) signature; + Method method = methodSignature.getMethod();
+ Statistical statistical = method.getAnnotation(Statistical.class); + if (Objects.nonNull(statistical)) { + StatisticalRecord record = buildStatistical(statistical, start); + FileUtils.writeAppendFile(getStatisticalFileName(), JsonObjectUtil.format(record)); + } + } + + private String getStatisticalFileName() { + return path.concat("statistical.txt"); + } + + private StatisticalRecord buildStatistical(Statistical statistical, LocalDateTime start) { + LocalDateTime end = LocalDateTime.now(); + return new StatisticalRecord().setStart(JsonObjectUtil.formatTime(start)).setEnd(JsonObjectUtil.formatTime(end)) + .setCost(start.until(end, ChronoUnit.SECONDS)).setName(statistical.name()); + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalRecord.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalRecord.java new file mode 100644 index 0000000000000000000000000000000000000000..f2547d65eade2d7c42ab417fa7160dcf58b6d208 --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/annotation/aspect/StatisticalRecord.java @@ -0,0 +1,37 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.check.annotation.aspect; + +import com.alibaba.fastjson.annotation.JSONType; +import lombok.Data; +import lombok.experimental.Accessors; + +/** + * StatisticalRecord + * + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ +@Data +@Accessors(chain = true) +@JSONType(orders = {"name", "start", "end", "cost"}) +public class StatisticalRecord { + private String start; + private String end; + private long cost; + private String name; +} \ No newline at end of file diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/Cache.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/Cache.java index 7d9117bd688171f829fbb7fd4cf112e977e93700..1c05003fef82124b80ec71f32126b5630eae1209 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/Cache.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/Cache.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.cache; import java.util.Set; @@ -10,58 +25,64 @@ import java.util.Set; public interface Cache<K, V> { /** - * 初始化缓存 并给键值设置默认值 + * Initialize cache and set default values for key values * - * @param keys 缓存键值 + * @param keys Cache key value */ void init(Set<K> keys); /** - * 添加键值对到缓存 + * Add key value pairs to cache * - * @param key 键 - * @param value 值 + * @param key key + * @param value value */ void put(K key, V value); /** - * 根据key查询缓存 + * Query cache according to key * - * @param key 缓存key - * @return 缓存value + * @param key Cache key + * @return Cache value */ V get(K key); /** - * 获取缓存Key集合 + * Get cache key set * - * @return Key集合 + * @return Key set */ Set<K> getKeys(); /** - * 更新缓存数据 + * Update cached data * - * @param key 缓存key - * @param value 缓存value - * @return 更新后的缓存value + * @param key key + * @param value value + * @return Updated cache value */ V update(K key, V value); /** - * 删除指定key缓存 + * Delete the specified key cache * * @param key key */ void remove(K key); /** - * 清除全部缓存 + * Clear all caches */ void removeAll(); /** - * 缓存持久化接口 将缓存信息持久化到本地 + * The cache persistence interface will persist the cache information locally */ void persistent(); + + /** + * The service starts to recover cached information. 
Recover historical data based on persistent cached data + * Scan the cache file at the specified location, parse the JSON string, and deserialize the current cache data + */ + void recover(); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/TableStatusRegister.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/TableStatusRegister.java index 0a743c505cbdeb703e153ff0eb37c09dc6faad83..d40cbf2d5e7b8e14cad6d1806690884584bd926a 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/TableStatusRegister.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/cache/TableStatusRegister.java @@ -1,15 +1,37 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.cache; import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.check.Pair; import org.opengauss.datachecker.common.exception.ExtractException; import org.springframework.stereotype.Service; -import javax.annotation.PostConstruct; import javax.validation.constraints.NotEmpty; -import java.util.HashSet; +import java.util.ArrayList; +import java.util.List; import java.util.Map; import java.util.Set; -import java.util.concurrent.*; +import java.util.concurrent.BlockingDeque; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.Executors; +import java.util.concurrent.LinkedBlockingDeque; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.stream.IntStream; /** * @author :wangchao @@ -19,106 +41,154 @@ import java.util.concurrent.*; @Slf4j @Service public class TableStatusRegister implements Cache<String, Integer> { - /** - * 任务状态缓存默认值 + * Task status: if both the source and destination tasks complete data extraction, the status is set to 3 */ - private static final int TASK_STATUS_DEFAULT_VALUE = 0; + public static final int TASK_STATUS_COMPLETED_VALUE = 3; /** - * 任务状态 源端 和宿端均完成数据抽取 + * Task status: when table data verification completes, the current table cache status is updated to value = value | 4 */ - public static final int TASK_STATUS_COMPLATED_VALUE = 3; + public static final int TASK_STATUS_CHECK_VALUE = 4; /** - * 任务状态 校验服务已进行当前任务校验 + * Task status: if both the source and destination tasks have completed data verification, the status is set to 7 */ - public static final int TASK_STATUS_COMSUMER_VALUE = 7; + public static final int TASK_STATUS_CONSUMER_VALUE = 7; + /** - * 状态自检线程名称 + * Task status cache. 
The initial default value of status is 0 */ - private static final String SELF_CHECK_THREAD_NAME = "task-register-self-check-thread"; + private static final int TASK_STATUS_DEFAULT_VALUE = 0; + /** + * Status self check thread name + */ + private static final String SELF_CHECK_THREAD_NAME = "task-status-manager"; /** - * 数据抽取任务对应表 执行状态缓存 - * {@code tableStatusCache} : key 为数据抽取表名称 - * {@code tableStatusCache} : value 为数据抽取表完成状态 - * value 值初始化状态为 0 - * 源端完成表识为 1 则更新当前表缓存状态为 value = value | 1 - * 宿端完成表识为 2 则更新当前表缓存状态为 value = value | 2 - * 数据校验标识为 4 则更新当前表缓存状态为 value = value | 4 + *
+     * Data extraction task execution state cache
+     * {@code tableStatusCache} : key Name of data extraction table
+     * {@code tableStatusCache} : value Data extraction table completion status
+     * value  initialization status is 0
+     * If the source endpoint completes the table identification as 1,
+     * the current table cache status will be updated as value = value | 1
+     * If the sink endpoint completes the table identification as 2,
+     * the current table cache status will be updated as value = value | 2
+     * If the data verification ID is 4, the current table cache status will be updated to value = value | 4
+     * 
*/ private static final Map<String, Integer> TABLE_STATUS_CACHE = new ConcurrentHashMap<>(); + private static final Map<String, Map<Integer, Integer>> TABLE_PARTITIONS_STATUS_CACHE = new ConcurrentHashMap<>(); /** - * 单线程定时任务 + * Queue of tables that have completed data extraction */ - private static final ScheduledExecutorService SCHEDULED_EXECUTOR = Executors.newSingleThreadScheduledExecutor(); + private static final BlockingDeque<String> COMPLETED_TABLE_QUEUE = new LinkedBlockingDeque<>(); + /** - * 完成表集合 + * The service starts to recover cached information. Recover historical data based on persistent cached data + * Scan the cache file at the specified location, parse the JSON string, and deserialize the current cache data */ - private static final Set<String> COMPLATED_TABLE = new HashSet<>(); + @Override + public void recover() { + // Scan the cache file at the specified location, parse the JSON string, and deserialize the current cache data + } + /** - * {@code complatedTableQueue} poll消费记录 + * Start and execute self-test thread */ - private static final Set<String> CONSUMER_COMPLATED_TABLE = new HashSet<>(); + public void selfCheck() { + ScheduledExecutorService scheduledExecutor = Executors.newSingleThreadScheduledExecutor(); + scheduledExecutor.scheduleWithFixedDelay(() -> { + Thread.currentThread().setName(SELF_CHECK_THREAD_NAME); + doCheckingStatus(); + cleanAndShutdown(scheduledExecutor); + }, 5, 1, TimeUnit.SECONDS); + } /** + * The verification service is considered complete when every table status has reached + * the verified value {@code TASK_STATUS_CONSUMER_VALUE} * + * @return true if all tables have been verified */ - private static final BlockingDeque<String> COMPLATED_TABLE_QUEUE = new LinkedBlockingDeque<>(); + public boolean isCheckCompleted() { + return TABLE_STATUS_CACHE.values().stream().allMatch(status -> status == TASK_STATUS_CONSUMER_VALUE); + } /** - * 服务启动恢复缓存信息。根据持久化缓存数据,恢复历史数据 - * 扫描指定位置的缓存文件,解析JSON字符串反序列化当前缓存数据 + * View the overall completion status of all extraction tasks at the source and destination + * + * @return All 
complete - returns true */ - @PostConstruct - public void recover() { - selfCheck(); - // 扫描指定位置的缓存文件,解析JSON字符串反序列化当前缓存数据 + public boolean isExtractCompleted() { + return extractCompletedCount() == cacheSize(); } /** - * 开启并执行自检线程 + * Task status reset */ - public void selfCheck() { - SCHEDULED_EXECUTOR.scheduleWithFixedDelay(() -> { - Thread.currentThread().setName(SELF_CHECK_THREAD_NAME); - doCheckingStatus(); + public void rest() { + init(TABLE_STATUS_CACHE.keySet()); + } - }, 0, 1, TimeUnit.SECONDS); + /** + * Check whether the cache is empty + * + * @return true if the cache is empty + */ + public boolean isEmpty() { + return TABLE_STATUS_CACHE.isEmpty(); } /** - * 完成数据抽取任务数量 和 已消费的完成任务数量一致,且大于0时,认为本次校验服务已完成 + * Check whether any table has completed extraction * - * @return + * @return true if at least one table has completed extraction */ - public boolean isCheckComplated() { - return CONSUMER_COMPLATED_TABLE.size() > 0 && CONSUMER_COMPLATED_TABLE.size() == COMPLATED_TABLE.size(); + public boolean hasExtractCompleted() { + return extractCompletedCount() > 0; } /** - * 增量任务状态重置 + * Count the tables that have completed extraction + * + * @return number of tables that have completed extraction */ - public void rest() { - CONSUMER_COMPLATED_TABLE.clear(); - COMPLATED_TABLE.clear(); - init(TABLE_STATUS_CACHE.keySet()); + public int extractCompletedCount() { + return (int) TABLE_STATUS_CACHE.values().stream().filter(status -> status >= TASK_STATUS_COMPLETED_VALUE) + .count(); } - public int complateSize() { - return COMPLATED_TABLE.size(); + /** + * Count the tables that have completed verification + * + * @return number of tables that have completed verification + */ + public int checkCompletedCount() { + return (int) TABLE_STATUS_CACHE.values().stream().filter(status -> status >= TASK_STATUS_CONSUMER_VALUE) + .count(); } - public boolean isEmpty() { - return CONSUMER_COMPLATED_TABLE.size() == 0 && COMPLATED_TABLE.size() == 0; + /** + * extract progress + * + * @return extract progress + */ + public Pair<Integer, Integer> extractProgress() { + return Pair.of(extractCompletedCount(), cacheSize()); } - public boolean 
hasExtractComplated() { - return COMPLATED_TABLE.size() > 0; + /** + * check progress + * + * @return check progress + */ + public Pair<Integer, Integer> checkProgress() { + return Pair.of(checkCompletedCount(), cacheSize()); } /** - * 初始化缓存 并给键值设置默认值 + * Initialize cache and set default values for key values * * @param keys * @return */ @@ -131,26 +201,97 @@ public class TableStatusRegister implements Cache { } /** - * 添加表状态对到缓存 + * Add table state pairs to cache * - * @param key 键 - * @param value 值 - * @return 返回任务状态 + * @param key key + * @param value value */ @Override public void put(String key, Integer value) { if (TABLE_STATUS_CACHE.containsKey(key)) { - // 当前key已存在不能重复添加 + // The current key already exists and cannot be added repeatedly throw new ExtractException("The current key= " + key + " already exists and cannot be added repeatedly"); } TABLE_STATUS_CACHE.put(key, value); } /** - * 根据key查询缓存 + * Initialize the Kafka partition status of the specified table + * + * @param key table name + * @param partitions number of partitions + */ + public void initPartitionsStatus(String key, Integer partitions) { + if (!TABLE_STATUS_CACHE.containsKey(key)) { + // The current key does not exist, so the partition status cannot be initialized + throw new ExtractException("The current key= " + key + " does not exist, the partition status cannot be initialized"); + } + Map<Integer, Integer> partitionMap = new ConcurrentHashMap<>(); + IntStream.range(0, partitions).forEach(partition -> { + partitionMap.put(partition, TASK_STATUS_COMPLETED_VALUE); + }); + TABLE_PARTITIONS_STATUS_CACHE.put(key, partitionMap); + } + + /** + * Update cached data * - * @param key 缓存key - * @return 缓存value + * @param key key + * @param value value + * @return Updated cache value + */ + @Override + public Integer update(String key, Integer value) { + if (!TABLE_STATUS_CACHE.containsKey(key)) { + log.error("current key={} does not exist", key); + return 0; + } + + Integer oldValue = TABLE_STATUS_CACHE.get(key); + TABLE_STATUS_CACHE.put(key, oldValue | value); + final Integer status = TABLE_STATUS_CACHE.get(key); + log.debug("update table[{}] status : {} -> {}", key, oldValue, status); + if (status == TASK_STATUS_COMPLETED_VALUE) { + putLast(key); + log.debug("add table[{}] queue last", key); + } + return status; + } + + /** + * Update the current table corresponding to the Kafka partition data extraction status + * + * @param key table + * @param partition partition + * @param value status + */ + public void update(String key, Integer partition, Integer value) { + if (!TABLE_PARTITIONS_STATUS_CACHE.containsKey(key)) { + log.error("current partition key={} does not exist", key); + return; + } + Integer merged = TASK_STATUS_COMPLETED_VALUE | value; + TABLE_PARTITIONS_STATUS_CACHE.get(key).put(partition, merged); + log.debug("update table [{}] partition[{}] status : {}", key, partition, merged); + boolean isAllCompleted = TABLE_PARTITIONS_STATUS_CACHE.get(key).values().stream() + .allMatch(status -> status == TASK_STATUS_CONSUMER_VALUE); + if (isAllCompleted) { + update(key, TASK_STATUS_CHECK_VALUE); + } + } + + private void putLast(String key) { + try { + COMPLETED_TABLE_QUEUE.putLast(key); + } catch (InterruptedException e) { + log.error("put key={} queue COMPLETED_TABLE_QUEUE error", key); + Thread.currentThread().interrupt(); + } + } + + /** + * Query cache according to key + * + * @param key key + * @return cache value */ @Override public Integer get(String key) { @@ -158,9 +299,9 @@ } /** - * 获取缓存Key集合 + * Get cache key set * - * @return Key集合 + * @return Key set */ @Override public Set<String> getKeys() { @@ -168,26 +309,16 @@ } /** - * 更新缓存数据 + * cache size * - * @param key 缓存key - * @param value 缓存value - * @return 更新后的缓存value + * @return cache size */ - @Override - public Integer update(String key, Integer value) { - if (!TABLE_STATUS_CACHE.containsKey(key)) { - log.error("current key={} does not exist", key); - return 0; - } - Integer odlValue = TABLE_STATUS_CACHE.get(key); -
TABLE_STATUS_CACHE.put(key, odlValue | value); - return TABLE_STATUS_CACHE.get(key); + public Integer cacheSize() { + return TABLE_STATUS_CACHE.keySet().size(); } - /** - * 删除指定key缓存 + * Delete the specified key cache * * @param key key */ @@ -197,59 +328,77 @@ public class TableStatusRegister implements Cache { } /** - * 清除全部缓存 + * Clear all caches */ @Override public void removeAll() { - COMPLATED_TABLE.clear(); - CONSUMER_COMPLATED_TABLE.clear(); TABLE_STATUS_CACHE.clear(); - COMPLATED_TABLE_QUEUE.clear(); + COMPLETED_TABLE_QUEUE.clear(); log.info("table status register cache information clearing"); } /** - * 缓存持久化接口 将缓存信息持久化到本地 - * 将缓存信息持久化到本地的缓存文件,序列化为JSON字符串,保存到本地指定文件中 + * clean check status and shutdown {@value SELF_CHECK_THREAD_NAME} thread + * + * @param scheduledExecutor scheduledExecutor + */ + public void cleanAndShutdown(ScheduledExecutorService scheduledExecutor) { + if (isCheckCompleted()) { + removeAll(); + scheduledExecutor.shutdownNow(); + log.info("clean check status and shutdown {} thread", SELF_CHECK_THREAD_NAME); + } + } + + /** + * The cache persistence interface will persist the cache information locally + * Persist the cache information to the local cache file, serialize it into JSON string, + * and save it to the local specified file */ @Override public void persistent() { } /** - * 返回并删除 已完成数据抽取任务的统计队列{@code complatedTableQueue} 头节点, - * 如果队列为空,则返回{@code null} + * Return and delete the statistical queue {@code completed_table_queue} header node + * that has completed the data extraction task, + * If the queue is empty, null is returned * - * @return 返回队列头节点,如果队列为空,则返回{@code null} + * @return Return the queue header node. 
If the queue is empty, return null */ - public String complatedTablePoll() { - return COMPLATED_TABLE_QUEUE.poll(); + public String completedTablePoll() { + return COMPLETED_TABLE_QUEUE.poll(); } /** - * 检查是否存在已完成数据抽取任务。若已完成,则返回true - * - * @return true 有已完成数据抽取的任务 + * Check whether there is a completed data extraction task. If yes, update completed_ Table table + * Check whether there is a completed data verification task. If yes, update consumer_ COMPLETED_ Table table */ private void doCheckingStatus() { Set keys = TABLE_STATUS_CACHE.keySet(); + if (keys.size() <= 0) { + return; + } + final Pair extractProgress = extractProgress(); + final Pair checkProgress = checkProgress(); + log.info("There are [{}] tables in total, [{}] tables are extracted and [{}] table is verified", + checkProgress.getSink(), extractProgress.getSource(), checkProgress.getSource()); + List notExtractCompleteList = new ArrayList<>(); + List notCheckCompleteList = new ArrayList<>(); + List checkCompleteList = new ArrayList<>(); keys.forEach(tableName -> { - final int taskStatus = TABLE_STATUS_CACHE.get(tableName); - log.debug("check table=[{}] status=[{}] ", tableName, taskStatus); - if (!COMPLATED_TABLE.contains(tableName)) { - if (taskStatus == TableStatusRegister.TASK_STATUS_COMPLATED_VALUE) { - COMPLATED_TABLE.add(tableName); - COMPLATED_TABLE_QUEUE.add(tableName); - log.info("extract [{}] complated", tableName); - } - } - - if (!CONSUMER_COMPLATED_TABLE.contains(tableName)) { - if (taskStatus == TableStatusRegister.TASK_STATUS_COMSUMER_VALUE) { - CONSUMER_COMPLATED_TABLE.add(tableName); - log.info("consumer [{}] complated", tableName); - } + Integer status = get(tableName); + if (status < TASK_STATUS_COMPLETED_VALUE) { + notExtractCompleteList.add(tableName); + } else if (status == TASK_STATUS_COMPLETED_VALUE) { + notCheckCompleteList.add(tableName); + } else if (status == TASK_STATUS_CONSUMER_VALUE) { + checkCompleteList.add(tableName); + } else { + log.error("table={} status={} error 
", tableName, status); } }); + log.debug("progress information: {} is being extracted, {} is being verified, and {} is completed", + notExtractCompleteList, notCheckCompleteList, checkCompleteList); } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFallbackFactory.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFallbackFactory.java new file mode 100644 index 0000000000000000000000000000000000000000..fc4014254671164a50f6c4032f41973131e4429c --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFallbackFactory.java @@ -0,0 +1,148 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.check.client; + +import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig; +import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; +import org.opengauss.datachecker.common.entry.enums.DML; +import org.opengauss.datachecker.common.entry.extract.ExtractTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.SourceDataLog; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.TableMetadataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.common.web.Result; +import org.springframework.cloud.openfeign.FallbackFactory; +import org.springframework.stereotype.Component; + +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * ExtractFallbackFactory + * + * @author :wangchao + * @date :Created in 2022/7/25 + * @since :11 + */ +@Component +public class ExtractFallbackFactory implements FallbackFactory { + /** + * Returns an instance of the fallback appropriate for the given cause. + * + * @param cause cause of an exception. 
+ * @return fallback + */ + @Override + public ExtractFeignClient create(Throwable cause) { + return new ExtractFeignClientImpl(); + } + + private class ExtractFeignClientImpl implements ExtractFeignClient { + @Override + public Result health() { + return Result.error("Remote call service health check exception"); + } + + @Override + public Result> queryMetaDataOfSchema() { + return Result.error("Remote call, endpoint loading metadata information exception"); + } + + @Override + public Result> buildExtractTaskAllTables(String processNo) { + return Result.error("Remote call, extract task construction exception"); + } + + @Override + public Result buildExtractTaskAllTables(String processNo, List taskList) { + return Result.error("Remote call, abnormal configuration of the destination extraction task"); + } + + @Override + public Result execExtractTaskAllTables(String processNo) { + return Result.error("Remote call, full extraction business processing process exception"); + } + + @Override + public Result queryTopicInfo(String tableName) { + return Result + .error("Remote call, query the topic information corresponding to the specified table is abnormal"); + } + + @Override + public Result getIncrementTopicInfo(String tableName) { + return Result.error("Remote call, get incremental topic information exception"); + } + + @Override + public Result> queryTopicData(String tableName, int partitions) { + return Result.error("Remote call, query the specified topic data exception"); + } + + @Override + public Result> queryIncrementTopicData(String tableName) { + return Result.error("Remote call, query the specified incremental topic data exception"); + } + + @Override + public Result cleanEnvironment(String processNo) { + return Result.error("Remote call, clean up the opposite end environment exception"); + } + + @Override + public Result cleanTask() { + return Result.error("Remote call, clear the task cache exception at the extraction end"); + } + + @Override + public 
Result> buildRepairDml(String schema, String tableName, DML dml, Set diffSet) { + return Result.error("Remote call, build and repair statement exceptions according to parameters"); + } + + @Override + public void notifyIncrementDataLogs(List dataLogList) { + } + + @Override + public Result queryTableMetadataHash(String tableName) { + return Result.error("Remote call, query table metadata hash information exception"); + } + + @Override + public Result> querySecondaryCheckRowData(SourceDataLog dataLog) { + return Result.error("Remote call, query secondary verification increment log data exception"); + } + + @Override + public Result getDatabaseSchema() { + return Result + .error("Remote call, query the schema information of the database at the extraction end, abnormal“"); + } + + @Override + public void refreshBlackWhiteList(CheckBlackWhiteMode mode, List tableList) { + + } + + @Override + public Result configIncrementCheckEnvironment(IncrementCheckConfig config) { + return Result.error("Remote call, configuration incremental verification scenario, " + + "configuration information related to debezium is abnormal"); + } + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClient.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClient.java index 38897e7acdecadbee82151857e574f3a73f74b74..04f1eb3252bc872459bed85839e8296ae28a6745 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClient.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClient.java @@ -1,9 +1,29 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.client; -import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg; +import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig; import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; import org.opengauss.datachecker.common.entry.enums.DML; -import org.opengauss.datachecker.common.entry.extract.*; +import org.opengauss.datachecker.common.entry.extract.ExtractTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.SourceDataLog; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.TableMetadataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; import org.opengauss.datachecker.common.web.Result; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; @@ -21,170 +41,168 @@ import java.util.Set; */ public interface ExtractFeignClient { /** - * 服务健康检查 + * Service health check * - * @return 返回接口相应结果 + * @return Return the corresponding result of the interface */ @GetMapping("/extract/health") Result health(); /** - * 端点加载元数据信息 + * Endpoint loading metadata information * - * @return 返回元数据 + * @return Return metadata */ @GetMapping("/extract/load/database/meta/data") Result> queryMetaDataOfSchema(); /** - * 抽取任务构建 + * Extraction task construction * - * @param processNo 执行进程编号 - * @return 返回构建任务集合 + * @param processNo Execution process number + * @return Return to build task collection */ 
@PostMapping("/extract/build/task/all") Result> buildExtractTaskAllTables(@RequestParam(name = "processNo") String processNo); /** - * 宿端抽取任务配置 + * Destination extraction task configuration * - * @param processNo 执行进程编号 - * @param taskList 源端任务列表 - * @return 请求结果 + * @param processNo Execution process number + * @param taskList Source side task list + * @return Request results */ @PostMapping("/extract/config/sink/task/all") Result buildExtractTaskAllTables(@RequestParam(name = "processNo") String processNo, - @RequestBody List taskList); + @RequestBody List taskList); /** - * 全量抽取业务处理流程 + * Full extraction business processing flow * - * @param processNo 执行进程序号 - * @return 执行结果 + * @param processNo Execution process sequence number + * @return Request results */ @PostMapping("/extract/exec/task/all") Result execExtractTaskAllTables(@RequestParam(name = "processNo") String processNo); - /** - * 查询指定表对应Topic信息 + * Query the topic information corresponding to the specified table * - * @param tableName 表名称 - * @return Topic信息 + * @param tableName tableName + * @return Topic information */ @GetMapping("/extract/topic/info") Result queryTopicInfo(@RequestParam(name = "tableName") String tableName); /** - * 获取增量 Topic信息 + * Get incremental topic information * - * @param tableName 表名称 - * @return 返回表对应的Topic信息 + * @param tableName table Name + * @return Return the topic information corresponding to the table */ @GetMapping("/extract/increment/topic/info") Result getIncrementTopicInfo(@RequestParam(name = "tableName") String tableName); /** - * 查询指定topic数据 + * Query the specified topic data * - * @param tableName 表名称 - * @param partitions topic分区 - * @return topic数据 + * @param tableName table Name + * @param partitions topic partitions + * @return topic data */ @GetMapping("/extract/query/topic/data") Result> queryTopicData(@RequestParam("tableName") String tableName, - @RequestParam("partitions") int partitions); + @RequestParam("partitions") int partitions); /** - * 
查询指定增量topic数据 + * Query the specified incremental topic data * - * @param tableName 表名称 - * @return topic数据 + * @param tableName table Name + * @return topic data */ @GetMapping("/extract/query/increment/topic/data") Result> queryIncrementTopicData(@RequestParam("tableName") String tableName); /** - * 清理对端环境 + * Clean up the opposite environment * - * @param processNo 执行进程序号 - * @return 执行结果 + * @param processNo Execution process sequence number + * @return Request results */ @PostMapping("/extract/clean/environment") Result cleanEnvironment(@RequestParam(name = "processNo") String processNo); /** - * 清除抽取端 任务缓存 + * Clear the extraction end task cache * - * @return 执行结果 + * @return Request results */ @PostMapping("/extract/clean/task") Result cleanTask(); /** - * 根据参数构建修复语句 + * Build repair statements based on parameters * - * @param schema 待修复端DB对应schema - * @param tableName 表名称 - * @param dml 修复类型{@link DML} - * @param diffSet 差异主键集合 - * @return 返回修复语句集合 + * @param schema The corresponding schema of the end DB to be repaired + * @param tableName table Name + * @param dml Repair type {@link DML} + * @param diffSet Differential primary key set + * @return Return to repair statement collection */ @PostMapping("/extract/build/repairDML") Result> buildRepairDml(@RequestParam(name = "schema") String schema, - @RequestParam(name = "tableName") String tableName, - @RequestParam(name = "dml") DML dml, - @RequestBody Set diffSet); + @RequestParam(name = "tableName") String tableName, @RequestParam(name = "dml") DML dml, + @RequestBody Set diffSet); /** - * 下发增量日志数据 + * Issue incremental log data * - * @param dataLogList 增量数据日志 + * @param dataLogList incremental log data */ @PostMapping("/extract/increment/logs/data") void notifyIncrementDataLogs(List dataLogList); /** - * 查询表元数据哈希信息 + * Query table metadata hash information * - * @param tableName 表名称 - * @return 表元数据哈希 + * @param tableName tableName + * @return Table metadata hash */ 
@PostMapping("/extract/query/table/metadata/hash") Result queryTableMetadataHash(@RequestParam(name = "tableName") String tableName); /** - * 提取增量日志数据记录 + * Extract incremental log data records * - * @param dataLog 日志记录 - * @return 返回抽取结果 + * @param dataLog data Log + * @return Return extraction results */ @PostMapping("/extract/query/secondary/data/row/hash") Result> querySecondaryCheckRowData(@RequestBody SourceDataLog dataLog); /** - * 查询抽取端数据库schema信息 + * Query the schema information of the extraction end database * - * @return 返回schema + * @return schema */ @GetMapping("/extract/query/database/schema") Result getDatabaseSchema(); /** - * 更新黑白名单列表 + * Update black and white list * - * @param mode 黑白名单模式枚举{@linkplain CheckBlackWhiteMode} - * @param tableList 黑白名单列表-表名称集合 + * @param mode Black and white list mode enumeration{@linkplain CheckBlackWhiteMode} + * @param tableList Black and white list - table name set */ - @PostMapping("/extract/refush/black/white/list") - void refushBlackWhiteList(@RequestParam CheckBlackWhiteMode mode, @RequestBody List tableList); + @PostMapping("/extract/refresh/black/white/list") + void refreshBlackWhiteList(@RequestParam CheckBlackWhiteMode mode, @RequestBody List tableList); /** - * 配置增量校验场景 debezium相关配置信息 + * Configure the configuration information related to debezium in the incremental verification scenario * - * @param conifg debezium相关配置 - * @return 返回请求结果 + * @param config Debezium related configurations + * @return Request results */ @PostMapping("/extract/debezium/topic/config") - Result configIncrementCheckEnvironment(IncrementCheckConifg conifg); + Result configIncrementCheckEnvironment(IncrementCheckConfig config); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClientFallBack.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClientFallBack.java deleted file mode 100644 index 
d7db0fe5d3369331ff790b7abe08186b4f74a16e..0000000000000000000000000000000000000000 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractFeignClientFallBack.java +++ /dev/null @@ -1,168 +0,0 @@ -//package org.opengauss.datachecker.check.client; -// -//import lombok.extern.slf4j.Slf4j; -//import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg; -//import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; -//import org.opengauss.datachecker.common.entry.enums.DML; -//import org.opengauss.datachecker.common.entry.extract.*; -//import org.opengauss.datachecker.common.web.Result; -// -//import java.util.List; -//import java.util.Map; -//import java.util.Set; -// -///** -// * @author :wangchao -// * @date :Created in 2022/5/30 -// * @since :11 -// */ -//@Slf4j -//public class ExtractFeignClientFallBack { -// public ExtractFeignClient getClient(Throwable throwable) { -// return new ExtractFeignClient() { -// /** -// * 服务健康检查 -// * -// * @return 返回接口相应结果 -// */ -// @Override -// public Result health() { -// log.error("health check error:{}", throwable.getMessage()); -// return Result.error("health check error"); -// } -// -// /** -// * 端点加载元数据信息 -// * -// * @return 返回元数据 -// */ -// @Override -// public Result> queryMetaDataOfSchema() { -// log.error("query database metadata error :{}", throwable.getMessage()); -// return Result.error("query database metadata error"); -// } -// -// /** -// * 抽取任务构建 -// * -// * @param processNo 执行进程编号 -// */ -// @Override -// public Result> buildExtractTaskAllTables(String processNo) { -// log.error("build extract task error , process number ={} :{}", processNo, throwable.getMessage()); -// return Result.error(String.format("build extract task error : process number =%s", processNo)); -// } -// -// /** -// * 宿端抽取任务配置 -// * -// * @param processNo 执行进程编号 -// * @param taskList 源端任务列表 -// * @return 请求结果 -// */ -// @Override -// public Result buildExtractTaskAllTables(String processNo, 
List taskList) { -// return null; -// } -// -// /** -// * 全量抽取业务处理流程 -// * -// * @param processNo 执行进程序号 -// * @return 执行结果 -// */ -// @Override -// public Result execExtractTaskAllTables(String processNo) { -// log.error("runing extract task error , process number ={} :{}", processNo, throwable.getMessage()); -// return Result.error(String.format("runing extract task error : process number =%s", processNo)); -// } -// -// -// @Override -// public Result queryTopicInfo(String tableName) { -// return null; -// } -// -// @Override -// public Result getIncrementTopicInfo(String tableName) { -// return null; -// } -// -// /** -// * 查询指定topic数据 -// * @param tableName topic名称 -// * @param partitions topic分区 -// * @return topic数据 -// */ -// @Override -// public Result> queryTopicData(String tableName, int partitions) { -// return null; -// } -// -// /** -// * 查询指定增量topic数据 -// * -// * @param tableName 表名称 -// * @return topic数据 -// */ -// @Override -// public Result> queryIncrementTopicData(String tableName) { -// return null; -// } -// -// /** -// * 清理对端环境 -// * -// * @param processNo 执行进程序号 -// * @return 执行结果 -// */ -// @Override -// public Result cleanEnvironment(String processNo) { -// log.error("clean environment error , process number ={} : ", processNo, throwable); -// return null; -// } -// -// @Override -// public Result cleanTask() { -// return null; -// } -// -// @Override -// public Result> buildRepairDml(String schema, String tableName, DML dml, Set diffSet) { -// log.error("build Repair DML error , tableName=[{}] dml=[{}] diffs=[{}] :", tableName, dml.getDescription(), diffSet, throwable); -// return null; -// } -// -// @Override -// public void notifyIncrementDataLogs(List dataLogList) { -// -// } -// -// @Override -// public Result queryTableMetadataHash(String tableName) { -// return null; -// } -// -// @Override -// public Result> querySecondaryCheckRowData(SourceDataLog dataLog) { -// return null; -// } -// -// @Override -// public Result 
getDatabaseSchema() { -// return null; -// } -// -// @Override -// public void refushBlackWhiteList(CheckBlackWhiteMode mode, List whiteList) { -// -// } -// -// @Override -// public Result configIncrementCheckEnvironment(IncrementCheckConifg conifg) { -// return null; -// } -// }; -// -// } -//} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSinkFeignClient.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSinkFeignClient.java index 0ebc296fb67d1f4693715e489a8b1bdd959a7780..98acf8cea94f4875b416ca5e7c5b5bbb8ebf1054 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSinkFeignClient.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSinkFeignClient.java @@ -1,17 +1,38 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.client; import org.springframework.cloud.openfeign.FeignClient; /** - * 创建一个内部类,声明被调用方的api接口,假如被调用方接口异常就会回调异常类进行异常声明 - * name可以声明value,datachecker-extract-sink 是服务名称直接调用该系统,名称一般采用eureka注册信息,我们未引入eurka,配置url进行调用 - * ExtractSinkFallBack 这是如果fegin调用失败需要熔断以及提示错误信息的类 + *
+ * Create an internal class and declare the API interface of the callee. If the interface of the callee is abnormal,
+ * the exception class will be called back for exception declaration
+ * Name can declare value,datachecker-extract-sink refers to the service name that directly calls the system.
+ * The name usually adopts Eureka registration information. We have not introduced eurka, and configure URL to call
+ * ExtractSinkFallBack .
+ * This is the class that needs to fuse and prompt error information if the fegin call fails
+ * 
  *
  * @author :wangchao
  * @date :Created in 2022/5/29
  * @since :11
  */
-@FeignClient(name = "datachecker-extract-sink", url = "${data.check.sink-uri}")
+@FeignClient(name = "datachecker-extract-sink", url = "${data.check.sink-uri}",
+    fallbackFactory = ExtractFallbackFactory.class)
 public interface ExtractSinkFeignClient extends ExtractFeignClient {
 
 }
\ No newline at end of file
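
Aside: the `fallbackFactory` wiring added above follows Spring Cloud OpenFeign's FallbackFactory pattern — on a failed remote call, the factory receives the cause and produces a substitute client whose methods return error results, exactly as ExtractFallbackFactory does. A minimal plain-Java sketch of that control flow (no Spring involved; the names FallbackSketch, Client, and the `call` helper are illustrative, not part of the project):

```java
public class FallbackSketch {
    /** Stand-in for the remote ExtractFeignClient interface. */
    interface Client {
        String health();
    }

    /** Mirrors the FallbackFactory contract: build a substitute client from the failure cause. */
    interface FallbackFactory<T> {
        T create(Throwable cause);
    }

    /** Fallback that answers every call with an error message, like ExtractFallbackFactory above. */
    static final FallbackFactory<Client> FALLBACK =
        cause -> () -> "Remote call service health check exception: " + cause.getMessage();

    /** What the Feign runtime does conceptually: try the remote call, route failures to the fallback. */
    static String call(Client primary, FallbackFactory<Client> fallback) {
        try {
            return primary.health();
        } catch (RuntimeException e) {
            return fallback.create(e).health();
        }
    }

    public static void main(String[] args) {
        // A client whose remote endpoint is unreachable always throws.
        Client broken = () -> { throw new RuntimeException("connection refused"); };
        System.out.println(call(broken, FALLBACK));
    }
}
```

The factory form (rather than a plain fallback instance) matters because it hands the implementation the original `Throwable`, so the fallback can log or embed the root cause instead of discarding it.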
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSourceFeignClient.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSourceFeignClient.java
index 7a5030fa9d83dd8daaa4e63107d3422ac6e59388..5c1887f2cda4e47e0d31f272bb323034f7a8f33f 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSourceFeignClient.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/ExtractSourceFeignClient.java
@@ -1,17 +1,38 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.client;
 
 import org.springframework.cloud.openfeign.FeignClient;
 
 /**
- * 创建一个内部类,声明被调用方的api接口,假如被调用方接口异常就会回调异常类进行异常声明
- * name可以声明value,datachecker-extract-source是服务名称直接调用该系统,名称一般采用eureka注册信息,我们未引入eurka,配置url进行调用
- * ExtractSourceFallBack 这是如果fegin调用失败需要熔断以及提示错误信息的类
+ * 
+ * Declares the API interface of the called service; if a remote call fails,
+ * the registered fallback is invoked to report the exception.
+ * The name attribute (datachecker-extract-source) identifies the target service. Such names normally come
+ * from Eureka registration; since Eureka is not used here, the url attribute is configured instead.
+ * ExtractFallbackFactory is the class that performs circuit breaking and reports
+ * the error message when the Feign call fails.
+ * 
  *
  * @author :wangchao
  * @date :Created in 2022/5/29
  * @since :11
  */
-@FeignClient(name = "datachecker-extract-source", url = "${data.check.source-uri}")
+@FeignClient(name = "datachecker-extract-source", url = "${data.check.source-uri}",
+    fallbackFactory = ExtractFallbackFactory.class)
 public interface ExtractSourceFeignClient extends ExtractFeignClient {
 
 }
\ No newline at end of file
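
For context, the TableStatusRegister changes earlier in this diff track each table's progress by OR-ing status bits together (`TABLE_STATUS_CACHE.put(key, odlValue | value)`) and comparing the accumulated value against the `TASK_STATUS_*` constants. A minimal sketch of that bit-flag pattern; the flag values and names below are assumptions for illustration, not the project's actual constants:

```java
public class StatusFlagsSketch {
    // Hypothetical flag values; the real TASK_STATUS_* constants live in TableStatusRegister.
    static final int EXTRACT_SOURCE = 1 << 0; // source-side extraction finished
    static final int EXTRACT_SINK   = 1 << 1; // sink-side extraction finished
    static final int CHECKED        = 1 << 2; // verification finished

    static final int EXTRACT_COMPLETED = EXTRACT_SOURCE | EXTRACT_SINK;
    static final int ALL_COMPLETED     = EXTRACT_COMPLETED | CHECKED;

    /** OR in a new phase flag; already-set phases are preserved and order does not matter. */
    static int update(int current, int flag) {
        return current | flag;
    }

    public static void main(String[] args) {
        int status = 0;
        status = update(status, EXTRACT_SINK);
        status = update(status, EXTRACT_SOURCE);
        System.out.println(status == EXTRACT_COMPLETED); // both ends extracted
        status = update(status, CHECKED);
        System.out.println(status == ALL_COMPLETED);     // table fully verified
    }
}
```

The OR accumulation is why `doCheckingStatus` can classify tables with simple comparisons: a status below the extraction-complete value means extraction is still running, equality means extraction is done but verification is not, and the full mask means the table is verified.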
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/FeignClientService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/FeignClientService.java
index 0e7f936ae66fad9a7768d5962d83811b60230517..5471fb2c9de09fc7a483f5c3f31d9cab625efa7c 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/FeignClientService.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/client/FeignClientService.java
@@ -1,6 +1,21 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.client;
 
-import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg;
+import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig;
 import org.opengauss.datachecker.common.entry.enums.DML;
 import org.opengauss.datachecker.common.entry.enums.Endpoint;
 import org.opengauss.datachecker.common.entry.extract.ExtractTask;
@@ -19,7 +34,7 @@ import java.util.Map;
 import java.util.Set;
 
 /**
- * 实现FeignClient 接口调用封装
+ * Encapsulates FeignClient interface calls
  *
  * @author :wangchao
  * @date :Created in 2022/5/29
@@ -34,9 +49,9 @@ public class FeignClientService {
     private ExtractSinkFeignClient extractSinkClient;
 
     /**
-     * 根据端点类型获取指定FeignClient
+     * Get the specified feign client according to the endpoint type
      *
-     * @param endpoint 端点类型
+     * @param endpoint endpoint type
      * @return feignClient
      */
     public ExtractFeignClient getClient(@NonNull Endpoint endpoint) {
@@ -44,20 +59,20 @@ public class FeignClientService {
     }
 
     /**
-     * 根据端点类型 获取对应健康状态
+     * Service health check
      *
-     * @param endpoint 端点类型
-     * @return 健康状态
+     * @param endpoint endpoint type
+     * @return Health status of the endpoint
      */
     public Result health(@NonNull Endpoint endpoint) {
         return getClient(endpoint).health();
     }
 
     /**
-     * 加载指定端点的数据库元数据信息
+     * Endpoint loading metadata information
      *
-     * @param endpoint 端点类型
-     * @return 元数据结果
+     * @param endpoint endpoint type
+     * @return Return metadata
      */
     public Map queryMetaDataOfSchema(@NonNull Endpoint endpoint) {
         Result> result = getClient(endpoint).queryMetaDataOfSchema();
@@ -65,74 +80,94 @@ public class FeignClientService {
             Map metadata = result.getData();
             return metadata;
         } else {
-            //调度源端服务获取数据库元数据信息异常
-            throw new DispatchClientException(endpoint, "The scheduling source service gets the database metadata information abnormally," + result.getMessage());
+            // Exception in scheduling source side service to obtain database metadata information
+            throw new DispatchClientException(endpoint,
+                "The scheduling source service gets the database metadata information abnormally," + result
+                    .getMessage());
         }
     }
 
-
     /**
-     * 抽取任务构建
+     * Extraction task construction
      *
-     * @param endpoint  端点类型
-     * @param processNo 执行进程编号
+     * @param endpoint  endpoint type
+     * @param processNo Execution process number
+     * @return Return the built extract task collection
      */
     public List buildExtractTaskAllTables(@NonNull Endpoint endpoint, String processNo) {
         Result> result = getClient(endpoint).buildExtractTaskAllTables(processNo);
         if (result.isSuccess()) {
             return result.getData();
         } else {
-            //调度抽取服务构建任务异常
-            throw new DispatchClientException(endpoint, "The scheduling extraction service construction task is abnormal," + result.getMessage());
+            // Scheduling extraction service construction task exception
+            throw new DispatchClientException(endpoint,
+                "The scheduling extraction service construction task is abnormal," + result.getMessage());
         }
     }
 
-    public boolean buildExtractTaskAllTables(@NonNull Endpoint endpoint, String processNo, @NonNull List taskList) {
+    /**
+     * Destination extraction task configuration
+     *
+     * @param endpoint  endpoint type
+     * @param processNo Execution process number
+     * @param taskList  Source side task list
+     * @return Request results
+     */
+    public boolean buildExtractTaskAllTables(@NonNull Endpoint endpoint, String processNo,
+        @NonNull List taskList) {
         Result result = getClient(endpoint).buildExtractTaskAllTables(processNo, taskList);
         if (result.isSuccess()) {
             return result.isSuccess();
         } else {
-            //调度抽取服务构建任务异常
-            throw new DispatchClientException(endpoint, "The scheduling extraction service construction task is abnormal," + result.getMessage());
+            // Scheduling extraction service construction task exception
+            throw new DispatchClientException(endpoint,
+                "The scheduling extraction service construction task is abnormal," + result.getMessage());
         }
     }
 
     /**
-     * 全量抽取业务处理流程
+     * Full extraction business processing flow
      *
-     * @param endpoint  端点类型
-     * @param processNo 执行进程序号
-     * @return 执行结果
+     * @param endpoint  endpoint type
+     * @param processNo Execution process sequence number
+     * @return Execution result
      */
     public boolean execExtractTaskAllTables(@NonNull Endpoint endpoint, String processNo) {
         Result result = getClient(endpoint).execExtractTaskAllTables(processNo);
         if (result.isSuccess()) {
             return result.isSuccess();
         } else {
-            //调度抽取服务执行任务失败
-            throw new DispatchClientException(endpoint, "Scheduling extraction service execution task failed," + result.getMessage());
+            // Scheduling extraction service execution task failed
+            throw new DispatchClientException(endpoint,
+                "Scheduling extraction service execution task failed," + result.getMessage());
         }
     }
 
     /**
-     * 清理对应端点构建的任务缓存信息 ,任务重置
+     * Clean up the opposite environment
      *
-     * @param endpoint 端点类型
+     * @param endpoint  endpoint type
+     * @param processNo Execution process sequence number
      */
     public void cleanEnvironment(@NonNull Endpoint endpoint, String processNo) {
         getClient(endpoint).cleanEnvironment(processNo);
     }
 
+    /**
+     * Clear the extraction end task cache
+     *
+     * @param endpoint endpoint type
+     */
     public void cleanTask(@NonNull Endpoint endpoint) {
         getClient(endpoint).cleanTask();
     }
 
     /**
-     * 查询指定表对应的Topic信息
+     * Query the topic information corresponding to the specified table
      *
-     * @param endpoint  端点类型
-     * @param tableName 表名称
-     * @return Topic信息
+     * @param endpoint  endpoint type
+     * @param tableName table name
+     * @return Topic information
      */
     public Topic queryTopicInfo(@NonNull Endpoint endpoint, String tableName) {
         Result result = getClient(endpoint).queryTopicInfo(tableName);
@@ -143,6 +178,13 @@ public class FeignClientService {
         }
     }
 
+    /**
+     * Get incremental topic information
+     *
+     * @param endpoint  endpoint type
+     * @param tableName table name
+     * @return the topic information corresponding to the table
+     */
     public Topic getIncrementTopicInfo(@NonNull Endpoint endpoint, String tableName) {
         Result result = getClient(endpoint).getIncrementTopicInfo(tableName);
         if (result.isSuccess()) {
@@ -152,7 +194,18 @@ public class FeignClientService {
         }
     }
 
-    public List buildRepairDml(Endpoint endpoint, String schema, String tableName, DML dml, Set diffSet) {
+    /**
+     * Build repair statements based on parameters
+     *
+     * @param endpoint  endpoint type
+     * @param schema    schema of the database to be repaired
+     * @param tableName table name
+     * @param dml       repair type {@link DML}
+     * @param diffSet   set of differing primary keys
+     * @return the collection of generated repair statements
+     */
+    public List buildRepairDml(Endpoint endpoint, String schema, String tableName, DML dml,
+        Set diffSet) {
         Result<List<String>> result = getClient(endpoint).buildRepairDml(schema, tableName, dml, diffSet);
         if (result.isSuccess()) {
             return result.getData();
@@ -162,15 +215,21 @@ public class FeignClientService {
     }
 
     /**
-     * 增量校验日志通知
+     * Notify the endpoint of incremental verification log data
      *
-     * @param endpoint    端点类型
-     * @param dataLogList 增量校验日志
+     * @param endpoint    endpoint type
+     * @param dataLogList incremental log data
      */
     public void notifyIncrementDataLogs(Endpoint endpoint, List dataLogList) {
         getClient(endpoint).notifyIncrementDataLogs(dataLogList);
     }
 
+    /**
+     * Query the schema information of the extraction end database
+     *
+     * @param endpoint endpoint type
+     * @return schema
+     */
     public String getDatabaseSchema(Endpoint endpoint) {
         Result result = getClient(endpoint).getDatabaseSchema();
         if (result.isSuccess()) {
@@ -180,7 +239,13 @@ public class FeignClientService {
         }
     }
 
-    public void configIncrementCheckEnvironment(Endpoint endpoint, IncrementCheckConifg conifg) {
+    /**
+     * Configure the Debezium-related settings for the incremental verification scenario
+     *
+     * @param endpoint endpoint type
+     * @param config   Debezium-related configuration
+     */
+    public void configIncrementCheckEnvironment(Endpoint endpoint, IncrementCheckConfig config) {
         Result result = getClient(endpoint).configIncrementCheckEnvironment(config);
         if (!result.isSuccess()) {
             throw new CheckingException(result.getMessage());
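Nearly every method in the FeignClientService diff above repeats the same pattern: call the remote endpoint, return `getData()` when `isSuccess()` is true, otherwise throw with the remote message. A minimal sketch of that pattern, using a simplified stand-in for the project's `Result` wrapper — the helper name `unwrap` and the `Result` stub here are illustrative assumptions, not the project's actual API:

```java
import java.util.function.Function;

// Sketch of the unwrap-or-throw pattern repeated across FeignClientService.
public class ResultUnwrapSketch {
    /** Simplified stand-in for org.opengauss.datachecker.common.web.Result. */
    public static class Result<T> {
        private final boolean success;
        private final T data;
        private final String message;

        public Result(boolean success, T data, String message) {
            this.success = success;
            this.data = data;
            this.message = message;
        }

        public boolean isSuccess() {
            return success;
        }

        public T getData() {
            return data;
        }

        public String getMessage() {
            return message;
        }
    }

    /** Unwrap the payload on success; otherwise throw using the remote message. */
    public static <T> T unwrap(Result<T> result, Function<String, RuntimeException> onFail) {
        if (result.isSuccess()) {
            return result.getData();
        }
        throw onFail.apply(result.getMessage());
    }

    public static void main(String[] args) {
        Result<String> ok = new Result<>(true, "test_schema", null);
        System.out.println(unwrap(ok, IllegalStateException::new));
    }
}
```

Centralizing the check this way keeps each client method down to one line and makes the endpoint-specific exception (here a generic `IllegalStateException` instead of `DispatchClientException`) a caller choice.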
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/AsyncConfig.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/AsyncConfig.java
index 8643fac27907316ba8f019170097879dc6a82b97..981c0357cc6a231d1ffcfc60faa878b30828efe7 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/AsyncConfig.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/AsyncConfig.java
@@ -1,3 +1,18 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.config;
 
 import org.springframework.context.annotation.Bean;
@@ -19,17 +34,11 @@ public class AsyncConfig {
     @Bean("asyncCheckExecutor")
     public ThreadPoolTaskExecutor asyncCheckExecutor() {
         ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
-        // 核心线程数, 当前机器的核心数 线程池创建时初始化线程数量
         executor.setCorePoolSize(Runtime.getRuntime().availableProcessors() * 2);
-        // 最大线程数:线程池最大的线程数,只有在缓冲队列满了之后才会申请超过核心线程数的线程
         executor.setMaxPoolSize(Runtime.getRuntime().availableProcessors() * 4);
-        // 缓冲队列: 用来缓冲执行任务的队列
         executor.setQueueCapacity(Integer.MAX_VALUE);
-        //允许线程空闲时间
         executor.setKeepAliveSeconds(60);
-        // 线程池名称前缀
         executor.setThreadNamePrefix("check-thread");
-        // 缓冲队列满了之后的拒绝策略:
         executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());
         executor.initialize();
         return executor;
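The hunk above strips the Chinese comments that documented each pool setting. For reference, the same sizing maps directly onto a plain `java.util.concurrent.ThreadPoolExecutor`; this is an illustrative sketch, not project code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Plain-Java equivalent of the Spring ThreadPoolTaskExecutor settings above.
public class CheckExecutorSketch {
    public static ThreadPoolExecutor build() {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
            cores * 2,                     // core pool: threads kept once created
            cores * 4,                     // max pool: only grows after the queue fills
            60L, TimeUnit.SECONDS,         // idle timeout for threads above the core size
            new LinkedBlockingQueue<>(Integer.MAX_VALUE),  // buffer queue for pending tasks
            new ThreadPoolExecutor.DiscardPolicy());       // drop tasks silently when saturated
    }
}
```

One caveat worth noting: with a queue capacity of `Integer.MAX_VALUE` the pool in practice never grows past the core size (the queue never fills), and `DiscardPolicy` drops overflow work silently; a bounded queue with `CallerRunsPolicy` is the usual alternative when back-pressure matters.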
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckConfig.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckConfig.java
index daa3e208b9330bf762ccd377c8bbb4ac4d9f2dcd..8d9520c3376016c8ea9a35ca1844e8dcaac732ee 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckConfig.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckConfig.java
@@ -1,5 +1,21 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.config;
 
+import lombok.Getter;
 import lombok.extern.slf4j.Slf4j;
 import org.opengauss.datachecker.common.util.JsonObjectUtil;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -8,20 +24,17 @@ import org.springframework.stereotype.Component;
 import org.springframework.web.client.RestTemplate;
 
 import javax.annotation.PostConstruct;
-import java.io.File;
-
 
 /**
  * @author :wangchao
  * @date :Created in 2022/5/23
  * @since :11
  */
+@Getter
 @Slf4j
 @Component
 public class DataCheckConfig {
 
-    private static final String CHECK_RESULT_PATH = File.separator + "Result" + File.separator + "Date" + File.separator;
-
     @Autowired
     private DataCheckProperties properties;
 
@@ -41,6 +54,6 @@ public class DataCheckConfig {
     }
 
     public String getCheckResultPath() {
-        return properties.getDataPath() + CHECK_RESULT_PATH;
+        return properties.getDataPath();
     }
 }
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckProperties.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckProperties.java
index e82763520f2e8a3ae222a6d0e188ff7df2be354e..c749692509b0817d128fd49926fa4e91fddff1ef 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckProperties.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataCheckProperties.java
@@ -1,3 +1,18 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.config;
 
 import com.alibaba.fastjson.annotation.JSONType;
@@ -28,42 +43,52 @@ public class DataCheckProperties {
     @PostConstruct
     private void checkUrl() {
         if (Objects.equals(sourceUri, sinkUri)) {
-            // 源端和宿端的访问地址冲突,请重新配置。
-            throw new CheckingAddressConflictException("The access addresses of the source end and the destination end conflict, please reconfigure.");
+            // The access addresses of the source end and the destination end conflict, please reconfigure.
+            throw new CheckingAddressConflictException(
+                "The access addresses of the source end and the destination end conflict, please reconfigure.");
         }
     }
 
-
     /**
-     * 数据校验服务地址:源端 源端地址不能为空
+     * Data verification service address: the source address cannot be empty
      */
     @NotEmpty(message = "Source address cannot be empty")
     private String sourceUri;
 
     /**
-     * 数据校验服务地址:宿端 宿端地址不能为空")
+     * Data verification service address: the destination address cannot be empty
      */
     @NotEmpty(message = "The destination address cannot be empty")
     private String sinkUri;
 
     /**
-     * 桶容量 默认容量大小为 1
+     * Bucket capacity; the default capacity is 1
      */
     @Range(min = 1, message = "The minimum barrel capacity is 1")
     private int bucketExpectCapacity = 1;
 
     /**
-     * 健康检查地址
+     * Health check address
      */
     private String healthCheckApi;
     /**
-     * 数据结果根目录,数据校验结果根目录不能为空
+     * Root directory of data verification results; it cannot be empty
      */
     @NotEmpty(message = "The root directory of data verification results cannot be empty")
     private String dataPath;
 
     /**
-     * 是否增加黑白名单配置
+     * Black and white list configuration mode
      */
     private CheckBlackWhiteMode blackWhiteMode;
+    /**
+     * statistical-enable : Configure whether to perform verification time statistics.
+     * If true, the execution time of the verification process will be statistically analyzed automatically.
+     */
+    private boolean canStatisticalEnable;
+    /**
+     * auto-clean-environment: Configure whether to automatically clean the execution environment.
+     * If set to true, the environment will be cleaned automatically after the full verification process is completed.
+     */
+    private boolean canAutoCleanEnvironment;
 }
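The two new flags documented above bind from the service's Spring configuration. A hedged sketch of what the corresponding YAML might look like — the property prefix, key names, and values here are assumptions for illustration, since the `@ConfigurationProperties` prefix of `DataCheckProperties` is not visible in this hunk:

```yaml
# Hypothetical binding for DataCheckProperties; prefix, keys, and values are assumed.
data:
  check:
    source-uri: http://127.0.0.1:7777
    sink-uri: http://127.0.0.1:8888
    bucket-expect-capacity: 10
    data-path: ./result
    black-white-mode: WHITE           # illustrative CheckBlackWhiteMode value
    statistical-enable: true          # documented as binding to canStatisticalEnable
    auto-clean-environment: true      # documented as binding to canAutoCleanEnvironment
```

Note that `source-uri` and `sink-uri` must differ: the `checkUrl()` method above throws a `CheckingAddressConflictException` when they are equal.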
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataSourceConfig.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataSourceConfig.java
index 77605ade69351ceb0e2dbd40abbe1e75ef8ce664..d0b9d13a6b90679badd32627755b05f31f157e08 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataSourceConfig.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/DataSourceConfig.java
@@ -1,29 +1,88 @@
-package org.opengauss.datachecker.check.config;
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
 
+package org.opengauss.datachecker.check.config;
 
 import com.alibaba.druid.pool.DruidDataSource;
+import com.alibaba.druid.support.http.StatViewServlet;
+import com.alibaba.druid.support.http.WebStatFilter;
 import org.springframework.boot.context.properties.ConfigurationProperties;
+import org.springframework.boot.web.servlet.FilterRegistrationBean;
+import org.springframework.boot.web.servlet.ServletRegistrationBean;
 import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
 
 import javax.sql.DataSource;
-
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
 
 @Configuration
 public class DataSourceConfig {
     /**
-     * 
-     *  将自定义的 Druid数据源添加到容器中,不再让 Spring Boot 自动创建
-     *  绑定全局配置文件中的 druid 数据源属性到 com.alibaba.druid.pool.DruidDataSource从而让它们生效
-     *  @ConfigurationProperties(prefix = "spring.datasource"):作用就是将 全局配置文件中
-     *  前缀为 spring.datasource的属性值注入到 com.alibaba.druid.pool.DruidDataSource 的同名参数中
-     *  
+     * build check DruidDataSource
      *
-     * @return
+     * @return DruidDataSource
      */
     @Bean("dataSource")
     @ConfigurationProperties(prefix = "spring.datasource.druid.datacheck")
     public DataSource druidDataSourceOne() {
         return new DruidDataSource();
     }
+
+    /**
+     * Background monitoring.
+     * Configures the servlet of the Druid monitoring management background.
+     * There is no web.xml file when the servlet container is embedded, so the servlet registration
+     * mode of Spring Boot is used.
+     * Startup access address: http://localhost:8080/druid/api.html
+     *
+     * @return ServletRegistrationBean
+     */
+    @Bean
+    public ServletRegistrationBean initServletRegistrationBean() {
+        ServletRegistrationBean bean =
+            new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
+        // Configure the console account and password
+        HashMap initParameters = new HashMap<>();
+        // If the "allow" parameter is empty, everyone can access the console.
+        initParameters.put("allow", "");
+        // Set initialization parameters
+        bean.setInitParameters(initParameters);
+        return bean;
+    }
+
+    /**
+     * Configures the filter for Druid web monitoring.
+     * WebStatFilter: associates monitoring statistics between web requests and Druid data sources.
+     *
+     * @return FilterRegistrationBean
+     */
+    @Bean
+    public FilterRegistrationBean webStatFilter() {
+        FilterRegistrationBean bean = new FilterRegistrationBean();
+        bean.setFilter(new WebStatFilter());
+
+        // exclusions: requests excluded from statistics collection.
+        Map initParams = new HashMap<>();
+        initParams.put("exclusions", "*.js,*.css,/druid/*,/jdbc/*");
+        bean.setInitParameters(initParams);
+
+        // "/*" indicates that all requests are filtered.
+        bean.setUrlPatterns(Arrays.asList("/*"));
+        return bean;
+    }
 }
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/GlobalCheckingExceptionHandler.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/GlobalCheckingExceptionHandler.java
index 8b6453ddb6857b59b6582a0fbb2c6d20bfcf6399..cd4cddb1f453ac5e2d2b555f4df3b7ded92da0f8 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/GlobalCheckingExceptionHandler.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/GlobalCheckingExceptionHandler.java
@@ -1,3 +1,18 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.config;
 
 import lombok.extern.slf4j.Slf4j;
@@ -14,44 +29,44 @@ import javax.servlet.http.HttpServletRequest;
 public class GlobalCheckingExceptionHandler extends GlobalCommonExceptionHandler {
     /**
-     * 业务异常处理
+     * Business exception handling
      */
     @ExceptionHandler(value = CheckingException.class)
     public Result checkingException(HttpServletRequest request, CheckingException e) {
-        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(),
-            e.getMessage(), e);
+        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(),
+            e);
         logError(request, e);
         return Result.fail(ResultEnum.CHECKING, e.getMessage());
     }
 
     @ExceptionHandler(value = CheckingAddressConflictException.class)
     public Result checkingAddressConflictException(HttpServletRequest request, CheckingAddressConflictException e) {
-        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(),
-            e.getMessage(), e);
+        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(),
+            e);
         logError(request, e);
         return Result.fail(ResultEnum.CHECKING_ADDRESS_CONFLICT, e.getMessage());
     }
 
     @ExceptionHandler(value = CheckMetaDataException.class)
     public Result checkMetaDataException(HttpServletRequest request, CheckMetaDataException e) {
-        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(),
-            e.getMessage(), e);
+        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(),
+            e);
         logError(request, e);
         return Result.fail(ResultEnum.CHECK_META_DATA, e.getMessage());
     }
 
     @ExceptionHandler(value = LargeDataDiffException.class)
     public Result largeDataDiffException(HttpServletRequest request, LargeDataDiffException e) {
-        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(),
-            e.getMessage(), e);
+        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(),
+            e);
         logError(request, e);
         return Result.fail(ResultEnum.LARGE_DATA_DIFF, e.getMessage());
     }
 
     @ExceptionHandler(value = MerkleTreeDepthException.class)
     public Result merkleTreeDepthException(HttpServletRequest request, MerkleTreeDepthException e) {
-        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(),
-            e.getMessage(), e);
+        log.error("path:{}, queryParam:{}message:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(),
+            e);
         logError(request, e);
         return Result.fail(ResultEnum.MERKLE_TREE_DEPTH, e.getMessage());
     }
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/KafkaConsumerConfig.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/KafkaConsumerConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..5e43f4e82fbf7545baa36d4bcf3c70258a66b916
--- /dev/null
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/KafkaConsumerConfig.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
+package org.opengauss.datachecker.check.config;
+
+import lombok.extern.slf4j.Slf4j;
+import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
+import org.springframework.boot.context.properties.EnableConfigurationProperties;
+import org.springframework.stereotype.Component;
+
+/**
+ * KafkaConsumerConfig
+ *
+ * @author :wangchao
+ * @date :Created in 2022/5/17
+ * @since :11
+ */
+@Slf4j
+@Component
+@EnableConfigurationProperties(KafkaProperties.class)
+public class KafkaConsumerConfig {
+
+}
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/SpringDocConfig.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/SpringDocConfig.java
index 944edf72b42c2c1a144b93bf02fe34f504bbacbb..22e016b0919185605ba0833ca7d4833e7a371ce1 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/SpringDocConfig.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/config/SpringDocConfig.java
@@ -1,10 +1,25 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.config;
 
 import io.swagger.v3.oas.models.OpenAPI;
 import io.swagger.v3.oas.models.info.Info;
-import io.swagger.v3.oas.models.parameters.HeaderParameter;
+import lombok.extern.slf4j.Slf4j;
 import org.apache.commons.lang3.reflect.FieldUtils;
-import org.springdoc.core.customizers.OpenApiCustomiser;
+import org.opengauss.datachecker.common.exception.CommonException;
 import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
 import org.springframework.util.ReflectionUtils;
@@ -16,46 +31,46 @@ import java.lang.reflect.Field;
 import java.util.List;
 
 /**
- * swagger2配置
- * http://localhost:8080/swagger-ui/index.html
+ * swagger2 configuration
  *
  * @author :wangchao
  * @date :Created in 2022/5/17
  * @since :11
  */
-
-/**
- * 2021/8/13
- */
-
+@Slf4j
 @Configuration
 public class SpringDocConfig implements WebMvcConfigurer {
+    /**
+     * mallTinyOpenAPI
+     *
+     * @return OpenAPI
+     */
     @Bean
     public OpenAPI mallTinyOpenAPI() {
-        return new OpenAPI()
-            .info(new Info()
-                .title("data verification")
-                .description("Data verification tool data verification API")
-                .version("v1.0.0"));
+        return new OpenAPI().info(
+            new Info().title("data verification").description("Data verification tool data verification API")
+                .version("v1.0.0"));
     }
 
-
     /**
-     * 通用拦截器排除设置,所有拦截器都会自动加springdoc-opapi相关的资源排除信息,不用在应用程序自身拦截器定义的地方去添加,算是良心解耦实现。
+     * registry Interceptors
+     *
+     * @param registry registry Interceptors
      */
     @SuppressWarnings("unchecked")
     @Override
     public void addInterceptors(InterceptorRegistry registry) {
         try {
             Field registrationsField = FieldUtils.getField(InterceptorRegistry.class, "registrations", true);
-            List registrations = (List) ReflectionUtils.getField(registrationsField, registry);
+            List registrations =
+                (List) ReflectionUtils.getField(registrationsField, registry);
             if (registrations != null) {
                 for (InterceptorRegistration interceptorRegistration : registrations) {
                     interceptorRegistration.excludePathPatterns("/springdoc**/**");
                 }
             }
-        } catch (Exception e) {
-            e.printStackTrace();
+        } catch (CommonException e) {
+            log.error("swagger2 configuration addInterceptors error");
         }
     }
 }
\ No newline at end of file
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckBlackWhiteController.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckBlackWhiteController.java
index 4895b0b7dac1cf6bec159e032d4dfbef80fddca2..c0bcfeb5a907b0dd6a477f59f80b539a9112fe02 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckBlackWhiteController.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckBlackWhiteController.java
@@ -1,15 +1,31 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.controller;
 
 import io.swagger.v3.oas.annotations.Operation;
 import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import org.opengauss.datachecker.check.service.CheckBlackWhiteService;
-import org.opengauss.datachecker.check.service.CheckService;
-import org.opengauss.datachecker.common.entry.enums.CheckMode;
 import org.opengauss.datachecker.common.web.Result;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.validation.annotation.Validated;
-import org.springframework.web.bind.annotation.*;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.RestController;
 
 import java.util.List;
 
@@ -18,7 +34,7 @@ import java.util.List;
  * @date :Created in 2022/5/25
  * @since :11
  */
-@Tag(name = "CheckBlackWhiteController", description = "校验服务-黑白名单管理")
+@Tag(name = "CheckBlackWhiteController", description = "Verification service - black and white list management")
 @Validated
 @RestController
 @RequestMapping
@@ -28,63 +44,99 @@ public class CheckBlackWhiteController {
     private CheckBlackWhiteService checkBlackWhiteService;
 
     /**
-     * 开启校验
+     * Add white list. This function clears the historical white list and resets the white list to the current list
+     *
+     * @param whiteList whiteList
+     * @return request result
      */
-    @Operation(summary = "添加白名单列表 该功能清理历史白名单,重置白名单为当前列表")
     @PostMapping("/add/white/list")
-    public Result addWhiteList(@Parameter(name = "whiteList", description = "白名单列表")
-                               @RequestBody List whiteList) {
+    public Result addWhiteList(
+        @Parameter(name = "whiteList", description = "whiteList") @RequestBody List whiteList) {
         checkBlackWhiteService.addWhiteList(whiteList);
         return Result.success();
     }
 
-    @Operation(summary = "更新白名单列表 该功能在当前白名单基础上新增当前列表到白名单")
+    /**
+     * Update white list. This function adds the current list to the white list on the basis of the current white list
+     *
+     * @param whiteList whiteList
+     * @return request result
+     */
     @PostMapping("/update/white/list")
-    public Result updateWhiteList(@Parameter(name = "whiteList", description = "白名单列表")
-                                  @RequestBody List whiteList) {
+    public Result updateWhiteList(
+        @Parameter(name = "whiteList", description = "whiteList") @RequestBody List whiteList) {
        checkBlackWhiteService.updateWhiteList(whiteList);
         return Result.success();
     }
 
-    @Operation(summary = "移除白名单列表 该功能在当前白名单基础上移除当前列表到白名单")
+    /**
+     * Remove white list. This function removes the current list from the current white list
+     *
+     * @param whiteList whiteList
+     * @return request result
+     */
     @PostMapping("/delete/white/list")
-    public Result deleteWhiteList(@Parameter(name = "whiteList", description = "白名单列表")
-                                  @RequestBody List whiteList) {
+    public Result deleteWhiteList(
+        @Parameter(name = "whiteList", description = "whiteList") @RequestBody List whiteList) {
         checkBlackWhiteService.deleteWhiteList(whiteList);
         return Result.success();
     }
 
-    @Operation(summary = "查询白名单列表 ")
+    /**
+     * Query white list
+     *
+     * @return white list
+     */
     @PostMapping("/query/white/list")
     public Result> queryWhiteList() {
         return Result.success(checkBlackWhiteService.queryWhiteList());
     }
 
-    @Operation(summary = "添加黑名单列表 该功能清理历史黑名单,重置黑名单为当前列表")
+    /**
+     * Add blacklist. This function clears the historical blacklist and resets the blacklist to the current list
+     *
+     * @param blackList blackList
+     * @return request result
+     */
     @PostMapping("/add/black/list")
-    public Result addBlackList(@Parameter(name = "blackList", description = "黑名单列表")
-                               @RequestBody List blackList) {
+    public Result addBlackList(
+        @Parameter(name = "blackList", description = "Blacklist list") @RequestBody List blackList) {
         checkBlackWhiteService.addBlackList(blackList);
         return Result.success();
     }
 
-    @Operation(summary = "更新黑名单列表 该功能在当前黑名单基础上新增当前列表到黑名单")
+    /**
+     * Update blacklist. This function adds the current list to the blacklist on the basis of the current blacklist
+     *
+     * @param blackList blackList
+     * @return request result
+     */
     @PostMapping("/update/black/list")
-    public Result updateBlackList(@Parameter(name = "blackList", description = "黑名单列表")
-                                  @RequestBody List blackList) {
+    public Result updateBlackList(
+        @Parameter(name = "blackList", description = "Blacklist list") @RequestBody List blackList) {
         checkBlackWhiteService.updateBlackList(blackList);
         return Result.success();
     }
 
-    @Operation(summary = "移除黑名单列表 该功能在当前黑名单基础上移除当前列表到黑名单")
+    /**
+     * Remove blacklist. This function removes the current list from the blacklist based on the current blacklist
+     *
+     * @param blackList blackList
+     * @return request result
+     */
     @PostMapping("/delete/black/list")
-    public Result deleteBlackList(@Parameter(name = "blackList", description = "黑名单列表")
-                                  @RequestBody List blackList) {
+    public Result deleteBlackList(
+        @Parameter(name = "blackList", description = "Blacklist list") @RequestBody List blackList) {
         checkBlackWhiteService.deleteBlackList(blackList);
         return Result.success();
     }
 
-    @Operation(summary = "查询黑名单列表 ")
+    /**
+     * Query blacklist
+     *
+     * @return blackList
+     */
+    @Operation(summary = "Query blacklist list ")
     @PostMapping("/query/black/list")
     public Result> queryBlackList() {
         return Result.success(checkBlackWhiteService.queryBlackList());
diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckStartController.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckStartController.java
index d9d11efeeb6943c721e5118a22871e488ef72fec..12ff537f52d8f8969f054a04a7d22844d89935a6 100644
--- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckStartController.java
+++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/CheckStartController.java
@@ -1,55 +1,97 @@
+/*
+ * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ *
+ * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at:
+ *
+ *           http://license.coscl.org.cn/MulanPSL2
+ *
+ * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details.
+ */
+
 package org.opengauss.datachecker.check.controller;
 
 import io.swagger.v3.oas.annotations.Operation;
 import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import org.opengauss.datachecker.check.service.CheckService;
-import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg;
+import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig;
 import org.opengauss.datachecker.common.entry.enums.CheckMode;
 import org.opengauss.datachecker.common.web.Result;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.validation.annotation.Validated;
-import org.springframework.web.bind.annotation.*;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.RequestParam;
+import org.springframework.web.bind.annotation.RestController;
 
 /**
  * @author :wangchao
  * @date :Created in 2022/5/25
 * @since :11
 */
-@Tag(name = "CheckStartController", description = "校验服务-校验服务启动命令")
+@Tag(name = "CheckStartController", description = "Verification service - verification service start command")
 @Validated
 @RestController
 @RequestMapping
 public class CheckStartController {
-
     @Autowired
     private CheckService checkService;
 
     /**
-     * 开启校验
+     * Turn on verification
+     *
+     * @param checkMode checkMode {@value CheckMode#API_DESCRIPTION}
+     * @return verification process info
      */
-    @Operation(summary = "开启校验")
+    @Operation(summary = "Turn on verification")
     @PostMapping("/start/check")
-    public Result statCheck(@Parameter(name = "checkMode", description = CheckMode.API_DESCRIPTION)
-                            @RequestParam("checkMode") CheckMode checkMode) {
+    public Result statCheck(
+        @Parameter(name = "checkMode", description = CheckMode.API_DESCRIPTION) @RequestParam("checkMode")
+        CheckMode checkMode) {
         return Result.success(checkService.start(checkMode));
     }
 
-    @Operation(summary = "增量校验配置初始化")
+    /**
+     * Incremental verification configuration initialization
+     *
+     * @param config Debezium incremental migration verification initialization configuration
+     * @return request result
+     */
+    @Operation(summary = "Incremental verification configuration initialization")
    @PostMapping("/increment/check/config")
-    public Result incrementCheckConifg(@RequestBody IncrementCheckConifg incrementCheckConifg) {
-        checkService.incrementCheckConifg(incrementCheckConifg);
+    public Result incrementCheckConfig(@RequestBody IncrementCheckConfig config) {
+        checkService.incrementCheckConfig(config);
         return Result.success();
     }
 
-    @Operation(summary = "停止校验服务 并 清理校验服务", description = "对当前进程中的校验状态,以及抽取的数据等相关信息进行全面清理。")
+    /**
+     *
+     * Stop and clean up the verification service.
+     * Comprehensively clean up the verification status in the current process,
+     * the extracted data, and other related information
+     * 
+ * + * @return request result + */ @PostMapping("/stop/clean/check") public Result cleanCheck() { checkService.cleanCheck(); return Result.success(); } - @Operation(summary = "查询当前校验服务进程编号") + /** + * Query the current verification service process number + * + * @return process number + */ + @Operation(summary = "Query the current verification service process number") @GetMapping("/get/check/process") public Result getCurrentCheckProcess() { return Result.success(checkService.getCurrentCheckProcess()); diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/IncrementManagerController.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/IncrementManagerController.java index 9d5a96a5746037b6d2174f8aa365917724300bd8..e5ca3bc23302e686187bc631595be7c920e67b41 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/IncrementManagerController.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/IncrementManagerController.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.controller; import io.swagger.v3.oas.annotations.Operation; @@ -19,7 +34,7 @@ import java.util.List; * @date :Created in 2022/5/25 * @since :11 */ -@Tag(name = "IncrementManagerController", description = "校验服务-增量校验管理") +@Tag(name = "IncrementManagerController", description = "Verification service - incremental verification management") @Validated @RestController public class IncrementManagerController { @@ -28,16 +43,15 @@ public class IncrementManagerController { private IncrementManagerService incrementManagerService; /** - * 增量校验日志通知 + * Incremental verification log notification * - * @param dataLogList 增量校验日志 + * @param dataLogList Incremental verification log */ - @Operation(summary = "增量校验日志通知") + @Operation(summary = "Incremental verification log notification") @PostMapping("/notify/source/increment/data/logs") - public void notifySourceIncrementDataLogs(@Parameter(description = "增量校验日志") - @RequestBody @NotEmpty List dataLogList) { + public void notifySourceIncrementDataLogs(@Parameter(description = "Incremental verification log") @RequestBody + @NotEmpty List dataLogList) { incrementManagerService.notifySourceIncrementDataLogs(dataLogList); } - } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/TaskStatusController.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/TaskStatusController.java index 33a114413fa8a130d24c66b613f0fb39f5dbb8de..026199386e7b29043c4e4db8eeede57006b16733 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/TaskStatusController.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/controller/TaskStatusController.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.controller; import io.swagger.v3.oas.annotations.Operation; @@ -21,7 +36,7 @@ import java.util.List; * @date :Created in 2022/5/25 * @since :11 */ -@Tag(name = "TaskStatusController", description = "校验服务-数据抽取任务状态") +@Tag(name = "TaskStatusController", description = "Verification service - data extraction task status") @Validated @RestController public class TaskStatusController { @@ -30,28 +45,30 @@ public class TaskStatusController { private TaskManagerService taskManagerService; /** - * 刷新指定任务的数据抽取表执行状态 + * Refresh the execution status of the data extraction table of the specified task * - * @param tableName 表名称 - * @param endpoint 端点类型 {@link org.opengauss.datachecker.common.entry.enums.Endpoint} + * @param tableName tableName + * @param endpoint endpoint {@link org.opengauss.datachecker.common.entry.enums.Endpoint} */ - @Operation(summary = "刷新指定任务的数据抽取任务执行状态") + @Operation(summary = "Refresh the execution status of the data extraction table of the specified task") @PostMapping("/table/extract/status") - public void refushTableExtractStatus(@Parameter(description = "表名称") @RequestParam(value = "tableName") @NotEmpty String tableName, - @Parameter(description = "数据校验端点类型") @RequestParam(value = "endpoint") @NonNull Endpoint endpoint) { - taskManagerService.refushTableExtractStatus(tableName, endpoint); + public void refushTableExtractStatus( + @Parameter(description = "tableName") @RequestParam(value = "tableName") @NotEmpty String tableName, + @Parameter(description = Endpoint.API_DESCRIPTION) @RequestParam(value = "endpoint") @NonNull + Endpoint endpoint) { + 
taskManagerService.refreshTableExtractStatus(tableName, endpoint); } /** - * 初始化任务状态 + * Initialize task status * - * @param tableNameList 表名称列表 + * @param tableNameList tableNameList */ - @Operation(summary = "初始化任务状态") + @Operation(summary = "Initialize task status") @PostMapping("/table/extract/status/init") - public void initTableExtractStatus(@Parameter(description = "表名称列表") @RequestBody @NotEmpty List tableNameList) { + public void initTableExtractStatus( + @Parameter(description = "tableNameList") @RequestBody @NotEmpty List tableNameList) { taskManagerService.initTableExtractStatus(tableNameList); } - } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/Bucket.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/Bucket.java index 689755537fb935445bd96e191626d16b2deb72e0..8365861bdf8d435c22fab221ab21685cc5ef9e93 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/Bucket.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/Bucket.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.bucket; import lombok.Data; @@ -7,8 +22,8 @@ import org.opengauss.datachecker.common.util.ByteUtil; import javax.validation.constraints.NotNull; import java.io.Serializable; -import java.util.HashMap; import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; /** * @author :wangchao @@ -21,29 +36,35 @@ public class Bucket implements Serializable { private static final long serialVersionUID = 1L; /** - * 桶初始化容量,如果桶内数据的数量超过指定数量{@code initialCapacity*0.75} 则会自动触发桶容量扩容。 + *
+     * Bucket initialization capacity.
+     * If the amount of data in the bucket exceeds the threshold {@code initialCapacity*0.75},
+     * a bucket capacity expansion is triggered automatically.
+     * 
*/ private int initialCapacity; /** - * Bucket桶的容器,容器的初始化容量大小为设置为平均容量大小。 + *
+     * The initialization capacity of the bucket container is set to the average capacity.
      * 

- * 超出平均容量会进行扩容操作 + * If the average capacity is exceeded, the capacity will be expanded + *

*/ - private Map bucket = new HashMap<>(this.initialCapacity); + private Map bucket = new ConcurrentHashMap<>(initialCapacity); /** - * 桶编号 + * bucket number */ private Integer number; /** - * Bucket桶的哈希签名 ,签名初始化值为0 + * Hash signature of bucket bucket. The initialization value of the signature is 0 */ private long signature = 0L; /** - * 桶构造时,要求进行容量大小初始化 + * Capacity initialization is required during barrel construction * - * @param initialCapacity 桶初始化容量大小 + * @param initialCapacity Bucket initialization capacity size */ public Bucket(int initialCapacity) { this.initialCapacity = initialCapacity; diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/BuilderBucketHandler.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/BuilderBucketHandler.java index c6bca8f43ce5ef48c0ab9b8062feb841f3b1ff0d..a7f611e2723f947a1b6f7ef0852bbca041fde375 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/BuilderBucketHandler.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/bucket/BuilderBucketHandler.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.bucket; import org.opengauss.datachecker.common.entry.extract.RowDataHash; @@ -8,31 +23,35 @@ import java.util.Map; import java.util.stream.IntStream; /** + * BuilderBucketHandler + * * @author :wangchao * @date :Created in 2022/5/24 * @since :11 */ public class BuilderBucketHandler { /** - * 默克尔树最大树高度 + * Maximum height of Merkel tree */ private static final int MERKLE_TREE_MAX_HEIGHT = 15; /** - * 最高默克尔树的最大叶子节点数量 + * Maximum number of leaf nodes of the highest Merkel tree */ private static final int BUCKET_MAX_COUNT_LIMITS = 1 << MERKLE_TREE_MAX_HEIGHT; - /** - * 当限定了默克尔树最大树高度为{@value MERKLE_TREE_MAX_HEIGHT}, - * 那么构造的最高默克尔树的最大叶子节点数量为{@code BUCKET_MAX_COUNT_LIMITS} 即 {@value BUCKET_MAX_COUNT_LIMITS}。 + *
+     * When the maximum height of the Merkle tree is limited to {@value MERKLE_TREE_MAX_HEIGHT},
+     * the maximum number of leaf nodes of the tallest Merkle tree that can be constructed is
+     * {@code BUCKET_MAX_COUNT_LIMITS}, that is, {@value BUCKET_MAX_COUNT_LIMITS}.
      * 

- 由此,获得最大桶数量值为{@value BUCKET_MAX_COUNT_LIMITS }, - 桶数量范围我们限定每棵树桶的数量为 2^n 个 + Thus, the maximum number of buckets obtained is {@value BUCKET_MAX_COUNT_LIMITS }, + and we limit the number of buckets per tree to a power of two (2^n) + *
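Because the bucket count is always a power of two, bucket numbering can use a bitmask instead of a modulo. A sketch mirroring the count-selection and numbering logic in `BuilderBucketHandler` (simplified to a loop rather than the `BUCKET_COUNT_LIMITS` table scan):

```java
public class BucketNumbering {
    // Smallest power of two strictly greater than the estimated bucket count,
    // matching the first BUCKET_COUNT_LIMITS entry larger than the estimate.
    static int maxBucketCount(int totalCount, int bucketCapacity) {
        int estimate = totalCount / bucketCapacity;
        int count = 2;
        while (count <= estimate) {
            count <<= 1;
        }
        return count;
    }

    // With a power-of-two bucketCount, masking with (bucketCount - 1) is
    // equivalent to a modulo and always yields a number in [0, bucketCount).
    static int bucketNumber(long primaryKeyHash, int bucketCount) {
        return (int) (Math.abs(primaryKeyHash) & (bucketCount - 1));
    }

    public static void main(String[] args) {
        int count = maxBucketCount(100_000, 10_000); // estimate 10 -> 16 buckets
        System.out.println(count);
        System.out.println(bucketNumber(-987_654_321L, count));
    }
}
```

The mask trick is also what `HashMap` uses internally, and it is why the tree height limit translates directly into a `1 << n` bucket ceiling.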

*/ private static final int[] BUCKET_COUNT_LIMITS = new int[MERKLE_TREE_MAX_HEIGHT]; - // 初始化{@code BUCKET_COUNT_LIMITS} + // initialize {@code BUCKET_COUNT_LIMITS} static { for (int i = 1; i <= MERKLE_TREE_MAX_HEIGHT; i++) { BUCKET_COUNT_LIMITS[i - 1] = 1 << i; @@ -40,12 +59,12 @@ public class BuilderBucketHandler { } /** - * 空桶容量大小,用于构造特殊的空桶 + * The capacity of empty barrels is used to construct special empty barrels */ private static final int EMPTY_INITIAL_CAPACITY = 0; /** - * 当前桶初始化容量 + * Current bucket initialization capacity */ private final int bucketCapacity; @@ -54,69 +73,72 @@ public class BuilderBucketHandler { } /** - * 将{@code rowDataHashList}数据动态分配到桶{@link org.opengauss.datachecker.check.modules.bucket.Bucket}中。 - *

+ *

+     * Dynamically allocate {@code rowDataHashList} data to
+     * bucket {@link org.opengauss.datachecker.check.modules.bucket.Bucket}.
+     * 
* - * @param rowDataHashList 当前待分配到桶中的记录集合 - * @param totalCount 为{@link org.opengauss.datachecker.common.entry.extract.RowDataHash} 记录总数。 - * 注意:不一定为当前{@code rowDataHashList.size}总数 + * @param rowDataHashList Collection of records currently to be allocated to the bucket + * @param totalCount Record the total number for {@link RowDataHash}. + * Note: not necessarily the current {@code rowDataHashList.size} total * @param bucketMap {@code bucketMap} K为当前桶V的编号。 */ - public void builder(@NonNull List rowDataHashList, int totalCount, @NonNull Map bucketMap) { - // 根据当前记录总数计算当前最大桶数量 - int maxBucketCount = calacMaxBucketCount(totalCount); - // 桶平均容量-用于初始化桶容量大小 + public void builder(@NonNull List rowDataHashList, int totalCount, + @NonNull Map bucketMap) { + // Calculate the current maximum number of barrels according to the total number of current records + int maxBucketCount = calculateMaxBucketCount(totalCount); + // Average bucket capacity - used to initialize the bucket capacity size int averageCapacity = totalCount / maxBucketCount; rowDataHashList.forEach(row -> { long primaryKeyHash = row.getPrimaryKeyHash(); - // 计算桶编号信息 - int bucketNumber = calacBucketNumber(primaryKeyHash, maxBucketCount); + // Calculate bucket number information + int bucketNumber = calculateBucketNumber(primaryKeyHash, maxBucketCount); Bucket bucket; - // 根据row 信息获取指定编号的桶,如果不存在则创建桶 + // Obtain the bucket with the specified number according to the row information, + // and create the bucket if it does not exist if (bucketMap.containsKey(bucketNumber)) { bucket = bucketMap.get(bucketNumber); } else { bucket = new Bucket(averageCapacity).setNumber(bucketNumber); bucketMap.put(bucketNumber, bucket); } - // 将row 添加到指定桶编号的桶中 + // Add row to the bucket with the specified bucket number bucket.put(row); }); } /** - * 根据{@code totalCount}记录总数计算当前最大桶数量。桶的数量为2^n个 + *
+     * Calculate the current maximum number of buckets according to the total number of {@code totalCount} records.
+     * The number of buckets is 2^n
+     * 
* - * @param totalCount 记录总数 - * @return 最大桶数量 + * @param totalCount Total records + * @return Maximum barrels */ - private int calacMaxBucketCount(int totalCount) { + private int calculateMaxBucketCount(int totalCount) { int bucketCount = totalCount / bucketCapacity; - int asInt = IntStream.range(0, 15) - .filter(idx -> BUCKET_COUNT_LIMITS[idx] > bucketCount) - .findFirst() - .orElse(15); + int asInt = IntStream.range(0, 15).filter(idx -> BUCKET_COUNT_LIMITS[idx] > bucketCount).findFirst().orElse(15); return BUCKET_COUNT_LIMITS[asInt]; } /** - * 根据{@code rowHash}值对当前记录进行标记,此标记用于桶的编号 + * Mark the current record according to the {@code rowHash} value, which is used for the number of barrels * - * @param primaryKeyHash 行记录主键哈希值 - * @param bucketCount 桶数量 桶的数量为2^n个 - * @return 行记录桶编号 + * @param primaryKeyHash Row record primary key hash value + * @param bucketCount Number of barrels the number of barrels is 2^n + * @return Line record bucket number */ - private int calacBucketNumber(long primaryKeyHash, int bucketCount) { + private int calculateBucketNumber(long primaryKeyHash, int bucketCount) { return (int) (Math.abs(primaryKeyHash) & (bucketCount - 1)); } - /** - * 根据编号构造空桶 + * Construct empty barrels according to the number * - * @param bucketNumber 桶编号 - * @return 桶 + * @param bucketNumber bucket number + * @return bucket */ public static Bucket builderEmpty(Integer bucketNumber) { return new Bucket(EMPTY_INITIAL_CAPACITY).setNumber(bucketNumber); diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/AbstractCheckDiffResultBuilder.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/AbstractCheckDiffResultBuilder.java index 02f45f718c5fd654a26fbe35ba468f32ba94c0c1..d359845ca5793fd9010cc6b3a85f7983def4fb0f 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/AbstractCheckDiffResultBuilder.java +++ 
b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/AbstractCheckDiffResultBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.check; import lombok.Getter; @@ -41,59 +56,107 @@ public abstract class AbstractCheckDiffResultBuilder keyUpdateSet) { this.keyUpdateSet = keyUpdateSet; - this.repairUpdate = checkRepairSinkDiff(DML.REPLACE, this.schema, this.table, this.keyUpdateSet); - return this.self(); + repairUpdate = checkRepairSinkDiff(DML.REPLACE, schema, table, this.keyUpdateSet); + return self(); } + /** + * Set the keyInsertSet properties of the builder + * + * @param keyInsertSet keyInsertSet + * @return CheckDiffResultBuilder + */ public B keyInsertSet(Set keyInsertSet) { this.keyInsertSet = keyInsertSet; - this.repairInsert = checkRepairSinkDiff(DML.INSERT, this.schema, this.table, this.keyInsertSet); - return this.self(); + repairInsert = checkRepairSinkDiff(DML.INSERT, schema, table, this.keyInsertSet); + return self(); } + /** + * Set the keyDeleteSet properties of the builder + * + * @param keyDeleteSet keyDeleteSet + * @return CheckDiffResultBuilder + */ public B keyDeleteSet(Set keyDeleteSet) { this.keyDeleteSet = keyDeleteSet; - this.repairDelete = checkRepairSinkDiff(DML.DELETE, this.schema, this.table, this.keyDeleteSet); - return this.self(); + repairDelete = checkRepairSinkDiff(DML.DELETE, schema, table, this.keyDeleteSet); + return self(); } - + 
/** + * build CheckDiffResultBuilder + * + * @param feignClient feignClient + * @return CheckDiffResultBuilder + */ public static AbstractCheckDiffResultBuilder builder(FeignClientService feignClient) { return new AbstractCheckDiffResultBuilderImpl(feignClient); } - private static final class AbstractCheckDiffResultBuilderImpl extends AbstractCheckDiffResultBuilder { + private static final class AbstractCheckDiffResultBuilderImpl + extends AbstractCheckDiffResultBuilder { private AbstractCheckDiffResultBuilderImpl(FeignClientService feignClient) { super(feignClient); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/CheckDiffResult.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/CheckDiffResult.java index 55436524236d70286d935a1a5b36bc03fe10eb62..8e2e421540430d17792124491c89ea76e9fca15e 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/CheckDiffResult.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/CheckDiffResult.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.check; import com.alibaba.fastjson.annotation.JSONType; @@ -8,14 +23,15 @@ import java.util.List; import java.util.Set; /** + * CheckDiffResult + * * @author :wangchao * @date :Created in 2022/6/18 * @since :11 */ @Data -@JSONType(orders = {"schema", "table", "topic", "partitions", "createTime", - "keyInsertSet", "keyUpdateSet", "keyDeleteSet", - "repairInsert", "repairUpdate", "repairDelete"}) +@JSONType(orders = {"schema", "table", "topic", "partitions", "createTime", "keyInsertSet", "keyUpdateSet", + "keyDeleteSet", "repairInsert", "repairUpdate", "repairDelete"}) public class CheckDiffResult { private String schema; private String table; @@ -32,16 +48,16 @@ public class CheckDiffResult { private List repairDelete; public CheckDiffResult(final AbstractCheckDiffResultBuilder builder) { - this.table = builder.getTable(); - this.partitions = builder.getPartitions(); - this.topic = builder.getTopic(); - this.schema = builder.getSchema(); - this.createTime = builder.getCreateTime(); - this.keyUpdateSet = builder.getKeyUpdateSet(); - this.keyInsertSet = builder.getKeyInsertSet(); - this.keyDeleteSet = builder.getKeyDeleteSet(); - this.repairUpdate = builder.getRepairUpdate(); - this.repairInsert = builder.getRepairInsert(); - this.repairDelete = builder.getRepairDelete(); + table = builder.getTable(); + partitions = builder.getPartitions(); + topic = builder.getTopic(); + schema = builder.getSchema(); + createTime = builder.getCreateTime(); + keyUpdateSet = builder.getKeyUpdateSet(); + keyInsertSet = builder.getKeyInsertSet(); + keyDeleteSet = builder.getKeyDeleteSet(); + repairUpdate = builder.getRepairUpdate(); + repairInsert = builder.getRepairInsert(); + repairDelete = builder.getRepairDelete(); } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckKafkaConsumer.java 
b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckKafkaConsumer.java new file mode 100644 index 0000000000000000000000000000000000000000..df69e457af0a318ec5540f798e7b50ff29a7a2cf --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckKafkaConsumer.java @@ -0,0 +1,114 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.check.modules.check; + +import com.alibaba.fastjson.JSON; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.kafka.clients.consumer.ConsumerConfig; +import org.apache.kafka.clients.consumer.ConsumerRecords; +import org.apache.kafka.clients.consumer.KafkaConsumer; +import org.apache.kafka.common.TopicPartition; +import org.apache.kafka.common.serialization.StringDeserializer; +import org.opengauss.datachecker.check.client.FeignClientService; +import org.opengauss.datachecker.common.constant.Constants; +import org.opengauss.datachecker.common.entry.enums.Endpoint; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.common.util.ThreadUtil; +import org.springframework.boot.autoconfigure.kafka.KafkaProperties; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Properties; + +/** + * 
DataCheckKafkaConsumer + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j +public class DataCheckKafkaConsumer { + private KafkaProperties properties; + private FeignClientService feignClient; + + /** + * DataCheckKafkaConsumer constructor method + * + * @param properties KafkaProperties + * @param feignClient FeignClientService + */ + public DataCheckKafkaConsumer(KafkaProperties properties, FeignClientService feignClient) { + this.properties = properties; + this.feignClient = feignClient; + } + + /** + * Query the Kafka partition data corresponding to the specified table + * + * @param endpoint endpoint {@value Endpoint#API_DESCRIPTION} + * @param tableName table Name + * @param partitions Kafka partitions + * @return kafka partitions data + */ + public List queryRowData(Endpoint endpoint, String tableName, int partitions) { + List data = Collections.synchronizedList(new ArrayList<>()); + Topic topic = feignClient.queryTopicInfo(endpoint, tableName); + + KafkaConsumer kafkaConsumer = buildKafkaConsumer(); + kafkaConsumer.assign(List.of(new TopicPartition(topic.getTopicName(), partitions))); + + consumerTopicRecords(data, kafkaConsumer); + if (CollectionUtils.isEmpty(data)) { + ThreadUtil.sleep(1000); + consumerTopicRecords(data, kafkaConsumer); + } + log.debug("consumer kafka topic=[{}] partitions=[{}] dataList=[{}]", topic.toString(), partitions, data.size()); + return data; + } + + private void consumerTopicRecords(List data, KafkaConsumer kafkaConsumer) { + List result = getTopicRecords(kafkaConsumer); + while (result.size() > 0) { + data.addAll(result); + result = getTopicRecords(kafkaConsumer); + } + } + + private List getTopicRecords(KafkaConsumer kafkaConsumer) { + List dataList = new ArrayList<>(); + ConsumerRecords consumerRecords = kafkaConsumer.poll(Duration.ofMillis(200)); + consumerRecords.forEach(record -> { + dataList.add(JSON.parseObject(record.value(), RowDataHash.class)); + }); + return dataList; + } + + 
private KafkaConsumer buildKafkaConsumer() { + Properties props = new Properties(); + props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, + String.join(Constants.DELIMITER, properties.getBootstrapServers())); + props.put(ConsumerConfig.GROUP_ID_CONFIG, properties.getConsumer().getGroupId()); + props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getConsumer().getAutoOffsetReset()); + props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); + props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); + return new KafkaConsumer<>(props); + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckThread.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnable.java similarity index 43% rename from datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckThread.java rename to datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnable.java index 6943d638b54d3b59478607a62fff5126941cf6fb..a5bbb2108470dbc7b9471a3a2c902094430883eb 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckThread.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnable.java @@ -1,13 +1,30 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.check; import com.google.common.collect.MapDifference; import com.google.common.collect.Maps; import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.check.cache.TableStatusRegister; import org.opengauss.datachecker.check.client.FeignClientService; import org.opengauss.datachecker.check.modules.bucket.Bucket; import org.opengauss.datachecker.check.modules.bucket.BuilderBucketHandler; import org.opengauss.datachecker.check.modules.merkle.MerkleTree; import org.opengauss.datachecker.check.modules.merkle.MerkleTree.Node; +import org.opengauss.datachecker.check.service.StatisticalService; import org.opengauss.datachecker.common.constant.Constants; import org.opengauss.datachecker.common.entry.check.DataCheckParam; import org.opengauss.datachecker.common.entry.check.DifferencePair; @@ -20,47 +37,64 @@ import org.opengauss.datachecker.common.exception.MerkleTreeDepthException; import org.springframework.lang.NonNull; import org.springframework.util.CollectionUtils; -import java.util.*; +import java.time.LocalDateTime; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.LinkedList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; /** + * DataCheckRunnable + * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ @Slf4j -public class DataCheckThread implements Runnable { +public class DataCheckRunnable extends DataCheckKafkaConsumer implements Runnable { private static final int THRESHOLD_MIN_BUCKET_SIZE = 2; - private static final String THREAD_NAME_PRIFEX = "data-check-"; - - private final Topic topic; - private final String tableName; - private final int partitions; - private final int bucketCapacity; - private final String path; - private final String sinkSchema; - - private final 
FeignClientService feignClient; - - private final List sourceBucketList = new ArrayList<>(); - private final List sinkBucketList = new ArrayList<>(); - private final DifferencePair, Map, Map>> difference - = DifferencePair.of(new HashMap<>(), new HashMap<>(), new HashMap<>()); + private static final String THREAD_NAME_PRIFEX = "DATA_CHECK_"; + private final LocalDateTime start; + private final List sourceBucketList = Collections.synchronizedList(new ArrayList<>()); + private final List sinkBucketList = Collections.synchronizedList(new ArrayList<>()); + private final DifferencePair, Map, Map>> + difference = DifferencePair.of(new HashMap<>(), new HashMap<>(), new HashMap<>()); private final Map> bucketNumberDiffMap = new HashMap<>(); - private final QueryRowDataWapper queryRowDataWapper; + private final FeignClientService feignClient; private final DataCheckWapper dataCheckWapper; + private final StatisticalService statisticalService; + private final TableStatusRegister tableStatusRegister; + private final DataCheckParam checkParam; - public DataCheckThread(@NonNull DataCheckParam checkParam, @NonNull FeignClientService feignClient) { - this.topic = checkParam.getTopic(); - this.tableName = topic.getTableName(); - this.partitions = checkParam.getPartitions(); - this.path = checkParam.getPath(); - this.sinkSchema = checkParam.getSchema(); - this.bucketCapacity = checkParam.getBucketCapacity(); - this.feignClient = feignClient; - this.queryRowDataWapper = new QueryRowDataWapper(feignClient); - this.dataCheckWapper = new DataCheckWapper(); - resetThreadName(); + private String sinkSchema; + private Topic topic; + private String tableName; + private int partitions; + private int bucketCapacity; + private String path; + + /** + * DataCheckRunnable + * + * @param checkParam checkParam + * @param support support + */ + public DataCheckRunnable(@NonNull DataCheckParam checkParam, @NonNull DataCheckRunnableSupport support) { + super(checkParam.getProperties(), 
support.getFeignClientService()); + this.checkParam = checkParam; + start = LocalDateTime.now(); + feignClient = support.getFeignClientService(); + statisticalService = support.getStatisticalService(); + tableStatusRegister = support.getTableStatusRegister(); + dataCheckWapper = new DataCheckWapper(); } /** @@ -76,30 +110,54 @@ public class DataCheckThread implements Runnable { */ @Override public void run() { - - // 初始化桶列表 + paramInit(); + // Initialize bucket list initBucketList(); - - //不进行默克尔树校验算法场景 + // No Merkel tree verification algorithm scenario if (checkNotMerkleCompare(sourceBucketList.size(), sinkBucketList.size())) { + log.info("No Merkel tree verification algorithm scenario : [{}-{}],source-bucket-row={},sink-bucket-row={}", + tableName, partitions, sourceBucketList.size(), sinkBucketList.size()); + refreshCheckStatus(); return; } - - // 构造默克尔树 约束: bucketList 不能为空,且size>=2 + // Construct Merkel tree constraint: bucketList cannot be empty, and size > =2 MerkleTree sourceTree = new MerkleTree(sourceBucketList); MerkleTree sinkTree = new MerkleTree(sinkBucketList); - // 默克尔树比较 + // Merkel tree comparison if (sourceTree.getDepth() != sinkTree.getDepth()) { - throw new MerkleTreeDepthException(String.format("source & sink data have large different, Please synchronize data again! " + - "merkel tree depth different,source depth=[%d],sink depth=[%d]", - sourceTree.getDepth(), sinkTree.getDepth())); + refreshCheckStatus(); + throw new MerkleTreeDepthException(String.format(Locale.ROOT, + "source & sink data have large different, Please synchronize data again! " + + "merkel tree depth different,source depth=[%d],sink depth=[%d]", sourceTree.getDepth(), + sinkTree.getDepth())); } - //递归比较两颗默克尔树,并将差异记录返回。 + // Recursively compare two Merkel trees and return the difference record. 
compareMerkleTree(sourceTree, sinkTree); - // 校验结果 校验修复报告 + + // Verification result verification repair report checkResult(); cleanCheckThreadEnvironment(); + statisticalService.statistics(getStatisticsName(tableName, partitions), start); + refreshCheckStatus(); + } + + private void paramInit() { + sinkSchema = feignClient.getDatabaseSchema(Endpoint.SINK); + topic = checkParam.getTopic(); + tableName = topic.getTableName(); + partitions = checkParam.getPartitions(); + path = checkParam.getPath(); + bucketCapacity = checkParam.getBucketCapacity(); + resetThreadName(tableName, partitions); + } + + private void refreshCheckStatus() { + tableStatusRegister.update(tableName, partitions, TableStatusRegister.TASK_STATUS_CHECK_VALUE); + } + + private static String getStatisticsName(String tableName, int partitions) { + return tableName.concat("_").concat(String.valueOf(partitions)); } private void cleanCheckThreadEnvironment() { @@ -111,68 +169,75 @@ public class DataCheckThread implements Runnable { difference.getDiffering().clear(); } - /** - * 初始化桶列表 + * Initialize bucket list */ private void initBucketList() { - // 获取当前任务对应kafka分区号 - // 初始化源端桶列列表数据 + // Get the Kafka partition number corresponding to the current task + // Initialize source bucket column list data initBucketList(Endpoint.SOURCE, partitions, sourceBucketList); - // 初始化宿端桶列列表数据 + // Initialize destination bucket column list data initBucketList(Endpoint.SINK, partitions, sinkBucketList); - // 对齐源端宿端桶列表 + // Align the source and destination bucket list alignAllBuckets(); - // 排序 sortBuckets(sourceBucketList); sortBuckets(sinkBucketList); + log.info("Initialize the verification data and the bucket construction is currently completed of table [{}-{}]", + tableName, partitions); } /** - * 根据桶编号对最终桶列表进行排序 + * Sort the final bucket list by bucket number * - * @param bucketList 桶列表 + * @param bucketList bucketList */ private void sortBuckets(@NonNull List bucketList) { 
bucketList.sort(Comparator.comparingInt(Bucket::getNumber)); } /** - * 根据统计的源端宿端桶差异信息{@code bucketNumberDiffMap}结果,对齐桶列表数据。 + *
+     * Align the bucket list data according to the statistical results of source
+     * and destination bucket difference information {@code bucketNumberDiffMap}.
+     *
*/ private void alignAllBuckets() { dataCheckWapper.alignAllBuckets(bucketNumberDiffMap, sourceBucketList, sinkBucketList); } /** - * 拉取指定端点{@code endpoint}服务当前表{@code tableName}的kafka分区{@code partitions}数据。 - * 并将kafka数据分组组装到指定的桶列表{@code bucketList}中 + * Pull the Kafka partition {@code partitions} data + * of the specified table {@code tableName} of the specified endpoint {@code endpoint} service. + *
+ * And assemble Kafka data into the specified bucket list {@code bucketList} * - * @param endpoint 端点类型 - * @param partitions kafka分区号 - * @param bucketList 桶列表 + * @param endpoint Endpoint Type + * @param partitions kafka partitions + * @param bucketList Bucket list */ private void initBucketList(Endpoint endpoint, int partitions, List bucketList) { - Map bucketMap = new HashMap<>(Constants.InitialCapacity.MAP); - // 使用feignclient 拉取kafka数据 + Map bucketMap = new ConcurrentHashMap<>(Constants.InitialCapacity.MAP); + // Use feign client to pull Kafka data List dataList = getTopicPartitionsData(endpoint, partitions); if (CollectionUtils.isEmpty(dataList)) { return; } + log.info("Initialize the verification thread data, and pull the total number of [{}-{}-{}] data records to {}", + endpoint.getDescription(), tableName, partitions, dataList.size()); BuilderBucketHandler bucketBuilder = new BuilderBucketHandler(bucketCapacity); - // 拉取的数据进行构建桶列表 + // Use the pulled data to build the bucket list bucketBuilder.builder(dataList, dataList.size(), bucketMap); - // 统计桶列表信息 + // Statistics bucket list information bucketNoStatistics(endpoint, bucketMap.keySet()); bucketList.addAll(bucketMap.values()); } /** - * 比较两颗默克尔树,并将差异记录返回。 + * Compare the two Merkel trees and return the difference record. * - * @param sourceTree 源端默克尔树 - * @param sinkTree 宿端默克尔树 + * @param sourceTree Source Merkel tree + * @param sinkTree Sink Merkel tree */ private void compareMerkleTree(@NonNull MerkleTree sourceTree, @NonNull MerkleTree sinkTree) { Node source = sourceTree.getRoot(); @@ -190,28 +255,24 @@ public class DataCheckThread implements Runnable { difference.getOnlyOnLeft().putAll(subDifference.getOnlyOnLeft()); difference.getOnlyOnRight().putAll(subDifference.getOnlyOnRight()); }); - + log.info("Complete the data verification of table [{}-{}]", tableName, partitions); } /** - * 比较两个桶内部记录的差异数据 + * Compare the difference data recorded inside the two barrels *
- * 差异类型 {@linkplain org.opengauss.datachecker.common.entry.enums.DiffCategory} * - * @param sourceBucket 源端桶 - * @param sinkBucket 宿端桶 - * @return 差异记录 + * @param sourceBucket Source barrel + * @param sinkBucket Sink barrel + * @return Difference record */ private DifferencePair compareBucket(Bucket sourceBucket, Bucket sinkBucket) { - Map sourceMap = sourceBucket.getBucket(); Map sinkMap = sinkBucket.getBucket(); - - MapDifference difference = Maps.difference(sourceMap, sinkMap); - - Map entriesOnlyOnLeft = difference.entriesOnlyOnLeft(); - Map entriesOnlyOnRight = difference.entriesOnlyOnRight(); - Map> entriesDiffering = difference.entriesDiffering(); + MapDifference bucketDifference = Maps.difference(sourceMap, sinkMap); + Map entriesOnlyOnLeft = bucketDifference.entriesOnlyOnLeft(); + Map entriesOnlyOnRight = bucketDifference.entriesOnlyOnRight(); + Map> entriesDiffering = bucketDifference.entriesDiffering(); Map> differing = new HashMap<>(Constants.InitialCapacity.MAP); entriesDiffering.forEach((key, diff) -> { differing.put(key, Pair.of(diff.leftValue(), diff.rightValue())); @@ -220,41 +281,44 @@ public class DataCheckThread implements Runnable { } /** - * 递归比较两颗默克尔树节点,并记录差异节点。 - *
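The compareBucket hunk above delegates the three-way split to Guava's Maps.difference. A dependency-free sketch of the same semantics follows (the class name and the String-typed row hashes are illustrative stand-ins, not the project's actual types):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BucketDiffSketch {
    // Splits two bucket row maps the same three ways Guava's Maps.difference does:
    // entries only on the source side, entries only on the sink side, and keys whose values differ.
    static List<Map<String, String>> split(Map<String, String> source, Map<String, String> sink) {
        Map<String, String> onlyOnLeft = new HashMap<>();
        Map<String, String> onlyOnRight = new HashMap<>(sink);
        Map<String, String> differing = new HashMap<>();
        source.forEach((key, value) -> {
            if (!sink.containsKey(key)) {
                onlyOnLeft.put(key, value);    // row missing at the sink: candidate insert
            } else {
                onlyOnRight.remove(key);       // key present on both sides
                if (!value.equals(sink.get(key))) {
                    differing.put(key, value); // same key, different row hash: candidate update
                }
            }
        });
        return List.of(onlyOnLeft, onlyOnRight, differing);
    }

    public static void main(String[] args) {
        Map<String, String> source = Map.of("1", "hashA", "2", "hashB");
        Map<String, String> sink = Map.of("2", "hashX", "3", "hashC");
        List<Map<String, String>> diff = split(source, sink);
        // key 1 exists only at the source, key 3 only at the sink, key 2 differs
        System.out.println(diff.get(0).size() + "," + diff.get(1).size() + "," + diff.get(2).size());
    }
}
```

In the real code the three result maps feed the keyInsertSet, keyDeleteSet, and keyUpdateSet of the check result builder.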
- * 采用递归-前序遍历方式,遍历比较默克尔树,从而查找差异节点。 - *
- * 若当前遍历的节点{@link org.opengauss.datachecker.check.modules.merkle.MerkleTree.Node}签名相同则终止当前遍历分支。 + *
+     * Recursively compare two Merkle tree nodes and record the difference nodes.
+     * A recursive preorder traversal is used to walk and compare the Merkle trees
+     * in order to find the differing nodes.
+     * If the current traversal node {@link org.opengauss.datachecker.check.modules.merkle.MerkleTree.Node}
+     * has the same signature, the current traversal branch will be terminated.
+     *
* - * @param source 源端默克尔树节点 - * @param sink 宿端默克尔树节点 - * @param diffNodeList 差异节点记录 + * @param source Source Merkel tree node + * @param sink Sink Merkel tree node + * @param diffNodeList Difference node record */ private void compareMerkleTree(@NonNull Node source, @NonNull Node sink, List> diffNodeList) { - // 如果节点相同,则退出 + // If the nodes are the same, exit if (Arrays.equals(source.getSignature(), sink.getSignature())) { return; } - // 如果节点不相同,则继续比较下层节点,若当前差异节点为叶子节点,则记录该差异节点,并退出 + // If the nodes are different, continue to compare the lower level nodes. + // If the current difference node is a leaf node, record the difference node and exit if (source.getType() == MerkleTree.LEAF_SIG_TYPE) { diffNodeList.add(Pair.of(source, sink)); return; } compareMerkleTree(source.getLeft(), sink.getLeft(), diffNodeList); - compareMerkleTree(source.getRight(), sink.getRight(), diffNodeList); - } /** - * 对各端点构建的桶编号进行统计。统计结果汇总到{@code bucketNumberDiffMap}中。 - *
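The recursion in the hunk above can be illustrated with a minimal standalone sketch. The Node class here is a simplified stand-in for MerkleTree.Node (a byte[] signature plus optional children), and difference pairs are recorded as plain arrays instead of the project's Pair type:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MerkleCompareSketch {
    // Simplified stand-in for MerkleTree.Node: a signature plus optional children.
    static final class Node {
        final byte[] signature;
        final Node left;
        final Node right;

        Node(byte[] signature, Node left, Node right) {
            this.signature = signature;
            this.left = left;
            this.right = right;
        }

        boolean isLeaf() {
            return left == null && right == null;
        }
    }

    // Preorder traversal: equal signatures terminate the branch early,
    // differing leaves are recorded as source/sink pairs.
    static void compare(Node source, Node sink, List<Node[]> diffLeaves) {
        if (Arrays.equals(source.signature, sink.signature)) {
            return;
        }
        if (source.isLeaf()) {
            diffLeaves.add(new Node[] {source, sink});
            return;
        }
        compare(source.left, sink.left, diffLeaves);
        compare(source.right, sink.right, diffLeaves);
    }

    public static void main(String[] args) {
        // Both trees have the same depth, as the run() method enforces before comparing.
        Node sourceRoot = new Node(new byte[] {10},
            new Node(new byte[] {1}, null, null), new Node(new byte[] {2}, null, null));
        Node sinkRoot = new Node(new byte[] {11},
            new Node(new byte[] {1}, null, null), new Node(new byte[] {9}, null, null));
        List<Node[]> diffLeaves = new ArrayList<>();
        compare(sourceRoot, sinkRoot, diffLeaves);
        // only the right-hand leaves differ
        System.out.println(diffLeaves.size());
    }
}
```

Because equal signatures prune whole subtrees, only the buckets that actually differ are ever compared row by row.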
- * 默克尔比较算法,需要确保双方桶编号的一致。 - *
- * 如果一方的桶编号存在缺失,即{@code Pair}中,S或T的值为-1,则需要生成相应编号的空桶。 + *
+     * Count the bucket numbers built at each endpoint.
+     * The statistical results are summarized in {@code bucketNumberDiffMap}.
+     * The Merkle comparison algorithm requires the bucket numbers on both sides to be consistent.
+     * If a bucket number is missing on one side, that is, the value of S or T in {@code Pair} is -1,
+     * an empty bucket with the corresponding number must be generated.
+     *
* - * @param endpoint 端点 - * @param bucketNoSet 桶编号 + * @param endpoint end point + * @param bucketNoSet bucket numbers */ private void bucketNoStatistics(@NonNull Endpoint endpoint, @NonNull Set bucketNoSet) { bucketNoSet.forEach(bucketNo -> { @@ -276,66 +340,64 @@ public class DataCheckThread implements Runnable { } /** - * 拉取指定端点{@code endpoint}的表{@code tableName}的 kafka分区{@code partitions}数据 + * Pull the Kafka partition {@code partitions} data + * of the table {@code tableName} of the specified endpoint {@code endpoint} * - * @param endpoint 端点类型 - * @param partitions kafka分区号 - * @return 指定表 kafka分区数据 + * @param endpoint endpoint + * @param partitions kafka partitions + * @return Specify table Kafka partition data */ private List getTopicPartitionsData(Endpoint endpoint, int partitions) { - return queryRowDataWapper.queryRowData(endpoint, tableName, partitions); + return queryRowData(endpoint, tableName, partitions); } /** - * 不满足默克尔树约束条件下 比较 + * Comparison under Merkel tree constraints * - * @param sourceBucketCount 源端数量 - * @param sinkBucketCount 宿端数量 - * @return 是否满足默克尔校验场景 + * @param sourceBucketCount source bucket count + * @param sinkBucketCount sink bucket count + * @return Whether it meets the Merkel verification scenario */ private boolean checkNotMerkleCompare(int sourceBucketCount, int sinkBucketCount) { - // 满足构造默克尔树约束条件 + // Meet the constraints of constructing Merkel tree if (sourceBucketCount >= THRESHOLD_MIN_BUCKET_SIZE && sinkBucketCount >= THRESHOLD_MIN_BUCKET_SIZE) { return false; } - // 不满足默克尔树约束条件下 比较 + // Comparison without Merkel tree constraint if (sourceBucketCount == sinkBucketCount) { - // sourceSize等于0,即都是空桶 + // sourceSize == 0, that is, all buckets are empty if (sourceBucketCount == 0) { - //表是空表, 校验成功! - log.info("table[{}] is an empty table,this check successful!", tableName); + // Table is empty, verification succeeded! 
+ log.info("table[{}-{}] is an empty table,this check successful!", tableName, partitions); } else { - // sourceSize小于thresholdMinBucketSize 即都只有一个桶,比较 - DifferencePair subDifference = compareBucket(sourceBucketList.get(0), sinkBucketList.get(0)); + // sourceSize is less than thresholdMinBucketSize, that is, there is only one bucket. Compare + DifferencePair subDifference = + compareBucket(sourceBucketList.get(0), sinkBucketList.get(0)); difference.getDiffering().putAll(subDifference.getDiffering()); difference.getOnlyOnLeft().putAll(subDifference.getOnlyOnLeft()); difference.getOnlyOnRight().putAll(subDifference.getOnlyOnRight()); } } else { - throw new LargeDataDiffException(String.format("table[%s] source & sink data have large different," + - "source-bucket-count=[%s] sink-bucket-count=[%s]" + - " Please synchronize data again! ", tableName, sourceBucketCount, sinkBucketCount)); + refreshCheckStatus(); + throw new LargeDataDiffException(String.format( + "table[%s] source & sink data have large different," + "source-bucket-count=[%s] sink-bucket-count=[%s]" + + " Please synchronize data again! 
", tableName, sourceBucketCount, sinkBucketCount)); } return true; } private void checkResult() { - CheckDiffResult result = AbstractCheckDiffResultBuilder.builder(feignClient) - .table(tableName) - .topic(topic.getTopicName()) - .schema(sinkSchema) - .partitions(partitions) - .keyUpdateSet(difference.getDiffering().keySet()) - .keyInsertSet(difference.getOnlyOnLeft().keySet()) - .keyDeleteSet(difference.getOnlyOnRight().keySet()) - .build(); + CheckDiffResult result = + AbstractCheckDiffResultBuilder.builder(feignClient).table(tableName).topic(topic.getTopicName()) + .schema(sinkSchema).partitions(partitions) + .keyUpdateSet(difference.getDiffering().keySet()) + .keyInsertSet(difference.getOnlyOnLeft().keySet()) + .keyDeleteSet(difference.getOnlyOnRight().keySet()).build(); ExportCheckResult.export(path, result); + log.info("Complete the output of data verification results of table [{}-{}]", tableName, partitions); } - /** - * 重置当前线程 线程名称 - */ - private void resetThreadName() { - Thread.currentThread().setName(THREAD_NAME_PRIFEX + topic.getTopicName()); + private void resetThreadName(String tableName, int partitions) { + Thread.currentThread().setName(THREAD_NAME_PRIFEX + tableName.toUpperCase(Locale.ROOT) + "_" + partitions); } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnableSupport.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnableSupport.java new file mode 100644 index 0000000000000000000000000000000000000000..da69c54abe3399cab5f8891d32aec3f9b1cf5add --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckRunnableSupport.java @@ -0,0 +1,47 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.check.modules.check; + +import lombok.Getter; +import org.opengauss.datachecker.check.cache.TableStatusRegister; +import org.opengauss.datachecker.check.client.FeignClientService; +import org.opengauss.datachecker.check.config.DataCheckConfig; +import org.opengauss.datachecker.check.service.StatisticalService; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +/** + * DataCheckRunnableSupport + * + * @author :wangchao + * @date :Created in 2022/8/5 + * @since :11 + */ +@Getter +@Service +public class DataCheckRunnableSupport { + @Autowired + private FeignClientService feignClientService; + + @Autowired + private TableStatusRegister tableStatusRegister; + + @Autowired + private DataCheckConfig dataCheckConfig; + + @Autowired + private StatisticalService statisticalService; +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckService.java index ed336db655e8b20c13c067ca3c38f97e87df09f0..e98e9a8c8c91052962e7026ea158291c67168d57 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckService.java @@ -1,18 +1,36 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.check; import lombok.extern.slf4j.Slf4j; -import org.opengauss.datachecker.check.client.FeignClientService; import org.opengauss.datachecker.check.config.DataCheckConfig; import org.opengauss.datachecker.common.entry.check.DataCheckParam; -import org.opengauss.datachecker.common.entry.enums.Endpoint; import org.opengauss.datachecker.common.entry.extract.Topic; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Qualifier; +import org.springframework.boot.autoconfigure.kafka.KafkaProperties; import org.springframework.lang.NonNull; import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; import org.springframework.stereotype.Service; +import java.util.concurrent.Future; + /** + * DataCheckService + * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 @@ -20,9 +38,11 @@ import org.springframework.stereotype.Service; @Slf4j @Service public class DataCheckService { + @Autowired + private KafkaProperties kafkaProperties; @Autowired - private FeignClientService feignClientService; + private DataCheckRunnableSupport dataCheckRunnableSupport; @Autowired private DataCheckConfig dataCheckConfig; @@ -35,20 +55,48 @@ public class DataCheckService { * @param topic * @param partitions */ - public void checkTableData(@NonNull Topic topic, int partitions) { + public Future checkTableData(@NonNull Topic topic, int partitions) { + DataCheckParam checkParam = buildCheckParam(topic, partitions, dataCheckConfig); + final DataCheckRunnable dataCheckRunnable = new DataCheckRunnable(checkParam, 
dataCheckRunnableSupport); + return checkAsyncExecutor.submit(dataCheckRunnable); + } + + private DataCheckParam buildCheckParam(Topic topic, int partitions, DataCheckConfig dataCheckConfig) { final int bucketCapacity = dataCheckConfig.getBucketCapacity(); final String checkResultPath = dataCheckConfig.getCheckResultPath(); - - String schema = feignClientService.getDatabaseSchema(Endpoint.SINK); - final DataCheckParam checkParam = new DataCheckParam(bucketCapacity, topic, partitions, checkResultPath, schema); - checkAsyncExecutor.submit(new DataCheckThread(checkParam, feignClientService)); + return new DataCheckParam().setBucketCapacity(bucketCapacity).setTopic(topic).setPartitions(partitions) + .setProperties(kafkaProperties).setPath(checkResultPath); } public void incrementCheckTableData(Topic topic) { + + DataCheckParam checkParam = buildIncrementCheckParam(topic, dataCheckConfig); + final IncrementCheckThread incrementCheck = new IncrementCheckThread(checkParam, dataCheckRunnableSupport); + incrementCheck.setUncaughtExceptionHandler(new DataCheckThreadExceptionHandler()); + checkAsyncExecutor.submit(incrementCheck); + } + + private DataCheckParam buildIncrementCheckParam(Topic topic, DataCheckConfig dataCheckConfig) { final int bucketCapacity = dataCheckConfig.getBucketCapacity(); final String checkResultPath = dataCheckConfig.getCheckResultPath(); - String schema = feignClientService.getDatabaseSchema(Endpoint.SINK); - final DataCheckParam checkParam = new DataCheckParam(bucketCapacity, topic, 0, checkResultPath, schema); - checkAsyncExecutor.submit(new IncrementDataCheckThread(checkParam, feignClientService)); + return new DataCheckParam().setBucketCapacity(bucketCapacity).setTopic(topic).setPartitions(0) + .setPath(checkResultPath); + } + + static class DataCheckThreadExceptionHandler implements Thread.UncaughtExceptionHandler { + + /** + * Method invoked when the given thread terminates due to the + * given uncaught exception. + *
Any exception thrown by this method will be ignored by the + * Java Virtual Machine. + * + * @param thread the thread + * @param throwable the exception + */ + @Override + public void uncaughtException(Thread thread, Throwable throwable) { + log.error(thread.getName() + " exception: " + throwable); + } } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckWapper.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckWapper.java index c6bbe30b957ea395481d207ef7069f055988edb0..a4447d6cdbd3a02a35eddbdcebf308557e2fc87b 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckWapper.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/DataCheckWapper.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.check; import org.opengauss.datachecker.check.modules.bucket.Bucket; @@ -16,16 +31,16 @@ import java.util.Map; */ public class DataCheckWapper { - /** - * 根据统计的源端宿端桶差异信息{@code bucketNumberDiffMap}结果,对齐桶列表数据。 + * Align the bucket list data according to the statistical results of source and destination bucket + * difference information {@code bucketNumberDiffMap}. 
* - * @param bucketNumberDiffMap 源端宿端桶差异信息 - * @param sourceBucketList 源端桶列表 - * @param sinkBucketList 宿端通列表 + * @param bucketNumberDiffMap Source destination bucket difference information + * @param sourceBucketList Source bucket list + * @param sinkBucketList Sink bucket list */ public void alignAllBuckets(Map> bucketNumberDiffMap, - @NonNull List sourceBucketList, @NonNull List sinkBucketList) { + @NonNull List sourceBucketList, @NonNull List sinkBucketList) { if (!CollectionUtils.isEmpty(bucketNumberDiffMap)) { bucketNumberDiffMap.forEach((number, pair) -> { if (pair.getSource() == -1) { diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/ExportCheckResult.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/ExportCheckResult.java index e61361ebad89ca0cc549af6cd2ed5a7b5fd9cbc8..6c73654127faafc7ba5793eed2c1d02cfb27b72b 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/ExportCheckResult.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/ExportCheckResult.java @@ -1,22 +1,81 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.check; +import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.util.FileUtils; import org.opengauss.datachecker.common.util.JsonObjectUtil; +import java.io.File; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.time.LocalDateTime; +import java.time.format.DateTimeFormatter; + /** * @author :wangchao * @date :Created in 2022/6/17 * @since :11 */ +@Slf4j public class ExportCheckResult { + private static final DateTimeFormatter FORMATTER_DIR = DateTimeFormatter.ofPattern("yyyyMMddHHmmssSSS"); + + private static final String CHECK_RESULT_BAK_DIR = File.separator + "result_bak" + File.separator; + private static final String CHECK_RESULT_PATH = File.separator + "result" + File.separator; public static void export(String path, CheckDiffResult result) { - FileUtils.createDirectories(path); String fileName = getCheckResultFileName(path, result.getTable(), result.getPartitions()); FileUtils.writeAppendFile(fileName, JsonObjectUtil.format(result)); } private static String getCheckResultFileName(String path, String tableName, int partitions) { - return path.concat(tableName).concat("_").concat(String.valueOf(partitions)).concat(".txt"); + final String fileName = tableName.concat("_").concat(String.valueOf(partitions)).concat(".txt"); + return getResultPath(path).concat(fileName); + } + + /** + * Initialize the verification result environment + * + * @param path Verification result output path + */ + public static void initEnvironment(String path) { + String checkResultPath = getResultPath(path); + FileUtils.createDirectories(checkResultPath); + FileUtils.createDirectories(getResultBakRootDir(path)); + try { + final String backDir = getResultBakDir(path); + Files.move(Path.of(checkResultPath), Path.of(backDir)); + FileUtils.createDirectories(checkResultPath); + } catch (IOException e) { + log.error("initialize the verification result environment error"); 
+ } + log.info("initialize the verification result environment"); + } + + private static String getResultPath(String path) { + return path.concat(CHECK_RESULT_PATH); + } + + private static String getResultBakRootDir(String path) { + return path.concat(CHECK_RESULT_BAK_DIR); + } + + private static String getResultBakDir(String path) { + return getResultBakRootDir(path).concat(FORMATTER_DIR.format(LocalDateTime.now())); } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementDataCheckThread.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementCheckThread.java similarity index 57% rename from datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementDataCheckThread.java rename to datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementCheckThread.java index 15581c5d2357a192c54bdb7fea18759a96d58cfb..81f83497d2142da839a4c3adac75757fc95bfa0f 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementDataCheckThread.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/IncrementCheckThread.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.check; import com.google.common.collect.MapDifference; @@ -22,15 +37,26 @@ import org.opengauss.datachecker.common.web.Result; import org.springframework.lang.NonNull; import org.springframework.util.CollectionUtils; -import java.util.*; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.HashMap; +import java.util.LinkedList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; +import java.util.Set; /** + * IncrementCheckThread + * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ @Slf4j -public class IncrementDataCheckThread implements Runnable { +public class IncrementCheckThread extends Thread { private static final int THRESHOLD_MIN_BUCKET_SIZE = 2; private static final String THREAD_NAME_PRIFEX = "increment-data-check-"; @@ -39,28 +65,32 @@ public class IncrementDataCheckThread implements Runnable { private final int partitions; private final int bucketCapacity; private final String path; - private final FeignClientService feignClient; - private final List sourceBucketList = new ArrayList<>(); private final List sinkBucketList = new ArrayList<>(); - private final DifferencePair, Map, Map>> difference - = DifferencePair.of(new HashMap<>(), new HashMap<>(), new HashMap<>()); - + private final DifferencePair, Map, Map>> + difference = DifferencePair.of(new HashMap<>(), new HashMap<>(), new HashMap<>()); private final Map> bucketNumberDiffMap = new HashMap<>(); private final QueryRowDataWapper queryRowDataWapper; private final DataCheckWapper dataCheckWapper; + private String sinkSchema; - public IncrementDataCheckThread(@NonNull DataCheckParam checkParam, @NonNull FeignClientService feignClient) { - this.topic = checkParam.getTopic(); - this.tableName = topic.getTableName(); - this.partitions = checkParam.getPartitions(); - this.path = checkParam.getPath(); - this.bucketCapacity = 
checkParam.getBucketCapacity(); - this.feignClient = feignClient; - this.queryRowDataWapper = new QueryRowDataWapper(feignClient); - this.dataCheckWapper = new DataCheckWapper(); - resetThreadName(); + /** + * IncrementCheckThread constructor method + * + * @param checkParam Data Check Param + * @param support Data Check Runnable Support + */ + public IncrementCheckThread(@NonNull DataCheckParam checkParam, @NonNull DataCheckRunnableSupport support) { + super.setName(buildThreadName()); + topic = checkParam.getTopic(); + tableName = topic.getTableName(); + partitions = checkParam.getPartitions(); + path = checkParam.getPath(); + bucketCapacity = checkParam.getBucketCapacity(); + feignClient = support.getFeignClientService(); + queryRowDataWapper = new QueryRowDataWapper(feignClient); + dataCheckWapper = new DataCheckWapper(); } /** @@ -76,98 +106,96 @@ public class IncrementDataCheckThread implements Runnable { */ @Override public void run() { - - // 元数据校验 + sinkSchema = feignClient.getDatabaseSchema(Endpoint.SINK); + // Metadata verification if (!checkTableMetadata()) { return; } - // 初次校验 + // Initial verification firstCheckCompare(); - // 解析初次校验结果 + // Analyze the initial verification results List diffIdList = parseDiffResult(); - // 根据初次校验结果进行二次校验 + // Conduct secondary verification according to the initial verification results secondaryCheckCompare(diffIdList); - // 校验结果 校验修复报告 + // Verification result verification repair report checkResult(); } /** - * 初次校验 + * Initial verification */ private void firstCheckCompare() { - // 初始化桶列表 + // Initialize bucket list initFirstCheckBucketList(); compareCommonMerkleTree(); } /** - * 二次校验 + * the second check * - * @param diffIdList 初次校验差异ID列表 + * @param diffIdList Initial verification difference ID list */ private void secondaryCheckCompare(List diffIdList) { if (CollectionUtils.isEmpty(diffIdList)) { return; } - // 清理当前线程捏校验缓存信息 + // Clean up the current thread pinch check cache information lastDataClean(); - // 初始化桶列表 
+ // Initialize bucket list initSecondaryCheckBucketList(diffIdList); - // 进行二次校验 + // Conduct secondary verification compareCommonMerkleTree(); } /** - * 初始化桶列表 + * Initialize bucket list */ private void initFirstCheckBucketList() { - // 获取当前任务对应kafka分区号 - // 初始化源端桶列列表数据 + // Get the Kafka partition number corresponding to the current task + // Initialize source bucket column list data initFirstCheckBucketList(Endpoint.SOURCE, sourceBucketList); - // 初始化宿端桶列列表数据 + // Initialize destination bucket column list data initFirstCheckBucketList(Endpoint.SINK, sinkBucketList); - // 对齐源端宿端桶列表 + // Align the source and destination bucket list alignAllBuckets(); - // 排序 sortBuckets(sourceBucketList); sortBuckets(sinkBucketList); } private void initSecondaryCheckBucketList(List diffIdList) { - SourceDataLog dataLog = new SourceDataLog().setTableName(tableName) - .setCompositePrimaryValues(diffIdList); + SourceDataLog dataLog = new SourceDataLog().setTableName(tableName).setCompositePrimaryValues(diffIdList); buildBucket(Endpoint.SOURCE, dataLog); - buildBucket(Endpoint.SINK, dataLog); - // 对齐源端宿端桶列表 + // Align the source and destination bucket list alignAllBuckets(); - // 排序 sortBuckets(sourceBucketList); sortBuckets(sinkBucketList); } private void compareCommonMerkleTree() { - //不进行默克尔树校验算法场景 + // No Merkel tree verification algorithm scenario final int sourceBucketCount = sourceBucketList.size(); final int sinkBucketCount = sinkBucketList.size(); if (checkNotMerkleCompare(sourceBucketCount, sinkBucketCount)) { - // 不满足默克尔树约束条件下 比较 sourceSize等于0,即都是空桶 - if (sourceBucketCount == 0) { - //表是空表, 校验成功! + // If the constraint of Merkel tree is not satisfied, + // the sourceSize is equal to 0, that is, all buckets are empty + if (sourceBucketCount == sinkBucketCount && sinkBucketCount == 0) { + // Table is empty, verification succeeded! 
log.info("table[{}] is an empty table,this check successful!", tableName); } else { - // sourceSize小于thresholdMinBucketSize 即都只有一个桶,比较 - DifferencePair subDifference = compareBucket(sourceBucketList.get(0), sinkBucketList.get(0)); + // sourceSize is less than thresholdMinBucketSize, that is, there is only one bucket. Compare + DifferencePair subDifference = + compareBucket(sourceBucketList.get(0), sinkBucketList.get(0)); difference.getDiffering().putAll(subDifference.getDiffering()); difference.getOnlyOnLeft().putAll(subDifference.getOnlyOnLeft()); difference.getOnlyOnRight().putAll(subDifference.getOnlyOnRight()); } } - // 构造默克尔树 约束: bucketList 不能为空,且size>=2 + // Construct Merkel tree constraint: bucketList cannot be empty, and size > =2 MerkleTree sourceTree = new MerkleTree(sourceBucketList); MerkleTree sinkTree = new MerkleTree(sinkBucketList); - //递归比较两颗默克尔树,并将差异记录返回。 + // Recursively compare two Merkel trees and return the difference record. compareMerkleTree(sourceTree, sinkTree); } @@ -179,11 +207,10 @@ public class IncrementDataCheckThread implements Runnable { difference.getDiffering().clear(); } - /** - * 根据桶编号对最终桶列表进行排序 + * Sort the final bucket list by bucket number * - * @param bucketList 桶列表 + * @param bucketList bucketList */ private void sortBuckets(@NonNull List bucketList) { bucketList.sort(Comparator.comparingInt(Bucket::getNumber)); @@ -198,9 +225,12 @@ public class IncrementDataCheckThread implements Runnable { } /** - * 增量校验前置条件,当前表结构一致,若表结构不一致则直接退出。不进行数据校验 + *

+     * The precondition of incremental verification is that the table structures are consistent.
+     * If the table structures are inconsistent, exit directly without performing data verification.
+     * 
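Reviewer aid: the precondition described in this Javadoc can be sketched in isolation. The class and method names below are hypothetical, not the project's types — the point is only that incremental verification proceeds when both endpoints report the same table-structure hash, and exits otherwise.

```java
// Hypothetical standalone sketch of the metadata precondition; not project code.
final class MetadataPrecheck {
    // Both endpoints must report the same table-structure hash,
    // otherwise the incremental check exits without comparing data.
    static boolean structureConsistent(long sourceHash, long sinkHash) {
        return sourceHash == sinkHash;
    }

    public static void main(String[] args) {
        System.out.println(structureConsistent(42L, 42L)); // prints: true
        System.out.println(structureConsistent(42L, 7L)); // prints: false
    }
}
```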
* - * @return 返回元数据校验结果 + * @return the metadata verification result */ private boolean checkTableMetadata() { TableMetadataHash sourceTableHash = queryTableMetadataHash(Endpoint.SOURCE, tableName); @@ -213,37 +243,37 @@ if (result.isSuccess()) { return result.getData(); } else { - throw new DispatchClientException(endpoint, "query table metadata hash " + tableName + - " error, " + result.getMessage()); + throw new DispatchClientException(endpoint, + "query table metadata hash " + tableName + " error, " + result.getMessage()); } } /** - * 不满足默克尔树约束条件下 比较 + * Comparison when the Merkle tree constraint is not met * - * @param sourceBucketCount 源端数量 - * @param sinkBucketCount 宿端数量 - * @return 是否满足默克尔校验场景 + * @param sourceBucketCount source bucket count + * @param sinkBucketCount sink bucket count + * @return whether the non-Merkle verification scenario applies */ private boolean checkNotMerkleCompare(int sourceBucketCount, int sinkBucketCount) { - // 满足构造默克尔树约束条件 + // Check the constraint for constructing a Merkle tree return sourceBucketCount < THRESHOLD_MIN_BUCKET_SIZE || sinkBucketCount < THRESHOLD_MIN_BUCKET_SIZE; } /** - * 比较两颗默克尔树,并将差异记录返回。 + * Compare the two Merkle trees and return the difference records. * - * @param sourceTree 源端默克尔树 - * @param sinkTree 宿端默克尔树 + * @param sourceTree source tree + * @param sinkTree sink tree */ private void compareMerkleTree(@NonNull MerkleTree sourceTree, @NonNull MerkleTree sinkTree) { - // 默克尔树比较 + // Merkle tree comparison if (sourceTree.getDepth() != sinkTree.getDepth()) { - throw new MerkleTreeDepthException(String.format("source & sink data have large different, Please synchronize data again! " + - "merkel tree depth different,source depth=[%d],sink depth=[%d]", - sourceTree.getDepth(), sinkTree.getDepth())); + throw new MerkleTreeDepthException(String.format(Locale.ROOT, + "source & sink data have large different, Please synchronize data again! 
" + + "merkel tree depth different,source depth=[%d],sink depth=[%d]", sourceTree.getDepth(), + sinkTree.getDepth())); } - Node source = sourceTree.getRoot(); Node sink = sinkTree.getRoot(); List> diffNodeList = new LinkedList<>(); @@ -262,22 +292,24 @@ public class IncrementDataCheckThread implements Runnable { } /** - * 根据统计的源端宿端桶差异信息{@code bucketNumberDiffMap}结果,对齐桶列表数据。 + * Align the bucket list data according to the statistical results of source and destination bucket + * difference information {@code bucketNumberDiffMap}. */ private void alignAllBuckets() { dataCheckWapper.alignAllBuckets(bucketNumberDiffMap, sourceBucketList, sinkBucketList); } /** - * 拉取指定端点{@code endpoint}服务当前表{@code tableName}的kafka分区{@code partitions}数据。 - * 并将kafka数据分组组装到指定的桶列表{@code bucketList}中 + *
+     * Pull the Kafka partition {@code partitions} data of the current table {@code tableName}
+     * from the specified endpoint {@code endpoint} service,
+     * and assemble the Kafka data into groups in the specified bucket list {@code bucketList}.
+     * 
* - * @param endpoint 端点类型 - * @param bucketList 桶列表 + * @param endpoint endpoint + * @param bucketList bucket list */ private void initFirstCheckBucketList(Endpoint endpoint, List bucketList) { - - // 使用feignclient 拉取kafka数据 List dataList = getTopicPartitionsData(endpoint); buildBucket(dataList, endpoint, bucketList); } @@ -289,9 +321,9 @@ public class IncrementDataCheckThread implements Runnable { Map bucketMap = new HashMap<>(); BuilderBucketHandler bucketBuilder = new BuilderBucketHandler(bucketCapacity); - // 拉取的数据进行构建桶列表 + // Pull the data to build the bucket list bucketBuilder.builder(dataList, dataList.size(), bucketMap); - // 统计桶列表信息 + // Statistics bucket list information bucketNumberStatisticsIncrement(endpoint, bucketMap.keySet()); bucketList.addAll(bucketMap.values()); } @@ -302,14 +334,16 @@ public class IncrementDataCheckThread implements Runnable { } /** - * 对各端点构建的桶编号进行统计。统计结果汇总到{@code bucketNumberDiffMap}中。 - *

- * 默克尔比较算法,需要确保双方桶编号的一致。 - *

- * 如果一方的桶编号存在缺失,即{@code Pair}中,S或T的值为-1,则需要生成相应编号的空桶。 + *

+     * Count the bucket numbers built at each endpoint.
+     * The statistical results are summarized in {@code bucketNumberDiffMap}.
+     * The Merkle comparison algorithm requires the bucket numbers on both sides to be consistent.
+     * If a bucket number is missing on one side, that is, the S or T value in {@code Pair} is -1,
+     * an empty bucket with the corresponding number needs to be generated.
+     * 
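Reviewer aid: the alignment rule in this Javadoc can be sketched with plain sets. The helper below is hypothetical (it is not the project's `DataCheckWapper`); it only illustrates that, after alignment, both sides hold the union of bucket numbers, so any number absent on one side must be backed by an empty bucket there.

```java
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

// Hypothetical sketch of bucket-number alignment; not the project's implementation.
final class BucketAlignSketch {
    // After alignment both sides must hold every bucket number from either side;
    // numbers absent on one side are backed by empty buckets with that number.
    static SortedSet<Integer> alignedNumbers(Set<Integer> source, Set<Integer> sink) {
        SortedSet<Integer> union = new TreeSet<>(source);
        union.addAll(sink);
        return union;
    }

    public static void main(String[] args) {
        // Source built buckets 1 and 2, sink built 2 and 3: both sides need 1, 2, 3.
        System.out.println(alignedNumbers(Set.of(1, 2), Set.of(2, 3))); // prints: [1, 2, 3]
    }
}
```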
* - * @param endpoint 端点 - * @param bucketNumberSet 桶编号 + * @param endpoint endpoint + * @param bucketNumberSet bucket numbers */ private void bucketNumberStatisticsIncrement(@NonNull Endpoint endpoint, @NonNull Set bucketNumberSet) { bucketNumberSet.forEach(bucketNumber -> { @@ -331,10 +365,11 @@ public class IncrementDataCheckThread implements Runnable { } /** - * 拉取指定端点{@code endpoint}的表{@code tableName}的 kafka分区{@code partitions}数据 + * Pull the Kafka partition {@code partitions} data + * of the table {@code tableName} of the specified endpoint {@code endpoint} * - * @param endpoint 端点类型 - * @return 指定表 kafka分区数据 + * @param endpoint endpoint + * @return Specify table Kafka partition data */ private List getTopicPartitionsData(Endpoint endpoint) { return queryRowDataWapper.queryIncrementRowData(endpoint, tableName); @@ -345,24 +380,19 @@ public class IncrementDataCheckThread implements Runnable { } /** - * 比较两个桶内部记录的差异数据 - *

- 差异类型 {@linkplain org.opengauss.datachecker.common.entry.enums.DiffCategory} + * Compare the difference data recorded inside the two buckets * - * @param sourceBucket 源端桶 - * @param sinkBucket 宿端桶 - * @return 差异记录 + * @param sourceBucket Source end bucket + * @param sinkBucket Sink end bucket + * @return Difference Pair record */ private DifferencePair compareBucket(Bucket sourceBucket, Bucket sinkBucket) { - Map sourceMap = sourceBucket.getBucket(); Map sinkMap = sinkBucket.getBucket(); - - MapDifference difference = Maps.difference(sourceMap, sinkMap); - - Map entriesOnlyOnLeft = difference.entriesOnlyOnLeft(); - Map entriesOnlyOnRight = difference.entriesOnlyOnRight(); - Map> entriesDiffering = difference.entriesDiffering(); + MapDifference bucketDifference = Maps.difference(sourceMap, sinkMap); + Map entriesOnlyOnLeft = bucketDifference.entriesOnlyOnLeft(); + Map entriesOnlyOnRight = bucketDifference.entriesOnlyOnRight(); + Map> entriesDiffering = bucketDifference.entriesDiffering(); Map> differing = new HashMap<>(); entriesDiffering.forEach((key, diff) -> { differing.put(key, Pair.of(diff.leftValue(), diff.rightValue())); @@ -371,47 +401,44 @@ }

- * 采用递归-前序遍历方式,遍历比较默克尔树,从而查找差异节点。 - *

- * 若当前遍历的节点{@link Node}签名相同则终止当前遍历分支。 + *

+     * Recursively compare two Merkle tree nodes and record the difference nodes.
+     * A recursive preorder traversal is used to compare the Merkle trees
+     * and locate the difference nodes.
+     * If the currently traversed nodes {@link Node} have the same signature,
+     * traversal of the current branch is terminated.
+     * 
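Reviewer aid: the traversal described in this Javadoc can be sketched with a simplified node type. The class below is hypothetical (it is not the project's `MerkleTree.Node`); it only illustrates the preorder rule: equal signatures prune a branch, and a differing leaf pair is recorded.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical simplified sketch of the preorder Merkle comparison; not project code.
final class MerkleCompareSketch {
    static final class Node {
        final byte[] signature;
        final Node left;
        final Node right;

        Node(byte[] signature, Node left, Node right) {
            this.signature = signature;
            this.left = left;
            this.right = right;
        }

        boolean isLeaf() {
            return left == null && right == null;
        }
    }

    static void compare(Node source, Node sink, List<Node[]> diffs) {
        if (Arrays.equals(source.signature, sink.signature)) {
            return; // identical subtrees: prune this branch
        }
        if (source.isLeaf()) {
            diffs.add(new Node[] {source, sink}); // record the differing leaf pair
            return;
        }
        compare(source.left, sink.left, diffs);
        compare(source.right, sink.right, diffs);
    }

    public static void main(String[] args) {
        Node same = new Node(new byte[] {1}, null, null);
        Node changed = new Node(new byte[] {2}, null, null);
        Node sourceRoot = new Node(new byte[] {9}, same, same);
        Node sinkRoot = new Node(new byte[] {8}, same, changed);
        List<Node[]> diffs = new ArrayList<>();
        compare(sourceRoot, sinkRoot, diffs);
        System.out.println(diffs.size()); // prints: 1
    }
}
```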
* - * @param source 源端默克尔树节点 - * @param sink 宿端默克尔树节点 - * @param diffNodeList 差异节点记录 + * @param source Source Merkel tree node + * @param sink Sink Merkel tree node + * @param diffNodeList Difference node record */ private void compareMerkleTree(@NonNull Node source, @NonNull Node sink, List> diffNodeList) { - // 如果节点相同,则退出 + // If the nodes are the same, exit if (Arrays.equals(source.getSignature(), sink.getSignature())) { return; } - // 如果节点不相同,则继续比较下层节点,若当前差异节点为叶子节点,则记录该差异节点,并退出 + // If the nodes are different, continue to compare the lower level nodes. + // If the current difference node is a leaf node, record the difference node and exit if (source.getType() == MerkleTree.LEAF_SIG_TYPE) { diffNodeList.add(Pair.of(source, sink)); return; } compareMerkleTree(source.getLeft(), sink.getLeft(), diffNodeList); - compareMerkleTree(source.getRight(), sink.getRight(), diffNodeList); } private void checkResult() { - CheckDiffResult result = AbstractCheckDiffResultBuilder.builder(feignClient) - .table(tableName) - .topic(topic.getTopicName()) - .partitions(partitions) - .keyUpdateSet(difference.getDiffering().keySet()) - .keyInsertSet(difference.getOnlyOnRight().keySet()) - .keyDeleteSet(difference.getOnlyOnLeft().keySet()) - .build(); + CheckDiffResult result = + AbstractCheckDiffResultBuilder.builder(feignClient).table(tableName).topic(topic.getTopicName()) + .schema(sinkSchema).partitions(partitions) + .keyUpdateSet(difference.getDiffering().keySet()) + .keyInsertSet(difference.getOnlyOnRight().keySet()) + .keyDeleteSet(difference.getOnlyOnLeft().keySet()).build(); ExportCheckResult.export(path, result); } - /** - * 重置当前线程 线程名称 - */ - private void resetThreadName() { - Thread.currentThread().setName(THREAD_NAME_PRIFEX + topic.getTopicName()); + private String buildThreadName() { + return THREAD_NAME_PRIFEX + topic.getTopicName(); } } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/QueryRowDataWapper.java 
b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/QueryRowDataWapper.java index e34fca287c4b8bca1fd1737e5909f78eb5d3bf65..008438762b4c2647cc98732f8f41911c397e6c18 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/QueryRowDataWapper.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/check/QueryRowDataWapper.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.check; import lombok.extern.slf4j.Slf4j; @@ -26,20 +41,21 @@ public class QueryRowDataWapper { this.feignClient = feignClient; } - /** - * 拉取指定端点{@code endpoint}的表{@code tableName}的 kafka分区{@code partitions}数据 + * Pull the Kafka partition {@code partitions} data + * of the table {@code tableName} of the specified endpoint {@code endpoint} * - * @param endpoint 端点类型 - * @param partitions kafka分区号 - * @return 指定表 kafka分区数据 + * @param endpoint endpoint + * @param partitions kafka partitions + * @return Specify table Kafka partition data */ public List queryRowData(Endpoint endpoint, String tableName, int partitions) { List data = new ArrayList<>(); Result> result = feignClient.getClient(endpoint).queryTopicData(tableName, partitions); if (!result.isSuccess()) { - throw new DispatchClientException(endpoint, "query topic data of tableName " + tableName + - " partitions=" + partitions + " error, " + result.getMessage()); + throw new 
DispatchClientException(endpoint, + "query topic data of tableName " + tableName + " partitions=" + partitions + " error, " + result + .getMessage()); } while (result.isSuccess() && !CollectionUtils.isEmpty(result.getData())) { data.addAll(result.getData()); @@ -52,8 +68,8 @@ public class QueryRowDataWapper { List data = new ArrayList<>(); Result> result = feignClient.getClient(endpoint).queryIncrementTopicData(tableName); if (!result.isSuccess()) { - throw new DispatchClientException(endpoint, "query topic data of tableName " + tableName + - " error, " + result.getMessage()); + throw new DispatchClientException(endpoint, + "query topic data of tableName " + tableName + " error, " + result.getMessage()); } while (result.isSuccess() && !CollectionUtils.isEmpty(result.getData())) { data.addAll(result.getData()); @@ -65,8 +81,8 @@ public class QueryRowDataWapper { public List queryRowData(Endpoint endpoint, SourceDataLog dataLog) { Result> result = feignClient.getClient(endpoint).querySecondaryCheckRowData(dataLog); if (!result.isSuccess()) { - throw new DispatchClientException(endpoint, "query topic data of tableName " + dataLog.getTableName() + - " error, " + result.getMessage()); + throw new DispatchClientException(endpoint, + "query topic data of tableName " + dataLog.getTableName() + " error, " + result.getMessage()); } return result.getData(); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTree.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTree.java index 21d62c69f3f7e01598c4df779acd49ca3b93ece0..77feba11e64ccf4a77ffe17539d79361b029e73c 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTree.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTree.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. 
+ * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.merkle; import lombok.Data; @@ -21,11 +36,11 @@ public class MerkleTree { private static final int INT_BYTE = 4; public static final int LONG_BYTE = 8; /** - * 默克尔树 节点类型 叶子节点 + * Merkle tree node type: leaf node */ public static final byte LEAF_SIG_TYPE = 0x0; /** - * 默克尔树 节点类型 内部节点 + * Merkle tree node type: internal node */ public static final byte INTERNAL_SIG_TYPE = 0x01; /** @@ -39,7 +54,7 @@ public class MerkleTree { private static final int NODE_TYPE_BYTE_LENGTH = 1; /** - * Adler32 进行校验 + * Adler32 for verification */ private volatile static Adler32 crc; @@ -48,69 +63,66 @@ } /** - * 叶子节点字节长度 默克尔树在序列化及反序列化时使用。 + * The leaf node signature byte length, used by the Merkle tree in serialization and deserialization. 
*/ private int leafSignatureByteLength; /** - * 根节点 + * Root node */ private Node root; private int depth; private int nnodes; /** - * 默克尔树构造函数 + * Merkle tree constructor * - * @param bucketList 桶列表 + * @param bucketList bucketList */ public MerkleTree(List bucketList) { constructTree(bucketList); } - /** - * 根据反序列化结果构造默克尔树 + * Construct the Merkle tree from the deserialization result * - * @param treeRoot 树根节点 - * @param totalNodes 总节点数 - * @param depth 树深度 - * @param leafLength 叶子节点长度 + * @param treeRoot treeRoot + * @param totalNodes totalNodes + * @param depth depth + * @param leafLength leaf length */ public MerkleTree(Node treeRoot, int totalNodes, int depth, int leafLength) { - this.root = treeRoot; - this.nnodes = totalNodes; + root = treeRoot; + nnodes = totalNodes; this.depth = depth; - this.leafSignatureByteLength = leafLength; + leafSignatureByteLength = leafLength; } - /** - * 构造默克尔树 + * Construct the Merkle tree * - * @param bucketList 桶列表 + * @param bucketList bucketList */ private void constructTree(List bucketList) { if (bucketList == null || bucketList.size() < MerkleConstant.CONSTRUCT_TREE_MIN_SIZE) { throw new IllegalArgumentException("ERROR:Fail to construct merkle tree ! 
leafHashes data invalid !"); } - this.nnodes = bucketList.size(); + nnodes = bucketList.size(); List parents = buttomLevel(bucketList); - this.nnodes += parents.size(); - this.depth = 1; + nnodes += parents.size(); + depth = 1; while (parents.size() > 1) { parents = constructInternalLevel(parents); - this.depth++; - this.nnodes += parents.size(); + depth++; + nnodes += parents.size(); } - this.root = parents.get(0); + root = parents.get(0); } - /** - * 底部层级叶子节点构建 + * Bottom level leaf node construction * - * @param bucketList 桶列表 - * @return 节点列表 + * @param bucketList bucketList + * @return node list */ private List buttomLevel(List bucketList) { List parents = new ArrayList<>(bucketList.size() / MerkleConstant.EVEN_NUMBER); @@ -123,20 +135,20 @@ public class MerkleTree { } if (bucketList.size() % MerkleConstant.EVEN_NUMBER == 1) { Node leaf1 = constructLeafNode(bucketList.get(bucketList.size() - 1)); - // 奇数个节点的情况,复制最后一个节点 + // In the case of an odd number of nodes, copy the last node Node parent = constructInternalNode(leaf1, null); parents.add(parent); } - // 设置叶子节点签名字节长度 - this.leafSignatureByteLength = parents.get(0).getLeft().getSignature().length; + // Set leaf node signature byte length + leafSignatureByteLength = parents.get(0).getLeft().getSignature().length; return parents; } /** - * 内部节点构建 + * Internal node construction * - * @param children 子节点 - * @return 内部节点集合 + * @param children child node + * @return Internal node set */ private List constructInternalLevel(List children) { List parents = new ArrayList<>(children.size() / MerkleConstant.EVEN_NUMBER); @@ -146,7 +158,7 @@ public class MerkleTree { } if (children.size() % MerkleConstant.EVEN_NUMBER == 1) { - // 奇数个节点的情况,只对left节点进行计算 + // In the case of an odd number of nodes, only the left node is calculated Node parent = constructInternalNode(children.get(children.size() - 1), null); parents.add(parent); } @@ -154,43 +166,39 @@ public class MerkleTree { } /** - * 构建叶子节点 + * Building leaf nodes * - * 
@param bucket 桶节点 - * @return 默克尔节点 + * @param bucket Bucket node + * @return Merkle node */ private Node constructLeafNode(Bucket bucket) { - return new Node().setType(LEAF_SIG_TYPE) - .setBucket(bucket) - .setSignature(bucket.getSignature()); + return new Node().setType(LEAF_SIG_TYPE).setBucket(bucket).setSignature(bucket.getSignature()); } /** - * 构建内部节点 + * Build internal nodes * - * @param left 左侧节点 - * @param right 右侧节点 - * @return 默克尔节点 + * @param left Left node + * @param right Right node + * @return Merkle node */ private Node constructInternalNode(Node left, Node right) { - return new Node().setType(INTERNAL_SIG_TYPE) - .setLeft(left) - .setRight(right) - .setSignature(internalSignature(left, right)); + return new Node().setType(INTERNAL_SIG_TYPE).setLeft(left).setRight(right) + .setSignature(internalSignature(left, right)); } /** - * 计算内部节点签名 + * Calculate internal node signature * - * @param left 左侧节点 - * @param right 右侧节点 - * @return 内部节点签名 + * @param left Left node + * @param right Right node + * @return Internal node signature */ private byte[] internalSignature(Node left, Node right) { if (right == null) { return left.getSignature(); } - // 这里采用Deler32进行签名 + // Adler32 is used for the signature here crc.reset(); crc.update(left.signature); crc.update(right.signature); @@ -202,12 +210,13 @@ * header (magic header:int)(num nodes:int)(tree depth:int)(leaf length:int) * [(node type:byte)(signature length:int)(signature:byte)(bucket length:int)(bucket:byte)] *

- * bucket 桶序列化实现 + * bucket Bucket serialization implementation * - * @return 返回序列化字节流 + * @return Return serialized byte stream */ public byte[] serialize() { - int header = MAGIC_HEADER_BYTE_LENGTH + NUM_NODES_BYTE_LENGTH + TREE_DEPTH_BYTE_LENGTH + LEAF_SIGNATURE_BYTE_LENGTH; + int header = + MAGIC_HEADER_BYTE_LENGTH + NUM_NODES_BYTE_LENGTH + TREE_DEPTH_BYTE_LENGTH + LEAF_SIGNATURE_BYTE_LENGTH; int maxSignatureByteLength = Math.max(leafSignatureByteLength, LONG_BYTE); int spaceOfNodes = (NODE_TYPE_BYTE_LENGTH + NUM_NODES_BYTE_LENGTH + maxSignatureByteLength) * nnodes; @@ -240,7 +249,7 @@ public class MerkleTree { } /** - * Merkle Tree 节点 + * Merkle Tree */ @Data @Accessors(chain = true) @@ -253,29 +262,25 @@ public class MerkleTree { private Node left; private Node right; /** - * 当前节点签名 signature + * Current node signature */ private byte[] signature; private Bucket bucket; @Override public String toString() { - return " Node{" + - "type=" + type + - ",signature=" + Arrays.toString(signature).replace(",", "") + - ",left=" + left + - ",right=" + right + - '}'; + return " Node{" + "type=" + type + ",signature=" + Arrays.toString(signature).replace(",", "") + ",left=" + + left + ",right=" + right + '}'; } public boolean signatureEqual(Node other) { - int length = this.getSignature().length; + int length = getSignature().length; int length1 = other.getSignature().length; if (length != length1) { return false; } for (int i = 0; i < length; i++) { - if (this.getSignature()[i] != other.getSignature()[i]) { + if (getSignature()[i] != other.getSignature()[i]) { return false; } } @@ -285,20 +290,13 @@ public class MerkleTree { @Override public String toString() { - return "MerkleTree{" + - "nnodes=" + nnodes + - ",depth=" + depth + - ",leafSignatureByteLength=" + leafSignatureByteLength + - ",root=" + root + - '}'; + return "MerkleTree{" + "nnodes=" + nnodes + ",depth=" + depth + ",leafSignatureByteLength=" + + leafSignatureByteLength + ",root=" + root + '}'; } public 
String toSimpleString() { - return "MerkleTree{" + - "nnodes=" + nnodes + - ",depth=" + depth + - ",leafSignatureByteLength=" + leafSignatureByteLength + - '}'; + return "MerkleTree{" + "nnodes=" + nnodes + ",depth=" + depth + ",leafSignatureByteLength=" + + leafSignatureByteLength + '}'; } interface MerkleConstant { diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTreeDeserializer.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTreeDeserializer.java index 9af70ad4384597487b23252cead8849c53329c0d..e7475c5abacad5761599a4bc9138331ac7664110 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTreeDeserializer.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/merkle/MerkleTreeDeserializer.java @@ -1,25 +1,39 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.merkle; +import org.opengauss.datachecker.check.modules.merkle.MerkleTree.Node; +import org.opengauss.datachecker.common.util.ByteUtil; + import java.nio.ByteBuffer; import java.util.ArrayDeque; import java.util.Queue; -import net.openhft.hashing.LongHashFunction; -import org.opengauss.datachecker.check.modules.merkle.MerkleTree.Node; -import org.opengauss.datachecker.common.util.ByteUtil; - /** - * 默克尔树反序列化 - * bucket 桶反序列化实现 + * Merkel tree deserialization + * bucket Bucket deserialization implementation * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class MerkleTreeDeserializer { - + /** - * 反序列化根据{@link MerkleTree#serialize()}返回的字节数组实现 + * Deserialization is implemented according to the byte array returned by {@link MerkleTree#serialize()} * Serialization format : * header (magic header:int)(num nodes:int)(tree depth:int)(leaf length:int) * [(node type:byte)(signature length:int)(signature:byte)] @@ -30,32 +44,29 @@ public class MerkleTreeDeserializer { public static MerkleTree deserialize(byte[] serializerTree) { ByteBuffer buffer = ByteBuffer.wrap(serializerTree); - // 字节数组头校验 + // Byte array header verification if (buffer.getInt() != MerkleTree.MAGIC_HDR) { - throw new IllegalArgumentException("序列化字节数组没有已合法的Magic Header开头"); + throw new IllegalArgumentException("Serialized byte array does not start with a legal magic header"); } - // 读取头信息 + // Read header information int totalNodes = buffer.getInt(); int depth = buffer.getInt(); int leafLength = buffer.getInt(); - // 读取 root 节点 - Node root = new Node() - .setType(buffer.get()) - .setSignature(readNextSingature(buffer)); + // Read the root node + Node root = new Node().setType(buffer.get()).setSignature(readNextSingature(buffer)); if (root.getType() == MerkleTree.LEAF_SIG_TYPE) { - throw new IllegalArgumentException("首个序列化节点为叶子节点"); + throw new IllegalArgumentException("The first serialized node is a leaf node"); } Queue queue 
= new ArrayDeque<>(totalNodes / 2 + 1); Node currentNode = root; for (int i = 1; i < totalNodes; i++) { - Node child = new Node() - .setType(buffer.get()) - .setSignature(readNextSingature(buffer)); + Node child = new Node().setType(buffer.get()).setSignature(readNextSingature(buffer)); queue.add(child); - // 处理节点已提升的不完整树 : (如果currentNode 和child节点的签名一致) + // Handle the incomplete tree that the node has been promoted: + // (if the signatures of currentnode and child node are consistent) if (ByteUtil.isEqual(currentNode.getSignature(), child.getSignature())) { currentNode.setLeft(child); currentNode = queue.remove(); @@ -72,7 +83,6 @@ public class MerkleTreeDeserializer { return new MerkleTree(root, totalNodes, depth, leafLength); } - private static byte[] readNextSingature(ByteBuffer buffer) { byte[] singatureBytes = new byte[buffer.getInt()]; buffer.get(singatureBytes); diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerService.java index 4eadc9651b973001906328152bfc1ec350f1510b..cfbb8eaa1bcc2536b7594f3fb3d15c7182e01a88 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerService.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.modules.task; import org.opengauss.datachecker.common.entry.enums.Endpoint; @@ -11,23 +26,22 @@ import java.util.List; */ public interface TaskManagerService { /** - * 刷新指定任务的数据抽取表执行状态 + * Refresh the execution status of the data extraction table of the specified task * - * @param tableName 表名称 - * @param endpoint 端点类型 {@link org.opengauss.datachecker.common.entry.enums.Endpoint} + * @param tableName tableName + * @param endpoint endpoint {@link org.opengauss.datachecker.common.entry.enums.Endpoint} */ - void refushTableExtractStatus(String tableName, Endpoint endpoint); - + void refreshTableExtractStatus(String tableName, Endpoint endpoint); /** - * 初始化任务状态 + * Initialize task status * - * @param tableNameList 表名称列表 + * @param tableNameList table name list */ void initTableExtractStatus(List tableNameList); /** - * 清理任务状态信息 + * Clean up task status information */ void cleanTaskStatus(); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerServiceImpl.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerServiceImpl.java index 2a03eff6ede2ef6fc6158af701083ac7ef7f59de..47a5cad1f3489e94dc08c78ac4448c87e469ac3d 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerServiceImpl.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/modules/task/TaskManagerServiceImpl.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.task; import com.alibaba.fastjson.JSON; @@ -24,37 +39,40 @@ public class TaskManagerServiceImpl implements TaskManagerService { private TableStatusRegister tableStatusRegister; /** - * 刷新指定任务的数据抽取表执行状态 + * Refresh the execution status of the data extraction table of the specified task * - * @param tableName 表名称 - * @param endpoint 端点类型 {@link org.opengauss.datachecker.common.entry.enums.Endpoint} + * @param tableName tableName + * @param endpoint endpoint {@link org.opengauss.datachecker.common.entry.enums.Endpoint} */ @Override - public void refushTableExtractStatus(String tableName, Endpoint endpoint) { - log.info("check server refush endpoint=[{}] extract tableName=[{}] status=[{}] ", endpoint.getDescription(), tableName, endpoint.getCode()); + public void refreshTableExtractStatus(String tableName, Endpoint endpoint) { + log.info("check server refresh endpoint=[{}] extract tableName=[{}] status=[{}] ", endpoint.getDescription(), + tableName, endpoint.getCode()); tableStatusRegister.update(tableName, endpoint.getCode()); } /** - * 初始化任务状态 + * Initialize task status * - * @param tableNameList 表名称列表 + * @param tableNameList table name list */ @Override public void initTableExtractStatus(List tableNameList) { - if (tableStatusRegister.isEmpty() || tableStatusRegister.isCheckComplated()) { + if (tableStatusRegister.isEmpty() || tableStatusRegister.isCheckCompleted()) { cleanTaskStatus(); tableStatusRegister.init(new HashSet<>(tableNameList)); - log.info("check server init extract tableNameList=[{}] status= ", JSON.toJSONString(tableNameList)); + log.info("check 
server init extract tableNameList=[{}] ", JSON.toJSONString(tableNameList)); } else { - //上次校验流程正在执行,不能重新初始化表校验状态数据! - throw new CheckingException("The last verification process is being executed, and the table verification status data cannot be reinitialized!"); + // The last verification process is being executed, + // and the table verification status data cannot be reinitialized! + throw new CheckingException("The last verification process is being executed," + + " and the table verification status data cannot be reinitialized!"); } } /** - * 清理任务状态信息 + * Clean up task status information */ @Override public void cleanTaskStatus() { diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckBlackWhiteService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckBlackWhiteService.java index 5f94250806f8fad13631bc6f86d9fd8ae831804f..129a9cc49ccef5ed5422ed38b60e9590bbd4244b 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckBlackWhiteService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckBlackWhiteService.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.service; +import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.check.client.FeignClientService; import org.opengauss.datachecker.check.config.DataCheckProperties; import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; @@ -17,6 +33,7 @@ import java.util.concurrent.ConcurrentSkipListSet; * @date :Created in 2022/6/22 * @since :11 */ +@Slf4j @Service public class CheckBlackWhiteService { private static final Set WHITE = new ConcurrentSkipListSet<>(); @@ -28,41 +45,47 @@ public class CheckBlackWhiteService { @Autowired private DataCheckProperties dataCheckProperties; + @Autowired + private EndpointMetaDataManager endpointMetaDataManager; + /** - * 添加白名单列表 该功能清理历史白名单,重置白名单为当前列表 + * Add white list. This function clears the historical white list and resets the white list to the current list * - * @param whiteList 白名单列表 + * @param whiteList whiteList */ public void addWhiteList(List whiteList) { WHITE.clear(); WHITE.addAll(whiteList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("add whitelist list [{}]", whiteList); } /** - * 更新白名单列表 该功能在当前白名单基础上新增当前列表到白名单 + * Update white list. This function adds the current list to the existing white list * - * @param whiteList 白名单列表 + * @param whiteList whiteList */ public void updateWhiteList(List whiteList) { WHITE.addAll(whiteList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("update whitelist list [{}]", whiteList); } /** - * 移除白名单列表 该功能在当前白名单基础上移除当前列表到白名单 + * Remove white list. This function removes the current list from the existing white list * - * @param whiteList 白名单列表 + * @param whiteList whiteList */ public void deleteWhiteList(List whiteList) { WHITE.removeAll(whiteList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("delete whitelist list [{}]", whiteList); } /** - * 查询白名单列表 + * Query white list * - * @return 白名单列表 + * @return whiteList */ public List queryWhiteList() { return 
new ArrayList<>(WHITE); @@ -71,35 +94,38 @@ public class CheckBlackWhiteService { public void addBlackList(List blackList) { BLACK.clear(); BLACK.addAll(blackList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("add blackList list [{}]", blackList); } public void updateBlackList(List blackList) { BLACK.addAll(blackList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("update blackList list [{}]", blackList); } public void deleteBlackList(List blackList) { BLACK.removeAll(blackList); - refushWhiteList(); + refreshBlackWhiteList(); + log.info("delete blackList list [{}]", blackList); } public List queryBlackList() { return new ArrayList<>(BLACK); } - private void refushWhiteList() { + private void refreshBlackWhiteList() { final CheckBlackWhiteMode blackWhiteMode = dataCheckProperties.getBlackWhiteMode(); if (blackWhiteMode == CheckBlackWhiteMode.WHITE) { - // 白名单模式 - feignClientService.getClient(Endpoint.SOURCE).refushBlackWhiteList(blackWhiteMode, new ArrayList<>(WHITE)); - feignClientService.getClient(Endpoint.SINK).refushBlackWhiteList(blackWhiteMode, new ArrayList<>(WHITE)); + // White list mode + feignClientService.getClient(Endpoint.SOURCE).refreshBlackWhiteList(blackWhiteMode, new ArrayList<>(WHITE)); + feignClientService.getClient(Endpoint.SINK).refreshBlackWhiteList(blackWhiteMode, new ArrayList<>(WHITE)); } else if (blackWhiteMode == CheckBlackWhiteMode.BLACK) { - // 黑名单模式 - feignClientService.getClient(Endpoint.SOURCE).refushBlackWhiteList(blackWhiteMode, new ArrayList<>(BLACK)); - feignClientService.getClient(Endpoint.SINK).refushBlackWhiteList(blackWhiteMode, new ArrayList<>(BLACK)); + // Blacklist mode + feignClientService.getClient(Endpoint.SOURCE).refreshBlackWhiteList(blackWhiteMode, new ArrayList<>(BLACK)); + feignClientService.getClient(Endpoint.SINK).refreshBlackWhiteList(blackWhiteMode, new ArrayList<>(BLACK)); } + endpointMetaDataManager.load(); } - } diff --git 
a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckService.java index bfb47672750367770b2520d9cfdaa9a4da09543a..9c5b4cb8599ba5e5c37eb8f58f2b15ae21dba969 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/CheckService.java @@ -1,7 +1,21 @@ -package org.opengauss.datachecker.check.service; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ +package org.opengauss.datachecker.check.service; -import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg; +import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig; import org.opengauss.datachecker.common.entry.enums.CheckMode; /** @@ -12,29 +26,29 @@ import org.opengauss.datachecker.common.entry.enums.CheckMode; public interface CheckService { /** - * 开启校验服务 + * Enable verification service * - * @param checkMode 校验方式 - * @return 进程号 + * @param checkMode checkMode + * @return Process number */ String start(CheckMode checkMode); /** - * 查询当前执行的进程号 + * Query the currently executed process number * - * @return 进程号 + * @return Process number */ String getCurrentCheckProcess(); /** - * 清理校验环境 + * Clean up the verification environment */ void cleanCheck(); /** - * 增量校验配置初始化 + * Incremental verification configuration initialization * - * @param incrementCheckConifg 初始化配置 + * @param config Initialize configuration */ - void incrementCheckConifg(IncrementCheckConifg incrementCheckConifg); + void incrementCheckConfig(IncrementCheckConfig config); } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointManagerService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointManagerService.java index 638ee820141faf3638f47f92c0fa8863f9b03ccc..008f2b80581dc1cc1e117319a051e3e289a6ec42 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointManagerService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointManagerService.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.service; import lombok.extern.slf4j.Slf4j; @@ -9,12 +24,13 @@ import org.opengauss.datachecker.common.web.Result; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; -import javax.annotation.PostConstruct; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.nio.charset.Charset; -import java.util.concurrent.*; +import java.util.concurrent.Executors; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; /** * Data extraction service endpoint management @@ -26,82 +42,137 @@ import java.util.concurrent.*; @Slf4j @Service public class EndpointManagerService { - private static final String ENDPOINT_HEALTH_CHECK_THREAD_NAME = "endpoint-health-check-thread"; - private final ScheduledExecutorService scheduledExecutor = Executors.newSingleThreadScheduledExecutor(); + private static final String ENDPOINT_HEALTH_CHECK_THREAD_NAME = "endpoint-health-check-thread"; + private static final ScheduledExecutorService SCHEDULED_EXECUTOR = Executors.newSingleThreadScheduledExecutor(); @Autowired private FeignClientService feignClientService; - @Autowired private DataCheckProperties dataCheckProperties; + @Autowired + private EndpointStatusManager endpointStatusManager; - @PostConstruct + /** + * Start the health check self-check thread + */ public void start() { - scheduledExecutor.scheduleWithFixedDelay(() -> { + endpointHealthCheck(); + SCHEDULED_EXECUTOR.scheduleWithFixedDelay(() -> { Thread.currentThread().setName(ENDPOINT_HEALTH_CHECK_THREAD_NAME); endpointHealthCheck(); - }, 0, 5, TimeUnit.SECONDS); + }, 0, 2, TimeUnit.SECONDS); } + /** + * View the health status of 
all endpoints + * + * @return health status + */ + public boolean isEndpointHealth() { + return endpointStatusManager.isEndpointHealth(); + } + + /** + * Endpoint health check + */ public void endpointHealthCheck() { - checkEndpoint(dataCheckProperties.getSourceUri(), Endpoint.SOURCE, "源端服务检查"); - checkEndpoint(dataCheckProperties.getSinkUri(), Endpoint.SINK, "目标端服务检查"); + checkEndpoint(dataCheckProperties.getSourceUri(), Endpoint.SOURCE, "source endpoint service check"); + checkEndpoint(dataCheckProperties.getSinkUri(), Endpoint.SINK, "sink endpoint service check"); } private void checkEndpoint(String requestUri, Endpoint endpoint, String message) { - // 服务网络检查ping + // service network check ping try { if (NetworkCheck.networkCheck(getEndpointIp(requestUri))) { - // 服务检查 服务数据库检查 + // service check: service database check Result healthStatus = feignClientService.getClient(endpoint).health(); if (healthStatus.isSuccess()) { + endpointStatusManager.resetStatus(endpoint, Boolean.TRUE); log.debug("{}:{} current state health", message, requestUri); } else { + endpointStatusManager.resetStatus(endpoint, Boolean.FALSE); log.error("{}:{} current service status is abnormal", message, requestUri); } - } } catch (Exception ce) { log.error("{}:{} service unreachable", message, ce.getMessage()); + endpointStatusManager.resetStatus(endpoint, Boolean.FALSE); } } /** - * 根据配置属性中的端点URI地址,解析对应的IP地址 - * URI地址: http://127.0.0.1:8080 https://127.0.0.1:8080 + * Resolve the corresponding IP address according to the endpoint URI address in the configuration attribute + * URI address: http://127.0.0.1:8080 https://127.0.0.1:8080 * - * @param endpointUri 配置属性中的端点URI + * @param endpointUri The endpoint URI from the configuration properties - * @return 若解析成功,则返回对应IP地址,否则返回null + * @return If the resolution is successful, the corresponding IP address is returned; otherwise, null is returned */ private String getEndpointIp(String endpointUri) { - if ((endpointUri.contains(NetAddress.HTTP) || 
endpointUri.contains(NetAddress.HTTPS)) - && endpointUri.contains(NetAddress.IP_DELEMTER) && endpointUri.contains(NetAddress.PORT_DELEMTER)) { - return endpointUri.replace(NetAddress.IP_DELEMTER, NetAddress.PORT_DELEMTER).split(NetAddress.PORT_DELEMTER)[1]; + if (checkLegalOfHttpProtocol(endpointUri) && checkLegalOfIp(endpointUri) && checkLegalOfPort(endpointUri)) { + return endpointUri.replace(NetAddress.IP_DELIMITER, NetAddress.PORT_DELIMITER) + .split(NetAddress.PORT_DELIMITER)[1]; } return null; } + private boolean checkLegalOfPort(String endpointUri) { + return checkLegalOfUri(endpointUri, NetAddress.PORT_DELIMITER); + } + + private boolean checkLegalOfIp(String endpointUri) { + return checkLegalOfUri(endpointUri, NetAddress.IP_DELIMITER); + } + + private boolean checkLegalOfUri(String endpointUri, String delimiter) { + return endpointUri.contains(delimiter); + } + + private boolean checkLegalOfHttpProtocol(String endpointUri) { + return checkLegalOfUri(endpointUri, NetAddress.HTTP) || checkLegalOfUri(endpointUri, NetAddress.HTTPS); + } + + /** + * Close the health check self-check thread + */ + public void shutdown() { + SCHEDULED_EXECUTOR.shutdownNow(); + } + interface NetAddress { + /** + * http + */ String HTTP = "http"; + + /** + * https + */ String HTTPS = "https"; - String IP_DELEMTER = "://"; - String PORT_DELEMTER = ":"; + + /** + * ip delimiter + */ + String IP_DELIMITER = "://"; + + /** + * port delimiter + */ + String PORT_DELIMITER = ":"; } /** - * 网络状态检查 + * Network status check */ static class NetworkCheck { private static final String PING = "ping "; private static final String TTL = "TTL"; /** - * 根据系统命令 ping {@code ip} 检查网络状态 + * Check the network status according to the system command ping {@code ip} * - * @param ip ip 地址 - * @return 网络检查结果 + * @param ip ip address + * @return Network check results */ public static boolean networkCheck(String ip) { boolean result = false; @@ -116,7 +187,8 @@ public class EndpointManagerService { String cmd 
= PING + ip; try { Process process = Runtime.getRuntime().exec(cmd); - try (BufferedReader buffer = new BufferedReader(new InputStreamReader(process.getInputStream(), Charset.forName("GBK")))) { + try (BufferedReader buffer = new BufferedReader( + new InputStreamReader(process.getInputStream(), Charset.forName("GBK")))) { while ((line = buffer.readLine()) != null) { sb.append(line); endMsg = line; @@ -136,5 +208,4 @@ public class EndpointManagerService { return result; } } - } diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointMetaDataManager.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointMetaDataManager.java new file mode 100644 index 0000000000000000000000000000000000000000..a68e2bd787ca460552ca06dd30f149028d763f82 --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointMetaDataManager.java @@ -0,0 +1,81 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.check.service; + +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.check.client.FeignClientService; +import org.opengauss.datachecker.common.entry.enums.Endpoint; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.List; +import java.util.Map; +import java.util.stream.Collectors; + +/** + * EndpointMetaDataManager + * + * @author :wangchao + * @date :Created in 2022/7/24 + * @since :11 + */ +@Slf4j +@Service +public class EndpointMetaDataManager { + private static final List CHECK_TABLE_LIST = new ArrayList<>(); + + @Autowired + private EndpointStatusManager endpointStatusManager; + + @Autowired + private FeignClientService feignClientService; + + /** + * Reload metadata information + */ + public void load() { + CHECK_TABLE_LIST.clear(); + final Map metadataMap = feignClientService.queryMetaDataOfSchema(Endpoint.SOURCE); + feignClientService.queryMetaDataOfSchema(Endpoint.SINK); + if (!metadataMap.isEmpty()) { + CHECK_TABLE_LIST.addAll( + metadataMap.values().stream().sorted(Comparator.comparing(TableMetadata::getTableRows)) + .map(TableMetadata::getTableName).collect(Collectors.toUnmodifiableList())); + } + log.info("Load endpoint metadata information"); + } + + /** + * View the health status of all endpoints + * + * @return health status + */ + public boolean isEndpointHealth() { + return endpointStatusManager.isEndpointHealth(); + } + + /** + * Return the list of table names to be checked + * + * @return check table list + */ + public List getCheckTableList() { + return CHECK_TABLE_LIST; + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointStatusManager.java 
b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointStatusManager.java new file mode 100644 index 0000000000000000000000000000000000000000..d0a99d9a6279dfe4eff96a81b4e4f6afffc9082b --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/EndpointStatusManager.java @@ -0,0 +1,59 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.check.service; + +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.check.Pair; +import org.opengauss.datachecker.common.entry.enums.Endpoint; +import org.springframework.stereotype.Service; + +import java.util.Objects; + +/** + * Data extraction endpoint state management + * + * @author :wangchao + * @date :Created in 2022/5/26 + * @since :11 + */ +@Slf4j +@Service +public class EndpointStatusManager { + private static final Pair STATUS = Pair.of(false, false); + + /** + * Reset the health state of the endpoint + * + * @param endpoint endpoint {@value Endpoint#API_DESCRIPTION} + * @param isHealth endpoint health status + */ + public void resetStatus(Endpoint endpoint, boolean isHealth) { + if (Objects.equals(endpoint, Endpoint.SOURCE)) { + Pair.of(isHealth, STATUS); + } else { + Pair.of(STATUS, isHealth); + } + } + + /** + * View the health status of all endpoints + * + * @return health status + */ + public boolean isEndpointHealth() { + return Objects.equals(STATUS.getSink(), Boolean.TRUE) && 
Objects.equals(STATUS.getSource(), Boolean.TRUE); + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/IncrementManagerService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/IncrementManagerService.java index 30d62ba1cd13d34907ecdc6d47167385b90d8774..5dc56791e99e44167c22f530a08315b24a93bdc4 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/IncrementManagerService.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/IncrementManagerService.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.service; import org.opengauss.datachecker.check.client.FeignClientService; @@ -21,12 +36,12 @@ public class IncrementManagerService { private FeignClientService feignClientService; /** - * 增量校验日志通知 + * Incremental verification log notification * - * @param dataLogList 增量校验日志 + * @param dataLogList Incremental verification log */ public void notifySourceIncrementDataLogs(List dataLogList) { - // 收集上次校验结果,并构建增量校验日志 + // Collect the last verification results and build an incremental verification log dataLogList.addAll(collectLastResults()); feignClientService.notifyIncrementDataLogs(Endpoint.SOURCE, dataLogList); @@ -34,9 +49,9 @@ public class IncrementManagerService { } /** - * 收集上次校验结果,并构建增量校验日志 + * Collect the last verification results and build an incremental verification log * - * @return 上次校验结果解析 + * @return Analysis of last verification result */ private List collectLastResults() { List dataLogList = new ArrayList<>(); diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/StatisticalService.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/StatisticalService.java new file mode 100644 index 0000000000000000000000000000000000000000..2dea1c44d288d3a193010e3e1ecdfd42e378e5e7 --- /dev/null +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/StatisticalService.java @@ -0,0 +1,90 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.check.service; + +import org.opengauss.datachecker.check.annotation.aspect.StatisticalRecord; +import org.opengauss.datachecker.common.util.FileUtils; +import org.opengauss.datachecker.common.util.JsonObjectUtil; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.stereotype.Service; + +import javax.annotation.PostConstruct; +import javax.validation.constraints.NotNull; +import java.io.File; +import java.time.LocalDateTime; +import java.time.temporal.ChronoUnit; + +/** + * StatisticalService + * + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ +@Service +public class StatisticalService { + private static final String STATISTICS_RESULT_FILE = "statistics.txt"; + private static final String STATISTICS_RESULT_DIR = "statistics"; + + private String statisticalFileName; + + @Value("${data.check.statistical-enable}") + private boolean shouldEnableStatistical; + + @Value("${data.check.data-path}") + private String path; + + /** + * Start loading statistics save path + */ + @PostConstruct + public void loadFilePath() { + if (shouldEnableStatistical) { + FileUtils.createDirectories(getStatisticalDir()); + statisticalFileName = getStatisticalFileName(); + FileUtils.deleteFile(statisticalFileName); + statistics("check service start ......", LocalDateTime.now()); + } + } + + /** + * Manage statistical information + * + * @param name point information + * @param start start time + */ + public void statistics(String name, @NotNull LocalDateTime start) { + if (shouldEnableStatistical) { + StatisticalRecord record = buildStatistical(name, start); + FileUtils.writeAppendFile(statisticalFileName, JsonObjectUtil.format(record)); + } + } + + private String getStatisticalDir() { + return path.concat(File.separator).concat(STATISTICS_RESULT_DIR); + } + + private String getStatisticalFileName() { + return path.concat(File.separator).concat(STATISTICS_RESULT_DIR).concat(File.separator) + 
.concat(STATISTICS_RESULT_FILE); + } + + private StatisticalRecord buildStatistical(String name, LocalDateTime start) { + LocalDateTime end = LocalDateTime.now(); + return new StatisticalRecord().setStart(JsonObjectUtil.formatTime(start)).setEnd(JsonObjectUtil.formatTime(end)) + .setCost(start.until(end, ChronoUnit.SECONDS)).setName(name); + } +} diff --git a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/impl/CheckServiceImpl.java b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/impl/CheckServiceImpl.java index c33731aa8c03fa27da0e9589e0512cf7f4c0fc12..ffad95a00e106303fd1af6f353bd8a0528619937 100644 --- a/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/impl/CheckServiceImpl.java +++ b/datachecker-check/src/main/java/org/opengauss/datachecker/check/service/impl/CheckServiceImpl.java @@ -1,27 +1,58 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.service.impl; import com.alibaba.fastjson.JSON; import lombok.extern.slf4j.Slf4j; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang3.StringUtils; +import org.opengauss.datachecker.check.annotation.Statistical; import org.opengauss.datachecker.check.cache.TableStatusRegister; import org.opengauss.datachecker.check.client.FeignClientService; +import org.opengauss.datachecker.check.config.DataCheckProperties; import org.opengauss.datachecker.check.modules.check.DataCheckService; +import org.opengauss.datachecker.check.modules.check.ExportCheckResult; import org.opengauss.datachecker.check.service.CheckService; -import org.opengauss.datachecker.common.entry.check.IncrementCheckConifg; +import org.opengauss.datachecker.check.service.EndpointMetaDataManager; +import org.opengauss.datachecker.common.entry.check.IncrementCheckConfig; +import org.opengauss.datachecker.common.entry.check.Pair; import org.opengauss.datachecker.common.entry.enums.CheckMode; import org.opengauss.datachecker.common.entry.enums.Endpoint; import org.opengauss.datachecker.common.entry.extract.ExtractTask; import org.opengauss.datachecker.common.entry.extract.Topic; import org.opengauss.datachecker.common.exception.CheckingException; import org.opengauss.datachecker.common.exception.CheckingPollingException; -import org.opengauss.datachecker.common.util.IdWorker; +import org.opengauss.datachecker.common.exception.CommonException; +import org.opengauss.datachecker.common.util.IdGenerator; +import org.opengauss.datachecker.common.util.JsonObjectUtil; import org.opengauss.datachecker.common.util.ThreadUtil; import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.beans.factory.annotation.Value; import org.springframework.stereotype.Service; +import javax.annotation.PostConstruct; import javax.annotation.Resource; +import java.time.LocalDateTime; import java.util.List; import 
java.util.Objects; -import java.util.concurrent.*; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Executors; +import java.util.concurrent.Future; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import java.util.stream.IntStream; @@ -35,38 +66,35 @@ import java.util.stream.IntStream; @Service(value = "checkService") public class CheckServiceImpl implements CheckService { /** - * 校验任务启动标志 + * Verification task start flag *

- * 无论是全量校验和增量校验,同一时间内只能执行一个。 - * 只有本地全量或者增量校验执行完成后,即{@code STARTED}==false时,才可以执行下一个。 - * 否则直接退出,等待当前校验流程执行完毕,自动退出。 + * Whether full or incremental verification, only one can be performed at a time. + * Only after the local full or incremental verification is completed, that is, + * {@code started}=false, can the next one be executed. + * Otherwise, exit directly and wait until the current verification process is completed, + * and then exit automatically. *

- * 暂时不提供强制退出当前校验流程方法。 + * The method of forcibly exiting the current verification process is not provided for the time being. */ private static final AtomicBoolean STARTED = new AtomicBoolean(false); + private static final AtomicBoolean CHECKING = new AtomicBoolean(true); /** - * 进程签名 后期是否删除进程签名逻辑 + * Process signature */ - @Deprecated private static final AtomicReference PROCESS_SIGNATURE = new AtomicReference<>(); /** - * 校验模式 + * Verify Mode */ private static final AtomicReference CHECK_MODE_REF = new AtomicReference<>(); /** - * 校验轮询线程名称 + * Verify polling thread name */ private static final String SELF_CHECK_POLL_THREAD_NAME = "check-polling-thread"; - /** - * 单线程定时任务 - 执行校验轮询线程 Thread.name={@value SELF_CHECK_POLL_THREAD_NAME} - */ - private final ScheduledExecutorService scheduledExecutor = Executors.newSingleThreadScheduledExecutor(); - - private final ThreadPoolExecutor singleThreadExecutor = ThreadUtil.newSingleThreadExecutor(); + private static final String START_MESSAGE = "the execution time of %s process is %s"; @Autowired private FeignClientService feignClientService; @@ -77,47 +105,86 @@ public class CheckServiceImpl implements CheckService { @Resource private DataCheckService dataCheckService; + @Autowired + private DataCheckProperties properties; + + @Autowired + private EndpointMetaDataManager endpointMetaDataManager; + + @Value("${data.check.auto-clean-environment}") + private boolean isAutoCleanEnvironment = true; + + @Value("${data.check.check-with-sync-extracting}") + private boolean isCheckWithSyncExtracting = true; + /** - * 开启校验服务 + * Initialize the verification result environment + */ + @PostConstruct + public void init() { + ExportCheckResult.initEnvironment(properties.getDataPath()); + } + + /** + * Enable verification service * - * @param checkMode 校验方式 + * @param checkMode check Mode */ + @Statistical(name = "CheckServiceStart") @Override public String start(CheckMode checkMode) { if (STARTED.compareAndSet(false, true)) { - 
log.info("check service is starting, start check mode is [{}]", checkMode.getCode()); - CHECK_MODE_REF.set(checkMode); - if (Objects.equals(CheckMode.FULL, checkMode)) { - startCheckFullMode(); - // 等待任务构建完成,开启任务轮询线程 - startCheckPollingThread(); - } else { - startCheckIncrementMode(); + endpointMetaDataManager.load(); + tableStatusRegister.selfCheck(); + log.info(CheckMessage.CHECK_SERVICE_STARTING, checkMode.getCode()); + try { + CHECK_MODE_REF.set(checkMode); + if (Objects.equals(CheckMode.FULL, checkMode)) { + startCheckFullMode(); + // Wait for the task construction to complete, and start the task polling thread + startCheckPollingThread(); + } else { + startCheckIncrementMode(); + } + } catch (CheckingException ex) { + cleanCheck(); + throw new CheckingException(ex.getMessage()); } } else { - String message = String.format("check service is running, current check mode is [%s] , exit.", checkMode.getDescription()); + String message = String.format(CheckMessage.CHECK_SERVICE_START_ERROR, checkMode.getDescription()); log.error(message); + cleanCheck(); throw new CheckingException(message); } - return PROCESS_SIGNATURE.get(); + return String.format(START_MESSAGE, PROCESS_SIGNATURE.get(), JsonObjectUtil.formatTime(LocalDateTime.now())); + } + + interface CheckMessage { + /** + * Verify the startup message template + */ + String CHECK_SERVICE_STARTING = "check service is starting, start check mode is [{}]"; + + /** + * Verify the startup error message template + */ + String CHECK_SERVICE_START_ERROR = "check service is running, current check mode is [%s] , exit."; } /** - * 开启全量校验模式 + * Enable full verification mode */ private void startCheckFullMode() { - String processNo = IdWorker.nextId36(); - // 元数据信息查询 - feignClientService.queryMetaDataOfSchema(Endpoint.SOURCE); - feignClientService.queryMetaDataOfSchema(Endpoint.SINK); + String processNo = IdGenerator.nextId36(); log.info("check full mode : query meta data from db schema (source and sink )"); - // 源端任务构建 + // 
Source endpoint task construction final List extractTasks = feignClientService.buildExtractTaskAllTables(Endpoint.SOURCE, processNo); - extractTasks.forEach(task -> log.debug("check full mode : build extract task source {} : {}", processNo, JSON.toJSONString(task))); - // 宿端任务构建 + extractTasks.forEach(task -> log + .debug("check full mode : build extract task source {} : {}", processNo, JSON.toJSONString(task))); + // Sink endpoint task construction feignClientService.buildExtractTaskAllTables(Endpoint.SINK, processNo, extractTasks); log.info("check full mode : build extract task sink {}", processNo); - // 构建任务执行 + // Perform all tasks feignClientService.execExtractTaskAllTables(Endpoint.SOURCE, processNo); feignClientService.execExtractTaskAllTables(Endpoint.SINK, processNo); log.info("check full mode : exec extract task (source and sink ) {}", processNo); @@ -125,64 +192,46 @@ public class CheckServiceImpl implements CheckService { } /** - * /** - * 数据校验轮询线程 - * 用于实时监测数据抽取任务的完成状态。 - * 当某一数据抽取任务状态变更为完成时,启动一个数据校验独立线程。并开启当前任务,进行数据校验。 + * Data verification polling thread + * It is used to monitor the completion status of data extraction tasks in real time. + * When the status of a data extraction task changes to complete, start a data verification independent thread. + * And start the current task to verify the data. 
*/ public void startCheckPollingThread() { if (Objects.nonNull(PROCESS_SIGNATURE.get()) && Objects.equals(CHECK_MODE_REF.getAcquire(), CheckMode.FULL)) { + ScheduledExecutorService scheduledExecutor = Executors.newSingleThreadScheduledExecutor(); + endpointMetaDataManager.load(); scheduledExecutor.scheduleWithFixedDelay(() -> { Thread.currentThread().setName(SELF_CHECK_POLL_THREAD_NAME); log.debug("check polling processNo={}", PROCESS_SIGNATURE.get()); if (Objects.isNull(PROCESS_SIGNATURE.get())) { throw new CheckingPollingException("process is empty,stop check polling"); } - // 是否有表数据抽取完成 - if (tableStatusRegister.hasExtractComplated()) { - // 获取数据抽取完成表名 - String tableName = tableStatusRegister.complatedTablePoll(); - if (Objects.isNull(tableName)) { - return; - } - Topic topic = feignClientService.queryTopicInfo(Endpoint.SOURCE, tableName); - - if (Objects.nonNull(topic)) { - IntStream.range(0, topic.getPartitions()).forEach(idxPartition -> { - log.info("kafka consumer topic=[{}] partitions=[{}]", topic.toString(), idxPartition); - // 根据表名称 和kafka分区进行数据校验 - dataCheckService.checkTableData(topic, idxPartition); - }); - } - complateProgressBar(); + // Choose the checking strategy according to the extracting mode + if (isCheckWithSyncExtracting) { + checkTableWithSyncExtracting(); + } else { + checkTableWithExtractEnd(); } - }, 0, 1, TimeUnit.SECONDS); + completeProgressBar(scheduledExecutor); + }, 5, 2, TimeUnit.SECONDS); } } - - private void complateProgressBar() { - singleThreadExecutor.submit(() -> { - Thread.currentThread().setName("complated-process-bar"); - int total = tableStatusRegister.getKeys().size(); - int complated = tableStatusRegister.complateSize(); - log.info("current check process has task total=[{}] , complate=[{}]", total, complated); - }); - } - /** - * 开启增量校验模式 + * Enable incremental verification mode */ private void startCheckIncrementMode() { - // 开启增量校验模式-轮询线程启动 + // Enable incremental verification mode - polling thread start if 
(Objects.equals(CHECK_MODE_REF.getAcquire(), CheckMode.INCREMENT)) { + ScheduledExecutorService scheduledExecutor = Executors.newSingleThreadScheduledExecutor(); scheduledExecutor.scheduleWithFixedDelay(() -> { Thread.currentThread().setName(SELF_CHECK_POLL_THREAD_NAME); log.debug("check polling check mode=[{}]", CHECK_MODE_REF.get()); - // 是否有表数据抽取完成 - if (tableStatusRegister.hasExtractComplated()) { - // 获取数据抽取完成表名 - String tableName = tableStatusRegister.complatedTablePoll(); + // Check whether any table has completed data extraction + if (tableStatusRegister.hasExtractCompleted()) { + // Poll the name of a table whose data extraction has completed + String tableName = tableStatusRegister.completedTablePoll(); if (Objects.isNull(tableName)) { return; } @@ -190,26 +239,87 @@ public class CheckServiceImpl implements CheckService { if (Objects.nonNull(topic)) { log.info("kafka consumer topic=[{}]", topic.toString()); - // 根据表名称 和kafka分区进行数据校验 + // Verify the data according to the table name and Kafka partition dataCheckService.incrementCheckTableData(topic); } - complateProgressBar(); + completeProgressBar(scheduledExecutor); } - // 当前周期任务完成校验,重置任务状态 - if (tableStatusRegister.isCheckComplated()) { - log.info("当前周期校验完成,重置任务状态!"); + // When the current cycle's verification is complete, reset the task status + if (tableStatusRegister.isCheckCompleted()) { + log.info("The current cycle verification is completed, reset the task status!"); tableStatusRegister.rest(); feignClientService.cleanTask(Endpoint.SOURCE); feignClientService.cleanTask(Endpoint.SINK); } - }, 0, 1, TimeUnit.SECONDS); + }, 5, 2, TimeUnit.SECONDS); + } + } + + private void checkTableWithExtractEnd() { + if (tableStatusRegister.isExtractCompleted() && CHECKING.get()) { + log.info("check polling processNo={}, extract task complete. 
start checking....", PROCESS_SIGNATURE.get()); + CHECKING.set(false); + endpointMetaDataManager.load(); + final List checkTableList = endpointMetaDataManager.getCheckTableList(); + if (CollectionUtils.isEmpty(checkTableList)) { + log.info("the check table list is empty"); + } + checkTableList.forEach(tableName -> { + startCheckTableThread(tableName); + ThreadUtil.sleep(100); + }); + } + } + + private void checkTableWithSyncExtracting() { + if (!tableStatusRegister.isCheckCompleted()) { + String tableName = tableStatusRegister.completedTablePoll(); + if (StringUtils.isNotEmpty(tableName)) { + log.info("start checking thread of table {}", tableName); + startCheckTableThread(tableName); + } + } + } + + private void startCheckTableThread(String tableName) { + Topic topic = feignClientService.queryTopicInfo(Endpoint.SOURCE, tableName); + + if (Objects.nonNull(topic)) { + tableStatusRegister.initPartitionsStatus(tableName, topic.getPartitions()); + IntStream.range(0, topic.getPartitions()).forEach(idxPartition -> { + log.info("kafka consumer topic=[{}] partitions=[{}]", topic.toString(), idxPartition); + // Verify the data according to the table name and Kafka partition + try { + final Future future = dataCheckService.checkTableData(topic, idxPartition); + future.get(); + } catch (InterruptedException | ExecutionException e) { + log.error("data check topic=[{}] partitions=[{}] error:", topic.toString(), idxPartition, e); + } + }); + } + } + + private void completeProgressBar(ScheduledExecutorService scheduledExecutor) { + Pair process = tableStatusRegister.extractProgress(); + log.info("current check process has task total=[{}] , complete=[{}]", process.getSink(), process.getSource()); + + // When all verification tasks have completed, reset the task status + if (tableStatusRegister.isCheckCompleted()) { + log.info("The current verification is completed, reset the task status!"); + if (isAutoCleanEnvironment) { + log.info("The current cycle task completes the verification and resets the check 
environment"); + cleanCheck(); + feignClientService.cleanTask(Endpoint.SOURCE); + feignClientService.cleanTask(Endpoint.SINK); + } + scheduledExecutor.shutdownNow(); } } /** - * 查询当前执行的进程号 + * Query the currently executed process number * - * @return 进程号 + * @return process number */ @Override public String getCurrentCheckProcess() { @@ -217,32 +327,38 @@ public class CheckServiceImpl implements CheckService { } /** - * 清理校验环境 + * Clean up the verification environment */ @Override public synchronized void cleanCheck() { - cleanBuildedTask(); + final String processNo = PROCESS_SIGNATURE.get(); + cleanBuildTask(processNo); ThreadUtil.sleep(3000); CHECK_MODE_REF.set(null); PROCESS_SIGNATURE.set(null); STARTED.set(false); - log.info("清除当前校验服务标识!"); - log.info("重置校验服务启动标识!"); + CHECKING.set(true); + log.info("clear and reset the current verification service!"); } + /** + * Increment Check Initialize configuration + * + * @param config Initialize configuration + */ @Override - public void incrementCheckConifg(IncrementCheckConifg incrementCheckConifg) { - feignClientService.configIncrementCheckEnvironment(Endpoint.SOURCE, incrementCheckConifg); + public void incrementCheckConfig(IncrementCheckConfig config) { + feignClientService.configIncrementCheckEnvironment(Endpoint.SOURCE, config); } - private void cleanBuildedTask() { + private void cleanBuildTask(String processNo) { try { - feignClientService.cleanEnvironment(Endpoint.SOURCE, PROCESS_SIGNATURE.get()); - feignClientService.cleanEnvironment(Endpoint.SINK, PROCESS_SIGNATURE.get()); - } catch (RuntimeException ex) { + feignClientService.cleanEnvironment(Endpoint.SOURCE, processNo); + feignClientService.cleanEnvironment(Endpoint.SINK, processNo); + } catch (CommonException ex) { log.error("ignore error:", ex); } tableStatusRegister.removeAll(); - log.info("数据抽取任务清除 "); + log.info("The task registry of the verification service clears the data extraction task status information"); } } diff --git 
a/datachecker-check/src/main/resources/application.yml b/datachecker-check/src/main/resources/application.yml index 6b7412d2bcaab9bf10657f33ff33558685f5c0b0..2175563cc0156c62a4f0c69297851dbaab0c6975 100644 --- a/datachecker-check/src/main/resources/application.yml +++ b/datachecker-check/src/main/resources/application.yml @@ -1,18 +1,30 @@ server: port: 7000 + shutdown: graceful debug: false spring: application: name: datachecker-check + lifecycle: + timeout-per-shutdown-phase: 5 + kafka: + consumer: + group-id: checkgroup + enable-auto-commit: true + auto-commit-interval: 100 + + auto-offset-reset: earliest + key-deserializer: org.apache.kafka.common.serialization.StringDeserializer + value-deserializer: org.apache.kafka.common.serialization.StringDeserializer + max-poll-records: 10000 + datasource: druid: dataCheck: driver-class-name: com.mysql.cj.jdbc.Driver type: com.alibaba.druid.pool.DruidDataSource - #Spring Boot 默认是不注入这些属性值的,需要自己绑定 - #druid 数据源专有配置 initialSize: 5 minIdle: 5 maxActive: 20 @@ -24,21 +36,32 @@ spring: testOnBorrow: false testOnReturn: false poolPreparedStatements: true + filters: stat,wall,log4j + maxPoolPreparedStatementPerConnectionSize: 20 + useGlobalDataSourceStat: true + connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=500 -fegin: - hystrix: - enabled:true +feign: + okhttp: + enabled: true logging: config: classpath:log4j2.xml data: check: - data-path: local_path/xxx # 配置数据校验结果输出本地路径 - bucket-expect-capacity: 10 # 桶容量范围最小值为1 + data-path: local_path/xxx + bucket-expect-capacity: 10 health-check-api: /extract/health - black-white-mode: BLACK #大写 + black-white-mode: BLACK + # statistical-enable : Configure whether to perform verification time statistics. + # If true, the execution time of the verification process will be statistically analyzed automatically. + statistical-enable: false + # auto-clean-environment: Configure whether to automatically clean the execution environment. 
+ # If set to true, the environment will be cleaned automatically after the full verification process is completed. + auto-clean-environment: true + check-with-sync-extracting: true diff --git a/datachecker-check/src/main/resources/log4j2.xml b/datachecker-check/src/main/resources/log4j2.xml index a9dd5a2f40546f60b6ac1dff7389f3e65e9768c8..78997a56cf5eed914272861c2d816d77b3181589 100644 --- a/datachecker-check/src/main/resources/log4j2.xml +++ b/datachecker-check/src/main/resources/log4j2.xml @@ -1,83 +1,62 @@ [log4j2.xml element markup was stripped during extraction; only the log directory value "logs/check" is recoverable] @@ -86,9 +65,9 @@ [markup stripped] @@ -99,25 +78,24 @@ [markup stripped] \ No newline at end of file diff --git a/datachecker-check/src/test/java/org/opengauss/datachecker/check/config/DataCheckConfigTest.java b/datachecker-check/src/test/java/org/opengauss/datachecker/check/config/DataCheckConfigTest.java index 8db60551a0a12b714fdc753f848201bb97a88962..110db2975081158c40d889d8c5990ba7f6458fb6 100644 --- a/datachecker-check/src/test/java/org/opengauss/datachecker/check/config/DataCheckConfigTest.java +++ b/datachecker-check/src/test/java/org/opengauss/datachecker/check/config/DataCheckConfigTest.java @@ -1,21 +1,41 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.config; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; +/** + * DataCheckConfigTest + * + * @author :wangchao + * @date :Created in 2022/6/18 + * @since :11 + */ +@Slf4j @SpringBootTest class DataCheckConfigTest { - - @Autowired private DataCheckConfig dataCheckConfig; - @Test void testGetCheckResultPaht() { final String checkResultPaht = dataCheckConfig.getCheckResultPath(); - System.out.println(checkResultPaht); + log.info(checkResultPaht); } } diff --git a/datachecker-check/src/test/java/org/opengauss/datachecker/check/controller/TaskStatusControllerTest.java b/datachecker-check/src/test/java/org/opengauss/datachecker/check/controller/TaskStatusControllerTest.java index e392bbff043a968e7b14bae0cd11e4cca69518b3..6c3c0e3eda41788a913dbd050ed9773474afdd38 100644 --- a/datachecker-check/src/test/java/org/opengauss/datachecker/check/controller/TaskStatusControllerTest.java +++ b/datachecker-check/src/test/java/org/opengauss/datachecker/check/controller/TaskStatusControllerTest.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.controller; import org.junit.jupiter.api.Test; @@ -17,10 +32,16 @@ import static org.assertj.core.api.Assertions.assertThat; import static org.mockito.Mockito.verify; import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post; +/** + * TaskStatusControllerTest + * + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ @ExtendWith(SpringExtension.class) @WebMvcTest(TaskStatusController.class) class TaskStatusControllerTest { - @Autowired private MockMvc mockMvc; @@ -31,15 +52,13 @@ class TaskStatusControllerTest { void testRefushTaskExtractStatus() throws Exception { // Setup // Run the test - final MockHttpServletResponse response = mockMvc.perform(post("/table/extract/status") - .param("tableName", "tableName") - .param("endpoint", Endpoint.SOURCE.name()) - .accept(MediaType.APPLICATION_JSON)) - .andReturn().getResponse(); + final MockHttpServletResponse response = mockMvc.perform( + post("/table/extract/status").param("tableName", "tableName").param("endpoint", Endpoint.SOURCE.name()) + .accept(MediaType.APPLICATION_JSON)).andReturn().getResponse(); // Verify the results assertThat(response.getStatus()).isEqualTo(HttpStatus.OK.value()); assertThat(response.getContentAsString()).isEqualTo(""); - verify(taskManagerService).refushTableExtractStatus("tableName", Endpoint.SOURCE); + verify(taskManagerService).refreshTableExtractStatus("tableName", Endpoint.SOURCE); } } diff --git a/datachecker-check/src/test/java/org/opengauss/datachecker/check/modules/bucket/TestBucket.java b/datachecker-check/src/test/java/org/opengauss/datachecker/check/modules/bucket/TestBucket.java index a1ae7da779bfcadff91287fb57149d80efda1278..7c3724a53f395b29b569a113d226a30aa44d98cb 100644 --- a/datachecker-check/src/test/java/org/opengauss/datachecker/check/modules/bucket/TestBucket.java +++ b/datachecker-check/src/test/java/org/opengauss/datachecker/check/modules/bucket/TestBucket.java @@ -1,25 
+1,39 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.check.modules.bucket; import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; -import org.opengauss.datachecker.common.util.HashUtil; -import org.opengauss.datachecker.common.util.IdWorker; +import org.opengauss.datachecker.common.util.IdGenerator; +import org.opengauss.datachecker.common.util.LongHashFunctionWrapper; import java.util.stream.IntStream; /** + * TestBucket + * * @author :wangchao * @date :Created in 2022/6/10 * @since :11 */ @Slf4j public class TestBucket { + private static final LongHashFunctionWrapper HASH_UTIL = new LongHashFunctionWrapper(); private static final int[] BUCKET_COUNT_LIMITS = new int[15]; - /** - * 空桶容量大小,用于构造特殊的空桶 - */ private static final int EMPTY_INITIAL_CAPACITY = 0; - private static final int BUCKET_MAX_COUNT_LIMITS = 1 << 15; static { @@ -43,28 +57,29 @@ public class TestBucket { @Test public void test() { IntStream.rangeClosed(0, 14).forEach(idx -> { - System.out.println("1<<" + (idx + 1) + " == " + BUCKET_COUNT_LIMITS[idx]); + log.info("1<<" + (idx + 1) + " == " + BUCKET_COUNT_LIMITS[idx]); }); - } - @Test public void test2() { final int limit = BUCKET_COUNT_LIMITS[6]; IntStream.rangeClosed(0, 14).forEach(idx -> { - final String squeueID = IdWorker.nextId("F"); - final long hashVal = HashUtil.hashBytes(squeueID); - log.info("squeueID[{}] % limit[{}] calacA={}, calacB={}", hashVal, limit, calacA(hashVal, limit), 
calacB(hashVal, limit)); + final String squeueID = IdGenerator.nextId("F"); + final long hashVal = HASH_UTIL.hashBytes(squeueID); + log.info("squeueID[{}] % limit[{}] calacA={}, calacB={}", hashVal, limit, calacA(hashVal, limit), + calacB(hashVal, limit)); }); - } private int calacA(long primaryKeyHash, int bucketCount) { -// return (int) (primaryKeyHash & (bucketCount - 1)); return (int) (Math.abs(primaryKeyHash) % bucketCount); } + private int calacA2(long primaryKeyHash, int bucketCount) { + return (int) (primaryKeyHash & (bucketCount - 1)); + } + private int calacB(long primaryKeyHash, int bucketCount) { return (int) (Math.abs(primaryKeyHash) & (bucketCount - 1)); } diff --git a/datachecker-check/src/test/java/org/opengauss/datachecker/check/task/TableStatusRegisterTest.java b/datachecker-check/src/test/java/org/opengauss/datachecker/check/task/TableStatusRegisterTest.java index 2648772ac2ace4fe1840cc4d73bfbfd328e1a23f..6e4805b281634e4864dfb60419336abe7181d1db 100644 --- a/datachecker-check/src/test/java/org/opengauss/datachecker/check/task/TableStatusRegisterTest.java +++ b/datachecker-check/src/test/java/org/opengauss/datachecker/check/task/TableStatusRegisterTest.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.check.task; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.check.cache.TableStatusRegister; @@ -8,8 +24,15 @@ import java.util.Set; import static org.assertj.core.api.Assertions.assertThat; +/** + * TableStatusRegisterTest + * + * @author :wangchao + * @date :Created in 2022/7/20 + * @since :11 + */ +@Slf4j class TableStatusRegisterTest { - private TableStatusRegister tableStatusRegisterUnderTest; @BeforeEach @@ -38,15 +61,14 @@ class TableStatusRegisterTest { @Test void testUpdate() { - System.out.println("0|1 = " + (0 | 1)); - System.out.println("0|2 = " + (0 | 2)); - System.out.println("1|1 = " + (1 | 1)); - System.out.println("1|2 = " + (1 | 2)); - System.out.println("1|2|4 = " + (1 | 2 | 4)); - System.out.println("4 = " + Integer.toBinaryString(4)); - System.out.println(tableStatusRegisterUnderTest.get("tabel1")); + log.info("0|1 = " + (0 | 1)); + log.info("0|2 = " + (0 | 2)); + log.info("1|1 = " + (1 | 1)); + log.info("1|2 = " + (1 | 2)); + log.info("1|2|4 = " + (1 | 2 | 4)); + log.info("4 = " + Integer.toBinaryString(4)); + log.info("" + tableStatusRegisterUnderTest.get("tabel1")); assertThat(tableStatusRegisterUnderTest.update("tabel1", 1)).isEqualTo(1); - } @Test @@ -54,7 +76,6 @@ class TableStatusRegisterTest { // Setup // Run the test tableStatusRegisterUnderTest.remove("key"); - // Verify the results } @@ -63,52 +84,6 @@ class TableStatusRegisterTest { // Setup // Run the test tableStatusRegisterUnderTest.removeAll(); - // Verify the results } - - /** - * 线程状态观测 - * Thread.State - * 线程状态。线程可处于以下状态之一: - * NEW 尚未启动的线程处于此状态 - * RUNNABLE 在Java虚拟机中执行的线程处于此状态 - * BLOCKED 被阻塞等待监视器锁定的线程处于此状态 - * WAITING 正在等待另一个线程执行特定的动作的线程处于此状态 - * TIMED_WAITING 正在等待另一个线程执行动作达到指定等待时间的线程处于此状态 - * TERMINATED 已退出的线程处于此状态 - * - * @throws InterruptedException - */ - @Test - void testPersistent() throws InterruptedException { - Thread thread = new 
Thread(() -> { - for (int i = 0; i < 5; i++) { - try { - Thread.sleep(100); - } catch (InterruptedException e) { - e.printStackTrace(); - } - } - System.out.println("------------"); - }); - - Thread.State state = thread.getState(); - System.out.println(state); - - thread.start(); - state = thread.getState(); - System.out.println(state); - - boolean a = true; - while (a) { - Thread.sleep(2000); - System.out.println(thread.getState()); - - thread.start(); - - System.out.println(thread.getState()); - a = false; - } - } } diff --git a/datachecker-check/src/test/java/org/opengauss/datacheckercheck/DatacheckerCheckApplicationTests.java b/datachecker-check/src/test/java/org/opengauss/datacheckercheck/DatacheckerCheckApplicationTests.java deleted file mode 100644 index 4fe7ba62a6c1cdd46dcb6d4575967ebaf68c16d3..0000000000000000000000000000000000000000 --- a/datachecker-check/src/test/java/org/opengauss/datacheckercheck/DatacheckerCheckApplicationTests.java +++ /dev/null @@ -1,13 +0,0 @@ -package org.opengauss.datacheckercheck; - -import org.junit.jupiter.api.Test; -import org.springframework.boot.test.context.SpringBootTest; - -@SpringBootTest -class DatacheckerCheckApplicationTests { - - @Test - void contextLoads() { - } - -} diff --git a/datachecker-common/pom.xml b/datachecker-common/pom.xml index 9feb772ff0cc248f58f4f3b03f71d218db80513d..adbb9f1a22a923aeb939161385c47a43516e08ef 100644 --- a/datachecker-common/pom.xml +++ b/datachecker-common/pom.xml @@ -1,4 +1,19 @@ + + 4.0.0 diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/constant/Constants.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/constant/Constants.java index 0eea78c1be5886705c48a8a614d51aa8f68c960d..55b57ffce050947f0a95882107e847bba14d138f 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/constant/Constants.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/constant/Constants.java @@ -1,12 +1,42 @@ 
+/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.constant; /** - * 系统常量定义 + * Constants + * + * @author :wangchao + * @date :Created in 2022/5/24 + * @since :11 */ public interface Constants { + /** + * Combined primary key splice connector + */ String PRIMARY_DELIMITER = "_#_"; + /** + * DELIMITER + */ + String DELIMITER = ","; + interface InitialCapacity { + /** + * map initial capacity + */ int MAP = 0; } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DataCheckParam.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DataCheckParam.java index 0a0e49472e91f1dad52cd1fd9433543035e36399..f6c1810a7375d3889000e7e324344e5e7a90b858 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DataCheckParam.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DataCheckParam.java @@ -1,52 +1,55 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.check; -import lombok.Getter; +import lombok.Data; +import lombok.experimental.Accessors; import org.opengauss.datachecker.common.entry.extract.Topic; -import org.springframework.lang.NonNull; +import org.springframework.boot.autoconfigure.kafka.KafkaProperties; /** - * 数据校验线程参数 + * Data verification thread parameters * * @author :wangchao * @date :Created in 2022/6/10 * @since :11 */ -@Getter +@Data +@Accessors(chain = true) public class DataCheckParam { /** - * 构建桶容量参数 + * Build bucket capacity parameters */ - private final int bucketCapacity; + private int bucketCapacity; /** - * 数据校验TOPIC对象 + * Data verification topic object */ - private final Topic topic; + private Topic topic; /** - * 校验Topic 分区 + * Verify topic partition */ - private final int partitions; + private int partitions; /** - * 校验结果输出路径 + * Verification result output path */ - private final String path; + private String path; - private final String schema; + private String schema; - /** - * 校验参数构建器 - * - * @param bucketCapacity 构建桶容量参数 - * @param topic 数据校验TOPIC对象 - * @param partitions 校验Topic 分区 - * @param path 校验结果输出路径 - */ - public DataCheckParam(int bucketCapacity, @NonNull Topic topic, int partitions, @NonNull String path, String schema) { - this.bucketCapacity = bucketCapacity; - this.topic = topic; - this.partitions = partitions; - this.path = path; - this.schema = schema; - } + private KafkaProperties properties; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DifferencePair.java 
b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DifferencePair.java index 0eaa47556938e7c3599c8df883e3d5a544ccb76c..56c312dff3f996f2ecfe615b75998c9d129b589d 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DifferencePair.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/DifferencePair.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.check; import lombok.EqualsAndHashCode; diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConfig.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConfig.java new file mode 100644 index 0000000000000000000000000000000000000000..d93f442b7a3285c6c1ccad4d45d4f23d05577a9f --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConfig.java @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.common.entry.check; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.experimental.Accessors; + +import javax.validation.constraints.NotEmpty; +import javax.validation.constraints.NotNull; +import java.util.List; + +/** + * Debezium incremental migration verification initialization configuration + * + * @author :wangchao + * @date :Created in 2022/6/24 + * @since :11 + */ +@Schema(name = "Debezium incremental migration verification initialization configuration") +@Data +@Accessors(chain = true) +public class IncrementCheckConfig { + /** + * Debezium incremental migration topic, debezium monitors table incremental data, + * and uses a single topic for incremental data management + */ + @Schema(name = "debeziumTopic", required = true) + @NotNull(message = "Debezium incremental migration topic cannot be empty") + private String debeziumTopic; + + @Schema(name = "groupId", description = "Topic grouping") + @NotNull(message = "Debezium incremental migration topic groupid cannot be empty") + private String groupId; + + @Schema(name = "partitions", description = "Topic partition", defaultValue = "1") + private int partitions = 1; + + /** + * Incremental migration table name list + */ + @Schema(name = "debeziumTables", required = true, description = "Incremental migration table name list") + @NotEmpty(message = "Incremental migration table name list cannot be empty") + private List debeziumTables; +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConifg.java 
b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConifg.java new file mode 100644 index 0000000000000000000000000000000000000000..713271f1eeca7d3c3ec60a0f4537f04439be2720 --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckConifg.java @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.common.entry.check; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.experimental.Accessors; + +import javax.validation.constraints.NotEmpty; +import javax.validation.constraints.NotNull; +import java.util.List; + +/** + * Debezium incremental migration verification initialization configuration + * + * @author :wangchao + * @date :Created in 2022/6/24 + * @since :11 + */ +@Schema(name = "Debezium incremental migration verification initialization configuration") +@Data +@Accessors(chain = true) +public class IncrementCheckConifg { + /** + * Debezium incremental migration topic, debezium monitors table incremental data, + * and uses a single topic for incremental data management + */ + @Schema(name = "debeziumTopic", required = true) + @NotNull(message = "Debezium incremental migration topic cannot be empty") + private String debeziumTopic; + + @Schema(name = "groupId", description = "Topic grouping") + @NotNull(message = "Debezium incremental migration topic groupid cannot be empty") + 
private String groupId; + + @Schema(name = "partitions", description = "Topic partition", defaultValue = "1") + private int partitions = 1; + + /** + * Incremental migration table name list + */ + @Schema(name = "debeziumTables", required = true, description = "Incremental migration table name list") + @NotEmpty(message = "Incremental migration table name list cannot be empty") + private List<String> debeziumTables; +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckTopic.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckTopic.java new file mode 100644 index 0000000000000000000000000000000000000000..d0e1796da6aa74d8e0e20156733e93e5730edd77 --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/IncrementCheckTopic.java @@ -0,0 +1,51 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + +package org.opengauss.datachecker.common.entry.check; + +import io.swagger.v3.oas.annotations.media.Schema; +import lombok.Data; +import lombok.experimental.Accessors; + +/** + * IncrementCheckTopic + * + * @author :wangchao + * @date :Created in 2022/6/24 + * @since :11 + */ +@Schema(name = "Debezium incremental verification topic information") +@Data +@Accessors(chain = true) +public class IncrementCheckTopic { + /** + * Debezium incremental migration topic, debezium monitors table incremental data, + * and uses a single topic for incremental data management + */ + @Schema(name = "debeziumTopic") + private String topic; + + @Schema(name = "groupId", description = "Topic grouping") + private String groupId; + + @Schema(name = "partitions", description = "Topic partition") + private int partitions; + + @Schema(name = "begin", description = "Topic start offset") + private Long begin; + + @Schema(name = "end", description = "Topic end offset") + private Long end; +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/Pair.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/Pair.java index ba42658da5b8a5c6071326cfbb537d6335a89f81..0f2c16612e8d4a3cea21c63ddd4dba591cbfb4d8 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/Pair.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/check/Pair.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + + package org.opengauss.datachecker.common.entry.check; import lombok.EqualsAndHashCode; @@ -12,7 +27,6 @@ import org.springframework.lang.NonNull; @Getter @EqualsAndHashCode public final class Pair<S, T> { - private S source; private T sink; @@ -33,19 +47,28 @@ public final class Pair<S, T> { } /** - * 修改pair + * modify pair sink * - * @param pair - * @param sink - * @param <S> - * @param <T> - * @return + * @param pair pair + * @param sink sink + * @param <S> source + * @param <T> sink + * @return pair */ public static <S, T> Pair<S, T> of(@NonNull Pair<S, T> pair, T sink) { pair.sink = sink; return pair; } + /** + * modify pair source + * + * @param source source + * @param pair pair + * @param <S> source + * @param <T> sink + * @return pair + */ public static <S, T> Pair<S, T> of(S source, @NonNull Pair<S, T> pair) { pair.source = source; return pair; @@ -56,6 +79,6 @@ public final class Pair<S, T> { */ @Override public String toString() { - return String.format("%s->%s", this.source, this.sink); + return String.format("%s->%s", source, sink); } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckBlackWhiteMode.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckBlackWhiteMode.java index a9953067a9bd521797d7e94003d7c5adb5ac529a..6c6c30b89fe513a56f012f963ff20e5a17f774a3 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckBlackWhiteMode.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckBlackWhiteMode.java @@ -1,9 +1,24 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; /** - * {@value API_DESCRIPTION } + * CheckBlackWhiteMode {@value API_DESCRIPTION } * * @author :wangchao * @date :Created in 2022/5/29 @@ -12,15 +27,15 @@ import lombok.Getter; @Getter public enum CheckBlackWhiteMode implements IEnum { /** - * 不开启黑白名单模式 + * Do not turn on black and white list mode */ NONE("NONE", "do not turn on black and white list mode"), /** - * 黑名单校验 + * Enable black list verification mode */ - BLACK("BLACK", "blacklist verification mode"), + BLACK("BLACK", "black list verification mode"), /** - * 白名单校验 + * Enable white list verification mode */ WHITE("WHITE", "white list verification mode"); @@ -32,9 +47,10 @@ public enum CheckBlackWhiteMode implements IEnum { this.description = description; } - public static final String API_DESCRIPTION = "black and white list verification mode [" + - " NONE-do not turn on black and white list mode," + - " BLACK-blacklist verification mode," + - " WHITE-white list verification mode" + - "]"; + /** + * CheckBlackWhiteMode api description + */ + public static final String API_DESCRIPTION = + "black and white list verification mode [" + " NONE-do not turn on black and white list mode," + + " BLACK-blacklist verification mode," + " WHITE-white list verification mode" + "]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckMode.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckMode.java index d8008013fdcdf65023f7e34c9d0724596cd0b441..a5b591636f2537c372f81e1e48f5892f27f69abc 100644 --- 
a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckMode.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/CheckMode.java @@ -1,9 +1,24 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; /** - * 校验方式 + * CheckMode {@value API_DESCRIPTION } * * @author :wangchao * @date :Created in 2022/5/29 @@ -12,11 +27,11 @@ import lombok.Getter; @Getter public enum CheckMode implements IEnum { /** - * 全量校验 + * full check mode */ FULL("FULL", "full check mode"), /** - * 增量校验 + * increment check mode */ INCREMENT("INCREMENT", "increment check mode"); @@ -28,5 +43,8 @@ public enum CheckMode implements IEnum { this.description = description; } + /** + * CheckMode api description + */ public static final String API_DESCRIPTION = "CheckMode [FULL-full check mode,INCREMENT-increment check mode]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ColumnKey.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ColumnKey.java index 39bf174431be0c03163c86725d94769df5c1826f..5e549eb2bd7d40c9d241fd895d142a55307f684e 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ColumnKey.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ColumnKey.java @@ -1,12 +1,33 @@ +/* + * Copyright (c) 2022-2022 
Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; - +/** + * ColumnKey {@value API_DESCRIPTION } + * + * @author :wangchao + * @date :Created in 2022/5/29 + * @since :11 + */ @Getter public enum ColumnKey implements IEnum { /** - * 主键 + * PRI */ PRI("PRI"), /** @@ -25,4 +46,8 @@ public enum ColumnKey implements IEnum { this.code = code; } + /** + * ColumnKey api description + */ + public static final String API_DESCRIPTION = "ColumnKey [PRI-PRI,UNI-UNI, MUL-MUL]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DML.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DML.java index 40d78bb3c06eb102a1afcdf5bc798193d2b3fd79..52e2807ac3a5f10c6e0e1a61a019fa8097942671 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DML.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DML.java @@ -1,25 +1,41 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; /** - * {@value API_DESCRIPTION} + * DML {@value API_DESCRIPTION } + * * @author :wangchao - * @date :Created in 2022/6/12 + * @date :Created in 2022/5/29 * @since :11 */ @Getter public enum DML implements IEnum { /** - * Insert插入语句 + * Insert statement */ INSERT("INSERT", "InsertStatement"), /** - * Delete删除语句 + * Delete statement */ DELETE("DELETE", "DeleteStatement"), /** - * Replace修改语句 + * Replace statement */ REPLACE("REPLACE", "ReplaceStatement"); @@ -31,5 +47,9 @@ public enum DML implements IEnum { this.description = description; } - public static final String API_DESCRIPTION = "DML [INSERT-InsertStatement,DELETE-DeleteStatement,REPLACE-ReplaceStatement]"; + /** + * DML api description + */ + public static final String API_DESCRIPTION = + "DML [INSERT-InsertStatement,DELETE-DeleteStatement,REPLACE-ReplaceStatement]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseMeta.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseMeta.java index 4e566071f2b4f28e0ebea2268b12e72161a6943e..8a0acdf2e33db2e030a26508ec5ce7574a3290cb 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseMeta.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseMeta.java @@ -1,9 +1,35 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; +/** + * DataBaseMeta + * + * @author :wangchao + * @date :Created in 2022/5/29 + * @since :11 + */ @Getter public enum DataBaseMeta implements IEnum { + /** + * DataBaseHealth + */ + HEALTH("HealthMeta", "DataBaseHealth"), /** * TableMetaData */ diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseType.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseType.java index 2ba577b3aefdc490c6a31ca6af007ec76eddf74b..23204777c3eb2da333cc8a5f4ce1d54ab8c912b7 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseType.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataBaseType.java @@ -1,22 +1,41 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; /** - * {@value API_DESCRIPTION} + * DataBaseType {@value API_DESCRIPTION } + * + * @author :wangchao + * @date :Created in 2022/5/29 + * @since :11 */ @Getter public enum DataBaseType implements IEnum { /** - * MySQL数据库类型 + * MySQL database type */ MS("MYSQL", "MYSQL"), /** - * open gauss数据库 + * open gauss database type */ OG("OPENGAUSS", "OPENGAUSS"), /** - * oracle数据库 + * oracle database type */ O("ORACLE", "ORACLE"); @@ -28,5 +47,8 @@ public enum DataBaseType implements IEnum { this.description = description; } + /** + * DataBaseType api description + */ public static final String API_DESCRIPTION = "Database type [MS-MYSQL,OG-OPENGAUSS,O-ORACLE]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataSourceType.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataSourceType.java index 2860916c3875e590a73ecd4ccf75bbe5b9d3d0f0..321f74980e5642a54e9b19bed34e92bcf153f80b 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataSourceType.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/DataSourceType.java @@ -1,15 +1,37 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; +/** + * DataSourceType {@value API_DESCRIPTION } + * + * @author :wangchao + * @date :Created in 2022/5/29 + * @since :11 + */ @Getter public enum DataSourceType implements IEnum { /** - * 源端 + * Source */ Source("Source"), /** - * 宿端 + * Sink */ Sink("Sink"); @@ -20,4 +42,8 @@ public enum DataSourceType implements IEnum { this.code = code; } + /** + * DataSourceType api description + */ + public static final String API_DESCRIPTION = "DataSource type [Source-Source,Sink-Sink]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/Endpoint.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/Endpoint.java index 1eca6296d59553cfac0508974fae535abf42fe99..242e4a9468cdcfb58e12cf13300a90a6b1a20825 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/Endpoint.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/Endpoint.java @@ -1,10 +1,25 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.enums; import io.swagger.v3.oas.annotations.media.Schema; import lombok.Getter; /** - * {@value API_DESCRIPTION} + * Endpoint {@value API_DESCRIPTION} * * @author :wangchao * @date :Created in 2022/5/25 @@ -14,15 +29,15 @@ import lombok.Getter; @Getter public enum Endpoint { /** - * 源端 + * source endpoint */ SOURCE(1, "SourceEndpoint"), /** - * 宿端 + * sink endpoint */ SINK(2, "SinkEndpoint"), /** - * 校验端 + * check endpoint */ CHECK(3, "CheckEndpoint"); @@ -34,6 +49,9 @@ public enum Endpoint { this.description = description; } - public static final String API_DESCRIPTION = "data verification endpoint type " + - "[SOURCE-1-SourceEndpoint,SINK-2-SinkEndpoint,CHECK-3-CheckEndpoint]"; + /** + * Endpoint api description + */ + public static final String API_DESCRIPTION = + "data verification endpoint type " + "[SOURCE-1-SourceEndpoint,SINK-2-SinkEndpoint,CHECK-3-CheckEndpoint]"; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/IEnum.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/IEnum.java index 6ae625e8f7674b222b7ea312536212731b026eb1..3b457dcdf6340c08f44cddead7ec1949785f55e1 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/IEnum.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/IEnum.java @@ -1,17 +1,39 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.enums; +/** + * IEnum + * + * @author :wangchao + * @date :Created in 2022/6/24 + * @since :11 + */ public interface IEnum { /** - * 定义枚举code + * Define enumeration code * - * @return 返回枚举code + * @return Return enumeration code */ String getCode(); /** - * 声明枚举描述 + * Declaration enumeration description * - * @return 返回枚举描述 + * @return Return enumeration description */ String getDescription(); } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ResultEnum.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ResultEnum.java index bc396ede502aa88e808c15ffadb3f22109a6de72..58e2d8f643f943f4c265354ab6fc3c300429fe05 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ResultEnum.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/enums/ResultEnum.java @@ -1,75 +1,161 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.enums; import lombok.Getter; /** + * ResultEnum + * * @author :wangchao * @date :Created in 2022/5/26 * @since :11 */ @Getter public enum ResultEnum { + /** + * verification service exception + */ + CHECKING(1000, "verification service exception"), - //自定义系列 校验异常 - //校验服务异常: - CHECKING(1000, "verification service exception:"), - //校验服务地址端口冲突 + /** + * verify service address port conflict + */ CHECKING_ADDRESS_CONFLICT(1001, "verify service address port conflict."), - //校验服务Meta数据异常 + + /** + * verification service meta data exception + */ CHECK_META_DATA(1002, "verification service meta data exception."), - //校验服务-表数据差异过大,无法校验 - LARGE_DATA_DIFF(1003, "verification service - the table data difference is too large to be verified."), - //校验服务-默克尔树高度不一致 + + /** + * verification service - the table data difference is too large to be verified + */ + LARGE_DATA_DIFF(1003, "verification service - the data difference is too large to be verified."), + + /** + * verification service - height of Merkel tree is inconsistent.
+ */ MERKLE_TREE_DEPTH(1004, "verification service - height of Merkel tree is inconsistent."), - //自定义系列 抽取异常 - //抽取服务异常 + /** + * extraction service exception + */ EXTRACT(2000, "extraction service exception:"), - //创建KafkaTopic异常: + + /** + * create kafka topic exception + */ CREATE_TOPIC(2001, "create kafka topic exception:"), - //当前实例正在执行数据抽取服务,不能重新开启新的校验。 - PROCESS_MULTIPLE(2002, "The current instance is executing the data extraction service and cannot restart the new verification."), - //数据抽取服务,未找到待执行抽取任务 + + /** + * The current instance is executing the data extraction service and cannot restart the new verification + */ + PROCESS_MULTIPLE(2002, "The instance is executing and cannot restart the new verification."), + + /** + * data extraction service, no extraction task to be executed found + */ TASK_NOT_FOUND(2003, "data extraction service, no extraction task to be executed found."), - //数据抽取服务,当前表对应元数据不存在 - TABLE_NOT_FOUND(2004, "data extraction service. The metadata corresponding to the current table does not exist."), - //Debezium配置错误 + + /** + * data extraction service. The metadata corresponding to the current table does not exist. + */ + TABLE_NOT_FOUND(2004, "The metadata corresponding to the current table does not exist."), + + /** + * debezium configuration error + */ DEBEZIUM_CONFIG_ERROR(2005, "debezium configuration error"), - //自定义系列 抽取异常 - //Feign客户端异常 + /** + * feign client exception + */ FEIGN_CLIENT(3000, "feign client exception"), - //调度Feign客户端异常 + + /** + * scheduling feign client exception + */ DISPATCH_CLIENT(3001, "scheduling feign client exception"), + /** + * SUCCESS + */ SUCCESS(200, "SUCCESS"), + + /** + * ERROR + */ SERVER_ERROR(400, "ERROR"), - //400系列 - //请求的数据格式不符 + + /** + * The requested data format does not match + */ BAD_REQUEST(400, "The requested data format does not match!"), - //登录凭证过期! + + /** + * login certificate expired + */ UNAUTHORIZED(401, "login certificate expired!"), - //抱歉,你无权限访问! 
+ + /** + * Sorry, you have no access! + */ FORBIDDEN(403, "Sorry, you have no access!"), - //请求的资源找不到! + + /** + * The requested resource cannot be found! + */ NOT_FOUND(404, "The requested resource cannot be found!"), - //参数丢失 + + /** + * Parameter missing + */ PARAM_MISSING(405, "Parameter missing"), - //参数类型不匹配 + + /** + * Parameter type mismatch + */ PARAM_TYPE_MISMATCH(406, "Parameter type mismatch"), - //请求方法不支持 + + /** + * request method is not supported + */ HTTP_REQUEST_METHOD_NOT_SUPPORTED_ERROR(407, "request method is not supported"), - //非法参数异常 + + /** + * illegal parameter exception + */ SERVER_ERROR_PRARM(408, "illegal parameter exception"), - //500系列 - //服务器内部错误! + /** + * server internal error + */ INTERNAL_SERVER_ERROR(500, "server internal error!"), - //服务器正忙,请稍后再试! + + /** + * the server is busy, please try again later! + */ SERVICE_UNAVAILABLE(503, "the server is busy, please try again later!"), - //未知异常 + /** + * Unknown exception! + */ UNKNOWN(7000, "Unknown exception!"); + private final int code; private final String description; diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/CheckDiffResult.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/CheckDiffResult.java deleted file mode 100644 index 1e1e7ef8620ab6c138e97d35f0079bcc83681bfe..0000000000000000000000000000000000000000 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/CheckDiffResult.java +++ /dev/null @@ -1,44 +0,0 @@ -//package org.opengauss.datachecker.common.entry.extract; -// -//import lombok.Data; -//import lombok.experimental.Accessors; -//import lombok.experimental.SuperBuilder; -// -///** -// * @author :wangchao -// * @date :Created in 2022/6/18 -// * @since :11 -// */ -//@Data -//@Accessors(chain = true) -//public class CheckDiffResult { -// private String table; -//
private int partitions; -// private String topic; -// private LocalDateTime createTime; -// -// private Set keyUpdateSet; -// private Set keyInsertSet; -// private Set keyDeleteSet; -// -// private List repairUpdate; -// private List repairInsert; -// private List repairDelete; -// -// public CheckDiffResult(final CheckDiffResultBuilder b) { -// this.table = b.table; -// this.partitions = b.partitions; -// this.topic = b.topic; -// this.createTime = b.createTime; -// this.keyUpdateSet = b.keyUpdateSet; -// this.keyInsertSet = b.keyInsertSet; -// this.keyDeleteSet = b.keyDeleteSet; -// this.repairUpdate = b.repairUpdate; -// this.repairInsert = b.repairInsert; -// this.repairDelete = b.repairDelete; -// } -//} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ColumnsMetaData.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ColumnsMetaData.java index 5c6aa9d06e6f95e78b9fa37587f19fd9ddac1349..59c9cfa48720a87541bf14c63247721b22929370 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ColumnsMetaData.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ColumnsMetaData.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; @@ -6,34 +21,38 @@ import lombok.experimental.Accessors; import org.opengauss.datachecker.common.entry.enums.ColumnKey; /** - * 表元数据信息 + * Table metadata information + * + * @author :wangchao + * @date :Created in 2022/6/24 + * @since :11 */ @Data @Accessors(chain = true) @ToString public class ColumnsMetaData { /** - * 表名 + * Table name */ private String tableName; /** - * 主键列名称 + * Primary key column name */ private String columnName; /** - * 主键列数据类型 + * Primary key column data type */ private String columnType; /** - * 主键列数据类型 + * Primary key column data type */ private String dataType; /** - * 主键表序号 + * Table field sequence number */ private int ordinalPosition; /** - * 主键 + * {@value ColumnKey#API_DESCRIPTION} */ private ColumnKey columnKey; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractIncrementTask.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractIncrementTask.java index ee30b14dd7e7a00c1648a89fc76bb8f38c0f95cc..eea00080ec1b85b694cae8db0c470779e00f554e 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractIncrementTask.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractIncrementTask.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; @@ -14,20 +29,20 @@ import lombok.experimental.Accessors; @Accessors(chain = true) public class ExtractIncrementTask { /** - * 表名称 + * tableName */ private String tableName; /** - * 当前抽取端点 schema + * Currently extract endpoint database schema */ private String schema; /** - * 任务名称 + * taskName */ private String taskName; /** - * 数据变更日志 + * Data change log */ private SourceDataLog sourceDataLog; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractTask.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractTask.java index 24152f9ff672355b75883de8a6edbd9646cfbf5e..db9ef9a41a60e2cf71f44a1fba0d4f2cf83cfaab 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractTask.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/ExtractTask.java @@ -1,43 +1,71 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; import lombok.ToString; import lombok.experimental.Accessors; +/** + * ExtractTask + * + * @author :wangchao + * @date :Created in 2022/6/1 + * @since :11 + */ @ToString @Data @Accessors(chain = true) public class ExtractTask { /** - * 任务名称 + * taskName */ private String taskName; /** - * 表名称 + * tableName */ private String tableName; /** - * 任务分拆总数:1 表示未分拆,大于1则表示分拆为divisionsTotalNumber个任务 + * Total number of tasks split: 1 means not split, + * and greater than 1 means divided into divisionsTotalNumber tasks */ - private int divisionsTotalNumber; + private int divisionsTotalNumber = 1; /** - * 当前表,拆分任务序列 + * Current table, split task sequence */ - private int divisionsOrdinal; + private int divisionsOrdinal = 1; /** - * 任务执行起始位置 + * Start position of task execution */ - private long start; + private long start = 0L; /** - * 任务执行偏移量 + * Task execution offset */ private long offset; /** - * 表元数据信息 + * Table metadata information */ private TableMetadata tableMetadata; + /** + * Whether to slice the table corresponding to the current task + * + * @return If true is returned, it indicates fragmentation, and false indicates no fragmentation + */ public boolean isDivisions() { return divisionsTotalNumber > 1; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/PrimaryMeta.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/PrimaryMeta.java index 45d13fccc1bfd6c8a3a6fbad7b0c6fb9401131aa..63e012f23d3fe06d92956287544a58c1eecf7a5d 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/PrimaryMeta.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/PrimaryMeta.java @@ -1,21 +1,43 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. 
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; import lombok.experimental.Accessors; +/** + * PrimaryMeta + * + * @author :wangchao + * @date :Created in 2022/6/1 + * @since :11 + */ @Data @Accessors public class PrimaryMeta { /** - * 主键列名称 + * Primary key column name */ private String columnName; /** - * 主键列数据类型 + * Primary key column data type */ private String columnType; /** - * 主键表序号 + * Primary key table serial number */ private int ordinalPosition; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/RowDataHash.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/RowDataHash.java index 21db8f98c3f17472841ad485854d0d0bdb124f43..4ff5f50f8cc5f9ccfa47cedaa94a6773f489fa94 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/RowDataHash.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/RowDataHash.java @@ -1,25 +1,53 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. 
+ * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; import lombok.EqualsAndHashCode; import lombok.experimental.Accessors; +/** + * RowDataHash + * + * @author :wangchao + * @date :Created in 2022/6/1 + * @since :11 + */ @Data @EqualsAndHashCode @Accessors(chain = true) public class RowDataHash { /** - * 主键为数字类型则 转字符串,表主键为联合主键,则当前属性为表主键联合字段对应值 拼接字符串 以下划线拼接 + *

+     * If the primary key is a numeric type, its value is converted to a string.
+     * If the table has a composite primary key, this field holds the values of the
+     * composite key fields concatenated into a single string, joined with an underscore delimiter
+     * 
*/ private String primaryKey; /** - * 主键对应值的哈希值 + * Hash value of the corresponding value of the primary key */ private long primaryKeyHash; /** - * 当前记录的总体哈希值 + * Total hash value of the current record */ private long rowHash; + + private int partition; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/SourceDataLog.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/SourceDataLog.java index 206e4eb0872b8006d94570d0b54bba3a355583aa..e0726f8c0529e5040d0c613b72cfe4e965a57d6b 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/SourceDataLog.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/SourceDataLog.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import io.swagger.v3.oas.annotations.media.Schema; @@ -8,35 +23,38 @@ import org.opengauss.datachecker.common.constant.Constants; import java.util.List; /** - * 源端数据变更日志 + * Source side data change log * * @author :wangchao * @date :Created in 2022/6/14 * @since :11 */ -@Schema(description = "源端数据变更日志") +@Schema(description = "Source side data change log") @Data @Accessors(chain = true) public class SourceDataLog { private static final String PRIMARY_DELIMITER = Constants.PRIMARY_DELIMITER; /** - * 数据变更日志 对应表名称 + * Data change log corresponding table name */ - @Schema(name = "tableName", description = "表名称") + @Schema(name = "tableName") private String tableName; /** - * 当前表的主键字段名称列表 + * List of primary key field names of the current table */ - @Schema(name = "compositePrimarys", description = "当前表的主键字段名称列表") + @Schema(name = "compositePrimarys", description = "List of primary key field names of the current table") private List compositePrimarys; /** - * 相同数据操作类型{@code operateCategory}的数据变更的主键值列表

- * 单主键表 :主键值直接添加进{@code compositePrimarysValues}集合。

- * 复合主键:对主键值进行组装,根据{@code compositePrimarys}记录的主键字段顺序,进行拼接。链接符{@value PRIMARY_DELIMITER} + *

+     * List of primary key values for data changes with the same data operation type {@code operateCategory}

+     * Single primary key table: primary key values are added directly to the {@code compositePrimaryValues} list.<p>
+     * Composite primary key: the primary key values are assembled and concatenated according to the order of
+     * the primary key fields recorded in {@code compositePrimarys}, joined with the delimiter {@value #PRIMARY_DELIMITER}
+     *

*/ - @Schema(name = "compositePrimaryValues", description = "相同数据操作类型{@code operateCategory}的数据变更的主键值列表") + @Schema(name = "compositePrimaryValues") private List compositePrimaryValues; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadata.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadata.java index 17d20ab320efc8ca99dd481d34bccd18581c09a0..3e794ecf902a8184e539fb201bb9370c7ccd822d 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadata.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadata.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import io.swagger.v3.oas.annotations.media.Schema; @@ -8,30 +23,34 @@ import lombok.experimental.Accessors; import java.util.List; /** - * 表元数据信息 + * Table metadata information + * + * @author :wangchao + * @date :Created in 2022/6/14 + * @since :11 */ -@Schema(name = "表元数据信息") +@Schema(name = "Table metadata information") @Data @Accessors(chain = true) @ToString public class TableMetadata { /** - * 表名 + * tableName */ private String tableName; /** - * 表数据总量 + * Total table data */ private long tableRows; /** - * 主键列属性 + * Primary key column properties */ private List primaryMetas; /** - * 表列属性 + * Table column properties */ private List columnsMetas; diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadataHash.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadataHash.java index 9ce461aac5822c4170b2a45642715f7d3617d4bd..eac79fd7d4a94f062bfad85ceda3a834e191a40d 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadataHash.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/TableMetadataHash.java @@ -1,21 +1,43 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; import lombok.EqualsAndHashCode; import lombok.experimental.Accessors; +/** + * Table metadata hash information + * + * @author :wangchao + * @date :Created in 2022/6/14 + * @since :11 + */ @Data @EqualsAndHashCode @Accessors(chain = true) public class TableMetadataHash { /** - * 表名 + * tableName */ private String tableName; /** - * 当前记录的总体哈希值 + * Total hash value of the current record */ private long tableHash; } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/Topic.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/Topic.java index 8d89d2d28e83c890da837401b9822f79027178f2..2215a1bbb325a7dd01e1c7c576ca43387d8cd26f 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/Topic.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/entry/extract/Topic.java @@ -1,23 +1,45 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.entry.extract; import lombok.Data; import lombok.ToString; import lombok.experimental.Accessors; +/** + * Topic + * + * @author :wangchao + * @date :Created in 2022/6/14 + * @since :11 + */ @ToString @Data @Accessors(chain = true) public class Topic { /** - * 表名称 + * tableName */ private String tableName; /** - * 当前表,对应的Topic名称 + * Current table, corresponding topic name */ private String topicName; /** - * 当前表存在在Kafka Topic中的数据的分区总数 + * The total number of partitions of data in the current table in Kafka topic */ private int partitions; diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckMetaDataException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckMetaDataException.java index 4796e8bd967c8d32a251acddd503bfa279cd5e16..94fd19449887f5a910a07de2d9d6093791f74298 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckMetaDataException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckMetaDataException.java @@ -1,13 +1,29 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; /** - * 校验服务 + * Verification service * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class CheckMetaDataException extends CheckingException { + private static final long serialVersionUID = -754185878538320560L; public CheckMetaDataException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingAddressConflictException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingAddressConflictException.java index 040aacc8f256557c5702989dc76112f6c9123059..0ed3a11b69297b1197072a32817b37c50cdfea3b 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingAddressConflictException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingAddressConflictException.java @@ -1,13 +1,30 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; /** - * 校验服务 配置源端和宿端地址进行约束性检查:源端和宿端地址不能重复 + * Verify that the source and destination addresses of the service configuration are checked for constraints: + * the source and destination addresses cannot be duplicate * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class CheckingAddressConflictException extends CheckingException { + private static final long serialVersionUID = -4644169559429602053L; public CheckingAddressConflictException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingException.java index 15446a15e97b10c9b19573a86af54eb1227b179a..61a574a73e706618e98334fa5546086b7c372ef0 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingException.java @@ -1,20 +1,44 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; import java.util.Objects; +/** + * CheckingException + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ public class CheckingException extends RuntimeException { + private static final long serialVersionUID = -5335567756924351615L; private final String msg; public CheckingException(String message) { - this.msg = message; + msg = message; } + @Override public String getMessage() { String message = super.getMessage(); if (Objects.isNull(message)) { - return this.msg; + return msg; } - return this.msg + message; + return msg + message; } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingPollingException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingPollingException.java index 54bc8a2ae6b9622c9cf0566d681aeed289d25da0..078efb83500d99e09ff9d965f3cc0117569a3146 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingPollingException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CheckingPollingException.java @@ -1,13 +1,29 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; /** - * 校验服务 校验轮询异常 + * Verification service verification polling exception * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class CheckingPollingException extends CheckingException { + private static final long serialVersionUID = 6526279344405897976L; public CheckingPollingException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CommonException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CommonException.java new file mode 100644 index 0000000000000000000000000000000000000000..69c09d59886728393934e46c970b0ec64aadf643 --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CommonException.java @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.common.exception; + +/** + * CommonException + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ +public class CommonException extends RuntimeException { + private static final long serialVersionUID = 6537806426136781330L; + + public CommonException(String message) { + super(message); + } +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CreateTopicException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CreateTopicException.java index a8fc6f297de6420abe9f9c05d6490e7af30faf5e..da9593f15d2915696fb869bb0ad7e3bfffcdb3c7 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CreateTopicException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/CreateTopicException.java @@ -1,32 +1,38 @@ /* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. * - * * Copyright 2019-2020 the original author or authors. - * * - * * Licensed under the Apache License, Version 2.0 (the "License"); - * * you may not use this file except in compliance with the License. - * * You may obtain a copy of the License at - * * - * * https://www.apache.org/licenses/LICENSE-2.0 - * * - * * Unless required by applicable law or agreed to in writing, software - * * distributed under the License is distributed on an "AS IS" BASIS, - * * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * * See the License for the specific language governing permissions and - * * limitations under the License. + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. */ package org.opengauss.datachecker.common.exception; -@SuppressWarnings("serial") +/** + * CreateTopicException + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ public class CreateTopicException extends ExtractException { + private static final long serialVersionUID = -5186262422768834244L; + private final String msg; public CreateTopicException(String message) { - this.msg = message; + msg = message; } + @Override public String getMessage() { - return this.msg + "_" + super.getMessage(); + return msg + "_" + super.getMessage(); } } \ No newline at end of file diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DebeziumConfigException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DebeziumConfigException.java new file mode 100644 index 0000000000000000000000000000000000000000..d70fb9c2f58e33e659e9a9e1dea779ae4efbce5f --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DebeziumConfigException.java @@ -0,0 +1,31 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.common.exception; + +/** + * Debezium configuration error + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ +public class DebeziumConfigException extends ExtractException { + private static final long serialVersionUID = 5536708506446596642L; + + public DebeziumConfigException(String message) { + super(message); + } +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DispatchClientException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DispatchClientException.java index 0c0d9feb517d5fa9da7d8aa74929bb418769aba6..543d44cc74a768c7289ae2e81b434580e9cfdf47 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DispatchClientException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/DispatchClientException.java @@ -1,23 +1,33 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; import org.opengauss.datachecker.common.entry.enums.Endpoint; import org.springframework.lang.NonNull; /** - * 调度客户端异常 + * Scheduling client exception * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class DispatchClientException extends FeignClientException { + private static final long serialVersionUID = -7957291159832049055L; - /** - * 调度客户端异常 - * - * @param endpoint 端点 - * @param message 异常信息 - */ public DispatchClientException(@NonNull Endpoint endpoint, String message) { super(endpoint, message); } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ExtractException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ExtractException.java index 9f29c4f8a6a90ef1e51f08b1a3735b46a36e83c2..dabb5873b6c1e91b014270c4694a22d2a420d0fa 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ExtractException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ExtractException.java @@ -1,11 +1,30 @@ -package org.opengauss.datachecker.common.exception; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ -import lombok.Getter; +package org.opengauss.datachecker.common.exception; -@Getter +/** + * ExtractException + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ public class ExtractException extends RuntimeException { + private static final long serialVersionUID = 414115892399622074L; - //数据抽取服务异常 private String message = "Data extraction service exception"; public ExtractException(String message) { diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/FeignClientException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/FeignClientException.java index ed6847bf6a6c459ef3c34220e5e4ad89488a81e5..69edcd3f58e251aa5581a1006689e16ae566305a 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/FeignClientException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/FeignClientException.java @@ -1,16 +1,32 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; import org.opengauss.datachecker.common.entry.enums.Endpoint; import org.springframework.lang.NonNull; /** - * 工具FeignClient 调用异常 + * Tool feignclient call exception * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class FeignClientException extends RuntimeException { + private static final long serialVersionUID = 4698075893341122469L; public FeignClientException(@NonNull Endpoint endpoint, String message) { super(endpoint.getDescription() + " " + message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/GlobalCommonExceptionHandler.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/GlobalCommonExceptionHandler.java index cbcd7532263f96f1a7005e3b0024ce701f25aae9..4d7de20836b0e31073f2d7a83cee138f65cb0704 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/GlobalCommonExceptionHandler.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/GlobalCommonExceptionHandler.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.exception; import lombok.extern.slf4j.Slf4j; @@ -8,73 +23,119 @@ import org.springframework.web.bind.MissingServletRequestParameterException; import org.springframework.web.bind.annotation.ExceptionHandler; import org.springframework.web.method.annotation.MethodArgumentTypeMismatchException; - import javax.servlet.http.HttpServletRequest; +/** + * GlobalCommonExceptionHandler + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ @Slf4j public class GlobalCommonExceptionHandler { /** - * 缺少必要的参数 + * Missing required parameters + * + * @param request request + * @param exp exception + * @return method request result */ @ExceptionHandler(value = MissingServletRequestParameterException.class) - public Result missingParameterHandler(HttpServletRequest request, MissingServletRequestParameterException e) { - this.logError(request, e); + public Result missingParameterHandler(HttpServletRequest request, MissingServletRequestParameterException exp) { + logError(request, exp); return Result.fail(ResultEnum.PARAM_MISSING); } /** - * 参数类型不匹配 + * Parameter type mismatch + * + * @param request request + * @param exp exception + * @return method request result */ @ExceptionHandler(value = MethodArgumentTypeMismatchException.class) - public Result methodArgumentTypeMismatchException(HttpServletRequest request, MethodArgumentTypeMismatchException e) { - this.logError(request, e); + public Result methodArgumentTypeMismatchException(HttpServletRequest request, + MethodArgumentTypeMismatchException exp) { + logError(request, exp); return Result.fail(ResultEnum.PARAM_TYPE_MISMATCH); } /** - * 不支持的请求方法 + * Unsupported request method + * + * @param request request + * @param exp exception + * @return method request result */ @ExceptionHandler(value = HttpRequestMethodNotSupportedException.class) - public Result httpRequestMethodNotSupportedException(HttpServletRequest request, HttpRequestMethodNotSupportedException e) 
{ - this.logError(request, e); + public Result httpRequestMethodNotSupportedException(HttpServletRequest request, + HttpRequestMethodNotSupportedException exp) { + logError(request, exp); return Result.fail(ResultEnum.HTTP_REQUEST_METHOD_NOT_SUPPORTED_ERROR); } /** - * 参数错误 + * bad parameter + * + * @param request request + * @param exp exception + * @return method request result */ @ExceptionHandler(value = IllegalArgumentException.class) - public Result illegalArgumentException(HttpServletRequest request, IllegalArgumentException e) { - this.logError(request, e); + public Result illegalArgumentException(HttpServletRequest request, IllegalArgumentException exp) { + logError(request, exp); return Result.fail(ResultEnum.SERVER_ERROR_PRARM); } + /** + * FeignClientException + * + * @param request request + * @param exp exception + * @return method request result + */ @ExceptionHandler(value = FeignClientException.class) - public Result feignClientException(HttpServletRequest request, FeignClientException e) { - this.logError(request, e); + public Result feignClientException(HttpServletRequest request, FeignClientException exp) { + logError(request, exp); return Result.fail(ResultEnum.FEIGN_CLIENT); } + /** + * DispatchClientException + * + * @param request request + * @param exp exception + * @return method request result + */ @ExceptionHandler(value = DispatchClientException.class) - public Result dispatchClientException(HttpServletRequest request, DispatchClientException e) { - this.logError(request, e); + public Result dispatchClientException(HttpServletRequest request, DispatchClientException exp) { + logError(request, exp); return Result.fail(ResultEnum.DISPATCH_CLIENT); } /** - * 其他异常统一处理 + * Unified handling of other exceptions + * + * @param request request + * @param exp exception + * @return method request result */ @ExceptionHandler(value = Exception.class) - public Result exception(HttpServletRequest request, Exception e) { - this.logError(request, e); + 
public Result exception(HttpServletRequest request, Exception exp) { + logError(request, exp); return Result.fail(ResultEnum.SERVER_ERROR); } /** - * 记录错误日志 + * Log errors + * + * @param request request + * @param exp exception */ - protected void logError(HttpServletRequest request, Exception e) { - log.error("path:{}, queryParam:{}, errorMessage:{}", request.getRequestURI(), request.getQueryString(), e.getMessage(), e); + protected void logError(HttpServletRequest request, Exception exp) { + log.error("path:{}, queryParam:{}, errorMessage:{}", request.getRequestURI(), request.getQueryString(), + exp.getMessage(), exp); } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/LargeDataDiffException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/LargeDataDiffException.java index 7971a66c0ae35ad1fba1b3a2f8b093ae83741ae0..0cf7a64330a18b878cdce71dcdccfb489f7be20d 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/LargeDataDiffException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/LargeDataDiffException.java @@ -1,13 +1,29 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.exception; /** - * 校验服务 数据量产生过大差异,无法进行校验。 + * The data volumes in the verification service differ too much to be verified.
* * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class LargeDataDiffException extends CheckingException { + private static final long serialVersionUID = 603462988493634839L; public LargeDataDiffException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/MerkleTreeDepthException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/MerkleTreeDepthException.java index 10df9ef5c3218b761f8ae57dd76af43dd1fc1ea8..b0c41538772bfe938603b8df014825d7fac3551f 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/MerkleTreeDepthException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/MerkleTreeDepthException.java @@ -1,13 +1,30 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.exception; /** - * 校验服务 数据量产生过大差异,导致构建默克尔树高度不一致,无法进行校验。 + * There is too much difference in the amount of verification service data, + * resulting in inconsistent heights of the constructed Merkle trees, so verification cannot proceed.
* * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class MerkleTreeDepthException extends LargeDataDiffException { + private static final long serialVersionUID = -1180146612763125240L; public MerkleTreeDepthException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ProcessMultipleException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ProcessMultipleException.java index e5fc7f95011879c480beadeced6de70af212af2e..cb059692300eab953060ad5dad02d98efe546e63 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ProcessMultipleException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/ProcessMultipleException.java @@ -1,13 +1,29 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.exception; /** - * 当前实例正在执行数据抽取服务,不能重新开启新的校验。 + * The current instance is already executing the data extraction service; a new verification cannot be started.
* * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class ProcessMultipleException extends ExtractException { + private static final long serialVersionUID = -5298809357642777004L; public ProcessMultipleException(String message) { super(message); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TableNotExistException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TableNotExistException.java index 6b4be3e2812b750d382a39f219dbff7742c29db9..7116a3d36d764a15a43a42d79319010708d3f929 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TableNotExistException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TableNotExistException.java @@ -1,16 +1,30 @@ -package org.opengauss.datachecker.common.exception; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ +package org.opengauss.datachecker.common.exception; /** - * 数据抽取服务,表元数据信息不存在 + * Data extraction service, table metadata information does not exist * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class TableNotExistException extends ExtractException { - private static final String ERROR_MESSAGE = "table of Meatedata [%s] is not exist!"; - + private static final long serialVersionUID = -8904713692472534432L; + private static final String ERROR_MESSAGE = "table of Metadata [%s] does not exist!"; public TableNotExistException(String tableName) { super(String.format(ERROR_MESSAGE, tableName)); diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TaskNotFoundException.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TaskNotFoundException.java index aca470c5636cd36003df96591c6953cb3448495f..552a8b739dff990f3e21bfe8f97849b1fa85b130 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TaskNotFoundException.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/exception/TaskNotFoundException.java @@ -1,15 +1,31 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + package org.opengauss.datachecker.common.exception; import org.opengauss.datachecker.common.entry.enums.Endpoint; /** - * 数据抽取服务,未找到待执行抽取任务 + * Data extraction service, no extraction task to be executed found * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ public class TaskNotFoundException extends ExtractException { + private static final long serialVersionUID = -3242004357180803240L; private static final String ERROR_MESSAGE = "task %s is not found,please checking something error!"; private static final String ERROR_ENDPOINT_MESSAGE = "endpoint [%s] and process[%s] task is empty!"; diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ByteUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ByteUtil.java index f71a1ea973b9c11e3e43fda29208931983a567e4..1ed5d657173dc13976e282fc71e2f91aa1bab410 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ByteUtil.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ByteUtil.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.util; /** @@ -6,37 +21,30 @@ package org.opengauss.datachecker.common.util; * @since :11 */ public class ByteUtil { - + private static final LongHashFunctionWrapper HASH_UTIL = new LongHashFunctionWrapper(); + /** - * 比较两个字节数组是否一致 + * Compare whether the two byte arrays are consistent * - * @param byte1 字节数组 - * @param byte2 字节数组 + * @param byte1 byte arrays + * @param byte2 byte arrays * @return true|false */ public static boolean isEqual(byte[] byte1, byte[] byte2) { if (byte1 == null || byte2 == null || byte1.length != byte2.length) { return false; } - return HashUtil.hashBytes(byte1) == HashUtil.hashBytes(byte2); + return HASH_UTIL.hashBytes(byte1) == HASH_UTIL.hashBytes(byte2); } /** - * 将long型数字转化为byte字节数组 + * Convert a long number to a byte array * - * @param value long型数组 - * @return 字节数组 + * @param value Long type number + * @return byte array */ public static byte[] toBytes(long value) { - return new byte[]{ - (byte) (value >> 56), - (byte) (value >> 48), - (byte) (value >> 40), - (byte) (value >> 32), - (byte) (value >> 24), - (byte) (value >> 16), - (byte) (value >> 8), - (byte) value - }; + return new byte[] {(byte) (value >> 56), (byte) (value >> 48), (byte) (value >> 40), (byte) (value >> 32), + (byte) (value >> 24), (byte) (value >> 16), (byte) (value >> 8), (byte) value}; } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/EnumUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/EnumUtil.java index b84e8279ac5df0f363ee0e53229c62d2fb362d73..1717d28fe5c8a7167f8e5a38b6aea30ff70aaf3b 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/EnumUtil.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/EnumUtil.java @@ -1,33 +1,54 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. 
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.util; import org.opengauss.datachecker.common.entry.enums.IEnum; +/** + * EnumUtil + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ public class EnumUtil { /** - * 获取枚举 + * Returns the elements of this enum class or null if this Class object does not represent an enum type * - * @param clazz - * @param code - * @return + * @param clazz clazz + * @param code code + * @param clazz type + * @return enum elements */ public static T valueOfIgnoreCase(Class clazz, String code) { return valueOf(clazz, code, true); } /** - * 获取枚举,区分大小写 + * Returns the elements of this enum class or null if this Class object does not represent an enum type * - * @param clazz - * @param code - * @param isIgnore - * @return + * @param clazz clazz + * @param code code + * @param isIgnore isIgnore + * @param clazz type + * @return enum elements */ public static T valueOf(Class clazz, String code, boolean isIgnore) { - - //得到values T[] enums = values(clazz); - if (enums == null || enums.length == 0) { return null; } @@ -41,29 +62,30 @@ public class EnumUtil { } return null; } + /** - * 获取枚举,区分大小写 + * Returns the elements of this enum class or null if this Class object does not represent an enum type * - * @param clazz - * @param code - * @return + * @param clazz clazz + * @param code code + * @param clazz type + * @return enum elements */ public static T valueOf(Class clazz, String code) { return valueOf(clazz, code, false); } /** - * 获取枚举集合 + * Returns the elements of this enum class 
or null if this Class object does not represent an enum type * - * @param clazz - * @return + * @param clazz clazz + * @param clazz type + * @return enum */ public static T[] values(Class clazz) { if (!clazz.isEnum()) { - throw new IllegalArgumentException("Class[" + clazz + "]不是枚举类型"); + throw new IllegalArgumentException("Class[" + clazz + "] is not an enumeration type"); } - //得到values return clazz.getEnumConstants(); } - } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/FileUtils.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/FileUtils.java index fde41372bd1d1c5f76391a027f1e9305bc32405b..6c08131fe3ac6edbe668aca3e4e13a9be3cfa9a0 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/FileUtils.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/FileUtils.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.util; import lombok.extern.slf4j.Slf4j; @@ -12,13 +27,19 @@ import java.util.List; import java.util.Set; /** + * FileUtils + * * @author :wangchao * @date :Created in 2022/5/23 * @since :11 */ @Slf4j public class FileUtils { - + /** + * Creates a directory by creating all nonexistent parent directories first. 
+ * + * @param path path + */ public static void createDirectories(String path) { File file = new File(path); if (!file.exists()) { @@ -30,6 +51,13 @@ public class FileUtils { } } + /** + * Write lines of text to a file. Characters are encoded into bytes using the UTF-8 charset. + * This method works as if invoking it were equivalent to evaluating the expression: + * + * @param filename filename + * @param content content + */ public static void writeAppendFile(String filename, List content) { try { Files.write(Paths.get(filename), content, StandardOpenOption.APPEND, StandardOpenOption.CREATE); @@ -38,7 +66,12 @@ public class FileUtils { } } - + /** + * Write lines of text to a file. Characters are encoded into bytes using the UTF-8 charset. + * + * @param filename filename + * @param content content + */ public static void writeAppendFile(String filename, Set content) { try { Files.write(Paths.get(filename), content, StandardOpenOption.APPEND, StandardOpenOption.CREATE); @@ -47,9 +80,29 @@ public class FileUtils { } } + /** + * Write lines of text to a file. Characters are encoded into bytes using the UTF-8 charset. + * + * @param filename filename + * @param content content + */ public static void writeAppendFile(String filename, String content) { try { - Files.write(Paths.get(filename), content.getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND, StandardOpenOption.CREATE); + Files.write(Paths.get(filename), content.getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND, + StandardOpenOption.CREATE); + } catch (IOException e) { + log.error("file write error:", e); + } + } + + /** + * Deletes a file if it exists. 
+ * + * @param filename filename + */ + public static void deleteFile(String filename) { + try { + Files.deleteIfExists(Paths.get(filename)); } catch (IOException e) { log.error("file write error:", e); } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdGenerator.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdGenerator.java new file mode 100644 index 0000000000000000000000000000000000000000..72b9b8c1152f26e1c9ee046e2c73e183ae8bf12b --- /dev/null +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdGenerator.java @@ -0,0 +1,253 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.common.util; + +import lombok.extern.slf4j.Slf4j; + +import java.lang.management.ManagementFactory; +import java.net.InetAddress; +import java.net.NetworkInterface; +import java.net.SocketException; +import java.net.UnknownHostException; +import java.util.Locale; + +/** + *
+ * Description: self-growing ID generator, a Java implementation of the snowflake algorithm.
+ * The core logic is implemented in the AsSnowflakeIdGenerator class; its structure is as follows.
+ * Each 0 below represents one bit, and - separates the functional parts:
+ * 1||0---0000000000 0000000000 0000000000 0000000000 0 --- 00000 ---00000 ---000000000000
+ * In the above string, the first bit is unused (in fact, it could also serve as the sign bit of the long),
+ * the next 41 bits are the millisecond timestamp, followed by 5 bits of data service ID,
+ * 5 bits of machine ID (not a real machine identifier, but in practice a process identifier),
+ * and then a 12-bit sequence counter within the current millisecond,
+ * which adds up to exactly 64 bits, i.e. one long value.
+ * The advantage is that, overall, IDs are ordered by increasing time, and no ID collisions
+ * occur across the whole distributed system (IDs are distinguished by data service and machine ID).
+ * In testing, this snowflake implementation generates about 260,000 IDs per second,
+ * which fully meets the requirements.
+ *
+ * 64-bit ID (41 (ms) + 5 (data service ID) + 5 (machine ID) + 12 (sequence counter))
+ * 
+ * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ +@Slf4j +public class IdGenerator { + /** + * Get ID auto increment sequence + * + * @return Return to next ID + */ + public static long nextId() { + return AsSnowflakeIdGenerator.nextId(); + } + + /** + * 36 hexadecimal string ID + * + * @return 36 hexadecimal string ID + */ + public static String nextId36() { + return Long.toString(AsSnowflakeIdGenerator.nextId(), Character.MAX_RADIX).toUpperCase(Locale.ROOT); + } + + /** + * Get the autoincrement sequence with the specified prefix + * + * @param prefix Self increasing sequence prefix + * @return Next string ID with the specified prefix + */ + public static String nextId(String prefix) { + return prefix + AsSnowflakeIdGenerator.nextId(); + } + + /** + * Gets the hexadecimal autoincrement sequence with the specified prefix + * + * @param prefix Self increasing sequence prefix + * @return Next string ID with the specified prefix + */ + public static String nextId36(String prefix) { + return prefix + nextId36(); + } + + private static class AsSnowflakeIdGenerator { + /** + * The starting mark point of time, as the benchmark, + * generally takes the latest time of the system (once it is determined, it cannot be changed) + */ + private static final long BENCHMARK = 1653824897654L; + + /** + * Machine identification digit + */ + private static final long MACHINE_ID_BITS = 5L; + + /** + * Data service identification digit + */ + private static final long DATA_SERVICE_ID_BITS = 5L; + + /** + * Machine ID Max 31 + */ + private static final long MAX_WORKER_MACHINE_ID = ~(-1L << MACHINE_ID_BITS); + + /** + * Maximum data service ID 31 + */ + private static final long MAX_DATA_SERVICE_ID = ~(-1L << DATA_SERVICE_ID_BITS); + + /** + * Self increment in milliseconds + */ + private static final long SELF_INCREMENT_SEQUENCE_BITS = 12L; + + /** + * Machine ID shifts 12 bits to the left + */ + private static final long WORKER_ID_SHIFT = 
SELF_INCREMENT_SEQUENCE_BITS; + + /** + * Data service ID shifts 17 bits left + */ + private static final long DATA_SERVICE_ID_SHIFT = SELF_INCREMENT_SEQUENCE_BITS + MACHINE_ID_BITS; + + /** + * Shift 22 bits left in milliseconds + */ + private static final long TIMESTAMP_LEFT_SHIFT = + SELF_INCREMENT_SEQUENCE_BITS + MACHINE_ID_BITS + DATA_SERVICE_ID_BITS; + + /** + * max self increment sequence is 4095} + */ + private static final long SEQUENCE_MASK = ~(-1L << SELF_INCREMENT_SEQUENCE_BITS); + + /** + * single instance + */ + private static final AsSnowflakeIdGenerator ID_GENERATOR = new AsSnowflakeIdGenerator(); + + /** + * Last production ID timestamp + */ + private static long lastTimeMillis = -1L; + + private final long generatorId; + + /** + * Data identification ID part + */ + private final long dataServiceId; + + /** + * 0,Concurrency control + */ + private long sequence = 0L; + + private AsSnowflakeIdGenerator() { + dataServiceId = getDataServiceId(); + generatorId = getMaxGeneratorId(dataServiceId); + } + + /** + * Get ID auto increment sequence + * + * @return Return to next ID + */ + public static long nextId() { + return ID_GENERATOR.next(); + } + + private long getDataServiceId() { + long serviceId = 0L; + try { + NetworkInterface network = NetworkInterface.getByInetAddress(InetAddress.getLocalHost()); + if (network == null || network.getHardwareAddress() == null) { + serviceId = 1L; + } else { + byte[] macAddress = network.getHardwareAddress(); + serviceId = ((0x000000FF & (long) macAddress[macAddress.length - 1]) | (0x0000FF00 & ( + ((long) macAddress[macAddress.length - 2]) << 8))) >> 6; + serviceId = serviceId % (MAX_DATA_SERVICE_ID + 1); + } + } catch (SocketException | UnknownHostException e) { + log.error(" getDataServiceId: {}", e.getMessage()); + } + return serviceId; + } + + private long getMaxGeneratorId(long dataServiceId) { + StringBuffer jvmPid = new StringBuffer(); + jvmPid.append(dataServiceId); + String jvmName = 
ManagementFactory.getRuntimeMXBean().getName(); + if (!jvmName.isEmpty()) { + jvmPid.append(jvmName.split("@")[0]); + } + return (jvmPid.toString().hashCode() & 0xffff) % (MAX_WORKER_MACHINE_ID + 1); + } + + /** + * Get next ID + * + * @return Return to next ID + */ + private synchronized long next() { + long currentTimestamp = currentTimeMillis(); + if (currentTimestamp < lastTimeMillis) { + throw new ClockMovedBackwardException(String + .format(Locale.ROOT, "Clock moved backwards. Refusing to generate id for %d milliseconds", + lastTimeMillis - currentTimestamp)); + } + + if (lastTimeMillis == currentTimestamp) { + sequence = (sequence + 1) & SEQUENCE_MASK; + if (sequence == 0) { + currentTimestamp = nextMillis(lastTimeMillis); + } + } else { + sequence = 0L; + } + lastTimeMillis = currentTimestamp; + return ((currentTimestamp - BENCHMARK) << TIMESTAMP_LEFT_SHIFT) | (dataServiceId << DATA_SERVICE_ID_SHIFT) + | (generatorId << WORKER_ID_SHIFT) | sequence; + } + + private long nextMillis(final long lastTimeMillis) { + long timeMillis = currentTimeMillis(); + while (timeMillis <= lastTimeMillis) { + timeMillis = currentTimeMillis(); + } + return timeMillis; + } + + private long currentTimeMillis() { + return System.currentTimeMillis(); + } + } + + static class ClockMovedBackwardException extends RuntimeException { + private static final long serialVersionUID = -382053228395414722L; + + public ClockMovedBackwardException(String message) { + super(message); + } + } +} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdWorker.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdWorker.java deleted file mode 100644 index e030f47c147fdffbd40dcc1258899ae9dd4a95ed..0000000000000000000000000000000000000000 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/IdWorker.java +++ /dev/null @@ -1,224 +0,0 @@ -package org.opengauss.datachecker.common.util; - -import 
java.lang.management.ManagementFactory; -import java.net.InetAddress; -import java.net.NetworkInterface; - -/** - *

描述:自增长ID Snowflake 雪花算法 Java实现

- * 核心代码为其IdWorker这个类实现,其原理结构如下,我分别用一个0表示一位,用—分割开部分的作用: - * 1||0---0000000000 0000000000 0000000000 0000000000 0 --- 00000 ---00000 ---000000000000 - * 在上面的字符串中,第一位为未使用(实际上也可作为long的符号位),接下来的41位为毫秒级时间, - * 然后5位data center标识位,5位机器ID(并不算标识符,实际是为线程标识), - * 然后12位该毫秒内的当前毫秒内的计数,加起来刚好64位,为一个Long型。 - * 这样的好处是,整体上按照时间自增排序,并且整个分布式系统内不会产生ID碰撞(由datacenter和机器ID作区分), - * 并且效率较高,经测试,snowflake每秒能够产生26万ID左右,完全满足需要。 - *

- * 64位ID (42(毫秒)+5(机器ID)+5(业务编码)+12(重复累加)) - */ -public class IdWorker { - /** - * 时间起始标记点,作为基准,一般取系统的最近时间(一旦确定不能变动) - */ - private final static long BENCHMARK = 1653824897654L; - /** - * 机器标识位数 - */ - private final static long WORKER_ID_BITS = 5L; - /** - * 数据中心标识位数 - */ - private final static long DATACENTER_ID_BITS = 5L; - /** - * 机器ID最大值 - */ - private final static long MAX_WORKER_ID = ~(-1L << WORKER_ID_BITS); - /** - * 数据中心ID最大值 - */ - private final static long MAX_DATA_CENTER_ID = ~(-1L << DATACENTER_ID_BITS); - /** - * 毫秒内自增位 - */ - private final static long SEQUENCE_BITS = 12L; - /** - * 机器ID偏左移12位 - */ - private final static long WORKER_ID_SHIFT = SEQUENCE_BITS; - /** - * 数据中心ID左移17位 - */ - private final static long DATA_CENTER_ID_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS; - /** - * 时间毫秒左移22位 - */ - private final static long TIMESTAMP_LEFT_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS + DATACENTER_ID_BITS; - - private final static long SEQUENCE_MASK = ~(-1L << SEQUENCE_BITS); - /** - * 上次生产id时间戳 - */ - private static long lastTimestamp = -1L; - /** - * 0,并发控制 - */ - private volatile long sequence = 0L; - - private final long workerId; - /** - * 数据标识id部分 - */ - private final long dataCenterId; - - private static final Object LOCK = new Object(); - - private static IdWorker idWorker; - - public IdWorker() { - this.dataCenterId = getDataCenterId(MAX_DATA_CENTER_ID); - this.workerId = getMaxWorkerId(dataCenterId, MAX_WORKER_ID); - } - - /** - * @param workerId * 工作机器ID - * @param dataCenterId * 序列号 - */ - public IdWorker(long workerId, long dataCenterId) { - if (workerId > MAX_WORKER_ID || workerId < 0) { - throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", MAX_WORKER_ID)); - } - if (dataCenterId > MAX_DATA_CENTER_ID || dataCenterId < 0) { - throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", MAX_DATA_CENTER_ID)); - } - this.workerId = workerId; - this.dataCenterId 
= dataCenterId; - } - - public static IdWorker getInstance() { - if (idWorker != null) { - return idWorker; - } else { - synchronized (LOCK) { - if (idWorker == null) { - idWorker = new IdWorker(); - } - } - } - return idWorker; - } - - /** - * 获取ID自增序列 - * - * @return 返回下一个ID - */ - public static long nextId() { - return getInstance().next(); - } - - /** - * 36进制字符串ID - * - * @return 36进制字符串ID - */ - public static String nextId36() { - return Long.toString(getInstance().next(), Character.MAX_RADIX).toUpperCase(); - } - - /** - * 获取带有指定前缀的自增序列 - * - * @param prefix 自增序列前缀 - * @return 带指定前缀的下一个字符串ID - */ - public static String nextId(String prefix) { - return prefix + getInstance().next(); - } - - /** - * 获取下一个ID - * - * @return 返回下一个ID - */ - private synchronized long next() { - long timestamp = timeGen(); - if (timestamp < lastTimestamp) { - throw new RuntimeException(String.format("Clock moved backwards. Refusing to generate id for %d milliseconds", lastTimestamp - timestamp)); - } - - if (lastTimestamp == timestamp) { - // 当前毫秒内,则+1 - sequence = (sequence + 1) & SEQUENCE_MASK; - if (sequence == 0) { - // 当前毫秒内计数满了,则等待下一秒 - timestamp = tilNextMillis(lastTimestamp); - } - } else { - sequence = 0L; - } - lastTimestamp = timestamp; - // ID偏移组合生成最终的ID,并返回ID - long nextGenId = ((timestamp - BENCHMARK) << TIMESTAMP_LEFT_SHIFT) - | (dataCenterId << DATA_CENTER_ID_SHIFT) - | (workerId << WORKER_ID_SHIFT) | sequence; - - return nextGenId; - } - - private long tilNextMillis(final long lastTimestamp) { - long timestamp = this.timeGen(); - while (timestamp <= lastTimestamp) { - timestamp = this.timeGen(); - } - return timestamp; - } - - private long timeGen() { - return System.currentTimeMillis(); - } - - /** - *

- * Get maxWorkerId - *

- */ - protected static long getMaxWorkerId(long datacenterId, long maxWorkerId) { - StringBuffer mpid = new StringBuffer(); - mpid.append(datacenterId); - String name = ManagementFactory.getRuntimeMXBean().getName(); - if (!name.isEmpty()) { - /* - * get the JVM PID - */ - mpid.append(name.split("@")[0]); - } - /* - * low 16 bits of the hashcode of MAC + PID - */ - return (mpid.toString().hashCode() & 0xffff) % (maxWorkerId + 1); - } - - /** - *

- * Data-center ID part - *

- */ - protected static long getDataCenterId(long maxDataCenterId) { - long id = 0L; - try { - InetAddress ip = InetAddress.getLocalHost(); - NetworkInterface network = NetworkInterface.getByInetAddress(ip); - if (network == null) { - id = 1L; - } else { - byte[] mac = network.getHardwareAddress(); - id = ((0x000000FF & (long) mac[mac.length - 1]) - | (0x0000FF00 & (((long) mac[mac.length - 2]) << 8))) >> 6; - id = id % (maxDataCenterId + 1); - } - } catch (Exception e) { - System.out.println(" getDataCenterId: " + e.getMessage()); - } - return id; - } -} diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/JsonObjectUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/JsonObjectUtil.java index e8fe130711487b0f59741aeba8bbcb9123a6049b..9323c74d0e7e4bf53227b0ca812c6a3baeb06aa4 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/JsonObjectUtil.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/JsonObjectUtil.java @@ -1,9 +1,26 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
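The deleted IdWorker above composes each 64-bit ID as `(timestamp - BENCHMARK) << 22 | dataCenterId << 17 | workerId << 12 | sequence`. The following is a minimal, framework-free sketch of that bit layout and its inverse; the constants mirror the deleted class, while the helper names (`compose`, `decompose`) are mine, not the project's:

```java
public class SnowflakeLayout {
    // Bit widths, mirroring the deleted IdWorker
    static final long WORKER_ID_BITS = 5L;
    static final long DATACENTER_ID_BITS = 5L;
    static final long SEQUENCE_BITS = 12L;

    static final long WORKER_ID_SHIFT = SEQUENCE_BITS;                        // 12
    static final long DATA_CENTER_ID_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS;  // 17
    static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS + DATACENTER_ID_BITS; // 22

    static final long SEQUENCE_MASK = ~(-1L << SEQUENCE_BITS);        // 0xFFF
    static final long WORKER_MASK = ~(-1L << WORKER_ID_BITS);         // 0x1F
    static final long DATACENTER_MASK = ~(-1L << DATACENTER_ID_BITS); // 0x1F

    /** Compose the 64-bit ID from its four fields (timestamp already relative to the epoch). */
    static long compose(long millisSinceEpoch, long dataCenterId, long workerId, long sequence) {
        return (millisSinceEpoch << TIMESTAMP_SHIFT)
                | (dataCenterId << DATA_CENTER_ID_SHIFT)
                | (workerId << WORKER_ID_SHIFT)
                | sequence;
    }

    /** Recover {timestamp, dataCenterId, workerId, sequence}; handy when debugging generated IDs. */
    static long[] decompose(long id) {
        return new long[] {
                id >>> TIMESTAMP_SHIFT,
                (id >>> DATA_CENTER_ID_SHIFT) & DATACENTER_MASK,
                (id >>> WORKER_ID_SHIFT) & WORKER_MASK,
                id & SEQUENCE_MASK
        };
    }
}
```

Because the shifts and masks partition the 64 bits cleanly, `decompose(compose(...))` round-trips each field, which is the property the generator's uniqueness argument rests on.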
+ */ + package org.opengauss.datachecker.common.util; import com.alibaba.fastjson.JSONObject; import com.alibaba.fastjson.serializer.SerializerFeature; import lombok.extern.slf4j.Slf4j; +import java.time.LocalDateTime; +import java.time.format.DateTimeFormatter; /** * @author :wangchao @@ -13,17 +30,30 @@ import lombok.extern.slf4j.Slf4j; @Slf4j public class JsonObjectUtil { + private static final DateTimeFormatter FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"); + /** - * 对象格式化为JSON字符串,格式化根据属性进行自动换行

+ * The object is formatted as a JSON string, + * and the formatting is automatically wrapped according to the attributes * {@code SerializerFeature.PrettyFormat}

- * {@code SerializerFeature.WriteMapNullValue} 空指针格式化

- * {@code SerializerFeature.WriteDateUseDateFormat} 日期格式化

+ * {@code SerializerFeature.WriteMapNullValue} Null pointer formatting

+ * {@code SerializerFeature.WriteDateUseDateFormat} date format

* - * @param object 格式化对象 - * @return 格式化字符串 + * @param object Formatting Objects + * @return formatting string */ public static String format(Object object) { return JSONObject.toJSONString(object, SerializerFeature.PrettyFormat, SerializerFeature.WriteMapNullValue, - SerializerFeature.WriteDateUseDateFormat); + SerializerFeature.WriteDateUseDateFormat); + } + + /** + * Localdatetime time is formatted as yyyy-MM-dd HH:mm:ss.SSS + * + * @param localDateTime time + * @return time of string + */ + public static String formatTime(LocalDateTime localDateTime) { + return FORMATTER.format(localDateTime); } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/HashUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapper.java similarity index 32% rename from datachecker-common/src/main/java/org/opengauss/datachecker/common/util/HashUtil.java rename to datachecker-common/src/main/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapper.java index bfe9892689d809686918e6848ab5e09107da9cd1..b3e0271ee9c8eedd4d2ba836106a269153d88b2e 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/HashUtil.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapper.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
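The new `formatTime` above delegates to a pre-built `DateTimeFormatter` with the pattern `yyyy-MM-dd HH:mm:ss.SSS`. A standalone sketch of the same pattern (class name and sample values are mine, for illustration only):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimeFormatDemo {
    // Same pattern as JsonObjectUtil.FORMATTER; thread-safe, so a single shared instance is fine
    private static final DateTimeFormatter FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");

    /** Format a LocalDateTime as yyyy-MM-dd HH:mm:ss.SSS. */
    static String formatTime(LocalDateTime localDateTime) {
        return FORMATTER.format(localDateTime);
    }
}
```

For example, a nanosecond field of 12_000_000 (12 ms) renders as the three-digit fraction `.012`, which is why `SSS` is used rather than a shorter fraction pattern.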
+ */ + package org.opengauss.datachecker.common.util; import net.openhft.hashing.LongHashFunction; @@ -5,45 +20,48 @@ import org.springframework.lang.NonNull; import java.nio.charset.Charset; - /** + * LongHashFunctionWrapper + * * @author :wangchao * @date :Created in 2022/5/24 * @since :11 */ -public class HashUtil { +public class LongHashFunctionWrapper { + private static final long XX3_SEED = 199972221018L; /** - * 哈希算法 + * hashing algorithm */ - private static final LongHashFunction XX_3_HASH = LongHashFunction.xx3(); + private static final LongHashFunction XX_3_HASH = LongHashFunction.xx3(XX3_SEED); /** - * 使用xx3哈希算法 对字符串进行哈希计算 + * Hash the string using the XX3 hash algorithm * - * @param input 字符串 - * @return 哈希值 + * @param input string + * @return Hash value */ - public static long hashChars(@NonNull String input) { + public long hashChars(@NonNull String input) { return XX_3_HASH.hashChars(input); } + /** - * 使用xx3哈希算法 对字节数组进行哈希计算 + * Hash the byte array using the XX3 hash algorithm * - * @param input 字节数组 - * @return 哈希值 + * @param input byte array + * @return Hash value */ - public static long hashBytes(@NonNull byte[] input) { + public long hashBytes(@NonNull byte[] input) { return XX_3_HASH.hashBytes(input); } /** - * 使用xx3哈希算法 对字符串进行哈希计算 + * Hash the string using the XX3 hash algorithm * - * @param input 字符串 - * @return 哈希值 + * @param input string + * @return Hash value */ - public static long hashBytes(@NonNull String input) { + public long hashBytes(@NonNull String input) { return XX_3_HASH.hashBytes(input.getBytes(Charset.defaultCharset())); } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/SpringUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/SpringUtil.java index 1e0501b3299d95b558e5e032bc1e415a85f8cfa7..794894782866d555ed76bc81d4686e461b7ec823 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/SpringUtil.java +++ 
b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/SpringUtil.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.util; import org.springframework.beans.BeansException; @@ -8,11 +23,14 @@ import org.springframework.stereotype.Component; @Component public class SpringUtil implements ApplicationContextAware { - /** - * 上下文对象实例 - */ private static ApplicationContext applicationContext; + /** + * set ApplicationContext + * + * @param applicationContext applicationContext + * @throws BeansException BeansException + */ @Autowired @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { @@ -20,40 +38,44 @@ public class SpringUtil implements ApplicationContextAware { } /** - * 获取applicationContext - * @return + * get ApplicationContext + * + * @return applicationContext */ public static ApplicationContext getApplicationContext() { return applicationContext; } /** - * 通过name获取 Bean. - * @param name - * @return + * Get the corresponding bean instance according to the bean name + * + * @param name bean name + * @return bean */ - public static Object getBean(String name){ + public static Object getBean(String name) { return getApplicationContext().getBean(name); } /** - * 通过class获取Bean. 
- * @param clazz - * @param - * @return + * Get the corresponding bean instance according to the clazz type + * + * @param clazz clazz + * @param clazz type + * @return bean */ - public static T getBean(Class clazz){ + public static T getBean(Class clazz) { return getApplicationContext().getBean(clazz); } /** - * 通过name,以及Clazz返回指定的Bean - * @param name - * @param clazz - * @param - * @return + * Get the corresponding bean instance according to the clazz type + * + * @param name bean name + * @param clazz clazz + * @param clazz type + * @return bean */ - public static T getBean(String name,Class clazz){ + public static T getBean(String name, Class clazz) { return getApplicationContext().getBean(name, clazz); } } \ No newline at end of file diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ThreadUtil.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ThreadUtil.java index 1428ab1ee7b2ac5b2af8f1a6c6345e670a651399..a72ed1d4204372138e4596cb8b29a23135ab5afb 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ThreadUtil.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/util/ThreadUtil.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.common.util; import lombok.extern.slf4j.Slf4j; @@ -15,9 +30,9 @@ import java.util.concurrent.TimeUnit; @Slf4j public class ThreadUtil { /** - * 线程休眠 + * Thread hibernation * - * @param millisTime 休眠时间毫秒 + * @param millisTime Sleep time MS */ public static void sleep(int millisTime) { try { @@ -28,10 +43,7 @@ public class ThreadUtil { } public static ThreadPoolExecutor newSingleThreadExecutor() { - return new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS, - new LinkedBlockingDeque<>(100), - Executors.defaultThreadFactory(), - new ThreadPoolExecutor.DiscardOldestPolicy()); - + return new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingDeque<>(100), + Executors.defaultThreadFactory(), new ThreadPoolExecutor.DiscardOldestPolicy()); } } diff --git a/datachecker-common/src/main/java/org/opengauss/datachecker/common/web/Result.java b/datachecker-common/src/main/java/org/opengauss/datachecker/common/web/Result.java index d42d6567e05f8a0c924087da3ef6b04aad4b3e22..00ebe466da67a0e875988a8c39d8f3401a7683c0 100644 --- a/datachecker-common/src/main/java/org/opengauss/datachecker/common/web/Result.java +++ b/datachecker-common/src/main/java/org/opengauss/datachecker/common/web/Result.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
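The reformatted `newSingleThreadExecutor` above pairs one worker thread with a bounded `LinkedBlockingDeque(100)` and `DiscardOldestPolicy`, so under overload the oldest queued task is silently dropped instead of the submission failing. A runnable sketch of the same configuration (the capacity parameter and `runTasks` harness are mine, for illustration):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleThreadPoolDemo {
    /** One worker, bounded deque, and drop-oldest saturation policy, as in ThreadUtil. */
    static ThreadPoolExecutor newSingleThreadExecutor(int queueCapacity) {
        return new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(queueCapacity),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.DiscardOldestPolicy());
    }

    /** Submit taskCount trivial tasks and return how many actually ran. */
    static int runTasks(int taskCount) {
        ThreadPoolExecutor pool = newSingleThreadExecutor(100);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

As long as submissions stay within the worker plus queue capacity, every task runs; only beyond that does `DiscardOldestPolicy` start evicting the head of the queue.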
+ */ + package org.opengauss.datachecker.common.web; import io.swagger.v3.oas.annotations.media.Schema; @@ -12,22 +27,21 @@ import org.opengauss.datachecker.common.entry.enums.ResultEnum; * @date :Created in 2022/5/26 * @since :11 */ -@Tag(name = "API 接口消息返回结果封装类") +@Tag(name = "API Interface message return result encapsulation class") @Data @NoArgsConstructor @AllArgsConstructor public class Result { - @Schema(name = "code", description = "消息响应码") + @Schema(name = "code", description = "Message response code") private int code; - @Schema(name = "message", description = "消息内容") + @Schema(name = "message", description = "message") private String message; - @Schema(name = "data", description = "接口返回数据") + @Schema(name = "data", description = "data") private T data; - public static Result success() { return new Result<>(ResultEnum.SUCCESS.getCode(), ResultEnum.SUCCESS.getDescription(), null); } @@ -36,7 +50,6 @@ public class Result { return new Result<>(ResultEnum.SUCCESS.getCode(), ResultEnum.SUCCESS.getDescription(), data); } - public static Result of(T data, int code, String message) { return new Result<>(code, message, data); } diff --git a/datachecker-common/src/main/resources/application.properties b/datachecker-common/src/main/resources/application.properties index 8b137891791fe96927ad78e64b0aad7bded08bdc..9ce58968b2794a58b3b3f12f6fb2b9e0562d4d46 100644 --- a/datachecker-common/src/main/resources/application.properties +++ b/datachecker-common/src/main/resources/application.properties @@ -1 +1,16 @@ +# +# Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. +# +# openGauss is licensed under Mulan PSL v2. +# You can use this software according to the terms and conditions of the Mulan PSL v2. 
+# You may obtain a copy of Mulan PSL v2 at: +# +# http://license.coscl.org.cn/MulanPSL2 +# +# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, +# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, +# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. +# See the Mulan PSL v2 for more details. +# + diff --git a/datachecker-common/src/test/java/org/opengauss/datachecker/common/DatacheckerCommonApplicationTests.java b/datachecker-common/src/test/java/org/opengauss/datachecker/common/DatacheckerCommonApplicationTests.java deleted file mode 100644 index d69b4bdbbfba386988442d98e6e8a2808e87ae83..0000000000000000000000000000000000000000 --- a/datachecker-common/src/test/java/org/opengauss/datachecker/common/DatacheckerCommonApplicationTests.java +++ /dev/null @@ -1,13 +0,0 @@ -package org.opengauss.datachecker.common; - -import org.junit.jupiter.api.Test; -import org.springframework.boot.test.context.SpringBootTest; - -@SpringBootTest -class DatacheckerCommonApplicationTests { - - @Test - void contextLoads() { - } - -} diff --git a/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/IdGeneratorTest.java b/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/IdGeneratorTest.java new file mode 100644 index 0000000000000000000000000000000000000000..552382551318b1324ce6bbc8cbe7d47b55b3db7a --- /dev/null +++ b/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/IdGeneratorTest.java @@ -0,0 +1,67 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.common.util; + +import lombok.extern.slf4j.Slf4j; +import org.junit.jupiter.api.Test; + +/** + * IdGeneratorTest + * + * @author :wangchao + * @date :Created in 2022/8/9 + * @since :11 + */ +@Slf4j +public class IdGeneratorTest { + /** + * Data center identification digit + */ + private static final long DATA_SERVICE_ID_BITS = 5L; + + /** + * Maximum data center ID + */ + private static final long MAX_DATA_CENTER_ID = ~(-1L << DATA_SERVICE_ID_BITS); + + /** + * Self increment in milliseconds + */ + private static final long SELF_INCREMENT_SEQUENCE_BITS = 12L; + private static final long SEQUENCE_MASK = ~(-1L << SELF_INCREMENT_SEQUENCE_BITS); + + @Test + void testNextId() { + log.info("" + IdGenerator.nextId()); + log.info("" + MAX_DATA_CENTER_ID); + log.info("SEQUENCE_MASK = " + SEQUENCE_MASK); + } + + @Test + void testNextId36() { + log.info(IdGenerator.nextId36()); + } + + @Test + void testNextIdPrefix() { + log.info(IdGenerator.nextId("M")); + } + + @Test + void testNextId36Prefix() { + log.info(IdGenerator.nextId36("M")); + } +} diff --git a/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/HashUtilTest.java b/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapperTest.java similarity index 43% rename from datachecker-common/src/test/java/org/opengauss/datachecker/common/util/HashUtilTest.java rename to datachecker-common/src/test/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapperTest.java index e76fb2c1551be52d9b7e1c6de1d0aaff661fd6fc..b5c6e686ab836eb4ebe30d59277357ab7b5bc6ab 100644 --- 
a/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/HashUtilTest.java +++ b/datachecker-common/src/test/java/org/opengauss/datachecker/common/util/LongHashFunctionWrapperTest.java @@ -1,13 +1,57 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.common.util; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; +import java.nio.charset.StandardCharsets; import java.util.UUID; import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.IntStream; +/** + * LongHashFunctionWrapperTest + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ +@Slf4j +class LongHashFunctionWrapperTest { + private static final LongHashFunctionWrapper HASH_UTIL = new LongHashFunctionWrapper(); + private static final int[] SCOP_BUCKET_COUNT = new int[15]; -class HashUtilTest { + static { + SCOP_BUCKET_COUNT[0] = 1 << 1; + SCOP_BUCKET_COUNT[1] = 1 << 2; + SCOP_BUCKET_COUNT[2] = 1 << 3; + SCOP_BUCKET_COUNT[3] = 1 << 4; + SCOP_BUCKET_COUNT[4] = 1 << 5; + SCOP_BUCKET_COUNT[5] = 1 << 6; + SCOP_BUCKET_COUNT[6] = 1 << 7; + SCOP_BUCKET_COUNT[7] = 1 << 8; + SCOP_BUCKET_COUNT[8] = 1 << 9; + SCOP_BUCKET_COUNT[9] = 1 << 10; + SCOP_BUCKET_COUNT[10] = 1 << 11; + SCOP_BUCKET_COUNT[11] = 1 << 12; + SCOP_BUCKET_COUNT[12] = 1 << 13; + SCOP_BUCKET_COUNT[13] = 1 << 14; + SCOP_BUCKET_COUNT[14] = 1 << 15; + } @Test void testHashBytes1() { @@ -16,76 +60,44 @@ class HashUtilTest { AtomicInteger max = new 
AtomicInteger(); int[] resultCount = new int[mod * 2]; IntStream.range(1, 100000).forEach(idx -> { - long x = HashUtil.hashBytes(UUID.randomUUID().toString().getBytes()); - int xmod = (int) (x % mod + mod); + long xHash = HASH_UTIL.hashBytes(UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8)); + int xmod = (int) (xHash % mod + mod); max.set(Math.max(xmod, max.get())); min.set(Math.min(xmod, min.get())); resultCount[xmod]++; - //System.out.println(x + " " + xmod); }); - System.out.println("min=" + min.get() + " max=" + max.get()); + log.info("min=" + min.get() + " max=" + max.get()); IntStream.range(0, mod * 2).forEach(idx -> { - System.out.println("idx=" + idx + " " + resultCount[idx]); + log.info("idx=" + idx + " " + resultCount[idx]); }); - } @Test void testHashBytes2() { - System.out.println("bucket average capacity " + 100000 / 1200); - System.out.println("merkle leaf node size " + Math.pow(2, 15)); - System.out.println("merkle leaf node size " + (1 << 15)); + log.info("bucket average capacity " + 100000 / 1200); + log.info("merkle leaf node size " + Math.pow(2, 15)); + log.info("merkle leaf node size " + (1 << 15)); } @Test void testMode() { int mod = (int) Math.pow(2, 3); - long x = HashUtil.hashBytes(UUID.randomUUID().toString().getBytes()); - int xmod = (int) (x % mod + mod); - System.out.println(" mod= " + mod); - System.out.println(x + " " + xmod); - - System.out.println(x + " (int) (x % mod ) =" + (int) (x % mod)); - System.out.println(x + " ( x & (2^n - 1) )= " + (x & (mod - 1))); - - System.out.println(x + " (int) (x % mod + mod) =" + (int) (x % mod + mod)); - System.out.println(x + " ( x & (2^n - 1) + 2^n )= " + ((x & (mod - 1)) + mod)); - - - } - - private static final int[] SCOP_BUCKET_COUNT = new int[15]; - - static { - SCOP_BUCKET_COUNT[0] = 1 << 1; - SCOP_BUCKET_COUNT[1] = 1 << 2; - SCOP_BUCKET_COUNT[2] = 1 << 3; - SCOP_BUCKET_COUNT[3] = 1 << 4; - SCOP_BUCKET_COUNT[4] = 1 << 5; - SCOP_BUCKET_COUNT[5] = 1 << 6; - SCOP_BUCKET_COUNT[6] = 
1 << 7; - SCOP_BUCKET_COUNT[7] = 1 << 8; - SCOP_BUCKET_COUNT[8] = 1 << 9; - SCOP_BUCKET_COUNT[9] = 1 << 10; - SCOP_BUCKET_COUNT[10] = 1 << 11; - SCOP_BUCKET_COUNT[11] = 1 << 12; - SCOP_BUCKET_COUNT[12] = 1 << 13; - SCOP_BUCKET_COUNT[13] = 1 << 14; - SCOP_BUCKET_COUNT[14] = 1 << 15; + long xHash = HASH_UTIL.hashBytes(UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8)); + int xmod = (int) (xHash % mod + mod); + log.info(" mod= " + mod); + log.info(xHash + " " + xmod); + log.info(xHash + " (int) (x % mod ) =" + (int) (xHash % mod)); + log.info(xHash + " ( x & (2^n - 1) )= " + (xHash & (mod - 1))); + log.info(xHash + " (int) (x % mod + mod) =" + (int) (xHash % mod + mod)); + log.info(xHash + " ( x & (2^n - 1) + 2^n )= " + ((xHash & (mod - 1)) + mod)); } @Test public void calacBucketCount() { int totalCount = 5; int bucketCount = totalCount / 5; - System.out.println(bucketCount); - int asInt = IntStream.range(0, 15) - .filter(idx -> SCOP_BUCKET_COUNT[idx] > bucketCount) - .peek(System.out::println) - .findFirst() - .orElse(15); - System.out.println(SCOP_BUCKET_COUNT[asInt]); - + log.info("" + bucketCount); + int asInt = IntStream.range(0, 15).filter(idx -> SCOP_BUCKET_COUNT[idx] > bucketCount).findFirst().orElse(15); + log.info("" + SCOP_BUCKET_COUNT[asInt]); } - } diff --git a/datachecker-extract/pom.xml b/datachecker-extract/pom.xml index 3fd952c19c2634b3e68982713d15cc005e56184e..b72f676cf2b8f27804a033d351db84dda6ebd6ce 100644 --- a/datachecker-extract/pom.xml +++ b/datachecker-extract/pom.xml @@ -1,4 +1,19 @@ + + 4.0.0 @@ -47,7 +62,10 @@ mysql mysql-connector-java - provided + + + org.opengauss + opengauss-jdbc com.alibaba @@ -83,7 +101,10 @@ org.springframework.kafka spring-kafka - + + org.apache.commons + commons-collections4 + com.google.guava guava @@ -119,10 +140,6 @@ org.projectlombok lombok - - mysql - mysql-connector-java - diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/ExtractApplication.java 
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/ExtractApplication.java index b3916c52297fb1cfd36b1fb36f914a3f31239c24..5170a2308ac34a357fde38f47d15176c8d2576f4 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/ExtractApplication.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/ExtractApplication.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract; import org.springframework.boot.SpringApplication; @@ -5,6 +20,13 @@ import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.openfeign.EnableFeignClients; import org.springframework.scheduling.annotation.EnableAsync; +/** + * ExtractApplication + * + * @author wang chao + * @date 2022/5/8 19:27 + * @since 11 + **/ @EnableAsync @EnableFeignClients(basePackages = {"org.opengauss.datachecker.extract.client"}) @SpringBootApplication diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/MetaDataCache.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/MetaDataCache.java index b4d0dfc3ebc809fd1222036b3c04eae5bd791257..f5ca87fbcef7a12178edce74bbd53770c8dbd480 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/MetaDataCache.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/MetaDataCache.java @@ -1,12 +1,31 @@ 
+/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.cache; -import com.google.common.cache.*; +import com.google.common.cache.CacheBuilder; +import com.google.common.cache.CacheLoader; +import com.google.common.cache.LoadingCache; +import com.google.common.cache.RemovalListener; import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.entry.extract.TableMetadata; import org.springframework.lang.NonNull; -import java.util.*; -import java.util.concurrent.TimeUnit; +import java.util.Map; +import java.util.Objects; +import java.util.Set; @Slf4j public class MetaDataCache { @@ -16,25 +35,22 @@ public class MetaDataCache { * Initializing the Metadata Cache Method */ public static void initCache() { - CACHE = - CacheBuilder.newBuilder() - //Set the concurrent read/write level based on the number of CPU cores; - .concurrencyLevel(Runtime.getRuntime().availableProcessors()) - // Size of the buffer pool - .maximumSize(Integer.MAX_VALUE) - // Removing a Listener - .removalListener( - (RemovalListener) remove -> log.info("cache: [{}], removed", remove)) - .recordStats() - .build( - // Method of handing a Key that does not exist - new CacheLoader<>() { - @Override - public TableMetadata load(String tableName) { - log.info("cache: [{}], does not exist", tableName); - return null; - } - }); + CACHE = CacheBuilder.newBuilder() + //Set the concurrent read/write level based on the number of CPU cores; + 
.concurrencyLevel(Runtime.getRuntime().availableProcessors()) + // Size of the buffer pool + .maximumSize(Integer.MAX_VALUE) + // Removing a Listener + .removalListener((RemovalListener) remove -> log + .debug("cache: [{}], removed", remove.getKey())).recordStats().build( + // Method of handing a Key that does not exist + new CacheLoader<>() { + @Override + public TableMetadata load(String tableName) { + log.info("cache: [{}], does not exist", tableName); + return null; + } + }); log.info("initialize table meta data cache"); } @@ -46,7 +62,6 @@ public class MetaDataCache { */ public static void put(@NonNull String key, TableMetadata value) { try { - log.info("put in cache:[{}]-[{}]", key, value); CACHE.put(key, value); } catch (Exception exception) { log.error("put in cache exception ", exception); @@ -61,7 +76,6 @@ public class MetaDataCache { public static void putMap(@NonNull Map map) { try { CACHE.putAll(map); - map.forEach((key, value) -> log.debug("batch cache deposit:[{},{}]", key, value)); } catch (Exception exception) { log.error("batch storage cache exception", exception); } @@ -81,6 +95,21 @@ public class MetaDataCache { } } + /** + * Check whether the specified key is in the cache + * + * @param key table name as cached key + * @return result + */ + public static boolean containsKey(String key) { + try { + return Objects.nonNull(CACHE.get(key)); + } catch (Exception exception) { + log.error("get cache exception", exception); + return false; + } + } + /** * Obtains all cached key sets * diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/TableExtractStatusCache.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/TableExtractStatusCache.java index 3c2ed3dfd8fbb39563def8bc23ace5812984543e..129dba6d6c706705f84de253d74a8bbc31fb49ed 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/TableExtractStatusCache.java +++ 
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/cache/TableExtractStatusCache.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.cache; import lombok.extern.slf4j.Slf4j; @@ -44,7 +59,6 @@ public class TableExtractStatusCache { */ private static final Map> TABLE_EXTRACT_STATUS_MAP = new ConcurrentHashMap<>(); - /** * Table data extraction task status initialization. {code map} is a set of table decomposition tasks. * @@ -54,16 +68,14 @@ public class TableExtractStatusCache { Assert.isTrue(Objects.nonNull(map), Message.INIT_STATUS_PARAM_EMPTY); map.forEach((table, taskCount) -> { Map tableStatus = new ConcurrentHashMap<>(); - IntStream.rangeClosed(TASK_ORDINAL_START_INDEX, taskCount) - .forEach(ordinal -> { - tableStatus.put(ordinal, STATUS_INIT); - }); + IntStream.rangeClosed(TASK_ORDINAL_START_INDEX, taskCount).forEach(ordinal -> { + tableStatus.put(ordinal, STATUS_INIT); + }); TABLE_EXTRACT_STATUS_MAP.put(table, tableStatus); }); log.info(Message.INIT_STATUS); } - /** * Updates the execution status of a task in a specified table. * @@ -73,16 +85,19 @@ public class TableExtractStatusCache { public static synchronized void update(@NonNull String tableName, Integer ordinal) { try { // the table must exist. 
- Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); + Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), + String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); // Obtain the status information corresponding to the current table // and verify the validity of the task status parameters to be updated. Map tableStatus = TABLE_EXTRACT_STATUS_MAP.get(tableName); - Assert.isTrue(tableStatus.containsKey(ordinal), String.format(Message.TABLE_ORDINAL_NOT_EXIST, tableName, ordinal)); + Assert.isTrue(tableStatus.containsKey(ordinal), + String.format(Message.TABLE_ORDINAL_NOT_EXIST, tableName, ordinal)); // update status tableStatus.put(ordinal, STATUS_COMPLATE); - log.info("update tableName : {}, ordinal : {} check completed-status {}", tableName, ordinal, STATUS_COMPLATE); + log.info("update tableName : {}, ordinal : {} check completed-status {}", tableName, ordinal, + STATUS_COMPLATE); } catch (Exception exception) { log.error(Message.UPDATE_STATUS_EXCEPTION, exception); } @@ -92,14 +107,31 @@ public class TableExtractStatusCache { * data extraction status cache message management */ interface Message { - String TABLE_STATUS_NOT_EXIST = "The status information of the current table {%s} does not exist. Please initialize it and update it again."; - String TABLE_ORDINAL_NOT_EXIST = "The current table {%s} sequence {%s} task status information does not exist. Please initialize it and update it again."; + /** + * data extraction status message :table not exist + */ + String TABLE_STATUS_NOT_EXIST = "The status information of the current table {%s} does not exist. " + + "Please initialize it and update it again."; + /** + * data extraction status message :table ordinal not exist + */ + String TABLE_ORDINAL_NOT_EXIST = "The current table {%s} sequence {%s} task status information does not exist." 
+ + " Please initialize it and update it again."; + /** + * data extraction status message :update table status exception + */ String UPDATE_STATUS_EXCEPTION = "Failed to update the task status of the specified table."; + /** + * data extraction status message :Initializing the data extraction task status + */ String INIT_STATUS = "Initializing the data extraction task status."; - String INIT_STATUS_PARAM_EMPTY = "The initialization parameter of the data extraction task status cannot be empty."; + /** + * data extraction status message :initialization parameter of extraction task status cannot be empty + */ + String INIT_STATUS_PARAM_EMPTY = + "The initialization parameter of the data extraction task status cannot be empty."; } - /** * Check whether the execution status of all tasks corresponding to the current table is complete. * if true is returned ,all task are completed. @@ -107,9 +139,10 @@ public class TableExtractStatusCache { * * @param tableName table name */ - public static boolean checkComplated(@NonNull String tableName) { + public static boolean checkCompleted(@NonNull String tableName) { // check whether the table name exists. - Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); + Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), + String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); return !TABLE_EXTRACT_STATUS_MAP.get(tableName).containsValue(STATUS_INIT); } @@ -127,14 +160,15 @@ public class TableExtractStatusCache { * @param ordinal sequence number of a table splitting task. * @return */ - public static boolean checkComplated(@NonNull String tableName, int ordinal) { + public static boolean checkCompleted(@NonNull String tableName, int ordinal) { // check whether the table name exists. 
- Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); + Assert.isTrue(TABLE_EXTRACT_STATUS_MAP.containsKey(tableName), + String.format(Message.TABLE_STATUS_NOT_EXIST, tableName)); Map tableStatus = TABLE_EXTRACT_STATUS_MAP.get(tableName); - long noComplated = IntStream.range(TASK_ORDINAL_START_INDEX, ordinal) - .filter(idx -> Objects.equals(tableStatus.get(idx), STATUS_INIT)).count(); - log.info("tableName : {}, ordinal : {} check noComplated=[{}]", tableName, ordinal, noComplated); - return noComplated == 0; + long noCompleted = IntStream.range(TASK_ORDINAL_START_INDEX, ordinal) + .filter(idx -> Objects.equals(tableStatus.get(idx), STATUS_INIT)).count(); + log.info("tableName : {}, ordinal : {} check noCompleted=[{}]", tableName, ordinal, noCompleted); + return noCompleted == 0; } /** @@ -161,6 +195,11 @@ public class TableExtractStatusCache { } } + /** + * extract table + * + * @return extract table + */ public static Set getAllKeys() { try { return TABLE_EXTRACT_STATUS_MAP.keySet(); diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/client/CheckingFeignClient.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/client/CheckingFeignClient.java index 63f8ee6be0a3a796cf6cb14aabf99ad67c8cdc2e..8a229ee537872cfd362ace4ca81b03da0eca05fa 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/client/CheckingFeignClient.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/client/CheckingFeignClient.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.client; import org.opengauss.datachecker.common.entry.enums.Endpoint; @@ -35,7 +50,7 @@ public interface CheckingFeignClient { * @param endpoint endpoint enum type {@link org.opengauss.datachecker.common.entry.enums.Endpoint} */ @PostMapping("/table/extract/status") - void refushTableExtractStatus(@RequestParam(value = "tableName") @NotEmpty String tableName, + void refreshTableExtractStatus(@RequestParam(value = "tableName") @NotEmpty String tableName, @RequestParam(value = "endpoint") @NonNull Endpoint endpoint); /** diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java index d76f78d088afdbc5dce2f2ff0d76dd4e805e01eb..21d254b0756fb0f33658491ceadd9973a8f614af 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.config; import org.springframework.context.annotation.Bean; @@ -18,17 +33,17 @@ public class AsyncConfig { public ThreadPoolTaskExecutor doAsyncExecutor() { ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor(); // Number of core threads, which is the number of threads initialized when the thread pool is created. - executor.setCorePoolSize(Runtime.getRuntime().availableProcessors() * 2); + executor.setCorePoolSize(Runtime.getRuntime().availableProcessors()); // Maximum number of threads, maximum number of threads in the thread pool. - executor.setMaxPoolSize(Runtime.getRuntime().availableProcessors() * 4); + executor.setMaxPoolSize(Runtime.getRuntime().availableProcessors()); // Buffer queue: A queue used to buffer execution tasks. - executor.setQueueCapacity(Integer.MAX_VALUE); + executor.setQueueCapacity(Integer.MAX_VALUE / 100); // Allow thread idle time. executor.setKeepAliveSeconds(60); // Allow Core Thread Timeout Shutdown executor.setAllowCoreThreadTimeOut(true); // Thread pool thread name prefix - executor.setThreadNamePrefix("extract-thread"); + executor.setThreadNamePrefix("EXTRACT_"); // Deny policy executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy()); executor.initialize(); diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java index 569b9022fe5b50dc0d3ccd1ce7e9ffc402169dbd..88bcddbdcaec0bd7e8efdf6d5aa791b2189b9e94 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. 
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import com.alibaba.druid.pool.DruidDataSource; @@ -9,29 +24,26 @@ import org.springframework.boot.web.servlet.FilterRegistrationBean; import org.springframework.boot.web.servlet.ServletRegistrationBean; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; -import org.springframework.context.annotation.Primary; import org.springframework.jdbc.core.JdbcTemplate; -import javax.annotation.PostConstruct; import javax.sql.DataSource; import java.util.Arrays; import java.util.HashMap; import java.util.Map; +/** + * DruidDataSourceConfig + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ @Configuration public class DruidDataSourceConfig { - /** - *

-     *  Add custom Druid data sources to the container,no longer let Spring boot automatically create them.
-     *  Bind the druid data source adttributes in the global configuration file to the com.alibaba.druid.pool.DruidDataSource
-     *  to make them take effect.
-     *  {@code @ConfigurationProperties(prefix="spring.datasource.druid.datasourceone")}: Injects the attribute value
-     *  prefixed with spring.datasource in the global configuration file to the com.alibaba.druid.pool.DruidDataSource
-     *  parameter with the same name.
-     *  
+ * build extract DruidDataSource * - * @return + * @return DruidDataSource */ @Bean("dataSourceOne") @ConfigurationProperties(prefix = "spring.datasource.druid.datasourceone") @@ -39,7 +51,12 @@ public class DruidDataSourceConfig { return new DruidDataSource(); } - + /** + * build extract JdbcTemplate + * + * @param dataSourceOne DataSource + * @return JdbcTemplate + */ @Bean("jdbcTemplateOne") public JdbcTemplate jdbcTemplateOne(@Qualifier("dataSourceOne") DataSource dataSourceOne) { return new JdbcTemplate(dataSourceOne); @@ -50,26 +67,15 @@ public class DruidDataSourceConfig { * Configure the servlet of the Druid monitoring management background. * There is no web.xml file when the servlet container is built in. Therefore ,the servlet registration mode of * Spring Boot is used. - * Startup access address : http://localhost:8080/druid/api.html * * @return return ServletRegistrationBean */ @Bean public ServletRegistrationBean initServletRegistrationBean() { - ServletRegistrationBean bean = - new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*"); - // Configuring the account and password + new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*"); HashMap initParameters = new HashMap<>(); - // Add configuration - // the login key is a fixed loginUsername loginPassword - initParameters.put("loginUsername", "admin"); - initParameters.put("loginPassword", "123456"); - - // if the second parameter is empty,everyone can access it. initParameters.put("allow", ""); - - // Setting initialization parameters bean.setInitParameters(initParameters); return bean; } @@ -85,13 +91,13 @@ public class DruidDataSourceConfig { FilterRegistrationBean bean = new FilterRegistrationBean(); bean.setFilter(new WebStatFilter()); - //exclusions: sets the requests to be filtered out so that statistics are not collected. + // exclusions: sets the requests to be filtered out so that statistics are not collected. Map initParams = new HashMap<>(); // this things don't count. 
initParams.put("exclusions", "*.js,*.css,/druid/*,/jdbc/*"); bean.setInitParameters(initParams); - //"/*" indicates that all requests are filtered. + // "/*" indicates that all requests are filtered. bean.setUrlPatterns(Arrays.asList("/*")); return bean; } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java index 73daafff6c8e3e9e03d20ac015c9782708eb1149..faaba159e10ed8da046232dcfc642a22eba1f703 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.config; import lombok.extern.slf4j.Slf4j; @@ -15,13 +30,14 @@ import javax.annotation.PostConstruct; @Slf4j @Component public class ExtractConfig { - @Autowired private ExtractProperties extractProperties; + /** + * Start loading check config properties + */ @PostConstruct public void initLoad() { log.info("check config properties [{}]", JsonObjectUtil.format(extractProperties)); } - } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractProperties.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractProperties.java index 9e3fc37c6a6fca1feca462246a07f7e6e49bc259..c2a6c56cdc8de7d96c5210bcabe61fa1ba5032e9 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractProperties.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/ExtractProperties.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.config; import com.alibaba.fastjson.annotation.JSONType; diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/GlobalExtractExceptionHandler.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/GlobalExtractExceptionHandler.java index b982531c89a4beb57e58e98843d8afe0a4b1ac64..69c8268f1bda8d247fda8b53d26446233b6dbf50 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/GlobalExtractExceptionHandler.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/GlobalExtractExceptionHandler.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import lombok.extern.slf4j.Slf4j; diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaConsumerConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaConsumerConfig.java index 6bca578b02952ae0958040d611771f0a4cfc6045..7319cb4d2365be19fe20af8786b9bfe163a4602b 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaConsumerConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaConsumerConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. 
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import lombok.extern.slf4j.Slf4j; @@ -11,13 +26,14 @@ import org.springframework.boot.autoconfigure.kafka.KafkaProperties; import org.springframework.boot.context.properties.EnableConfigurationProperties; import org.springframework.stereotype.Component; -import java.util.List; import java.util.Map; import java.util.Objects; import java.util.Properties; import java.util.concurrent.ConcurrentHashMap; /** + * KafkaConsumerConfig + * * @author :wangchao * @date :Created in 2022/5/17 * @since :11 @@ -56,36 +72,24 @@ public class KafkaConsumerConfig { } public KafkaConsumer getDebeziumConsumer(IncrementCheckTopic topic) { - // configuration information Properties props = new Properties(); - // kafka server address - props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, String.join(ExtConstants.DELIMITER, properties.getBootstrapServers())); - // consumer group must be specified + props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, + String.join(ExtConstants.DELIMITER, properties.getBootstrapServers())); props.put(ConsumerConfig.GROUP_ID_CONFIG, topic.getGroupId()); - // if there are committed offsets in each partition,consumption starts from the submitted offsets. - // when there is no submitted offset,consumption is started from the beginning props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getConsumer().getAutoOffsetReset()); - // sets the serialization processing class for data keys and values. 
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); - // creating a kafka consumer instance return new KafkaConsumer<>(props); } private KafkaConsumer buildKafkaConsumer() { - // configuration information Properties props = new Properties(); - // kafka server address - props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, String.join(ExtConstants.DELIMITER, properties.getBootstrapServers())); - // consumer group must be specified + props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, + String.join(ExtConstants.DELIMITER, properties.getBootstrapServers())); props.put(ConsumerConfig.GROUP_ID_CONFIG, properties.getConsumer().getGroupId()); - // if there are committed offsets in each partition,consumption starts from the submitted offsets. - // when there is no submitted offset,consumption is started from the beginning props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, properties.getConsumer().getAutoOffsetReset()); - // sets the serialization processing class for data keys and values. props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); - // creating a kafka consumer instance return new KafkaConsumer<>(props); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaProducerConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaProducerConfig.java index a570dc6f2690d0eab14261542f1d0a3c308f8152..62fa5bc7c5c80b8e9fe10d7e63f8b05b2f3f6e91 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaProducerConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/KafkaProducerConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. 
+ * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import lombok.extern.slf4j.Slf4j; diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java index e1fd4f93de78bebf0536aab60956eafab2ed8293..98de42b5ce9fff3280fa5ebfe2b308593b5de537 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java @@ -1,9 +1,26 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.config; import io.swagger.v3.oas.models.OpenAPI; import io.swagger.v3.oas.models.info.Info; import io.swagger.v3.oas.models.parameters.HeaderParameter; +import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.reflect.FieldUtils; +import org.opengauss.datachecker.common.exception.CommonException; import org.springdoc.core.customizers.OpenApiCustomiser; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @@ -17,57 +34,59 @@ import java.util.List; /** * swagger2 configuration - * http://localhost:8080/swagger-ui/index.html * * @author :wangchao * @date :Created in 2022/5/17 * @since :11 */ - -/** - * 2021/8/13 - */ - +@Slf4j @Configuration public class SpringDocConfig implements WebMvcConfigurer { + /** + * mallTinyOpenAPI + * + * @return OpenAPI + */ @Bean public OpenAPI mallTinyOpenAPI() { - return new OpenAPI() - .info(new Info() - .title("Data extraction") - .description("Data validation tool data extraction API") - .version("v1.0.0")); + return new OpenAPI().info( + new Info().title("Data extraction").description("Data validation tool data extraction API") + .version("v1.0.0")); } /** - * add global request header parameters. + * customerGlobalHeader OpenApiCustomiser + * + * @return OpenApiCustomiser */ @Bean public OpenApiCustomiser customerGlobalHeaderOpenApiCustomiser() { return openApi -> openApi.getPaths().values().stream().flatMap(pathItem -> pathItem.readOperations().stream()) - .forEach(operation -> { - operation.addParametersItem(new HeaderParameter().$ref("#/components/parameters/myGlobalHeader")); - }); + .forEach(operation -> { + operation.addParametersItem( + new HeaderParameter().$ref("#/components/parameters/myGlobalHeader")); + }); } /** - * general interceptor exclusion settings.all interceptors automatically add springdoc-opapi-related - * resource exclusion information. 
- * you do not need to add it to the interceptor definition of the application. + * register interceptors + * + * @param registry interceptor registry + */ @SuppressWarnings("unchecked") @Override public void addInterceptors(InterceptorRegistry registry) { try { Field registrationsField = FieldUtils.getField(InterceptorRegistry.class, "registrations", true); - List registrations = (List) ReflectionUtils.getField(registrationsField, registry); + List registrations = + (List) ReflectionUtils.getField(registrationsField, registry); if (registrations != null) { for (InterceptorRegistration interceptorRegistration : registrations) { interceptorRegistration.excludePathPatterns("/springdoc**/**"); } } - } catch (Exception e) { - e.printStackTrace(); + } catch (CommonException e) { + log.error("swagger2 configuration addInterceptors error", e); } } } \ No newline at end of file diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/constants/ExtConstants.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/constants/ExtConstants.java index acfed12512c51b0506ffa239aff4df183d7679aa..039528ef004abe7c265758c3d551347a45b9503a 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/constants/ExtConstants.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/constants/ExtConstants.java @@ -1,10 +1,39 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.constants; import org.opengauss.datachecker.common.constant.Constants; +/** + * ExtConstants + * + * @author :wangchao + * @date :Created in 2022/7/25 + * @since :11 + */ public interface ExtConstants { + /** + * delimiter used to join the fields of a composite primary key + */ String PRIMARY_DELIMITER = Constants.PRIMARY_DELIMITER; - String DELIMITER = ","; + + /** + * general field delimiter "," + */ + String DELIMITER = Constants.DELIMITER; /** * query result parsing ResultSet data result set,default start index position diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractCleanController.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractCleanController.java index 8a54099d657c7cb288d4fbc943a710d48c97fad8..b3aa492d51fa1ed96f32c838e1983aeb6edb8081 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractCleanController.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractCleanController.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.controller; import io.swagger.v3.oas.annotations.Operation; @@ -7,9 +22,17 @@ import org.opengauss.datachecker.extract.kafka.KafkaManagerService; import org.opengauss.datachecker.extract.service.DataExtractService; import org.opengauss.datachecker.extract.service.MetaDataService; import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.web.bind.annotation.*; - +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestParam; +import org.springframework.web.bind.annotation.RestController; +/** + * Clearing the environment at the extraction endpoint + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ @Tag(name = "Clearing the environment at the extraction endpoint") @RestController public class ExtractCleanController { @@ -32,7 +55,7 @@ public class ExtractCleanController { @PostMapping("/extract/clean/environment") Result cleanEnvironment(@RequestParam(name = "processNo") String processNo) { metaDataService.init(); - dataExtractService.cleanBuildedTask(); + dataExtractService.cleanBuildTask(); kafkaManagerService.cleanKafka(processNo); return Result.success(); } @@ -40,7 +63,7 @@ public class ExtractCleanController { @Operation(summary = "clears the task cache information of the current ednpoint") @PostMapping("/extract/clean/task") Result cleanTask() { - dataExtractService.cleanBuildedTask(); + dataExtractService.cleanBuildTask(); return Result.success(); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractController.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractController.java index f3a64ae21bb3c7b871c5812448595aff43130beb..6a6af2701fa6a14a1317014b4d2ca28bb6320953 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractController.java +++ 
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractController.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.controller; import io.swagger.v3.oas.annotations.Operation; @@ -5,7 +20,11 @@ import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; import org.opengauss.datachecker.common.entry.enums.DML; -import org.opengauss.datachecker.common.entry.extract.*; +import org.opengauss.datachecker.common.entry.extract.ExtractTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.SourceDataLog; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.TableMetadataHash; import org.opengauss.datachecker.common.exception.ProcessMultipleException; import org.opengauss.datachecker.common.exception.TaskNotFoundException; import org.opengauss.datachecker.common.web.Result; @@ -13,7 +32,11 @@ import org.opengauss.datachecker.extract.cache.MetaDataCache; import org.opengauss.datachecker.extract.service.DataExtractService; import org.opengauss.datachecker.extract.service.MetaDataService; import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.web.bind.annotation.*; +import 
org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.PostMapping; +import org.springframework.web.bind.annotation.RequestBody; +import org.springframework.web.bind.annotation.RequestParam; +import org.springframework.web.bind.annotation.RestController; import javax.validation.constraints.NotEmpty; import javax.validation.constraints.NotNull; @@ -22,6 +45,13 @@ import java.util.List; import java.util.Map; import java.util.Set; +/** + * data extraction service + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ @Tag(name = "data extracton service") @RestController public class ExtractController { @@ -32,8 +62,12 @@ public class ExtractController { @Autowired private DataExtractService dataExtractService; - @Operation(summary = "loading database metadata information", - description = "loading database metadata information(including the table name,primary key field information list, and column field information list)") + /** + * loading database metadata information + * including the table name,primary key field information list, and column field information list + * + * @return database metadata information + */ @GetMapping("/extract/load/database/meta/data") public Result> queryMetaDataOfSchema() { Map metaDataMap = metaDataService.queryMetaDataOfSchema(); @@ -41,10 +75,16 @@ return Result.success(metaDataMap); } + /** + * refreshing the block list and trust list + * + * @param mode {@value CheckBlackWhiteMode#API_DESCRIPTION} + * @param tableList tableList + */ @Operation(summary = "refreshing the block list and trust list") - @PostMapping("/extract/refush/black/white/list") - void refushBlackWhiteList(@RequestParam CheckBlackWhiteMode mode, @RequestBody List tableList) { - metaDataService.refushBlackWhiteList(mode, tableList); + @PostMapping("/extract/refresh/black/white/list") + void refreshBlackWhiteList(@RequestParam CheckBlackWhiteMode mode, @RequestBody 
List tableList) { + metaDataService.refreshBlackWhiteList(mode, tableList); } /** @@ -56,8 +96,9 @@ public class ExtractController { */ @Operation(summary = "construction a data extraction task for the current endpoint") @PostMapping("/extract/build/task/all") - public Result> buildExtractTaskAllTables(@Parameter(name = "processNo", description = "execution process no") - @RequestParam(name = "processNo") String processNo) { + public Result> buildExtractTaskAllTables( + @Parameter(name = "processNo", description = "execution process no") @RequestParam(name = "processNo") + String processNo) { return Result.success(dataExtractService.buildExtractTaskAllTables(processNo)); } @@ -71,9 +112,9 @@ public class ExtractController { */ @Operation(summary = "sink endpoint task configuration") @PostMapping("/extract/config/sink/task/all") - Result buildExtractTaskAllTables(@Parameter(name = "processNo", description = "execution process no") - @RequestParam(name = "processNo") String processNo, - @RequestBody List taskList) { + Result buildExtractTaskAllTables( + @Parameter(name = "processNo", description = "execution process no") @RequestParam(name = "processNo") + String processNo, @RequestBody List taskList) { dataExtractService.buildExtractTaskAllTables(processNo, taskList); return Result.success(); } @@ -95,8 +136,9 @@ public class ExtractController { */ @Operation(summary = "execute the data extraction task that has been created for the current endpoint") @PostMapping("/extract/exec/task/all") - public Result execExtractTaskAllTables(@Parameter(name = "processNo", description = "execution process no") - @RequestParam(name = "processNo") String processNo) { + public Result execExtractTaskAllTables( + @Parameter(name = "processNo", description = "execution process no") @RequestParam(name = "processNo") + String processNo) { dataExtractService.execExtractTaskAllTables(processNo); return Result.success(); } @@ -109,7 +151,7 @@ public class ExtractController { 
@Operation(summary = " clear the cached task information of the corresponding endpoint and rest the task.") @PostMapping("/extract/clean/build/task") public Result cleanBuildedTask() { - dataExtractService.cleanBuildedTask(); + dataExtractService.cleanBuildTask(); return Result.success(); } @@ -121,8 +163,8 @@ public class ExtractController { */ @GetMapping("/extract/table/info") @Operation(summary = "queries information about data extraction tasks in a specified table in the current process.") - Result queryTableInfo(@Parameter(name = "tableName", description = "table name") - @RequestParam(name = "tableName") String tableName) { + Result queryTableInfo( + @Parameter(name = "tableName", description = "table name") @RequestParam(name = "tableName") String tableName) { return Result.success(dataExtractService.queryTableInfo(tableName)); } @@ -136,14 +178,14 @@ public class ExtractController { */ @Operation(summary = "DML statements required to generate a repair report") @PostMapping("/extract/build/repairDML") - Result> buildRepairDml(@NotEmpty(message = "the schema to which the table to be repaired belongs cannot be empty") - @RequestParam(name = "schema") String schema, - @NotEmpty(message = "the name of the table to be repaired belongs cannot be empty") - @RequestParam(name = "tableName") String tableName, - @NotNull(message = "the DML type to be repaired belongs cannot be empty") - @RequestParam(name = "dml") DML dml, - @NotEmpty(message = "the primary key set to be repaired belongs cannot be empty") - @RequestBody Set diffSet) { + Result> buildRepairDml( + @NotEmpty(message = "the schema to which the table to be repaired belongs cannot be empty") + @RequestParam(name = "schema") String schema, + @NotEmpty(message = "the name of the table to be repaired belongs cannot be empty") + @RequestParam(name = "tableName") String tableName, + @NotNull(message = "the DML type to be repaired belongs cannot be empty") @RequestParam(name = "dml") DML dml, + @NotEmpty(message 
= "the primary key set to be repaired belongs cannot be empty") @RequestBody + Set diffSet) { return Result.success(dataExtractService.buildRepairDml(schema, tableName, dml, diffSet)); } @@ -156,10 +198,11 @@ public class ExtractController { */ @Operation(summary = "querying table data") @PostMapping("/extract/query/table/data") - Result>> queryTableColumnValues(@NotEmpty(message = "the name of the table to be repaired belongs cannot be empty") - @RequestParam(name = "tableName") String tableName, - @NotEmpty(message = "the primary key set to be repaired belongs cannot be empty") - @RequestBody Set compositeKeySet) { + Result>> queryTableColumnValues( + @NotEmpty(message = "the name of the table to be repaired belongs cannot be empty") + @RequestParam(name = "tableName") String tableName, + @NotEmpty(message = "the primary key set to be repaired belongs cannot be empty") @RequestBody + Set compositeKeySet) { return Result.success(dataExtractService.queryTableColumnValues(tableName, new ArrayList<>(compositeKeySet))); } @@ -171,7 +214,8 @@ public class ExtractController { */ @Operation(summary = "creating an incremental extraction task based on data change logs") @PostMapping("/extract/increment/logs/data") - Result notifyIncrementDataLogs(@RequestBody @NotNull(message = "数据变更日志不能为空") List sourceDataLogList) { + Result notifyIncrementDataLogs( + @RequestBody @NotNull(message = "Data change log cannot be empty") List sourceDataLogList) { dataExtractService.buildExtractIncrementTaskByLogs(sourceDataLogList); dataExtractService.execExtractIncrementTaskByLogs(); return Result.success(); @@ -184,17 +228,22 @@ public class ExtractController { } /** - * queries data corresponding to a specified primary key value in a table and performs hash for secondary verification data query. + * queries data corresponding to a specified primary key value in a table + * and performs hash for secondary verification data query. 
* * @param dataLog data change logs - * @return rowdata hash + * @return row data hash */ - @Operation(summary = "queries data corresponding to a specified primary key value in a table and performs hash for secondary verification data query.") @PostMapping("/extract/query/secondary/data/row/hash") Result> querySecondaryCheckRowData(@RequestBody SourceDataLog dataLog) { return Result.success(dataExtractService.querySecondaryCheckRowData(dataLog)); } + /** + * queryDatabaseSchema + * + * @return DatabaseSchema + */ @GetMapping("/extract/query/database/schema") Result getDatabaseSchema() { return Result.success(dataExtractService.queryDatabaseSchema()); diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractHealthController.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractHealthController.java index d14b1024ec941c29d31983005f9323c80617425b..28212e415399086a9937094bb9c219f9d432c3d6 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractHealthController.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/ExtractHealthController.java @@ -1,18 +1,45 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.controller; import io.swagger.v3.oas.annotations.Operation; import io.swagger.v3.oas.annotations.tags.Tag; import org.opengauss.datachecker.common.web.Result; +import org.opengauss.datachecker.extract.dao.MetaDataDAO; +import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; +/** + * health check of the data extraction service + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ @Tag(name = "ExtractHealthController", description = "health check of the data extraction service") @RestController public class ExtractHealthController { + @Autowired + private MetaDataDAO baseMetaDataDAO; @Operation(summary = "data extraction health check") @GetMapping("/extract/health") public Result health() { + baseMetaDataDAO.health(); return Result.success(); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/KafkaManagerController.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/KafkaManagerController.java index 3ac6efa9f4929039b34f8a3ab48a0b7e36d839bc..4d074dd2f19eff50d9a5e4f5e5a1b3f8429aadbb 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/KafkaManagerController.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/controller/KafkaManagerController.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.controller; import io.swagger.v3.oas.annotations.Operation; @@ -5,7 +20,7 @@ import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import org.opengauss.datachecker.common.entry.extract.RowDataHash; import org.opengauss.datachecker.common.entry.extract.Topic; -import org.opengauss.datachecker.common.util.IdWorker; +import org.opengauss.datachecker.common.util.IdGenerator; import org.opengauss.datachecker.common.web.Result; import org.opengauss.datachecker.extract.kafka.KafkaConsumerService; import org.opengauss.datachecker.extract.kafka.KafkaManagerService; @@ -17,11 +32,16 @@ import org.springframework.web.bind.annotation.RestController; import java.util.List; - +/** + * Data extraction service: Kafka management service + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ @Tag(name = "KafkaManagerController", description = "Data extraction service: Kafka management service") @RestController public class KafkaManagerController { - @Autowired private KafkaManagerService kafkaManagerService; @Autowired @@ -36,87 +56,116 @@ public class KafkaManagerController { */ @Operation(summary = "Query for specified topic data") @GetMapping("/extract/query/topic/data") - public Result> queryTopicData(@Parameter(name = "tableName", description = "table Name") - @RequestParam("tableName") String tableName, - @Parameter(name = "partitions", description = "kafka partition number") - @RequestParam("partitions") int partitions) { + public Result> queryTopicData( + @Parameter(name = "tableName", description = "table Name") 
@RequestParam("tableName") String tableName, + @Parameter(name = "partitions", description = "kafka partition number") @RequestParam("partitions") + int partitions) { return Result.success(kafkaConsumerService.getTopicRecords(tableName, partitions)); } /** - * 查询指定增量topic数据 + * Query the specified incremental topic data * - * @param tableName 表名称 - * @return topic数据 + * @param tableName tableName + * @return topic data */ - @Operation(summary = "查询指定增量topic数据") + @Operation(summary = "Query the specified incremental topic data") @GetMapping("/extract/query/increment/topic/data") - public Result> queryIncrementTopicData(@Parameter(name = "tableName", description = "表名称") - @RequestParam("tableName") String tableName) { + public Result> queryIncrementTopicData( + @Parameter(name = "tableName", description = "tableName") @RequestParam("tableName") String tableName) { return Result.success(kafkaConsumerService.getIncrementTopicRecords(tableName)); } /** - * 根据表名称,创建topic + * Create topic according to the table name * - * @param tableName 表名称 - * @param partitions 分区总数 - * @return 创建成功后的topic名称 + * @param tableName tableName + * @param partitions partitions + * @return Topic name after successful creation */ - @Operation(summary = "根据表名称创建topic", description = "用于测试kafka topic创建") + @Operation(summary = "Create topic according to the table name", description = "Used to test Kafka topic creation") @PostMapping("/extract/create/topic") - public Result createTopic(@Parameter(name = "tableName", description = "表名称") - @RequestParam("tableName") String tableName, - @Parameter(name = "partitions", description = "kafka分区总数") - @RequestParam("partitions") int partitions) { - String process = IdWorker.nextId36(); + public Result createTopic( + @Parameter(name = "tableName", description = "tableName") @RequestParam("tableName") String tableName, + @Parameter(name = "partitions", description = "Total number of Kafka partitions") @RequestParam("partitions") + int partitions) { + 
String process = IdGenerator.nextId36(); return Result.success(kafkaManagerService.createTopic(process, tableName, partitions)); } /** - * 查询所有的topic名称列表 + * Query the list of all topic names * - * @return topic名称列表 + * @return Topic name list */ - @Operation(summary = "查询当前端点所有的topic名称列表") + @Operation(summary = "Query the list of all topic names") @GetMapping("/extract/query/topic") public Result> queryTopicData() { return Result.success(kafkaManagerService.getAllTopic()); } - @Operation(summary = "查询指定表名的Topic信息") + /** + * Query topic information of the specified table name + * + * @param tableName tableName + * @return kafka topic info + */ + @Operation(summary = "Query topic information of the specified table name") @GetMapping("/extract/topic/info") - public Result queryTopicInfo(@Parameter(name = "tableName", description = "表名称") - @RequestParam(name = "tableName") String tableName) { + public Result queryTopicInfo( + @Parameter(name = "tableName", description = "tableName") @RequestParam(name = "tableName") String tableName) { return Result.success(kafkaManagerService.getTopic(tableName)); } - @Operation(summary = "查询指定表名的Topic信息") + /** + * Query topic information of the specified table name + * + * @param tableName tableName + * @return kafka topic info + */ + @Operation(summary = "Query topic information of the specified table name") @GetMapping("/extract/increment/topic/info") - public Result getIncrementTopicInfo(@Parameter(name = "tableName", description = "表名称") - @RequestParam(name = "tableName") String tableName) { + public Result getIncrementTopicInfo( + @Parameter(name = "tableName", description = "tableName") @RequestParam(name = "tableName") String tableName) { return Result.success(kafkaManagerService.getIncrementTopicInfo(tableName)); } - @Operation(summary = "清理所有数据抽取相关topic", description = "清理kafka中 前缀TOPIC_EXTRACT_Endpoint_process_ 的所有Topic") + /** + * Clean up all topics related to data extraction + * + * @param processNo processNo + * 
@return request result + */ + @Operation(summary = "Clean up all topics related to data extraction") @PostMapping("/extract/delete/topic/history") - public Result deleteTopic(@Parameter(name = "processNo", description = "校验流程号") - @RequestParam(name = "processNo") String processNo) { + public Result deleteTopic( + @Parameter(name = "processNo", description = "processNo") @RequestParam(name = "processNo") String processNo) { kafkaManagerService.deleteTopic(processNo); return Result.success(); } - @Operation(summary = "清理所有数据抽取相关topic", description = "清理kafka中 前缀TOPIC_EXTRACT_Endpoint_ 的所有Topic") + /** + * Clean up all topics related to data extraction + * + * @return request result + */ + @Operation(summary = "Clean up all topics related to data extraction", description = "Clean up all topics in Kafka") @PostMapping("/extract/super/delete/topic/history") public Result deleteTopic() { kafkaManagerService.deleteTopic(); return Result.success(); } - @Operation(summary = "删除kafka中指定topic", description = "删除kafka中指定topic") + /** + * Delete the topic specified in Kafka + * + * @param topicName topicName + * @return request result + */ + @Operation(summary = "Delete the topic specified in Kafka", description = "Delete the topic specified in Kafka") @PostMapping("/extract/delete/topic") - public Result deleteTopicHistory(@Parameter(name = "topicName", description = "topic名称") - @RequestParam(name = "topicName") String topicName) { + public Result deleteTopicHistory( + @Parameter(name = "topicName", description = "topic Name") @RequestParam(name = "topicName") String topicName) { kafkaManagerService.deleteTopicByName(topicName); return Result.success(); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImpl.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImpl.java index 060b0af0e72b15b8cfa6518518e3f10b1c7b264b..1060da3ca38f195487f58543a37b0f67dc4ba968 100644 --- 
a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImpl.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImpl.java @@ -1,13 +1,28 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dao; import lombok.RequiredArgsConstructor; import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.constant.Constants; import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; +import org.opengauss.datachecker.common.entry.enums.ColumnKey; import org.opengauss.datachecker.common.entry.enums.DataBaseMeta; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; import org.opengauss.datachecker.common.entry.extract.TableMetadata; -import org.opengauss.datachecker.common.entry.enums.ColumnKey; import org.opengauss.datachecker.common.util.EnumUtil; import org.opengauss.datachecker.extract.config.ExtractProperties; import org.springframework.jdbc.core.JdbcTemplate; @@ -19,18 +34,31 @@ import org.springframework.util.CollectionUtils; import java.sql.ResultSet; import java.sql.SQLException; -import java.util.*; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; import java.util.concurrent.atomic.AtomicReference; import java.util.stream.Collectors; import static 
org.opengauss.datachecker.extract.constants.ExtConstants.COLUMN_INDEX_FIRST_ZERO; +/** + * DataBaseMetaDataDAOImpl + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ @Slf4j @Component @RequiredArgsConstructor public class DataBaseMetaDataDAOImpl implements MetaDataDAO { - private static final AtomicReference MODE_REF = new AtomicReference<>(CheckBlackWhiteMode.NONE); + private static final AtomicReference MODE_REF = + new AtomicReference<>(CheckBlackWhiteMode.NONE); private static final AtomicReference> WHITE_REF = new AtomicReference<>(); private static final AtomicReference> BLACK_REF = new AtomicReference<>(); @@ -38,6 +66,19 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { private final ExtractProperties extractProperties; + @Override + public boolean health() { + String sql = MetaSqlMapper.getMetaSql(extractProperties.getDatabaseType(), DataBaseMeta.HEALTH); + List result = new ArrayList<>(); + JdbcTemplateOne.query(sql, ps -> ps.setString(1, getSchema()), new RowCountCallbackHandler() { + @Override + protected void processRow(ResultSet rs, int rowNum) throws SQLException { + result.add(rs.getString(1)); + } + }); + return result.size() > 0; + } + @Override public void resetBlackWhite(CheckBlackWhiteMode mode, List tableList) { MODE_REF.set(mode); @@ -88,13 +129,15 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { if (CollectionUtils.isEmpty(whiteList)) { return tableMetaList; } - return tableMetaList.stream().filter(table -> whiteList.contains(table.getTableName())).collect(Collectors.toList()); + return tableMetaList.stream().filter(table -> whiteList.contains(table.getTableName())) + .collect(Collectors.toList()); } else if (Objects.equals(MODE_REF.get(), CheckBlackWhiteMode.BLACK)) { final List blackList = BLACK_REF.get(); if (CollectionUtils.isEmpty(blackList)) { return tableMetaList; } - return tableMetaList.stream().filter(table -> 
!blackList.contains(table.getTableName())).collect(Collectors.toList()); + return tableMetaList.stream().filter(table -> !blackList.contains(table.getTableName())) + .collect(Collectors.toList()); } else { return tableMetaList; } @@ -107,9 +150,8 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { JdbcTemplateOne.query(sql, ps -> ps.setString(1, getSchema()), new RowCountCallbackHandler() { @Override protected void processRow(ResultSet rs, int rowNum) throws SQLException { - final TableMetadata metadata = new TableMetadata() - .setTableName(rs.getString(1)) - .setTableRows(rs.getLong(2)); + final TableMetadata metadata = + new TableMetadata().setTableName(rs.getString(1)).setTableRows(rs.getLong(2)); log.debug("queryTableMetadataFast {}", metadata.toString()); tableMetadata.add(metadata); } @@ -122,7 +164,8 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { String sqlQueryTableRowCount = MetaSqlMapper.getTableCount(); final String schema = getSchema(); tableNameList.stream().forEach(tableName -> { - final Long rowCount = JdbcTemplateOne.queryForObject(String.format(sqlQueryTableRowCount, schema, tableName), Long.class); + final Long rowCount = + JdbcTemplateOne.queryForObject(String.format(sqlQueryTableRowCount, schema, tableName), Long.class); tableMetadata.add(new TableMetadata().setTableName(tableName).setTableRows(rowCount)); }); return tableMetadata; @@ -146,13 +189,13 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { @Override public ColumnsMetaData mapRow(ResultSet rs, int rowNum) throws SQLException { - ColumnsMetaData columnsMetaData = new ColumnsMetaData() - .setTableName(rs.getString(++columnIndex)) - .setColumnName(rs.getString(++columnIndex)) - .setOrdinalPosition(rs.getInt(++columnIndex)) - .setDataType(rs.getString(++columnIndex)) - .setColumnType(rs.getString(++columnIndex)) - .setColumnKey(EnumUtil.valueOf(ColumnKey.class, rs.getString(++columnIndex))); + ColumnsMetaData columnsMetaData = new 
ColumnsMetaData().setTableName(rs.getString(++columnIndex)) + .setColumnName(rs.getString(++columnIndex)) + .setOrdinalPosition(rs.getInt(++columnIndex)) + .setDataType(rs.getString(++columnIndex)) + .setColumnType(rs.getString(++columnIndex)) + .setColumnKey(EnumUtil.valueOf(ColumnKey.class, + rs.getString(++columnIndex))); columnIndex = COLUMN_INDEX_FIRST_ZERO; return columnsMetaData; } @@ -160,13 +203,12 @@ public class DataBaseMetaDataDAOImpl implements MetaDataDAO { } /** - * 动态获取当前数据源的schema信息 + * Dynamically obtain the schema information of the current data source * - * @return + * @return database schema */ private String getSchema() { return extractProperties.getSchema(); } - } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaDataDAO.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaDataDAO.java index 43489aa320926f859dfbb8a1e3d5af6a1969a855..b6a7e8b787cba8022e93f58f9a585ebb632a12a3 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaDataDAO.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaDataDAO.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.dao; import org.opengauss.datachecker.common.entry.enums.CheckBlackWhiteMode; @@ -6,42 +21,56 @@ import org.opengauss.datachecker.common.entry.extract.TableMetadata; import java.util.List; +/** + * MetaDataDAO + * + * @author :wangchao + * @date :Created in 2022/6/23 + * @since :11 + */ public interface MetaDataDAO { /** - * 重置黑白名单 + * health + * + * @return health status + */ + boolean health(); + + /** + * Reset black and white list * - * @param mode 黑白名单模式{@link CheckBlackWhiteMode} - * @param tableList 表名称列表 + * @param mode Black and white list mode {@link CheckBlackWhiteMode} + * @param tableList tableList */ void resetBlackWhite(CheckBlackWhiteMode mode, List tableList); /** - * 查询表元数据 + * Query table metadata * - * @return 返回表元数据信息 + * @return table metadata information */ List queryTableMetadata(); /** - * 快速查询表元数据 -直接从information_schema获取 + * Quickly query table metadata, fetched directly from information_schema * - * @return 返回表元数据信息 + * @return table metadata information */ List queryTableMetadataFast(); /** - * 查询表对应列元数据信息 + * Query the metadata information of the corresponding column of the table * - * @param tableName 表名称 - * @return 列元数据信息 + * @param tableName tableName + * @return Column metadata information */ List queryColumnMetadata(String tableName); /** - * 查询表对应列元数据信息 + * Query the metadata information of the corresponding column of the table * - * @param tableNames 表名称 - * @return 列元数据信息 + * @param tableNames tableNames + * @return Column metadata information */ List queryColumnMetadata(List tableNames); diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaSqlMapper.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaSqlMapper.java index 083582b795b5b78ee033409dd8dfa8fd85b2b51b..b7d55282bf9ff63bf4d7afcd57a0114181f15e77 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaSqlMapper.java +++ 
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dao/MetaSqlMapper.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dao; import org.opengauss.datachecker.common.entry.enums.DataBaseMeta; @@ -7,66 +22,117 @@ import org.springframework.util.Assert; import java.util.HashMap; import java.util.Map; +/** + * MetaSqlMapper + * + * @author :wangchao + * @date :Created in 2022/5/24 + * @since :11 + */ public class MetaSqlMapper { - - public static String getTableCount() { - return "select count(1) rowCount from %s.%s"; - } - - interface DataBaseMySql { - String TABLE_METADATA_SQL = "select table_name tableName , table_rows tableRows from information_schema.tables WHERE table_schema=?"; - String TABLES_COLUMN_META_DATA_SQL = "select table_name tableName ,column_name columnName, ordinal_position ordinalPosition, data_type dataType, column_type columnType,column_key columnKey from information_schema.columns where table_schema=:databaseSchema and table_name in (:tableNames)"; - } - - interface DataBaseOpenGauss { - String TABLE_METADATA_SQL = "select table_name tableName , 0 tableRows from information_schema.tables WHERE table_schema=? 
and TABLE_TYPE='BASE TABLE';"; - String TABLES_COLUMN_META_DATA_SQL = "select c.table_name tableName ,c.column_name columnName, c.ordinal_position ordinalPosition, c.data_type dataType , c.data_type columnType,pkc.column_key\n" + - " from information_schema.columns c \n" + - " left join (\n" + - " select kcu.table_name,kcu.column_name,'PRI' column_key\n" + - " from information_schema.key_column_usage kcu \n" + - " WHERE kcu.constraint_name in (\n" + - " select constraint_name from information_schema.table_constraints tc where tc.constraint_schema=:databaseSchema and tc.constraint_type='PRIMARY KEY'\n" + - " )\n" + - " ) pkc on c.table_name=pkc.table_name and c.column_name=pkc.column_name\n" + - " where c.table_schema =:databaseSchema and c.table_name in (:tableNames)"; - } - - interface DataBaseO { - String TABLE_METADATA_SQL = ""; - String TABLES_COLUMN_META_DATA_SQL = ""; - } - private static final Map> DATABASE_META_MAPPER = new HashMap<>(); - static { Map dataBaseMySql = new HashMap<>(); dataBaseMySql.put(DataBaseMeta.TABLE, DataBaseMySql.TABLE_METADATA_SQL); dataBaseMySql.put(DataBaseMeta.COLUMN, DataBaseMySql.TABLES_COLUMN_META_DATA_SQL); + dataBaseMySql.put(DataBaseMeta.HEALTH, DataBaseMySql.HEALTH_SQL); DATABASE_META_MAPPER.put(DataBaseType.MS, dataBaseMySql); - Map dataBaseOpenGauss = new HashMap<>(); dataBaseOpenGauss.put(DataBaseMeta.TABLE, DataBaseOpenGauss.TABLE_METADATA_SQL); dataBaseOpenGauss.put(DataBaseMeta.COLUMN, DataBaseOpenGauss.TABLES_COLUMN_META_DATA_SQL); + dataBaseOpenGauss.put(DataBaseMeta.HEALTH, DataBaseOpenGauss.HEALTH_SQL); DATABASE_META_MAPPER.put(DataBaseType.OG, dataBaseOpenGauss); - Map databaseO = new HashMap<>(); databaseO.put(DataBaseMeta.TABLE, DataBaseO.TABLE_METADATA_SQL); databaseO.put(DataBaseMeta.COLUMN, DataBaseO.TABLES_COLUMN_META_DATA_SQL); + databaseO.put(DataBaseMeta.HEALTH, DataBaseO.HEALTH_SQL); DATABASE_META_MAPPER.put(DataBaseType.O, databaseO); + } + /** + * build sql of query table row count + * + * @return 
table row count sql + */ + public static String getTableCount() { + return "select count(1) rowCount from %s.%s"; } /** - * 根据数据库类型 以及当前要执行的元数据查询类型 返回对应的元数据执行语句 + * Return the metadata query statement corresponding to the given database type + * and the metadata query type to be executed * - * @param dataBaseType 数据库类型 - * @param dataBaseMeta 数据库元数据 - * @return + * @param databaseType database type + * @param databaseMeta database metadata + * @return execute sql */ - public static String getMetaSql(DataBaseType dataBaseType, DataBaseMeta dataBaseMeta) { - Assert.isTrue(DATABASE_META_MAPPER.containsKey(dataBaseType), "数据库类型不匹配"); - return DATABASE_META_MAPPER.get(dataBaseType).get(dataBaseMeta); + public static String getMetaSql(DataBaseType databaseType, DataBaseMeta databaseMeta) { + Assert.isTrue(DATABASE_META_MAPPER.containsKey(databaseType), "Database type mismatch"); + return DATABASE_META_MAPPER.get(databaseType).get(databaseMeta); + } + + interface DataBaseMySql { + /** + * Health check SQL + */ + String HEALTH_SQL = "select table_name from information_schema.tables WHERE table_schema=? limit 1"; + + /** + * Table metadata query SQL + */ + String TABLE_METADATA_SQL = "select table_name tableName , table_rows tableRows " + + "from information_schema.tables WHERE table_schema=?"; + + /** + * column metadata query SQL + */ + String TABLES_COLUMN_META_DATA_SQL = "select table_name tableName ,column_name columnName," + + " ordinal_position ordinalPosition, data_type dataType, column_type columnType,column_key columnKey" + + " from information_schema.columns" + + " where table_schema=:databaseSchema and table_name in (:tableNames)"; + } + + interface DataBaseOpenGauss { + /** + * Health check SQL + */ + String HEALTH_SQL = "select table_name from information_schema.tables WHERE table_schema=? 
limit 1"; + + /** + * Table metadata query SQL + */ + String TABLE_METADATA_SQL = "select table_name tableName , 0 tableRows from information_schema.tables " + + "WHERE table_schema=? and TABLE_TYPE='BASE TABLE';"; + + /** + * column metadata query SQL + */ + String TABLES_COLUMN_META_DATA_SQL = "select c.table_name tableName ,c.column_name columnName, " + + " c.ordinal_position ordinalPosition, c.data_type dataType , c.data_type columnType,pkc.column_key " + + " from information_schema.columns c left join ( " + + " select kcu.table_name,kcu.column_name,'PRI' column_key " + + " from information_schema.key_column_usage kcu " + " WHERE kcu.constraint_name in (" + + " select constraint_name from information_schema.table_constraints tc" + + " where tc.constraint_schema=:databaseSchema and tc.constraint_type='PRIMARY KEY' " + + " ) ) pkc on c.table_name=pkc.table_name and c.column_name=pkc.column_name " + + " where c.table_schema =:databaseSchema and c.table_name in (:tableNames)"; + } + + interface DataBaseO { + /** + * Health check SQL + */ + String HEALTH_SQL = ""; + + /** + * Table metadata query SQL + */ + String TABLE_METADATA_SQL = ""; + + /** + * column metadata query SQL + */ + String TABLES_COLUMN_META_DATA_SQL = ""; } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/BatchDeleteDmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/BatchDeleteDmlBuilder.java index 5404b0d8d539be7f252e5c1038ba6ee31ffbed29..d64c22a62223859874682a3cc9a6e8409ea886b8 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/BatchDeleteDmlBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/BatchDeleteDmlBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dml; import org.apache.commons.lang3.StringUtils; @@ -16,20 +31,21 @@ import java.util.List; public class BatchDeleteDmlBuilder extends DmlBuilder { /** - * 构建 Schema + * build Schema * * @param schema Schema - * @return DeleteDMLBuilder 构建器 + * @return DeleteDMLBuilder */ public BatchDeleteDmlBuilder schema(@NotNull String schema) { super.buildSchema(schema); return this; } + /** - * 构建 tableName + * build tableName * * @param tableName tableName - * @return DeleteDMLBuilder 构建器 + * @return DeleteDMLBuilder */ public BatchDeleteDmlBuilder tableName(@NotNull String tableName) { super.buildTableName(tableName); @@ -37,35 +53,34 @@ public class BatchDeleteDmlBuilder extends DmlBuilder { } /** - * 生成单一主键字段 delete from schema.table where pk in (参数...) 条件语句 + * Generate a single primary key field conditional statement: delete from schema.table where pk in (param...) * - * @param primaryMeta 主键元数据 - * @return DeleteDMLBuilder 构建器 + * @param primaryMeta primary key metadata + * @return DeleteDMLBuilder */ public BatchDeleteDmlBuilder conditionPrimary(@NonNull ColumnsMetaData primaryMeta) { - Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), "表元数据主键字段名称为空"); - this.condition = primaryMeta.getColumnName().concat(IN); + Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), + "Table metadata primary key field name is empty"); + condition = primaryMeta.getColumnName().concat(IN); return this; } + /** - * 构建复合主键参数的条件查询语句

+ * Construct conditional query statements for composite primary key parameters

* select columns... from table where (pk1,pk2) in ((pk1_val,pk2_val),(pk1_val,pk2_val))

* * @param primaryMeta - * @return SelectDMLBuilder构建器 + * @return SelectDMLBuilder */ public BatchDeleteDmlBuilder conditionCompositePrimary(@NonNull List primaryMeta) { - this.condition = buildConditionCompositePrimary(primaryMeta).concat(IN); + condition = buildConditionCompositePrimary(primaryMeta).concat(IN); return this; } public String build() { StringBuffer sb = new StringBuffer(); - sb.append(Fragment.DELETE).append(Fragment.FROM) - .append(schema).append(Fragment.LINKER).append(tableName) - .append(Fragment.WHERE).append(condition) - .append(Fragment.END) - ; + sb.append(Fragment.DELETE).append(Fragment.FROM).append(schema).append(Fragment.LINKER).append(tableName) + .append(Fragment.WHERE).append(condition).append(Fragment.END); return sb.toString(); } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DeleteDmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DeleteDmlBuilder.java index d7514f826c76213da9372008eb3a7d69d7634045..c0a34d539d54cce2da3e1a5bf016542b9cce590a 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DeleteDmlBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DeleteDmlBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.dml; import org.apache.commons.lang3.StringUtils; @@ -18,10 +33,10 @@ import java.util.stream.IntStream; public class DeleteDmlBuilder extends DmlBuilder { /** - * 构建 Schema + * build Schema * * @param schema Schema - * @return DeleteDMLBuilder 构建器 + * @return DeleteDMLBuilder */ public DeleteDmlBuilder schema(@NotNull String schema) { super.buildSchema(schema); @@ -29,10 +44,10 @@ public class DeleteDmlBuilder extends DmlBuilder { } /** - * 构建 tableName + * build tableName * * @param tableName tableName - * @return DeleteDMLBuilder 构建器 + * @return DeleteDMLBuilder */ public DeleteDmlBuilder tableName(@NotNull String tableName) { super.buildTableName(tableName); @@ -40,44 +55,45 @@ public class DeleteDmlBuilder extends DmlBuilder { } /** - * 生成单一主键字段 delete from schema.table where pk = 参数 条件语句 + * Generate a single primary key field conditional statement: delete from schema.table where pk = parameter * - * @param primaryMeta 主键元数据 - * @return DeleteDMLBuilder 构建器 + * @param primaryMeta Primary key metadata + * @return DeleteDMLBuilder */ public DeleteDmlBuilder condition(@NonNull ColumnsMetaData primaryMeta, String value) { - Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), "表元数据主键字段名称为空"); + Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), + "Table metadata primary key field name is empty"); if (DIGITAL.contains(primaryMeta.getDataType())) { - this.condition = primaryMeta.getColumnName().concat(EQUAL).concat(value); + condition = primaryMeta.getColumnName().concat(EQUAL).concat(value); } else { - this.condition = primaryMeta.getColumnName().concat(EQUAL) - .concat(SINGLE_QUOTES).concat(value).concat(SINGLE_QUOTES); + condition = + primaryMeta.getColumnName().concat(EQUAL).concat(SINGLE_QUOTES).concat(value).concat(SINGLE_QUOTES); } return this; } /** - * 构建复合主键参数的条件 delete语句

+ * Construct conditional delete statements for composite primary key parameters

* delete from schema.table where pk1 = pk1_val and pk2 = pk2_val

* - * @param compositeKey 复合主键 - * @param primaryMetas 主键元数据 - * @return SelectDMLBuilder 构建器 + * @param compositeKey composite primary key + * @param primaryMetas Primary key metadata + * @return SelectDMLBuilder */ public DeleteDmlBuilder conditionCompositePrimary(String compositeKey, List primaryMetas) { - this.condition = buildCondition(compositeKey, primaryMetas); + condition = buildCondition(compositeKey, primaryMetas); return this; } /** - * 构建主键过滤(where)条件

+ * Build primary key filter (where) conditions

* pk = pk_value

* or

* pk = 'pk_value'

* - * @param compositeKey 复合主键 - * @param primaryMetas 主键元数据 - * @return 返回主键where条件 + * @param compositeKey composite primary key + * @param primaryMetas Primary key metadata + * @return Return the primary key where condition */ public String buildCondition(String compositeKey, List primaryMetas) { final String[] split = compositeKey.split(ExtConstants.PRIMARY_DELIMITER); @@ -90,15 +106,11 @@ public class DeleteDmlBuilder extends DmlBuilder { condition = condition.concat(AND); } if (DIGITAL.contains(mate.getDataType())) { - condition = condition.concat(mate.getColumnName()) - .concat(EQUAL) - .concat(split[idx]); + condition = condition.concat(mate.getColumnName()).concat(EQUAL).concat(split[idx]); } else { - condition = condition.concat(mate.getColumnName()) - .concat(EQUAL) - .concat(SINGLE_QUOTES) - .concat(split[idx]) - .concat(SINGLE_QUOTES); + condition = + condition.concat(mate.getColumnName()).concat(EQUAL).concat(SINGLE_QUOTES).concat(split[idx]) + .concat(SINGLE_QUOTES); } conditionBuffer.append(condition); }); @@ -108,11 +120,8 @@ public class DeleteDmlBuilder extends DmlBuilder { public String build() { StringBuffer sb = new StringBuffer(); - sb.append(Fragment.DELETE).append(Fragment.FROM) - .append(schema).append(Fragment.LINKER).append(tableName) - .append(Fragment.WHERE).append(condition) - .append(Fragment.END) - ; + sb.append(Fragment.DELETE).append(Fragment.FROM).append(schema).append(Fragment.LINKER).append(tableName) + .append(Fragment.WHERE).append(condition).append(Fragment.END); return sb.toString(); } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DmlBuilder.java index c27ee2fc44f6b5b8b2b232c669691baf9ad60f9e..885ef3f40532b5bf56324fc27e5e8de1e3138e87 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DmlBuilder.java +++ 
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/DmlBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dml; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; @@ -15,56 +30,116 @@ import java.util.stream.Collectors; * @since :11 */ public class DmlBuilder { - + /** + * primaryKeys + */ + public static final String PRIMARY_KEYS = "primaryKeys"; + /** + * sql delimiter + */ protected static final String DELIMITER = ","; + /** + * left bracket + */ protected static final String LEFT_BRACKET = "("; + /** + * right bracket + */ protected static final String RIGHT_BRACKET = ")"; + /** + * SQL statement conditional query in statement fragment + */ protected static final String IN = " in ( :primaryKeys )"; + /** + * single quotes + */ protected static final String SINGLE_QUOTES = "'"; + /** + * equal + */ protected static final String EQUAL = " = "; + /** + * and + */ protected static final String AND = " and "; - public static final String PRIMARY_KEYS = "primaryKeys"; /** * mysql dataType */ - protected final List DIGITAL = List.of("int", "tinyint", "smallint", "mediumint", "bit", "bigint", "double", "float", "decimal"); - + protected static final List DIGITAL = + List.of("int", "tinyint", "smallint", "mediumint", "bit", "bigint", "double", "float", "decimal"); + /** + * columns + */ protected String columns; + /** + * columnsValue + */ protected String 
columnsValue; + /** + * schema + */ protected String schema; + /** + * tableName + */ protected String tableName; + /** + * condition + */ protected String condition; + /** + * conditionValue + */ protected String conditionValue; - /** - * 构建SQL column 语句片段 + * Build SQL column statement fragment * - * @param columnsMetas 字段元数据 - * @return SQL column 语句片段 + * @param columnsMetas Field Metadata */ protected void buildColumns(@NotNull List columnsMetas) { - this.columns = columnsMetas.stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); + columns = columnsMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); } + /** + * DML Builder: setting schema parameters + * + * @param schema schema + */ protected void buildSchema(@NotNull String schema) { this.schema = schema; } + /** + * DML Builder: setting tableName parameters + * + * @param tableName tableName + */ protected void buildTableName(@NotNull String tableName) { this.tableName = tableName; } + /** + * DML Builder: setting primaryMetas parameters + * + * @param primaryMetas primaryMetas + * @return sql value fragment + */ protected String buildConditionCompositePrimary(List primaryMetas) { - return primaryMetas.stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER, LEFT_BRACKET, RIGHT_BRACKET)); + return primaryMetas.stream().map(ColumnsMetaData::getColumnName) + .collect(Collectors.joining(DELIMITER, LEFT_BRACKET, RIGHT_BRACKET)); } - public List columnsValueList(@NotNull Map columnsValue, @NotNull List columnsMetaList) { + /** + * columnsValueList + * + * @param columnsValue columnsValue + * @param columnsMetaList columnsMetaList + * @return columnsValueList + */ + public List columnsValueList(@NotNull Map columnsValue, + @NotNull List columnsMetaList) { List valueList = new ArrayList<>(); columnsMetaList.forEach(columnMeta -> { if (DIGITAL.contains(columnMeta.getDataType())) { @@ -82,19 +157,57 @@ public 
class DmlBuilder { } interface Fragment { + /** + * DML SQL statement insert fragment + */ String DML_INSERT = "insert into #schema.#tablename (#columns) value (#value);"; + /** + * DML SQL statement replace fragment + */ String DML_REPLACE = "replace into #schema.#tablename (#columns) value (#value);"; + /** + * DML SQL statement select fragment + */ String SELECT = "select "; + /** + * DML SQL statement delete fragment + */ String DELETE = "delete "; + /** + * DML SQL statement from fragment + */ String FROM = " from "; + /** + * DML SQL statement where fragment + */ String WHERE = " where "; + /** + * DML SQL statement space fragment + */ String SPACE = " "; + /** + * DML SQL statement END fragment + */ String END = ";"; + /** + * DML SQL statement linker fragment + */ String LINKER = "."; - + /** + * DML SQL statement schema fragment + */ String SCHEMA = "#schema"; + /** + * DML SQL statement tablename fragment + */ String TABLE_NAME = "#tablename"; + /** + * DML SQL statement columns fragment + */ String COLUMNS = "#columns"; + /** + * DML SQL statement value fragment + */ String VALUE = "#value"; } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/InsertDmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/InsertDmlBuilder.java index fcce89fadd9fac25a334401450dfb4ebc895291c..7b9ae81c6e9982dd5ac04a87054135297356da3e 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/InsertDmlBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/InsertDmlBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dml; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; @@ -15,12 +30,11 @@ import java.util.stream.Collectors; */ public class InsertDmlBuilder extends DmlBuilder { - /** - * 构建 Schema + * build Schema * * @param schema Schema - * @return InsertDMLBuilder 构建器 + * @return InsertDMLBuilder */ public InsertDmlBuilder schema(@NotNull String schema) { super.buildSchema(schema); @@ -28,10 +42,10 @@ public class InsertDmlBuilder extends DmlBuilder { } /** - * 构建 tableName + * build tableName * * @param tableName tableName - * @return InsertDMLBuilder 构建器 + * @return InsertDMLBuilder */ public InsertDmlBuilder tableName(@NotNull String tableName) { super.buildTableName(tableName); @@ -39,36 +53,32 @@ public class InsertDmlBuilder extends DmlBuilder { } /** - * 构建SQL column 语句片段 + * build sql column statement fragment * - * @param columnsMetas 字段元数据 - * @return InsertDMLBuilder 构建器 + * @param columnsMetas Field Metadata + * @return InsertDMLBuilder */ public InsertDmlBuilder columns(@NotNull List columnsMetas) { - this.columns = columnsMetas.stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); + columns = columnsMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); return this; } /** - * 构建SQL column value 语句片段 + * build sql column value statement fragment * - * @param columnsMetaList 字段元数据 - * @return InsertDMLBuilder 构建器 + * @param columnsMetaList Field Metadata + * @return InsertDMLBuilder */ - public InsertDmlBuilder columnsValue(@NotNull Map columnsValue, @NotNull List 
columnsMetaList) { + public InsertDmlBuilder columnsValue(@NotNull Map columnsValue, + @NotNull List columnsMetaList) { List valueList = new ArrayList<>(columnsValueList(columnsValue, columnsMetaList)); this.columnsValue = String.join(DELIMITER, valueList); return this; } public String build() { - return Fragment.DML_INSERT.replace(Fragment.SCHEMA, schema) - .replace(Fragment.TABLE_NAME, tableName) - .replace(Fragment.COLUMNS, columns) - .replace(Fragment.VALUE, columnsValue) - ; + return Fragment.DML_INSERT.replace(Fragment.SCHEMA, schema).replace(Fragment.TABLE_NAME, tableName) + .replace(Fragment.COLUMNS, columns).replace(Fragment.VALUE, columnsValue); } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/ReplaceDmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/ReplaceDmlBuilder.java index 1b7a2daea96ac69e2781bcdf790e4ddc67e72db1..9d820929567100d45df9f6be172f0e6570f77e9d 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/ReplaceDmlBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/ReplaceDmlBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.dml; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; @@ -15,12 +30,11 @@ import java.util.stream.Collectors; */ public class ReplaceDmlBuilder extends DmlBuilder { - /** - * 构建 Schema + * build Schema * * @param schema Schema - * @return InsertDMLBuilder 构建器 + * @return InsertDMLBuilder */ public ReplaceDmlBuilder schema(@NotNull String schema) { super.buildSchema(schema); @@ -28,10 +42,10 @@ public class ReplaceDmlBuilder extends DmlBuilder { } /** - * 构建 tableName + * build tableName * * @param tableName tableName - * @return InsertDMLBuilder 构建器 + * @return InsertDMLBuilder */ public ReplaceDmlBuilder tableName(@NotNull String tableName) { super.buildTableName(tableName); @@ -39,37 +53,32 @@ public class ReplaceDmlBuilder extends DmlBuilder { } /** - * 构建SQL column 语句片段 + * build SQL column statement fragment * - * @param columnsMetas 字段元数据 - * @return InsertDMLBuilder 构建器 + * @param columnsMetas Field Metadata + * @return InsertDMLBuilder */ public ReplaceDmlBuilder columns(@NotNull List columnsMetas) { - this.columns = columnsMetas.stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); + columns = columnsMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); return this; } /** - * 构建SQL column value 语句片段 + * build SQL column value statement fragment * - * @param columnsMetaList 字段元数据 - * @return InsertDMLBuilder 构建器 + * @param columnsMetaList Field Metadata + * @return InsertDMLBuilder */ - public ReplaceDmlBuilder columnsValue(@NotNull Map columnsValue, @NotNull List columnsMetaList) { + public ReplaceDmlBuilder columnsValue(@NotNull Map columnsValue, + @NotNull List columnsMetaList) { List valueList = new ArrayList<>(columnsValueList(columnsValue, columnsMetaList)); this.columnsValue = String.join(DELIMITER, valueList); return this; } - public String build() { - return Fragment.DML_REPLACE.replace(Fragment.SCHEMA, 
schema) - .replace(Fragment.TABLE_NAME, tableName) - .replace(Fragment.COLUMNS, columns) - .replace(Fragment.VALUE, columnsValue) - ; + return Fragment.DML_REPLACE.replace(Fragment.SCHEMA, schema).replace(Fragment.TABLE_NAME, tableName) + .replace(Fragment.COLUMNS, columns).replace(Fragment.VALUE, columnsValue); } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/SelectDmlBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/SelectDmlBuilder.java index 8e14cc2326afd96620b1289161649b7bd953e7c5..b05b0b063033d0fe535841bbd5ac1e9e336aa938 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/SelectDmlBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/dml/SelectDmlBuilder.java @@ -1,5 +1,19 @@ -package org.opengauss.datachecker.extract.dml; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ +package org.opengauss.datachecker.extract.dml; import org.apache.commons.lang3.StringUtils; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; @@ -20,10 +34,10 @@ import java.util.stream.IntStream; public class SelectDmlBuilder extends DmlBuilder { /** - * 构建SQL column 语句片段 + * build SQL column statement fragment * - * @param columnsMetas 字段元数据 - * @return SelectDMLBuilder构建器 + * @param columnsMetas Field Metadata + * @return SelectDMLBuilder */ public SelectDmlBuilder columns(@NotNull List columnsMetas) { super.buildColumns(columnsMetas); @@ -31,10 +45,10 @@ public class SelectDmlBuilder extends DmlBuilder { } /** - * 构建 Schema + * build Schema * * @param schema Schema - * @return SelectDMLBuilder构建器 + * @return SelectDMLBuilder */ public SelectDmlBuilder schema(@NotNull String schema) { super.buildSchema(schema); @@ -42,40 +56,40 @@ public class SelectDmlBuilder extends DmlBuilder { } /** - * 生成单一主键字段 select columns... from where pk in (参数...) 条件语句 + * Generate single primary key field SQL: select columns... from where pk in (Parameter...) conditional statement * - * @param primaryMeta 主键元数据 - * @return SelectDMLBuilder构建器 + * @param primaryMeta Primary key metadata + * @return SelectDMLBuilder */ public SelectDmlBuilder conditionPrimary(@NonNull ColumnsMetaData primaryMeta) { - Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), "表元数据主键字段名称为空"); - this.condition = primaryMeta.getColumnName().concat(IN); + Assert.isTrue(StringUtils.isNotEmpty(primaryMeta.getColumnName()), + "Table metadata primary key field name is empty"); + condition = primaryMeta.getColumnName().concat(IN); return this; } /** - * 构建复合主键参数的条件查询语句

+ * Construct conditional query SQL of composite primary key parameters

* select columns... from table where (pk1,pk2) in ((pk1_val,pk2_val),(pk1_val,pk2_val))

* - * @param primaryMeta - * @return SelectDMLBuilder构建器 + * @param primaryMeta Primary key metadata + * @return SelectDMLBuilder */ public SelectDmlBuilder conditionCompositePrimary(@NonNull List primaryMeta) { - this.condition = buildConditionCompositePrimary(primaryMeta).concat(IN); + condition = buildConditionCompositePrimary(primaryMeta).concat(IN); return this; } - - /** - * 构建复合主键参数的条件查询语句 value 参数

+ * Construct the value parameters for the composite primary key conditional query SQL

* select columns... from table where (pk1,pk2) in ((pk1_val,pk2_val),(pk1_val,pk2_val))

* - * @param primaryMetas 主键元数据信息 - * @param compositeKeys 主键值列表 - * @return SelectDMLBuilder构建器 + * @param primaryMetas Primary key metadata + * @param compositeKeys composite Keys value + * @return SelectDMLBuilder */ - public List conditionCompositePrimaryValue(@NonNull List primaryMetas, List compositeKeys) { + public List conditionCompositePrimaryValue(@NonNull List primaryMetas, + List compositeKeys) { List batchParam = new ArrayList<>(); final int size = primaryMetas.size(); compositeKeys.forEach(compositeKey -> { @@ -92,24 +106,20 @@ public class SelectDmlBuilder extends DmlBuilder { } /** - * 构建 tableName + * build tableName * * @param tableName tableName - * @return SelectDMLBuilder构建器 + * @return SelectDMLBuilder */ public SelectDmlBuilder tableName(@NotNull String tableName) { super.buildTableName(tableName); return this; } - public String build() { StringBuffer sb = new StringBuffer(); - sb.append(Fragment.SELECT).append(columns).append(Fragment.FROM) - .append(schema).append(Fragment.LINKER).append(tableName) - .append(Fragment.WHERE).append(condition) - .append(Fragment.END) - ; + sb.append(Fragment.SELECT).append(columns).append(Fragment.FROM).append(schema).append(Fragment.LINKER) + .append(tableName).append(Fragment.WHERE).append(condition).append(Fragment.END); return sb.toString(); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaAdminService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaAdminService.java new file mode 100644 index 0000000000000000000000000000000000000000..6f505620aadd9de7c6b949c7bd34673f1fa493fe --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaAdminService.java @@ -0,0 +1,161 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.kafka; + +import lombok.extern.slf4j.Slf4j; +import org.apache.kafka.clients.admin.AdminClient; +import org.apache.kafka.clients.admin.AdminClientConfig; +import org.apache.kafka.clients.admin.DeleteTopicsResult; +import org.apache.kafka.clients.admin.KafkaAdminClient; +import org.apache.kafka.clients.admin.NewTopic; +import org.apache.kafka.clients.admin.TopicListing; +import org.apache.kafka.common.KafkaFuture; +import org.opengauss.datachecker.common.exception.CreateTopicException; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.kafka.KafkaException; +import org.springframework.stereotype.Component; + +import javax.annotation.PostConstruct; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ExecutionException; +import java.util.stream.Collectors; + +/** + * kafka Topic admin + * + * @author :wangchao + * @date :Created in 2022/5/17 + * @since :11 + */ +@Component +@Slf4j +public class KafkaAdminService { + @Value("${spring.kafka.bootstrap-servers}") + private String springKafkaBootstrapServers; + private AdminClient adminClient; + + /** + * Initialize Admin Client + */ + @PostConstruct + private void initAdminClient() { + Map props = new HashMap<>(1); + props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, springKafkaBootstrapServers); + adminClient = KafkaAdminClient.create(props); + try { + adminClient.listTopics().listings().get(); + } catch (ExecutionException | InterruptedException ex) { + 
log.error("Kafka client connection exception: ", ex); + throw new KafkaException("Kafka client connection exception"); + } + } + + /** + * Create a Kafka topic. If it already exists, it will not be created. + * + * @param topic topic + * @param partitions partitions + * @return topic name + */ + public String createTopic(String topic, int partitions) { + try { + KafkaFuture> names = adminClient.listTopics().names(); + if (names.get().contains(topic)) { + log.info("topic={} already exists, skip creation", topic); + return topic; + } else { + adminClient.createTopics(List.of(new NewTopic(topic, partitions, (short) 1))); + log.info("topic={} create,numPartitions={}, short replicationFactor={}", topic, partitions, 1); + return topic; + } + } catch (InterruptedException | ExecutionException e) { + log.error("topic={} create error : {}", topic, e); + throw new CreateTopicException(topic); + } + } + + /** + * Delete topics; batch deletion is supported + * + * @param topics topic + */ + public void deleteTopic(Collection topics) { + DeleteTopicsResult deleteTopicsResult = adminClient.deleteTopics(topics); + Map> kafkaFutureMap = deleteTopicsResult.topicNameValues(); + kafkaFutureMap.forEach((topic, future) -> { + try { + future.get(); + log.info("topic={} is deleted successfully", topic); + } catch (InterruptedException | ExecutionException e) { + log.error("topic={} delete error : {}", topic, e); + } + }); + } + + /** + * Gets the topic with the specified prefix + * + * @param prefix prefix + * @return Topic with the specified prefix + */ + public List getAllTopic(String prefix) { + try { + log.info("topic prefix :{}", prefix); + return adminClient.listTopics().listings().get().stream().map(TopicListing::name) + .filter(name -> name.startsWith(prefix)).collect(Collectors.toList()); + } catch (InterruptedException | ExecutionException e) { + log.error("admin client get topic error:", e); + } + return new ArrayList<>(); + } + + /** + * Gets all of the topics + * + * @return topics + */ + public List
getAllTopic() { + try { + return adminClient.listTopics().listings().get().stream().map(TopicListing::name) + .collect(Collectors.toList()); + } catch (InterruptedException | ExecutionException e) { + log.error("admin client get topic error:", e); + } + return new ArrayList<>(); + } + + /** + * Check whether the current topic exists + * + * @param topicName topic Name + * @return Does it exist + */ + public boolean isTopicExists(String topicName) { + try { + log.info("topic name :{}", topicName); + return adminClient.listTopics().listings().get().stream().map(TopicListing::name) + .anyMatch(name -> name.equalsIgnoreCase(topicName)); + } catch (InterruptedException | ExecutionException e) { + log.error("admin client get topic error:", e); + } + return false; + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaCommonService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaCommonService.java new file mode 100644 index 0000000000000000000000000000000000000000..73331b607fe20caee0a4c66cc6c61158d2978f06 --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaCommonService.java @@ -0,0 +1,176 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.extract.kafka; + +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.enums.Endpoint; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.extract.config.ExtractProperties; +import org.springframework.lang.NonNull; +import org.springframework.stereotype.Service; + +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; + +/** + * KafkaCommonService + * + * @author :wangchao + * @date :Created in 2022/5/17 + * @since :11 + */ +@Slf4j +@Service +@RequiredArgsConstructor +public class KafkaCommonService { + /** + * Full calibration extraction topic name template TOPIC_EXTRACT_%s_%s_

+ * The first %s is the endpoint {@link Endpoint} + * The second %s is the verification process number + * The table name is appended at the end

+ * The first %s is the endpoint {@link Endpoint} + * Used to batch query all topic names created by the verification process in Kafka + */ + private static final String TOPIC_PREFIX_PR = "TOPIC_EXTRACT_%s_"; + private static final String TOPIC_PREFIX = "TOPIC_EXTRACT_"; + + /** + * Incremental verification topic prefix + */ + private static final String INCREMENT_TOPIC_PREFIX = "TOPIC_EXTRACT_INCREMENT_"; + private static final Object LOCK = new Object(); + private static final Map<String, Topic> TABLE_TOPIC_CACHE = new HashMap<>(); + private static final Map<String, Topic> DEBEZIUM_TOPIC_CACHE = new HashMap<>(); + + private final ExtractProperties extractProperties; + + /** + * Get data verification Kafka topic prefix + * + * @param process Verification process No + * @return topic prefix + */ + public String getTopicPrefixProcess(String process) { + return String.format(TOPIC_PROCESS_PREFIX, extractProperties.getEndpoint().getCode(), process); + } + + /** + * Get data verification Kafka topic prefix + * + * @return Data verification Kafka topic prefix + */ + public String getTopicPrefixEndpoint() { + return String.format(TOPIC_PREFIX_PR, extractProperties.getEndpoint().getCode()); + } + + /** + * Get data verification Kafka topic prefix + * + * @return topic prefix + */ + public String getTopicPrefix() { + return TOPIC_PREFIX; + } + + /** + * Get the corresponding topic according to the table name + * + * @param tableName tableName + * @return topic + */ + public Topic getTopic(@NonNull String tableName) { + return TABLE_TOPIC_CACHE.get(tableName); + } + + /** + * Obtain the corresponding topic according to relevant information + * + * @param process process + * @param tableName tableName + * @param divisions divisions + * @return topic + */ + public Topic getTopicInfo(String process, @NonNull String tableName, int divisions) { + Topic topic = TABLE_TOPIC_CACHE.get(tableName); + if (Objects.isNull(topic)) { + synchronized (LOCK) { + topic = TABLE_TOPIC_CACHE.get(tableName); + if
(Objects.isNull(topic)) { + topic = new Topic().setTableName(tableName).setTopicName( + getTopicPrefixProcess(process).concat(tableName.toUpperCase(Locale.ROOT))) + .setPartitions(calcPartitions(divisions)); + TABLE_TOPIC_CACHE.put(tableName, topic); + } + } + } + log.debug("kafka topic info : [{}] ", topic.toString()); + return topic; + } + + /** + * Calculate the Kafka partition according to the total number of task slices. + * The total number of Kafka partitions shall not exceed 10 + * + * @param divisions Number of task slices extracted + * @return Total number of Kafka partitions + */ + public int calcPartitions(int divisions) { + return Math.min(divisions, 10); + } + + /** + * Clean up table name and topic information + */ + public void cleanTopicMapping() { + TABLE_TOPIC_CACHE.clear(); + log.info("clear table topic cache information"); + } + + /** + * Get incremental topic information + * + * @param tableName tableName + * @return Topic + */ + public Topic getIncrementTopicInfo(String tableName) { + Topic topic = TABLE_TOPIC_CACHE.get(tableName); + if (Objects.isNull(topic)) { + synchronized (LOCK) { + topic = TABLE_TOPIC_CACHE.get(tableName); + if (Objects.isNull(topic)) { + topic = new Topic().setTableName(tableName).setTopicName(getIncrementTopicName(tableName)) + .setPartitions(1); + TABLE_TOPIC_CACHE.put(tableName, topic); + } + } + } + log.debug("kafka topic info : [{}] ", topic.toString()); + return topic; + } + + private String getIncrementTopicName(String tableName) { + return INCREMENT_TOPIC_PREFIX.concat(Integer.toString(extractProperties.getEndpoint().getCode())).concat("_") + .concat(tableName.toUpperCase(Locale.ROOT)); + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaConsumerService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaConsumerService.java new file mode 100644 index 
0000000000000000000000000000000000000000..f111e4cfb11ecae9b4bd895fd872376220983d7c --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaConsumerService.java @@ -0,0 +1,89 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.kafka; + +import com.alibaba.fastjson.JSON; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.kafka.clients.consumer.ConsumerRecords; +import org.apache.kafka.clients.consumer.KafkaConsumer; +import org.apache.kafka.common.TopicPartition; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.extract.config.KafkaConsumerConfig; +import org.springframework.stereotype.Component; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; + +/** + * KafkaConsumerService + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j +@Component +@RequiredArgsConstructor +public class KafkaConsumerService { + private final KafkaConsumerConfig consumerConfig; + private final KafkaCommonService kafkaCommonService; + + /** + * Get the data of the specified topic partition + * + * @param tableName tableName + * @param partitions partitions + * @return kafka topic data + */ + public List<RowDataHash> getTopicRecords(String tableName, int partitions) { + Topic topic =
kafkaCommonService.getTopic(tableName); + KafkaConsumer<String, String> kafkaConsumer = consumerConfig.getKafkaConsumer(topic.getTopicName(), partitions); + + // Consume data from one partition of the topic + kafkaConsumer.assign(List.of(new TopicPartition(topic.getTopicName(), partitions))); + List<RowDataHash> dataList = new ArrayList<>(); + ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(200)); + consumerRecords.forEach(record -> { + dataList.add(JSON.parseObject(record.value(), RowDataHash.class)); + }); + log.debug("kafka consumer topic=[{}] partitions=[{}] dataList=[{}]", topic.toString(), partitions, + dataList.size()); + return dataList; + } + + /** + * Get incremental verification topic data + * + * @param tableName tableName + * @return kafka topic data + */ + public List<RowDataHash> getIncrementTopicRecords(String tableName) { + Topic topic = kafkaCommonService.getIncrementTopicInfo(tableName); + KafkaConsumer<String, String> kafkaConsumer = consumerConfig.getKafkaConsumer(topic.getTopicName(), 1); + kafkaConsumer.subscribe(List.of(topic.getTopicName())); + List<RowDataHash> dataList = new ArrayList<>(); + ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(200)); + consumerRecords.forEach(record -> { + dataList.add(JSON.parseObject(record.value(), RowDataHash.class)); + }); + kafkaConsumer.commitAsync(); + log.debug("kafka consumer topic=[{}] dataList=[{}]", topic.toString(), dataList.size()); + return dataList; + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaManagerService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaManagerService.java new file mode 100644 index 0000000000000000000000000000000000000000..f99268659770850124238d0de3e512a0208d4bd8 --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaManagerService.java @@ -0,0 +1,141 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.kafka; + +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.extract.config.KafkaConsumerConfig; +import org.opengauss.datachecker.extract.config.KafkaProducerConfig; +import org.springframework.stereotype.Service; + +import java.util.List; + +/** + * KafkaManagerService + * + * @author :wangchao + * @date :Created in 2022/6/10 + * @since :11 + */ +@Slf4j +@RequiredArgsConstructor +@Service +public class KafkaManagerService { + private final KafkaAdminService kafkaAdminService; + private final KafkaCommonService kafkaCommonService; + private final KafkaConsumerConfig kafkaConsumerConfig; + private final KafkaProducerConfig kafkaProducerConfig; + + /** + * get kafka topic list + * + * @return topic list + */ + public List<String> getAllTopic() { + return kafkaAdminService.getAllTopic(kafkaCommonService.getTopicPrefixEndpoint()); + } + + /** + * Create a topic according to the table name + * + * @param process process + * @param tableName tableName + * @param partitions Total partitions + * @return Topic name after successful creation + */ + public String createTopic(String process, String tableName, int partitions) { + final Topic topicInfo = kafkaCommonService.getTopicInfo(process, tableName, partitions); + return kafkaAdminService.createTopic(topicInfo.getTopicName(), partitions); + } + + /** + * Clear Kafka information + * + * @param processNo processNo + */ + public void
cleanKafka(String processNo) { + kafkaCommonService.cleanTopicMapping(); + log.info("Extract service cleanup Kafka topic mapping information"); + kafkaConsumerConfig.cleanKafkaConsumer(); + log.info("Extract service cleanup Kafka consumer information"); + kafkaProducerConfig.cleanKafkaProducer(); + log.info("Extract service cleanup Kafka producer mapping information"); + List<String> topics = kafkaAdminService.getAllTopic(kafkaCommonService.getTopicPrefixProcess(processNo)); + kafkaAdminService.deleteTopic(topics); + log.info("Extract service cleanup current process ({}) Kafka topics {}", processNo, topics); + } + + /** + * Clear Kafka information + */ + public void cleanKafka() { + kafkaCommonService.cleanTopicMapping(); + kafkaConsumerConfig.cleanKafkaConsumer(); + kafkaProducerConfig.cleanKafkaProducer(); + List<String> topics = kafkaAdminService.getAllTopic(kafkaCommonService.getTopicPrefix()); + kafkaAdminService.deleteTopic(topics); + } + + /** + * Clean up all topics with prefix TOPIC_EXTRACT_Endpoint_process_ in Kafka + * + * @param processNo process + */ + public void deleteTopic(String processNo) { + List<String> topics = kafkaAdminService.getAllTopic(kafkaCommonService.getTopicPrefixProcess(processNo)); + kafkaAdminService.deleteTopic(topics); + } + + /** + * Clean up all topics + */ + public void deleteTopic() { + List<String> topics = kafkaAdminService.getAllTopic(); + kafkaAdminService.deleteTopic(topics); + } + + /** + * Query the topic information of the specified table name + * + * @param tableName tableName + * @return topic information + */ + public Topic getTopic(String tableName) { + return kafkaCommonService.getTopic(tableName); + } + + /** + * Query the incremental topic information of the specified table name + * + * @param tableName tableName + * @return topic information + */ + public Topic getIncrementTopicInfo(String tableName) { + return
kafkaCommonService.getIncrementTopicInfo(tableName); + } + + /** + * Delete the specified topic + * + * @param topicName topicName + */ + public void deleteTopicByName(String topicName) { + kafkaAdminService.deleteTopic(List.of(topicName)); + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaProducerWapper.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaProducerWapper.java new file mode 100644 index 0000000000000000000000000000000000000000..4ad148b4c4b5e09a3df64bdee633d9580b361d1f --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/kafka/KafkaProducerWapper.java @@ -0,0 +1,113 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.extract.kafka; + +import com.alibaba.fastjson.JSON; +import lombok.extern.slf4j.Slf4j; +import org.apache.kafka.clients.producer.KafkaProducer; +import org.apache.kafka.clients.producer.ProducerRecord; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; +import org.opengauss.datachecker.extract.config.KafkaProducerConfig; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * KafkaProducerWapper + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j +public class KafkaProducerWapper { + private static final int FLUSH_KAFKA_PARALLEL_THRESHOLD = 1000; + private static final int DEFAULT_PARTITION = 0; + private static final int MIN_PARTITION_NUM = 1; + + private final KafkaProducerConfig kafkaProducerConfig; + + /** + * KafkaProducerWapper build + * + * @param config config + */ + public KafkaProducerWapper(KafkaProducerConfig config) { + kafkaProducerConfig = config; + } + + /** + * Push the data to the topic corresponding to the specified table in batch + * + * @param topic topic + * @param recordHashList data + */ + public void syncSend(Topic topic, List<RowDataHash> recordHashList) { + final int partitions = topic.getPartitions(); + if (partitions <= MIN_PARTITION_NUM) { + sendRecordToSinglePartitionTopic(recordHashList, topic.getTopicName()); + } else { + sendMultiPartitionTopic(recordHashList, topic.getTopicName(), partitions); + } + } + + private void sendRecordToSinglePartitionTopic(List<RowDataHash> recordHashList, String topicName) { + final KafkaProducer<String, String> kafkaProducer = kafkaProducerConfig.getKafkaProducer(topicName); + AtomicInteger cnt = new AtomicInteger(0); + recordHashList.forEach(record -> { + record.setPartition(DEFAULT_PARTITION); + final ProducerRecord<String, String> producerRecord = + new ProducerRecord<>(topicName, DEFAULT_PARTITION, record.getPrimaryKey(),
JSON.toJSONString(record)); + sendMessage(kafkaProducer, producerRecord, cnt); + }); + kafkaProducer.flush(); + log.info("send topic={}, record size :{},cnt:{}", topicName, recordHashList.size(), cnt.get()); + } + + private void sendMultiPartitionTopic(List<RowDataHash> recordHashList, String topicName, int partitions) { + final KafkaProducer<String, String> kafkaProducer = kafkaProducerConfig.getKafkaProducer(topicName); + AtomicInteger cnt = new AtomicInteger(0); + List<ProducerRecord<String, String>> kafkaRecordList = new ArrayList<>(); + recordHashList.forEach(record -> { + int partition = calcSimplePartition(record.getPrimaryKeyHash(), partitions); + record.setPartition(partition); + ProducerRecord<String, String> producerRecord = + new ProducerRecord<>(topicName, partition, record.getPrimaryKey(), JSON.toJSONString(record)); + kafkaRecordList.add(producerRecord); + sendMessage(kafkaProducer, producerRecord, cnt); + }); + kafkaProducer.flush(); + } + + private int calcSimplePartition(long value, int mod) { + return (int) Math.abs(value % mod); + } + + private void sendMessage(KafkaProducer<String, String> kafkaProducer, ProducerRecord<String, String> producerRecord, + AtomicInteger cnt) { + kafkaProducer.send(producerRecord, (metadata, exception) -> { + if (exception != null) { + log.error("send failed,topic={},key:{} ,partition:{}", producerRecord.topic(), producerRecord.key(), + producerRecord.partition(), exception); + } + }); + if (cnt.incrementAndGet() % FLUSH_KAFKA_PARALLEL_THRESHOLD == 0) { + kafkaProducer.flush(); + } + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractService.java index c9f019cbbf9c222c03f552116839fc96df56cccb..e0251be901ca020be63850ae7a23a44048d4e364 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractService.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractService.java @@
-1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.service; import org.opengauss.datachecker.common.entry.enums.DML; @@ -21,96 +36,98 @@ import java.util.Set; public interface DataExtractService { /** - * 抽取任务构建 + * Extraction task construction * - * @param processNo 执行进程编号 - * @return 指定processNo下 构建抽取任务集合 - * @throws ProcessMultipleException 当前实例正在执行数据抽取服务,不能重新开启新的校验。 + * @param processNo processNo + * @return Specify the construction extraction task set under processno + * @throws ProcessMultipleException The current instance is executing the data extraction service + * and cannot restart the new verification. */ List buildExtractTaskAllTables(String processNo) throws ProcessMultipleException; /** - * 宿端任务配置 + * Destination task configuration * - * @param processNo 执行进程编号 - * @param taskList 任务列表 - * @throws ProcessMultipleException 前实例正在执行数据抽取服务,不能重新开启新的校验。 + * @param processNo processNo + * @param taskList taskList + * @throws ProcessMultipleException The current instance is executing the data extraction service + * and cannot restart the new verification. 
*/ void buildExtractTaskAllTables(String processNo, List taskList) throws ProcessMultipleException; /** - * 执行表数据抽取任务 + * Execute table data extraction task * - * @param processNo 执行进程编号 - * @throws TaskNotFoundException 任务数据为空,则抛出异常 TaskNotFoundException + * @param processNo processNo + * @throws TaskNotFoundException If the task data is empty, an exception TaskNotFoundException will be thrown */ void execExtractTaskAllTables(String processNo) throws TaskNotFoundException; /** - * 清理当前构建任务 + * Clean up the current build task */ - void cleanBuildedTask(); + void cleanBuildTask(); /** - * 查询当前流程下,指定名称的详细任务信息 + * Query the detailed task information of the specified name under the current process * - * @param taskName 任务名称 - * @return 任务详细信息,若不存在返回{@code null} + * @param taskName taskName + * @return Task details, if not, return {@code null} */ ExtractTask queryTableInfo(String taskName); /** - * 生成修复报告的DML语句 + * DML statement generating repair report * - * @param schema schema信息 - * @param tableName 表名 - * @param dml dml 类型 - * @param diffSet 待生成主键集合 - * @return DML语句 + * @param schema schema + * @param tableName tableName + * @param dml dml + * @param diffSet Primary key set to be generated + * @return DML statement */ List buildRepairDml(String schema, String tableName, DML dml, Set diffSet); /** - * 查询表数据 + * Query table data * - * @param tableName 表名称 - * @param compositeKeySet 复核主键集合 - * @return 主键对应表数据 + * @param tableName tableName + * @param compositeKeySet compositeKeySet + * @return Primary key corresponds to table data */ List> queryTableColumnValues(String tableName, List compositeKeySet); /** - * 根据数据变更日志 构建增量抽取任务 + * Build an incremental extraction task according to the data change log * - * @param sourceDataLogs 数据变更日志 + * @param sourceDataLogs source data logs */ void buildExtractIncrementTaskByLogs(List sourceDataLogs); /** - * 执行增量校验数据抽取 + * Perform incremental check data extraction */ void execExtractIncrementTaskByLogs(); /** - * 
查询当前表结构元数据信息,并进行Hash + * Query the metadata information of the current table structure and hash * - * @param tableName 表名称 - * @return 表结构Hash + * @param tableName tableName + * @return Table structure hash */ TableMetadataHash queryTableMetadataHash(String tableName); /** - * 查询表指定PK列表数据,并进行Hash 用于二次校验数据查询 + * PK list data is specified in the query table, and hash is used for secondary verification data query * - * @param dataLog 数据日志 - * @return rowdata hash + * @param dataLog dataLog + * @return row data hash */ List querySecondaryCheckRowData(SourceDataLog dataLog); /** - * 查询当前链接数据库 的schema + * Query the schema of the current linked database * - * @return 数据库的schema + * @return database schema */ String queryDatabaseSchema(); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractServiceImpl.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractServiceImpl.java index 5fbb116f75689a0d7149daa646482dc6dbc971ee..9a69e5e3d951135414f96158cf4987f39afab9fe 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractServiceImpl.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/DataExtractServiceImpl.java @@ -1,10 +1,32 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.service; import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.constant.Constants; import org.opengauss.datachecker.common.entry.enums.DML; import org.opengauss.datachecker.common.entry.enums.Endpoint; -import org.opengauss.datachecker.common.entry.extract.*; +import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; +import org.opengauss.datachecker.common.entry.extract.ExtractIncrementTask; +import org.opengauss.datachecker.common.entry.extract.ExtractTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.SourceDataLog; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.TableMetadataHash; +import org.opengauss.datachecker.common.entry.extract.Topic; import org.opengauss.datachecker.common.exception.ProcessMultipleException; import org.opengauss.datachecker.common.exception.TableNotExistException; import org.opengauss.datachecker.common.exception.TaskNotFoundException; @@ -15,9 +37,16 @@ import org.opengauss.datachecker.extract.client.CheckingFeignClient; import org.opengauss.datachecker.extract.config.ExtractProperties; import org.opengauss.datachecker.extract.kafka.KafkaAdminService; import org.opengauss.datachecker.extract.kafka.KafkaCommonService; -import org.opengauss.datachecker.extract.task.*; +import org.opengauss.datachecker.extract.task.DataManipulationService; +import org.opengauss.datachecker.extract.task.ExtractTaskBuilder; +import org.opengauss.datachecker.extract.task.ExtractTaskRunnable; +import org.opengauss.datachecker.extract.task.ExtractThreadSupport; +import org.opengauss.datachecker.extract.task.IncrementExtractTaskRunnable; +import org.opengauss.datachecker.extract.task.IncrementExtractThreadSupport; +import org.opengauss.datachecker.extract.task.RowDataHashHandler; import 
org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Qualifier; +import org.springframework.beans.factory.annotation.Value; import org.springframework.context.annotation.DependsOn; import org.springframework.lang.NonNull; import org.springframework.scheduling.annotation.Async; @@ -26,7 +55,15 @@ import org.springframework.stereotype.Service; import org.springframework.util.CollectionUtils; import javax.validation.constraints.NotEmpty; -import java.util.*; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.Future; import java.util.concurrent.atomic.AtomicReference; import java.util.stream.Collectors; @@ -35,21 +72,20 @@ import java.util.stream.Collectors; @DependsOn("extractThreadExecutor") public class DataExtractServiceImpl implements DataExtractService { - /** - * 执行数据抽取任务的线程 最大休眠次数 + * Maximum number of sleeps of threads executing data extraction tasks */ private static final int MAX_SLEEP_COUNT = 30; /** - * 执行数据抽取任务的线程 每次休眠时间,单位毫秒 + * The sleep time of the thread executing the data extraction task each time, in milliseconds */ private static final int MAX_SLEEP_MILLIS_TIME = 2000; private static final String PROCESS_NO_RESET = "0"; /** - * 服务启动后,会对{code atomicProcessNo}属性进行初始化, + * After the service is started, the {code atomicProcessNo} attribute will be initialized, *

- * 用户启动校验流程,会对{code atomicProcessNo}属性进行校验和设置 + * When the user starts the verification process, the {code atomicProcessNo} attribute will be verified and set */ private final AtomicReference atomicProcessNo = new AtomicReference<>(PROCESS_NO_RESET); @@ -84,20 +120,29 @@ public class DataExtractServiceImpl implements DataExtractService { @Autowired private DataManipulationService dataManipulationService; + @Value("${spring.extract.sync-extract}") + private boolean isSyncExtract = true; + /** - * 数据抽取服务 + * Data extraction service *

- * 校验服务通过下发数据抽取流程请求,抽取服务对进程号进行校验,防止同一时间重复发起启动命令 + * The verification service verifies the process number by issuing a request for data extraction process, + * so as to prevent repeated starting commands at the same time *

- * 根据元数据缓存信息,构建数据抽取任务,保存当前任务信息到{@code taskReference}中,等待校验服务发起任务执行指令。 - * 上报任务列表到校验服务。 + * According to the metadata cache information, build a data extraction task, + * save the current task information to {@code taskReference}, + * and wait for the verification service to initiate the task execution instruction. + *

+ * Submit the task list to the verification service. * - * @param processNo 执行进程编号 - * @throws ProcessMultipleException 当前实例正在执行数据抽取服务,不能重新开启新的校验。 + * @param processNo Execution process number + * @throws ProcessMultipleException The previous instance is executing the data extraction service. + * It cannot restart the new verification + * and throws a ProcessMultipleException exception. */ @Override public List buildExtractTaskAllTables(String processNo) throws ProcessMultipleException { - // 调用端点不是源端,则直接返回空 + // If the calling end point is not the source end, it directly returns null if (!Objects.equals(extractProperties.getEndpoint(), Endpoint.SOURCE)) { log.info("The current endpoint is not the source endpoint, and the task cannot be built"); return Collections.EMPTY_LIST; @@ -112,10 +157,8 @@ public class DataExtractServiceImpl implements DataExtractService { log.info("build extract task process={} count={}", processNo, taskList.size()); atomicProcessNo.set(processNo); - List taskNameList = taskList.stream() - .map(ExtractTask::getTaskName) - .map(String::toUpperCase) - .collect(Collectors.toList()); + List taskNameList = + taskList.stream().map(ExtractTask::getTaskName).map(String::toUpperCase).collect(Collectors.toList()); initTableExtractStatus(new ArrayList<>(tableNames)); return taskList; } else { @@ -125,48 +168,49 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 宿端任务配置 + * Destination task configuration * - * @param processNo 执行进程编号 - * @param taskList 任务列表 - * @throws ProcessMultipleException 前实例正在执行数据抽取服务,不能重新开启新的校验。 + * @param processNo Execution process number + * @param taskList taskList + * @throws ProcessMultipleException The previous instance is executing the data extraction service. + * It cannot restart the new verification + * and throws a ProcessMultipleException exception. 
*/ @Override - public void buildExtractTaskAllTables(String processNo, @NonNull List taskList) throws ProcessMultipleException { + public void buildExtractTaskAllTables(String processNo, @NonNull List taskList) + throws ProcessMultipleException { if (!Objects.equals(extractProperties.getEndpoint(), Endpoint.SINK)) { return; } - // 校验源端构建的任务列表 在宿端是否存在 ,将不存在任务列表过滤 + // Verify whether the task list built on the source side exists on the destination side, + // and filter the nonexistent task list final Set tableNames = MetaDataCache.getAllKeys(); if (atomicProcessNo.compareAndSet(PROCESS_NO_RESET, processNo)) { if (CollectionUtils.isEmpty(taskList) || CollectionUtils.isEmpty(tableNames)) { return; } - final List extractTasks = taskList.stream() - .filter(task -> tableNames.contains(task.getTableName())) - .collect(Collectors.toList()); + final List extractTasks = + taskList.stream().filter(task -> tableNames.contains(task.getTableName())).collect(Collectors.toList()); taskReference.set(extractTasks); log.info("build extract task process={} count={}", processNo, extractTasks.size()); atomicProcessNo.set(processNo); - // taskCountMap用于统计表分片查询的任务数量 + // taskCountMap is used to count the number of tasks in table fragment query Map taskCountMap = new HashMap<>(Constants.InitialCapacity.MAP); taskList.forEach(task -> { if (!taskCountMap.containsKey(task.getTableName())) { taskCountMap.put(task.getTableName(), task.getDivisionsTotalNumber()); } }); - // 初始化数据抽取任务执行状态 + // Initialization data extraction task execution status TableExtractStatusCache.init(taskCountMap); - final List filterTaskTables = taskList.stream() - .filter(task -> !tableNames.contains(task.getTableName())) - .map(ExtractTask::getTableName) - .distinct() - .collect(Collectors.toList()); + final List filterTaskTables = + taskList.stream().filter(task -> !tableNames.contains(task.getTableName())) + .map(ExtractTask::getTableName).distinct().collect(Collectors.toList()); if 
(!CollectionUtils.isEmpty(filterTaskTables)) { - log.info("process={} ,source endpoint database have some tables ,not in the sink tables[{}]", - processNo, filterTaskTables); + log.info("process={} ,source endpoint database have some tables ,not in the sink tables[{}]", processNo, + filterTaskTables); } } else { log.error("process={} is running extract task , {} please wait ... ", atomicProcessNo.get(), processNo); @@ -175,10 +219,10 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 清理当前构建任务 + * Clean up the current build task */ @Override - public void cleanBuildedTask() { + public void cleanBuildTask() { if (Objects.nonNull(taskReference.getAcquire())) { taskReference.getAcquire().clear(); } @@ -192,10 +236,10 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 查询当前执行流程下,指定表的数据抽取相关信息 + * Query the data extraction related information of the specified table under the current execution process * - * @param tableName 表名称 - * @return 表的数据抽取相关信息 + * @param tableName tableName + * @return Table data extraction related information */ @Override public ExtractTask queryTableInfo(String tableName) { @@ -216,15 +260,21 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 执行指定进程编号的数据抽取任务。 - *

- * 执行抽取任务,对当前进程编号进行校验,并对抽取任务进行校验。 - * 对于抽取任务的校验,采用轮询方式,进行多次校验。 - * 因为源端和宿端的抽取执行逻辑是异步且属于不同的Java进程。为确保不同进程之间流程数据状态一致,采用轮询方式多次进行确认。 - * 若多次确认还不能获取任务数据{@code taskReference}中数据为空,则抛出异常{@link org.opengauss.datachecker.common.exception.TaskNotFoundException} + *

+     * Execute the data extraction task of the specified process number.
+     *
+     * Execute the extraction task: first verify the current process number, then verify the task itself.
+     * The extraction task is verified by polling multiple times,
+     * because the extraction logic of the source side and the sink side runs asynchronously
+     * in separate Java processes.
+     * To keep the process data status consistent across those processes,
+     * the status is confirmed repeatedly by polling.
+     * If the data in {@code taskReference} is still empty after multiple confirmations,
+     * a {@link org.opengauss.datachecker.common.exception.TaskNotFoundException} is thrown
+     * 
* - * @param processNo 执行进程编号 - * @throws TaskNotFoundException 任务数据为空,则抛出异常 TaskNotFoundException + * @param processNo Execution process number + * @throws TaskNotFoundException If the task data is empty, an exception TaskNotFoundException will be thrown */ @Async @Override @@ -234,7 +284,8 @@ public class DataExtractServiceImpl implements DataExtractService { while (CollectionUtils.isEmpty(taskReference.get())) { ThreadUtil.sleep(MAX_SLEEP_MILLIS_TIME); if (sleepCount++ > MAX_SLEEP_COUNT) { - log.info("endpoint [{}] and process[{}}] task is empty!", extractProperties.getEndpoint().getDescription(), processNo); + log.info("endpoint [{}] and process[{}}] task is empty!", + extractProperties.getEndpoint().getDescription(), processNo); break; } } @@ -242,26 +293,55 @@ public class DataExtractServiceImpl implements DataExtractService { if (CollectionUtils.isEmpty(taskList)) { return; } + List> taskFutureList = new ArrayList<>(); taskList.forEach(task -> { - log.info("执行数据抽取任务:{}", task); - ThreadUtil.sleep(100); - Topic topic = kafkaCommonService.getTopicInfo(processNo, task.getTableName(), task.getDivisionsTotalNumber()); + log.debug("Perform data extraction tasks {}", task.getTaskName()); + Topic topic = + kafkaCommonService.getTopicInfo(processNo, task.getTableName(), task.getDivisionsTotalNumber()); kafkaAdminService.createTopic(topic.getTopicName(), topic.getPartitions()); - extractThreadExecutor.submit(new ExtractTaskThread(task, topic, extractThreadSupport)); + final ExtractTaskRunnable extractRunnable = new ExtractTaskRunnable(task, topic, extractThreadSupport); + taskFutureList.add(extractThreadExecutor.submit(extractRunnable)); }); + if (isSyncExtract) { + taskFutureList.forEach(future -> { + while (true) { + if (future.isDone() && !future.isCancelled()) { + break; + } + } + }); + } + } + } + + static class DataExtractThreadExceptionHandler implements Thread.UncaughtExceptionHandler { + + /** + * Method invoked when the given thread terminates due to the + * 
given uncaught exception. + *

Any exception thrown by this method will be ignored by the + * Java Virtual Machine. + * + * @param thread the thread + * @param throwable the exception + */ + @Override + public void uncaughtException(Thread thread, Throwable throwable) { + log.error(thread.getName() + " exception: " + throwable); } } /** - * 生成修复报告的DML语句 + * DML statement generating repair report * - * @param tableName 表名 - * @param dml dml 类型 - * @param diffSet 待生成主键集合 - * @return DML语句 + * @param tableName tableName + * @param dml dml + * @param diffSet Primary key set to be generated + * @return DML statement */ @Override - public List buildRepairDml(String schema, @NotEmpty String tableName, @NonNull DML dml, @NotEmpty Set diffSet) { + public List buildRepairDml(String schema, @NotEmpty String tableName, @NonNull DML dml, + @NotEmpty Set diffSet) { if (CollectionUtils.isEmpty(diffSet)) { return new ArrayList<>(); } @@ -280,11 +360,11 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 查询表数据 + * Query table data * - * @param tableName 表名称 - * @param compositeKeys 复核主键集合 - * @return 主键对应表数据 + * @param tableName tableName + * @param compositeKeys Review primary key set + * @return Primary key corresponds to table data */ @Override public List> queryTableColumnValues(String tableName, List compositeKeys) { @@ -296,23 +376,22 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 根据数据变更日志 构建增量抽取任务 + * Build an incremental extraction task according to the data change log * - * @param sourceDataLogs 数据变更日志 + * @param sourceDataLogs data change log */ @Override public void buildExtractIncrementTaskByLogs(List sourceDataLogs) { final String schema = extractProperties.getSchema(); List taskList = extractTaskBuilder.buildIncrementTask(schema, sourceDataLogs); - log.info("构建增量抽取任务完成:{}", taskList.size()); + log.info("Build incremental extraction task completed {}", taskList.size()); if (CollectionUtils.isEmpty(taskList)) { return; } 
incrementTaskReference.set(taskList); - List tableNameList = sourceDataLogs.stream() - .map(SourceDataLog::getTableName) - .collect(Collectors.toList()); + List tableNameList = + sourceDataLogs.stream().map(SourceDataLog::getTableName).collect(Collectors.toList()); Map taskCount = new HashMap<>(Constants.InitialCapacity.MAP); createTaskCountMapping(tableNameList, taskCount); TableExtractStatusCache.init(taskCount); @@ -326,7 +405,7 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 执行增量校验数据抽取 + * Perform incremental check data extraction */ @Override public void execExtractIncrementTaskByLogs() { @@ -337,19 +416,21 @@ public class DataExtractServiceImpl implements DataExtractService { return; } taskList.forEach(task -> { - log.info("执行数据抽取任务:{}", task); + log.info("Perform data extraction increment tasks:{}", task.getTaskName()); ThreadUtil.sleep(100); Topic topic = kafkaCommonService.getIncrementTopicInfo(task.getTableName()); kafkaAdminService.createTopic(topic.getTopicName(), topic.getPartitions()); - extractThreadExecutor.submit(new IncrementExtractTaskThread(task, topic, incrementExtractThreadSupport)); + final IncrementExtractTaskRunnable extractRunnable = + new IncrementExtractTaskRunnable(task, topic, incrementExtractThreadSupport); + extractThreadExecutor.submit(extractRunnable); }); } /** - * 查询当前表结构元数据信息,并进行Hash + * Query the metadata information of the current table structure and perform hash calculation * - * @param tableName 表名称 - * @return 表结构Hash + * @param tableName tableName + * @return Table structure hash */ @Override public TableMetadataHash queryTableMetadataHash(String tableName) { @@ -357,10 +438,10 @@ public class DataExtractServiceImpl implements DataExtractService { } /** - * 查询表指定PK列表数据,并进行Hash 用于二次校验数据查询 + * PK list data is specified in the query table, and hash is used for secondary verification data query * - * @param dataLog 数据日志 - * @return rowdata hash + * @param dataLog data log + * @return row data 
hash */ @Override public List querySecondaryCheckRowData(SourceDataLog dataLog) { @@ -371,7 +452,8 @@ public class DataExtractServiceImpl implements DataExtractService { if (Objects.isNull(metadata)) { throw new TableNotExistException(tableName); } - List> dataRowList = dataManipulationService.queryColumnValues(tableName, compositeKeys, metadata); + List> dataRowList = + dataManipulationService.queryColumnValues(tableName, compositeKeys, metadata); RowDataHashHandler handler = new RowDataHashHandler(); return handler.handlerQueryResult(metadata, dataRowList); } @@ -381,11 +463,10 @@ public class DataExtractServiceImpl implements DataExtractService { return extractProperties.getSchema(); } - private void initTableExtractStatus(List tableNameList) { if (Objects.equals(extractProperties.getEndpoint(), Endpoint.SOURCE)) { checkingFeignClient.initTableExtractStatus(new ArrayList<>(tableNameList)); - log.info("通知校验服务初始化增量抽取任务状态:{}", tableNameList); + log.info("Notify the verification service to initialize the extraction task status:{}", tableNameList); } } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/MetaDataService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/MetaDataService.java index 7b47a09fdc2145575cbf1291a142fb3299af2e7c..bfda5e7cb624a5682e19014aca5d9bf1bb7dfa77 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/MetaDataService.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/service/MetaDataService.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.service; import lombok.RequiredArgsConstructor; @@ -14,13 +29,15 @@ import org.springframework.stereotype.Service; import org.springframework.util.CollectionUtils; import javax.annotation.PostConstruct; -import java.util.*; +import java.util.List; +import java.util.Map; import java.util.function.Function; import java.util.stream.Collectors; /** + * MetaDataService + * * @author wang chao - * @description 元数据服务 * @date 2022/5/8 19:27 * @since 11 **/ @@ -28,12 +45,14 @@ import java.util.stream.Collectors; @Slf4j @RequiredArgsConstructor public class MetaDataService { - private final DataBaseMetaDataDAOImpl dataBaseMetadataDAOImpl; @Value("${spring.extract.query-table-row-count}") private boolean queryTableRowCount; + /** + * Metadata cache load + */ @PostConstruct public void init() { MetaDataCache.removeAll(); @@ -42,52 +61,67 @@ public class MetaDataService { MetaDataCache.putMap(metaDataMap); } + /** + * query Metadata info + * + * @return Metadata info + */ public Map queryMetaDataOfSchema() { - List tableMetadata = queryTableMetadata(); - List tableNames = tableMetadata - .stream() - .map(TableMetadata::getTableName) - .collect(Collectors.toList()); + List tableNames = tableMetadata.stream().map(TableMetadata::getTableName).collect(Collectors.toList()); if (!CollectionUtils.isEmpty(tableNames)) { List columnsMetadata = dataBaseMetadataDAOImpl.queryColumnMetadata(tableNames); - Map> tableColumnMap = columnsMetadata.stream().collect(Collectors.groupingBy(ColumnsMetaData::getTableName)); - + Map> tableColumnMap = + 
columnsMetadata.stream().collect(Collectors.groupingBy(ColumnsMetaData::getTableName)); tableMetadata.stream().forEach(tableMeta -> { tableMeta.setColumnsMetas(tableColumnMap.get(tableMeta.getTableName())) - .setPrimaryMetas(getTablePrimaryColumn(tableColumnMap.get(tableMeta.getTableName()))); + .setPrimaryMetas(getTablePrimaryColumn(tableColumnMap.get(tableMeta.getTableName()))); }); - log.info("查询数据库元数据信息完成 total=" + columnsMetadata.size()); + log.info("Query database metadata information completed total=" + columnsMetadata.size()); } return tableMetadata.stream().collect(Collectors.toMap(TableMetadata::getTableName, Function.identity())); } - public void refushBlackWhiteList(CheckBlackWhiteMode mode, List tableList) { + /** + * refresh black or white list + * + * @param mode mode{@value CheckBlackWhiteMode#API_DESCRIPTION } + * @param tableList tableList + */ + public void refreshBlackWhiteList(CheckBlackWhiteMode mode, List tableList) { dataBaseMetadataDAOImpl.resetBlackWhite(mode, tableList); init(); + log.info("refresh black or white list ,mode=[{}],list=[{}]", mode.getDescription(), tableList); } + /** + * query table Metadata info + * + * @param tableName tableName + * @return table Metadata info + */ public TableMetadata queryMetaDataOfSchema(String tableName) { TableMetadata tableMetadata = queryTableMetadataByTableName(tableName); List columnsMetadata = dataBaseMetadataDAOImpl.queryColumnMetadata(List.of(tableName)); - tableMetadata - .setColumnsMetas(columnsMetadata) - .setPrimaryMetas(getTablePrimaryColumn(columnsMetadata)); - - log.info("查询数据库元数据信息完成 total={}", columnsMetadata); + tableMetadata.setColumnsMetas(columnsMetadata).setPrimaryMetas(getTablePrimaryColumn(columnsMetadata)); + log.info("Query database metadata information completed total={}", columnsMetadata); return tableMetadata; } + /** + * query column Metadata info + * + * @param tableName tableName + * @return column Metadata info + */ public List queryTableColumnMetaDataOfSchema(String 
tableName) { return dataBaseMetadataDAOImpl.queryColumnMetadata(List.of(tableName)); } private TableMetadata queryTableMetadataByTableName(String tableName) { final List tableMetadatas = queryTableMetadata(); - return tableMetadatas.stream() - .filter(meta -> StringUtils.equalsIgnoreCase(meta.getTableName(), tableName)) - .findFirst() - .orElseGet(null); + return tableMetadatas.stream().filter(meta -> StringUtils.equalsIgnoreCase(meta.getTableName(), tableName)) + .findFirst().orElseGet(null); } private List queryTableMetadata() { @@ -98,16 +132,8 @@ public class MetaDataService { } } - /** - * 获取表主键列元数据信息 - * - * @param columnsMetaData - * @return - */ private List getTablePrimaryColumn(List columnsMetaData) { - return columnsMetaData.stream() - .filter(meta -> ColumnKey.PRI.equals(meta.getColumnKey())) - .collect(Collectors.toList()); + return columnsMetaData.stream().filter(meta -> ColumnKey.PRI.equals(meta.getColumnKey())) + .collect(Collectors.toList()); } - } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/DataManipulationService.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/DataManipulationService.java index 9fd805b7eb5937ec027cb37e762e52a257d5c27e..28a908aac2992b39dff8fd773565e862a992a470 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/DataManipulationService.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/DataManipulationService.java @@ -1,13 +1,33 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.task; import org.apache.commons.lang3.StringUtils; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; import org.opengauss.datachecker.common.entry.extract.TableMetadata; import org.opengauss.datachecker.common.entry.extract.TableMetadataHash; -import org.opengauss.datachecker.common.util.HashUtil; +import org.opengauss.datachecker.common.util.LongHashFunctionWrapper; import org.opengauss.datachecker.extract.config.ExtractProperties; import org.opengauss.datachecker.extract.constants.ExtConstants; -import org.opengauss.datachecker.extract.dml.*; +import org.opengauss.datachecker.extract.dml.BatchDeleteDmlBuilder; +import org.opengauss.datachecker.extract.dml.DeleteDmlBuilder; +import org.opengauss.datachecker.extract.dml.DmlBuilder; +import org.opengauss.datachecker.extract.dml.InsertDmlBuilder; +import org.opengauss.datachecker.extract.dml.ReplaceDmlBuilder; +import org.opengauss.datachecker.extract.dml.SelectDmlBuilder; import org.opengauss.datachecker.extract.service.MetaDataService; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.jdbc.core.JdbcTemplate; @@ -17,11 +37,17 @@ import org.springframework.util.Assert; import org.springframework.util.CollectionUtils; import java.sql.ResultSetMetaData; -import java.util.*; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; import java.util.stream.Collectors; /** - * DML 数据操作服务 实现数据的动态查询 + * DML Data operation service realizes dynamic 
query of data * * @author :wangchao * @date :Created in 2022/6/13 @@ -29,6 +55,7 @@ import java.util.stream.Collectors; */ @Service public class DataManipulationService { + private static final LongHashFunctionWrapper HASH_UTIL = new LongHashFunctionWrapper(); @Autowired private JdbcTemplate jdbcTemplateOne; @@ -37,103 +64,106 @@ public class DataManipulationService { @Autowired private ExtractProperties extractProperties; - public List> queryColumnValues(String tableName, List compositeKeys, TableMetadata metadata) { - Assert.isTrue(Objects.nonNull(metadata), "表元数据信息异常,构建Select SQL失败"); + /** + * queryColumnValues + * + * @param tableName tableName + * @param compositeKeys compositeKeys + * @param metadata metadata + * @return query result + */ + public List> queryColumnValues(String tableName, List compositeKeys, + TableMetadata metadata) { + Assert.isTrue(Objects.nonNull(metadata), "Abnormal table metadata information, failed to build select SQL"); final List primaryMetas = metadata.getPrimaryMetas(); - Assert.isTrue(!CollectionUtils.isEmpty(primaryMetas), "表主键元数据信息异常,构建Select SQL失败"); + Assert.isTrue(!CollectionUtils.isEmpty(primaryMetas), + "The metadata information of the table primary key is abnormal, and the construction of select SQL failed"); - // 单一主键表数据查询 + // Single primary key table data query if (primaryMetas.size() == 1) { final ColumnsMetaData primaryData = primaryMetas.get(0); - String querySql = new SelectDmlBuilder() - .schema(extractProperties.getSchema()) - .columns(metadata.getColumnsMetas()) - .tableName(tableName) - .conditionPrimary(primaryData) - .build(); + String querySql = + new SelectDmlBuilder().schema(extractProperties.getSchema()).columns(metadata.getColumnsMetas()) + .tableName(tableName).conditionPrimary(primaryData).build(); return queryColumnValues(querySql, compositeKeys); } else { - // 复合主键表数据查询 + // Compound primary key table data query final SelectDmlBuilder dmlBuilder = new SelectDmlBuilder(); - String querySql = 
dmlBuilder - .schema(extractProperties.getSchema()) - .columns(metadata.getColumnsMetas()) - .tableName(tableName) - .conditionCompositePrimary(primaryMetas) - .build(); + String querySql = dmlBuilder.schema(extractProperties.getSchema()).columns(metadata.getColumnsMetas()) + .tableName(tableName).conditionCompositePrimary(primaryMetas).build(); List batchParam = dmlBuilder.conditionCompositePrimaryValue(primaryMetas, compositeKeys); return queryColumnValuesByCompositePrimary(querySql, batchParam); } } /** - * 复合主键表数据查询 + * Compound primary key table data query * - * @param selectDml 查询SQL - * @param batchParam 复合主键查询参数 - * @return 查询数据结果 + * @param selectDml Query SQL + * @param batchParam Compound PK query parameters + * @return Query data results */ private List> queryColumnValuesByCompositePrimary(String selectDml, List batchParam) { - // 查询当前任务数据,并对数据进行规整 + // Query the current task data and organize the data HashMap paramMap = new HashMap<>(); paramMap.put(DmlBuilder.PRIMARY_KEYS, batchParam); - return queryColumnValues(selectDml, paramMap); } /** - * 单一主键表数据查询 + * Single primary key table data query * - * @param selectDml 查询SQL - * @param primaryKeys 查询主键集合 - * @return 查询数据结果 + * @param selectDml Query SQL + * @param primaryKeys Query primary key collection + * @return Query data results */ private List> queryColumnValues(String selectDml, List primaryKeys) { - // 查询当前任务数据,并对数据进行规整 + // Query the current task data and organize the data HashMap paramMap = new HashMap<>(); paramMap.put(DmlBuilder.PRIMARY_KEYS, primaryKeys); - return queryColumnValues(selectDml, paramMap); } /** - * 主键表数据查询 + * Primary key table data query * - * @param selectDml 查询SQL - * @param paramMap 查询参数 - * @return 查询结果 + * @param selectDml Query SQL + * @param paramMap query parameters + * @return query result */ private List> queryColumnValues(String selectDml, Map paramMap) { - // 使用JDBC查询当前任务抽取数据 + // Use JDBC to query the current task to extract data NamedParameterJdbcTemplate jdbc = 
new NamedParameterJdbcTemplate(jdbcTemplateOne); return jdbc.query(selectDml, paramMap, (rs, rowNum) -> { - // 获取当前结果集对应的元数据信息 + // Get the metadata information corresponding to the current result set ResultSetMetaData metaData = rs.getMetaData(); - // 结果集处理器 + // Result set processor ResultSetHandler handler = new ResultSetHandler(); - // 查询结果集 根据元数据信息 进行数据转换 + // Data conversion of query result set according to metadata information return handler.putOneResultSetToMap(rs, metaData); }); } /** - * 构建指定表的 Replace SQL语句 + * Build the replace SQL statement of the specified table * - * @param tableName 表名称 - * @param compositeKeySet 复合主键集合 - * @param metadata 元数据信息 - * @return 返回SQL列表 + * @param tableName tableName + * @param compositeKeySet composite key set + * @param metadata metadata + * @return Return to SQL list */ - public List buildReplace(String schema, String tableName, Set compositeKeySet, TableMetadata metadata) { + public List buildReplace(String schema, String tableName, Set compositeKeySet, + TableMetadata metadata) { List resultList = new ArrayList<>(); final String localSchema = getLocalSchema(schema); - ReplaceDmlBuilder builder = new ReplaceDmlBuilder().schema(localSchema) - .tableName(tableName) - .columns(metadata.getColumnsMetas()); + ReplaceDmlBuilder builder = + new ReplaceDmlBuilder().schema(localSchema).tableName(tableName).columns(metadata.getColumnsMetas()); - List> columnValues = queryColumnValues(tableName, new ArrayList<>(compositeKeySet), metadata); - Map> compositeKeyValues = transtlateColumnValues(columnValues, metadata.getPrimaryMetas()); + List> columnValues = + queryColumnValues(tableName, new ArrayList<>(compositeKeySet), metadata); + Map> compositeKeyValues = + transtlateColumnValues(columnValues, metadata.getPrimaryMetas()); compositeKeySet.forEach(compositeKey -> { Map columnValue = compositeKeyValues.get(compositeKey); if (Objects.nonNull(columnValue) && !columnValue.isEmpty()) { @@ -143,25 +173,26 @@ public class 
DataManipulationService { return resultList; } - /** - * 构建指定表的 Insert SQL语句 + * Build the insert SQL statement of the specified table * - * @param tableName 表名称 - * @param compositeKeySet 复合主键集合 - * @param metadata 元数据信息 - * @return 返回SQL列表 + * @param tableName tableName + * @param compositeKeySet composite key set + * @param metadata metadata + * @return Return to SQL list */ - public List buildInsert(String schema, String tableName, Set compositeKeySet, TableMetadata metadata) { + public List buildInsert(String schema, String tableName, Set compositeKeySet, + TableMetadata metadata) { List resultList = new ArrayList<>(); final String localSchema = getLocalSchema(schema); - InsertDmlBuilder builder = new InsertDmlBuilder().schema(localSchema) - .tableName(tableName) - .columns(metadata.getColumnsMetas()); + InsertDmlBuilder builder = + new InsertDmlBuilder().schema(localSchema).tableName(tableName).columns(metadata.getColumnsMetas()); - List> columnValues = queryColumnValues(tableName, new ArrayList<>(compositeKeySet), metadata); - Map> compositeKeyValues = transtlateColumnValues(columnValues, metadata.getPrimaryMetas()); + List> columnValues = + queryColumnValues(tableName, new ArrayList<>(compositeKeySet), metadata); + Map> compositeKeyValues = + transtlateColumnValues(columnValues, metadata.getPrimaryMetas()); compositeKeySet.forEach(compositeKey -> { Map columnValue = compositeKeyValues.get(compositeKey); if (Objects.nonNull(columnValue) && !columnValue.isEmpty()) { @@ -171,7 +202,8 @@ public class DataManipulationService { return resultList; } - private Map> transtlateColumnValues(List> columnValues, List primaryMetas) { + private Map> transtlateColumnValues(List> columnValues, + List primaryMetas) { final List primaryKeys = getCompositeKeyColumns(primaryMetas); Map> map = new HashMap<>(); columnValues.forEach(values -> { @@ -185,76 +217,66 @@ public class DataManipulationService { } private String getCompositeKey(Map columnValues, List primaryKeys) { - 
return primaryKeys.stream().map(key -> columnValues.get(key)).collect(Collectors.joining(ExtConstants.PRIMARY_DELIMITER)); + return primaryKeys.stream().map(key -> columnValues.get(key)) + .collect(Collectors.joining(ExtConstants.PRIMARY_DELIMITER)); } - /** - * 构建指定表的批量 Delete SQL语句 + * Build a batch delete SQL statement for the specified table * - * @param tableName 表名称 - * @param compositeKeySet 复合主键集合 - * @param primaryMetas 主键元数据信息 - * @return 返回SQL列表 + * @param tableName tableName + * @param compositeKeySet composite key set + * @param primaryMetas Primary key metadata information + * @return Return to SQL list */ - public List buildBatchDelete(String schema, String tableName, Set compositeKeySet, List primaryMetas) { + public List buildBatchDelete(String schema, String tableName, Set compositeKeySet, + List primaryMetas) { List resultList = new ArrayList<>(); final String localSchema = getLocalSchema(schema); if (primaryMetas.size() == 1) { final ColumnsMetaData primaryMeta = primaryMetas.stream().findFirst().get(); compositeKeySet.forEach(compositeKey -> { - final String deleteDml = new BatchDeleteDmlBuilder() - .tableName(tableName) - .schema(localSchema) - .conditionPrimary(primaryMeta) - .build(); + final String deleteDml = + new BatchDeleteDmlBuilder().tableName(tableName).schema(localSchema).conditionPrimary(primaryMeta) + .build(); resultList.add(deleteDml); }); } else { compositeKeySet.forEach(compositeKey -> { - resultList.add(new BatchDeleteDmlBuilder() - .tableName(tableName) - .schema(localSchema) - .conditionCompositePrimary(primaryMetas) - .build()); + resultList.add(new BatchDeleteDmlBuilder().tableName(tableName).schema(localSchema) + .conditionCompositePrimary(primaryMetas).build()); }); } - return resultList; } /** - * 构建指定表的 Delete SQL语句 + * Build the delete SQL statement of the specified table * - * @param tableName 表名称 - * @param compositeKeySet 复合主键集合 - * @param primaryMetas 主键元数据信息 - * @return 返回SQL列表 + * @param tableName tableName + * 
@param compositeKeySet composite key set + * @param primaryMetas Primary key metadata information + * @return Return to SQL list */ - public List buildDelete(String schema, String tableName, Set compositeKeySet, List primaryMetas) { + public List buildDelete(String schema, String tableName, Set compositeKeySet, + List primaryMetas) { List resultList = new ArrayList<>(); final String localSchema = getLocalSchema(schema); if (primaryMetas.size() == 1) { final ColumnsMetaData primaryMeta = primaryMetas.stream().findFirst().get(); compositeKeySet.forEach(compositeKey -> { - final String deleteDml = new DeleteDmlBuilder() - .tableName(tableName) - .schema(localSchema) - .condition(primaryMeta, compositeKey) - .build(); + final String deleteDml = + new DeleteDmlBuilder().tableName(tableName).schema(localSchema).condition(primaryMeta, compositeKey) + .build(); resultList.add(deleteDml); }); } else { compositeKeySet.forEach(compositeKey -> { - resultList.add(new DeleteDmlBuilder() - .tableName(tableName) - .schema(localSchema) - .conditionCompositePrimary(compositeKey, primaryMetas) - .build()); + resultList.add(new DeleteDmlBuilder().tableName(tableName).schema(localSchema) + .conditionCompositePrimary(compositeKey, primaryMetas).build()); }); } - return resultList; } @@ -266,10 +288,10 @@ public class DataManipulationService { } /** - * 查询当前表结构元数据信息,并进行Hash + * Query the metadata information of the current table structure and hash * - * @param tableName 表名称 - * @return 表结构Hash + * @param tableName tableName + * @return Table structure hash */ public TableMetadataHash queryTableMetadataHash(String tableName) { final TableMetadataHash tableMetadataHash = new TableMetadataHash().setTableName(tableName); @@ -278,13 +300,11 @@ public class DataManipulationService { if (!CollectionUtils.isEmpty(columnsMetaData)) { columnsMetaData.sort(Comparator.comparing(ColumnsMetaData::getColumnName)); columnsMetaData.forEach(column -> { - buffer.append(column.getColumnName()) - 
.append(column.getColumnType()) - .append(column.getDataType()) - .append(column.getOrdinalPosition()); + buffer.append(column.getColumnName()).append(column.getColumnType()).append(column.getDataType()) + .append(column.getOrdinalPosition()); }); } - tableMetadataHash.setTableHash(HashUtil.hashBytes(buffer.toString())); + tableMetadataHash.setTableHash(HASH_UTIL.hashBytes(buffer.toString())); return tableMetadataHash; } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilder.java index 37d59576f9e92718a21984a36bdbeda2bd0249b8..b5c248a400b18dfaacd3f8df94c1b0832d5db6d3 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.task; import org.opengauss.datachecker.common.entry.extract.ExtractIncrementTask; @@ -11,13 +26,20 @@ import org.springframework.stereotype.Service; import org.springframework.util.Assert; import org.springframework.util.CollectionUtils; -import java.util.*; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Objects; +import java.util.Set; import java.util.stream.Collectors; import java.util.stream.IntStream; /** + * Data extraction task builder + * * @author wang chao - * @description 数据抽取任务构建器 * @date 2022/5/8 19:27 * @since 11 **/ @@ -27,47 +49,41 @@ public class ExtractTaskBuilder { private static final String TASK_NAME_PREFIX = "TASK_TABLE_"; private static final String INCREMENT_TASK_NAME_PREFIX = "INCREMENT_TASK_TABLE_"; - /** *

-     * 根据元数据缓存信息构建 表数据抽取任务。并初始化数据抽取任务执行状态。
-     * 任务构建依赖于元数据缓存信息,以及元数据缓存中加载的当前表记录总数。单个分片任务查询数据总数不超过{@value EXTRACT_MAX_ROW_COUNT}
-     * {@code taskCountMap} 用于统计所有待抽取表的分片查询的任务数量
-     * {@code tableRows} 为表元数据信息中统计的当前表数据量
+     * Builds table data extraction tasks from the metadata cache information and initializes
+     * the execution state of each extraction task.
+     * Task construction depends on the metadata cache and on the total number of records
+     * of each table loaded into that cache.
+     * A single fragment task queries at most {@value EXTRACT_MAX_ROW_COUNT} rows.
+     * {@code taskCountMap} counts the number of fragment query tasks for each table to be extracted.
+     * {@code tableRows} is the row count of the current table recorded in its metadata.
      * 
* - * @param tableNames 待构建抽取任务表集合 - * @return 任务列表 + * @param tableNames Extraction task table set to be built + * @return task list */ public List builder(Set tableNames) { - Assert.isTrue(!CollectionUtils.isEmpty(tableNames), "构建数据抽取任务表不能为空"); + Assert.isTrue(!CollectionUtils.isEmpty(tableNames), "Build data extraction task table cannot be empty"); List taskList = new ArrayList<>(); - final List tableNameOrderList = tableNames.stream().sorted((tableName1, tableName2) -> { - TableMetadata metadata1 = MetaDataCache.get(tableName1); - TableMetadata metadata2 = MetaDataCache.get(tableName2); - // 排序异常情况处理 - if (Objects.isNull(metadata1) && Objects.isNull(metadata2)) { - return 0; - } - if (Objects.isNull(metadata1)) { - return -1; - } - if (Objects.isNull(metadata2)) { - return 1; - } - return (int) (metadata1.getTableRows() - metadata2.getTableRows()); - }).collect(Collectors.toList()); - // taskCountMap用于统计表分片查询的任务数量 + final List tableNameOrderList = + tableNames.stream().filter(MetaDataCache::containsKey).sorted((tableName1, tableName2) -> { + TableMetadata metadata1 = MetaDataCache.get(tableName1); + TableMetadata metadata2 = MetaDataCache.get(tableName2); + return (int) (metadata1.getTableRows() - metadata2.getTableRows()); + }).collect(Collectors.toList()); + + // taskCountMap is used to count the number of tasks in table fragment query Map taskCountMap = new HashMap<>(); tableNameOrderList.forEach(tableName -> { TableMetadata metadata = MetaDataCache.get(tableName); if (Objects.nonNull(metadata)) { - // tableRows为表元数据信息中统计的当前表数据量 + // tableRows is the current table data amount counted in the table metadata information long tableRows = metadata.getTableRows(); if (tableRows > EXTRACT_MAX_ROW_COUNT) { - // 根据表元数据信息构建抽取任务 + // Construct extraction tasks based on table metadata information List taskEntryList = buildTaskList(metadata); taskCountMap.put(tableName, taskEntryList.size()); taskList.addAll(taskEntryList); @@ -78,29 +94,27 @@ public class 
ExtractTaskBuilder { } }); - // 初始化数据抽取任务执行状态 + // Initialization data extraction task execution status TableExtractStatusCache.init(taskCountMap); return taskList; } private ExtractTask buildTask(TableMetadata metadata) { - return new ExtractTask().setDivisionsTotalNumber(1) - .setTableMetadata(metadata) - .setDivisionsTotalNumber(1) - .setDivisionsOrdinal(1) - .setOffset(metadata.getTableRows()) - .setStart(0) - .setTableName(metadata.getTableName()) - .setTaskName(taskNameBuilder(metadata.getTableName(), 1, 1)); + return new ExtractTask().setTableMetadata(metadata).setOffset(metadata.getTableRows()) + .setTableName(metadata.getTableName()) + .setTaskName(taskNameBuilder(metadata.getTableName(), 1, 1)); } - /** - * 根据表元数据信息 构建表数据抽取任务。 - * 根据元数据信息中表数据总数估值进行任务分片,单个分片任务查询数据总数不超过 {@value EXTRACT_MAX_ROW_COUNT} + *
+     * Builds table data extraction tasks from the table metadata information.
+     * The task is segmented according to the estimated total number of rows
+     * recorded in the table metadata.
+     * A single segmented task queries at most {@value EXTRACT_MAX_ROW_COUNT} rows.
+     * 
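The segmentation arithmetic described in this Javadoc can be sketched standalone. The value of `EXTRACT_MAX_ROW_COUNT` (100_000) and the `Shard` record are assumptions added for illustration; only the start/offset/remainder semantics mirror `calcTaskCount` and `buildExtractTask` in this diff.

```java
// Illustrative sketch of the task sharding arithmetic; not the project's code.
import java.util.ArrayList;
import java.util.List;

public class ShardingSketch {
    static final long EXTRACT_MAX_ROW_COUNT = 100_000L; // assumed value for illustration

    record Shard(int ordinal, long start, long offset) { }

    static List<Shard> split(long tableRows) {
        // calcTaskCount uses integer division, so the last shard absorbs the remainder
        int taskCount = (int) (tableRows / EXTRACT_MAX_ROW_COUNT);
        List<Shard> shards = new ArrayList<>();
        for (int ordinal = 1; ordinal <= taskCount; ordinal++) {
            long start = (ordinal - 1) * EXTRACT_MAX_ROW_COUNT;
            long remaining = tableRows - start;
            long offset = (ordinal == taskCount) ? remaining : EXTRACT_MAX_ROW_COUNT;
            shards.add(new Shard(ordinal, start, offset));
        }
        return shards;
    }

    public static void main(String[] args) {
        // 250_000 rows split into 2 shards: (1, 0, 100000) and (2, 100000, 150000)
        split(250_000L).forEach(System.out::println);
    }
}
```

Note that because the last shard absorbs the remainder, its offset may exceed `EXTRACT_MAX_ROW_COUNT`; the Javadoc bound is therefore an estimate based on the cached row count, not a hard limit.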
* - * @param metadata 元数据信息 - * @return 任务列表 + * @param metadata metadata information + * @return task list */ private List buildTaskList(TableMetadata metadata) { List taskList = new ArrayList<>(); @@ -110,79 +124,78 @@ public class ExtractTaskBuilder { IntStream.rangeClosed(1, taskCount).forEach(idx -> { long remainingExtractNumber = tableRows - (idx - 1) * EXTRACT_MAX_ROW_COUNT; ExtractTask extractTask = buildExtractTask(taskCount, idx, EXTRACT_MAX_ROW_COUNT, remainingExtractNumber); - extractTask.setDivisionsTotalNumber(taskCount) - .setTableMetadata(metadata) - .setTableName(metadata.getTableName()) - .setTaskName(taskNameBuilder(metadata.getTableName(), taskCount, idx)); + extractTask.setDivisionsTotalNumber(taskCount).setTableMetadata(metadata) + .setTableName(metadata.getTableName()) + .setTaskName(taskNameBuilder(metadata.getTableName(), taskCount, idx)); taskList.add(extractTask); }); return taskList; } /** - * 根据表记录总数,计算分片任务数量 + * Calculate the number of segmented tasks according to the total number recorded in the table * - * @param tableRows 表记录总数 - * @return 分拆任务总数 + * @param tableRows Total table records + * @return Total number of split tasks */ private int calcTaskCount(long tableRows) { return (int) (tableRows / EXTRACT_MAX_ROW_COUNT); } /** - * 任务名称构建 + * Task name build *
-     * 若任务分拆总数大于1,名称由:前缀信息 {@value TASK_NAME_PREFIX} 、表名称 、表序列 构建
-     * 若任务分拆总数为1,即未拆分 ,则根据 前缀信息 {@value TASK_NAME_PREFIX} 、表名称 构建
+     * If the total number of task splits is greater than 1, the name is built from
+     * the prefix {@value TASK_NAME_PREFIX}, the table name, and the split ordinal.
+     * If the total number of task splits is 1, that is, the table is not split,
+     * the name is built from the prefix {@value TASK_NAME_PREFIX} and the table name only.
      * 
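The naming rule above can be sketched as a standalone snippet. The prefix and the ordinal-suffix rule are taken from this diff; the wrapper class itself is added only for illustration.

```java
// Sketch of taskNameBuilder's naming rule; not the project's code.
import java.util.Locale;

public class TaskNameSketch {
    private static final String TASK_NAME_PREFIX = "TASK_TABLE_";

    static String taskName(String tableName, int taskCount, int ordinal) {
        String base = TASK_NAME_PREFIX.concat(tableName.toUpperCase(Locale.ROOT));
        // split tables get an ordinal suffix; an unsplit table uses the bare name
        return taskCount > 1 ? base + "_" + ordinal : base;
    }

    public static void main(String[] args) {
        System.out.println(taskName("t_user", 3, 2)); // TASK_TABLE_T_USER_2
        System.out.println(taskName("t_user", 1, 1)); // TASK_TABLE_T_USER
    }
}
```

Using `Locale.ROOT`, as this diff does, keeps the generated names stable regardless of the JVM's default locale (e.g. the Turkish dotless-i problem).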
* - * @param tableName 表名 - * @param taskCount 任务分拆总数 - * @param ordinal 表任务分拆序列 - * @return 任务名称 + * @param tableName tableName + * @param taskCount Total number of task splits + * @param ordinal Table task split sequence + * @return task name */ private String taskNameBuilder(@NonNull String tableName, int taskCount, int ordinal) { if (taskCount > 1) { - return TASK_NAME_PREFIX.concat(tableName.toUpperCase()).concat("_").concat(String.valueOf(ordinal)); + return TASK_NAME_PREFIX.concat(tableName.toUpperCase(Locale.ROOT)).concat("_") + .concat(String.valueOf(ordinal)); } else { - return TASK_NAME_PREFIX.concat(tableName.toUpperCase()); + return TASK_NAME_PREFIX.concat(tableName.toUpperCase(Locale.ROOT)); } } /** - * @param taskCount 任务总数 - * @param ordinal 任务序列 - * @param planedExtractNumber 当前任务计划抽取记录总数 - * @param remainingExtractNumber 实际剩余抽取记录总数 - * @return 构建任务对象 + * @param taskCount Total number of task splits + * @param ordinal Table task split sequence + * @param planedNumber Total number of current task plan extraction records + * @param remainingNumber Total number of actual remaining extraction records + * @return Build task object */ - private ExtractTask buildExtractTask(int taskCount, int ordinal, long planedExtractNumber, long remainingExtractNumber) { - ExtractTask extractTask = new ExtractTask() - .setDivisionsOrdinal(ordinal) - .setStart(((ordinal - 1) * planedExtractNumber)) - .setOffset(ordinal == taskCount ? remainingExtractNumber : planedExtractNumber); - return extractTask; + private ExtractTask buildExtractTask(int taskCount, int ordinal, long planedNumber, long remainingNumber) { + long start = (ordinal - 1) * planedNumber; + long offset = ordinal == taskCount ? 
remainingNumber : planedNumber; + return new ExtractTask().setDivisionsOrdinal(ordinal).setStart(start).setOffset(offset); } /** - * 增量任务构建 + * Incremental task construction * * @param schema schema - * @param sourceDataLogs 增量日志 - * @return 增量任务 + * @param sourceDataLogs Incremental log + * @return Incremental task */ public List buildIncrementTask(String schema, List sourceDataLogs) { List incrementTasks = new ArrayList<>(); sourceDataLogs.forEach(datalog -> { - incrementTasks.add(new ExtractIncrementTask().setSchema(schema) - .setSourceDataLog(datalog) - .setTableName(datalog.getTableName()) - .setTaskName(incrementTaskNameBuilder(datalog.getTableName()))); + incrementTasks.add(new ExtractIncrementTask().setSchema(schema).setSourceDataLog(datalog) + .setTableName(datalog.getTableName()).setTaskName( + incrementTaskNameBuilder(datalog.getTableName()))); }); return incrementTasks; } private String incrementTaskNameBuilder(@NonNull String tableName) { - return INCREMENT_TASK_NAME_PREFIX.concat(tableName.toUpperCase()); + return INCREMENT_TASK_NAME_PREFIX.concat(tableName.toUpperCase(Locale.ROOT)); } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskThread.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskRunnable.java similarity index 37% rename from datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskThread.java rename to datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskRunnable.java index e906a1173742e8d83420e9deb543ecba236d299d..a2bebe52e036473fd6ceedd17c25ceea3a5123db 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskThread.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractTaskRunnable.java @@ -1,125 +1,126 @@ -package org.opengauss.datachecker.extract.task; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. 
+ * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ +package org.opengauss.datachecker.extract.task; import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.entry.enums.Endpoint; -import org.opengauss.datachecker.common.entry.extract.*; +import org.opengauss.datachecker.common.entry.extract.ExtractTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.Topic; import org.opengauss.datachecker.common.util.ThreadUtil; import org.opengauss.datachecker.extract.cache.TableExtractStatusCache; import org.opengauss.datachecker.extract.client.CheckingFeignClient; -import org.opengauss.datachecker.extract.kafka.KafkaProducerService; -import org.opengauss.datachecker.extract.util.HashHandler; -import org.opengauss.datachecker.extract.util.MetaDataUtil; +import org.opengauss.datachecker.extract.kafka.KafkaProducerWapper; import org.springframework.jdbc.core.JdbcTemplate; import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate; -import org.springframework.util.CollectionUtils; -import java.sql.*; -import java.util.*; +import java.sql.ResultSetMetaData; +import java.util.HashMap; +import java.util.List; +import java.util.Map; /** + * Data extraction thread class + * * @author wang chao - * @description 数据抽取线程类 * @date 2022/5/12 19:17 * @since 11 **/ @Slf4j -public class ExtractTaskThread implements Runnable { +public class ExtractTaskRunnable extends 
KafkaProducerWapper implements Runnable { + private static final String EXTRACT_THREAD_NAME_PREFIX = "EXTRACT_"; - /** - * 数据推送Kafka Topic信息 - */ private final Topic topic; - /** - * 当前抽取任务对象 - */ private final ExtractTask task; - /** - * 当前执行端点信息 - */ private final Endpoint endpoint; private final String schema; - private final JdbcTemplate jdbcTemplate; - private final KafkaProducerService kafkaProducerService; private final CheckingFeignClient checkingFeignClient; - /** - * 线程构造函数 + * Thread Constructor * - * @param task Kafka Topic信息 - * @param topic 数据抽取流程编号 - * @param support 线程参数封装 + * @param task task information + * @param topic Kafka topic information + * @param support Thread helper class */ - public ExtractTaskThread(ExtractTask task, Topic topic, ExtractThreadSupport support) { + public ExtractTaskRunnable(ExtractTask task, Topic topic, ExtractThreadSupport support) { + super(support.getKafkaProducerConfig()); this.task = task; this.topic = topic; - this.schema = support.getExtractProperties().getSchema(); - this.endpoint = support.getExtractProperties().getEndpoint(); - this.jdbcTemplate = new JdbcTemplate(support.getDataSourceOne()); - this.kafkaProducerService = support.getKafkaProducerService(); - this.checkingFeignClient = support.getCheckingFeignClient(); + schema = support.getExtractProperties().getSchema(); + endpoint = support.getExtractProperties().getEndpoint(); + jdbcTemplate = new JdbcTemplate(support.getDataSourceOne()); + checkingFeignClient = support.getCheckingFeignClient(); } - @Override public void run() { - log.info("start extract task={}", task.getTaskName()); - + Thread.currentThread().setName(EXTRACT_THREAD_NAME_PREFIX.concat(task.getTaskName())); + log.info("Data extraction task {} is starting", task.getTaskName()); TableMetadata tableMetadata = task.getTableMetadata(); - - // 根据当前任务中表元数据信息,构造查询SQL + // Construct query SQL according to the metadata information of the table in the current task String sql = new 
SelectSqlBulder(tableMetadata, schema, task.getStart(), task.getOffset()).builder(); - // 通过JDBC SQL 查询数据 + log.debug("selectSql {}", sql); + // Query data through JDBC SQL List> dataRowList = queryColumnValues(sql); - log.info("query extract task={} completed", task.getTaskName()); - // 对查询出的数据结果 进行哈希计算 + + log.info("Data extraction task {} completes basic data query through JDBC", task.getTaskName()); + // Hash the queried data results RowDataHashHandler handler = new RowDataHashHandler(); List recordHashList = handler.handlerQueryResult(tableMetadata, dataRowList); - log.info("hash extract task={} completed", task.getTaskName()); - // 推送本地缓存 根据分片顺序将数据推送到kafka + + // Push the data to Kafka according to the fragmentation order + syncSend(topic, recordHashList); + String tableName = task.getTableName(); - // 当前分片任务,之前的任务状态未执行完成,请稍后再次检查尝试 - kafkaProducerService.syncSend(topic, recordHashList); - log.info("send kafka extract task={} completed", task.getTaskName()); - while (task.isDivisions() && !TableExtractStatusCache.checkComplated(tableName, task.getDivisionsOrdinal())) { - log.debug("task=[{}] wait divisions of before , send data to kafka completed", task.getTaskName()); + // If the current task is a sharding task, check the sharding status of the current task before sharding and + // whether the execution is completed. + // If the previous sharding task is not completed, wait 100 milliseconds, + // check again and try until all the previous sharding tasks are completed, + // and then refresh the current sharding status. 
+ while (task.isDivisions() && !TableExtractStatusCache.checkCompleted(tableName, task.getDivisionsOrdinal())) { + log.info("task=[{}] wait divisions of before , send data to kafka completed", task.getTaskName()); ThreadUtil.sleep(100); } - // 推送完成则更新当前任务的抽取状态 + // When the push is completed, the extraction status of the current task will be updated TableExtractStatusCache.update(tableName, task.getDivisionsOrdinal()); log.info("update extract task={} status completed", task.getTaskName()); if (!task.isDivisions()) { - // 通知校验服务,当前表对应任务数据抽取已经完成 - checkingFeignClient.refushTableExtractStatus(tableName, endpoint); - log.info("refush table extract status tableName={} status completed", task.getTaskName()); + // Notify the verification service that the task data extraction corresponding to + // the current table has been completed + checkingFeignClient.refreshTableExtractStatus(tableName, endpoint); + log.info("refresh table extract status tableName={} status completed", task.getTaskName()); } if (task.isDivisions() && task.getDivisionsOrdinal() == task.getDivisionsTotalNumber()) { - // 当前表的数据抽取任务完成(所有子任务均完成) - // 通知校验服务,当前表对应任务数据抽取已经完成 - checkingFeignClient.refushTableExtractStatus(tableName, endpoint); - log.info("refush table=[{}] extract status completed,task=[{}]", tableName, task.getTaskName()); + // The data extraction task of the current table is completed (all subtasks are completed) + // Notify the verification service that the task data extraction corresponding to + // the current table has been completed + checkingFeignClient.refreshTableExtractStatus(tableName, endpoint); + log.info("refresh table=[{}] extract status completed,task=[{}]", tableName, task.getTaskName()); } } - /** - * 通过JDBC SQL 查询数据 - * - * @param sql 执行SQL - * @return 查询结果 - */ private List> queryColumnValues(String sql) { Map map = new HashMap<>(); - // 使用JDBC查询当前任务抽取数据 NamedParameterJdbcTemplate jdbc = new NamedParameterJdbcTemplate(jdbcTemplate); - // 查询当前任务数据,并对数据进行规整 return 
jdbc.query(sql, map, (rs, rowNum) -> { - // 获取当前结果集对应的元数据信息 ResultSetMetaData metaData = rs.getMetaData(); - // 结果集处理器 ResultSetHandler handler = new ResultSetHandler(); - // 查询结果集 根据元数据信息 进行数据转换 return handler.putOneResultSetToMap(rs, metaData); }); } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractThreadSupport.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractThreadSupport.java index 1de539a0a9a4b43eb63d48a627f5031b7a124ba3..8edfeecfc589d728aac679c8adf34eb9792b1135 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractThreadSupport.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ExtractThreadSupport.java @@ -1,16 +1,31 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.task; import lombok.Getter; import org.opengauss.datachecker.extract.client.CheckingFeignClient; import org.opengauss.datachecker.extract.config.ExtractProperties; -import org.opengauss.datachecker.extract.kafka.KafkaProducerService; +import org.opengauss.datachecker.extract.config.KafkaProducerConfig; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import javax.sql.DataSource; /** - * 抽取线程 参数封装 + * ExtractThreadSupport * * @author :wangchao * @date :Created in 2022/5/30 @@ -23,7 +38,7 @@ public class ExtractThreadSupport { private DataSource dataSourceOne; @Autowired - private KafkaProducerService kafkaProducerService; + private KafkaProducerConfig kafkaProducerConfig; @Autowired private CheckingFeignClient checkingFeignClient; diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskThread.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskRunnable.java similarity index 50% rename from datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskThread.java rename to datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskRunnable.java index 7ea669c9304248ec9745d9b65d0d071abc767bbf..5c023669e869921f580286ac6c0b63fe9c612bd8 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskThread.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractTaskRunnable.java @@ -1,37 +1,54 @@ -package org.opengauss.datachecker.extract.task; +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ +package org.opengauss.datachecker.extract.task; import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.common.entry.enums.Endpoint; -import org.opengauss.datachecker.common.entry.extract.*; +import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; +import org.opengauss.datachecker.common.entry.extract.ExtractIncrementTask; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.entry.extract.SourceDataLog; +import org.opengauss.datachecker.common.entry.extract.TableMetadata; +import org.opengauss.datachecker.common.entry.extract.Topic; import org.opengauss.datachecker.common.exception.ExtractException; import org.opengauss.datachecker.extract.cache.TableExtractStatusCache; import org.opengauss.datachecker.extract.client.CheckingFeignClient; import org.opengauss.datachecker.extract.dml.DmlBuilder; import org.opengauss.datachecker.extract.dml.SelectDmlBuilder; -import org.opengauss.datachecker.extract.kafka.KafkaProducerService; +import org.opengauss.datachecker.extract.kafka.KafkaProducerWapper; import org.opengauss.datachecker.extract.service.MetaDataService; import org.springframework.jdbc.core.JdbcTemplate; import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate; import org.springframework.util.CollectionUtils; import java.sql.ResultSetMetaData; -import java.util.*; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; /** + * Incremental data extraction thread class + * * @author wang chao - * @description 数据抽取线程类 * @date 2022/5/12 19:17 * @since 11 **/ @Slf4j -public 
class IncrementExtractTaskThread implements Runnable { - - /** - * SQL 单次查询语句,构建查询参数最大个数 - */ - private static final int MAX_QUERY_ROW_COUNT = 1000; - +public class IncrementExtractTaskRunnable extends KafkaProducerWapper implements Runnable { private final Topic topic; private final String schema; private final String taskName; @@ -39,70 +56,69 @@ public class IncrementExtractTaskThread implements Runnable { private final Endpoint endpoint; private final SourceDataLog sourceDataLog; private final JdbcTemplate jdbcTemplate; - private final KafkaProducerService kafkaProducerService; private final CheckingFeignClient checkingFeignClient; private final MetaDataService metaDataService; - private boolean singlePrimaryKey; + private boolean isSinglePrimaryKey; /** - * 线程构造函数 + * IncrementExtractTaskRunnable * - * @param task Kafka Topic信息 - * @param topic 数据抽取流程编号 - * @param support 线程参数封装 + * @param task task + * @param topic topic + * @param support support */ - public IncrementExtractTaskThread(ExtractIncrementTask task, Topic topic, IncrementExtractThreadSupport support) { + public IncrementExtractTaskRunnable(ExtractIncrementTask task, Topic topic, IncrementExtractThreadSupport support) { + super(support.getKafkaProducerConfig()); this.topic = topic; - this.schema = support.getExtractProperties().getSchema(); - this.endpoint = support.getExtractProperties().getEndpoint(); - this.tableName = task.getTableName(); - this.taskName = task.getTaskName(); - this.sourceDataLog = task.getSourceDataLog(); - this.jdbcTemplate = new JdbcTemplate(support.getDataSourceOne()); - this.kafkaProducerService = support.getKafkaProducerService(); - this.checkingFeignClient = support.getCheckingFeignClient(); - this.metaDataService = support.getMetaDataService(); + schema = support.getExtractProperties().getSchema(); + endpoint = support.getExtractProperties().getEndpoint(); + tableName = task.getTableName(); + taskName = task.getTaskName(); + sourceDataLog = task.getSourceDataLog(); + 
jdbcTemplate = new JdbcTemplate(support.getDataSourceOne()); + checkingFeignClient = support.getCheckingFeignClient(); + metaDataService = support.getMetaDataService(); } - @Override public void run() { log.info("start extract task={}", taskName); TableMetadata tableMetadata = getTableMetadata(); - // 根据当前任务中表元数据信息,构造查询SQL + // Construct query SQL according to the metadata information of the table in the current task SelectDmlBuilder sqlBuilder = buildSelectSql(tableMetadata, schema); - // 查询当前任务数据,并对数据进行规整 + // Query the current task data and organize the data HashMap paramMap = new HashMap<>(); final List compositePrimaryValues = sourceDataLog.getCompositePrimaryValues(); - paramMap.put(DmlBuilder.PRIMARY_KEYS, getSqlParam(sqlBuilder, tableMetadata.getPrimaryMetas(), compositePrimaryValues)); + paramMap.put(DmlBuilder.PRIMARY_KEYS, + getSqlParam(sqlBuilder, tableMetadata.getPrimaryMetas(), compositePrimaryValues)); - // 查询当前任务数据,并对数据进行规整 + // Query the current task data and organize the data List> dataRowList = queryColumnValues(sqlBuilder.build(), paramMap); log.info("query extract task={} completed row count=[{}]", taskName, dataRowList.size()); - // 对查询出的数据结果 进行哈希计算 + // Hash the queried data results RowDataHashHandler handler = new RowDataHashHandler(); List recordHashList = handler.handlerQueryResult(tableMetadata, dataRowList); log.info("hash extract task={} completed", taskName); - // 推送本地缓存 根据分片顺序将数据推送到kafka - kafkaProducerService.syncSend(topic, recordHashList); + // Push the local cache to push the data to Kafka according to the fragmentation order + syncSend(topic, recordHashList); log.info("send kafka extract task={} completed", taskName); - // 推送完成则更新当前任务的抽取状态 + // When the push is completed, the extraction status of the current task will be updated TableExtractStatusCache.update(tableName, 1); log.info("update extract task={} status completed", tableName); - // 通知校验服务,当前表对应任务数据抽取已经完成 - checkingFeignClient.refushTableExtractStatus(tableName, 
endpoint); + // Notify the verification service that the task data extraction corresponding to + // the current table has been completed + checkingFeignClient.refreshTableExtractStatus(tableName, endpoint); log.info("refush table extract status tableName={} status completed", tableName); - } /** - * 查询SQL构建后期优化 - * 查询SQL 构建 select colums from table where pk in(...)

- * 后期优化方式:

- * 单主键方式 + * Query SQL build post optimization + * Query SQL build select colums from table where pk in(...)

+ * Post optimization method:

+ * Single primary key type * SELECT * * FROM ( * SELECT '14225351881572354' cid UNION ALL @@ -111,7 +127,7 @@ public class IncrementExtractTaskThread implements Runnable { * ) AS tmp, test.test1 t * WHERE tmp.cid = t.b_number;

*

- * 复合主键方式 + * Composite primary key type * SELECT * * FROM ( * SELECT '1523567590573785088' cid,'type_01' ctype UNION ALL @@ -120,40 +136,37 @@ public class IncrementExtractTaskThread implements Runnable { * ) AS tmp, test.test2 t * WHERE tmp.cid = t.b_number AND tmp.ctype=t.b_type; * - * @param tableMetadata 表元数据信息 - * @param schema 数据库schema - * @return SQL构建器对象 + * @param tableMetadata Table metadata information + * @param schema Database schema + * @return SQL builder object */ private SelectDmlBuilder buildSelectSql(TableMetadata tableMetadata, String schema) { - // 复合主键表数据查询 + // Compound primary key table data query SelectDmlBuilder dmlBuilder = new SelectDmlBuilder(); final List primaryMetas = tableMetadata.getPrimaryMetas(); - if (singlePrimaryKey) { + if (isSinglePrimaryKey) { final ColumnsMetaData primaryData = primaryMetas.get(0); - dmlBuilder.schema(schema) - .columns(tableMetadata.getColumnsMetas()) - .tableName(tableMetadata.getTableName()) - .conditionPrimary(primaryData); + dmlBuilder.schema(schema).columns(tableMetadata.getColumnsMetas()).tableName(tableMetadata.getTableName()) + .conditionPrimary(primaryData); } else { - // 复合主键表数据查询 - dmlBuilder.schema(schema) - .columns(tableMetadata.getColumnsMetas()) - .tableName(tableMetadata.getTableName()) - .conditionCompositePrimary(primaryMetas); + // Compound primary key table data query + dmlBuilder.schema(schema).columns(tableMetadata.getColumnsMetas()).tableName(tableMetadata.getTableName()) + .conditionCompositePrimary(primaryMetas); } return dmlBuilder; } /** - * 构建JDBC 查询参数 + * Build JDBC query parameters * - * @param sqlBuilder SQL构建器 - * @param primaryMetas 主键信息 - * @param compositePrimaryValues 查询参数 - * @return 封装后的JDBC查询参数 + * @param sqlBuilder SQL builder + * @param primaryMetas Primary key information + * @param compositePrimaryValues Query Parameter + * @return Encapsulated JDBC query parameters */ - private List getSqlParam(SelectDmlBuilder sqlBuilder, List primaryMetas, List 
compositePrimaryValues) { - if (singlePrimaryKey) { + private List getSqlParam(SelectDmlBuilder sqlBuilder, List primaryMetas, + List compositePrimaryValues) { + if (isSinglePrimaryKey) { return compositePrimaryValues; } else { return sqlBuilder.conditionCompositePrimaryValue(primaryMetas, compositePrimaryValues); @@ -161,21 +174,17 @@ public class IncrementExtractTaskThread implements Runnable { } /** - * 主键表数据查询 + * Primary key table data query * - * @param selectDml 查询SQL - * @param paramMap 查询参数 - * @return 查询结果 + * @param selectDml Query SQL + * @param paramMap Query Parameter + * @return query results */ private List> queryColumnValues(String selectDml, Map paramMap) { - // 使用JDBC查询当前任务抽取数据 NamedParameterJdbcTemplate jdbc = new NamedParameterJdbcTemplate(jdbcTemplate); return jdbc.query(selectDml, paramMap, (rs, rowNum) -> { - // 获取当前结果集对应的元数据信息 ResultSetMetaData metaData = rs.getMetaData(); - // 结果集处理器 ResultSetHandler handler = new ResultSetHandler(); - // 查询结果集 根据元数据信息 进行数据转换 return handler.putOneResultSetToMap(rs, metaData); }); } @@ -185,7 +194,7 @@ public class IncrementExtractTaskThread implements Runnable { if (Objects.isNull(metadata) || CollectionUtils.isEmpty(metadata.getPrimaryMetas())) { throw new ExtractException(tableName + " metadata not found!"); } - this.singlePrimaryKey = metadata.getPrimaryMetas().size() == 1; + isSinglePrimaryKey = metadata.getPrimaryMetas().size() == 1; return metadata; } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractThreadSupport.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractThreadSupport.java index 26b3c59c9b3f0538b4af96ab9c05d157a23d7ca3..c2e83f6eb650c9ac224fee3e30213d1006cee89a 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractThreadSupport.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/IncrementExtractThreadSupport.java @@ 
-1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.task; import lombok.Getter; @@ -6,7 +21,7 @@ import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; /** - * 抽取线程 参数封装 + * IncrementExtractThreadSupport * * @author :wangchao * @date :Created in 2022/5/30 @@ -15,7 +30,6 @@ import org.springframework.stereotype.Service; @Getter @Service public class IncrementExtractThreadSupport extends ExtractThreadSupport { - @Autowired private MetaDataService metaDataService; } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ResultSetHandler.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ResultSetHandler.java index e5cbccff53a5b11b76301cae1f3aa860c2b83d75..c2c4a9f84ba99b5a338e778bfa511ddf7a60954d 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ResultSetHandler.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/ResultSetHandler.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.task; /** @@ -15,35 +30,47 @@ import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.StringUtils; import org.springframework.lang.NonNull; -import java.sql.*; import java.sql.Date; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Time; +import java.sql.Timestamp; +import java.sql.Types; import java.text.SimpleDateFormat; import java.time.format.DateTimeFormatter; -import java.util.*; +import java.util.Calendar; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.TimeZone; import java.util.stream.IntStream; /** + * Result set object processor + * * @author wang chao - * @description 结果集对象处理器 * @since 11 **/ @Slf4j public class ResultSetHandler { private final ObjectMapper mapper = ObjectMapperWapper.getObjectMapper(); - private static final List SQL_TIME_TYPES = List.of(Types.DATE, Types.TIME, Types.TIMESTAMP, Types.TIME_WITH_TIMEZONE, Types.TIMESTAMP_WITH_TIMEZONE); + private static final List SQL_TIME_TYPES = + List.of(Types.DATE, Types.TIME, Types.TIMESTAMP, Types.TIME_WITH_TIMEZONE, Types.TIMESTAMP_WITH_TIMEZONE); private static final DateTimeFormatter DATE = DateTimeFormatter.ofPattern("yyyy-MM-dd"); private static final DateTimeFormatter TIME = DateTimeFormatter.ofPattern("HH:mm:ss"); private static final DateTimeFormatter TIMESTAMP = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"); /** - * 将当前查询结果集 根据结果集元数据信息转换为Map + * Convert the current query result set into map according to the metadata information of the result set 
* - * @param resultSet JDBC 数据查询结果集 - * @param rsmd JDBC 结果集元数据 - * @return JDBC 数据封装结果 - * @throws SQLException 返回SQL异常 + * @param resultSet JDBC Data query result set + * @param rsmd JDBC ResultSet Metadata + * @return JDBC Data encapsulation results + * @throws SQLException Return SQL exception */ public Map putOneResultSetToMap(ResultSet resultSet, ResultSetMetaData rsmd) throws SQLException { Map values = new HashMap(); @@ -51,9 +78,9 @@ public class ResultSetHandler { IntStream.range(0, rsmd.getColumnCount()).forEach(idx -> { try { int columnIdx = idx + 1; - // 获取列及对应的列名 + // Get the column and its corresponding column name String columnLabel = rsmd.getColumnLabel(columnIdx); - // 根据列名从ResultSet结果集中获得对应的值 + // Get the corresponding value from the resultset result set according to the column name Object columnValue; final int columnType = rsmd.getColumnType(columnIdx); @@ -62,11 +89,11 @@ public class ResultSetHandler { } else { columnValue = resultSet.getObject(columnLabel); } - // 列名为key,列的值为value values.put(columnLabel, mapper.convertValue(columnValue, String.class)); } catch (SQLException ex) { - log.error("putOneResultSetToMap 根据结果集元数据信息转换数据结果集异常 {}", ex.getMessage()); + log.error("putOneResultSetToMap Convert data according to result set metadata information." 
+ + " Result set exception {}", ex.getMessage()); } }); return values; @@ -111,14 +138,16 @@ public class ResultSetHandler { private String getTimestampFormat(@NonNull ResultSet resultSet, int columnIdx) throws SQLException { String formatTime = StringUtils.EMPTY; - final Timestamp timestamp = resultSet.getTimestamp(columnIdx, Calendar.getInstance(TimeZone.getTimeZone("GMT+8"))); + final Timestamp timestamp = + resultSet.getTimestamp(columnIdx, Calendar.getInstance(TimeZone.getTimeZone("GMT+8"))); if (Objects.nonNull(timestamp)) { formatTime = TIMESTAMP.format(timestamp.toLocalDateTime()); } return formatTime; } + /** - * 结果集对象处理器 将结果集数据转换为JSON字符串 + * The result set object processor converts the result set data into JSON strings */ static class ObjectMapperWapper { @@ -129,34 +158,15 @@ public class ResultSetHandler { } static { - //创建ObjectMapper对象 MAPPER = new ObjectMapper(); - - //configure方法 配置一些需要的参数 - // 转换为格式化的json 显示出来的格式美化 MAPPER.enable(SerializationFeature.INDENT_OUTPUT); - - //序列化的时候序列对象的那些属性 - //JsonInclude.Include.NON_DEFAULT 属性为默认值不序列化 - //JsonInclude.Include.ALWAYS 所有属性 - //JsonInclude.Include.NON_EMPTY 属性为 空(“”) 或者为 NULL 都不序列化 - //JsonInclude.Include.NON_NULL 属性为NULL 不序列化 MAPPER.setSerializationInclusion(JsonInclude.Include.ALWAYS); - - //反序列化时,遇到未知属性会不会报错 - //true - 遇到没有的属性就报错 false - 没有的属性不会管,不会报错 MAPPER.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); - - //如果是空对象的时候,不抛异常 MAPPER.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false); - - //修改序列化后日期格式 MAPPER.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false); MAPPER.setDateFormat(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")); - //处理不同的时区偏移格式 MAPPER.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS); MAPPER.registerModule(new JavaTimeModule()); - } } } \ No newline at end of file diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/RowDataHashHandler.java 
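The `ResultSetHandler` hunk above formats `DATE`/`TIME`/`TIMESTAMP` columns with fixed `DateTimeFormatter` patterns (e.g. `"yyyy-MM-dd HH:mm:ss.SSS"` for timestamps read in the GMT+8 calendar). A minimal standalone sketch of that formatting step — not project code, the class and sample value below are made up for illustration:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

/**
 * Illustrative sketch of ResultSetHandler's timestamp formatting:
 * a java.sql.Timestamp is converted to LocalDateTime and rendered
 * with the same millisecond-precision pattern used in the patch.
 */
public class TimestampFormatDemo {
    // Same pattern as ResultSetHandler.TIMESTAMP in the diff
    static final DateTimeFormatter TIMESTAMP = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");

    static String format(LocalDateTime localDateTime) {
        return TIMESTAMP.format(localDateTime);
    }

    public static void main(String[] args) {
        // 123_000_000 nanoseconds == 123 milliseconds
        System.out.println(format(LocalDateTime.of(2022, 5, 30, 12, 0, 0, 123_000_000)));
    }
}
```

Note that `getTimestamp(columnIdx, Calendar.getInstance(TimeZone.getTimeZone("GMT+8")))` in the patch hard-codes the GMT+8 zone, so both endpoints must agree on that zone for the formatted strings (and therefore the row hashes) to compare equal.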
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/RowDataHashHandler.java index 106fa721518a04e8c9ced7e7a525fb8ae89e2395..d0b14ba3f5da4b3c3e0808cbce7f2881018e9c69 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/RowDataHashHandler.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/RowDataHashHandler.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.task; import org.opengauss.datachecker.common.entry.extract.RowDataHash; @@ -6,6 +21,7 @@ import org.opengauss.datachecker.extract.util.HashHandler; import org.opengauss.datachecker.extract.util.MetaDataUtil; import java.util.ArrayList; +import java.util.Collections; import java.util.List; import java.util.Map; @@ -15,29 +31,25 @@ import java.util.Map; * @since :11 */ public class RowDataHashHandler { - /** - * 根据表元数据信息{@code tableMetadata}中列顺序,对查询出的数据结果进行拼接,并对拼接后的结果行哈希计算 + * According to the column order in the table metadata information {@code tableMetadata}, + * the queried data results are spliced, and the hash calculation of the spliced result rows is performed * - * @param tableMetadata 表元数据信息 - * @param dataRowList 查询数据集合 - * @return 返回抽取数据的哈希计算结果 + * @param tableMetadata Table metadata information + * @param dataRowList Query data set + * @return Returns the hash calculation result of extracted data */ public List handlerQueryResult(TableMetadata 
tableMetadata, List> dataRowList) { - - List recordHashList = new ArrayList<>(); + List recordHashList = Collections.synchronizedList(new ArrayList<>()); HashHandler hashHandler = new HashHandler(); List columns = MetaDataUtil.getTableColumns(tableMetadata); List primarys = MetaDataUtil.getTablePrimaryColumns(tableMetadata); dataRowList.forEach(rowColumnsValueMap -> { long rowHash = hashHandler.xx3Hash(rowColumnsValueMap, columns); - String primaryValue = hashHandler.value(rowColumnsValueMap, primarys); long primaryHash = hashHandler.xx3Hash(rowColumnsValueMap, primarys); - RowDataHash hashData = new RowDataHash() - .setPrimaryKey(primaryValue) - .setPrimaryKeyHash(primaryHash) - .setRowHash(rowHash); + RowDataHash hashData = new RowDataHash(); + hashData.setPrimaryKey(primaryValue).setPrimaryKeyHash(primaryHash).setRowHash(rowHash); recordHashList.add(hashData); }); return recordHashList; diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/SelectSqlBulder.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/SelectSqlBulder.java index dd190f064cdaadc10b370cb2215406c442e21ecf..1feb5dc9d0aa8d62e8475c49fd14d3968009f886 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/SelectSqlBulder.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/SelectSqlBulder.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.task; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; @@ -8,34 +23,56 @@ import java.util.List; import java.util.Objects; import java.util.stream.Collectors; -import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.*; - +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.AND_CONDITION; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.COLUMN; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.DELIMITER; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.EQUAL_CONDITION; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.JOIN_ON; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.OFFSET; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.PRIMARY_KEY; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.QUERY_MULTIPLE_PRIMARY_KEY_OFF_SET; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.QUERY_OFF_SET; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.QUERY_OFF_SET_ZERO; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.SCHEMA; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.START; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.SUB_TABLE_ALIAS; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.TABLE_ALIAS; +import static org.opengauss.datachecker.extract.task.SelectSqlBulder.QuerySqlMapper.TABLE_NAME; /** + * Data extraction SQL builder + * * @author wang chao - * @description 数据抽取SQL构建器 * @date 2022/5/12 19:17 * @since 11 **/ public class SelectSqlBulder { private static final long 
OFF_SET_ZERO = 0L; /** - * 任务执行起始位置 + * Start position of task execution */ private final long start; /** - * 任务执行偏移量 + * Task execution offset */ private final long offset; /** - * 查询数据schema + * Query data schema */ private final String schema; /** - * 表元数据信息 + * Table metadata information */ private final TableMetadata tableMetadata; + /** + * Table fragment query SQL Statement Builder + * + * @param tableMetadata tableMetadata + * @param schema schema + * @param start start + * @param offset offset + */ public SelectSqlBulder(TableMetadata tableMetadata, String schema, long start, long offset) { this.tableMetadata = tableMetadata; this.start = start; @@ -43,8 +80,13 @@ public class SelectSqlBulder { this.schema = schema; } + /** + * Table fragment query SQL Statement Builder + * + * @return build sql + */ public String builder() { - Assert.isTrue(Objects.nonNull(tableMetadata), "表元数据信息异常,构建SQL失败"); + Assert.isTrue(Objects.nonNull(tableMetadata), "Abnormal table metadata information, failed to build SQL"); List columnsMetas = tableMetadata.getColumnsMetas(); if (offset == OFF_SET_ZERO) { return buildSelectSqlOffsetZero(columnsMetas, tableMetadata.getTableName()); @@ -54,125 +96,126 @@ public class SelectSqlBulder { } /** - * 根据元数据信息构建查询语句 SELECT * FROM test.test1 + * Construct query statements based on metadata information SELECT * FROM test.test1 * - * @param columnsMetas 列元数据信息 - * @param tableName 表名 + * @param columnsMetas Column metadata information + * @param tableName tableName * @return */ private String buildSelectSqlOffsetZero(List columnsMetas, String tableName) { - String columnNames = columnsMetas - .stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); + String columnNames = + columnsMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); return QUERY_OFF_SET_ZERO.replace(COLUMN, columnNames).replace(SCHEMA, schema).replace(TABLE_NAME, tableName); } /** - * 根据元数据和分片信息构建查询语句 - * 
SELECT * FROM test.test1 WHERE b_number IN (SELECT t.b_number FROM (SELECT b_number FROM test.test1 LIMIT 0,20) t); + *

+     * Construct query statements based on metadata and fragment information
+     * SELECT * FROM test.test1 WHERE b_number IN
+     * (SELECT t.b_number FROM (SELECT b_number FROM test.test1 LIMIT 0,20) t);
+     * 
* - * @param tableMetadata 表元数据信息 - * @param start 分片查询起始位置 - * @param offset 分片查询位移 - * @return 返回构建的Select语句 + * @param tableMetadata Table metadata information + * @param start Start position of fragment query + * @param offset Fragment query start position fragment query displacement + * @return Return the constructed select statement */ // private String buildSelectSqlOffset(TableMetadata tableMetadata, long start, long offset) { List columnsMetas = tableMetadata.getColumnsMetas(); List primaryMetas = tableMetadata.getPrimaryMetas(); - String columnNames; String primaryKey; String tableName = tableMetadata.getTableName(); if (primaryMetas.size() == 1) { - columnNames = columnsMetas - .stream() - .map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); - primaryKey = primaryMetas.stream().map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining()); - return QUERY_OFF_SET.replace(COLUMN, columnNames) - .replace(SCHEMA, schema) - .replace(TABLE_NAME, tableName) - .replace(PRIMARY_KEY, primaryKey) - .replace(START, String.valueOf(start)) - .replace(OFFSET, String.valueOf(offset)); + columnNames = + columnsMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); + primaryKey = primaryMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining()); + return QUERY_OFF_SET.replace(COLUMN, columnNames).replace(SCHEMA, schema).replace(TABLE_NAME, tableName) + .replace(PRIMARY_KEY, primaryKey).replace(START, String.valueOf(start)) + .replace(OFFSET, String.valueOf(offset)); } else { - columnNames = columnsMetas - .stream() - .map(ColumnsMetaData::getColumnName) - .map(counm -> TABLE_ALAIS.concat(counm)) - .collect(Collectors.joining(DELIMITER)); - primaryKey = primaryMetas.stream().map(ColumnsMetaData::getColumnName) - .collect(Collectors.joining(DELIMITER)); - String joinOn = primaryMetas.stream() - .map(ColumnsMetaData::getColumnName) - .map(coumn -> 
TABLE_ALAIS.concat(coumn).concat(EQUAL_CONDITION).concat(SUB_TABLE_ALAIS).concat(coumn)) - .collect(Collectors.joining(AND_CONDITION)); - return QUERY_MULTIPLE_PRIMARY_KEY_OFF_SET.replace(COLUMN, columnNames) - .replace(SCHEMA, schema) - .replace(TABLE_NAME, tableName) - .replace(PRIMARY_KEY, primaryKey) - .replace(JOIN_ON, joinOn) - .replace(START, String.valueOf(start)) - .replace(OFFSET, String.valueOf(offset)); + columnNames = + columnsMetas.stream().map(ColumnsMetaData::getColumnName).map(counm -> TABLE_ALIAS.concat(counm)) + .collect(Collectors.joining(DELIMITER)); + primaryKey = + primaryMetas.stream().map(ColumnsMetaData::getColumnName).collect(Collectors.joining(DELIMITER)); + String joinOn = primaryMetas.stream().map(ColumnsMetaData::getColumnName).map( + coumn -> TABLE_ALIAS.concat(coumn).concat(EQUAL_CONDITION).concat(SUB_TABLE_ALIAS).concat(coumn)) + .collect(Collectors.joining(AND_CONDITION)); + return QUERY_MULTIPLE_PRIMARY_KEY_OFF_SET.replace(COLUMN, columnNames).replace(SCHEMA, schema) + .replace(TABLE_NAME, tableName).replace(PRIMARY_KEY, primaryKey) + .replace(JOIN_ON, joinOn).replace(START, String.valueOf(start)) + .replace(OFFSET, String.valueOf(offset)); } } /** - * 查询SQL构建模版 + * Query SQL build template */ interface QuerySqlMapper { /** - * 表字段 + * Query SQL statement columnsList fragment */ String COLUMN = ":columnsList"; - /** - * 表名称 + * Query SQL statement tableName fragment */ String TABLE_NAME = ":tableName"; - /** - * 表主键 + * Query SQL statement primaryKey fragment */ String PRIMARY_KEY = ":primaryKey"; + /** + * Query SQL statement schema fragment + */ String SCHEMA = ":schema"; /** - * 分片查询起始位置 + * Query SQL statement start fragment: Start position of fragment query */ String START = ":start"; /** - * 分片查询偏移量 + * Query SQL statement offset fragment: Fragment query offset */ String OFFSET = ":offset"; + /** + * Query SQL statement joinOn fragment: Query SQL statement joinOn fragment + */ String JOIN_ON = ":joinOn"; /** - * 
无偏移量场景下,查询SQL语句 + * Query SQL statement fragment: Query SQL statements in the scenario without offset */ String QUERY_OFF_SET_ZERO = "SELECT :columnsList FROM :schema.:tableName"; /** - * 单一主键场景下,使用偏移量进行分片查询的SQL语句 + * Query SQL statement fragment: SQL statement for fragment query using offset in single primary key scenario */ - String QUERY_OFF_SET = "SELECT :columnsList FROM :schema.:tableName WHERE :primaryKey IN (SELECT t.:primaryKey FROM (SELECT :primaryKey FROM :schema.:tableName LIMIT :start,:offset) t)"; - String QUERY_MULTIPLE_PRIMARY_KEY_OFF_SET = "SELECT :columnsList FROM :schema.:tableName a RIGHT JOIN (SELECT :primaryKey FROM :schema.:tableName LIMIT :start,:offset) b ON :joinOn"; + String QUERY_OFF_SET = "SELECT :columnsList FROM :schema.:tableName WHERE :primaryKey IN " + + "(SELECT t.:primaryKey FROM (SELECT :primaryKey FROM :schema.:tableName order by :primaryKey " + + " LIMIT :start,:offset) t)"; /** - * SQL语句字段间隔符号 + * Query SQL statement fragment: SQL statement for fragment query using offset in multiple primary key scenario + */ + String QUERY_MULTIPLE_PRIMARY_KEY_OFF_SET = "SELECT :columnsList FROM :schema.:tableName a RIGHT JOIN " + + " (SELECT :primaryKey FROM :schema.:tableName order by :primaryKey LIMIT :start,:offset) b ON :joinOn"; + /** + * Query SQL statement fragment: SQL statement field spacing symbol */ String DELIMITER = ","; /** - * SQL语句 相等条件符号 + * Query SQL statement fragment: SQL statement equality condition symbol */ String EQUAL_CONDITION = "="; + /** + * Query SQL statement and fragment + */ String AND_CONDITION = " and "; /** - * 表别名 + * Query SQL statement table alias fragment: table alias */ - String TABLE_ALAIS = "a."; + String TABLE_ALIAS = "a."; /** - * 子查询结果别名 + * Query SQL statement sub table alias fragment: Sub query result alias */ - String SUB_TABLE_ALAIS = "b."; + String SUB_TABLE_ALIAS = "b."; } } diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/TaskJdbcDataCheckThread.java 
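The `QuerySqlMapper` templates above build fragment queries by plain placeholder substitution. A hedged sketch of how the single-primary-key `QUERY_OFF_SET` template expands (the schema, table, and column names are invented for the example; the real builder pulls them from `TableMetadata`):

```java
/**
 * Illustrative sketch (not project code) of SelectSqlBulder's
 * placeholder substitution for the single-primary-key offset query.
 */
public class QueryTemplateDemo {
    // Same template text as QuerySqlMapper.QUERY_OFF_SET in the patch
    static final String QUERY_OFF_SET =
        "SELECT :columnsList FROM :schema.:tableName WHERE :primaryKey IN "
            + "(SELECT t.:primaryKey FROM (SELECT :primaryKey FROM :schema.:tableName "
            + "order by :primaryKey LIMIT :start,:offset) t)";

    static String expand(String columns, String schema, String table, String primaryKey, long start, long offset) {
        // Same approach as the builder: successive String.replace on named placeholders
        return QUERY_OFF_SET.replace(":columnsList", columns)
            .replace(":schema", schema)
            .replace(":tableName", table)
            .replace(":primaryKey", primaryKey)
            .replace(":start", String.valueOf(start))
            .replace(":offset", String.valueOf(offset));
    }

    public static void main(String[] args) {
        System.out.println(expand("id,name", "test", "test1", "id", 0L, 20L));
    }
}
```

The `order by :primaryKey` added to both offset templates in this patch is what makes `LIMIT :start,:offset` paging deterministic; without it the database may return rows in an arbitrary order and adjacent fragments could overlap or miss rows.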
b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/TaskJdbcDataCheckThread.java new file mode 100644 index 0000000000000000000000000000000000000000..43ed07ba9eb94b8688ea1fb5b29e5fe14c40f591 --- /dev/null +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/task/TaskJdbcDataCheckThread.java @@ -0,0 +1,87 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.task; + +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.enums.Endpoint; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.util.FileUtils; +import org.opengauss.datachecker.common.util.JsonObjectUtil; + +import java.io.File; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * TaskJdbcDataCheckThread + * + * @author :wangchao + * @date :Created in 2022/7/27 + * @since :11 + */ +@Slf4j +public class TaskJdbcDataCheckThread extends Thread { + private final List dataRowList; + private final String taskName; + private final Endpoint endpoint; + private final List replateList = new ArrayList<>(); + + /** + * TaskJdbcDataCheckThread + * + * @param dataRowList dataRowList + * @param taskName taskName + * @param endpoint endpoint {@value Endpoint#API_DESCRIPTION} + */ + public TaskJdbcDataCheckThread(List dataRowList, String taskName, Endpoint endpoint) { + super.setName("DATA_" + 
taskName); + this.dataRowList = dataRowList; + this.taskName = taskName; + this.endpoint = endpoint; + } + + /** + * If this thread was constructed using a separate + * {@code Runnable} run object, then that + * {@code Runnable} object's {@code run} method is called; + * otherwise, this method does nothing and returns. + *

+ * Subclasses of {@code Thread} should override this method. + * + * @see #start() + */ + @Override + public void run() { + String path = "." + File.separator + "data"; + String fileName = path + File.separator + taskName + "_" + endpoint.getDescription() + ".json"; + FileUtils.createDirectories(path); + FileUtils.deleteFile(fileName); + FileUtils.writeAppendFile(fileName, JsonObjectUtil.format(dataRowList)); + Map dataMap = new HashMap<>(); + dataRowList.forEach(row -> { + if (dataMap.containsKey(row.getPrimaryKey())) { + replateList.add(row.getPrimaryKey()); + } else { + dataMap.put(row.getPrimaryKey(), row); + } + }); + log.debug("dataRowList:{}", dataRowList.size()); + log.debug("dataMap:{}", dataMap.size()); + log.debug("replateList:{}", replateList); + } +} diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/HashHandler.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/HashHandler.java index 185b9584628ce5245328acb2ca63b40b43ed4419..0b5f0b6587fe03d67f9465d9cdde82b57ec15d9f 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/HashHandler.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/HashHandler.java @@ -1,6 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
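The `run()` method of `TaskJdbcDataCheckThread` above detects duplicate primary keys by inserting rows into a map and collecting keys that are already present. The core scan can be sketched independently of `RowDataHash` (the class and method names here are illustrative, not the project's API):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Illustrative sketch of the duplicate-primary-key scan performed in
 * TaskJdbcDataCheckThread.run(): keys seen before are collected as duplicates.
 */
public class DuplicateKeyScanDemo {
    static List<String> findDuplicates(List<String> primaryKeys) {
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String key : primaryKeys) {
            // Set.add returns false when the key was already present
            if (!seen.add(key)) {
                duplicates.add(key);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicates(List.of("1", "2", "1", "3", "2")));
    }
}
```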
+ */ + package org.opengauss.datachecker.extract.util; -import org.opengauss.datachecker.common.util.HashUtil; +import org.opengauss.datachecker.common.util.LongHashFunctionWrapper; import org.springframework.util.CollectionUtils; import java.util.ArrayList; @@ -11,40 +26,51 @@ import java.util.stream.Collectors; import static org.opengauss.datachecker.extract.constants.ExtConstants.PRIMARY_DELIMITER; /** - * 哈希处理器,对查询结果进行哈希计算。 + * The hash processor performs hash calculation on the query results. + * * @author :wangchao * @date :Created in 2022/6/15 * @since :11 */ -public class HashHandler { +public class HashHandler { + private final LongHashFunctionWrapper hashFunctionWrapper = new LongHashFunctionWrapper(); + /** - * 根据columns 集合中字段列表集合,在map中查找字段对应值,并对查找到的值进行拼接。 + * According to the field list set in the columns set, + * find the corresponding value of the field in the map, and splice the found value. * - * @param columnsValueMap 字段对应查询数据 - * @param columns 字段名称列表 - * @return 当前Row对应的哈希计算结果 + * @param columnsValueMap Field corresponding query data + * @param columns List of field names + * @return Hash calculation result corresponding to the current row */ public long xx3Hash(Map columnsValueMap, List columns) { if (CollectionUtils.isEmpty(columns)) { return 0L; } StringBuffer sb = new StringBuffer(); - columns.forEach(colunm -> { - if (columnsValueMap.containsKey(colunm)) { - sb.append(columnsValueMap.get(colunm)); + columns.forEach(column -> { + if (columnsValueMap.containsKey(column)) { + sb.append(columnsValueMap.get(column)); } }); - return HashUtil.hashChars(sb.toString()); + return hashFunctionWrapper.hashChars(sb.toString()); } + /** + * column hash result + * + * @param columnsValueMap columns value + * @param columns column names + * @return column hash result + */ public String value(Map columnsValueMap, List columns) { if (CollectionUtils.isEmpty(columns)) { return ""; } List values = new ArrayList<>(); - columns.forEach(colunm -> { - if 
(columnsValueMap.containsKey(colunm)) { - values.add(columnsValueMap.get(colunm)); + columns.forEach(column -> { + if (columnsValueMap.containsKey(column)) { + values.add(columnsValueMap.get(column)); } }); return values.stream().map(String::valueOf).collect(Collectors.joining(PRIMARY_DELIMITER)); diff --git a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/MetaDataUtil.java b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/MetaDataUtil.java index 37b1b6ad52bd6828fbb794a633e7d300c43b12fc..1b044808ec294ac9ebfa2fcec7ff878e8777f4b3 100644 --- a/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/MetaDataUtil.java +++ b/datachecker-extract/src/main/java/org/opengauss/datachecker/extract/util/MetaDataUtil.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
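`HashHandler.xx3Hash` above splices the row's column values in the order given by the column list, then hashes the spliced string. The splicing step can be sketched on its own (the real patch hashes the result with xxHash3 via `LongHashFunctionWrapper`; this sketch stops at the spliced string, and the class name is made up):

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of HashHandler's row-splicing step: column values are
 * concatenated in metadata column order before hashing, so both endpoints
 * produce identical input strings for identical rows.
 */
public class RowSpliceDemo {
    static String splice(Map<String, String> columnsValueMap, List<String> columns) {
        StringBuilder sb = new StringBuilder();
        for (String column : columns) {
            // Skip columns absent from this row, as the patch does
            if (columnsValueMap.containsKey(column)) {
                sb.append(columnsValueMap.get(column));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(splice(Map.of("id", "1", "name", "alice"), List.of("id", "name")));
    }
}
```

Because the concatenation has no delimiter between values, the column order from `TableMetadata` is load-bearing: splicing the same values in a different order yields a different hash.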
+ */ + package org.opengauss.datachecker.extract.util; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; diff --git a/datachecker-extract/src/main/resources/application-sink.yml b/datachecker-extract/src/main/resources/application-sink.yml index faf7ad258b95d2b7d4a7f9193e54244fdd9f92a5..259bb8b895afafa105d708304fbbf247d1c92830 100644 --- a/datachecker-extract/src/main/resources/application-sink.yml +++ b/datachecker-extract/src/main/resources/application-sink.yml @@ -1,5 +1,6 @@ server: port: 7001 + shutdown: graceful debug: false @@ -9,26 +10,28 @@ logging: spring: application: name: DATACHECKER-EXTRACT-${spring.extract.endpoint} - + lifecycle: + timeout-per-shutdown-phase: 5 extract: schema: jack # 宿端数据实例 databaseType: OG # 宿端数据库类型 OG opengauss endpoint: SINK # 宿端端点类型 debezium-enable: false #是否开启增量debezium配置 默认不开启 + sync-extract: true datasource: druid: dataSourceOne: - driver-class-name: org.postgresql.Driver - url: jdbc:postgresql://xxxxx:xxx/xxxx?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC + driver-class-name: org.opengauss.Driver + url: jdbc:opengauss://xxxxx:xxx/xxxx?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC username: xxxxx password: xxxxxxxx type: com.alibaba.druid.pool.DruidDataSource #Spring Boot 默认是不注入这些属性值的,需要自己绑定 #druid 数据源专有配置 - initialSize: 20 + initialSize: 50 minIdle: 5 - maxActive: 50 + maxActive: 200 maxWait: 60000 timeBetweenEvictionRunsMillis: 60000 minEvictableIdleTimeMillis: 300000 diff --git a/datachecker-extract/src/main/resources/application-source.yml b/datachecker-extract/src/main/resources/application-source.yml index b45d37de53f1563667f1ac8926f0d4c445a48d31..9172b757cecff0566abb6a1495af50471fead935 100644 --- a/datachecker-extract/src/main/resources/application-source.yml +++ b/datachecker-extract/src/main/resources/application-source.yml @@ -1,5 +1,6 @@ server: port: 7002 + shutdown: graceful debug: false logging: @@ -8,18 +9,21 @@ logging: spring: 
application: name: DATACHECKER-EXTRACT-${spring.extract.endpoint} + lifecycle: + timeout-per-shutdown-phase: 5 extract: schema: test # 源端数据实例 databaseType: MS # 源端数据库类型 MS mysql endpoint: SOURCE # 源端端点类型 - debezium-enable: true #是否开启增量debezium配置 默认不开启 + debezium-enable: false #是否开启增量debezium配置 默认不开启 debezium-topic: # debezium监听表增量数据,使用单一topic进行增量数据管理 debezium-groupId: debezium-extract-group # d debezium增量迁移topic ,groupId消费Group设置 debezium-topic-partitions: 1 # debezium监听topic 分区数量配置 debezium-tables: # debezium-tables配置debezium监听的表名称列表; 该配置只在源端服务配置并生效 debezium-time-period: 1 # debezium增量迁移校验 时间周期 24*60 单位分钟 debezium-num-period: 1000 #debezium增量迁移校验 统计增量变更记录数量阀值,默认值1000 阀值应大于100 + sync-extract: false datasource: druid: @@ -33,7 +37,7 @@ spring: #druid 数据源专有配置 initialSize: 20 minIdle: 5 - maxActive: 50 + maxActive: 200 maxWait: 60000 timeBetweenEvictionRunsMillis: 60000 minEvictableIdleTimeMillis: 300000 diff --git a/datachecker-extract/src/main/resources/application.yml b/datachecker-extract/src/main/resources/application.yml index 77e7345c75a980fdc743b4d9ba39d760d4bb011d..eead9fcd4043366b13083de55f53f270d6952937 100644 --- a/datachecker-extract/src/main/resources/application.yml +++ b/datachecker-extract/src/main/resources/application.yml @@ -1,24 +1,23 @@ debug: false spring: - profiles: - active: sink check: - server-uri: http://127.0.0.1:7000 # 数据校验服务地址 - - + server-uri: http://{ip}:{port} # 数据校验服务地址 + lifecycle: + timeout-per-shutdown-phase: 5 kafka: properties: #这个参数指定producer在发送批量消息前等待的时间,当设置此参数后,即便没有达到批量消息的指定大小(batch-size),到达时间后生产者也会发送批量消息到broker。默认情况下,生产者的发送消息线程只要空闲了就会发送消息,即便只有一条消息。设置这个参数后,发送线程会等待一定的时间,这样可以批量发送消息增加吞吐量,但同时也会增加延迟。 linger.ms: 10 #默认值:0毫秒,当消息发送比较频繁时,增加一些延迟可增加吞吐量和性能。 #这个参数指定producer在一个TCP connection可同时发送多少条消息到broker并且等待broker响应,设置此参数较高的值可以提高吞吐量,但同时也会增加内存消耗。另外,如果设置过高反而会降低吞吐量,因为批量消息效率降低。设置为1,可以保证发送到broker的顺序和调用send方法顺序一致,即便出现失败重试的情况也是如此。 #注意:当前消息符合at-least-once,自kafka1.0.0以后,为保证消息有序以及exactly once,这个配置可适当调大为5。 - 
max.in.flight.requests.per.connection: 1 # default 5; set to 1, the producer sends one message on a connection and waits until the broker acknowledges it before sending the next, so ordering is preserved. + max.in.flight.requests.per.connection: 5 # default 5; set to 1, the producer sends one message on a connection and waits until the broker acknowledges it before sending the next, so ordering is preserved. producer: # producer settings - retries: 0 # retry count - acks: 1 # ack level: how many partition replicas must confirm the write before the producer receives an ack (0, 1, or all/-1) - batch-size: 16384 # batch size - buffer-memory: 33554432 # producer buffer size + retries: 1 # retry count + acks: all # ack level: how many partition replicas must confirm the write before the producer receives an ack (0, 1, or all/-1) + batch-size: 1638400 # batch size + buffer-memory: 335544320 # producer buffer size + key-serializer: org.apache.kafka.common.serialization.StringSerializer # value-serializer: com.itheima.demo.config.MySerializer value-serializer: org.apache.kafka.common.serialization.StringSerializer @@ -33,10 +32,12 @@ spring: # none: when every partition of the topic has a committed offset, consume from after that offset; if any partition lacks a committed offset, throw an exception auto-offset-reset: earliest key-deserializer: org.apache.kafka.common.serialization.StringDeserializer - # value-deserializer: com.itheima.demo.config.MyDeserializer value-deserializer: org.apache.kafka.common.serialization.StringDeserializer max-poll-records: 10000 +feign: + okhttp: + enabled: true # springdoc configuration springdoc: @@ -50,4 +51,4 @@ springdoc: show-actuator: true group-configs: - group: stores - paths-to-match: /extract/** \ No newline at end of file + paths-to-match: /extract/** diff --git a/datachecker-extract/src/main/resources/log4j2-sink.xml b/datachecker-extract/src/main/resources/log4j2-sink.xml index c746d9dca53518be50a8a8fe1f46e524dc9b1881..3a1f7cba04d448e25995f39fded4b0d8a084201a 100644 --- a/datachecker-extract/src/main/resources/log4j2-sink.xml +++ b/datachecker-extract/src/main/resources/log4j2-sink.xml @@ -1,73 +1,57 @@ logs/sink @@ -79,63 +63,21 @@ \ No newline at end of file diff --git a/datachecker-extract/src/main/resources/log4j2-source.xml b/datachecker-extract/src/main/resources/log4j2-source.xml index fcf52fcff7243731260b548e911dbedb7e4d113b..b265fb5a58ef40da83f296b842bf1340291cf248 100644 --- a/datachecker-extract/src/main/resources/log4j2-source.xml +++ b/datachecker-extract/src/main/resources/log4j2-source.xml @@ -1,75 +1,58 @@ logs/source @@ -81,63 +64,21 @@ \ No newline at end of file diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/ExtractApplicationTests.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/ExtractApplicationTests.java deleted file mode 100644 index f3141a236a6691e09e96adaa9027bf935b3e72c3..0000000000000000000000000000000000000000 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/ExtractApplicationTests.java +++ /dev/null @@ -1,13 +0,0 @@ -package org.opengauss.datachecker.extract; - -import org.junit.jupiter.api.Test; -import org.springframework.boot.test.context.SpringBootTest; - -@SpringBootTest -class ExtractApplicationTests { - - @Test - void contextLoads() { - } - -} diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/MetaDataCacheTest.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/MetaDataCacheTest.java index 1c07f53a557617a4ef8a353e559184ca98fbaffd..a8c8374498de59c0b2b4db3b8434938c92412c6b 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/MetaDataCacheTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/MetaDataCacheTest.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2.
+ * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.cache; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.extract.service.MetaDataService; import org.springframework.beans.factory.annotation.Autowired; @@ -7,27 +23,41 @@ import org.springframework.boot.test.context.SpringBootTest; import javax.annotation.PostConstruct; - +/** + * MetaDataCacheTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @SpringBootTest public class MetaDataCacheTest { - @Autowired private MetaDataService metadataService; + /** + * init + */ @PostConstruct public void init() { MetaDataCache.initCache(); MetaDataCache.putMap(metadataService.queryMetaDataOfSchema()); } + /** + * getTest + */ @Test public void getTest() { - System.out.println(MetaDataCache.get("client")); + log.info("" + MetaDataCache.get("client")); } + /** + * getAllKeysTest + */ @Test public void getAllKeysTest() { - System.out.println(MetaDataCache.getAllKeys()); + log.info("" + MetaDataCache.getAllKeys()); } - } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/TestByteXOR.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/TestByteXOR.java index fadc29bd2b9a6b5f80c33095900f9c800deeda4d..11225e78dc8169a80935785d225f9ab9957630cb 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/TestByteXOR.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/cache/TestByteXOR.java @@ -1,5 
+1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.cache; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import java.util.HashMap; @@ -7,18 +23,20 @@ import java.util.Map; import java.util.stream.IntStream; /** + * TestByteXOR + * * @author :wangchao * @date :Created in 2022/5/14 * @since :11 */ +@Slf4j public class TestByteXOR { - @Test public void testXOR() { long old = 0L; for (int i = 0; i < 63; i++) { old = byteXor(old, i); - System.out.println("0 ," + i + " =" + old + " Long.toBinaryString()" + Long.toBinaryString(old)); + log.info("0 ," + i + " =" + old + " Long.toBinaryString()" + Long.toBinaryString(old)); } } @@ -29,33 +47,24 @@ public class TestByteXOR { byte byteVal = 0; for (int i = 0; i < 63; i++) { old = byteXor(old, i); - System.out.println("0 ," + i + " =" + old + " Long.toBinaryString()" + Long.toBinaryString(old)); + log.info("0 ," + i + " =" + old + " Long.toBinaryString()" + Long.toBinaryString(old)); } } @Test public void testIntStream() { IntStream.range(1, 10).forEach(idx -> { - System.out.println("range " + idx); + log.info("range " + idx); }); IntStream.rangeClosed(1, 10).forEach(idx -> { - System.out.println("rangeClosed " + idx); + log.info("rangeClosed " + idx); }); - IntStream.rangeClosed(1, 10) - .filter(i -> i == 6) - .count() - ; + IntStream.rangeClosed(1, 10).filter(i -> i == 6).count(); } - /** - * A long has 64 bits; after this computation the low 63 bits record the data state. - * - * @param value - * @param index - * @return - */ - long byteXor(long value, int index) { + private long byteXor(long value, int index) { + // A long has 64 bits; after this computation the low 63 bits record the data state. return (value | (1L << index)); } } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/config/DataSourceTest.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/config/DataSourceTest.java index ea6d815e2013eb8728737e8110afdbf4a74f3ff0..48995ea6897f2e20d4427a0c681c624253e1d454 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/config/DataSourceTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/config/DataSourceTest.java @@ -1,73 +1,64 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
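The byteXor helper exercised by TestByteXOR sets one bit of a long per index, so a single long can track the state of up to 63 records. A small standalone sketch of the same bit-set idea; despite the name, the operation is a bitwise OR, not an XOR:

```java
public class BitFlags {
    // Sets the bit at the given index (0..62); name follows the test above,
    // though the operation is a bitwise OR.
    static long byteXor(long value, int index) {
        return value | (1L << index);
    }

    // True if the bit at the given index is set.
    static boolean isSet(long value, int index) {
        return (value & (1L << index)) != 0;
    }
}
```

Setting all indices 0 through 62 yields 2^63 - 1, i.e. Long.MAX_VALUE, which is why the loops in the test stop at 63.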
+ */ + package org.opengauss.datachecker.extract.config; import com.alibaba.druid.pool.DruidDataSource; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.extract.ExtractApplication; import org.springframework.beans.factory.annotation.Autowired; -import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.context.ApplicationContext; import org.springframework.jdbc.core.JdbcTemplate; -import javax.sql.DataSource; import java.sql.SQLException; -import java.util.List; -import java.util.Map; -@SpringBootTest (classes = ExtractApplication.class) +/** + * DataSourceTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j +@SpringBootTest(classes = ExtractApplication.class) public class DataSourceTest { - @Autowired private ApplicationContext applicationContext; - @Test public void contextLoadTest() throws SQLException { - DruidDataSource dataSourceOne = (DruidDataSource)applicationContext.getBean("dataSourceOne"); -// DruidDataSource dataSourceTwo = (DruidDataSource)applicationContext.getBean("dataSourceTwo"); -// DruidDataSource dataSourceThree = (DruidDataSource)applicationContext.getBean("dataSourceThree"); - - System.out.println("dataSourceOne " + dataSourceOne.getClass()); - System.out.println("dataSourceOne " + dataSourceOne.getConnection()); - System.out.println("druid dataSourceOne max connections: " + dataSourceOne.getMaxActive()); - System.out.println("druid dataSourceOne initial connections: " + dataSourceOne.getInitialSize()); - - System.out.println(" =========================================== "); -// -// System.out.println("dataSourceTwo " + dataSourceTwo.getClass()); -// System.out.println("dataSourceTwo " + dataSourceTwo.getConnection()); -// System.out.println("druid dataSourceTwo max connections: " + dataSourceTwo.getMaxActive()); -// System.out.println("druid dataSourceTwo initial connections: " + dataSourceTwo.getInitialSize()); -// -// System.out.println(" =========================================== "); -// -// System.out.println("dataSourceThree " + dataSourceThree.getClass()); -// System.out.println("dataSourceThree " + dataSourceThree.getConnection()); -// System.out.println("druid dataSourceThree max connections: " + dataSourceThree.getMaxActive()); -// System.out.println("druid dataSourceThree initial connections: " + dataSourceThree.getInitialSize()); - + DruidDataSource dataSourceOne = null; + final Object dataSourceObject = applicationContext.getBean("dataSourceOne"); + if (dataSourceObject instanceof DruidDataSource) { + dataSourceOne = (DruidDataSource) dataSourceObject; + } + log.info("dataSourceOne " + dataSourceOne.getClass()); + log.info("dataSourceOne " + dataSourceOne.getConnection()); + log.info("druid dataSourceOne getMaxActive " + dataSourceOne.getMaxActive()); + log.info("druid dataSourceOne getInitialSize " + dataSourceOne.getInitialSize()); dataSourceOne.close(); -// dataSourceTwo.close(); -// dataSourceThree.close(); - } @Test public void JdbcTemplateTest() { - - JdbcTemplate JdbcTemplateOne = (JdbcTemplate)applicationContext.getBean("JdbcTemplateOne"); -// JdbcTemplate JdbcTemplateTwo = (JdbcTemplate)applicationContext.getBean("JdbcTemplateTwo"); -// JdbcTemplate dataSourceThree = (JdbcTemplate)applicationContext.getBean("JdbcTemplateThree"); -// -// List> listTwo = JdbcTemplateTwo.queryForList("select * from client"); -// for (Map map : listTwo) { -// System.out.println(map); -// } -// -// System.out.println("======================================================================================"); -// List> listThree = dataSourceThree.queryForList("select * from client"); -// for (Map map : listThree) { -// System.out.println(map); -// } + JdbcTemplate jdbcTemplateOne = null; + final Object jdbcTemplateObject = applicationContext.getBean("JdbcTemplateOne"); + if (jdbcTemplateObject instanceof JdbcTemplate) { + jdbcTemplateOne = (JdbcTemplate) 
jdbcTemplateObject; + } } } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImplTests.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImplTests.java index 16315976c0b9d3be89bfa98abbccf6022366e400..73878a64aa847ff6296eb7a8d836300a12c0d856 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImplTests.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/DataBaseMetaDataDAOImplTests.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.dao; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; import org.opengauss.datachecker.common.entry.extract.TableMetadata; @@ -9,9 +25,16 @@ import org.springframework.boot.test.context.SpringBootTest; import java.sql.SQLException; import java.util.List; +/** + * DataBaseMetaDataDAOImplTests + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @SpringBootTest class DataBaseMetaDataDAOImplTests { - @Autowired private MetaDataDAO mysqlMetadataDAO; @@ -19,7 +42,7 @@ class DataBaseMetaDataDAOImplTests { void queryTableMetadata() throws SQLException { List tableMetadata = mysqlMetadataDAO.queryTableMetadata(); for (TableMetadata metadata : tableMetadata) { - System.out.println(metadata.toString()); + log.info(metadata.toString()); } } @@ -29,9 +52,8 @@ class DataBaseMetaDataDAOImplTests { for (TableMetadata metadata : tableMetadata) { List columnsMetadata = mysqlMetadataDAO.queryColumnMetadata(metadata.getTableName()); for (ColumnsMetaData colMetadata : columnsMetadata) { - System.out.println(colMetadata.toString()); + log.info(colMetadata.toString()); } } - } } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/enums/EnumTest.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/enums/EnumTest.java index 28abcf585383020e253746e69922a5d39b71e220..c4db4aa8a7ac1b60fc58f9644af8bd1d419a1e18 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/enums/EnumTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/dao/enums/EnumTest.java @@ -1,27 +1,45 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.dao.enums; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.common.entry.enums.DataSourceType; import org.opengauss.datachecker.common.util.EnumUtil; import org.springframework.boot.test.context.SpringBootTest; - +/** + * EnumTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @SpringBootTest public class EnumTest { - @Test - void testEnum() { + void testEnum() { DataSourceType type = DataSourceType.Sink; - - System.out.println(type); - System.out.println(type.equals(DataSourceType.valueOf("Sink"))); - System.out.println(EnumUtil.valueOfIgnoreCase(DataSourceType.class,"Sinkl")); - - System.out.println(EnumUtil.valueOf(DataSourceType.class,"Sinkl")); - System.out.println(EnumUtil.valueOf(DataSourceType.class,"Sink")); - - System.out.println(EnumUtil.valueOf(DataSourceType.class,"sink")); - - System.out.println(EnumUtil.valueOfIgnoreCase(DataSourceType.class,"sink")); + log.info("" + type); + log.info("" + type.equals(DataSourceType.valueOf("Sink"))); + log.info("" + EnumUtil.valueOfIgnoreCase(DataSourceType.class, "Sink")); + log.info("" + EnumUtil.valueOf(DataSourceType.class, "Sink")); + log.info("" + EnumUtil.valueOf(DataSourceType.class, "Sink")); + log.info("" + EnumUtil.valueOf(DataSourceType.class, "sink")); + log.info("" + EnumUtil.valueOfIgnoreCase(DataSourceType.class, "sink")); } } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/service/MetaDataServiceTest.java 
b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/service/MetaDataServiceTest.java index 0b44a34d54a6fe851442e6ff2d13f63f3a53f180..38ad138e6d96e68cec9f25fcb26324bcdc287780 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/service/MetaDataServiceTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/service/MetaDataServiceTest.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.service; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.common.entry.extract.TableMetadata; import org.springframework.beans.factory.annotation.Autowired; @@ -8,9 +24,16 @@ import org.springframework.boot.test.context.SpringBootTest; import java.sql.SQLException; import java.util.Map; +/** + * MetaDataServiceTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @SpringBootTest public class MetaDataServiceTest { - @Autowired private MetaDataService metaDataService; @@ -18,7 +41,7 @@ public class MetaDataServiceTest { void queryMetadataOfSourceDBSchema() throws SQLException { Map stringTableMetadataMap = metaDataService.queryMetaDataOfSchema(); for (TableMetadata metadata : stringTableMetadataMap.values()) { - System.out.println(metadata.toString()); + log.info(metadata.toString()); } } } diff --git 
a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilderTest.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilderTest.java index 80575a85540638216333f6ff0be47d50467556fa..b1b78e710f195f886020c754d0659b56ba963be8 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilderTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/ExtractTaskBuilderTest.java @@ -1,5 +1,21 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.task; +import lombok.extern.slf4j.Slf4j; import org.junit.jupiter.api.Test; import org.opengauss.datachecker.common.entry.extract.ExtractTask; import org.opengauss.datachecker.extract.cache.MetaDataCache; @@ -11,26 +27,39 @@ import javax.annotation.PostConstruct; import java.util.List; import java.util.Set; +/** + * ExtractTaskBuilderTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @SpringBootTest public class ExtractTaskBuilderTest { - @Autowired private ExtractTaskBuilder extractTaskBuilder; @Autowired private MetaDataService metadataService; + /** + * init + */ @PostConstruct public void init() { MetaDataCache.initCache(); MetaDataCache.putMap(metadataService.queryMetaDataOfSchema()); } + /** + * builderTest + */ @Test public void builderTest() { Set tables = MetaDataCache.getAllKeys(); List extractTasks = extractTaskBuilder.builder(tables); for (ExtractTask task : extractTasks) { - System.out.println(task); + log.info("" + task); } } } diff --git a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/SelectSqlBulderTest.java b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/SelectSqlBulderTest.java index d4b58937b659176793197ea480acc8c536c05443..a8f5d1183a9c5b5fb3dce3e3beaffcba801f2708 100644 --- a/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/SelectSqlBulderTest.java +++ b/datachecker-extract/src/test/java/org/opengauss/datachecker/extract/task/SelectSqlBulderTest.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.task; import org.junit.jupiter.api.BeforeEach; @@ -9,15 +24,20 @@ import org.opengauss.datachecker.common.entry.enums.ColumnKey; import org.opengauss.datachecker.common.entry.extract.ColumnsMetaData; import org.opengauss.datachecker.common.entry.extract.TableMetadata; -import java.util.Collections; import java.util.List; import static org.assertj.core.api.Assertions.assertThat; import static org.mockito.Mockito.when; +/** + * SelectSqlBulderTest + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ @ExtendWith(MockitoExtension.class) class SelectSqlBulderTest { - @Mock private TableMetadata mockTableMetadata; @@ -25,12 +45,11 @@ class SelectSqlBulderTest { @BeforeEach void setUp() { - selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test",0L, 0L); + selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test", 0L, 0L); } @Test void testBuilder() { - // Configure TableMetadata.getColumnsMetas(...). 
final ColumnsMetaData columnsMeta1 = new ColumnsMetaData(); columnsMeta1.setTableName("tableName"); @@ -44,13 +63,12 @@ class SelectSqlBulderTest { // Run the test final String result = selectSqlBulderUnderTest.builder(); // Verify the results - assertThat(result).isEqualTo("SELECT columnName1 FROM tableName"); + assertThat(result).isEqualTo("SELECT columnName1 FROM test.tableName"); } - @Test void testBuilderOffSet() { - selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test",0L, 1000L); + selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test", 0L, 1000L); // Setup // Configure TableMetadata.getPrimaryMetas(...). final ColumnsMetaData columnsMetaPri = new ColumnsMetaData(); @@ -75,12 +93,14 @@ class SelectSqlBulderTest { // Run the test final String result = selectSqlBulderUnderTest.builder(); // Verify the results - assertThat(result).isEqualTo("SELECT columnName1,columnName2 FROM tableName WHERE columnName1 IN (SELECT t.columnName1 FROM (SELECT columnName1 FROM tableName LIMIT 0,1000) t)"); + assertThat(result).isEqualTo("SELECT columnName1,columnName2 FROM test.tableName WHERE columnName1 IN " + + "(SELECT t.columnName1 FROM (SELECT columnName1 FROM test.tableName order by columnName1" + + " LIMIT 0,1000) t)"); } @Test void testBuilderMuliPrimaryLeyOffSet() { - selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test",0L, 1000L); + selectSqlBulderUnderTest = new SelectSqlBulder(mockTableMetadata, "test", 0L, 1000L); // Setup // Configure TableMetadata.getPrimaryMetas(...). 
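The expected SQL in testBuilderOffSet pages through the table with a deferred join: the LIMIT runs over the primary-key column only, and the outer query then fetches full rows for those keys. A simplified, hypothetical builder for the single-primary-key case (class and method names here are illustrative, not the real SelectSqlBulder API):

```java
public class PageSqlSketch {
    // Builds a deferred-join paging query for a table with one primary-key column:
    // the inner subquery pages over the key alone, the outer query fetches the rows.
    static String build(String schema, String table, String pk,
                        String columns, long offset, long limit) {
        String qualified = schema + "." + table;
        return "SELECT " + columns + " FROM " + qualified + " WHERE " + pk + " IN "
            + "(SELECT t." + pk + " FROM (SELECT " + pk + " FROM " + qualified
            + " order by " + pk + " LIMIT " + offset + "," + limit + ") t)";
    }
}
```

Paging over the key column keeps the LIMIT scan on the (indexed) primary key, and the `order by` makes successive pages deterministic, which plain `LIMIT offset,n` over an unordered scan does not guarantee.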
final ColumnsMetaData columnsMetaPri = new ColumnsMetaData(); @@ -107,6 +127,8 @@ class SelectSqlBulderTest { // Run the test final String result = selectSqlBulderUnderTest.builder(); // Verify the results - assertThat(result).isEqualTo("SELECT a.columnName1,a.columnName2 FROM tableName a RIGHT JOIN (SELECT columnName1,columnName2 FROM tableName LIMIT 0,1000) b ON a.columnName1=b.columnName1 and a.columnName2=b.columnName2"); + assertThat(result).isEqualTo("SELECT a.columnName1,a.columnName2 FROM test.tableName a RIGHT JOIN " + + " (SELECT columnName1,columnName2 FROM test.tableName order by columnName1,columnName2 LIMIT 0,1000) b" + + " ON a.columnName1=b.columnName1 and a.columnName2=b.columnName2"); } } diff --git a/datachecker-mock-data/pom.xml b/datachecker-mock-data/pom.xml index 397139268a9530928a16c281943df09b77f35dd8..c764d42411ea46a17432e27f23ac5839b62b3602 100644 --- a/datachecker-mock-data/pom.xml +++ b/datachecker-mock-data/pom.xml @@ -1,4 +1,19 @@ + + 4.0.0 @@ -44,16 +59,36 @@ mysql mysql-connector-java - provided + + + org.opengauss + opengauss-jdbc com.alibaba druid + + net.openhft + zero-allocation-hashing + org.springframework.boot spring-boot-starter-validation + + org.apache.kafka + kafka-streams + + + org.springframework.kafka + spring-kafka + + + org.springframework.boot + spring-boot-devtools + true + @@ -67,10 +102,6 @@ org.projectlombok lombok - - mysql - mysql-connector-java - diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/MockDataApplication.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/MockDataApplication.java index e952af509432996aa9fea90cd394689e578d480d..6b25b53375fb12d2ef0a084deacc721197fd68eb 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/MockDataApplication.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/MockDataApplication.java @@ -1,13 +1,35 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies 
Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; +/** + * MockDataApplication + * + * @author wang chao + * @date 2022/5/8 19:27 + * @since 11 + **/ @SpringBootApplication public class MockDataApplication { - public static void main(String[] args) { - SpringApplication.run(MockDataApplication.class, args); - } + public static void main(String[] args) { + SpringApplication.run(MockDataApplication.class, args); + } } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/TableRowCount.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/TableRowCount.java deleted file mode 100644 index dc4f57b902aee48a9eb4974093e3943fb57f4fc3..0000000000000000000000000000000000000000 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/TableRowCount.java +++ /dev/null @@ -1,17 +0,0 @@ -package org.opengauss.datachecker.extract; - -import lombok.AllArgsConstructor; -import lombok.Data; -import lombok.experimental.Accessors; - -/** - * @author :wangchao - * @date :Created in 2022/6/6 - * @since :11 - */ -@Data -@AllArgsConstructor -public class TableRowCount { - private String tableName; - private long count; -} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java 
b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java index 7bc84d6770750b30865f198a71cad74567fe2c9d..75591c3596cda8c9c9502f47c0d525f353d0362d 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/AsyncConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import org.springframework.context.annotation.Bean; @@ -19,17 +34,11 @@ public class AsyncConfig { @Bean("threadPoolTaskExecutor") public ThreadPoolTaskExecutor doAsyncExecutor() { ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor(); - // core thread count, the machine's core count; threads created when the pool is initialized executor.setCorePoolSize(16); - // maximum thread count; threads beyond the core size are created only after the buffer queue is full executor.setMaxPoolSize(32); - // buffer queue used to hold pending tasks executor.setQueueCapacity(4000); - // allowed thread idle time executor.setKeepAliveSeconds(60); - // thread name prefix executor.setThreadNamePrefix("extract-"); - // rejection policy applied once the buffer queue is full executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy()); executor.initialize(); return executor; diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java index 
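The AsyncConfig above sizes a Spring ThreadPoolTaskExecutor (core 16, max 32, queue capacity 4000, 60 s keep-alive, DiscardPolicy). The same pool can be expressed with the JDK executor directly; a sketch for illustration, not part of the source:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExtractPool {
    // JDK equivalent of the Spring ThreadPoolTaskExecutor configured in AsyncConfig.
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
            16,                                      // core pool size
            32,                                      // max size, used only once the queue is full
            60L, TimeUnit.SECONDS,                   // keep-alive for threads above the core size
            new LinkedBlockingQueue<>(4000),         // bounded work queue
            new ThreadPoolExecutor.DiscardPolicy()); // silently drop tasks when saturated
    }
}
```

DiscardPolicy silently drops work under saturation, which mock-data generation can tolerate; a CallerRunsPolicy would instead apply back-pressure to the submitter.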
41e0d6e0dc0897e113e0430a13c96a572f68108e..71180afd9a6821b0f3f0c3c31180bbad40736a1d 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/DruidDataSourceConfig.java @@ -1,46 +1,81 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import com.alibaba.druid.pool.DruidDataSource; -import com.alibaba.druid.support.http.StatViewServlet; -import com.alibaba.druid.support.http.WebStatFilter; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.boot.context.properties.ConfigurationProperties; -import org.springframework.boot.web.servlet.FilterRegistrationBean; -import org.springframework.boot.web.servlet.ServletRegistrationBean; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Primary; import org.springframework.jdbc.core.JdbcTemplate; import javax.sql.DataSource; -import java.util.Arrays; -import java.util.HashMap; -import java.util.Map; +/** + * DruidDataSourceConfig + * + * @author :wangchao + * @date :Created in 2022/5/23 + * @since :11 + */ @Configuration public class DruidDataSourceConfig { - /** - *

-     *  Register the custom Druid data source in the container instead of letting Spring Boot create one automatically
-     *  Bind the druid datasource properties from the global config file to com.alibaba.druid.pool.DruidDataSource so they take effect
-     *  @ConfigurationProperties(prefix = "spring.datasource"): injects the properties prefixed with
-     *  spring.datasource from the global config file into the same-named fields of com.alibaba.druid.pool.DruidDataSource
-     *  
+ * build mysql DruidDataSource * - * @return + * @return druidDataSourceMysql */ @Primary - @Bean("dataSourceOne") - @ConfigurationProperties(prefix = "spring.datasource.druid") - public DataSource druidDataSourceOne() { + @Bean("dataSourceMysql") + @ConfigurationProperties(prefix = "spring.datasource.druid.mysql") + public DataSource druidDataSourceMysql() { return new DruidDataSource(); } + /** + * build mysql JdbcTemplate + * + * @param dataSourceMysql Mysql dataSource + * @return JdbcTemplate + */ + @Bean("jdbcTemplateMysql") + public JdbcTemplate jdbcTemplateMysql(@Qualifier("dataSourceMysql") DataSource dataSourceMysql) { + return new JdbcTemplate(dataSourceMysql); + } - @Bean("jdbcTemplateOne") - public JdbcTemplate jdbcTemplateOne(@Qualifier("dataSourceOne") DataSource dataSourceOne) { - return new JdbcTemplate(dataSourceOne); + /** + * build OpenGauss DruidDataSource + * + * @return DruidDataSource + */ + @Bean("dataSourceOpenGauss") + @ConfigurationProperties(prefix = "spring.datasource.druid.opengauss") + public DataSource druidDataSourceOpenGauss() { + return new DruidDataSource(); } + /** + * build OpenGauss JdbcTemplate + * + * @param dataSourceOpenGauss dataSourceOpenGauss + * @return JdbcTemplate + */ + @Bean("jdbcTemplateOpenGauss") + public JdbcTemplate jdbcTemplateOpenGauss(@Qualifier("dataSourceOpenGauss") DataSource dataSourceOpenGauss) { + return new JdbcTemplate(dataSourceOpenGauss); + } } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java index e5e08697221274bedd3829bcba892b664f05e26e..2ad163a19670e7fd2bbc6efdaf9e231ef8a2a8b8 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/ExtractConfig.java @@ -1,3 +1,18 @@ +/* + * Copyright (c) 2022-2022 Huawei 
Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import lombok.Getter; @@ -8,18 +23,7 @@ import org.springframework.context.annotation.Configuration; @Configuration @Getter public class ExtractConfig { - /** - * schema的初始化值由配置文件加载 - */ private final String schema = "test"; - - /** - * schema的初始化值由配置文件加载 - */ private final DataSourceType dataSourceType = DataSourceType.Source; - - /** - * schema的初始化值由配置文件加载 - */ private final DataBaseType databaseType = DataBaseType.MS; } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java index 91a1e8b6937d1bd0605a11bc21aa353493f7058a..04595805175f53e95615babe63a127492ed5ed15 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/config/SpringDocConfig.java @@ -1,10 +1,25 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.config; import io.swagger.v3.oas.models.OpenAPI; import io.swagger.v3.oas.models.info.Info; -import io.swagger.v3.oas.models.parameters.HeaderParameter; +import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.reflect.FieldUtils; -import org.springdoc.core.customizers.OpenApiCustomiser; +import org.opengauss.datachecker.common.exception.CommonException; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.util.ReflectionUtils; @@ -16,45 +31,45 @@ import java.lang.reflect.Field; import java.util.List; /** - * swagger2配置 - * http://localhost:8080/swagger-ui/index.html + * swagger2 configuration * * @author :wangchao * @date :Created in 2022/5/17 * @since :11 */ - -/** - * 2021/8/13 - */ - +@Slf4j @Configuration public class SpringDocConfig implements WebMvcConfigurer { + /** + * mallTinyOpenAPI + * + * @return OpenAPI + */ @Bean public OpenAPI mallTinyOpenAPI() { - return new OpenAPI() - .info(new Info() - .title("数据打桩服务") - .description("数据打桩服务 自动化执行测试表创建以及数据插入 API") - .version("v1.0.0")); + return new OpenAPI().info(new Info().title("Data Piling Service").description( + "Data Piling Service Automation Execution Test Table Creation and Data Insertion API").version("v1.0.0")); } /** - * 通用拦截器排除设置,所有拦截器都会自动加springdoc-opapi相关的资源排除信息,不用在应用程序自身拦截器定义的地方去添加,算是良心解耦实现。 + * registry Interceptors + * + * @param registry registry Interceptors */ @SuppressWarnings("unchecked") @Override public void addInterceptors(InterceptorRegistry registry) { try { Field registrationsField =
FieldUtils.getField(InterceptorRegistry.class, "registrations", true); - List<InterceptorRegistration> registrations = (List<InterceptorRegistration>) ReflectionUtils.getField(registrationsField, registry); + List<InterceptorRegistration> registrations = + (List<InterceptorRegistration>) ReflectionUtils.getField(registrationsField, registry); if (registrations != null) { for (InterceptorRegistration interceptorRegistration : registrations) { interceptorRegistration.excludePathPatterns("/springdoc**/**"); } } - } catch (Exception e) { - e.printStackTrace(); + } catch (CommonException e) { + log.error("swagger2 configuration addInterceptors error"); } } } \ No newline at end of file diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/DataExtractController.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/DataExtractController.java new file mode 100644 index 0000000000000000000000000000000000000000..6a4679958d6a554176b2d230eed9fcb92ea67239 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/DataExtractController.java @@ -0,0 +1,109 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + +package org.opengauss.datachecker.extract.controller; + +import org.opengauss.datachecker.extract.service.ExtractFileDataService; +import org.opengauss.datachecker.extract.service.ExtractKafkaDataService; +import org.opengauss.datachecker.extract.service.ExtractTableDataAnalyseService; +import org.opengauss.datachecker.extract.service.ExtractTableDataService; +import org.opengauss.datachecker.extract.service.KafkaAnalyseService; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.web.bind.annotation.GetMapping; +import org.springframework.web.bind.annotation.RequestParam; +import org.springframework.web.bind.annotation.RestController; + +import java.util.List; + +/** + * DataExtractController + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@RestController +public class DataExtractController { + @Autowired + private ExtractKafkaDataService extractKafkaDataService; + @Autowired + private ExtractFileDataService extractFileDataService; + @Autowired + private ExtractTableDataService extractTableDataService; + @Autowired + private ExtractTableDataAnalyseService extractTableDataAnalyseService; + @Autowired + private KafkaAnalyseService kafkaAnalyseService; + + /** + * checkKafkaTopicData + * + * @param topicSource topicSource + * @param topicSink topicSink + * @return check kafka topic name + */ + @GetMapping("/check/kafka/topic/data") + public List<String> checkKafkaTopicData(@RequestParam("topicSource") String topicSource, + @RequestParam("topicSink") String topicSink) { + return extractKafkaDataService.checkKafkaTopicData(topicSource, topicSink); + } + + /** + * checkFileData + * + * @param fileSource fileSource + * @param fileSink fileSink + */ + @GetMapping("/check/file/data") + public void checkFileData(@RequestParam("fileSource") String fileSource, + @RequestParam("fileSink") String fileSink) { + extractFileDataService.checkFileData(fileSource, fileSink); + } + + /** + * checkTableData + * + *
@param tableName tableName + * @return result + */ + @GetMapping("/check/table/data") + public String checkTableData(@RequestParam("tableName") String tableName) { + return "OK : diffCnt=" + extractTableDataService.checkTable(tableName); + } + + /** + * checkTableDataAnalyse + * + * @param tableName tableName + * @return result + */ + @GetMapping("/check/table/data/analyse") + public String checkTableDataAnalyse(@RequestParam("tableName") String tableName) { + extractTableDataAnalyseService.checkTable(tableName); + return "OK"; + } + + /** + * checkTableDataKafkaAnalyse + * + * @param tableName tableName + * @return result + */ + @GetMapping("/check/table/data/kafka/analyse") + public String checkTableDataKafkaAnalyse(@RequestParam("tableName") String tableName) { + kafkaAnalyseService.checkKafkaAnalyse(tableName); + return "OK"; + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/ExtractMockController.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/ExtractMockController.java index 422261a7ae43982ff89e6762032f79f2443e6adb..37cc5c21a5fe78082dfcc960820ecba7c4eaa688 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/ExtractMockController.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/controller/ExtractMockController.java @@ -1,10 +1,24 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details.
+ */ + package org.opengauss.datachecker.extract.controller; -import io.swagger.v3.oas.annotations.Parameter; -import org.hibernate.validator.constraints.Range; -import org.opengauss.datachecker.extract.TableRowCount; +import lombok.extern.slf4j.Slf4j; import org.opengauss.datachecker.extract.service.ExtractMockDataService; import org.opengauss.datachecker.extract.service.ExtractMockTableService; +import org.opengauss.datachecker.extract.vo.TableStatisticsInfo; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; @@ -13,7 +27,14 @@ import org.springframework.web.bind.annotation.RestController; import java.util.List; - +/** + * ExtractMockController + * + * @author :wangchao + * @date :Created in 2022/5/14 + * @since :11 + */ +@Slf4j @RestController public class ExtractMockController { @Autowired @@ -22,44 +43,47 @@ public class ExtractMockController { private ExtractMockTableService extractMockTableService; /** - * 根据名称创建数据库表 表字段目前为固定字段 + * createTable * - * @param tableName 创建表名 - * @return - * @throws Exception + * @param tableName tableName + * @return result + * @throws Exception exception */ @PostMapping("/mock/createTable") public String createTable(@RequestParam("tableName") String tableName) throws Exception { return extractMockTableService.createTable(tableName); } - @GetMapping("/mock/query/all/table/count") - public List<TableRowCount> getAllTableCount() { - return extractMockTableService.getAllTableCount(); + /** + * queryTableStatisticsInfo + * + * @return TableStatisticsInfo + */ + @GetMapping("/mock/statistics/table/info") + public List<TableStatisticsInfo> queryTableStatisticsInfo() { + return extractMockTableService.getAllTableInfo(); } /** - * 向指定表名称,采用多线程方式批量插入指定数据量的Mock数据 + * batchMockData * - * @param tableName 表名 - * @param totalCount 插入数据总量 - * @param threadCount 线程数 最大线程总数不能超过2000 ,超过2000可能会导致数据丢失 - * @return + * @param tableName tableName + * @param
totalCount totalCount + * @param threadCount threadCount + * @param shouldCreateTable shouldCreateTable + * @return result */ @PostMapping("/batch/mock/data") - public String batchMockData(@Parameter(name = "tableName", description = "待插入数据表名") @RequestParam("tableName") String tableName, - @Parameter(name = "totalCount", description = "待插入数据总量") @RequestParam("totalCount") long totalCount, - @Parameter(name = "threadCount", description = "多线程插入,设置线程总数") - @Range(min = 1, max = 30, message = "设置的线程总数必须在[1-30]之间") - @RequestParam("threadCount") int threadCount, - @Parameter(name = "createTable", description = "是否创建表") @RequestParam("createTable") boolean createTable) { + public String batchMockData(@RequestParam("tableName") String tableName, + @RequestParam("totalCount") long totalCount, @RequestParam("threadCount") int threadCount, + @RequestParam("shouldCreateTable") boolean shouldCreateTable) { try { - if (createTable) { + if (shouldCreateTable) { extractMockTableService.createTable(tableName); } extractMockDataService.batchMockData(tableName, totalCount, threadCount); } catch (Exception throwables) { - System.err.println(throwables.getMessage()); + log.error(throwables.getMessage()); } return "OK"; } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractFileDataService.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractFileDataService.java new file mode 100644 index 0000000000000000000000000000000000000000..b7e7f274615350e0ac7b82193089855450c51ad9 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractFileDataService.java @@ -0,0 +1,102 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. 
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.service; + +import com.alibaba.fastjson.JSONObject; +import lombok.SneakyThrows; +import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.util.FileUtils; +import org.opengauss.datachecker.common.util.JsonObjectUtil; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.stereotype.Service; + +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.Function; +import java.util.stream.Collectors; + +/** + * ExtractFileDataService + * + * @author :wangchao + * @date :Created in 2022/7/26 + * @since :11 + */ +@Slf4j +@Service +public class ExtractFileDataService { + @Value("${spring.mock.data-path}") + private String path; + + /** + * checkFileData + * + * @param fileSource fileSource + * @param fileSink fileSink + */ + public void checkFileData(String fileSource, String fileSink) { + checkPathFileData(Path.of(path + fileSource), Path.of(path + fileSink)); + } + + /** + * checkPathFileData + * + * @param fileSource fileSource + * @param fileSink fileSink + */ + public void checkPathFileData(Path fileSource, Path fileSink) { + final List<RowDataHash> sourceDataList = readAndParseFile(fileSource); + log.info("read and parse file {} record size={}", fileSource.getFileName(), sourceDataList.size()); + final List<RowDataHash> sinkDataList = readAndParseFile(fileSink); + log.info("read and parse file {} record size={}", fileSink.getFileName(), sinkDataList.size()); + final Map<String, RowDataHash>
sourceMap = sourceDataList.parallelStream().collect( + Collectors.toConcurrentMap(RowDataHash::getPrimaryKey, Function.identity())); + log.info("transform sourceDataList to map {} ", sourceMap.size()); + final Map<String, RowDataHash> sinkMap = sinkDataList.parallelStream().collect( + Collectors.toConcurrentMap(RowDataHash::getPrimaryKey, Function.identity())); + log.info("transform sinkDataList to map {} ", sinkMap.size()); + Map<RowDataHash, RowDataHash> sourceDiffMap = new HashMap<>(); + sourceMap.forEach((key, value) -> { + if (sinkMap.containsKey(key)) { + RowDataHash sinkValue = sinkMap.get(key); + if (value.getPrimaryKeyHash() != sinkValue.getPrimaryKeyHash()) { + sourceDiffMap.put(value, sinkValue); + } + } + }); + log.info("compare sourceMap and sinkMap result {} ", sourceDiffMap.size()); + String sourceReduceFile = path + "sourceReduce.json"; + String sinkReduceFile = path + "sinkReduce.json"; + FileUtils.deleteFile(sourceReduceFile); + FileUtils.deleteFile(sinkReduceFile); + FileUtils.writeAppendFile(sourceReduceFile, JsonObjectUtil.format(sourceDiffMap)); + log.info("data export file name {} , {} ", sourceReduceFile, sinkReduceFile); + } + + @SneakyThrows + private List<RowDataHash> readAndParseFile(Path pathFileName) { + final List<String> stringList = Files.readAllLines(pathFileName); + StringBuffer buffer = new StringBuffer(); + stringList.forEach(line -> { + buffer.append(line); + }); + return JSONObject.parseArray(buffer.toString(), RowDataHash.class); + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractKafkaDataService.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractKafkaDataService.java new file mode 100644 index 0000000000000000000000000000000000000000..354d54f9954c43a536f2ad7c15176f7cb39070b1 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractKafkaDataService.java @@ -0,0 +1,134 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd.
+ * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.service; + +import com.alibaba.fastjson.JSON; +import lombok.extern.slf4j.Slf4j; +import org.apache.kafka.clients.consumer.ConsumerConfig; +import org.apache.kafka.clients.consumer.ConsumerRecords; +import org.apache.kafka.clients.consumer.KafkaConsumer; +import org.apache.kafka.common.serialization.StringDeserializer; +import org.opengauss.datachecker.common.entry.check.Pair; +import org.opengauss.datachecker.common.entry.extract.RowDataHash; +import org.opengauss.datachecker.common.util.IdGenerator; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.kafka.KafkaProperties; +import org.springframework.stereotype.Service; + +import java.time.Duration; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Properties; + +/** + * ExtractKafkaDataService + * + * @author :wangchao + * @date :Created in 2022/7/26 + * @since :11 + */ +@Slf4j +@Service +public class ExtractKafkaDataService { + @Autowired + private KafkaProperties properties; + + /** + * checkKafkaTopicData + * + * @param topicSource SOURCE(1, "SourceEndpoint") + * @param topicSink SINK(2, "SinkEndpoint") + * @return topic + */ + public List<String> checkKafkaTopicData(String topicSource, String topicSink) { + Map<String, RowDataHash> source = new HashMap<>(); + List<Pair<RowDataHash, RowDataHash>> sourceRepeatList = new ArrayList<>(); + int sourceCount = getTopicRecords(topicSource, source, sourceRepeatList);
+ Map<String, RowDataHash> sink = new HashMap<>(); + List<Pair<RowDataHash, RowDataHash>> sinkRepeatList = new ArrayList<>(); + int sinkCount = getTopicRecords(topicSink, sink, sinkRepeatList); + if (sourceCount == sinkCount && sinkCount > 0) { + List<String> primaryList = new ArrayList<>(source.keySet()); + primaryList.forEach(primary -> { + if (source.containsKey(primary) && sink.containsKey(primary)) { + RowDataHash sourceRow = source.get(primary); + RowDataHash sinkRow = sink.get(primary); + if (sourceRow.getPrimaryKeyHash() == sinkRow.getPrimaryKeyHash()) { + source.remove(sourceRow.getPrimaryKey()); + sink.remove(sinkRow.getPrimaryKey()); + } + } + }); + log.info("source={}", sourceCount); + log.info("sink={}", sinkCount); + log.info("sourceRepeatList={}", sourceRepeatList); + log.info("sinkRepeatList={}", sinkRepeatList); + log.info("source={}", source); + log.info("sink={}", sink); + return List.of("source=" + source.size(), "sink=" + sink.size()); + } else { + return List.of("The source and destination query data are inconsistent,source=" + sourceCount + " sink=" + + sinkCount); + } + } + + private int getTopicRecords(String topic, Map<String, RowDataHash> dataMap, + List<Pair<RowDataHash, RowDataHash>> repeatList) { + KafkaConsumer<String, String> kafkaConsumer = buildKafkaConsumer(IdGenerator.nextId36()); + kafkaConsumer.subscribe(List.of(topic)); + int consumerRecordCount = consumerAllRecords(kafkaConsumer, dataMap, repeatList); + kafkaConsumer.close(); + return consumerRecordCount; + } + + private int consumerAllRecords(KafkaConsumer<String, String> kafkaConsumer, Map<String, RowDataHash> dataMap, + List<Pair<RowDataHash, RowDataHash>> repeatList) { + int consumerRecordCount = 0; + int consumerRecords = getConsumerRecords(kafkaConsumer, dataMap, repeatList); + consumerRecordCount = consumerRecordCount + consumerRecords; + while (consumerRecords > 0) { + consumerRecords = getConsumerRecords(kafkaConsumer, dataMap, repeatList); + consumerRecordCount = consumerRecordCount + consumerRecords; + } + return consumerRecordCount; + } + + private int getConsumerRecords(KafkaConsumer<String, String> kafkaConsumer, Map<String, RowDataHash> dataMap, + List<Pair<RowDataHash, RowDataHash>> repeatList) { + ConsumerRecords<String, String>
consumerRecords = kafkaConsumer.poll(Duration.ofMillis(200)); + consumerRecords.forEach(record -> { + RowDataHash rowDataHash = JSON.parseObject(record.value(), RowDataHash.class); + if (dataMap.containsKey(rowDataHash.getPrimaryKey())) { + repeatList.add(Pair.of(dataMap.get(rowDataHash.getPrimaryKey()), rowDataHash)); + } else { + dataMap.put(rowDataHash.getPrimaryKey(), rowDataHash); + } + }); + return consumerRecords.count(); + } + + private KafkaConsumer<String, String> buildKafkaConsumer(String groupId) { + Properties props = new Properties(); + props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, String.join(",", properties.getBootstrapServers())); + props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId); + props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); + props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); + props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); + return new KafkaConsumer<>(props); + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockDataService.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockDataService.java index a0995b9d9f734cf947af4125069ad016ffb6f893..96e82306f7f6663cc15f8f0233d7c5e960831514 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockDataService.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockDataService.java @@ -1,6 +1,22 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2.
+ * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.service; import lombok.extern.slf4j.Slf4j; +import org.opengauss.datachecker.common.exception.CommonException; import org.opengauss.datachecker.extract.service.thread.ExtractMockDataThread; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; @@ -8,46 +24,59 @@ import org.springframework.stereotype.Service; import org.springframework.util.Assert; import javax.sql.DataSource; +import java.util.ArrayList; +import java.util.List; import java.util.concurrent.Future; +/** + * ExtractMockDataService + * + * @author :wangchao + * @date :Created in 2022/7/26 + * @since :11 + */ @Slf4j @Service public class ExtractMockDataService { - /** - * 限制最大线程总数 - */ private static final int MAX_THREAD_COUNT = 100; @Autowired private ThreadPoolTaskExecutor threadPoolTaskExecutor; - @Autowired private DataSource dataSourceOne; /** - * 向指定表名称,采用多线程方式批量插入指定数据量的Mock数据 + * batchMockData * - * @param tableName 待插入数据的表名称 - * @param totalCount 插入记录总数 - * @param threadCount 插入记录线程总数 + * @param tableName tableName + * @param totalCount totalCount + * @param threadCount threadCount */ public void batchMockData(String tableName, long totalCount, int threadCount) { try { - Assert.isTrue(threadCount < MAX_THREAD_COUNT, "设置的线程总数不能超过最大线程总数"); + Assert.isTrue(threadCount < MAX_THREAD_COUNT, + "The total number of threads set cannot exceed the maximum total number of threads"); long batchCount = totalCount / threadCount; - - log.info("plan batch insert thread, tableName = {}, threadCount = {} ,totalCount = {} , batchCount = 
{}", tableName, threadCount, totalCount, batchCount); + log.info("plan batch insert thread, tableName = {}, threadCount = {} ,totalCount = {} , batchCount = {}", + tableName, threadCount, totalCount, batchCount); + List<Future<?>> mockFutureList = new ArrayList<>(); for (int i = 0; i < threadCount; i++) { if (i == (threadCount - 1)) { batchCount = batchCount + totalCount % threadCount; } - threadPoolTaskExecutor.submit(new ExtractMockDataThread(dataSourceOne, tableName, batchCount, i + 1)); - + mockFutureList.add(threadPoolTaskExecutor + .submit(new ExtractMockDataThread(dataSourceOne, tableName, batchCount, i + 1))); } - } catch (Exception throwables) { - log.error("=============", throwables.getMessage()); + mockFutureList.forEach(future -> { + while (true) { + if (future.isDone() && !future.isCancelled()) { + break; + } + } + }); + } catch (CommonException ex) { + log.error("batchMockData error: {}", ex.getMessage()); + } - } } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockTableService.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockTableService.java index 8768ac994a79b48e596ecaef9a75f7a2aa43926a..9533662faf6d2aa61beaf5327a96ddddffa84108 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockTableService.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractMockTableService.java @@ -1,8 +1,22 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
+ * See the Mulan PSL v2 for more details. + */ + package org.opengauss.datachecker.extract.service; import lombok.extern.slf4j.Slf4j; -import org.opengauss.datachecker.common.util.ThreadUtil; -import org.opengauss.datachecker.extract.TableRowCount; +import org.opengauss.datachecker.extract.vo.TableStatisticsInfo; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.jdbc.core.JdbcTemplate; import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; @@ -13,6 +27,8 @@ import java.util.ArrayList; import java.util.List; /** + * ExtractMockTableService + * * @author wang chao * @date 2022/5/8 19:27 * @since 11 @@ -20,84 +36,74 @@ import java.util.List; @Service @Slf4j public class ExtractMockTableService { - @Autowired - protected JdbcTemplate jdbcTemplateOne; + private static final String SQL_QUERY_TABLE_STATISTICS = "SELECT TABLE_NAME tableName,SUM(table_rows) count, " + + "concat(round(sum(data_length/1024/1024),2),'MB') as dataLength " + + " from information_schema.tables where table_schema='test' GROUP BY table_name"; + private static final String SQL_QUERY_TABLE_STATISTICS_SUN = "SELECT 'ALL' tableName ,SUM(table_rows) count, " + + "concat(round(sum(data_length/1024/1024),2),'MB') as dataLength" + + " from information_schema.tables where table_schema='test' GROUP BY table_schema"; + + @Autowired + private JdbcTemplate jdbcTemplateMysql; @Autowired private ThreadPoolTaskExecutor threadPoolTaskExecutor; /** - * 自动创建指定表 + * createTable * - * @param tableName 创建表 - * @return - * @throws Exception 目前对于表名重复未做处理,表名重复这里直接抛出异常信息 + * @param tableName tableName + * @return create result */ - public String createTable(String tableName) throws Exception { - jdbcTemplateOne.execute(MockMapper.CREATE.replace(":TABLENAME", tableName)); + public String createTable(String tableName) { + jdbcTemplateMysql.execute(MockMapper.CREATE.replace(":TABLENAME", tableName)); return tableName; } - private static final String SQL_QUERY_TABLE = 
"SELECT table_name from information_schema.TABLES WHERE table_schema='test' "; - - public List<TableRowCount> getAllTableCount() { - long start = System.currentTimeMillis(); - List<TableRowCount> tableRowCountList = new ArrayList<>(); - final List<String> tableNameList = jdbcTemplateOne.queryForList(SQL_QUERY_TABLE, String.class); + /** + * getAllTableInfo + * + * @return TableStatisticsInfo + */ + public List<TableStatisticsInfo> getAllTableInfo() { + final List<TableStatisticsInfo> tableNameList = queryTableStatisticsInfos(SQL_QUERY_TABLE_STATISTICS); if (CollectionUtils.isEmpty(tableNameList)) { return new ArrayList<>(); } - String sqlQueryTableRowCount = "select count(1) rowCount from test.%s"; - tableNameList.stream().forEach(tableName -> { - threadPoolTaskExecutor.submit(() -> { - final Long rowCount = jdbcTemplateOne.queryForObject(String.format(sqlQueryTableRowCount, tableName), Long.class); - tableRowCountList.add(new TableRowCount(tableName, rowCount)); - }); - }); - - while (tableRowCountList.size() != tableNameList.size()) { - ThreadUtil.sleep(10); - } - - final long sum = tableRowCountList.stream().mapToLong(TableRowCount::getCount).sum(); - tableRowCountList.add(new TableRowCount("all_table_total", sum)); - tableRowCountList.sort((o1, o2) -> (int) (o1.getCount() - o2.getCount())); - - long end = System.currentTimeMillis(); + tableNameList.addAll(queryTableStatisticsInfos(SQL_QUERY_TABLE_STATISTICS_SUN)); + tableNameList.sort((o1, o2) -> (int) (o1.getCount() - o2.getCount())); + return tableNameList; + } - System.out.println(" query cost time =" + (end - start) + " sec"); - return tableRowCountList; + private List<TableStatisticsInfo> queryTableStatisticsInfos(String querySql) { + return jdbcTemplateMysql.query(querySql, + (rs, rowNum) -> new TableStatisticsInfo(rs.getString("tableName"), rs.getLong("count"), + rs.getString("dataLength"))); } - /** - * 构建创建表SQL语句 - */ interface MockMapper { - String CREATE = "CREATE TABLE :TABLENAME (\n" + - "\t b_number VARCHAR(30) NOT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_type VARCHAR(20) NULL DEFAULT NULL
COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_user VARCHAR(20) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_int INT(10) NULL DEFAULT NULL,\n" + - "\t b_bigint BIGINT(19) NULL DEFAULT '0',\n" + - "\t b_text TEXT NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_longtext LONGTEXT NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_date DATE NULL DEFAULT NULL,\n" + - "\t b_datetime DATETIME NULL DEFAULT NULL,\n" + - "\t b_timestamp TIMESTAMP NULL DEFAULT NULL,\n" + - "\t b_attr1 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr2 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr3 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr4 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr5 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr6 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr7 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr8 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr9 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\t b_attr10 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci',\n" + - "\tPRIMARY KEY (`b_number`) USING BTREE\n" + - ")\n" + - " COLLATE='utf8mb4_0900_ai_ci'\n" + - " ENGINE=InnoDB ;\n"; + /** + * create table sql + */ + String CREATE = "CREATE TABLE :TABLENAME ( b_number VARCHAR(30) NOT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_type VARCHAR(20) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_user VARCHAR(20) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_int INT(10) NULL DEFAULT NULL, b_bigint BIGINT(19) NULL DEFAULT '0'," + + " b_text TEXT NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_longtext LONGTEXT NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_date DATE NULL DEFAULT NULL, b_datetime DATETIME NULL DEFAULT 
NULL," + + " b_timestamp TIMESTAMP NULL DEFAULT NULL," + + " b_attr1 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr2 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr3 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr4 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr5 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr6 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr7 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr8 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr9 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " b_attr10 VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_0900_ai_ci'," + + " PRIMARY KEY (`b_number`) USING BTREE ) COLLATE='utf8mb4_0900_ai_ci'" + " ENGINE=InnoDB ;"; } } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataAnalyseService.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataAnalyseService.java new file mode 100644 index 0000000000000000000000000000000000000000..854e72c40d8b8b26e1c9a21cc295d7f73f9e1888 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataAnalyseService.java @@ -0,0 +1,131 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.extract.service; + +import lombok.extern.slf4j.Slf4j; +import org.apache.kafka.clients.producer.KafkaProducer; +import org.apache.kafka.clients.producer.ProducerRecord; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.boot.autoconfigure.kafka.KafkaProperties; +import org.springframework.jdbc.core.JdbcTemplate; +import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * ExtractTableDataAnalyseService + * + * @author :wangchao + * @date :Created in 2022/7/26 + * @since :11 + */ +@Slf4j +@Service +public class ExtractTableDataAnalyseService { + @Autowired + private KafkaProperties properties; + @Autowired + private JdbcTemplate jdbcTemplateMysql; + @Autowired + private JdbcTemplate jdbcTemplateOpenGauss; + @Autowired + private ThreadPoolTaskExecutor threadPoolTaskExecutor; + + /** + * checkTable + * + * @param tableName tableName + */ + public void checkTable(String tableName) { + String topicName = "quickstart"; + String topicName2 = "quickstart2"; + threadPoolTaskExecutor.submit( + new DataAnalyseRunnable(tableName, "mysql", TableSqlWapper.SELECT_PRI_M, jdbcTemplateMysql, topicName)); + threadPoolTaskExecutor.submit( + new DataAnalyseRunnable(tableName, "openGauss", TableSqlWapper.SELECT_PRI_O, jdbcTemplateOpenGauss, + topicName2)); + } + + class DataAnalyseRunnable extends KafkaService implements Runnable { + private String tableName; + private String database; + private String execSql; + private String topicName; + private JdbcTemplate jdbcTemplate; + + /** + * DataAnalyseRunnable + * + * @param tableName tableName + * @param database database + * @param execSql execSql + * @param jdbcTemplate jdbcTemplate + * @param topicName topicName + */ + public DataAnalyseRunnable(String 
tableName, String database, String execSql, JdbcTemplate jdbcTemplate, + String topicName) { + super(properties); + this.tableName = tableName; + this.database = database; + this.execSql = execSql; + this.jdbcTemplate = jdbcTemplate; + this.topicName = topicName; + } + + @Override + public void run() { + QueryDataWapper queryDataWapper = new QueryDataWapper(); + final List<String> primaryList = queryDataWapper.queryPrimaryValues(jdbcTemplate, execSql, tableName); + log.info("query {} : table={}, row-size={} ", database, tableName, primaryList.size()); + HashHandler hashHandler = new HashHandler(); + List<Long> hashList = new ArrayList<>(); + primaryList.forEach(primaryKey -> { + hashList.add(hashHandler.xx3Hash(primaryKey)); + }); + final HashSet<Long> hashSet = new HashSet<>(hashList); + log.info("{} row hash list -> {} , set->{}", database, hashList.size(), hashSet.size()); + List<Long> hashMode0List = new ArrayList<>(); + List<Long> hashMode1List = new ArrayList<>(); + int partition = 2; + hashList.forEach(hash -> { + final int absMod = (int) Math.abs(hash % partition); + if (absMod <= 0) { + hashMode0List.add(hash); + } else { + hashMode1List.add(hash); + } + }); + log.info("{} row hash partition 0 -> {} , 1->{}", database, hashMode0List.size(), hashMode1List.size()); + sendMessage(topicName, partition, hashList); + } + + private void sendMessage(String topicName, int partition, List<Long> hashList) { + KafkaProducer<String, String> kafkaProducer = buildKafkaProducer(); + AtomicInteger cnt = new AtomicInteger(0); + hashList.forEach(record -> { + final int absMod = (int) Math.abs(record % partition); + ProducerRecord<String, String> producerRecord = + new ProducerRecord<>(topicName, absMod, String.valueOf(record), String.valueOf(record)); + sendMessage(kafkaProducer, producerRecord, cnt); + }); + kafkaProducer.flush(); + } + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataService.java 
b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataService.java new file mode 100644 index 0000000000000000000000000000000000000000..adaebf35873c752f5ff13793625eb01602e9dba5 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/ExtractTableDataService.java @@ -0,0 +1,104 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.service; + +import lombok.extern.slf4j.Slf4j; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.jdbc.core.JdbcTemplate; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * ExtractTableDataService + * + * @author :wangchao + * @date :Created in 2022/7/26 + * @since :11 + */ +@Slf4j +@Service +public class ExtractTableDataService { + @Autowired + private JdbcTemplate jdbcTemplateMysql; + @Autowired + private JdbcTemplate jdbcTemplateOpenGauss; + private QueryDataWapper queryDataWapper = new QueryDataWapper(); + + /** + * checkTable + * + * @param tableName tableName + * @return result + */ + public int checkTable(String tableName) { + AtomicInteger cnt = new AtomicInteger(0); + final List<String> mysqlList = + queryDataWapper.queryPrimaryValues(jdbcTemplateMysql, TableSqlWapper.SELECT_PRI_M, tableName); + log.info("query mysql : table={}, 
row-size={} ", tableName, mysqlList.size()); + final List<String> openGaussList = + queryDataWapper.queryPrimaryValues(jdbcTemplateOpenGauss, TableSqlWapper.SELECT_PRI_O, tableName); + log.info("query openGauss : table={}, row-size={} ", tableName, openGaussList.size()); + HashHandler source = new HashHandler(); + HashHandler sink = new HashHandler(); + mysqlList.parallelStream().forEach(primary -> { + final long sourceHash = source.xx3Hash(primary); + final long sinkHash = sink.xx3Hash(primary); + if (sourceHash != sinkHash) { + log.info("hash difference : key={},hash source={} : sink={}", primary, sourceHash, sinkHash); + cnt.incrementAndGet(); + } + }); + log.info("{} key ,hash calc finished ", tableName); + mysqlList.parallelStream().filter(mysqlKey -> !openGaussList.contains(mysqlKey)).forEach(reduce -> { + log.info("mysql row not found in openGauss -> {}", reduce); + }); + log.info("{} mysql row found finished ", tableName); + openGaussList.parallelStream().filter(openGauss -> !mysqlList.contains(openGauss)).forEach(reduce -> { + log.info("openGauss row not found in mysql -> {}", reduce); + }); + log.info("{} openGauss row found finished ", tableName); + + List<Long> mysqlHashList = new ArrayList<>(); + mysqlList.forEach(mysqlKey -> { + mysqlHashList.add(source.xx3Hash(mysqlKey)); + }); + final HashSet<Long> mysqlHashSet = new HashSet<>(mysqlHashList); + log.info("mysql row hash list -> {} , set->{}", mysqlHashList.size(), mysqlHashSet.size()); + + List<Long> openGaussHashList = new ArrayList<>(); + openGaussList.forEach(openGauss -> { + openGaussHashList.add(sink.xx3Hash(openGauss)); + }); + final HashSet<Long> openGaussHashSet = new HashSet<>(openGaussHashList); + log.info("openGauss row hash list -> {} , set->{}", openGaussHashList.size(), openGaussHashSet.size()); + Object lock = new Object(); + List<Long> openGaussParallelHashList = new ArrayList<>(); + openGaussList.parallelStream().forEach(openGauss -> { + final long hash = sink.xx3Hash(openGauss); + synchronized (lock) { + 
openGaussParallelHashList.add(hash); + } + }); + final HashSet<Long> openGaussParallelHashSet = new HashSet<>(openGaussParallelHashList); + log.info("openGauss row Parallel hash list -> {} , set->{}", openGaussParallelHashList.size(), + openGaussParallelHashSet.size()); + return cnt.get(); + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/HashHandler.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/HashHandler.java new file mode 100644 index 0000000000000000000000000000000000000000..afb713c51ef7edd1422c4ee437964848334a9257 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/HashHandler.java @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.extract.service; + +import org.opengauss.datachecker.common.util.LongHashFunctionWrapper; + +/** + * HashHandler + * + * @author :wangchao + * @date :Created in 2022/8/2 + * @since :11 + */ +public class HashHandler { + private static final String PRIMARY_DELIMITER = "_#_"; + private static final LongHashFunctionWrapper LONG_HASH_FUNCTION = new LongHashFunctionWrapper(); + + /** + * hash + * + * @param key hash key + * @return hash result + */ + public long xx3Hash(String key) { + return LONG_HASH_FUNCTION.hashChars(key); + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/QueryDataWapper.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/QueryDataWapper.java new file mode 100644 index 0000000000000000000000000000000000000000..de6c2792eb488ee0397479e64d5e150015a76339 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/QueryDataWapper.java @@ -0,0 +1,47 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + + package org.opengauss.datachecker.extract.service; + + import org.springframework.jdbc.core.JdbcTemplate; + import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate; + + import java.util.HashMap; + import java.util.List; + import java.util.Map; + + /** + * QueryDataWapper + * + * @author :wangchao + * @date :Created in 2022/8/2 + * @since :11 + */ + public class QueryDataWapper { + /** + * query table data + * + * @param jdbcTemplate jdbcTemplate + * @param sql sql + * @param tableName tableName + * @return data + */ + public List<String> queryPrimaryValues(JdbcTemplate jdbcTemplate, String sql, String tableName) { + Map<String, Object> map = new HashMap<>(); + NamedParameterJdbcTemplate jdbc = new NamedParameterJdbcTemplate(jdbcTemplate); + final String execSql = sql.replace(":table", tableName); + return jdbc.queryForList(execSql, map, String.class); + } +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/TableSqlWapper.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/TableSqlWapper.java new file mode 100644 index 0000000000000000000000000000000000000000..93d491d05f667e5f45bbf8bd3993c4062fef51c9 --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/TableSqlWapper.java @@ -0,0 +1,40 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + +package org.opengauss.datachecker.extract.service; + +/** + * TableSqlWapper + * + * @author :wangchao + * @date :Created in 2022/8/2 + * @since :11 + */ +public interface TableSqlWapper { + /** + * select primary data sql + */ + String SELECT_PRI_M = "select b_number from test.:table"; + + /** + * select all data sql + */ + String SELECT_M = "select * from test.:table"; + + /** + * select primary data sql + */ + String SELECT_PRI_O = "select b_number from jack.:table"; +} diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/thread/ExtractMockDataThread.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/thread/ExtractMockDataThread.java index cf965b98e32f034723ec0a110fb4de8ea883999d..d194071c49bf0c7c7281248b33c4dc4e170340b2 100644 --- a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/thread/ExtractMockDataThread.java +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/service/thread/ExtractMockDataThread.java @@ -1,7 +1,22 @@ +/* + * Copyright (c) 2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. 
+ */ + package org.opengauss.datachecker.extract.service.thread; import lombok.extern.slf4j.Slf4j; -import org.opengauss.datachecker.common.util.IdWorker; +import org.opengauss.datachecker.common.util.IdGenerator; import org.springframework.jdbc.core.JdbcTemplate; import javax.sql.DataSource; @@ -10,8 +25,9 @@ import java.time.LocalDateTime; import java.time.format.DateTimeFormatter; /** + * ExtractMockDataThread + * * @author wang chao - * @description 数据抽取服务,基础数据打桩 * @date 2022/5/8 19:27 * @since 11 **/ @@ -26,15 +42,6 @@ public class ExtractMockDataThread implements Runnable { protected long maxRowCount; protected int taskSn; - - /** - * 线程构造函数 - * - * @param dataSource 数据源注入 - * @param tableName 表名注入 - * @param maxRowCount 当前线程最大插入记录总数 - * @param taskSn 线程任务序列号 - */ public ExtractMockDataThread(DataSource dataSource, String tableName, long maxRowCount, int taskSn) { jdbcTemplate = new JdbcTemplate(dataSource); this.tableName = tableName; @@ -47,26 +54,25 @@ public class ExtractMockDataThread implements Runnable { batchMockData(tableName, maxRowCount, taskSn); } - public void batchMockData(String tableName, long threadMaxRowCount, int taskSn) { try { log.info("batch start insert: table:{},counut={},taskSn={}", tableName, threadMaxRowCount, taskSn); long batchInsertCount = threadMaxRowCount; long insertedCount = 0; - // 循环分批次插入数据,限制SQL每次插入记录的最大数据量 while (batchInsertCount >= MAX_INSERT_ROW_COUNT) { String batchSql = buildBatchSql(tableName, MAX_INSERT_ROW_COUNT, taskSn); jdbcTemplate.batchUpdate(batchSql); batchInsertCount = batchInsertCount - MAX_INSERT_ROW_COUNT; insertedCount += MAX_INSERT_ROW_COUNT; - log.info("batch insert:threadMaxRowCount={},taskSn={},insertedCount={}", threadMaxRowCount, taskSn, insertedCount); + log.info("batch insert:threadMaxRowCount={},taskSn={},insertedCount={}", threadMaxRowCount, taskSn, + insertedCount); } - // 循环分批次插入数据,最后一个批次的数据插入 if (batchInsertCount > 0 && batchInsertCount < MAX_INSERT_ROW_COUNT) { String batchSql = 
buildBatchSql(tableName, batchInsertCount, taskSn); jdbcTemplate.batchUpdate(batchSql); insertedCount += batchInsertCount; - log.info("batch insert:totalRowCount={},taskSn={},insertedCount={}", threadMaxRowCount, taskSn, insertedCount); + log.info("batch insert:totalRowCount={},taskSn={},insertedCount={}", threadMaxRowCount, taskSn, + insertedCount); } log.info("batch end insert: table:{},counut={},taskSn={}", tableName, threadMaxRowCount, taskSn); } catch (Exception e) { @@ -77,57 +83,54 @@ public class ExtractMockDataThread implements Runnable { private String buildBatchSql(String tableName, long rowCount, int ordler) { StringBuffer sb = new StringBuffer(MockMapper.INSERT.replace(":TABLENAME", tableName)); for (int i = 0; i < rowCount; i++) { - String id = IdWorker.nextId(String.valueOf(ordler)); + String id = IdGenerator.nextId(String.valueOf(ordler)); sb.append("(") - // b_number - .append("'").append(id).append("',") - // b_type - .append("'type_01',") - // b_user - .append("'user_02',") - //b_int - .append("1,") - //b_bigint - .append("32,") - // b_text - .append("'b_text_").append(id).append("',") - // b_longtext - .append("'b_longtext_").append(id).append("',") - // b_date - .append("'").append(DATE_FORMATTER.format(LocalDate.now())).append("',") - // b_datetime - .append("'").append(DATE_TIME_FORMATTER.format(LocalDateTime.now())).append("',") - // b_timestamp - .append("'").append(DATE_TIME_FORMATTER.format(LocalDateTime.now())).append("',") - // b_attr1 - .append("'b_attr1_").append(id).append("',") - // b_attr2 - .append("'b_attr2_").append(id).append("',") - // b_attr3 - .append("'b_attr3_").append(id).append("',") - // b_attr4 - .append("'b_attr4_").append(id).append("',") - // b_attr5 - .append("'b_attr5_").append(id).append("',") - // b_attr6 - .append("'b_attr6_").append(id).append("',") - // b_attr7 - .append("'b_attr7_").append(id).append("',") - // b_attr8 - .append("'b_attr8_").append(id).append("',") - // b_attr9 - 
.append("'b_attr9_").append(id).append("',") - // b_attr10 - .append("'b_attr10_").append(id).append("'") - .append(")") - .append(","); + // b_number + .append("'").append(id).append("',") + // b_type + .append("'type_01',") + // b_user + .append("'user_02',") + // b_int + .append("1,") + // b_bigint + .append("32,") + // b_text + .append("'b_text_").append(id).append("',") + // b_longtext + .append("'b_longtext_").append(id).append("',") + // b_date + .append("'").append(DATE_FORMATTER.format(LocalDate.now())).append("',") + // b_datetime + .append("'").append(DATE_TIME_FORMATTER.format(LocalDateTime.now())).append("',") + // b_timestamp + .append("'").append(DATE_TIME_FORMATTER.format(LocalDateTime.now())).append("',") + // b_attr1 + .append("'b_attr1_").append(id).append("',") + // b_attr2 + .append("'b_attr2_").append(id).append("',") + // b_attr3 + .append("'b_attr3_").append(id).append("',") + // b_attr4 + .append("'b_attr4_").append(id).append("',") + // b_attr5 + .append("'b_attr5_").append(id).append("',") + // b_attr6 + .append("'b_attr6_").append(id).append("',") + // b_attr7 + .append("'b_attr7_").append(id).append("',") + // b_attr8 + .append("'b_attr8_").append(id).append("',") + // b_attr9 + .append("'b_attr9_").append(id).append("',") + // b_attr10 + .append("'b_attr10_").append(id).append("'").append(")").append(","); } int length = sb.length(); sb.deleteCharAt(length - 1); return sb.toString(); } - interface MockMapper { String INSERT = "INSERT INTO :TABLENAME VALUES "; } diff --git a/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/vo/TableStatisticsInfo.java b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/vo/TableStatisticsInfo.java new file mode 100644 index 0000000000000000000000000000000000000000..e90f849157fc72da466fb9d37c0e251ba22feb8a --- /dev/null +++ b/datachecker-mock-data/src/main/java/org/opengauss/datachecker/extract/vo/TableStatisticsInfo.java @@ -0,0 +1,34 @@ +/* + * Copyright (c) 
2022-2022 Huawei Technologies Co.,Ltd. + * + * openGauss is licensed under Mulan PSL v2. + * You can use this software according to the terms and conditions of the Mulan PSL v2. + * You may obtain a copy of Mulan PSL v2 at: + * + * http://license.coscl.org.cn/MulanPSL2 + * + * THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, + * EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, + * MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE. + * See the Mulan PSL v2 for more details. + */ + +package org.opengauss.datachecker.extract.vo; + +import lombok.AllArgsConstructor; +import lombok.Data; + +/** + * TableStatisticsInfo + * + * @author :wangchao + * @date :Created in 2022/6/6 + * @since :11 + */ +@Data +@AllArgsConstructor +public class TableStatisticsInfo { + private String tableName; + private long count; + private String dataLength; +} diff --git a/datachecker-mock-data/src/main/resources/application.yml b/datachecker-mock-data/src/main/resources/application.yml index 88a083ea0dbfec9a1e7fc0a19de3592ff1667b23..94bbb48705eea28153e12e8f6b19b97363e29a1c 100644 --- a/datachecker-mock-data/src/main/resources/application.yml +++ b/datachecker-mock-data/src/main/resources/application.yml @@ -7,30 +7,78 @@ logging: spring: application: name: datachecker-extract + mock: + data-path: local_path # e.g. local\path\dir\ + kafka: + properties: + #How long the producer waits before sending a batch of messages. With this set, the producer sends the batch once the wait elapses even if the batch has not reached batch-size. By default the sender thread sends whenever it is idle, even for a single message; a short wait lets messages be batched, raising throughput at the cost of some latency. + linger.ms: 10 #default: 0 ms; when messages are sent frequently, a small delay improves throughput and performance. + #How many messages the producer may send on one TCP connection while waiting for broker responses. Higher values can raise throughput but consume more memory; values that are too high reduce throughput because batching becomes less efficient. A value of 1 guarantees messages reach the broker in the order send() was called, even when failed sends are retried. + #Note: delivery is currently at-least-once; since Kafka 1.0.0 this setting can be raised to 5 while still guaranteeing ordering and exactly-once. + max.in.flight.requests.per.connection: 1 
#default: 5; setting it to 1 means the producer sends one message on the connection and waits for the broker to acknowledge it before sending the next, so ordering is preserved. + producer: # producer settings + retries: 0 # retry count + acks: 1 # ack level: how many partition replicas must confirm receipt before the producer gets an ack (0, 1, all/-1) + batch-size: 163840 # batch size + buffer-memory: 335544320 # producer-side buffer size + key-serializer: org.apache.kafka.common.serialization.StringSerializer + # value-serializer: com.itheima.demo.config.MySerializer + value-serializer: org.apache.kafka.common.serialization.StringSerializer + + consumer: # consumer settings + group-id: checkgroup # default consumer group ID + enable-auto-commit: true # whether to auto-commit offsets + auto-commit-interval: 100 # offset commit delay (how long after a message is received before its offset is committed) + + # earliest: when a partition has a committed offset, consume from it; otherwise consume from the beginning + # latest: when a partition has a committed offset, consume from it; otherwise consume only newly produced records in that partition + # none: when every partition of the topic has a committed offset, consume after it; throw an exception if any partition lacks one + auto-offset-reset: earliest + key-deserializer: org.apache.kafka.common.serialization.StringDeserializer + # value-deserializer: com.itheima.demo.config.MyDeserializer + value-deserializer: org.apache.kafka.common.serialization.StringDeserializer + max-poll-records: 10000 datasource: druid: - driver-class-name: com.mysql.cj.jdbc.Driver - url: jdbc:mysql://192.168.0.114:3306/test?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC&allowPublicKeyRetrieval=true - username: root - password: Huawei@123 - type: com.alibaba.druid.pool.DruidDataSource - #Spring Boot 默认是不注入这些属性值的,需要自己绑定 - #druid 数据源专有配置 - initialSize: 30 - minIdle: 10 - maxActive: 100 - maxWait: 60000 - timeBetweenEvictionRunsMillis: 60000 - minEvictableIdleTimeMillis: 300000 - validationQuery: SELECT 1 FROM DUAL - testWhileIdle: true - testOnBorrow: false - testOnReturn: false - poolPreparedStatements: true mysql: - usePingMethod: false - + driver-class-name: com.mysql.cj.jdbc.Driver + url: jdbc:mysql://xxxxxx:xxx/xxx?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC&allowPublicKeyRetrieval=true + username: xxxxx + password: xxxxxx + type: 
com.alibaba.druid.pool.DruidDataSource + #Spring Boot does not bind these properties by default; they must be bound manually + #druid-specific datasource settings + initialSize: 20 + minIdle: 5 + maxActive: 200 + maxWait: 60000 + timeBetweenEvictionRunsMillis: 60000 + minEvictableIdleTimeMillis: 300000 + validationQuery: SELECT 1 FROM DUAL + testWhileIdle: true + testOnBorrow: false + testOnReturn: false + poolPreparedStatements: true + opengauss: + driver-class-name: org.opengauss.Driver + url: jdbc:opengauss://xxxxx:xxx/xxxx?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC + username: xxxxx + password: xxxxxxxx + type: com.alibaba.druid.pool.DruidDataSource + #Spring Boot does not bind these properties by default; they must be bound manually + #druid-specific datasource settings + initialSize: 5 + minIdle: 5 + maxActive: 50 + maxWait: 60000 + timeBetweenEvictionRunsMillis: 60000 + minEvictableIdleTimeMillis: 300000 + #validationQuery: SELECT 1 FROM DUAL + testWhileIdle: true + testOnBorrow: false + testOnReturn: false + poolPreparedStatements: true #filters for monitoring and interception: stat = monitoring statistics, log4j = logging, wall = SQL-injection defense #if java.lang.ClassNotFoundException: org.apache.log4j.Priority is thrown at runtime #add the log4j dependency, Maven coordinates: https://mvnrepository.com/artifact/log4j/log4j diff --git a/datachecker-mock-data/src/main/resources/log4j2.xml b/datachecker-mock-data/src/main/resources/log4j2.xml index 6df859475e2814e8e750f347c5263b4b41793425..62a61ffd2b68f72049e39c8de93d4bba65155719 100644 --- a/datachecker-mock-data/src/main/resources/log4j2.xml +++ b/datachecker-mock-data/src/main/resources/log4j2.xml @@ -1,58 +1,51 @@ - + + + - + logs/mock - + - - - - - - - - - - + - - + + - @@ -60,7 +53,7 @@ - + @@ -68,15 +61,12 @@ - - - diff --git a/pom.xml b/pom.xml index 5abb9a11fb21c985142432433a754bef3bfb84f1..6b5935416f3059dd4df761a3467c06a8b89e498d 100644 --- a/pom.xml +++ b/pom.xml @@ -30,6 +30,7 @@ 1.6.8 0.15 8.0.29 + 3.0.0 datachecker-common @@ -61,6 +62,11 @@ ${mysql.connector.java.version} provided + + org.opengauss + opengauss-jdbc + ${opengauss.jdbc.version} + com.alibaba druid
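Review note: the cross-check in ExtractTableDataService scans each key list in both directions with `List.contains` inside `parallelStream`, which is O(n^2) over the table size. A self-contained sketch of the same key diff using a HashSet for O(1) lookups — `KeyCrossCheck` and its hardcoded sample keys are illustrative names, not part of the patch:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class KeyCrossCheck {
    /** Returns the primary keys present in source but absent from sink. */
    public static Set<String> missingInSink(List<String> source, List<String> sink) {
        // Build a set of sink keys once, so each membership test is O(1)
        // instead of the O(n) List.contains used in the service today.
        Set<String> sinkSet = new HashSet<>(sink);
        Set<String> missing = new HashSet<>();
        for (String key : source) {
            if (!sinkSet.contains(key)) {
                missing.add(key);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        List<String> mysql = List.of("k1", "k2", "k3");
        List<String> openGauss = List.of("k2", "k3", "k4");
        // Run the check in both directions, like checkTable does.
        System.out.println("only in mysql: " + missingInSink(mysql, openGauss));       // [k1]
        System.out.println("only in openGauss: " + missingInSink(openGauss, mysql));   // [k4]
    }
}
```

The same set-based lookup could replace both `parallelStream().filter(... contains ...)` blocks without changing the reported differences, only the runtime.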